¶ Intro / Opening
Welcome to the Techmeme Ride Home for Tuesday, May 20th, 2025. I'm Brian McCullough. Today: all the headlines from the Microsoft Build 2025 keynote, a whole slew of legislation that relates to tech making its way through Congress, Temu and Shein can't catch a break, NVIDIA is serious about using AI for robotics, and how something called social sensitivity can make autonomous cars even more safe. Here's what you missed today in the world of tech.
¶ Microsoft Build Conference Overview
They did this to me last year, I think, as well. The Microsoft Build conference kicked off yesterday, and Google I/O is today, so it's hard to cover both at the same time without shortchanging one or the other. We'll cover Build today and I/O tomorrow.
¶ GitHub Copilot and Open Sourcing
Build first, beginning with Microsoft-owned GitHub debuting an AI coding agent for GitHub Copilot that can fix bugs, add features, improve documentation, and more. And they are open sourcing GitHub Copilot in VS Code. Quoting The Verge: To complete its work, GitHub says the AI coding agent will automatically boot a virtual machine, clone the repository, and analyze the code base. It also saves its changes as it works while providing a rundown of its reasoning in session logs.
When it's finished, GitHub says the agent will tag you for review. Developers can then leave comments that the agent will automatically address. The agent also incorporates context from related issue or pull request (PR) discussions, and follows any custom repository instructions, allowing it to understand both the intent behind the task and the coding standards of the project, GitHub says.
The new coding agent is available to Copilot Enterprise and Copilot Pro Plus users through GitHub's site, its mobile app, and the GitHub command line interface tool. Microsoft also announced that it's open sourcing GitHub Copilot in Visual Studio Code, which means developers will be able to build upon the tool's AI capabilities, end quote.
But that wasn't the end of the open sourcing because Microsoft also open sourced the Windows subsystem for Linux and released its code on GitHub, except for a few Windows-specific components.
¶ Microsoft Build: AI Platform Updates
Microsoft also launched NLWeb, an open project that lets developers add a conversational interface to their website with a few lines of code, an AI model, and data. Quoting TechCrunch: You can use the AI model of your choice and your own data. A retailer could use NLWeb to create a chatbot that helps users choose clothing for specific trips, for example, while a cooking site could use it to build a bot that suggests dishes to pair with a recipe.
Web pages built using NLWeb can optionally make their content discoverable and accessible to AI platforms that support MCP, Anthropic's standard for connecting AI models to the systems where data resides. We believe NLWeb can play a similar role to HTML for the agentic web, writes Microsoft in press materials provided to TechCrunch. It allows users to interact directly with the web content in a rich semantic manner.
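To give a concrete sense of the shape of this, here's a minimal sketch of what an NLWeb-style conversational endpoint could look like: a site's structured data, a naive retrieval step, and a pluggable model call. Every name here (retrieve, ask_model, handle_chat) is illustrative, not NLWeb's actual API.

```python
# Illustrative sketch of an NLWeb-style chat endpoint over site data.
# All function names are hypothetical stand-ins, not NLWeb's real API.

def retrieve(query, items):
    """Naive keyword retrieval over a site's structured content."""
    words = set(query.lower().split())
    return [it for it in items if words & set(it["title"].lower().split())]

def ask_model(query, context):
    """Stand-in for a pluggable AI model call; NLWeb lets you bring your own."""
    if context:
        return f"Based on {context[0]['title']}, try: {context[0]['pairing']}"
    return "No matching content found."

def handle_chat(query, site_data):
    """The conversational interface: retrieve site content, then answer."""
    return ask_model(query, retrieve(query, site_data))

# A cooking site's data, as in TechCrunch's pairing-suggestion example.
recipes = [{"title": "Lemon Pasta", "pairing": "a crisp white wine"}]
print(handle_chat("pasta tonight?", recipes))
```

In the real project the retrieval and model layers are swappable, which is the point of the "few lines of code plus an AI model and data" pitch.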
Microsoft didn't say either way, but NLWeb may have its origins in tech from ChatGPT maker OpenAI, Microsoft's close collaborator, end quote. Microsoft also expanded Entra, Defender, and Purview, embedding them directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents. For its Copilot Studio agents, Microsoft announced a computer use feature available in its Frontier program for select U.S. users, WhatsApp integration, and more.
They added Grok 3 and Grok 3 Mini to their Azure AI Foundry service, and sources say Satya Nadella had been pushing for Microsoft to host Grok. Quoting The Verge: It's a surprise addition that could prove controversial internally and further inflame tensions with Microsoft's partner OpenAI. Microsoft has been steadily growing its Azure AI Foundry business over the past year and has been quick to embrace models from a variety of AI labs that compete with its OpenAI partner.
In January, I reported in Notepad that Microsoft CEO Satya Nadella had moved with haste to get engineers to test and deploy DeepSeek R1 as it made headlines around the world. Engineers didn't sleep much over those days while they worked overtime to get R1 ready for Azure AI Foundry. Sources tell me Nadella has been pushing for Microsoft to host Grok, and he's eager for Microsoft to be seen as the hosting provider for any popular or emerging AI model.
Grok is the latest model to join the Azure AI Foundry, which is quickly becoming an important AI service for Microsoft as it seeks to be seen as the platform to host AI models for businesses and app developers, end quote. Microsoft also debuted Foundry Local for Windows and macOS, built on ONNX Runtime, to let developers build cross-platform AI apps that can run models, tools, and agents on device.
And finally, Microsoft announced Microsoft Discovery, an agentic platform to accelerate scientific discovery and enterprise R&D efforts. Quoting Neowin: Microsoft Discovery doesn't lock researchers in with Microsoft's own tools. Instead, it has been built to be highly extensible, allowing researchers to integrate with models from other partners and even their own models, tools, and data sets when required.
Also, it is built on top of a graph-based knowledge engine to provide a deep understanding of conflicting theories, diverse experimental results, and more. Researchers can also validate and understand every step, or make changes if required. Microsoft says its own researchers used Microsoft Discovery's AI models and HPC simulation tools to discover a new coolant prototype for use in data centers in around 200 hours, a process that Microsoft says would have taken months or years otherwise.
Microsoft is already working with a select set of customers from various industries, including chemistry and materials, silicon design, energy manufacturing, and pharma to develop Microsoft Discovery's capabilities. With its extensible architecture and AI capabilities, Microsoft Discovery promises to significantly shorten research timelines across various scientific fields. Back in February, Google announced a somewhat similar product called AI Co-Scientist.
It's a multi-agent AI system built with Gemini designed to serve as a virtual scientific collaborator that helps scientists generate novel hypotheses and research proposals. It will be interesting to see Microsoft and Google compete to win over researchers and scientists in the coming years, end quote.
¶ New US Tech Legislation Discussed
President Trump has signed the Take It Down Act, criminalizing the distribution of non-consensual intimate images, basically deepfakes, and requiring platforms to promptly remove them when notified. Quoting The Verge: The bill sailed through both chambers of Congress with several tech companies, parent and youth advocates, and First Lady Melania Trump championing the issue.
But critics, including a group that's made its mission to combat the distribution of such images, warn that its approach could backfire and harm the very survivors it seeks to protect. The law makes publishing NCII, whether real or AI generated, criminally punishable by up to three years in prison plus fines.
It also requires social media platforms to have processes to remove NCII within 48 hours of being notified and, quote, make reasonable efforts to remove any copies. The Federal Trade Commission is tasked with enforcing the law, and companies have a year to comply. Under any other administration, the Take It Down Act would likely see much of the same pushback it does today by groups like the Electronic Frontier Foundation and Center for Democracy and Technology,
which warn the takedown provisions could be used to remove or chill a wider array of content than intended as well as threaten privacy-protecting technologies like encryption since services that use it would have no way of seeing or removing the messages between users.
But actions by the Trump administration in his first 100 days in office, including breaching Supreme Court precedent by firing the two Democratic minority commissioners at the FTC, may have added another layer of fear for some of the law's critics who worry it could be used to threaten or stifle political opponents.
Trump, after all, said during an address to Congress this year that once he signed the bill, quote, I'm going to use that bill for myself, too, if you don't mind, because nobody gets treated worse than I do online. Nobody, end quote. The Cyber Civil Rights Initiative, which advocates for legislation combating image-based abuse, has long pushed for the criminalization of non-consensual distribution of intimate images, or NDII. But the CCRI said it could not support the Take It Down Act because
it may ultimately provide survivors with, quote, false hope. On Bluesky, CCRI President Mary Anne Franks called the takedown provision a poison pill that will likely end up hurting victims more than it helps. They wrote, platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all, end quote.
But that's not all on the legislative front because the U.S. Senate has advanced the Genius Act, a stablecoin bill, after a group of Democratic senators dropped their opposition to the bill, marking a major win for the crypto industry. Quoting Bloomberg, The industry-backed regulatory bill is now set for debate on the Senate floor, with a bipartisan group hoping to pass it as soon as this week, although senators said a final vote could slip until after the Memorial Day recess.
Democrats had united to filibuster the legislation earlier this month amid a furor over President Donald Trump's crypto dealings, along with other concerns related to the regulation of stablecoins. But the Senate voted 66-32 on Monday night to end the filibuster. Crypto-friendly Democrats, led by Senators Kirsten Gillibrand of New York and Angela Alsobrooks of Maryland,
negotiated modifications to the legislation and urged their colleagues to support it, even without a ban on Trump profiting from his family's many crypto ventures while in office. But Democratic Senator Mark Warner of Virginia, an influential moderate on the banking committee, announced Monday he would support the measure, adding that concerns over the Trump family's business dealings shouldn't sideline broader stablecoin legislation.
The legislation is, quote, not perfect, but it's far better than the status quo, Warner said, end quote. And one more bit of congressional news. The U.S. House Budget Committee has advanced a budget bill that would ban U.S. states from enforcing any law regulating AI for 10 years. The bill now goes to the House.

If you're a security or IT professional, you've got a mountain of assets to protect. You've got devices, applications, and employee identities.
And you've got the scary stuff outside your security stack, like unmanaged devices, shadow IT apps, and non-employee identities. It's a lot. Fortunately, you can conquer these risks with 1Password Extended Access Management. For example, did you know most ransomware attacks stem from unmanaged devices? That's why 1Password Device Trust
blocks unsecured and unknown devices before they access your company's apps. And don't worry, 1Password still protects against the biggest attack source: compromised credentials. Its industry-leading password manager helps employees create strong, unique logins for every app. Secure devices? Check. Secure credentials? Check. But what about employee productivity? 1Password Extended Access Management empowers hybrid employees to join the security team, with end-user remediation that teaches them
how and why to fix security issues without needing help from IT. You know I love 1Password and use it every single day. You should too. Go to 1password.com/ride to secure every app, device, and identity, even the unmanaged ones. Right now, my listeners get a free two-week trial at 1password.com/ride, all lowercase. That's 1password.com/ride.
Life's been a little crazy lately, and one thing that helps me unwind is Cornbread Hemp's CBD gummies. Long-time listeners will recall the gummies that helped me unwind after Chris and I used to do Ride Home experience recordings late at night.
Have you tried everything under the sun for peace of mind? It sounds like it's time for you to try Cornbread Hemp's CBD gummies. Cornbread Hemp's CBD gummies are made to help you feel better, whether it's stress, discomfort, or just needing a little relaxation. They only use the best part of the hemp plant, the flower,
for the purest and most potent CBD, formulated to help relieve discomfort, stress, and sleeplessness. They have THC for relaxation, CBD for sleep, CBD oil for recovery, and CBD gummies for stress. All products are third-party lab tested and USDA organic to ensure safety and purity. Right now, Techmeme Ride Home listeners can save 30% on their first order. Just head to cornbreadhemp.com/ride and use code ride at checkout. That's cornbreadhemp.com/ride, code ride.
¶ EU Plans Fee on Low-Cost Imports
The hits keep coming for Temu and Shein. Sources and a document have revealed that the EU is planning to levy a flat 2 euro fee on billions of small packages that enter the bloc, mainly from China, on a daily basis. So, pretty negative for the business model of you-know-who.
Quoting the FT: The European Commission circulated a draft proposal on a handling fee on Monday after pressure from member states whose customs authorities are overwhelmed by the 4.6 billion items annually imported directly to people's homes. The proposal, seen by the FT, does not set the fee level, but people familiar with the commission's thinking suggested it would be about two euros. Some of the money would cover customs costs but also go into the EU budget, adding billions annually.
Trade Commissioner Maroš Šefčovič has promised to tackle the surge in packages, which he said had led to an increase in dangerous and non-compliant goods and complaints by EU retailers of unfair competition, end quote.
¶ NVIDIA Advances Humanoid Robotics
NVIDIA has unveiled Isaac GR00T N1.5, an open, customizable AI model for humanoid reasoning and skills, and also released GR00T-Dreams, a tool for generating synthetic motion data. Quoting GamesBeat: NVIDIA said it is racing ahead with humanoid robotics technology, providing a custom foundation model for humanoid reasoning, a blueprint for generating synthetic motion data, and more Blackwell systems to accelerate humanoid robot development.
At the Computex 2025 trade show in Taiwan, NVIDIA unveiled Isaac GR00T N1.5, the first update to NVIDIA's open, generalized, fully customizable foundation model for humanoid reasoning and skills. NVIDIA Isaac GR00T-Dreams is a blueprint for generating synthetic motion data, and NVIDIA Blackwell systems accelerate humanoid robot development.
Humanoid and robotics developers Agility Robotics, Boston Dynamics, Fourier, Foxlink, Galbot, Mentee Robotics, NEURA Robotics, General Robotics, Skild AI, and XPENG Robotics are adopting NVIDIA Isaac platform technologies to advance humanoid robot development and deployment. Showcased in CEO Jensen Huang's Computex keynote address, NVIDIA Isaac GR00T-Dreams is a blueprint that helps generate vast amounts of synthetic motion data, aka neural trajectories,
that physical AI developers can use to teach robots new behaviors, including how to adapt to changing environments. Developers can first post-train Cosmos Predict world foundation models for their robot. Then, using a single image as the input, GR00T-Dreams generates videos of the robot performing new tasks in new environments. The blueprint then extracts action tokens, compressed digital pieces of data that are used to teach robots how to perform these new tasks.
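As a rough mental model of that pipeline, here's a hedged sketch in Python. Every function and name below is a hypothetical stand-in for the stages the quote describes (dream up videos from one seed image, then compress them into action tokens), not NVIDIA's real APIs.

```python
# Hypothetical sketch of a GR00T-Dreams-style synthetic-data pipeline.
# None of these functions are NVIDIA's real APIs; they just mirror the
# stages described: image + task -> dreamed videos -> action tokens.

def generate_videos(model, image, task):
    """Dream clips of the robot doing `task` in the scene from `image`."""
    return [f"video:{task}:{i}" for i in range(3)]

def extract_action_tokens(videos):
    """Compress each dreamed video into tokens a policy can learn from."""
    return [f"tokens<{v}>" for v in videos]

def build_training_data(model, seed_images, tasks):
    """Expand a handful of seed images into a large motion dataset."""
    data = []
    for image in seed_images:
        for task in tasks:
            videos = generate_videos(model, image, task)
            data.extend(extract_action_tokens(videos))
    return data

data = build_training_data("post-trained-cosmos", ["kitchen.png"], ["pour", "stack"])
print(len(data))  # 2 tasks x 3 dreamed videos per task = 6 token sets
```

The point of the design, per the quote, is data amplification: one real image fans out into many synthetic trajectories, which is why NVIDIA frames it as a fix for the motion-data bottleneck in humanoid training.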
The GR00T-Dreams blueprint complements the Isaac GR00T-Mimic blueprint, which was released at the NVIDIA GTC conference in March. While GR00T-Mimic uses the NVIDIA Omniverse and NVIDIA Cosmos platforms to augment existing data, GR00T-Dreams uses Cosmos to generate entirely new data. Jim Fan, director of AI and distinguished scientist at NVIDIA, said in a press briefing that NVIDIA has a very strong robotics strategy, centered around what Jensen calls the three-computer problem.
He noted the firm has the OVX computer, which is meant to do simulation and graphics, running simulation physics engines, and is used to synthesize and generate data. This data is consumed by the DGX computer, which is used to train foundation models. And then it is deployed to the AGX computer, which is the runtime on the edge for platforms like humanoid robots.
GR00T is the life cycle of physical AI and robot-based workflows, Fan said. It is an instantiation of the three-computer problem, he said, end quote.
¶ Autonomous Vehicles Gain Social Sensitivity
Finally today, this is interesting: autonomous vehicles trained to use what researchers are calling social sensitivity, assessing the collective impact of multiple hazards, cause fewer injuries during road accidents involving these autonomous vehicles. Quoting the FT: Autonomous cars that are trained to respond more like humans to danger will cause fewer injuries during road accidents, according to a study that shows how driverless vehicles might be made safer.
Vulnerable groups such as cyclists, pedestrians, and motorcyclists saw the biggest gain in protection when driverless cars used social sensitivity in assessing the collective impact of multiple hazards. The study, published in the U.S. Proceedings of the National Academy of Sciences, highlights growing efforts to balance AVs' efficient operation with the need for them to minimize damaging collisions.
The issue of AV ethics is attracting increasing attention as growing use of the cars offers the prospect of eliminating driver problems such as spatial misjudgments and fatigue. The study suggests human behavioral methods could, quote, provide an effective scaffold for AVs to address future ethical challenges.
said its China- and U.S.-based authors led by Honglang Liu of the Hong Kong University of Science and Technology. Based on social concern and human plausible cognitive encoding, we enable AVs to exhibit social sensitivity and ethical decision-making, they said. Such social sensitivity can help AVs better integrate into today's driving communities.
Social sensitivity included being attuned, like human drivers, to the vulnerabilities of specific road users and being able to judge who was likelier to be more seriously hurt during a crash. The researchers drew on evidence from neuroscience and behavioral science that humans navigate using a cognitive map to interpret the world and adapt accordingly.
The scientists based their instructions for the AV on a concept known as successor representation, which encodes predictions of how different elements in an environment will interrelate across time and space. They examined the results of harnessing their model to ethical planner, a system AVs use to make decisions accounting for various risk considerations.
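Successor representation is a standard idea from reinforcement learning: a matrix M that sums discounted future state-occupancy, M = I + gamma*P + gamma^2*P^2 + ..., so an agent can predict where it will end up before it gets there. Here's a self-contained toy illustration; the three road-scene states and the harm weights are invented for illustration and are not the paper's actual model.

```python
# Toy successor representation (SR): M = sum over t of gamma^t * P^t,
# predicting discounted future visits to each state from each start state.
# The road-scene states and harm weights below are invented, not the paper's.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def successor_representation(P, gamma=0.9, horizon=200):
    n = len(P)
    M = [[0.0] * n for _ in range(n)]                           # accumulated SR
    Pt = [[float(i == j) for j in range(n)] for i in range(n)]  # P^0 = identity
    g = 1.0
    for _ in range(horizon):
        for i in range(n):
            for j in range(n):
                M[i][j] += g * Pt[i][j]
        Pt = matmul(Pt, P)  # advance to P^(t+1)
        g *= gamma          # advance the discount
    return M

# States: 0 = clear road, 1 = near cyclist, 2 = collision (absorbing).
P = [[0.8, 0.2, 0.0],
     [0.5, 0.3, 0.2],
     [0.0, 0.0, 1.0]]
harm = [0.0, 0.1, 1.0]  # heavier weight where a vulnerable road user is hurt

M = successor_representation(P)
risk = [sum(M[s][j] * harm[j] for j in range(3)) for s in range(3)]
print(risk)  # risk from "near cyclist" exceeds risk from "clear road"
```

Weighting the SR by per-state harm gives a forward-looking risk score per situation, which is loosely the role the paper's ethical planner integration plays: the AV can rank actions by predicted harm to everyone involved, not just by collision probability.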
The researchers modeled 2,000 benchmark scenarios, measuring the total risk of each one by assessing the probability of collision and the likely severity of harm for the people involved. The scientists found that using their human-inspired model with ethical planner cut overall risks to all parties by 26.3%, and by 22.9% for vulnerable road users, compared with using ethical planner alone. In crash scenarios, all road users suffered 17.6% less harm, rising to 51.7%
for vulnerable users. The occupants of the AV were also better off, experiencing 8.3% less harm, end quote. Today I learned that Jesse Armstrong, the dude behind the TV show Succession, has a movie coming out about tech billionaires. It's actually coming out at the end of the month on HBO, I guess. It's called Mountainhead. Must-see viewing for us, I guess, right? Link to the trailer in the show notes today. Talk to you tomorrow.