
Wed. 05/21 – Everything Google Everywhere And All At Once

May 21, 2025 · 17 min

Summary

Dive into Google I/O 2025 highlights, featuring the wide rollout of AI Mode in Google Search with new agentic capabilities, deep search, and personalization. Learn about Google's significant push into smart glasses with Android XR, detailing partnerships with Warby Parker, Gentle Monster, and Xreal, alongside a hands-on demo experience. Discover the latest AI advancements, including new models for video (Veo 3), images (Imagen 4), and music (Lyria 2), plus AI integration into Google products such as live translation in Google Meet, concluding with Google's expanded sovereign cloud options in the EU.

Episode description

It’s Google day. Everything Google, everywhere, all at once. All the headlines from I/O, and there were a ton. What even is Google Search in the age of AI? Google’s big push into smart glasses, a wild new video model, and a ton, ton more. Here’s what you missed, yesterday, mostly, I guess, in the world of tech.

Sponsors:


Links:

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Transcript

Intro and Google Search AI

Welcome to the Techmeme Ride Home for Wednesday, May 21st, 2025. I'm Brian McCullough. Today, it's Google day. Everything Google, everywhere, all at once. All the headlines from I/O, and there were a ton. What even is Google Search in the age of AI? Google's big push into smart glasses, a wild new video model, and a ton, ton more. Here's what you missed yesterday, mostly, I guess, in the world of tech. So this is why I decided to split up the developer conference days into multiple shows.

Google announced so many things at I/O yesterday that this entire episode is probably just going to be covering all of that. Beginning with Google rolling out AI Mode to all Google Search users in the U.S. and announcing deep search, agentic capabilities, chart generation, and more coming to Labs users.

Quoting Engadget: For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5.

What's more, Google plans to bring many of AI Mode's capabilities to other parts of the search experience. Looking to the future, Google plans to bring deep search, an offshoot of its deep research mode, to AI Mode. Google was among the first companies to debut the tool in December. Since then, most AI companies, including OpenAI, have gone on to offer their take on deep research.

These deep research tools let you prompt Gemini and other chatbots to take extra time to create a comprehensive report on a subject. With today's announcement, Google is making the tool available in a place where more of its users are likely to encounter it. Another new feature that's coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with trusted testers at the end of last year.

This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next Major League Baseball game in your city. AI Mode will compare hundreds of potential tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself.

AI Mode will also soon include the ability to generate custom charts and graphics tailored to your specific queries. At the same time, AI Mode will be more personalized in the near future, with Google introducing an optional feature allowing the tool to draw on your past searches. The company will also give people the option to connect their other Google apps to AI Mode, starting with Gmail, for even more granular recommendations.

As mentioned above, Google is adding a suite of shopping features to AI Mode, and Engadget has a separate post dedicated to the shopping features Google announced today, but the short of it is that AI Mode will be able to narrow down products for you and complete purchases on your behalf, with your permission, of course. All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly, end quote.

Google also said Project Astra will power new experiences in Google Search, including a new Search Live feature, as well as in the Gemini app and third-party products.

Android XR Glasses Development

From the smart glasses are the next big hardware thing file: Google said it is partnering with Samsung, Gentle Monster, Xreal, and Warby Parker to create Android XR smart glasses offering AI assistance via Gemini. Part of this is the newly announced Project Aura, expected to be the first Android XR glasses, powered by a separate puck with a Qualcomm chip and slated to launch early next year.

Google said it is committing up to $150 million, including $75 million for product development and commercialization as part of its partnership with Warby Parker in particular. The partnership hints that Google is taking style a lot more seriously this time around. Warby Parker is well known as a direct-to-consumer eyewear brand that makes it easy to get trendy glasses at a relatively accessible price.

Meanwhile, Gentle Monster is currently one of the buzziest eyewear brands that isn't owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and the fact that Gentle Monster is favored by fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, and Billie Eilish. Partnering with both brands seems to hint that Android XR is aimed at both versatile everyday glasses as well as bolder, trendsetting options.

The other thing to note is that Google seems to be leaning on Samsung for XR glasses hardware, too. In a post on The Keyword blog, Google's VP of XR, Shahram Izadi, noted that it's, quote, advancing its partnership with Samsung to go beyond headsets and into glasses. Also announced today at I/O, Google noted the first pair of Android XR-enabled glasses will be made by Xreal under the name Project Aura.

As for what these XR glasses will be able to do, Google was keen to emphasize that they're a great vehicle for using Gemini. So far, Google's prototype glasses have had cameras, microphones, and speakers so that its AI assistant can help you interpret the world around you.

That included demos of taking photos, getting turn-by-turn directions, and live language translation. That pretty much lines up with what I saw in my Android XR hands-on in December, but Google has slowly been rolling out these demos more publicly over the past few months. Altogether, it seems like Google is directly taking a page out of Meta's smart glasses playbook. That's a big deal, and a direct nod to the success Meta's had with its Ray-Ban smart glasses.

The company revealed in February that it's already sold 2 million pairs of its Ray-Ban smart glasses and has been vocally positioning them as the ideal hardware for AI assistants, end quote. Now, Android Central got hands-on with some of this hardware, and they had this by way of describing what it's like to use, quote,

I got hands-on time with Google's prototype AR glasses at Google I/O 2025. While they had some technical difficulties and lag during the live keynote demo, these wireless glasses impressed me in a way I didn't expect during my brief press demo. Google's Android XR glasses have no tether or puck, and they feel surprisingly light and svelte compared to other AR glasses like Meta Orion.

It had only one display, in the right lens, but to my surprise, this didn't end up bothering me. Dual displays may be the future of Android XR, but I'm fine with Google bringing back the Google Glass style for now. Google had nothing to share on specs like weight or battery life, unsurprisingly, but the Android XR prototype glasses leave me convinced that Google might actually pull AR glasses off. And at the very least, its non-holographic smart glasses are going to be a big deal.

My Android XR demo booth was full of art, books, and other visual content for Gemini to analyze. You activate the multimodal assistant by pressing and holding the right temple side, which has a touch area. It then starts analyzing and remembering your surroundings until you tap and hold it again.

I looked at a book of epic hikes and asked Gemini to recommend one in the Bay Area where I live. It pulled info from that book and pointed me to Yosemite, which is a bit of a drive, but still relevant context. I then looked at a painting, had Gemini summarize its artist and history, and then compared its themes against the painting next to it. Gemini complied happily. It's the kind of thing you'd expect if you use Gemini Live on your Android phone, except almost entirely hands-free.

With the holographic display, you see relevant info in response to your Gemini commands. That said, I can already envision how it would work without a display, since Google said during the keynote it would be optional. This would let Google and Samsung sell cheaper Android XR smart glasses that compete directly with Meta Ray-Ban glasses.

My immediate favorite Android XR feature was Google Maps. They had a destination preloaded for me to walk out of Shoreline, but what fascinated me was how it showed a simple arrow and street-name pop-up while looking forward, but switched seamlessly to a live map if I looked downward.

Other Android XR apps were more straightforward, showing calendar reminders or message pop-ups in the bottom portion of my vision. But changing the heads-up display content based on where you're looking is a simple but excellent idea. I could imagine a Fitbit app showing heart rate and pace normally during a run, but adding more data if you stop and look down, for instance.

Just like the original Google Glass, this Android XR prototype has a single display, though Google has said the OS works for dual-display glasses too. I can't speak to exact field of view, resolution, or brightness. The blue "I'm listening" Gemini line had a tiny bit of blurring around the edge, as did small text. But when the glasses displayed a full-size message or app pop-up, I had no trouble reading it.

Equally important, it's carefully placed so that most of the time it doesn't block your vision. Some actions like taking a photo do dominate center view, but everything else is carefully placed so that you can see the info while going about your day.

I've tried AR glasses in the past where the field of view is so small and the heads-up display so awkwardly placed that I struggle to perch the glasses properly to even see the content. It's especially difficult to find the sweet spot on dual display AR glasses, at least in my experience.

That's why I didn't mind a monocular display. Google prioritizes visibility in one eye and doesn't have to bulk up the size to fit a second. It only took me about five seconds to perch my Google glasses in the sweet spot, and then I didn't have to make any adjustments.

Meta allegedly plans to sell its single-display Hypernova glasses later this year, so it'll be interesting to see how Android XR glasses match up against a Ray-Ban design and the Meta AI assistant for usability and subtlety, end quote. Ever wonder what ChatGPT and Claude are actually doing with your conversations? Have you ever even stopped to think about that?

We all know Alexa listens to us and recommends products based on our conversations. Meta retargets us based on our browsing and engagement history. But now, in this new AI era, there's a new privacy problem to consider. Think about what we tell these AI platforms: our thoughts, our dreams, sensitive questions, business ideas.

They take all this information, tie it to your identity, and then sell it to various third parties and governments. OpenAI, the maker of ChatGPT, literally has the former director of the NSA sitting on its board right now. That's why I've started using Venice.ai, who is sponsoring today's podcast. Venice.ai is a generative AI platform that is private and permissionless. They utilize leading open-source AI models to deliver text, code, and image generation to your web browser.

There's no downloads, no installations of anything. Venice.ai doesn't spy on you or censor the AI at all. Messages are encrypted, and your conversation history is stored only in your browser. This is a cause I can get behind, and if you also want to use AI without fear of handing over your most intimate thoughts to a corporation or the government, you can get 20% off a Pro plan using my link at venice.ai/techmeme and code techmeme. That's venice.ai/techmeme and code techmeme.

The best piece of money and investing advice I've ever gotten was to simply always do it. Always sock something away, even if the market is bumpy, because being constant will smooth things out in the end. Today's episode is sponsored by Acorns. Acorns is a financial wellness app that makes it easy to start saving and investing for your future.

You don't need to be rich. Acorns lets you get started with the spare money you've got right now, even if all you've got is spare change. You don't need to be an expert. Acorns recommends a diversified portfolio that can help you weather all of the market's ups and downs. You just need to stick with it, and Acorns makes that easy, too. Acorns automatically invests your money, giving it a chance to grow with time.

Sign up now and join the over 14 million all-time customers who have already saved and invested over $25 billion with Acorns. Head to acorns.com/ride or download the Acorns app to get started. Paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier 1 compensation provided. Investing involves risk. Acorns Advisers, LLC, an SEC-registered investment advisor. View important disclosures at acorns.com/ride.

New AI Media Generation Models

On the pure AI front, Google announced new video and image generation models Veo 3 and Imagen 4, new AI filmmaking tool Flow, and expanded access to music generation model Lyria 2. Quoting CNBC: The artificial intelligence tool competes with OpenAI's Sora video generator, but its ability to also incorporate audio into the video is a key distinction. The company said Veo 3 can incorporate audio that includes dialogue between characters as well as animal sounds.

Veo 3 excels from text and image prompting to real-world physics and accurate lip-syncing, Eli Collins, Google DeepMind product vice president, said in a blog post Tuesday. The video-and-audio AI tool is available Tuesday to US subscribers of Google's new $249-per-month Ultra subscription plan, which is geared toward hardcore AI enthusiasts. Veo 3 will also be available for users of Google's Vertex AI enterprise platform.

Google also announced Imagen 4, its latest image generation tool, which the company said produces higher-quality images from user prompts. Additionally, Google unveiled Flow, a new filmmaking tool that allows users to create cinematic videos by describing locations, shots, and style preferences. Users can access the tool through Gemini, Whisk, Vertex AI, and Workspace.

The latest launches come as imagery and video become popular use cases for generative AI prompts. OpenAI CEO Sam Altman said in March that ChatGPT's GPT-4o image generator was so popular that it caused the company's computing chips to melt. The company said it had to temporarily limit the feature's usage. The Mountain View, California, company also updated its Veo 2 video generator to include the ability for users to add or remove objects from videos with text prompts.

Additionally, Google opened its Lyria 2 music generation model to creators through its YouTube Shorts platform and businesses using Vertex AI, end quote. You might have seen some of the video generated by these new models making their way around socials. It's actually pretty wild, pretty good stuff.

AI Features Across Google Products

Google also debuted Deep Think, an enhanced Gemini 2.5 Pro reasoning mode that excels at math and coding benchmarks, available to trusted testers via the Gemini API. It also rolled out Jules, its asynchronous coding agent unveiled in December, in public beta for free with usage limits, and will share pricing after the beta. Google also said weather apps have graduated from beta on Android Auto and cars with Google built in, and Android Auto will soon get browser and video apps.

Google said Gmail's smart replies will now use AI to pull context from a user's inbox and their Drive account, launching in Google Labs in July and available first in English. They also brought live translation to Google Meet, matching the user's tone and cadence, first in beta for Spanish and English on Google AI Pro and Ultra plans.

What's it like to use that live translation? Quoting The Wall Street Journal: It isn't just that the translator turns your words into another language, it emulates your voice and tone, translating with a few seconds of lag. The effect is like watching an overdubbed foreign-language speaker on a news broadcast, but the voiceover is created by AI in the speaker's same voice.

A Google Meet pop-up warned me and the two Spanish-speaking employees, Kemi and Jer, that the experimental translation might not always be correct, and we clicked to agree. Then we began to converse with each other in our native languages. They talked about where they liked to eat after work and travel for weekend getaways in various Latin American locales. Their digitally produced English alter egos had slight Spanish accents. For the most part, the translation was fluid, with minimal lag.

The feature can work with up to 100 participants, though even with just the three of us there was some confusing crosstalk due to the delay. As a speaker, you don't hear your translated voice, so you don't know when it stops talking. There were times when the audio was initially stilted, like there was a connection issue, but eventually the translation caught up. Deciding how much of a speaker's audio to translate at a time was one of the Google team's biggest hurdles, says Awanish Verma, Google's senior director of real-time communication.

The technology starts interpreting before it has the full message. That's hard work because of context: if you say bear, you could mean an animal, giving birth, or carrying something.

When I tested it with my husband, Will, who spoke Spanish, it translated the English word "match," as in a tennis match, to "fight" in Spanish. He also said whenever I started speaking, the first sentence was a bit garbled, but it smoothed out after that. Sometimes the voiceover placed an emphasis in the wrong place or produced broken English: "The heat, the climate, always very warm."

And some direct translations just don't sound right, "I am fascinated by the power to have many options," as an example. Translation is an art, and Google's beta isn't flawless, but I got the gist. I realized how good this clone tech was once I heard my own voice while watching a playback. It sounded scarily like me. Even Will was impressed, end quote.

Google Cloud Sovereign Options

Finally, on this day of all Google things, from the sovereign tech stack file: Google announced it is expanding its sovereign cloud options in the EU, including a new data shield that provides additional cybersecurity protections to European clients. Quoting the FT: The Silicon Valley giant already provides cloud computing offerings in Europe that ensure sensitive information remains on local servers and adheres to EU laws on data privacy.

Google told the Financial Times on Wednesday it was broadening these so-called sovereign cloud options, including a new data shield that provides additional cybersecurity protections to European clients. The U.S. tech company said it would work with local partners in sensitive industries, such as the French defense electronics group Thales, to better ensure it complies with tougher data protection requirements for those sectors.

Google said it would also launch a similar arrangement in Germany soon. The move comes as European groups raise concerns that the Trump administration could use the continent's reliance on digital infrastructure from U.S. big tech groups as leverage in trade talks. Without naming Trump directly, Hayete Gallot, Google's president of customer experience, said global tensions were, quote, creating anxiety in the world, and customers were, quote, looking for options to manage their business.

And suddenly, in the current environment, everybody is thinking about it, end quote. For defense, intelligence, and other sensitive sectors, Google also said it provided an air-gapped solution, which means a client's data does not have to be connected to other networks.

Gallot said she wanted to reassure European customers about their, quote, requirements and expectations that they have around sovereignty, and we are here to provide a layered set of options so that our customers can operate and then their customers can benefit from it, end quote. The move echoes a recent announcement by Microsoft, which last month became the first large American cloud computing business to try to reassure European customers, end quote. Nothing more for you today. Talk to you tomorrow.
