
Fri. 11/22 – An LLM Siri But… Only The Year AFTER Next?

Nov 22, 2024 · 18 min

Episode description

Wait, how long is it going to take Apple to make Siri behave like ChatGPT already does today? Maybe that talk of OpenAI buying the Chrome browser isn't completely far-fetched after all. Is Threads feeling the heat from Bluesky? Do I want to wear a watch on my finger? And, of course, the Weekend Longreads Suggestions.



Transcript

Welcome to the Techmeme Ride Home for Friday, November 22nd, 2024. I'm Brian McCullough. Today: wait, how long is it going to take Apple to make Siri behave like ChatGPT already does today? Maybe that talk of OpenAI buying the Chrome browser isn't completely as far-fetched as I thought. Is Threads feeling the heat from Bluesky? Do I want to wear a watch on my finger? And, of course, the Weekend Longreads Suggestions. Here's what you missed today in the world of tech.

From the "well, of course they are, why wouldn't they be? It would be almost criminally negligent if they weren't" file: Mark Gurman says Apple is testing a more conversational version of Siri, dubbed LLM Siri, with plans to release it in the spring of 2026 as part of iOS 19 and macOS 16.

Quoting Bloomberg: The new Siri, details of which haven't been reported, uses more advanced large language models, or LLMs, to allow for back-and-forth conversations, said the people, who asked not to be identified because the effort hasn't been announced. The system also can handle more sophisticated requests in a quicker fashion, they said.

The new voice assistant, which will eventually be added to Apple Intelligence, is dubbed LLM Siri by those working on it. LLMs, a building block of generative AI, gorge on massive amounts of data in order to identify patterns and answer questions. Apple has been testing the upgraded software on iPhones, iPads, and Macs as a separate app,

but the technology will ultimately replace the Siri interface that users rely on today. The company is planning to announce the overhaul as soon as 2025 as part of the upcoming iOS 19 and macOS 16 software updates, which are internally named Luck and Cheer, the people said. The revamped Siri will rely on new Apple AI models to interact more like a human and handle tasks in a way that's closer to ChatGPT and Google's Gemini.

It also will make expanded use of App Intents, which allow for more precise control of third-party apps. And the software will be able to tap into features from Apple Intelligence, such as the ability to write and summarize text. Like Apple Intelligence this fall, the new features won't immediately be included in next year's crop of hardware devices. Instead, Apple is currently planning to release the new Siri to consumers as early as spring 2026, about a year and a half from now.

As early as spring 2026. This might be the most powerful signal we've seen yet of how far behind Apple really is in AI. They need 18 months just to add LLMs to Siri. Like, how much do you think the chatbot scene will have changed by the time they get feature parity with what smaller players already have in the market today? Quoting Joe Rosensteel on Mastodon:

There's bound to be a really interesting story of corporate intrigue about how the people in charge of Siri have been actively opposed to virtual assistants for a decade and have been waging a silent campaign of sabotage behind the scenes. Either that or a really boring story about unimaginative managers hitting minor progress targets for their bonuses, end quote. About that Google selling Chrome to OpenAI possibility, I don't know that this materially changes the odds of it actually happening.

But for what it's worth, The Information is reporting that OpenAI has considered making a browser in the past, and has even discussed deals to power AI features on Samsung devices and search on sites and apps from Conde Nast and others. Quote: If OpenAI launches some or all of these products, it would become an even bigger competitor to Google, which dominates the browser market with Chrome and the search market with Google Search, and powers Samsung's phones with its Android software. More recently,

Google's Gemini AI began powering features on Samsung devices, such as providing a text summary of a voice recording or using image-generating technology to edit photos. The state of talks between Samsung and OpenAI couldn't be learned, but Google has been preparing for the possibility of competing with OpenAI to power such features, said a person with knowledge of the situation.

For Samsung, it makes sense to have more than one potential provider of such technology as it negotiates the terms of deals. ChatGPT generates billions of dollars a year in revenue from subscriptions and has added search-like features that show real-time information from the web, in part with help from Microsoft's search technology. But ChatGPT hasn't visibly hurt Google Search yet, even if some people are using it as a partial replacement for Google.

Still, ChatGPT is growing quickly and currently dominates the nascent market for AI chatbots. Making a web browser could help OpenAI have more control over a primary gateway through which people use the web, as well as further boosting ChatGPT, which has more than 300 million weekly users just two years after its launch. It isn't clear how a ChatGPT browser's features would differ from those of other browsers.

In a signal of its interest in a browser, several months ago OpenAI hired Ben Goodger, a founding member of the Chrome team at Google. Another recent hire is Darin Fisher, who worked with Goodger to develop Chrome. But OpenAI isn't remotely close to launching a browser, multiple people said. Launching a browser is time-consuming and complicated, because browser providers need to ensure people's data doesn't leak to websites, and the browser needs to work with various types of extensions

and adequately compete with incumbent browsers, among other things, end quote. I continue to say that the sudden explosion over on Bluesky is not necessarily, or not so much, people fleeing X as it is people finally giving up on Threads. Bluesky is closer to old Twitter than basically anything else is at the moment, and that's all people ever wanted, I argue. Because Meta made Threads in its inimitable Meta way.

It does not function well as a breaking news platform or a digital town square. Like when the Matt Gaetz news broke the other day, within seconds it was immediately all over X, all over Bluesky. But over at Threads, as M.G. Siegler joked, you were still being served up posts like, oh, what a lovely new lens for my Leica camera.

Well, maybe Threads is feeling the heat, because Adam Mosseri says, oh yeah, maybe we will tweak some of the things to give you what you've been asking for, for months now. Quoting The Verge: We are rebalancing ranking to prioritize content from people you follow, which will mean less recommended content from accounts you don't follow and more posts from the accounts you do, starting today, Threads boss Adam Mosseri announced on Thursday. This is another significant change

to Threads since many people started flocking to Bluesky, which lets people default to seeing their Following feed when they open the app and offers a lot of customizability, including custom feeds. Threads just yesterday rolled out its take on custom feeds, less than a week after it started testing them.

The change could make the For You feed include more accounts you care about, but we'll have to wait and see if the updates address the problem of the feed surfacing posts that are very old and no longer timely. You still can't leave Threads on the Following feed, though. It also means that creators should expect unconnected reach to go down but connected reach to go up, Mosseri says, end quote. On those quickly rolled-out custom feeds, quoting TechCrunch:

Hoping to capitalize on user demand for more personalization, custom feeds are meant to allow Threads users to easily build feeds around specific topics or those including certain user profiles. This would make it easier for Threads users to tap into the communities and conversations.

that are most important to them, and could help to challenge Bluesky's own set of tools for personalization, including those that let users build their own algorithms, feeds, and lists, as well as those that let them configure their own moderation tools.

Custom feeds on Threads come only days after Meta CEO Mark Zuckerberg announced the feature was entering testing. That signals that Threads is moving quickly to topple some of the momentum that Bluesky has recently seen. Since the U.S. elections, Bluesky adoption has soared as

users looked for an alternative to the now more right-leaning X owned by Elon Musk. Other decisions at X have also pushed people to depart, including how it has changed the block feature and its policy around training AI models on user data. Some users on Threads have been disappointed by its decision to deprioritize politics and have asked for other options beyond the default algorithmic For You feed and the chronological Following feed.

That's where custom feeds come in. To use the feature, you'll first need to search for and then tap into a topic to see the latest posts. From there, you'll tap on the three-dot icon next to the search term and choose the option Create New Feed. You can also choose to add specific user profiles to a feed by visiting the user's profile, tapping the three-dot icon above their profile picture.

and then tapping to add them to one of your feeds. After the feeds are created, they can be pinned to the top of the Threads home screen on the web in the column-based view, similar to Twitter's TweetDeck or X Pro, making them easier to access, end quote. This episode is brought to you by Incogni. Incogni is a service that helps protect your personal data from data brokers who collect, aggregate, and sell it.

Incogni reaches out to data brokers on your behalf and requests your personal data removal. With the Family and Friends plan, you can also add up to four members to your subscription so that everyone can benefit from Incogni's services. Because data brokers continue collecting your personal information even after they've removed it, Incogni makes sure your data stays off the market by conducting repeated removal requests.

Incogni will handle any objections from data brokers and keep you updated on their progress every step of the way. Take control of your data privacy today. Visit incogni.com, sign up, and enjoy a 30-day money-back guarantee to ensure you're completely satisfied with their service. Take your personal data back with Incogni. Use code RIDEHOME at the link below and get 60% off an annual plan: incogni.com/ridehome. That's I-N-C-O-G-N-I, incogni.com/ridehome.

Lumen is the world's first handheld metabolic coach. It's a device that measures your metabolism through your breath. And on the app, it lets you know if you're burning fat or carbs and gives you tailored guidance to improve your nutrition, workouts, sleep, even stress management.

Because your metabolism is at the center of everything your body does, optimal metabolic health translates to a bunch of benefits, including easier weight management, improved energy levels, better fitness results, better sleep, etc. All you have to do is breathe into your Lumen first thing in the morning and you'll know what's going on with your metabolism. Then Lumen gives you a personalized nutrition plan for that day

based on your measurements. You can also breathe into it before and after workouts and meals, so you know exactly what's going on in your body in real time, and Lumen will give you tips to keep you on top of your health game. So if you want to stay on track with your health this holiday season, go to lumen.com. Lumen makes a great gift, too. Thank you, Lumen, for sponsoring this episode.

Interesting gadget here. How about a smart ring, like an Oura smart ring, but it's actually a watch, specifically a tiny little old-timey Casio watch. This is not from The Onion. Quoting The Verge: Not to be outdone by its longtime rival, Casio has announced its own digital ring watch that brings more functionality than Timex's wearable that debuted last month.

Created to help commemorate the 50th anniversary of Casio getting into the digital watch business, the CRW-001-1JR will be available in Japan starting in December for around $128.

Although the tiny watch's case is just shy of being an inch in size, Casio has managed to squeeze in a retro six-digit LCD screen that can display hours, minutes, and seconds. The ring also includes three functional buttons that can control additional features, like displaying the date or the time in a different time zone, and a stopwatch.

The ring watch's screen even has a light, and an alarm function that will flash the display instead of playing an audible sound. It's powered by a single battery that Casio says will keep the waterproof watch running for about two years, but it's also easily replaceable when it does die. To accurately recreate the design and intricate details of the larger digital watches the ring is based on, Casio says the wearable is manufactured as a single piece using a metal injection molding process

that starts with powdered metal. As a result, unlike Timex's watch ring, which features a stretchable band to accommodate different users, the CRW-001-1JR is permanently a US size 10.5. If your fingers are smaller than that, Casio includes a couple of spacers to improve the fit. If your fingers are larger than that, you'll need to find some other way to check the time, end quote.

As ever, this is probably one of those segments it's worth clicking through on to see the pictures of. And then tell me I don't need this. Tell me I don't need this. Tell me I don't need this. Tell me I don't need this. Time for the Weekend Longreads Suggestions. First up, from The Verge:

Just a pretty comprehensive guide to Bluesky and what exactly the features are that people are flocking over there to enjoy, like the custom algorithmic feeds and how to set them up, the starter packs and how to make use of them, the ability to do custom domains and change your handle, and labelers, a thing that I did not know existed. Then, a chaser to the whole idea of Apple being behind in AI. From Fortune,

a look at how Mark Zuckerberg made Llama a cornerstone of the AI ambitions at Meta, whose smartphone-era services and products have always been constrained by Apple and Google, as we know. Quote,

While ChatGPT remains the dominant gen AI tool in the popular imagination, Llama models now power many, if not most, of the Meta products that billions of consumers encounter every day. Meta's AI assistant, which reaches across Facebook, Instagram, WhatsApp, and Messenger, is built with Llama, while users can create their own AI chatbot with AI Studio.

Text generation tools for advertisers are built on Llama. Llama helps power the conversational assistant that is part of Meta's hit Ray-Ban glasses and the feature in the Quest headset that lets users ask questions about their surroundings. The company is said to be developing its own AI-powered search engine, and outside its walls, Llama models have been downloaded over 600 million times on sites like the open-source AI community Hugging Face. Still, the pivot

has perplexed many Meta watchers. The company has spent billions to build the Llama models. On its third-quarter earnings call, Meta announced that it projects capital expenditures for 2024 to reach as high as $40 billion, with a, quote, significant increase likely in 2025.

Meanwhile, it's giving Llama away for free to thousands of companies, including giants like Goldman Sachs, AT&T, and Accenture. Some investors are struggling to understand where and when exactly Meta's revenue would start to justify the eye-watering spend. Nonetheless, Llama's contrarian success has allowed Zuckerberg to shrug off the lukewarm response to his metaverse ambitions and the company's painful year of efficiency in late 2022 and early 2023.

The rise of Llama has also given Zuckerberg a chance to address a long-simmering sore point in his otherwise meteoric career: the fact that Facebook, and now Meta, have so often seen their services and products constrained by rules imposed by Apple and Google, the rival giants whose app stores are Meta's primary points of distribution in the mobile device era.

As he wrote in a July blog post, we must ensure that we always have access to the best technology and that we're not locked into a competitor's closed ecosystem where they can restrict what we build. With Llama, Meta and Zuckerberg have the chance to set a new industry standard.

I think we're going to look back at Llama 3.1 as an inflection point in the industry where open source AI started to become the industry standard just like Linux is, he said on Meta's July earnings call, invoking the open source project that disrupted the dominance of proprietary operating systems like Microsoft Windows."

And from the Financial Times, a long read, and I mean a long one, but a very, very interesting, detailed look at how AI, as we've discussed before, might be the technology that unlocks robotics at scale. The extraordinary progress in generating text and images using AI over the past two years has been accomplished by the invention of large language models, the systems underpinning chatbots.

Roboticists are now building on these and their cousins, visually conditioned language models, sometimes called vision language models, which connect textual information and imagery. With access to huge existing troves of text and image data, researchers can pre-train their robot models on the nuances of the physical world and how humans describe it, even before they begin to teach their machine students

specific actions. But in the chaotic, ever-changing real world, machines not only need to be able to execute individual tasks such as these, they also need to carry out a multitude of jobs in different settings. Those at the heart of robotics believe the answer to this generalization is to be found in foundation models for the physical world, which will draw on growing databases relating to movement, banks of information recording robot actions.

The hope is these large behavioral models, once big enough, will help machines adapt to new and unpredictable environments, such as commercial and domestic settings, rapidly transforming business and our home lives. But these models face numerous challenges beyond what is required for linguistic generative AI. They have to drive actions that obey the laws of physics in a three-dimensional world and adapt to dynamic environments occupied by other living things.

The current challenge for developing these large behavioral models is the scarcity of data, a difficulty also facing large language models as a result of human information sources being exhausted. But a great communal effort is underway among the robotics community to generate new sets of training data.

We've been seeing more data be available, including data for very dexterous tasks, and seeing a lot of the fruits of putting that data in, says Stanford's Finn. It suggests that if we are able to scale things up further, then we might be able to make significant breakthroughs in allowing robots to be successful in real-world environments. Click through on this one, because tons of graphics are in there to explain everything.

The weekend bonus episode this weekend is me on another podcast, the Daily Detroit podcast. I'm sharing this one because the topic we discuss is something that I've not really been able to get into on this show: how and why I think it's so hard to seed new tech hubs in places beyond Silicon Valley and, crucially, what I think regions should do to actually make that happen. Listen, enjoy, talk to you on Monday.

This transcript was generated by Metacast using AI and may contain inaccuracies.