
Mon. 03/18 – Apple To Partner With Google For AI On iOS?

Mar 18, 2024 · 17 min

Episode description

I did NOT have on my bingo card Apple turning to Google to power its first big foray into modern AI on its hardware. Even while a new Apple AI model might be pointing to breakthroughs in AI reasoning. One of the biggest e-sports competitions in the world has been hacked by cheaters. And some pretty bearish news for the VR industry.


Transcript

Welcome to the Techmeme Ride Home for Monday, March 18, 2024. I'm Brian McCullough. Today: I did not have on my bingo card Apple turning to Google to power its first big foray into modern AI on its hardware, even while a new Apple AI model might be pointing to breakthroughs in AI reasoning. One of the biggest e-sports competitions in the world has been hacked by cheaters, and some pretty bearish news for VR. Here's what you missed today in the world of tech.

Well, this is a bit of a record scratch moment that breaks a bunch of narratives all at once. Mark Gurman's sources are telling him that Apple and Google are in active talks to use Gemini, Google's AI model, to power some new AI iPhone features this year. Apple apparently also has been holding talks with OpenAI to use its models. Quote, if a deal between Apple and Google comes to

fruition, it would build upon the two companies' search partnership. For years, Google has paid Apple billions of dollars annually to make its search engine the default option in the Safari web browser on the iPhone and other devices. The two parties haven't decided the terms or branding

of an AI agreement or finalized how it would be implemented, the people said. A deal would give Gemini a key edge with billions of potential users, but it also may be a sign that Apple isn't as far along with its AI efforts as some might have hoped and threatens to draw further antitrust scrutiny of both companies. Apple is preparing new capabilities as part of iOS 18,

the next version of the iPhone operating system, based on its own AI models. But those enhancements will be focused on features that operate on its devices rather than ones delivered via the cloud. So, Apple is seeking a partner to do the heavy lifting of generative AI, including functions for creating images and writing essays based on simple prompts. Since early last year, Apple has been testing its own large language model, the technology behind generative AI, codenamed Ajax.

Some employees also have been trying out a basic chatbot dubbed Apple GPT. But Apple's technology remains inferior to tools from Google and other rivals, according to the people, making a partnership look like the better option. But a partnership between the two Silicon Valley giants would likely draw the eye of regulators. Google's current deal with Apple for search is already the focus of a lawsuit by the US Department of Justice. The government has alleged that the companies have

operated as a single entity to corner the search market on mobile devices. The pair has justified the arrangement by saying Apple believes Google's search quality is superior to rivals and that it's easy to switch providers on the iPhone. While the talks between Apple and Google remain active, it's unlikely that any deal would be announced until June, when the iPhone maker plans to hold its annual Worldwide Developers Conference. It's possible that the companies don't reach an agreement or

Apple ultimately chooses to go with another generative AI provider like OpenAI. Or Apple could theoretically tap multiple partners, as it does with search in its web browser. Other generative AI providers include Anthropic, which offers a chatbot called Claude. Yeah, I don't know; I could parse this in about a million different ways. Does this indicate desperation on Apple's part? Are they really that far behind in AI? I mean, given the obvious opening this would create for

regulatory scrutiny, that would lean me in the direction of desperation. But maybe it's what Parker Thompson said on Threads, quote, even before LLMs, it felt like Apple was making bad strategic decisions with regards to data slash ML while getting high on their own supply of privacy propaganda. So it's good to see them getting sober quickly as this gets serious. End quote. Dare Obasanjo said this on

Mastodon, quote, 2024 is the year Apple faced its limitations. First, giving up the dream of competing with Tesla in EVs, and now conceding it can't compete with Google and OpenAI in generative AI. This means iOS users end up winning, as we get actual cutting edge features and not Siri warmed over, end quote. To counter my regulation thoughts, though, this is James Titcomb on Twitter, quote, this would suggest Apple and Google are pretty confident that US courts aren't about to

break up their $18 billion per year search arrangement end quote. Max Wolff has an interesting point, quote, neither Google nor OpenAI's infrastructure could handle the potential scale of Apple's user base given the scaling issues both companies are having. Even Google doesn't have enough GPUs

end quote. Dan Ives thinks this is a stepping stone thing, i.e., Apple leverages Google for a while before going all in on its own stuff, quote, this potential Google strategic partnership is a missing piece in the Apple AI strategy and combines forces with Google for Gemini to power some of the AI features Apple is bringing to market. For Apple, this will give them the foundation slash

blueprint to double down on AI features, end quote. And that does lead to an Occam's razor thought that is also compelling, as summarized by @kakashe111 on Twitter, quote, well, I guess they are not ready with their own model of Siri AI, who would have thought, and they must present a generative AI, if not, investors will punish them, end quote. Yes, there have been plenty of articles in recent weeks about how Apple stock is underperforming the NASDAQ 100 by the largest percentage in years.

Remember, revenue at Apple has been hitting a wall in all sorts of various categories. Investors are looking for some new revenue lever to be pulled by Apple, but are also super hot on AI right now, which Apple is not giving them. So what if Apple knows it simply can't deliver anything compelling in terms of AI on its own this year, and instead of seeing Wall Street punish the stock, it's just doing this quickly as a, you know, break-glass-in-case-of-emergency holding pattern?

Which makes the fact that this surfaced over the weekend even more interesting. Apple researchers quietly posted to arXiv MM1, a series of multimodal LLMs with up to 30 billion parameters, which they say achieves state-of-the-art performance on multiple AI benchmarks, but also might lead to future breakthroughs. Quoting VentureBeat. Apple researchers have developed new methods for training large language models on both text and images, enabling more powerful and flexible AI systems, in what could

be a significant advance for artificial intelligence and for future Apple products. The work, described in a research paper titled "MM1: Methods, Analysis and Insights from Multimodal LLM Pre-training" that was quietly posted to arxiv.org this week, demonstrates how carefully combining different types of training data and model architectures can lead to state-of-the-art performance on a range of AI

benchmarks. By training models on a diverse data set spanning visual and linguistic information, the MM1 models were able to excel at tasks like image captioning, visual question answering, and natural language inference. The researchers also found that the choice of image encoder and the resolution of input images had a major impact on model performance.

We show that the image encoder, together with image resolution and the image token count, has a substantial impact, while the vision-language connector design is of comparatively negligible importance, they said. This suggests that continued scaling and refinement of the visual components of the multimodal models will be key to unlocking further gains.

Surprisingly, the largest 30 billion parameter MM1 model exhibited strong in-context learning abilities, allowing it to perform multi-step reasoning over multiple input images using few-shot chain-of-thought prompting. This points to the potential for large multimodal models to tackle complex, open-ended problems that require grounded language understanding and generation

end quote. Remember, a lot of people believe that getting models to do actual reasoning, as opposed to what has, to put it in a, you know, reductive way, just been glorified pattern matching so far, is the next big breakthrough for AI. Rumors have swirled that reasoning will show up in GPT-5, or that a range of startups we've been talking about recently have made a breakthrough in doing reasoning, at least to some degree. Maybe Apple is hinting at something similar.
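
To make the few-shot chain-of-thought idea concrete, here is a minimal, purely illustrative Python sketch of how such a prompt over multiple images might be assembled. The <image_N> placeholders, the example content, and the build_fewshot_cot_prompt function are hypothetical; real multimodal models like MM1 use their own image-token conventions and serving stacks, so treat this as a sketch of the prompting pattern, not Apple's implementation.

# Build a few-shot chain-of-thought prompt that interleaves several images
# with worked reasoning, then asks a new multi-image question.
def build_fewshot_cot_prompt(examples, question_images, question):
    parts = []
    for ex in examples:
        parts.append(" ".join(f"<{img}>" for img in ex["images"]))  # the example's images
        parts.append(f"Q: {ex['question']}")
        parts.append(f"A: Let's think step by step. {ex['reasoning']} "
                     f"So the answer is {ex['answer']}.")
    parts.append(" ".join(f"<{img}>" for img in question_images))   # the new question's images
    parts.append(f"Q: {question}")
    parts.append("A: Let's think step by step.")                    # the model continues from here
    return "\n".join(parts)

examples = [{
    "images": ["image_1", "image_2"],
    "question": "How many beers can I buy with the money shown?",
    "reasoning": "The menu lists beer at $6 and the photo shows $12 in bills; 12 / 6 = 2.",
    "answer": "2",
}]

prompt = build_fewshot_cot_prompt(examples, ["image_3", "image_4"],
                                  "Which photo shows the taller building?")
print(prompt)  # this text, plus the actual images, is what would be fed to the model

The point is simply that the model is shown worked, multi-image reasoning traces in context and is then expected to continue the pattern on a new set of images.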

Meanwhile, as Elon promised, xAI has open sourced the base model weights and network architecture of Grok-1, a 314 billion parameter mixture-of-experts model, under the Apache 2.0 license. Quoting TechCrunch. In a blog post, xAI said the model wasn't tuned for any particular application, such as using it for conversations. The company noted that Grok-1 was trained on a quote, custom stack, without specifying details. Some AI-powered tool makers are already talking about

using Grok in their solutions. Perplexity CEO Aravind Srinivas posted on X that the company will fine-tune Grok for conversational search and make it available to its Pro users, end quote. More details from VentureBeat. Parameters refers to the weights and biases that govern the model; the more parameters, generally, the more advanced, complex and performant the model is. At 314 billion parameters, Grok is well ahead of open source competitors such as Meta's Llama 2,

at 70 billion parameters, and Mistral's Mixtral 8x7B, at around 12 billion active parameters. Grok was open sourced under an Apache 2.0 license, which enables commercial use, modifications and distribution, though it cannot be trademarked and there is no liability or warranty that users receive with it. In addition, they must reproduce the original license and copyright notice and state the changes they've made. Grok's architecture was developed using a custom training stack atop JAX and Rust in

October 2023, and incorporates innovative approaches to neural network design. The model utilizes 25% of its weights for a given token, a strategy that enhances its efficiency and effectiveness. Grok was initially released as a proprietary or closed source model back in November 2023, and it was until now accessible only on Musk's separate but related social network X, formerly Twitter, specifically through the X Premium Plus paid subscription service, which costs

$16 per month or $168 per year. However, Grok's release does not include the full corpus of its training data. This doesn't really matter for using the model, since it has already been trained, but it does not allow users to see what it learned from, presumably user text posts on X. The xAI blog post states it opaquely as, quote, base model trained on a large amount of text data,

not fine-tuned for any particular task. It also does not include any hookup to the real-time information available on X, which Musk initially touted as a major attribute of Grok over other LLMs. For that, users will still need to subscribe to the paid version on X, end quote.
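
Since "mixture of experts" is doing a lot of work in that description, here is a small, hypothetical PyTorch sketch of top-2-of-8 expert routing, the general technique Grok-1 reportedly uses. The dimensions are toy values and none of this is xAI's actual code; it just shows why only about a quarter of the expert weights touch any single token.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: route each token to its top-k experts."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores each token against each expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                             # x: (num_tokens, d_model)
        top_vals, top_idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)         # mixing weights over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e          # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
# With 2 of 8 experts active per token, roughly 25% of the expert weights
# participate in any one forward pass -- the efficiency trick mentioned above.

Grok-1's real layers are orders of magnitude larger, but the routing idea is the same.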

When you go through airport security, there's one line where the TSA agent checks your ID and another line where a machine scans your bag. The same thing happens in enterprise security, but instead of passengers and luggage, it's end users and their devices. These days most companies are pretty good at the first part of the equation, where they check user identity, but user devices can roll right through authentication without getting inspected at all. In fact, 47% of companies allow unmanaged, untrusted devices to access their data. That means an employee can log in from a laptop that's had its firewall turned off and hasn't been updated in six months. Or worse, that laptop might

belong to a bad actor using employee credentials. Kolide finally solves the device trust problem. Kolide ensures that no device can log into your Okta-protected apps unless it passes your security checks. Plus, you can use Kolide on devices without MDM, like your Linux fleet, contractor devices, and every BYOD phone and laptop in your company. Visit kolide.com slash ride to watch a demo and see how it works. That's k-o-l-i-d-e.com slash ride.

Want a better way to simplify your business finances across expenses, vendor payments, and accounting? If so, Ramp could be a complete game changer. Ramp is the corporate card and spend management software designed to help you save time and put money back in your pocket. Ramp gives finance teams unprecedented control and insight into company spend. With Ramp, you're able to issue cards to every employee with limits and restrictions and automate expense

reporting so you can stop wasting time at the end of every month. Ramp's accounting software automatically collects receipts and categorizes your expenses in real time, so you don't have to. You'll never have to chase down a receipt again, and your employees will no longer spend hours submitting expense reports. The time you'll save each month on employee expenses will allow you to close your books eight times faster. Ramp also saves you money. Businesses that use Ramp save an

average of 5% in the first year. Ramp is easy to use. Get started, issue virtual and physical cards, and start making payments in less than 15 minutes, whether you have five employees or 5,000. And now, get $250 when you join Ramp. Just go to ramp.com slash techmeme. That's ramp.com slash techmeme. Hey, look at this, some non-AI news. Respawn and EA have postponed the North American finals of Apex Legends after an unknown user hacked the game and gave pros hacks like aimbots during the event.

Quoting Forbes. Respawn and EA have postponed the North American finals in the wake of the, quote, competitive integrity of the game being compromised. This involved a wild situation where someone was giving the players hacks like aimbots and wallhacks as they were playing in the finals event, effectively ruining the entire thing without anyone actually attempting to cheat. Players are now starting to potentially worry about their own accounts and the overall safety of the game.

What's unclear is the extent of the breach. There is some concern that it might not just be about messing with the pros at the finals, but a larger security issue with the entire game that could affect the wider player base. Some creators are claiming on social media that they've scanned their PCs and are finding viruses, though there's so much panic going around, there's no evidence that that has anything to do with this hack. But if the hack could breach a pro match, it would seem to be

something that could breach normal players, even if it's not actually doing so right now. Many believe this is the work of one hacker, Destroyer2009, who has previously been hacking pros, and that this was an RCE, a remote code execution exploit targeting their PCs, but none of that has been confirmed. It's hard to overstate just how unprecedented something like this is in a major eSports event. A finals event getting put on ice because someone breached the game to give players hacks is simply something

that does not happen. This has led to a mass of complaints about Apex's anti-cheat systems, which clearly failed in a massive way for this situation. But it also speaks to just how advanced

cheats have become, as this is a private lobby for pros playing in an eSports final. Not that this is necessarily related, but Respawn was just hit days ago with 23 layoffs, including Apex Legends developers, some of whom were longtime veterans. Though if anything, this shows that EA needs to beef up Apex's security team to some extent, as something like this requires all hands, or more hands on deck than they currently have now, it seems. And finally today, sources are saying Sony has paused PlayStation

VR2 production to clear unsold inventory. Sony recently confirmed that it was testing PSVR2 PC support, in what looks like an attempt to get creative about broadening the platform's appeal. Quoting Bloomberg. Sales of the $550 wearable accessory to the PlayStation 5 have slowed progressively since its launch, and stocks of the device are building up, according to the people, who asked not

to be named as the information is not public. Sony has produced well over 2 million units of the product since its launch in February of last year, the people said. PSVR2 shipments have declined every quarter since its debut, according to IDC, which tracks deliveries to retailers rather than

consumers. The surplus of assembled devices is throughout Sony's supply chain, the people said. Still, IDC's Francisco Jeronimo sees a recovery for the product category in coming years with the help of Apple's entry. We forecast the VR market to grow on average 31.5% per year between 2023 and 2028, he said. Alongside Meta, Sony has been one of the leading purveyors of virtual reality gear, but both have struggled to attract enough content and entertainment creators to make their

platforms compelling. A similar problem stalks Apple's much pricier Vision Pro headset, as it made its debut without tailored apps from key entertainment platforms like Netflix and YouTube. Tokyo-based Sony last month announced it's shutting down its PlayStation London division, which was focused on making virtual reality games. That move was part of a wider set of layoffs that also affected in-house studios like Guerrilla Games, which had worked on creating a PSVR2-exclusive game

in its popular horizon series Horizon Call of the Mountain. The high price of VR hardware acts as the main hurdle for its expansion said an analyst currently there are limited games that support VR devices and that will also lead to lack of motivation for players to purchase VR hardware. This limited content also has a reason the development cost for VR games is substantially higher than

I'm sure there are Harvard Business School case studies on this, but it's wild the degree to which AR/VR still suffers from the chicken-and-egg platform and developer problem. Like, think back: Apple flipped the switch and just turned on the App Store with the iPhone all those years ago, and everyone immediately started building for it, and use cases were immediately apparent to both users and developers, even though for the first few years there wasn't the user base on

the App Store that we assume today. What I'm saying is wild is that even all the hype in the world, at least as of this moment, can't seem to convince enough people to buy and use VR devices, or to convince developers to make things for them, and actually vice versa. I'm sure tons of people will quibble about this, but there sure still doesn't seem to be a killer app to kick-start the VR industry, a spreadsheet equivalent from the PC era, or even an iBeer equivalent from the early

App Store era. Always running late, but boy, I'm on time for you. Woke up with that in my head, so, song lyrics. Talk to you tomorrow.

This transcript was generated by Metacast using AI and may contain inaccuracies.