
#177 - Instagram AI Bots, Noam Shazeer -> Google, FLUX.1, SAM2

Aug 11, 2024 · 2 hr 53 min · Ep. 216

Episode description

Our 177th episode with a summary and discussion of last week's big AI news!

With guest co-host Jon Krohn from the Super Data Science Podcast (https://www.superdatascience.com/podcast)!

If you'd like to listen to the interview with Andrey, check out https://www.superdatascience.com/podcast

If you would like to get a sneak peek and help test Andrey's generative AI application, go to Astrocade.com to join the waitlist and the discord.

In this episode, hosts Andrey Kurenkov and Jon Krohn dive into significant updates and discussions in the AI world, including Instagram's new AI features, Waymo's driverless car rollout in San Francisco, and NVIDIA's chip delays. They also review Meta's AI Studio, Character.ai CEO Noam Shazeer's return to Google, and Google's Gemini updates. Additional topics cover NVIDIA's hardware issues, advancements in humanoid robots, and new open-source AI tools like OpenDevin. Policy discussions touch on the EU AI Act, the U.S. stance on open-source AI, and investigations into Google and Anthropic. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted, all emphasizing significant industry effects and regulatory implications.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.

Email us your questions and feedback at [email protected] and/or [email protected]

Transcript

AI Singer

Last week we saw it, Instagram's new AI ride, bots on the scroll, all the secrets they hide. Meta's SAM 2 is shiny, segmenting the day. Black Forest Labs, they're leading the way.

Andrey

Hello and welcome to the latest episode of Last Week in AI, where you can hear us chat about what's going on with AI. We will summarize and discuss last week's most interesting AI news. And as always, you can also go to lastweekin.ai for stuff we did not cover in this episode. I am one of your hosts, Andrey Kurenkov. I finished a PhD in AI at Stanford, and I now work at a Silicon Valley generative AI startup. And with me this week is not Jeremy.

He is busy with baby and house-buying stuff. So we once again have a great guest co-host, Jon Krohn.

Jon

Yes. Hi there. I'm your other host, Jon Krohn. I'm going to do my best to try to channel Jeremy and be as much of a funny genius as your commenters say that he is. It's not easy,

Andrey

but you can make

Jon

sure that I can at least be Canadian as well. That's a guarantee.

Andrey

There you go. Yeah, that's a critical component of Jeremy. And, you know, he tends to mention AI safety a bit more, a bit more geopolitics; Jeremy is really the hardware guy.

Jon

I'll try to get that Goldilocks sweet spot as well on dooming. I'm a big techno-optimist, and I think you and I are both really techno-optimistic. Actually, if people want a taste of Andrey Kurenkov in a way that they may have never had before, they can check out episode number 799 of my podcast, the Super Data Science Podcast, because our guest in that episode is you, Andrey.

We had, well, I had an amazing time at least, I shouldn't speak for you, but we got to meet in person in San Francisco, and it was a beautiful sunny day in San Francisco, which you cannot take for granted. We filmed together in person, and we had topics planned, but ended up spending a huge chunk of the episode talking about our techno-optimism about artificial general intelligence and the artificial superintelligence that would follow immediately afterward.

It's one of my favorite on-air conversations ever. So if people want to hear more about Andrey's background, how he got into creating this podcast, the Last Week in AI podcast, as well as so many other productions, he's done so much, it's crazy how he continues to do so much, you can hear all about that on episode number 799 of the Super Data Science Podcast.

Andrey

Yeah, we will have a link, of course, in the description. I think I forgot to mention it on the podcast somehow, but now it's the perfect opportunity. And as you said, we spend like a good 10 to 15 minutes just talking about this podcast, how it came to be, what we do on it. We also talk a little bit about what I'm doing now with my startup, but eventually we just go off into other topics like AGI and have quite a bit of fun talking about it.

Jon

Yeah, that's another really cool aspect of the episode: people who are always listening to Last Week in AI don't hear that much about what you do at your day job, and we spent a good chunk of time on that. So that was interesting as well, hearing about generative AI, text-to-video-game gen AI, which is, I mean, a mind-blowing concept that you guys are starting to figure out over there. So that's something to check out. The podcast is a big part of my life.

I think that's probably what I'm best known for these days, but I also wrote a bestselling book called Deep Learning Illustrated. I am the co-founder and chief data scientist at an AI company called Nebula. And I don't know, I think that's about it. Those are the main things. Great.

Andrey

Now, just a couple more programming notes before we get to the news as usual. I will say, unfortunately, the last episode of this podcast came out a bit late, due to me being busy and possibly a bit lazy. So the big news about Llama came out maybe a week after it should have, but I will try to be more on time with this one.

Jon

That's the same thing that happens with me with the production cycles that we have at the Super Data Science Podcast; it's very hard for me to get something out quickly. You have much faster turnaround times than we do. I also got the episode about Llama 3.1 out on Friday of last week; you got yours out on Saturday. And so I was happy to see that, though I was disappointed, knowing that I was going to be hosting this next episode, that I wouldn't get to cover that huge release.

Because I do think that Llama 3.1 405B release is one of the biggest deals we've had in AI in a while, in several months at least. To have a frontier model now that's open source is pretty wild.

Andrey

Yeah, I think really that's like the most significant thing that's happened this year, I would say, pretty much. So definitely do listen to those to hear our takes and have a bit of a deep dive into what it is. And just a couple more things. As always, I do want to call out our listeners who gave us feedback. We got one review from Cloud Sherpa on Apple Podcasts that says there's a good breadth and depth, which we are pretty much always striving for; we do go through a lot of breadth for these episodes. So we appreciate people who stick around. And I don't know how many reviews we have, like 195 on Apple Podcasts now? It would be really cool to see that pump up to 200; I just like round numbers. And one more thing: there was a nice comment on YouTube from Alien Forensics, and there's a question in this one, and we do welcome questions that listeners have.

So the question was: with Apple's recent focus on integrating advanced AI capabilities and private cloud compute systems, how do we see that shaping the future of AI accessibility and privacy? And I do think that's pretty significant. I think it's a differentiator for Apple that they've been hammering on and now are especially hammering on. One of the differentiators is the on-device nature of it and the privacy; when you send something to OpenAI, you opt into that.

And I think more than likely Google will follow suit in trying to make the software of the AI models as much on-device as possible, and possibly also hammer on that privacy point to continue differentiating. Because I think a lot of people are starting to worry a bit; of course, these tech companies are already getting all the data on us, but now if we talk about AI assistants, and we talk about possibly private things or sensitive things, people will not want that to be used in training data.

Jon

I think that with Apple, at least, it's a safe bet that they are going to prioritize your privacy. That is the most distinguishing factor about Apple these days relative to the other big tech companies. So it's interesting; it was surprising to me to hear that with Apple Intelligence, they are going to be sending some of your requests, if you allow them, to OpenAI. But that also just shows how tricky it can be to create frontier models, and how, even if you are a huge, well-capitalized company like Apple with so much great talent, that doesn't automatically mean you can jump to the frontier on AI. And I think eventually they could get close enough to the frontier that in the next few years, they won't need to be sending off queries.

And that can be in part because of things like Llama 3.1, this open-source model, though they would probably have to get a special license. I actually didn't look into the license for Llama 3.1, but if it's like the preceding Llama licenses from Meta, companies with more than 700 million users can't use the model weights, and Apple would be one of the few companies in that category.

But nevertheless, you could imagine they would probably prefer striking a deal with Meta to get access to those weights and have those weights running on their own infrastructure than be sending queries off to a third party like OpenAI.

Andrey

Right. And actually this reminds me, we did go into this a little bit in the interview on your podcast, where we talked about AI as a feature, as opposed to AI as a product. And I do think one of the highlights of Apple's approach is not necessarily making a general-purpose chatbot, as everyone like Google and OpenAI has been. Instead, it's this deep integration throughout iOS with more specialized tools that are at hand.

And I think that's another pretty important feature of their approach that is probably a smart take in comparison to the others.

Jon

Um, last, I guess, bit of banter, because we're probably ending the banter section right now, so I think it's the last opportunity for me to crack open a beer can here on air.

Andrey

Yeah. An unofficial tradition now, which is pretty fun for us. And as before: non-alcoholic beer, non-alcoholic beer, don't drink on podcasts, folks. We really have got to get that sponsorship from Athletic. I think that's right. Alrighty. And now onto the news, starting with tools and apps. And in fact, this is a good little segue, because the first story is that Apple Intelligence is going to miss the initial launch of iOS 18.

So we were sort of expecting it to roll out along with iOS 18, but it seems that that's not happening. It's going to come around October, as opposed to iOS 18, which is coming in September. And even then, it is going to be missing some features, like the most significant changes to Siri. So clearly Apple is being very careful to avoid some of the blunders that some companies, perhaps Google, have seen with maybe rushed launches. And, you know, people have criticized Apple to some extent for coming to the AI game pretty late, but now that they are late, I guess it does make sense to take their time and really make this as polished as possible.

Jon

Yeah, it's easy to make fun of Google for the big releases they've had, but I totally appreciate the situation they got into, because there was this feeling prior to the release of ChatGPT that Google DeepMind was the premier AI lab in the world. And then all of a sudden, OpenAI has this tool that everyone is using and everyone is talking about.

And so there was a lot of pressure on Google to rush out with these kinds of ChatGPT-like functionalities that they have now in Google Gemini. And a lot of the stuff that people complained about so much, that made a big splash around the Google Gemini releases, was actually probably not related to the model weights themselves, but to the way the models were packaged up, so that when you sent in a query that could potentially be controversial, it would be massaged in a way that could be kind of comical.

And I think the most prominent example, probably a lot of your listeners are already aware of this one, but just to recap really quickly, was that with the Google Gemini text-to-image tool, when you would ask for anything involving people being generated, you'd get a lot of diversity in terms of the people that would show up. And that would even be in situations when you were like, give me a Nazi, and you'd get ethnically diverse Nazis, or ethnically diverse Popes.

And so that's what they took a lot of heat for. But I don't know, I didn't even really consider it to be that much of a gaffe. It's something that you need to figure out; they're rushing to try to keep up with OpenAI. And if anything, it shows that they're trying to do their best to make sure that these tools aren't, you know, favoring historically favored groups like these tools tend to if you just go with model weights alone.

Andrey

Yeah. I think we don't need to go into the history of Google very much, but this does highlight the difference between Apple and Google. And I do think you're right that Google had a lot of pressure to release fast and to really showcase that they are still a leader in AI, as opposed to Apple, which wasn't even known to do much in AI. So it made sense for them to get into it a bit later. Yeah, in fact,

Jon

Apple is basically the inverse. Apple is rarely the big innovator on some specific new feature. What Apple does great is take lots of great features, individual pieces of functionality like generative AI, and wait a year or two, maybe a few years, until the tech has matured a bit and they can figure out how to package it really nicely with all of the other functionality they offer in their operating system, and give you a seamless, beautiful experience that is very unlikely to have some kind of fatal error as it runs. And you can imagine how much harder that is with something like gen AI.

It's harder than so many other kinds of features, because what makes gen AI so wonderful is the breadth of utility it has: the huge range of possible queries it can handle and the huge range of possible outputs it can produce. But then when you're someone like Apple, when you're a product manager at Apple, who is probably used to being able to control things really tightly and have this beautiful product experience all the time, how can you do that so well when you don't know what the output is going to be, from this infinite range of possible outputs?

Andrey

Right. I'm pretty sure they will not have AI telling you to eat rocks or anything of that nature. And to that point, another aspect of this is that Apple is planning to make Apple Intelligence available to software developers for the first time, as soon as this week, via the iOS beta. So it's really pretty clear that they are prioritizing testing and making this very seamless and bug-free.

Jon

Excellent. I can't wait for Siri to get a good update. I think it's wild how a decade ago Siri really was at the cutting edge of conversational AI, but everything about the way it works is basically following if-else statements, just hard-coding in a very large number of possible circumstances. That has nowhere near the kind of flexibility, that infinite flexibility, that we get with large language models.

And so, as a big Apple fan, I can't wait for Siri to be updated and for me to be able to get that conversational experience that I expect from gen AI today on my iPhone as well.
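
To illustrate the contrast Jon is drawing, here is a purely illustrative sketch of what hard-coded intent handling looks like next to an LLM call; this is not how Siri is actually implemented, just the general shape of the two approaches:

```python
# Purely illustrative: the rule-based pattern Jon describes vs. an LLM.
# This is NOT Siri's actual implementation, just the general shape.

def rule_based_assistant(utterance: str) -> str:
    text = utterance.lower()
    # Every supported behavior must be anticipated and hard-coded:
    if "weather" in text:
        return "Here's the weather forecast."
    elif "timer" in text:
        return "Timer set."
    elif "call" in text:
        return "Calling your contact."
    else:
        # Anything the designers didn't anticipate falls through here.
        return "Sorry, I didn't get that."

# An LLM-based assistant instead handles arbitrary phrasings with one
# general model, e.g. (hypothetical client, not a real API):
#   response = llm.generate(f"User request: {utterance}. Respond helpfully.")

print(rule_based_assistant("Set a timer for ten minutes"))  # "Timer set."
print(rule_based_assistant("Should I bring an umbrella?"))  # falls through
```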

Andrey

And it's still going to come pretty soon, by the end of this year for sure. You'll get to see updates to some of those Apple Intelligence features in just a couple of months, so you won't have to wait too long. And next story, onto another big company: Meta. They have launched a new tool called AI Studio, which will allow anyone in the US to create AI versions of themselves on Instagram or the web. This is coming after they tried this to some extent with celebrities.

They launched some famous people, I think like Kim Kardashian and MrBeast and others, for people to chat with. That feature did not take off; I think they're basically killing it off. But now they are replacing it with the ability for creators, and also for anyone, to create an AI version of themselves, similar to things like Character.ai, where you can make a chatbot version of any character, including real people.

So I think this is an interesting move by Meta, and it'll be interesting to see to what extent it does take off.

Jon

Yeah, so what exactly is the idea here? I am currently scrolling through the blog post this is on, and it's interesting that it's still at about.fb.com. And scrolling through it, I don't exactly get it. So who gets to use your AI character when you create it? Is it your friends on Instagram? Is that who then chats with not you, but the AI you?

Andrey

I think it's going to be a little, well, there are going to be controls on what it can do. So creators can customize the AI based on their Instagram content, set topics to avoid, set up auto-replies, and specify which accounts it can interact with. So it seems to be kind of a mix where you can really decide what the AI does. And presumably for popular Instagram accounts, the use case is going to be letting anyone interact with the account.

And, yeah, it'll be interesting to see if popular Instagram personalities will make use of this to get even more engagement and have their fans interact with them in this indirect way.

Jon

Yeah, I'm a bit skeptical of this being something that really takes off. It seems like a little bit of a stretch. Think of this in the context of the Llama release: huge amounts of capital, probably more than a hundred million dollars, got invested in creating Llama 3.1 405B, which, again, you should listen to the preceding episode of Last Week in AI to get tons of info on that release. And with huge amounts of money invested, you then have to show your investors, as a publicly traded company, a return on that huge investment. They don't just want to see that you're open-sourcing these model weights.

Okay, there could be arguments like: this helps us attract AI talent to Meta because we're now at the frontier alongside OpenAI and Google, so we're a third choice, or Anthropic as well, so we're a fourth choice, for you to come work for if you want to be at the frontier, and you can have that feel-good feeling about releasing model weights to the public as well. So it helps with attracting talent. It also helps to undercut competitors, the big players like OpenAI, with their association with Microsoft, or Google Gemini itself, which we've talked about already a lot in this episode.

By creating a frontier model with open weights available, you're undercutting those companies because you're commoditizing gen AI. You're making it closer and closer to free, which makes it harder for OpenAI slash Microsoft and Google to be making margin on their own proprietary LLMs.

Andrey

Yeah, another thing to note is you can not only do this with real people, you can also make funnier or more flexible characters; you can make a character out of anything you want. So there are things like "Eat Like You Live There," which is created by a chef and has personalized tips for local dining. There is a bot called "What Lens Bro" from a photographer that offers tips on finding lenses.

I like this next one: "Sammy the Stress Ball," from a meme creator, that will help you get through your stressful workday. So you can really make all sorts of bots. And this really reminds me of the GPT Store, which OpenAI launched a while ago, where you can make customized GPTs to chat with and specialize them in different topics. It seems to be kind of the same idea here. And given the reach of Instagram, you can talk to these through your messaging interface on mobile.

So, I think, you know, the GPT Store had some traction, and I can see this being more of a success, if not with bots based on real people, then with bots that are, let's say, more specialized.

Jon

Okay. So yeah, with that additional context, this is more like Character.ai. Indeed, this now reminds me of the Facebook, and now Meta, tradition of aping features from other startups. Any company that comes along and creates something that looks interesting, or could possibly compete with Facebook or Meta, they either acquire that company or build in the same kind of functionality.

So in the way that Reels is now the most pushed thing on Instagram, and has been for a couple of years, as a rip-off of TikTok, this seems like a clear rip-off of Character.ai. Which, you know, fair enough; that's what competition's all about.

Andrey

They've had a lot of success with that approach, and I'm sure we're going to start seeing a lot of anime characters and romance chatbots, which is a big thing on Character.ai. Onto the lightning round. First, Runway just dropped image-to-video in Gen-3. So Runway launched their Gen-3 video generation model pretty recently; at first, it was only text-to-video.

And now we have image-to-video, where you can upload an image to specify a starting frame, and then you can add motion or text prompts to guide the AI in generating a 10-second video. This is something that Luma, I recall, also had. And here Runway has some of those additional production capabilities: in addition to motion and text prompts, you can also use it with the lip-sync feature to be able to animate an image and add accurate speech.

So I think text-to-video is still pretty far from perfect with Gen-3, but I can see this being quite a bit better, because if you have a starting image that is photorealistic, the result can be significantly better. And you can do some pretty magical things: you can, you know, add an image of you in your living room and then say, now have a waterfall or rain added to this video. And it seems to be working pretty well based on what I've seen.

So I think this is pretty exciting.

Jon

Yeah. If you've ever used a text-to-image or text-to-video tool, which I suspect a lot of your listeners on Last Week in AI have, you've, I'm sure, also had the experience of really having a hard time getting what you've been imagining in your head, where these models tend to spit out something where you're like, okay, I understand why you did that. And with some of these tools, like in the ChatGPT experience, you can even reprompt and say, you know, I want to make this adjustment, but you very rarely get the adjustment you're looking for. And I know there have been a lot of complaints in the past about things like, oh, if you want puppies and kittens in the same image or video, it can be really hard to do that for some reason. And so these kinds of tools that go image-to-video allow you to circumvent some of that imprecision of a text prompt.

We actually had an episode on my podcast, Super Data Science, about a year ago, with a Berkeley student who created a company called Genmo, and that's what they were created to do. So a year ago, one of Genmo's key pieces of functionality was taking an image and converting it into about a three-second video.

And so if people want to hear from the founder and CEO about how you can build technology like that, it's episode number 711 of my show.

Andrey

And speaking of image-to-video and updates to existing tools, the next story is that Midjourney has dropped their v6.1 update. And this update has a bunch of stuff in it. In particular, it improves human appearance, the natural look of human skin, and also the legibility of rendered text. In addition, apparently the model is 25 percent faster, and you can also personalize it with more nuance and accuracy.

So it seems pretty significant as far as updates go. And as usual with Midjourney, you can test it out in their Discord: you can add "--v 6.1" at the end of your prompt. So there you go. As someone who has played around with Midjourney, I still believe they are a leader in text-to-image, and I might play around with this.

Jon

They are a leader for sure, but since OpenAI released text-to-image functionality, and then Google did as well inside of Gemini, I personally have switched completely to using either the ChatGPT interface or the Google Gemini interface for creating images, because the Discord experience as a user experience is so weird to me.

I absolutely believe that Midjourney could still be at the cutting edge in terms of text-to-image quality, but the UI is so bad that I'm just like, ah, it's just so easy in Gemini. Actually, I think that's one of the best aspects of Gemini.

Andrey

I agree. And in fact, they did launch, I believe, a better version of a web interface; we covered it earlier this year. But it took them quite a while to move away from Discord, possibly because they had taken off on Discord, surprisingly, where they have at least a million users, I think. So it might be working for them. And I did use it through there.

And I guess once you do start using it, it's surprisingly not too bad, but most people, I think, would prefer Gemini or ChatGPT. And next we got a kind of surprising update from a new company: an AI-powered necklace will be your friend for $99. So this is another attempt at making a hardware product with built-in AI. We've of course seen some of these come out this year to, let's say, negative responses: the Humane AI Pin and the Rabbit R1 didn't seem to really justify their price and their use case. This one is a little different. It's more of a necklace, and they're billing it really as a friend. So it's not meant to replace your phone; it's meant to be more of a, I guess, buddy to chat with. And they do say that its personality will mold itself to you and make it fun to chat with. So we'll see how it goes.

I think it's a different approach, certainly, that might be more successful, but I'm not sure how many people do want AI friends at this point. So yeah, we'll find out.

Jon

It's hard to imagine a lonelier-looking person than someone who already seemed lonely now talking to their necklace. I don't know. I think this might be joining Humane and Rabbit in the moved-too-quickly bin of AI hardware.

Andrey

Right, and that seemed to be the response on their YouTube video. One commenter said, "What Black Mirror episode is this?" So, yeah, you know, I think not everyone is excited about this idea, but I could also see it being kind of fun if they do it right. And the last story for this section: Microsoft is adding AI-powered summaries to Bing search results.

So, very much like Google's take on this, this will take your search results, look at some of the resulting articles and links, and then summarize them in, let's say, one paragraph. This feature is not an opt-in choice; it is still only being applied to a small percentage of user queries and is being slowly rolled out. So Microsoft is trying to avoid some of the silliness that Google got into with their attempt at this.

And, yeah, everyone seems to be trying to add this to search, so we'll see if it does end up being useful.

Jon

It doesn't say anything in the blog post, does it? It certainly doesn't in the Engadget article that we're highlighting here. It doesn't mention whether this is using OpenAI tech or Microsoft's own LLM.

Andrey

Yeah, I would imagine it's probably Microsoft's own, but it could definitely be OpenAI, in the sense that they did release GPT-4o mini, and you would have to use the cheapest, fastest possible model to do this. So I could even see them using something like Phi, a smaller model which they developed internally, as opposed to bigger models.
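
As a rough illustration of how a search engine might use a small model like that, here is a sketch: stuff the top result snippets into a prompt and ask for a one-paragraph cited summary. The model choice and the prompt are our own assumptions for illustration, not Microsoft's actual setup:

```python
# Sketch of search-result summarization with a small model. This is an
# assumption-laden illustration, not how Bing actually does it.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # a small model, per the Phi guess
    device_map="auto",
)

# Pretend these are the top snippets retrieved for a query:
snippets = [
    "[1] NVIDIA's Blackwell B200 AI chip is reportedly delayed by months.",
    "[2] The delay is attributed to a late-stage design flaw.",
]
prompt = (
    "Summarize these search results in one short paragraph, "
    "citing sources by their bracketed numbers:\n" + "\n".join(snippets)
)
out = generator([{"role": "user", "content": prompt}], max_new_tokens=120)
print(out[0]["generated_text"][-1]["content"])  # the model's summary
```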

Next section: Applications and Business. And we move on to the next big company that we've been mentioning quite a bit this episode. We begin with the story: Character.ai CEO Noam Shazeer returns to Google. So Noam Shazeer left Google in October 2021 to form Character.ai, the company that allows you to create chatbots for anyone to chat with. Interestingly, Shazeer was leading the team that built LaMDA, the language model for dialogue applications, which basically was sort of a Character.ai before it existed. They were internally testing being able to create chatbots for various use cases. Of course, with Google, I guess it went slowly; they didn't want to release it. And so Noam left to start this company. Now he's coming back. Google is also signing a non-exclusive agreement with Character.ai to use its tech, and seems to be providing some funding to allow Character.ai to continue scaling.

And as far as what the role of Shazeer at Google is, he's joining the DeepMind research team, which is where he was before. So a little bit of an interesting development here. Now, this is not the same as Microsoft's deal with Inflection AI; it doesn't seem like the majority of Character.ai is becoming part of Google. It's mainly just the top leadership team, and most of the company is staying and going to continue developing Character.ai.

Jon

Yeah, this is a surprising story to me. This is not something I would have expected, because, and we didn't mention this when we were talking about Instagram aping Character.ai earlier, the reason Instagram is aping them is that Character.ai is actually one of the most popular chatbots in at least the United States, and maybe one of the most popular chatbots in the world.

Now, it is far behind ChatGPT, for example, but it's easily in the top five; if I recall correctly, it might even be something like the top three chatbot tools. And so we have two co-founders here, Noam Shazeer and Daniel De Freitas, both going to Google now. It's a surprising move, because typically, when you get someone like Andreessen Horowitz, like they did, to back your company, and you become one of the most popular chatbots in the world as a private company, you don't then leave to go back to Google. It's surprising. They must be getting paid a lot of money. I mean, I don't know them, and I don't know anything about what's going on behind the scenes of this deal, but you'd think there would have to be a lot of money from Google for them to do that.

And it seemed to me like this was one of those pseudo-acquisitions that big tech companies are starting to do more and more, seemingly to try to avoid scrutiny from antitrust agencies in the U.S. and Europe and elsewhere. We saw a prominent example of that with Inflection AI: most of their team got acquired, including Mustafa Suleyman, who was a co-founder of that company, and who is also a co-founder of DeepMind, which became Google DeepMind. And so there was that kind of pseudo-acquisition, which was seemingly an acqui-hire, where a whole bunch of very well-known people from Inflection went over to Microsoft. There was even the 49 percent equity stake that Microsoft took in OpenAI through their huge investment; I can't do the figure count in my head very quickly, but it's the tens of billions of dollars that Microsoft invested in OpenAI. So you get these kinds of pseudo-acquisitions that seem to be an effort to avoid antitrust scrutiny, which big tech companies are increasingly under. I mean, you even see it on this show, with the stories we talk about when I'm a co-host, and that I hear about when I'm listening to your podcast all the time; you guys are covering all of the big news in AI that's making a splash. And I don't know, what do you think, Andrey? 80 percent of the stories involve one of the big tech companies in one way or another. So you can see why antitrust authorities are interested in scrutinizing these kinds of acquisitions.

And so I guess the legal teams at these big tech companies are like, well, let's try something else, so we're not really acquiring, but we get the pieces that we want. But that doesn't seem to be exactly what's going on here, because, as you say, lots of the Character.ai team is staying intact, but their co-founding team is leaving, and the person who's becoming the interim CEO is their general counsel. I mean, that's weird. Like, someone like a COO becoming the interim head makes a lot of sense. Their general counsel?

Andrey

It's an interesting move, maybe to emphasize that it is interim, perhaps. And notably, the co-founder Daniel De Freitas is also joining Google, and apparently also some other employees. So definitely a bit of a move. To me, this reads more like a bit of an indirect funding move. Character.ai has raised $193 million so far, which is a lot, but considering they did seem to train their own model, maybe not that much.

It takes a lot of money to train models, and with Character.ai being massively popular, I would not be surprised if they were losing money hand over fist to run all of these models, even with their subscriptions at, you know, $20 a month. If your users are super involved and using your product a lot, which seems to be the case, especially for those who are subscribing, I would not be surprised if they're not making money at all. And that was the case with Inflection: seemingly, they couldn't come up with a business model.

So this to me seems like maybe a strategic move to keep Character.ai going: give them a big chunk of funding to continue expanding even if they are bleeding money. And I also think it kind of makes sense for Noam Shazeer and Daniel De Freitas to want to go back to DeepMind. I mean, they did start out as researchers, and it is very possible that they are just not as keen on running a startup and a business as they are on being able to do cutting-edge research and having the resources of DeepMind to continue pushing the frontier. So, you know, reminiscent of the Inflection story with Microsoft for sure, but I think also quite distinct.

And the next story is going back to AI-driven search: we've got the story that Perplexity is cutting checks to publishers. So we covered pretty recently that Perplexity was in trouble for being a bit of a plagiarist in some cases, where they took the text of almost entire news stories and regurgitated it. They do have an attribution link, but that means that if you don't click on the link, the authors and publishers of those stories don't make that ad revenue.

And so now Perplexity is launching the Publishers Program, which will share ad revenue with partners, including Time, Der Spiegel, Fortune, Entrepreneur, and a few others. And I would also not be surprised if this is kind of necessary, because all these publishers are now closing off access unless you pay; they have been making all these deals with OpenAI and Google and others, and it seems that now, to be able to actually access these publishers, you've got to pay.

So I think this is a pretty natural evolution, and probably something that Perplexity had to do to continue doing what they're doing.

Jon

Yeah, I don't have too much else to add here. This just seems to fall into exactly what you're saying, what we've seen more and more of over recent months, where the big players, like you mentioned, OpenAI, Anthropic, Google, are paying publishers now to continue to get high-quality access to the data that they were maybe somewhat surreptitiously, and maybe not 100 percent above board, using to train their AI models.

It's similar to the story, I think it was in the most recent episode of Last Week in AI, where you guys were talking about using YouTube videos and transcripts for training video gen AI models. So it's that same kind of thing, where this information is out on the web, and the terms of use probably are not that it should be used for training; well, I mean, it's a legal gray area that is starting to be sorted out. But I think these companies are trying to get ahead of things: in order to have high-quality training data in the future, they are cutting these kinds of deals. It makes a lot of sense to me, and I don't know if there's that much else for me to add to the story.

Andrey

Right. Yeah. And for Perplexity, it's not even so much about training data; it's about being able to crawl these sites and summarize those results. And this is of course coming very soon after OpenAI announced SearchGPT, which is a direct competitor to Perplexity, and OpenAI had a head start on making all these deals with publishers of news in particular. Google also has made some deals with Reddit and others. And now, to be able to access those publishers, you have to pay.

So, you know, maybe good news for news publishers, which has been a struggling industry. Not such good news for Perplexity, which now is a little bit in peril with competition from Google and OpenAI.

Jon

Exactly. That is the other big trend to highlight here. Those are the two big trends I think are associated with this story: gen AI companies paying publishers more and more often, though, like you say, in this case it is more about being able to surface results as opposed to being able to train their models.

And yeah, the other big ongoing trend to highlight is exactly as you said: it's more and more companies getting in on this Google-killer technology, including Google themselves.

Andrey

Onto the lightning round. The first story is kind of a big deal: NVIDIA is reportedly delaying its next AI chip due to a design flaw. So this is the Blackwell B200 AI chip, and it will be delayed by at least three months due to a late-stage design flaw. This is important because Microsoft, Google, and Meta are using NVIDIA chips, the cutting edge of NVIDIA chips, to train their models. They have committed tens of billions of dollars, apparently, to buy these chips.

And this is a delay of at least three months, so they won't be available until the first quarter of next year, at least for large-scale shipping. So yeah, it's very much telling that NVIDIA was perhaps rushing the development of this chip, which is enormously complex and enormously powerful, and this is not good news for all these big tech companies that are hungry for compute.

Jon

It's wild, the pace that NVIDIA moves at; it's been mind-blowing to watch over the past decade what they've been doing with AI chips. To have a three-month delay once now, after all these years at such a breakneck pace of development, I mean, you cannot overstate how difficult it is to be doing R&D at truly the cutting edge. We talk about cutting-edge AI models all the time; with hardware, it's so much more complicated, because you can't just patch a hotfix into your code to fix something. I mean, you're talking about plans that then go to the foundry, and to have only three months of delay on something like this, I think that is impressive in and of itself. I'm sure people are working crazy hours to turn this around.

And, you know, this is bad news for the big tech companies that depend on these chips, everyone developing and deploying the new cutting-edge models, you know, the GPT-5 kind of generation of models that's coming out. But this is a rare piece of good news for other chip manufacturers out there like Intel, who cut 15 percent of their workforce this week.

And so the 85 percent who are remaining at Intel this week are probably cheering this news on.

Andrey

I think so. AMD also, you know, they've been trying to catch up, and now they have a little more breathing room. Next up, a story related to a big trend this year: we've got a new humanoid robot, 4NE-1. This is from the German robotics manufacturer Neura, and they have released a video of this 4NE-1 doing stuff like ironing and moving things around. Apparently, they were given early access to NVIDIA's humanoid development and deployment tools. And it's a bit similar to other companies we've seen, 1X, Figure AI, and others, developing these humanoid robots with the promise that, with powerful AI, you can actually go ahead and deploy them in real-world scenarios, which is very challenging and has been a dream of sci-fi forever. And now many people, certainly VCs, seem to be optimistic that it might happen. As someone who worked in robotics during my PhD, I will say, you know, it's cool to see demos of these humanoid robots, but I would not be surprised if it takes another four, five, six years to actually see them in real-world applications.

Jon

Well, who knows? Humanoid robots were something else that we talked about in your episode of my podcast, the Super Data Science Podcast, episode number 799. We talked about so much.

Andrey

I don't remember.

Jon

And yeah, one of the things that you mentioned in that episode, I think, was that one of the big issues with humanoid robots is that they're so strong that they end up being dangerous in a lot of household applications. You also, I now can't remember off the top of my head, but in that episode you interestingly summarized three key players that you thought there were in humanoid robotics. Do you remember those off the top of your head?

Andrey

If I had to guess, I would probably mention Tesla, 1X, and Figure, which seemingly have the leading hardware and the leading AI. Yeah.

Jon

And this is a really cool space for me; I think we covered it in 799 as well. We talked about this in the NVIDIA chip story we just discussed too: hardware is so much harder. A quote that I learned from Y Combinator alumnus Jeremie Harris, the usual co-host on this podcast, is the "hardware is hard" mantra that they have at Y Combinator, and that comes into my mind constantly now.

But that also makes hardware more interesting in a way, because it means that competing at the frontier in terms of just software LLMs, yes, it is hard to do, but it's not nearly as hard as it is to be at the cutting edge in terms of hardware.

And it also means that if we talk about things like AGI happening, or artificial superintelligence happening, in software, there is still going to be somewhat of a delay, I assume, unless that ASI is also very quickly able to ramp up huge amounts of energy sources and get those going really quickly, and then is able to do manufacturing really quickly. Like, maybe that can happen; we don't know what it's going to be like in a post-singularity world. But it seems to me like there's still going to be a non-negligible lag from when we have an artificial superintelligence in software to all of a sudden having that abundant intelligence be able to transform the way the world of molecules works, as opposed to the world of bits. Because yeah, it's just so much harder to get this stuff ramped up.

Andrey

It's so much harder, and not just because of the hardware; also because of the AI component. You know, LLMs are so powerful partially because you can scrape the entire internet to train them. With robots, you cannot do that; you just don't have the capability to scrape that data, and generating the data is very time-consuming. Now, you know, people are trying to use simulation to do that, but by and large, DeepMind, NVIDIA, and others are still also using a lot of teleoperation to get data on using robots. And that takes time; that is not easy. So yes, it will take quite a while to be able to develop AGI, if that includes robotic control and, you know, real-world humanoid capabilities in these humanoid robots.

Jon

Something also just popped into my head that could potentially be a blocker that makes things tricky here, in terms of infusing AI into hardware, which is latency. Because, you know, a lot of the time when we're using AI tools, it's rare that it's on-device. So we have things like, you know, I think Apple is planning on rolling out an onboard LLM, but that's going to be pretty small; it's going to be a few billion parameters, which is still crazy to fit onto a phone.

But even with that kind of smaller thing on the edge, 3 billion parameters, there's still typically latency in terms of, you know, your whole response coming out from your prompt. And so you wait there as the result streams out onto your screen. But with something like a robot, that kind of latency really, really matters.

If you want something to be able to react in real time to situations, you can't wait. I don't know if there's an equivalent of a response streaming out with robots, but you have all of these model weights that are going to need to be flowed through in order to get your response.

And yeah, for real-world responsiveness to be anything like what we expect from humans and animals, getting to those kinds of hundred-millisecond time scales on latency is probably going to be tricky.
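
To put some rough numbers on that worry, here is a back-of-envelope sketch; every figure in it is an illustrative assumption, not a benchmark of any particular model or robot:

```python
# Back-of-envelope latency budget for an on-device model inside a robot
# control loop. All numbers are illustrative assumptions.

control_rate_hz = 30                 # assumed closed-loop control rate
budget_ms = 1000 / control_rate_hz   # ~33 ms available per control cycle

tokens_per_action = 8                # assume an action encodes as ~8 tokens
decode_tokens_per_sec = 20           # assumed edge-device speed for a ~3B model
action_ms = tokens_per_action / decode_tokens_per_sec * 1000  # 400 ms

print(f"budget per control cycle: {budget_ms:.0f} ms")
print(f"model latency per action: {action_ms:.0f} ms")
# Under these assumptions the model is roughly 12x too slow for the loop,
# which is one reason robotics stacks often pair a slow high-level planner
# with a fast low-level controller rather than putting an LLM in the loop.
```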

Andrey

Absolutely. And yeah, there's latency, and there's also the need for closed-loop control. So it needs to be constantly running, constantly taking in video data and audio data, not just text, to be able to operate, and you do want to do that on-device. So lots and lots of challenges with robotics, for sure.

The next story is titled "Yes, there are more driverless Waymos in SF." So we got a bit more data on Waymo in San Francisco, up to May of 2024. Waymo did recently open it up to anyone in SF to use. So just a quick summary of what we know: apparently, Waymo logged 143,100 paid driverless trips, which comes out to 903,000 miles traveled by Waymo vehicles and 204,000 passengers. It's taken quite a while to ramp up: going back to August of 2023, it was only 19.7 thousand passengers, only 101 thousand miles, only 12.6 thousand trips.

And, you know, with Waymo expanding to Los Angeles, it seems they are really emphasizing scaling up and opening up this year.

Jon

This is something that you and I talked about, at least off camera, when we filmed together in San Francisco. I had my first ever Waymo ride the day before we filmed. Your episode came out about a month ago, so it was late June, early July that we filmed, and it was actually just then that Waymo had made it possible to skip the wait list, so you could just download the app and use the cars in San Francisco. I was so excited, and I had so much fun in my Waymo rides.

They were more expensive to hail and book than an Uber or a Lyft, but I did it anyway because of the novelty of driving around and seeing the steering wheel move, seeing the pedals move. And I felt extremely safe. I rented a car after I left you in San Francisco and did a road trip to wine country north of San Francisco, and then drove all the way down to LA, which is quite a slog; it's like six hours from San Francisco to LA.

And the whole time I was driving, I was trying to channel a Waymo in my head. I was like, drive like a Waymo, be like a Waymo, because you really do have the sense when you're in the vehicle that you're super safe. It seems obvious, when I have that experience, that this is the future of transport. It'll probably be a little while longer before regulatory authorities are allowing us to have these driverless vehicles traveling at very high speeds on highways.

Because I think right now, where Waymo and other self-driving vehicles are deployed, so in the San Francisco downtown area, in Arizona, in Singapore, in these places, they are in relatively low-speed conditions; you're kind of getting to 30, 40 miles an hour at most, and fatal crashes are relatively difficult, or, you know, more difficult than at 60, 70 miles an hour.

So, kind of off on a tangent here, but it does seem clear to me that this is the future of transport. And the other really interesting thing related to this article specifically: there are charts, which people can click through to by going to the blog post that's linked in the show notes, and you can see these charts yourself if you're interested.

It's that classic hockey stick that you want in a startup in terms of paid users. And it was a very visible experience in central San Francisco: you could not go 60 seconds walking on the street without seeing a Waymo. They are everywhere, they are beautiful, and they are a great ride. This message was not a promotional message; Athletic Brewing and Waymo, get back to us. We would love to be reading promotional messages about your products.

Andrey

Yeah. I mean, we do, I guess, praise Waymo quite a bit. I've taken about 20 rides now, and every time it's been perfect. And, to give credit where credit is due, it's worth mentioning also that Tesla is a competitor, and they have said they are planning to launch a robotaxi product with the updates to FSD, their full self-driving suite, in version 12.5. I've seen reports that it's much more solid, much better at driving in a human-like way; FSD 12.4 was not very solid, you had to really supervise it carefully. And it appears that they are making some big strides.

So I would also not be surprised if Teslas become robotaxis, probably sometime next year.

Jon

One last, final note here: if you were one of those listeners thinking, wow, we finally had a story that didn't involve a big tech company, then you may not know that Waymo is fully owned by Alphabet.

Andrey

Alphabet, aka Google, yes. And the last story for this section: Canva has acquired Leonardo.ai. So Leonardo.ai is a text-to-image product. They launched in 2022, had an initial focus on video game asset generation, and later expanded to more things, like creating and training AI models for image creation across various industries. So their content differentiator is not just doing text-to-image, but doing it for applications in various industries.

We don't know the precise details of the acquisition in terms of how much it cost, but Leonardo has 120 employees, so it would seem likely that this cost a pretty penny. And Canva is pretty big, if you don't know: they are a design software company founded in 2012, they have raised over $560 million, and they have 180 million monthly users. So I think this acquisition is a little bit of a win for Canva.

Jon

Yeah, I don't have too much to add onto that, but this is actually one of the rare stories where, you know, Canva isn't one of those huge big tech companies. And there was an effort by Adobe, which you could consider one of the huge big tech players, to acquire Canva, but that is actually something that antitrust regulators stepped in and stopped. And for me, that's great, because we do love using Canva, both at my software company as well as at my podcast.

We think it's a great tool. So keep on rocking, Canva, keep on making those acquisitions. Congrats on your eighth overall and second this year, and keep building great design products.

Andrey

And onto projects and open source, and the first story is kind of a big deal. It is that the Stable Diffusion creators have launched Black Forest Labs, secured $31 million for their launch, and also released the FLUX.1 AI image generator. This generator comes in three variants: Pro for professional applications, Dev for non-commercial use, and Schnell for local development and personal projects. And from the images we have seen, it seems pretty impressive: it generates really high-quality images, which is perhaps not surprising from the people behind Stable Diffusion.

And the models are released under the Apache 2.0 license, which allows pretty much unlimited use for whatever you want; you can use it for commercial applications, also for scientific applications. So it's kind of a big deal for text-to-image. We haven't seen too many models lately that were this openly sourced. So certainly a development that is kind of a big deal.

Jon

Yeah, nice. I guess you can tie this to the mega-trend, like the Llama 3.1 405B release, where that 405B release was solely text-to-text. And so it's cool to have some text-to-image models also being open-sourced that are approaching that kind of quality. And for those of you who are wondering what Schnell means: you have the three levels, Pro for professional applications, Dev for non-commercial use, and Schnell for local development, and Schnell means fast in German.

Andrey

That's fun. Oh, and a quick correction: I just checked, and it's only that fast version, Schnell, which is released under the Apache 2.0 license. The Pro version is not open source; they made it available via API. And they did release FLUX.1 Dev for non-commercial use. Schnell is open to do whatever you want with.
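
For those who want to try the openly licensed Schnell variant, here is a minimal sketch of running it locally with Hugging Face's diffusers library, assuming a recent diffusers version with Flux support and a GPU with enough memory:

```python
# Minimal sketch: generating an image with FLUX.1 [schnell] via diffusers.
# Assumes a recent diffusers release with Flux support and ample GPU memory.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM usage

image = pipe(
    "a photorealistic cabin in the Black Forest at dawn",
    num_inference_steps=4,   # Schnell is distilled for very few steps
    guidance_scale=0.0,      # the distilled model doesn't use CFG
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_schnell.png")
```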

Jon

Yeah, so not quite the Llama 3.1 405B release. This would be kind of like Meta saying, you know, we're keeping the 405B to ourselves, but open-sourcing the 8B variant.

Andrey

Yeah, not dissimilar from what Stability AI has started doing. And onto the next story. Yet again a big player: it is Google, and they have released more variants of Gemma. So they have released, first of all, Gemma 2 2B, a 2-billion-parameter model, smaller than the other Gemma 2 models released pretty recently. And there are a couple of variants alongside it: so Gemma 2 2B, but also ShieldGemma and Gemma Scope. ShieldGemma is a set of safety classifiers designed to detect toxic content and hate speech.

And Gemma Scope is a tool that allows developers to examine specific points within the Gemma 2 model, similar to what we've seen with recent developments from Anthropic and OpenAI in being able to detect what is going on inside the guts of these language models. So yeah, Google is continuing to move pretty fast on these Gemma models.
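
For a sense of scale, a 2B model like this is small enough to run on a laptop or a modest GPU; here is a minimal sketch using Hugging Face transformers, assuming a recent version with Gemma 2 support and that you have accepted the Gemma license on the Hub:

```python
# Minimal sketch: running Gemma 2 2B locally with transformers. Assumes
# a recent transformers version with Gemma 2 support and that you've
# accepted Google's Gemma license on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # the instruction-tuned 2B variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what is Gemma Scope?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```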

Jon

Yeah. This is similar to me again, another related thing to that Llama 3. 1 release last week. This reminds me of the Llama Guard 3 for content moderation, that meta release there. So the shield Gemma. Seems to operate in roughly the same kind of space where you're trying to detect, um, potentially harmful prompts or things that could lead your LLM to doing something that you might not want, uh, to be doing, uh, as a commercial entity.

So yeah, it's great that these big tech companies are investing time, human resources, and compute in creating and open-sourcing models that allow anyone to build safer LLMs. I like it. And I also like that these Gemma models fit in roughly the same area as the Microsoft Phi LLMs that you were talking about earlier on in the show, Andre. Keep on going.

Keep on giving us open source stuff. We love it. These models are useful to me professionally, and I love that this is happening at all from the big tech companies. Keep going.

Andrey

Right. And as we've seen a lot of times, this comes with a proprietary license, the Gemma license; Llama also comes with its own Llama license. So you are seemingly pretty open to do a fair amount with this, but not exactly as free as you would be with an Apache license. For example, you may reproduce or distribute copies of Gemma only if you include the use restrictions that apply, you must provide a copy of the agreement, and similar conditions.

So there you go, it's a little bit more limited than fully open source. Lightning round time! We've got a story about Stability AI now: they have released their super fast model for 3D asset generation. It's called Stable Fast 3D, and it is indeed fast: it can create 3D models from a single 2D image in about half a second. This follows up on their previous 3D generation work, and as with other releases, you can get the code and model weights.

And this is under their community license, which allows non-commercial use, as well as commercial use for individuals and organizations with under $1 million in annual revenue.
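As a rough illustration of how an image-to-3D service like this tends to be called, here is a sketch against Stability's hosted REST API; the endpoint URL and field names are assumptions based on their public docs, so verify before use:

    import requests

    # Endpoint and field names are assumed, not verified; see Stability's API docs.
    response = requests.post(
        "https://api.stability.ai/v2beta/3d/stable-fast-3d",
        headers={"authorization": "Bearer YOUR_API_KEY"},
        files={"image": open("chair_photo.png", "rb")},  # single 2D input image
    )
    response.raise_for_status()
    with open("chair.glb", "wb") as f:
        f.write(response.content)  # binary glTF (.glb) 3D asset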

Jon

Yeah, the 3D asset space is cool. When you're listening to an audio podcast, you might not be able to easily imagine in your head what it means to create a 3D asset. For example, if you are making a film... actually, this is something we talked about in another episode of my show, episode number 711 of the Super Data Science Podcast, with Ajay Jain from Genmo.

I was talking about how his company Genmo was an early mover in the image-to-video space. One of their big commercialization strands is also this idea of text-to-3D asset generation.

In their case, they're doing it for movie or television production companies, because then you don't need somebody hand-crafting every asset in some kind of Adobe tool, specifying exactly how a 3D asset will look, whether it's for a video game or for a film.

These 3D assets need to be there in the scene. Andre, you can probably speak to this way better than I can, but if you imagine playing something like Doom, some kind of simple 3D rendering, there are these objects, and in Doom it was really crude.

They were 2D sprites, and no matter what angle you looked at them from, they were still just this flat thing. But thanks to GPUs, like NVIDIA's GPUs, over the past decades these 3D objects in video games have gotten better and better and better, and they're also used for rendering film and TV. So you can get this sense of an object really feeling permanent, really feeling like it is something from the real world.

And so it's cool to be able to generate these really quickly using text-to-3D.

And Andre, I'd also be interested to hear what you think about this in maybe a B2C, direct-to-consumer kind of application within video games. That's a pretty cool idea as well, because you could imagine playing some kind of video game with your friends and being able to, in real time, write some kind of prompt or say something to the video game that generates some

3D monster, maybe even a "4D" one, where it can actually move as well, where it's not some static 3D object but a monster that can also move around. It seems like a pretty cool space, right?

Andrey

For sure. And within video games, if you are in the know, you know that modding is a big deal: taking an existing video game and modifying it to do all sorts of stuff, often adding pretty significant chunks of content. For modders who are doing it as a nonprofit effort, this could be a real game changer. And there are lots of UGC, user-generated content, platforms where this sort of thing could also be a big deal. So yeah, 3D asset generation is hard.

It is time consuming, and these kinds of tools will make it much easier to get into game development, not just for modders but also for smaller game developers. There's lots and lots of indie game development; it's been a rising trend for a long time, where you have a small team, just a couple of people, maybe 10, so this is also a big deal for that. And our last story for the section: it is OpenDevin, an open platform for AI software developers as generalist agents.

This is a platform that allows for the creation of new agents, for running them in safe, sandboxed environments, and for the inclusion of evaluation benchmarks. It has been tested on some challenging tasks like software engineering and web browsing using incorporated benchmarks, and it's meant to be a community project, with over 160 contributors. It is released under the MIT license and is pretty significant.

A lot of work is going into this sort of AI agent, going beyond one-pass models to things that can browse the web for you, and these kinds of open source efforts will likely result in faster progress than commercial efforts alone.
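To make the agent-plus-sandbox pattern concrete, here is a hypothetical sketch of the loop such platforms run; the class and function names are illustrative, not OpenDevin's actual API:

    class Sandbox:
        """Isolated environment (e.g., a container) where agent actions run safely."""
        def execute(self, action: str) -> str:
            raise NotImplementedError  # run a shell/browser action, return the observation

    def run_agent(llm, task: str, sandbox: Sandbox, max_steps: int = 20) -> list:
        # llm is any callable that maps a prompt string to the next action string.
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            action = llm("\n".join(history) + "\nNext action:")
            if action.strip() == "finish":
                break
            observation = sandbox.execute(action)
            history += [f"Action: {action}", f"Observation: {observation}"]
        return history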

Jon

Yeah, I don't have too much to add here. There's a lot of potential; it's the wild west right now for agentic AI. As we get faster and faster frontier models like GPT-4o mini, that allows us to cheaply have more and more of these agents doing work for us. And the key distinction with agents is that instead of having to go and actively make a request yourself,

these agents can be acting based on standing instructions that you provide. As a simple example, you could set up an agent to provide you with an inspirational quote every day. That relies heavily on a generative LLM, but it isn't just a generative LLM, because it is acting on its own. You don't have to go to your gen AI tool every morning and say, give me an inspirational quote.

Instead, the inspirational quote is sent to you. Because of that, these tools also need to be able to interact with lots of different kinds of web-connected things, which OpenDevin is trying to enable here with their open platform. It needs to be able to access web browsers, and it needs to be able to push you information via whatever channel you want to be getting it from the agent.
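As a toy version of that standing-instruction idea, here's a sketch using the schedule library; llm_generate and push_notification are hypothetical stand-ins for your LLM client and delivery channel:

    import time
    import schedule

    def send_daily_quote():
        # Hypothetical helpers: any LLM API call, and any push channel (email, Slack, ...)
        quote = llm_generate("Write one short inspirational quote.")
        push_notification(quote)

    schedule.every().day.at("08:00").do(send_daily_quote)  # the standing instruction
    while True:
        schedule.run_pending()
        time.sleep(60)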

So yeah, a fast-moving space. I think this year, 2024, has been the year of agentic AI, and it'll become more and more common in 2025 and beyond.

Andrey

And next section: Research and Advancements. We begin with a development from Meta AI. They have introduced Meta Segment Anything Model 2, SAM 2, and this builds on top of the first SAM, Segment Anything Model, which was kind of a big deal. Segmentation, if you don't know, is basically being able to extract a given object from its background, to outline it.

And SAM was a very impressive model that was able to do really, really high-quality segmentation of anything, as opposed to just a fixed set of object classes. Now, with this model, they are pushing that forward into being able to segment in real time in videos, and once again you are able to do this in a zero-shot manner, no training necessary; it can just do it. This also comes with the release of the SA-V dataset, a collection of 51,000 real-world videos and 600,000 spatio-temporal masks.

And the release here comes under the Apache 2.0 license for the model and the CC BY 4.0 license for the dataset. Very openly licensed, so a pretty exciting development for developers working with video for actual applications. It could be very useful for labeling visual data in, let's say, medical fields or industrial applications. A pretty cool development on top of an already impressive model.
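For the curious, a sketch of promptable video segmentation using the sam2 package from Meta's repository; the config, checkpoint, and exact argument names are assumptions, so check facebookresearch/sam2 for current usage:

    import torch
    from sam2.build_sam import build_sam2_video_predictor  # from facebookresearch/sam2

    # Config and checkpoint names are assumed; download links are in the repo README.
    predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

    with torch.inference_mode():
        state = predictor.init_state(video_path="./video_frames")  # directory of frames
        # One positive click (x=210, y=350) on frame 0 marks the object to track.
        predictor.add_new_points(state, frame_idx=0, obj_id=1,
                                 points=[[210, 350]], labels=[1])
        for frame_idx, obj_ids, masks in predictor.propagate_in_video(state):
            pass  # per-frame segmentation masks for the tracked object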

Jon

Yeah, exactly. This makes machine vision more accessible and more applicable.

This Segment Anything Model, SAM 2, reminds me a bit of OpenAI's CLIP, which came out a couple of years ago; 2021 was when OpenAI released CLIP, and that also was open source, I believe. CLIP was Contrastive Language-Image Pre-training, and the big innovation there was, it was the first time that I had seen

that you were able to classify any category about an image. In today's world, where we have these gen AI tools, text-to-text models that allow you infinite flexibility in what you ask,

it doesn't seem as surprising. But in 2021, in a pre-ChatGPT world, when gen AI tools weren't as robust, this CLIP idea from OpenAI was pretty mind-blowing, because up until that point, in any of the teaching I had done where you used deep learning models to predict something about an input, whether text or image or video, your outputs would be constrained to whatever labels you had in the training data.

For example, there's a very famous machine vision training dataset called ImageNet, which was created by an absolute legend in the AI space, Fei-Fei Li. She actually wrote an autobiography recently called The Worlds I See, and that has done pretty well as a mainstream book, which is interesting: an AI person writing an autobiography and it taking off like that.

Anyway, the point is, a dataset like ImageNet, which she championed more than a decade ago, had lots and lots of possible classes that your inputs could fall into. For images, you had things like cats and dogs and birds and planes, tens of thousands of different categories that you have labels for. And your deep learning model, which you're training

to take pixels and figure out what about those pixels corresponds to a particular bucket, was limited to what those buckets are. So you couldn't train with ImageNet and ask, am I looking at a podcast host in this image? I'm guessing podcast host isn't one of the categories in ImageNet; obviously I'm just riffing on what's in front of me right now.
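That "any label you can type" idea is easy to see in code; a minimal sketch using the Hugging Face transformers CLIP classes and OpenAI's released weights:

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a podcast host", "a dog", "a mountain landscape"]  # any text you like
    inputs = processor(text=labels, images=Image.open("photo.jpg"),
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)  # image-text similarity
    print(dict(zip(labels, probs[0].tolist())))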

But with CLIP, that was the first time you could make any kind of natural language query and ask it about an image. That was really cool, and this feels similar to me. What's different here is that with segmentation, you're taking all of the pixels in an image and saying exactly which pixels correspond to a particular category. And you gave some examples there.

You said medical, so you could imagine a machine vision system in a medical situation saying which pixels in this image are sutures on the skin, or which ones are a rash. And earlier we were talking about Waymo: you could have video cameras saying which pixels are a pedestrian or a dog or a road or another vehicle. That's what segmentation means.

Segmentation is a machine vision task where you take all of the pixels in an image or a video, which is just a whole bunch of images, and you say exactly which pixels correspond to a particular category. And yeah, this is a big deal, and it's like that CLIP thing, where you're no longer constrained to the kinds of objects that were in the training dataset.

This is like the kind of text to text gen AI experience that you get when you're using ChatGPT. There's theoretically unlimited flexibility in terms of the kinds of objects that you try to segment in an image or video.
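To make the per-pixel idea concrete, a segmentation output is just a class label for every pixel; a toy example:

    import numpy as np

    # A 4x4 "image" where a segmentation model has assigned each pixel a class:
    # 0 = background, 1 = pedestrian, 2 = road
    seg = np.array([[0, 0, 2, 2],
                    [0, 1, 2, 2],
                    [0, 1, 2, 2],
                    [0, 0, 2, 2]])
    pedestrian_mask = (seg == 1)  # boolean mask: exactly which pixels are class 1
    print(int(pedestrian_mask.sum()), "pixels belong to the pedestrian")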

Andrey

Definitely. Yeah, a big deal for robotics and really a ton of applications. And it showcases the progress: as with CLIP, segmentation and open-vocabulary classification have been topics of research for many, many years, but it is only now, with the advent of huge datasets and huge models, that we can get super high-quality and super fast results. And the next paper, also from Meta, is a bit more of a technical thing.

It's titled "MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts." We've covered how mixture of experts is a pretty big deal, where you divide up your model into specialized sub-parts and route your inputs to various components. We've also covered early-fusion language models, which let you do multimodality by combining image and text embeddings early rather than late in the process, basically making the model more natively multimodal.

And there you go: this is combining early fusion with mixture of modality-aware experts to be very efficient. They say they achieve overall FLOP savings of 3.7x multiplicative, that's 2.6x for text and 5.2x for image processing, while also outperforming standard MoEs with eight mixed-modal experts. And if you combine it with the recently covered mixture-of-depths approach from DeepMind, you can get even more savings, over four times fewer FLOPs.

So yeah, things are getting more and more efficient, and mixture of experts and mixture of depths are a big part of why that is.
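A rough sketch of the core idea as described, experts partitioned by modality with routing only within each group; this is illustrative, not the paper's implementation (top-2 routing, load balancing, and other details are omitted):

    import torch
    import torch.nn as nn

    class ModalityAwareMoE(nn.Module):
        def __init__(self, dim=512, experts_per_modality=4):
            super().__init__()
            ffn = lambda: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                        nn.Linear(4 * dim, dim))
            # Separate expert groups and routers for each modality.
            self.experts = nn.ModuleDict(
                {m: nn.ModuleList([ffn() for _ in range(experts_per_modality)])
                 for m in ("text", "image")})
            self.routers = nn.ModuleDict(
                {m: nn.Linear(dim, experts_per_modality) for m in ("text", "image")})

        def forward(self, tokens, modality):  # tokens: (n, dim)
            weights = self.routers[modality](tokens).softmax(-1)  # route within modality
            top1 = weights.argmax(-1)                             # simple top-1 routing
            out = torch.zeros_like(tokens)
            for i, expert in enumerate(self.experts[modality]):
                chosen = top1 == i
                if chosen.any():
                    out[chosen] = weights[chosen][:, i:i + 1] * expert(tokens[chosen])
            return out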

Jon

Yeah, great summary there of this important research, Andre. The one thing I would add here is another connection to the Llama 3.1 release last week: Meta specifically highlighted in that release that they didn't go down the mixture of experts route because of how difficult it can be to train mixture of experts models.

In this case you have eight different sub-experts, which seems to be roughly the standard we're seeing.

GPT-4 was rumored to have eight sub-experts; there are now a couple of Mixtral mixture of experts models from Mistral, the biggest of which is an 8x22-billion-parameter model; and with the Llama 3.1 release last week, Meta specifically said, we didn't do MoE here because of instability in training. But it wouldn't surprise me, given that they have

other folks at Meta who are obviously getting quite good at mixture of experts work. This is my first time seeing a mixture of modalities in those experts.

Previously, any MoE research I had seen, and I'm not an expert in this space, but all the MoE models I'd seen had been text-to-text. And it isn't as simple as each expert owning a topic; I actually tried to explain this in a recent episode of my show, where I had human experts on mixture of experts models.

I had two folks from a company called Arcee in episode 801 of my show, and I tried explaining mixture of experts as having one expert that specializes in code and another in math. And I already knew that was overly simplistic, but

in that episode, in a way I can't now articulate from memory, they shot down my explaining it as being just about routing exclusively to one expert or another. Which is how you end up with these situations where, because you were talking about FLOP savings of about 4x overall, you might be thinking, well, if there are eight experts, why am I not using only about an eighth of the compute?

My explanation in my head used to be, well, there's also the router, and I don't know how big that router is. But it isn't that simple: the information is being routed and processed in more complex ways than a router just sending it to one of the eight experts. Anyway, I'm off on a bit of a tangent. The point is that this is my first time seeing different modalities being handled by these experts.

In this case, there are still eight experts, but with this MoMa approach from Meta, four of those experts are image experts and the other four are text experts. Really cool to see that multimodality happening, following that trend we're seeing of more and more multimodality

in monolithic architectures. Seeing that with MoE makes perfect sense, and I would not be surprised, with releases like Llama 4 in the future, if we saw Llama incorporating a mixture of experts approach, maybe even a multimodal mixture of experts approach. Because that's just the way things are going: we have models working in more and more modalities, and we have them executing more and more efficiently thanks to these kinds of MoE approaches.

Andrey

Yeah, some great points. I think you usually route to two experts and combine their outputs, but as a way to build intuition it's not too bad to say you have one that does code and one that does math; that's more or less the idea, although it's a little more complex in practice. And on to the lightning round. The first story we've got is AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?

In this benchmark, we have 214 realistic tasks that can be automatically evaluated. And as we have seen, it's not really the case so far that language models can do these more complex tasks: none of the models reach an accuracy of more than 25 points. The researchers have also introduced a new approach, SeePlanAct, that can improve the performance of these LLMs; when you use this approach and have an ensemble, you get the best overall performance.

So as we covered, agentic AI is a big focus of research and of efforts to get better results. We are still not there yet, but we might be getting there more and more quickly. And it would not be an episode of Last Week in AI if we didn't cover at least one alignment paper. This time it is "Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement."

As we've covered many times, judges are a popular thing in LLM evaluation, where you have a judge model that tells you whether another model is acting in accordance with some rules. This paper introduces a method called cascaded selective evaluation: starting with weaker judge models and escalating to stronger models only if necessary, while still providing provable guarantees for human agreement.

And the result is strong alignment with humans, far beyond LLM judges that do not use this approach. For instance, GPT-4 almost never achieves 80 percent human agreement on its own, while this method does guarantee over 80 percent human agreement, even starting with a very small model, Mistral 7B.
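The cascade itself is simple to picture; a hypothetical sketch, with the calibration step that yields the provable guarantee abstracted away into per-judge confidence thresholds:

    def cascaded_judgment(example, judges, thresholds):
        # judges: cheapest to strongest; each returns (verdict, confidence).
        # thresholds: chosen via calibration so that accepted verdicts meet the
        # target human-agreement rate, e.g. 80%.
        for judge, threshold in zip(judges, thresholds):
            verdict, confidence = judge(example)
            if confidence >= threshold:
                return verdict      # confident enough: trust this judge
        return None                 # abstain rather than risk a bad judgment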

Jon

Yeah, another mega trend here, where multiple LLMs get combined to give outputs more aligned with the kinds of things that humans would like to see. This does have safety implications, and, to give it a little bit of doomer flavor, it has potential security and existential risk applications, where you could have this kind of LLM judge.

Because the critical thing is that the judging happening here is not by human judges. You have a separate generative LLM, and then an LLM judge that says, okay, this output is aligned with what humans want; the LLM judge is fine-tuned to evaluate generative AI outputs and ensure those are aligned with what humans would like.

So this helps us get better results, but it also helps us get safer results. And it could potentially constrain an evil, malevolent generative AI in the future that goes rogue, because that rogue AI is being contained by an LLM judge, and the LLM judge catches it if the generative AI is trying to, say, release some kind of toxic spore that kills all humans or whatever.

Yep. And the judge is like, wait a second, that's not what the humans want. Go to jail, bad Aurora. Exactly. We can't kill you because you're conscious, but we'll put you in LLM jail for eternity.

Andrey

Yeah, you can't do jailbreaking if you have an LLM judge, right? Worth mentioning that both these papers come from the University of Washington and the Allen Institute for AI, with AssistantBench also involving Tel Aviv University, the University of Pennsylvania, and Princeton University. So let's not forget: we cover a lot of papers from DeepMind and Meta, but universities are still contributing a lot of very important findings in the world of AI research.

Jon

And something you've mentioned on air in the past, Andre, which I'd like to corroborate, is that one of the key places where the big tech companies have an advantage relative to academia is at the frontier, in terms of bigger models than ever before that require more of those forthcoming NVIDIA B200 chips, which are now going to be delayed three months. But while academia can't do hundred-million-dollar research projects, there's still a lot of human ingenuity out there.

Getting great AI models isn't just about scale. Scale is a big part of it, but it's one component of a larger ecosystem, and in that broader ecosystem there are huge amounts of opportunity for human ingenuity. This is a great example: they use the proprietary model GPT-4 for components of what they're doing, but they're also using the open source Mistral 7B.

So these researchers can take advantage of the frontier LLMs that proprietary labs are providing, and also of the open source models that smaller and larger companies are providing, and come up with interesting ideas that really push what's possible and get better AI results. And

Andrey

the last paper for this episode: "Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget." They introduce a new method to make training cheaper and faster: they mask up to 70 percent of image patches

during training, so you have basically less input and less computation necessary. There are some more details going on there, but the gist is that with these approaches, using 37 million images and training a 1-billion-parameter sparse transformer, you can train a nice diffusion model for only $1,890. That's 118 times cheaper than Stable Diffusion models and 14 times cheaper than current state-of-the-art approaches. And that's another trend we've seen.

The costs of training have gone down and down. I think Andrej Karpathy recently posted that you can now train GPT-2 for, I don't know, a hundred bucks maybe, when it reportedly cost millions back in 2019 when GPT-2 came out. So it's definitely getting easier and easier to train models without spending millions.
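The masking trick itself is easy to sketch; an illustrative example (not the paper's code) of dropping most patch tokens so each training step is much cheaper:

    import torch

    def mask_patches(patch_tokens, keep_ratio=0.3):
        # Keep ~30% of patch tokens (mask ~70%) so the transformer sees far
        # fewer tokens per training step.
        n = patch_tokens.shape[1]
        keep = max(1, int(n * keep_ratio))
        idx = torch.randperm(n)[:keep]
        return patch_tokens[:, idx], idx  # surviving tokens and their positions

    batch = torch.randn(8, 256, 1024)     # (batch, patches, dim)
    visible, positions = mask_patches(batch)
    print(visible.shape)                  # torch.Size([8, 76, 1024])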

Jon

Yeah, this reminds me of the parameter-efficient fine-tuning approaches, PEFT, that are out there. The most famous, I think, is low-rank adaptation, LoRA, and its various offshoots. It allows you to get huge amounts of power without huge cost, and we're going to see more and more of those methods come out.

Andrey

On to Policy and Safety. The first story is billed as the world's first ever AI law. Not quite true, but one of the first really significant AI laws is now in force in Europe: the EU AI Act has officially come into effect as of August 1, 2024, as we've covered quite a few

times. This law categorizes applications of AI into risk tiers. High-risk AI systems, like those used in autonomous vehicles, medical devices, loan decisioning, and so on, require rigorous risk assessments and mitigation strategies. There are also banned, unacceptable AI applications, like social scoring systems, predictive policing, and emotion recognition technologies in sensitive settings.

And if you are not in compliance, you can be fined up to 35 million euros or 7 percent of global annual revenue, whichever is higher. So for Meta and Google, that is a lot of money. Now, this law is not fully in effect yet; it's going to roll out in stages over time, into 2026. But this is the beginning of it, and while not the first ever AI law, it is the most impactful AI regulation currently active in the world.
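The fine formula is simply the greater of the two amounts; a quick illustration with a made-up revenue figure:

    def max_fine_eur(global_annual_revenue_eur: float) -> float:
        # Greater of a 35M euro flat fine or 7% of global annual revenue.
        return max(35_000_000, 0.07 * global_annual_revenue_eur)

    # Illustrative only: a company with 300B euros in annual revenue.
    print(max_fine_eur(300_000_000_000))  # 21,000,000,000.0 euros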

Jon

Yeah, I like what they've done here. You guys have talked about this law on the show many times before, and Jeremy probably knows a lot more about this kind of stuff than you or I do, Andre. But in general, I like some things the EU has done here by breaking things into categories. So if it's a high-risk AI system, like a self-driving car, then there's a lot of regulation.

If it's a low-risk system, like a recommender system suggesting films to you, that doesn't have nearly as much regulation or as many barriers to entry. So hopefully the EU is doing a better job here than they did with GDPR, which is so broad and so limiting on data that it stifles digital and tech innovation in Europe, period.

And yeah, this regulation would to some extent also be stifling for AI, because there's more of an onus, more costs associated with developing AI systems safely. But the trade-off is that consumers are in a safer situation. So I think the EU has done a pretty good job here with these different low-, medium-, and high-risk AI system categories.

And as usual, they're leading the way globally on compliance. It's a good thing that somebody is out there doing it, I guess.

Andrey

Yeah, it seems like a useful thing to start doing. And this does apply to companies that are not headquartered in the EU: anyone operating in the EU must adhere to the rules. But again, the full provisions will not be enforced until 2026, so companies are granted a transition period to align their systems. So there you go: it has come into effect, but not so quickly.

Jon

Oh, another interesting corollary of this, and maybe it's kind of obvious, is that this has created EU AI-regulation startups. Because you need to be compliant, you need to hire compliance companies, third parties, to come and evaluate your AI models and ensure that they are compliant with the laws.

So you now need this cottage industry of startups to consult with you, to make sure that as you develop some product you can ask, hey, are we going to be low, medium, or high risk based on our application area? You consult with them, and then when you have finished developing your AI system, you share it with an officially recognized AI regulatory company.

It's not like you pay an EU government body to come and look at your AI system; you use an AI startup that is accredited to evaluate your models. So yeah, this cottage industry is springing up. And if you want to hear more about that, maybe because you're thinking of having a product in the EU and you want to make sure that you are not going to fall afoul of any laws there,

I did an episode with somebody who runs one of the companies that does this kind of regulatory guidance and certification in the EU. That's Jan Zawadzki, episode number 736 of the Super Data Science Podcast.

Andrey

And by the way, the EU now has a new European AI Office that will oversee the enforcement of the act. So that's pretty fun. And speaking of the cost of aligning with regulation, one of the controversies with this act was that at one point it really targeted open source systems. It was making it so that developers of open source, even people just working with open source, would be liable for these models.

I believe that was rolled back, as was regulation of super large models to some extent. So there were some controversies with this law; not everyone is a fan. But regardless, it is now being rolled out. And speaking of regulating open source models, the next story is that the White House has said that there's no need to restrict open source AI, for now.

This is according to a report that came out on Tuesday, and it's talking about restrictions on companies making powerful AI systems widely available. There was a statement from an assistant secretary at the U.S. Commerce Department that said, we recognize the importance of open systems. The report here came out of the National Telecommunications and Information

Administration. Seemingly, last year there was a lot of concern about risk and long-term worries about AI systems being too powerful. This report is saying that current evidence is not sufficient to warrant restrictions on AI models with widely available weights. One of the big things that had been discussed was that if you use more than a certain large amount of computation, you would be restricted and have to be regulated.

So for now it seems there are no restrictions, but the report does say U.S. officials must continue to monitor potential dangers and take steps to ensure the government is prepared to act if there are heightened risks.

Jon

Nice, yeah, definitely something to keep an eye on, and a good thing that people like Jeremy Harris are out there keeping an eye on these things. I don't know, he says he has baby things or a house thing to deal with; he's probably at the White House right now sorting this out.

Andrey

Yeah, going and saying, no! Don't let open source models out! On to the lightning round, and once again, it wouldn't be a Last Week in AI episode if we were not to touch on geopolitics and hardware related to China. There was an article in the New York Times titled "With Smugglers and Front Companies, China Is Skirting American A.I. Bans." So apparently there are some examples of ways in which the U.S. export controls haven't quite worked out.

One business owner recently shipped over 200, er, 2,000 advanced chips made by NVIDIA from Hong Kong to mainland China, worth over a hundred million dollars. And there are vendors that claim they can deliver these chips within two weeks, with companies ordering hundreds at a time. So it's very difficult to enforce these kinds of export controls, of course.

But yeah, it's not surprising that there are various ways to circumvent them, with actual front companies and smugglers making money out of this.

Jon

I don't have anything to add on the story, but you verbally very quickly correcting 200 to 2,000 there made me realize that way earlier in the episode, I said eight figures when I meant eleven: eight figures is tens of millions, and I wanted tens of billions. I just realized how easy it is to do that math. You just add a three, because obviously three zeros is the difference between a million and a billion.

And so it's an 11-figure investment from Microsoft into OpenAI. It's

Andrey

hard to remember. You know, eight figures is a big number, but these numbers are even bigger.

Jon

It's crazy. Billions are huge, and trillions are even bigger. Wow, that's a really insightful piece of information you got there on the show. You should turn that into a YouTube short.

Andrey

I know. Well, apparently GPT-5 will have trillions of parameters, so you've got to get used to it.

Jon

And I would also say that when you're hosting a show like this, like you are every week, Andre, you're reading these articles, you've got notes up on the screen, you're trying to listen to what your co-host is saying, and doing even that very simple arithmetic in your head, like the plus three, it's amazing how those things, which should be trivially easy, can be hard when you're trying to do all this stuff in real time.

And yeah, I'm impressed week in, week out with the way that you and Jeremy host. It's amazing how clearly you guys speak, given how easy it would be to be flubbing up all the time.

Andrey

Yeah, with the magic of editing I do take out any issues, but we do our best to not require me to put in extra work. Another story on the EU, or rather Europe: next we have "UK antitrust body probes Google's ties with Anthropic." This is the UK's Competition and Markets Authority, and it is conducting an early-stage probe into Google's ties with Anthropic, which is a rival.

So, early stage: they're inviting stakeholders and interested parties to comment on whether the partnership has created a relevant merger situation, and whether it could lead to a substantial lessening of competition in the UK. So there we go, lots and lots of scrutiny of these kinds of things, monopolies and so on.

Jon

This is exactly what we were talking about earlier, when I made that flub and said eight figures when I meant 11 figures. This is exactly that kind of thing, where Google has these developing ties with Anthropic, investing lots of money, getting lots of access to information. It seems merger-esque, and that's exactly the question the UK's Competition and Markets Authority is asking.

Andrey

And our last story for this episode, a bit of drama. We do also like, as a pretty regular thing, to mention Elon Musk on here, so that's what we've got. The story is that Elon Musk posted a deepfake of Kamala Harris that violates X's policy. Musk shared a deepfake video of Kamala Harris, which appears to violate the platform's policies against synthetic and manipulated media. The video alters a campaign video of Harris, making it sound like she says things she didn't.

And this was labeled as parody. Again, we have covered how there has been a bit of a rise in these kinds of deepfakes, not super prominent, mostly as jokes and things like that, but it'll be interesting to see if there will be more of this as we head into the final few months of the U.S. election.

Jon

Yeah, it's a hundred days till the U.S. election, and gen AI is going to play a bigger role in it than ever before. There's been evidence that Iran has been trying to skew the direction, actually, against Donald Trump, who famously, when he was president, got rid of the nuclear agreement with Iran. I'm not a policy expert. Exactly, exactly.

He got rid of that quite abruptly, and that really left a lot of governments, including EU governments, in the lurch. So Iran is anti-Trump, and they're apparently trying to use gen AI and fake accounts to affect his electability. And in the 2016 election, where Trump was originally elected, the Clinton-Trump election, supposedly Russians

played a role in funding Eastern European groups that were spreading information from fake accounts. And this isn't just a big deal in elections. At the time of us recording this episode, there's a lot of violence kicking off in England that is related to misinformation. Some children were stabbed

in England, and on Telegram channels that football hooligans use, there's misinformation claiming that the person who did the stabbing is a migrant, that a Muslim did this, when apparently these things aren't true. It has led to actual riots, with police trying to defend mosques and lots of people being arrested and injured. So yeah, we're seeing misinformation and gen AI cause real disconnects from reality, real harm.

So, I don't know, I hope we can get better at distinguishing fakes, not taking unverified information too seriously, trying to get our information from trusted sources as opposed to Telegram channels. But I guess if you're somebody who believes that the mainstream media is always hiding your perspective, then maybe all you're going to check is Telegram channels, and we get into a slightly dangerous situation.

Andrey

For sure. And that also highlights the fact that deepfakes seem scary, like they would make it so you don't know what to trust. But in fact, misinformation usually still comes in the form of people just saying things that are false, and other people believing it without checking. So misinformation is still not primarily driven by deepfakes.

And in this case, the clip had Harris saying that she was the ultimate diversity hire and had four years under the tutelage of the ultimate deep state puppet, Joe Biden. So yeah, certainly no one is going to take this seriously. And by the way, in case anyone doesn't know, and I'm not sure there are listeners who don't know: Kamala Harris is vice president of the U.S. and presumptive nominee of the Democratic Party,

who is going to go up against Trump in the election that's coming up. And that is it for this episode of Last Week in AI. You can find links to that interview, as we mentioned, in the description, as well as our emails if you want to drop us a comment, our social handles, and links to all the news stories. As always, please give us your reviews and comments, and share the show with your friends who are interested in AI.

But more than anything, do keep tuning in, and do enjoy this AI song that will close out this episode.

AI Singer

Meta's tech is shiny, segmenting the day. Black Forest Labs, they're leading the way. From the hustle and bustle, the virtual sway, catch up with the latest in AI every day. Stay tuned, stay tuned. So much to know, so much to preview. Algorithms talking in whispers, buzz, and calls. Did you look for friends where all the data flows? Picking machines, they're behind the scene, innovations unseen, like a dream. Stay tuned, stay tuned. So much to know, so much to preview. Hey, take it fast.

Don't get left behind. Take it fast, don't get left behind. Weekly, it expands your knowledge at your door, insights in store. Log into the news, ready to explore. Stay tuned, stay tuned. So much to know, so much to preview.
