
#168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

May 28, 2024 · 1 hr 45 min · Ep. 207

Episode description

Our 168th episode with a summary and discussion of last week's big AI news!

With guest host Gavin Purcell from AI for Humans podcast!

Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

Email us your questions and feedback at [email protected] and/or [email protected]

Timestamps + Links:

Transcript

Andrey

Hello and welcome to the latest episode of Last Week in AI, where you can hear us chat about what's going on with AI. As usual, in this episode we will summarize and discuss some of last week's most interesting AI news. As always, I do want to mention that beyond the stories we are covering in this episode, there's a bunch more in our text newsletter at lastweekin.ai. I am one of your hosts, Andrey Kurenkov. I finished my PhD at Stanford last year and I now work at a generative AI startup. Once again, Jeremy is not able to be here this week; he'll be back next week, in case people miss him as a co-host. But we have another very cool guest co-host this time, and I'll let you introduce yourself.

Gavin

And I miss Jeremy too. By the way, I listened to last week's episode, and it was great. I can't remember what her name was, but she was really good. But I do miss Jeremy as well; you guys are good together. My name is Gavin Purcell. I have been on this show once before. I am one half of the podcast AI for Humans; we specialize in a little bit less technical, a little bit more human-centric coverage of AI.

And we do cover the big news stories in tech, but we also kind of do it through a human lens. Our goal is to demystify AI for a lot of people. And Andrey, this is a big week for that kind of story. There is a lot of stuff to talk about, so I'm glad you had me on here, because I may not be as technical as you and Jeremy, but I think I will have a lot to say about some of the things we're going to talk about.

Andrey

I know, right? Yeah. And as I mentioned the last time you were a guest on this show, I really am a fan of AI for Humans as a podcast. It does something pretty different from what we do and from what all these interview podcasts do. There are a lot of examples:

I think you play around with a lot of AI tools, you have AI co-hosts with different personalities and voices, and you talk about, you know, image generation, making little videos, things like Pi, which is a different type of chatbot.

Gavin

Rip, rip. I know, it's a bummer. That was like our first love. Honestly, Pi was the original Scarlett Johansson, that's what I like to call it. Pi was the original very good voice chatbot, and it's not going to be around for much...

Andrey

Longer. The original emotionally intelligent one. Yeah, exactly, exactly. Not quite as flirty.

Gavin

Yes. Although my wife would beg to differ. She would hear it and say, that Pi sure sounds like she's interested in you.

Andrey

Yeah. And, you know what, before we dive into this week's news: last week was pretty big with GPT-4o and Project Astra, and no doubt you covered that on your own pod last week. So, real quick, what were your reactions, especially to GPT-4o?

Gavin

Oh, I mean, Kevin and I talked about it, and obviously it was a big deal. I think the thing that was really interesting: a friend of mine said to me, hey, you guys covered it slightly differently than other people, in that you really spent time thinking about how this could impact and broaden the appeal of AI to people. And I think that's the thing I took away from it. I know a lot of people heavy in the AI community were like, this announcement sucks, it's not cutting edge enough.

It's not GPT-5. It's not doing interesting reasoning stuff. And I think you all have to remember that we are a small group of people that are really deep in this space. I think, if this works as planned, and granted, we all have to be clear that it may or may not, once we get this thing in our hands we'll see how good it actually is. But there was another video, Andrey, I don't know if you saw this, that came from a guy, I think it was in France. He was doing a demo of multimodal GPT-4o audio with voice, and he was showing it off, and it seemed to work in real time via the Mac app very well.

So my thing is, if we now have a persistent, interruptible, really fast voice assistant, this is, I know this is gonna sound like hyperbole, but it is kind of a fundamental shift in how humanity operates. Meaning that if we have an always-on, interesting, smart AI that's always available to us to ask questions or interact with, there are all sorts of positives and negatives. But once that gets into the hands of 50 million, 100 million, 500 million people, life may change.

And I think that's an important thing for people to take away, because very few people know what the capabilities of this are yet, and I think that's going to be a big deal.

Andrey

I totally agree. Yeah. I think it's still not entirely intuitive to use a chatbot. If you haven't used any of these things before, it might be a little daunting. You may not know exactly what to say, because we're used to computer programs where, you know, you write stuff in Word. And if you haven't really dug in, you don't know that you can kind of do anything, and in some sense it's hard to believe its capabilities.

But with GPT-4o, they're taking it to the next level, where I think it's very intuitive to just talk to someone, right, as if they are human. And so I tend to agree.

Gavin

Yeah, yeah. My only concern, and I think this is a concern most people hold, is that the hallucination problem will become amplified now. And I think we see this a little bit with what's going on with Google AI search. I don't know if you've seen that, but today or yesterday there have been all these people posting pictures of how Google AI search is getting stuff wrong, and sometimes getting stuff wrong in a very bad way. I think that's going to be the next kind of hurdle, and I hope that GPT-5 can find a way to figure that out, because if the hallucinations are too bad, you're not going to get as wide adoption as you could. So that's my overall thing. Honestly, I think it's as transformative as GPT-4 was, if it comes out in the form that it's in. That's what I would say.

Andrey

Yeah, I will say I think those hallucinations are somewhat solvable, and in some sense I think these issues are due to people kind of racing to release. We also have, yes, Perplexity, where, you know, there are techniques you can use to double-check the outputs. But it's expensive and it's not exhaustive.

Gavin

Like Google, like you're just going to push it live to a billion users worldwide and see what happens. You're going to have some edge.

Andrey

Cases, I assume. Exactly. Yeah. Well, that's it for discussing last week's news. We do have some stories that are building on last week's news this week, and there's some not quite as exciting stuff, but still, some interesting and surprising...

Gavin

I would say. Here's what I would say: I've been thinking about this. Last week showed the absolute promise of where AI can go. In some ways, this week shows the hurdles that we're going to have to get past to get there, right? It's almost like a yin and yang. The news last week felt like, oh my God, we're really moving fast. And then this week it was like, wait a second here, there are a few things to think about before we race to the future.

That's my kind of take on it overall.

Andrey

And one more thing before we get into the news: as usual, I do want to call out some feedback we've seen from our listeners. There's a new review on Apple Podcasts from Am Lars, where the headline is "Don't change your format," arguing for not going daily, keeping it weekly, and diving more deeply into things beyond what other podcasts do. And that's very much appreciated. We do really enjoy the feedback, and I think that's the plan.

We do like this Last Week in AI round-up; I mean, that's the name of our podcast. So we will keep it as it is. All righty, well, let's get started with the news as usual. The first section is Tools and Apps, and we begin with some news about GPT-4o. If you've been following some of the big headlines, you've already heard this: there was a bit of controversy over the notion that the voice in the demo of their GPT-4o model sounds a lot like Scarlett Johansson. This kind of got kicked off with Scarlett Johansson actually releasing a statement that revealed that OpenAI contacted her twice, asking to basically have her voice for this. And the statement essentially said that she refused, and that she was, let's say, displeased to see the voice sound quite a bit like her.

There are comparisons out there. It's not exactly the same, but it does sound similar to her, in particular from the movie Her, where she played an intelligent AI assistant very reminiscent of GPT-4o. And of course, Sam Altman did tweet "her," and this was kind of broadly acknowledged in the conversations. Then, as a follow-up, OpenAI did say that they didn't use Scarlett Johansson's voice; there was another voice actor for the voice.

But they also said that they would not be releasing this particular variant of the voice of GPT-4o; they would pause it until this kind of got cleared up. So, yeah, that's an example, as you said, of one of the hurdles, where even if the voice just sounds similar, where it wasn't necessarily breaching any copyright or using any data, still, you know, you're going to ruffle some feathers.

Gavin

Yeah. And I think the thing to know about this is, obviously, anybody who's been in this space knows that this voice was in the ChatGPT voice app in September 2023. And honestly, people thought it sounded like Scarlett Johansson then; I really did think it was true. I thought they had made a sound-alike kind of on purpose.

And what was funny about this is, I don't know if you caught this, but there was a blog post from OpenAI that went out Sunday night, and it was kind of like, whoa, why are they putting out a blog post on Sunday night? And I dug into it and was like, oh, they're actually saying this doesn't sound like Scarlett Johansson, directly in the blog post. And it's like, that's weird. And then Kevin and I made a video about it on Monday, and that whole day was when Scarlett's thing came out. If you read Scarlett's statement, it is pretty significant, right? She says: Sam Altman asked me to do it, I declined, and then two days before this last demo came out, my agent was reached out to, but I didn't talk to them, and then they put it out. Also, and this is all maybe gossipy for your audience, but Scarlett Johansson's husband is Colin Jost, and they told a joke about it on Weekend Update this weekend. So maybe it bubbled up ahead of time because of all the news that came out last week. It is not her voice, quote unquote "her" voice, sorry, everybody. But I think there is something here that feels like, you know, a self-own on OpenAI's part, right?

Like, OpenAI, and a lot of people have talked about Sam Altman's hubris, right? One of the other things to be aware of this week, which I think we may get into later, is there have been a couple of other controversies around OpenAI and the way that Sam Altman has behaved, per se. To me, this was solvable: just don't use that voice in your demos. The other voices, I'm sure, are just as good. Or maybe they're not, and they could only focus on one of these demos to make it really good; I don't know, because we haven't seen the other voices in action, really. Actually, that's not true, we have seen a couple of other voices now. But anyway, it feels like this is the tech world not understanding how their technology may be perceived by the larger world. And this was not needed. This is a story that didn't have to happen.

And if they had just kind of like looked down and kind of thought about what they were doing, they could have figured this out. And OpenAI is a company you hope has people inside of it that are kind of saying those things. But then again, maybe those are the people that are no longer there because a lot of people have been leaving OpenAI, so who knows? Either way, I think this is not a good thing for the AI industry in the mainstream at large.

I think it makes it look bad. Also, as somebody who's worked in TV and media and lives in Los Angeles: it's really bad in the Hollywood industry, because there's already a whole narrative going around about deepfakes and all these other things. A piece of the SAG-AFTRA strike was all about AI. I don't think this in any way endears the tech world to the Hollywood community. And, you know, after the election stuff, the election mess with Facebook, the tech world is not exactly in everybody's good books, let's say. So I think this is a bigger story than the tech community is giving it credit for, and maybe not as big a story as the mainstream is saying. Does that make sense to you?

Andrey

Yeah, exactly. I think the self-own angle makes a lot of sense. And not just with regards to the voice, but also with sort of the positioning of it, where there were tweets that explicitly kind of...

Gavin

Hurt. Don't tweet "her." Also, like, why? You knew that you had already asked Scarlett Johansson and she said no; just don't tweet it. I think, honestly, that tweet was what gave Scarlett Johansson the ability to kind of come after him a little bit more.

Andrey

Yeah, and I think that comes from a place of, you know, being inspired certainly to some extent by that movie.

Gavin

Which, by the way, does not end well, Andrey. It does not. Have you seen Her? It is not a fun ending for the humans. That doesn't mean all the humans die, but, like, go watch that movie and tell me if you should plan on the future looking like that. I'm not sure if we want that.

Andrey

Yeah. And one of the outcomes of this was a lot of people saying that the voice presented in the main demo, in the big reveal, was flirty, that was the word used, and The Daily Show had a segment on it that basically just covered that angle. Yeah, the voice being sort of, you know, the kind of voice that a woman would arguably take on to cater to men. And that's another aspect of it, where maybe they didn't fully consider the reaction people would have.

Like, it's good to move away from the very robotic thing we're used to, and it's impressive given that Siri and other assistants are much more formal, but maybe they took it too far. You know, otherwise maybe The Daily Show would have covered the notion of it being kind of mind-blowing: you had ChatGPT, now you have the aspect of it where it can look at your homework with its eyes and help you, or whatever, and we could chat about how, wow, AI is progressing so fast, and this is both cool and scary.

But no, this segment was entirely about the voice and how it sounded, and not the other aspects. So yeah, I think they maybe fumbled it a little bit. But in the big scheme of things, it's not going to impact them too much.

Gavin

The tech is still incredible, right? With what we saw in those demos, and what we saw in that most recent demo, the one I think from France, I am really shocked at how good that got. And, you know, we did talk about this, and you probably remember, but OpenAI has said they had these voice models for longer than anybody else, and they weren't going to put them out right away. You can tell there's a lot of work that has gone into this.

The biggest thing, I think, is the latency. I can't wait to have a low-latency voice assistant, because it's fun to use the voice assistant as is, and even Pi was this way, but having to wait a couple of seconds for an answer, or even 3 or 4 seconds, is too much. And I think the latency part of it is going to be a big deal.

Andrey

It's a huge deal for sure. And the next story is again about GPT-4o, this time about Microsoft's Copilot assistant getting a GPT-4o upgrade. And that's kind of a big deal, because GPT-4o is GPT-4 grade, is what we've seen, in terms of performance as a chatbot. Even without a voice, it is about as good as GPT-4, which was the paid tier for ChatGPT, and I think you maybe needed the paid Microsoft Copilot to get access to that better model.

But seemingly it'll be much more broadly available for people to have GPT-4o. And aside from the voice, GPT-4o is set to be twice as fast as GPT-4 and cheaper. I've been using it in my job, and it seems like you can just replace GPT-4 Turbo or GPT-4 with it.
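For listeners who use the API, the swap Andrey describes is essentially a one-line change. Here is a minimal sketch using the official OpenAI Python client; it assumes an OPENAI_API_KEY is set in the environment, and the prompt is just an example:

```python
# Minimal sketch: swapping GPT-4 Turbo for GPT-4o in the OpenAI Python client.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # previously "gpt-4-turbo"; GPT-4o is faster and cheaper
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize last week's AI news in one sentence."},
    ],
)
print(response.choices[0].message.content)
```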

Gavin

What has been your experience on the coding side of it? Because I've seen mixed reactions. Some people are saying it's great at coding, it's better; other people are saying it's not nearly as good. Have you had a reaction one way or the other yet?

Andrey

Personally, I haven't had much need to do sort of big coding.

Gavin

Yeah.

Andrey

Like, yeah, exactly. I haven't had to generate entire files or whatever, so for me it's kind of hard to say. But still, you know, we have everyone using GitHub Copilot as an extension in the company, basically, and I still get a lot of use out of it.

Gavin

I was gonna say really quickly: we use ChatGPT each week, whatever the latest model is, to kind of make our AI co-host prompts or to do other things with it. And I have found GPT-4o to be really good, much better at doing things and taking complicated tasks on. These are not math-based tasks.

These are more like kind of creative character tasks, but it understands better and it has a better sense of like how to parse out a longer kind of prompt because they're like, those prompts are pretty long, like 3 to 4 paragraphs, and it's very good at it.

Andrey

And related to this, a couple more announcements from Microsoft. They had a pretty big event, and we won't be going super deep into it, but they did also announce what is almost a certification program for PC laptops, the Copilot+ PCs, where there's a requirement that they come with chips that are specifically for AI.

Gavin

The NPU, the neural processing unit, which seems like a gibberish word they just kind of made up. It's your brain, man. It's all in your brain. It's neural.

Andrey

Yeah, we have the CPU and the GPU; now you have NPUs that are meant to be included in hardware, and they are building this capability, I think, deep into the OS. So the intent there is: if you want real-time, really fast usage of these models, ideally it can be on-device, and ideally, you know, it can be as fast as possible. And I think that's partially why they've invested a lot in these smaller models like Phi. So yeah, quite a bit of news from Microsoft.

And as we've seen, the partnership with OpenAI has paid dividends, including with this GPT-4o feature in their Copilot assistant. So if you've been using Copilot, I guess, good news for you.

Gavin

And also don't forget about Recall. Recall was a big deal from that event. Recall is a privacy concern, but the idea is, Satya Nadella announced that there's a program now on these AI-enabled PCs which will allow you to basically be watched: it will see all this stuff and be able to search your computer locally, not in the cloud, locally, and you'll be able to retrieve stuff and ask questions in natural language. I think, honestly, this is interesting and cool.

I know a lot of people are concerned they don't want a computer watching them, but like. It is a fascinating use case of AI. It is the ultimate like Star Trek computer. It needs to know everything about you as well as other people and about the outside world. So I think that's interesting. I know a lot of people who are like, I'm never, ever, ever going to turn that on, which I understand for sure too.

Andrey

Yeah, exactly. I should mention, with Recall there was a bit of pushback on the idea that the AI will see everything you do. And I think my take on it is broadly: look, in some sense, if you want a really good AI assistant, it needs to know about everything you do. So this is a way towards the best possible, most helpful AI, if it can just see what you're doing. And you already have this to some extent with Google integrations into Gmail and Google Docs.

Like it knows your documents. It knows what you are saying, right?

Gavin

Yeah, it knows who you've been emailing. It probably doesn't know exactly the content; it may not know everything, but it knows you pretty well.

Andrey

Right? So my guess is, you know, there might be an opt-out way to disable it, but more and more this will just be the case: AI will be there and know what you're doing in order to generate these outputs.

Gavin

Big Brother's just a fun guy, that's all. He's a fun guy that you can have looking over your shoulder all the time. He'll just be watching you. It's not a big deal.

Andrey

Yeah. Big brother, you know, is there to help.

Gavin

So that should be their new slogan. Big brother's here to help anyways.

Andrey

Yeah, yeah. And on to the lightning round, with some less big stories and some quick ones. The first one is about ElevenLabs launching an AI voice screen reader app, and that's kind of what it sounds like. ElevenLabs, of course, is one of the premier text-to-voice startups, which for a while has been used a lot to demonstrate really impressive voice synthesis. And now they've launched this new thing that isn't just an API offering to generate voice; it's an actual app you can use. It's called ElevenLabs Reader, and it can read web pages, PDFs, and other documents in various voices. It's currently being offered free to download and use in the US, UK, and Canada. And it's, yeah, nice to see. One of the things that we maybe forget, but that is worth capturing, is that AI will make accessibility much more powerful for a lot of things, for people who have trouble with vision in particular, or even hearing. AI will help quite a bit.

So that was nice to see. Next up: Adobe Lightroom gets a magic eraser, and it's impressive. That's the title. So Lightroom is another offering by Adobe, somewhat similar to Photoshop, but catered more specifically to editing photographs rather than creating arbitrary images like you can in Photoshop. We've covered a lot of features Adobe has added to Photoshop, like Generative Fill, and this feature is called Generative Remove.

Once again powered by Adobe's Firefly, and similar to inpainting, it allows users to paint over unwanted objects in images and delete them. It will offer you a few variants of how to replace the object. So again, I think Adobe has impressed us with the number of things they have rolled out into Photoshop and Lightroom, and this will be pretty useful, I think, for photographers.
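For a concrete sense of what a tool like Generative Remove is doing, the underlying technique is diffusion inpainting: the user's brushstroke becomes a mask, and the model fills in the masked region. The sketch below is not Adobe's Firefly pipeline, just the general approach with an open-source model via Hugging Face diffusers; the file names are placeholders:

```python
# Minimal sketch of diffusion inpainting, the general technique behind
# object-removal tools like Generative Remove (this is NOT Adobe's pipeline).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("photo.png").convert("RGB").resize((512, 512))
# White pixels mark the object the user painted over; black is kept as-is.
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# Produce a few candidate fills, as Generative Remove offers variants.
variants = pipe(
    prompt="empty background, seamless continuation of the scene",
    image=photo,
    mask_image=mask,
    num_images_per_prompt=3,
).images
for i, img in enumerate(variants):
    img.save(f"removed_{i}.png")
```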

Gavin

I think it is so interesting that Adobe does these things and artists are like, yeah, we love this, and then if other AI companies do things, artists are like, they're stealing everything. And I don't mean to say the other companies didn't take everything, because there is a lot of bad stuff out there. But Adobe, because they've provided tools for artists for so long, it's an interesting branding situation.

They've done a very good job of giving artists access to things that are useful using AI, and in some ways the artists don't balk at it, because they say it's in their workflow. And I think that's an interesting dynamic in how people see AI from Adobe. Thank goodness for Adobe, because they're doing a good job of kind of onboarding some of those people, because I think you and I and Kevin all believe that the sooner these people take on the tools that are coming, the better they will be suited for the jobs of the future. So I think this is great. I honestly love that Adobe is doing this, and they're not getting a lot of crap for it, either.

Andrey

Yep. And they, I think, also positioned themselves smartly with Firefly, saying that it was trained on licensed data, not copyrighted data, which...

Gavin

Was, although, there was that one story, did you see this? There was a story that it was found to have been trained on some Midjourney data as well, which is kind of a little poisoning of the well, not perfect data, right? But I appreciate that too, as well.

Andrey

Yeah. And this is currently free to use in beta, but will probably adopt the generative credit system you have in the other Firefly-powered tools. Next: Microsoft and Khan Academy will provide a free AI assistant for all educators in the US. So they have this AI assistant, Khanmigo, for teachers, and the AI assistant can do various things: help prepare lessons, analyze student performance, plan assignments. For context, Khan Academy is a major institution for learning. They offer a lot of classes covering math, English, etc., and they have been around for quite a while. So, yeah, I think this is pretty exciting. Teachers have a lot of work, it's quite a burden, and AI assistants can really help, significantly, to make it easier to keep up with it.

Gavin

Yeah. My wife teaches writing; she's a novelist and teaches writing, and I see a lot of the education side through her. And I really think, well, first of all, Khan Academy is amazing. It's been an incredible thing, around for a long time. I think this is great. A friend of mine runs an organization called AI Edu, a nonprofit that specializes in teaching teachers; their whole goal is to try to get out there and get teachers to understand AI, and they do specific things, like figuring out how to do workshops and such. I think the sooner we get teachers on board with what this is, the better. And I think the other thing, and this is an interesting point, a larger point than we can cover in a lightning round, is that it's going to be interesting to see how you can individualize education per student using AI. And I think that's the transformative power of AI that's coming.

This may be a starting point, getting the teachers up and running on it, but that's coming, especially when we talk about GPT-4o in your ears. If that's accessible to everybody, you might have a certain kind of pathway through education that's different from the student sitting next to you. And that is really exciting, I think.

Andrey

Totally. Yeah. I think that's another kind of overlooked, maybe positive aspect of AI is a lot of people can't afford tutors. Yes. And ChatGPT is a great tutor.

Gavin

My daughter is a junior in high school, and she's in honors physics for her age. She's not like a super physics head, but she likes it. She actually used ChatGPT and took a picture of the homework, and it helped her. And this was, you know, GPT-4. It was really fascinating to watch. It's the use case that is the most interesting to me as a parent, but also, that's just how people are using it now.

It's a real-world use case.

Andrey

The last story for the section: Microsoft Paint is getting an AI-powered image generator that responds to your text prompts and doodles. This tool is called Cocreator, and we've seen similar things before. As you doodle things in Paint, you now have the ability to call on AI to generate an image conditioned on your doodle. So you can enter a text prompt and, let's say, draw a very rough-looking robot with, you know, a rectangular head and some arms.

It will then take that and generate a really impressive-looking robot with the general structure of what you've input. So yeah, Paint.
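The general technique here is image-to-image generation: the doodle serves as the starting image and the text prompt steers the redraw. A minimal sketch with Hugging Face diffusers follows; this is not Microsoft's Cocreator implementation, and the model choice and strength value are illustrative assumptions:

```python
# Minimal sketch of doodle-conditioned generation via image-to-image
# diffusion (the general pattern, not Microsoft's Cocreator).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

doodle = Image.open("doodle.png").convert("RGB").resize((768, 768))

# `strength` sets how far the model may stray from the doodle:
# lower keeps the rough structure, higher redraws more freely.
result = pipe(
    prompt="a friendly robot with a rectangular head, polished digital art",
    image=doodle,
    strength=0.6,
).images[0]
result.save("robot.png")
```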

Gavin

I love MS Paint, man. It's cool to see it come back. I will say, there are so many startups in this space that used this exact feature as a highlight like a month or two ago, and now Microsoft has fully grabbed it. It's a piece of technology that everybody has access to. I feel for the startups that are constantly chasing the edge in the visual technology world, because I don't think a lot of them are going to win, because this is what's going to happen.

These are all tools where, you know, the papers were published, say, six months ago, and then they get productized in some way at a company like Krea. Krea rolled this out, I think, two or three months ago, and it is really cool. But now you've got one of the biggest computer software companies in the world rolling it out. It just feels like a losing battle for those companies trying to chase this again and again.

Andrey

Yeah, the barrier to entry is relatively low for these kinds of applications. And on to the next section: Applications and Business. We start again with OpenAI and some controversy; they haven't had the best of weeks following GPT-4o. The title of the story is: OpenAI founders Sam Altman and Greg Brockman go on the defensive after top safety researchers quit.

So last week we covered how Ilya Sutskever and Jan Leike quit OpenAI. Ilya Sutskever was, of course, a major figure, the leader of AI research efforts there until relatively recently, and Jan Leike was one of the leaders of their superalignment team and a very influential figure; he helped develop a lot of the alignment techniques that basically everyone uses.

And we covered this briefly: with Ilya it was a somewhat amicable split, with a lot of kind of nice words being said, whereas Jan Leike last week just said, I resigned, or something like that. Yeah.

Gavin

He was like, I'm out. I'm not dealing with this anymore.

Andrey

And then, just a bit after, a week ago on May 17th, there was a much longer thread that he posted, in which he said, let me quote here: "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done." And also: "Over the past years, safety culture and processes have taken a backseat to shiny products." So that was one of the causes of concern.

And Greg Brockman and Sam Altman then had to respond and say: we value safety a lot, we've been doing a lot for it, etc., etc. There was a second kind of controversy, with, I'm not sure if it was Leike or someone else, quitting and saying that they didn't want to sign the NDA. According to the clauses that you sign with OpenAI, the NDA is super restrictive; it basically says that you're not allowed to say anything disparaging about OpenAI, ever. So this person said they didn't sign the NDA, and that would mean that they forfeit their equity in OpenAI. They lose a ton of money, with equity being kind of a part ownership you gain as an employee of a startup. So, yeah. Which is...

Gavin

A lot of money at OpenAI, right? If you think about what the valuation is versus a typical startup. Like, say you joined OpenAI in 2019 or 2020; that's a lot of money you're looking at that you don't get a hold of.

Andrey

Right? Yeah. And so, you know, there's no exodus; not that many people are quitting or leaving. But this was enough of a kerfuffle, especially with Jan Leike's comments and some of the stories covering the NDA, that Sam Altman and others had to respond. Sam Altman also said, about the equity, that he was not aware of this clause. And I'm sure he was aware at least of the NDA, right?

Gavin

You would hope so. You would hope so. It can't be Sam just saying, I didn't know, again and again, right? You can't have that be the answer.

Andrey

Yeah. And they did also say, we never clawed back equity, things like that, and that they'll be reviewing that. So this certainly led to some negative press. And, you know, it's hard to say from the outside; it certainly sounds like the emphasis on safety research has been downgraded. I will say, at the same time, from what we've seen of their publications, OpenAI still does seem to be doing a lot of safety research. Yeah.

Gavin

So, obviously, people listening to podcasts have followed how Yann LeCun believes that, like, we're not really in danger from AI. And obviously the whole conversation about where AI goes from here is kind of open; I think we're all trying to figure that out still. If it continues to scale, then obviously we're going to be in some interesting places in the next five years.

I think, in my mind, this whole thing is the Sliding Doors moment of last November, right? This is all the fallout, finally, of what happened when the board went against Sam. And, you know, this is what's so weird and confusing about OpenAI as a company: you had this relatively small company start out doing something incredibly technical and yet also incredibly morality-based, right? And that's a weird place for a tech company to start, because you don't often have a tech company start with the idea of, we're going to push this technology super far forward, but also we have to be really careful about what we're doing along the way. And my gut is telling me that at some point, and I don't know that this happened, but in November, Sam probably had the Darth Vader mask come down a little bit. I'm not saying he's bad.

I'm not saying he's evil, but he probably was like, I'm going into battle. And I think there's a world where there might have just been bad blood amongst these people. And as much as Jan said, you know, that I'm not getting enough compute, it might be that there was still some lingering anger over the fact that Sam wasn't taking this seriously enough. And also, who knows where Jan stands on the, you know, is-AI-going-to-kill-us-all thing?

Maybe John is a. I don't know. John is ahead of super alignment, so you assume he feels pretty strongly about this? If John believes that, that is the most important question of our time and he's not getting enough compute, he should quit. You know what I mean? Like he should go find a way to do somewhere else. Either way, this feels like, to me the kind of reconciliation or the end of that, that thing that happened in November more than it is like something to be, like, super concerned about.

That's my interpretation of it.

Andrey

Yeah, I agree. And, I think you said John; this is Jan Leike who quit. How do you say it?

Gavin

Oh, his name, Jan, how do you say his name? Is it Yan, or is it John? I don't know, but there's Yann LeCun, and then Jan, which I think might be pronounced Yan as well. But I've been saying John; I could be wrong.

Andrey

Okay, that makes sense. Yeah, and I tend to agree. I think if you're really a safety person and are concerned in a strong way, you would be much more concerned about this news.

Gavin

By the way, Jeremy should have been here for this conversation. I'm sure Jeremy would have some strong thoughts on this right now, for sure.

Andrey

Yeah, and it sounds like the team itself has been disbanded, the superalignment team. And they were doing the most aggressive research, the research that is less about, let's say, the present tense of AI and more about the future implications of the danger of AI, up to even extinction level. So yeah, I think if you're someone really concerned about the potential for extinction or WMDs and things like that, this is bad news. I am a little less concerned in that direction.

So for me, it's similar to what you said: I don't think it's a huge blow necessarily. OpenAI still does have quite a big commitment to safety; it's just that, with the shift to being commercial, the priorities have definitely changed to some extent. Next up, it's another story about OpenAI. It feels like...

Gavin

Making the news, man, they're making.

Andrey

The news. They're doing a lot. And this one is about how OpenAI has made a deal with the Wall Street Journal owner, News Corp, that is valued at over $250 million. It sounds like the deal is $250 million over five years, and it would include compensation in the form of cash and credits for use of OpenAI technology. We've seen this before, for instance with the Financial Times, if I remember right. Yeah.

And there are many newspapers where OpenAI is licensing the data for training and also providing these publishers with tools to integrate OpenAI's technology into their tech. This deal is the biggest one we've heard of so far; I mean, this is a big number, $250 million, and News Corp shares went up 7% in trading after that. Exactly. So, a bit less to say on this one, but I think it's a very big deal just to see that OpenAI is going after these deals aggressively. This is, I think, about the sixth or seventh one we know about, and they no doubt have many ongoing negotiations. So it seems like another kind of thing that everyone is doing, not just OpenAI; Google also was set to license Reddit's data. And it's another kind of truth about the present of AI: it's a little past the phase where you could just scrape the internet, as all the companies basically did. Now you have to actually pay people.

And it's another kind of challenge to the commercialization of AI and trying to be profitable. But in some sense it's good news, because being a news publisher is hard, and has been for a while, and this is one of the ways that it's going to be a little bit easier.

Gavin

I think so, and I think the other kind of secret story here: we had Kevin Rose, the guy who founded Digg, on our podcast not that long ago; he's a friend of ours from way back when. He is a startup founder and also a VC, a venture capital investor. And his big thing was, everything that's valuable in the future is going to be data, because that's what we're going to be figuring out: what is valuable here. And, you know, I also very highly recommend, there's a great podcast called Search Engine, by a guy named PJ Vogt, and he had Casey Newton on this week talking about the issue with Google and how AI search is going to kind of take away a lot of the links to the internet. And so I think the really difficult thing for journalists, or people that work in the news industry right now, is: how do you get paid for doing the work you were doing forever if you're not going to get traffic to your website? And this is a big, big problem, because the other side of this is, if you don't get paid, if there is no way to get paid to do that, people stop doing those jobs, you know what I mean? There will be a lot fewer people providing the data. So this is the kind of balance the AI business world has to deal with right now: how do we value data, how do we value this stuff that we need to train these AI systems on, versus how do we value the people that make those things and write those things?

And, you know, you and I are both podcasters. I very much assume in the future, if you're doing a weekly podcast like this, at some point you're going to get a call from an OpenAI or somebody like that saying: hey, we'll pay you, you know, a very small amount, or, depending on how big your audience is, this much money, to train on your podcast; are you okay with that? And I think everybody's going to have to have that conversation and make that choice. Right now it's just the corporations, because they can make these broad deals, but I think there's going to be this kind of opt-in for everyone in the future.

Andrey

Yeah, and to some extent, maybe it'll become something anyone can do over time, right? Where anyone can opt in, not just the big companies; YouTubers can just say, okay, I'll provide my data, and you'll get, not necessarily a huge amount, but some amount. So that'll be interesting, exactly. On to the Lightning Round. First up: CoreWeave raises $7.5 billion in debt for its AI computing push. We do like covering billion-dollar stories, and this one is pretty big. CoreWeave is an AI cloud computing startup, and they got this $7.5 billion from various investors in a private debt financing deal. This follows a $1.1 billion equity funding round from two weeks ago that valued CoreWeave at $19 billion. And they're saying they're going to expand a lot: they operated 14 data centers as of last year, and they're planning to double that to 28 data centers by the end of this year. So it just demonstrates the amount of effort going into compute everywhere.

Gavin

Yeah, I saw this, and I was like, wow, there's just so much VC money being poured into AI right now. It does worry me a little bit that maybe we're in a bubble, but it's hard to know, because it's really difficult to know: is this Web 2.0? Is this the internet? Is this like personal computers? You don't really know; you go back through the kind of transitions. Maybe it's like Web 3.0, which was, you know, whatever; then you had the mobile web; then you had Web 2.0, which is kind of the social web; then you had the internet; and then you go further back and you have actual personal computers. It's hard to know how big this is, but it feels like, based on the amount of money that's being poured into it and how fast things are changing, it's at least internet-level. And that is a really big deal, the fact that we are talking about internet level.

But if you remember, Andrey, the thing that happened after the big internet boom: there was a giant internet bust, and a lot of these companies went belly up, and it was ugly. So I don't know. We'll have to see where this ends up, for sure.

Andrey

And, yeah, you have to ask yourself: it might be kind of a dot-com bubble situation. It could also be more like the smartphone situation, where the last major technological shift, I think, was everyone starting to have a smartphone, and you could argue that in this case, everyone will start using AI. Yes, exactly. And the VC money mostly is going to the top-tier AI developers and, in this case, an infrastructure company, which...

Gavin

Makes sense, by the way, because it's kind of like the internet. You had to like build the cable, you had to build like the pipelines to get everybody broadband. Like that was a really good investment back then. This is similar in some ways as well too.

Andrey

And another story about compute. One of the stories we didn't cover last week is that Google announced Trillium, its sixth generation of tensor processing units. So their TPUs now have a v6 that is set to be 4.7 times faster than its predecessor, and they also have pods that will have up to 256 of them in kind of a cloud offering.

And as I like to say, it's worth remembering that, while many people have said Google may be a little bit behind OpenAI and Microsoft, they do have a lot of advantages in infrastructure, and this is one of them, with the TPUs, which have had over a decade now of development. So, yeah, Google is still pushing on that front pretty rapidly. Next up: Inflection AI has revealed their new team and their new business plan. So, a few weeks or months ago, I forget, Inflection AI, the developer of Pi, the emotionally intelligent chatbot, had this kind of crazy news: most of the company went to work for Microsoft, with Microsoft making a large payment to make the investors whole, which was, I think, kind of an admission that they hadn't figured out how to commercialize and make a profit after getting more than a billion in funding.

Well, now they have announced their new team: a new CEO, CTO, and head of product. And the statement is that they are going to be pivoting somewhat to business, providing their emotionally intelligent kind of functionality to businesses, for them to build personalized chatbots that can remember customer interactions and respond in ways that feel considerate and helpful. So it makes a lot of sense: if you want to make money, you usually want to cater to businesses.

And we'll see, I guess, if Inflection AI does stick around.

Gavin

I think this is interesting, mostly because Reid Hoffman is a big investor in Inflection, and Reid Hoffman is also the founder of LinkedIn. So I bet you're going to see a LinkedIn voice chat in some form powered by Pi. And listen, I love Pi. Pi was our first real love of voice chat. I hope this continues and survives in some form or another.

Andrey

And the last story for the section: data labeling startup Scale AI raises $1 billion as its valuation doubles to $13.8 billion. This company has been around for quite a while; this is their Series F round. I didn't even...

Gavin

Know they got that far. I didn't know they had done a Series F. That's amazing.

Andrey

I know. They've been around for, I forget, close to a decade; they were founded in 2016, so a while ago. And at the time, you know, they were riding the wave of deep learning. AI has been hyped for quite a while; we may forget now, but it's been around for a while in the mainstream. And they specialize in kind of data curation and data gathering, a pretty essential component in the development of machine learning models.

So, yeah. And now, with a billion more, they're going to be a major company.

Gavin

And it's infrastructure, right? I mean, this is the thing that's going to make these models work better, and I think that's going to be a huge part of what everybody needs when they work with these models. So this one makes sense to me, kind of more so than, and we might cover this, but ElevenLabs raised like $150 million or something like that recently at a billion-dollar valuation. And I love ElevenLabs, but it's hard for me to understand exactly what their business model will be, given what OpenAI and other companies can do. Whereas with this company, you can see the deep need for it across the board. It just makes sense to me.

Andrey

Totally. Yeah. On to Projects and Open Source, and we begin with Abacus.AI releasing Smaug-Llama-3-70B-Instruct.

Gavin

These guys need to get better at naming these. I love the open source community, but these names have to change. Like, make it something a little bit less crazy, maybe.

Andrey

Well, they do at least have Smaug in there. Yeah, exactly. That's fine. And they say this sets a new benchmark in open-source conversational AI, one that rivals GPT-4 Turbo. They basically took Llama 3 and trained a bunch more on top of it. They have already released previous Smaug models, and even a technical report where they, let's say, figured out a lot of the details of how to fine-tune these models to get them better.

And so they compare it to Llama 3, and it seems to be quite a bit better, especially on the harder benchmark, Arena-Hard. It's not quite at the level of GPT-4 Turbo, but they say it does rival it if you look at MT-Bench scores, another benchmark. So open source isn't quite at the level of Claude Opus or GPT-4, but we are getting closer. And this is what we've seen over and over: if you have a big model like Llama 3, people will make it better.

And that's just the magic of open source.

Gavin

Which is great; that's what you want with open source. Although there is a rumor right now, I don't know if you've seen this, Andrey, and this is completely rumor, that Meta is considering not open-sourcing their, I think, 400-billion-parameter model of Llama, which is a big deal, right? If they're going to pivot and have their biggest models not be open source, that's a big thing.

And I think everybody's been giving Mark Zuckerberg a lot of credit, rightly, for supporting open source. And to your point, open source development is an amazing thing because it allows people to build off of it. It'll be interesting to see if they continue that with some of their bigger models as well.

Andrey

Definitely. Yeah. I think one of the moats, one of the things that really stands out, is that not many companies can build a GPT-4-level system right now. Anthropic and OpenAI are the two big players out front, and Google also, with Gemini. So if Meta catches up in some sense and builds a GPT-4-level model, it certainly will be, let's say, hard to release it and give up the advantage of being one of the few players that can do this.

Gavin

I have a question for you. That's, what, four companies you just said, right? Let's say Meta, Google, Anthropic, OpenAI. Do you think those four companies can all exist? Like, do we need four of these cutting-edge models? Where do you see that world going? Four companies seems like a lot in that space.

Andrey

Yeah, I think the argument has been, or one of the ways to put it is, this in some sense is going to be a commodity market, where, you know, it's...

Gavin

It'll just get cheaper. Race to the bottom. Yeah. Yeah.

Andrey

Race to the bottom, exactly. And the space is big enough to support different players, so I think it does make sense, in some sense, to have offerings of different models. There will be various partnerships; Amazon is working with Anthropic, right, or was working on that, it sounds like. So it's similar to how there are different cloud providers in the ecosystem: there will be different frontier model providers, but it's not going to be a ton of players.

That makes sense. And next up, speaking of evaluation, and of Smaug on those hard prompts: the Chatbot Arena has introduced a new category of evaluation called Hard Prompts, which evaluates models with user-submitted prompts that are more complex and rigorous, designed to really test the capabilities of the biggest models. On these tests, Claude Opus has a score of 62 and GPT-4 Turbo is at 82, so it's a little less saturated, so to speak.

We've seen this a lot with a ton of benchmarks: pretty rapidly, they just get into the territory of being solved. So evaluation is one of the things that is getting harder and harder with these models, and now they have these kinds of harder tests. The way they built this, briefly, is they actually used Llama 3 to label over 1 million Arena prompts on whether they meet certain criteria, like specificity, domain knowledge, and real-world application. If a prompt scores high enough on these criteria, it ends up being labeled a hard prompt. So no doubt we'll start seeing numbers, with any new model, on how it does on Hard Prompts.

Gavin

You know what I want to see with this? This is a dumb thing, but I think it would be really interesting: I think they should start giving these to humans as well. Meaning that, in addition to the models, you should have a human score, which could be randomized; maybe there's some sort of system where you can send these out to people to answer, almost Mechanical Turk style, and get a bunch of people to answer them.

people to answer them. I would love to see, like, how actual humans perform on these new prompts, because I think that will be an interesting dynamic too, because of course, you can always assume that like this is what they do. But like, I would love to see that as an aspect of this as well.

Andrey

Yeah, that exists for some benchmarks, I believe. Like MATH, I...

Gavin

Think so, yeah. Exactly. Yeah. I mean, they're taking basic ideas, yeah, exactly. But you could see that with the hard prompts too, because I bet a lot of humans would have trouble with some of the hard prompts.

Andrey

Yeah, for sure, I think so, because they require knowing a lot of facts. Yeah, exactly. And the last story for this section: Microsoft brings out a small language model that can look at pictures. So yet again, Microsoft has introduced a variant of their Phi models, their very small large language models, and this time it's Phi-3 Vision, which can analyze images and describe their contents. We've seen this broadly, where you no longer really have just an LLM; in a lot of cases, everything is a multimodal model, and this is another example of that. As with previous Phi models, they focused on making it relatively small, 4.2 billion parameters, but they have managed to squeeze a lot of capability out of that. And they previously launched the Phi-3 family of models just in April.

So they are building a lot of stuff, and no doubt this will be utilized in some of their offerings.

Gavin

Absolutely. I mean, smaller models are great for people to try different stuff. And I think it's a very cool thing to see.

Andrey

On to Research and Advancements, and we begin with quite a big development on the research front coming from Anthropic. We even got some press out of this; the coverage, for instance, from Gizmodo is titled "New Anthropic Research Sheds Light on AI's Black Box."

Gavin

Also, you can talk to the Golden Gate Bridge, Andrey. That's a big part of this as well.

Andrey

Yeah. And, you know, you can definitely make some fun things out of this. So this research, as the title suggests, is primarily on interpretability: being able to understand how large language models work, what goes on inside the crazy large neural net that leads to certain outcomes and not others. And this is building on a couple of years of research from Anthropic; it's really building out techniques they've been exploring for quite a while. To explain it in brief, without going super technical: what they do is look at the activations of a neural net at sort of the middle layers of the model, the raw outputs that are happening in there, and they train another model to try and compress them. So they build a dictionary of features that represent the different combinations of neuron outputs in the neural net.

And then what happens is, you can look at these features and label them as being related to certain things. So, for instance, there is a Golden Gate Bridge feature: the neurons output a certain pattern of activations for parts of the input that relate to the Golden Gate Bridge.

That's when you see that feature having high numbers as opposed to being low. And they discovered a ton of features, I forget the exact number, but many, many features related to all sorts of stuff: related to coding, related to names, and, of interest for safety reasons, there are features related to bias, to sycophancy, to deception.

And one of the big deals about this is, we've seen some research in the past where you can manipulate the outputs and the behavior of neural nets by basically messing with the activations: you can take a certain set of activations and set it to zero, or maximize it, and then the model behaves differently. So in that Golden Gate example, they took this feature and maxed it out.
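To make the dictionary-learning idea concrete, here is a minimal sketch of the two pieces just described: a sparse autoencoder trained to reconstruct middle-layer activations under a sparsity penalty, and clamping one learned feature to steer the model. This illustrates the general technique, not Anthropic's actual code; the dimensions, penalty weight, and feature index are assumptions:

```python
# Minimal sketch of the interpretability technique described above:
# a sparse autoencoder (SAE) over a model's middle-layer activations,
# plus feature clamping for steering. Illustrative only; dimensions,
# the sparsity weight, and the feature index are assumptions.
import torch
import torch.nn as nn

D_MODEL, N_FEATURES = 4096, 65536  # activation width, dictionary size

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(D_MODEL, N_FEATURES)
        self.decoder = nn.Linear(N_FEATURES, D_MODEL)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))  # sparse feature activations
        return self.decoder(feats), feats       # reconstruction, features

sae = SparseAutoencoder()
acts = torch.randn(8, D_MODEL)  # stand-in for captured middle-layer activations

# Training objective: reconstruct the activations while keeping features
# sparse, so each one comes to represent an interpretable pattern.
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 5e-4 * feats.abs().sum(dim=-1).mean()

# Steering, "Golden Gate" style: clamp one labeled feature to a high value,
# decode back to activation space, and substitute into the model's forward pass.
GOLDEN_GATE = 1234  # index of a feature someone has labeled (assumed)
steered = feats.detach().clone()
steered[:, GOLDEN_GATE] = 10.0
steered_acts = sae.decoder(steered)  # feed these back into the LLM's layer
```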

Gavin

Did was the Golden Gate Bridge, it suddenly believed that we were talking to the Golden Gate Bridge was this is such an interesting thing to me. I think the thing whenever I talk to people that I and tell them the fact that most AI researchers didn't really understand how these things were working, they were like, are you crazy? The people that built these things didn't really understand why they were giving the answers back. And that part is a huge deal.

If we can understand better why these things work, that's huge. I also think, and this is pop science, my own take, about how little we understand of how the human brain works in some ways too, right? Like, what's interesting is, I wonder if getting inside the black box of these AIs and how they're putting things together may illuminate some more interesting stuff in the world of brain science, even.

Right? Like, finding different ways that this kind of stuff could lead to even bigger understandings outside of AI is pretty exciting. But mostly, honestly, it's funny with Anthropic, because we always joke about Anthropic being the goody-two-shoes AI company, in that they don't give us the answers we want to be the funniest, or we can't manipulate the AI to do the things we might want to do creatively on the show sometimes.

But I really do appreciate the fact that they're doing this work in conjunction with releasing cutting-edge models, because, look, we needed this. We needed to understand better how these things worked. And now we have a sense of it.

And I think especially on the manipulation side. Because the bigger worry that the superalignment teams have, or that I have, more so than, like, the paperclip problem of AI taking over everything, is that somebody would use an AI to do something that could be harmful to us, driven by a human. So being able to stop the most malicious use cases of AI is a good thing, and I think this will get us closer to that for sure.

Andrey

And I always want to push back a little bit on this notion that we don't understand what's going on.

Gavin

Please do, because that's what I've heard along the way. But yeah, let's hear your take on it.

Andrey

I think this is a very typical thing to say, that we just don't know what's happening. And to some extent it's true. But on the other hand, there have been decades of research, especially in the deep learning era, coming up with interpretability techniques. And this result, of course, is building on years of that prior research.

Gavin

So this is not out of the blue. They're not saying, like, we just did it. There are multiple people who have added to the research that brought this forward, essentially.

Andrey

Right. Yeah. So as a simple summary: it's true we don't know exactly what's going on, similar to how we don't really know what's going on in our brains. But we do have some notion of these things.

Gavin

Yes, yes, and hopefully more as we go forward, right? Like, that's the coolest thing. And in both cases, both AI and brains, I would hope that over the next five to, say, fifteen years, we're going to understand really well how both of those things work, based on how science is advancing.

Andrey

Right. And another thing I'll say is, I think this is a pretty big deal. Like, this is a very impressive result in the world of interpretability and the ability to steer what these models do. I would recommend, if you're interested, Anthropic has a blog post, Mapping the Mind of a Large Language Model, that is pretty approachable. You don't have to.

Gavin

It's very approachable. I'm somebody who is not a machine learning expert, and I read it and totally grokked and understood it, which was great.

Andrey

Yeah. And it's pretty fascinating to see that in these large language models you wind up with a feature for the Golden Gate Bridge. And just to be clear, it's not just the Golden Gate Bridge that it activates for most; it also, to some partial extent, represents things that are related to the Golden Gate Bridge: San Francisco, the 49ers team, the San Francisco Bay Area, Stanford. These are some of the things that are nearby.

And there are all sorts of features. There is an Amelia Earhart feature; there are features for immunology, for gender bias, for code errors, tons of examples of really interesting findings as to what is represented in the neural net and what you can find. And you can really do a lot with this. I think functionally you can start using neural nets in a slightly different way.

Gavin

Personalize them, right? Like your Amelia Earhart example. Sure, you can make an Amelia Earhart character that you want to interact with. But if you can push on the model a little bit and say, no, you are Amelia Earhart, in a way that actually sticks, that's a much more powerful version than telling the generalized model, okay, pretend to be Amelia Earhart. That part's a really big deal as well, I think, for sure.
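As a rough illustration of the steering trick being described, clamping one learned feature and decoding back into activation space, here is a sketch building on the toy autoencoder above. The feature index and clamp value are hypothetical, and in the real experiments the modified activations are patched back into the model's forward pass.

```python
def steer(acts, sae, feature_idx, clamp_value=10.0):
    """Max out one feature (e.g. a 'Golden Gate' feature) and decode back."""
    _, features = sae(acts)
    features[:, feature_idx] = clamp_value  # force this one feature to fire hard
    return sae.decoder(features)            # steered activations for the model
```

Run every forward pass through something like this and, per the result described above, the model starts relating everything to the clamped concept.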

Andrey

Next up, a paper from Meta: Chameleon: Mixed-Modal Early-Fusion Foundation Models. So in some sense it's similar to GPT-4o, in that with GPT-4o they said it was a natively multimodal system. And this is related: it's an early-fusion, token-based mixed-modal model capable of understanding and generating images and text in a sequence. So it differs from some of the other approaches to multimodal training.

One approach you can take is to just take an image, create its embedding, input that separately from the inputs to the large language model, and then merge that knowledge later on; you treat them separately. Here, instead, we have the notion of early fusion, where you basically treat both images and text as one long stream. The model gets everything, and it can also output everything, as a long sequence of text, then image, then text, and so on.

So I think it's very related to things like GPT-4o. And there are a bunch of examples in the paper of, you know, asking for some quirky-looking birds, and then the model can reply with text and give you a picture; it can apparently generate the image of a bird right in the middle of its response. So, yeah, I think multimodality, and native multimodality, where the model is just trained on inputs that are sound, that are text, that are images, is the new frontier of development.
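A schematic of the early-fusion idea just described: both modalities become discrete tokens in a single autoregressive stream. The tokenizer callables and special token IDs below are placeholders, not Chameleon's actual vocabulary.

```python
BOI, EOI = 50001, 50002  # hypothetical begin/end-of-image marker tokens

def to_token_stream(segments, text_tokenize, image_tokenize):
    """Flatten interleaved (kind, content) segments into one ID sequence."""
    ids = []
    for kind, content in segments:
        if kind == "text":
            ids.extend(text_tokenize(content))   # ordinary text tokens
        else:
            # Images are quantized into discrete codes (e.g. by a VQ model)
            # and wrapped in markers, then treated like any other tokens.
            ids.append(BOI)
            ids.extend(image_tokenize(content))
            ids.append(EOI)
    return ids  # one transformer models (and can emit) this whole stream
```

Late fusion would instead embed the image separately and merge it deeper in the network; early fusion commits to one shared token space from the start.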

Gavin

I was gonna say, one of the things that Kevin Scott, the CTO of Microsoft, talked about at their Build conference this week was the idea that he doesn't see the scaling laws of AI slowing down, despite all the AI winter people out there saying that it might be slowing down. To me, as a non-scientist, this is where you go next, right? Because we've trained on a lot of text, we've trained on a lot of things.

But if you're suddenly training on sound, on videos of the world, all these things... and I grant this is a guess, because I don't have any basis for it, but I still think the edge of training is going to be putting video capture devices on people as they walk around, because then you are literally training on the real world in full 3D: sound, audio, smells, all these things. Eventually that is where we're going.

And I think the idea that multimodal is the base layer just makes sense, right? Because we are not just thinking or typing creatures. We are talking creatures, we are visual creatures, we are listening creatures, all that stuff. This is where I think we're going to get way, way closer to what AGI is. And I think it's pretty exciting to see not just OpenAI pursuing it, but Meta, who's got a crapload of resources at their disposal.

Andrey

Yeah. And the paper does go into it, and these numbers always kind of blow me away. They trained a 7 billion and a 34 billion parameter variant, and they say for the 34 billion variant they used 3,072 concurrent GPUs. That amounted to more than 400,000 GPU hours.
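As a back-of-envelope check on those numbers: 400,000 GPU hours spread across 3,072 concurrent GPUs works out to roughly five and a half days of continuous training.

```python
gpus = 3072
gpu_hours = 400_000  # "more than 400,000", per the discussion above
days = gpu_hours / gpus / 24
print(f"about {days:.1f} days of wall-clock training")  # about 5.4 days
```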

Gavin

That's incredible. You need to.

Andrey

Yeah, to even, in this case, do research. Like, researchers are spending probably hundreds of thousands of dollars on these papers.

Gavin

Yeah. I mean, honestly, I think there's a little bit of a competition to publish, right? Because, like, if you're at Meta and you publish something, that probably makes the people at Google go, oh damn, we were working on that. Academia-wise, I think it's very competitive in that sense too. Although, I don't know, I shouldn't say; I'm not at these research departments. But I assume there is a race to get this stuff out as quickly as you can.

Andrey

Oh, definitely. Yeah. And onto the lightning round. First up, CAT3D: Create Anything in 3D with Multi-View Diffusion Models. So the idea is you can give this approach one image, or maybe just a couple of images, and they use a diffusion model to generate other viewpoints of the thing that you input. And from those many images you can then use known techniques to create a 3D reconstruction.

So in the past, it used to be you needed a ton of images, like 60 images of a given object, for instance, to create a 3D reconstruction. 3D has been seeing very rapid advancements in AI, and this is one example of that, where just with one image you can now get pretty high-quality reconstructions. And the next paper is also on 3D. It is Coin3D: Controllable and Interactive 3D Asset Generation with Proxy-Guided Conditioning.

And what that fancy wording means is, instead of inputting images, let's say you want text-to-3D: what you can do is give it essentially a sketch in 3D. Let's say you add a rough 3D shape of a humanoid, and then you say "a teddy bear," "a panda," "a robot." For each of those, you would get the rough shape of your 3D sketch, but with that identity. So it's very similar to what you've seen in the 2D domain, where you give it a sketch and you can prompt it to do

different things with that sketch. Now you can do that in 3D, and yeah, it's very fun.
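For readers who want the recipe spelled out, here is a hypothetical sketch of the two-stage image-to-3D pipeline described above; both helper functions are placeholders, not a real API.

```python
def multiview_diffusion(images, target_poses):
    """Placeholder: a diffusion model that hallucinates consistent novel views."""
    return [f"synthetic view from pose {p}" for p in target_poses]

def reconstruct_3d(views):
    """Placeholder: standard many-view reconstruction (e.g. NeRF-style)."""
    return {"asset": "3d model", "views_used": len(views)}

def image_to_3d(input_images, target_poses):
    # Step 1: diffusion fills in the viewpoints you never photographed.
    novel_views = multiview_diffusion(input_images, target_poses)
    # Step 2: with dozens of real + synthetic views, classic reconstruction works.
    return reconstruct_3d(list(input_images) + novel_views)
```

The trick is that the diffusion model supplies the 60-odd views that older pipelines required you to actually capture.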

Gavin

That feels like we're getting to the sci-fi world that I wanted when I was a kid, right? As a sci-fi nerd, like, imagine. And again, this all depends on, at some point, there being some face computer that will be usable by everybody, that they'll like, and that won't feel like you're sitting in the ski goggles that put you in a dungeon.

But, like, imagine a world where, wherever you are, you can suddenly draw a circle, a square, and two little sticks, and develop a little robot character for you to be around. That stuff is where you're like, I get this. It's super fun. It's also the MS Paint thing; MS Paint is kind of a toy.

They announced this idea as a toy, but imagine it in the real world, being able to do that in 3D. That feels like a very fun place to live.

Andrey

Yeah. And, you know, there are a lot of concerns about the impact this will have on artists, rightfully so. With this kind of thing, in the future it'll be much easier for people to create 3D models. That was already the case to some extent. And there's a big pro to that: if you're a creative and, let's say, you want to develop a video game as a solo indie dev, it's much easier now.

Gavin

But then they don't need 30 people on the animation side. This is the give and take of all this stuff.

Andrey

Next up, a paper from Amazon: SpeechVerse, a large-scale generalizable audio language model. So this is about an approach where they do multitask training, with various tweaks, to create a speech-and-text foundation model that can do all sorts of things: speech transcription, identifying intent, counting the number of speakers,

lots of that sort of thing. And, you know, one of the challenges we've typically seen with audio models is that it's harder to get a ton of data compared to large language models. So this is a big effort on Amazon's front. And they do show that, compared to task-specific models, as is usually the case, this big foundation model that can do all sorts of stuff is better. And last up in the section is another paper from Amazon, released on the same day as that previous model.

It is SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models. And it's kind of interesting. We've covered a lot how you have jailbreak prompts, where you can tweak the text you input to your model and get it to do things that it's not supposed to do, like tell you how to make drugs. Well, with these text-audio models, it turns out you can do that with audio as well.

They showcase how you can perturb the audio a little bit to get the model to do whatever you want it to do. And they do present a way to guard against that with some pre-processing. So yeah, it's always going to be a race between people who want to get these models to do nefarious things and these kinds of defense mechanisms.
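A sketch of the general idea on both sides of that race: an illustrative FGSM-style perturbation and a noise-based pre-processing defense, not SpeechGuard's exact method.

```python
import numpy as np

def adversarial_audio(waveform, loss_gradient, epsilon=1e-3):
    """Nudge the waveform in the direction that increases the chance of a
    harmful completion. The gradient would come from backpropagating through
    the speech-language model; a small epsilon keeps the change near-inaudible."""
    return waveform + epsilon * np.sign(loss_gradient)

def preprocessing_defense(waveform, noise_std=1e-3):
    """Simple defense: small random noise (or resampling, compression)
    disrupts finely tuned perturbations while leaving speech intelligible."""
    return waveform + np.random.normal(0.0, noise_std, size=waveform.shape)
```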

Gavin

Well, I always wonder about this, because as somebody who's been on the internet, man, from the very beginning... well, I'm not that old, but from like the 80s... there's always been this idea that when you're on the internet, like, oh look, you can find this thing and you can do this thing. But then eventually you're like, yeah, it's there. I don't need it. I don't care about it.

So I wonder if at some point with these LLMs, in voice or text, it'll just be like, yeah, sure, you can find out how to make methamphetamine. Are you actually going to do it? Like, I don't want to be in the business, and I don't think anybody should be in the business, of censoring specific stuff. I don't think that's part of why we do this stuff. The open web is a really important thing to me personally. It's different, though, and this is where the voice thing comes in.

Weird, right? Like, if your kid, maybe not my family, but say my kid, asked that question just because they're being sneaky, like, how do I make methamphetamine, and then Amazon Alexa said it out loud, like, well, here's how you do it: you can bet, in middle America, there are going to be a lot of parents that will be upset if they hear that, whether or not the kid is ever going to act on it. So this stuff is important.

I almost think it matters from a business standpoint more than from a "we need to break this so that it doesn't do these things" standpoint. Now, granted, the other side of this is that jailbreaking is something that, in the AI doomer world, eventually could be used to let the AI go rogue and then do the paperclip problem on all of us. So that side of it I'm happy to be aware of.

Andrey

As well, too. Yeah. And I think even right now you can protect against creating not-safe-for-work images, and maybe that's not the worst thing. But then again, you might make deepfakes of people, and that's really not okay.

Gavin

You're right. I'm on your side now. I've come around.

Andrey

Yeah. There are examples for sure where you'd want to do a jailbreak. Like, I know some cases are more benign, but there are also pretty bad ones. Yeah. Onto the next section, policy and safety. And we start once again with a story about the EU AI Act, which we've covered probably a dozen times at least over the years as it kind of moved towards final ratification. Well, this story is that it got the final EU

green light. It sounds like this is basically finally going towards being adopted. As we've covered many times before, it does a lot with regulation of AI. In particular, it creates risk categories that impose requirements on developers and companies releasing AI to ensure safety, and it restricts practices like predictive policing, emotion recognition, and social scoring. So finally, there is a final green light.

We kind of knew this was coming, but it's cool to hear about this big milestone. And now the slow rollout of it is going to begin, so it's not going to be impacting anything for a while. But it's a big deal in the history of AI regulation.

Gavin

Yeah. And, you know, it is probably going to be messy, right? Like, I feel like with all these things there's a messiness to them. And I think this is also the trade-off: okay, we want to regulate because we don't want these bad things to happen, but then what doesn't get developed as a result? Like, that's the story of technology throughout the universe. I often wonder sometimes, did the social media revolution,

or devolution, as some people would call it, set the table for overregulation now? That's an interesting thing, like, when you think, are we being too strong? And the EU obviously comes at this in a much stronger way, I think, than the States does. But it is interesting to think about: would we have gotten regulation this strong around something like this if we hadn't had the problems with Facebook or other social media companies?

Andrey

Yeah, and I think the EU has taken a very strong stance for a while now, with antitrust, data protection laws, etc. So I would say that has definitely shaped their general philosophy towards regulation. Yeah, exactly. And the next story is also about regulation, this time in the US: the Colorado governor has signed a sweeping AI regulation bill.

So this is apparently the first-in-the-nation law, I think, from a big state regulating AI. It would place requirements on high-risk AI systems, similar to what we have in the EU act, requiring, for instance, the use of reasonable care to avoid algorithmic discrimination. So it's very much focused on the issue of discrimination. And it's one of the early kinds of outputs from states.

We've seen some before; I think we did cover some laws that exist for facial recognition and identifying people. And this one is coming more after ChatGPT and its whole big impact. So this is one of the early bills, but there are going to be a lot of laws being passed all around the country.

Gavin

So, you know, this is coming from somebody who... you were not born in America, right, Andrey? Is that right? Yeah. So, in America... the EU is a bunch of actual countries that have banded together to act as one organization. In America, we have 50 states, and all of them are passing their own laws. And the weirdest thing in America is that you have state law and federal law. I am really worried that this is just going to get messy as hell.

There was another story this week: the California Senate has passed a very comprehensive AI regulation bill that is conceivably holding these startups, specifically, you know, the larger startups, to a higher standard than, say, they were held to during the internet era.

And it's all good, but I wish we had some sort of coalescing around the ideas that everybody agrees on. Because in America especially, it's going to get very messy. You know, there was this bill passed in Tennessee, the ELVIS Act, which was mostly about deepfakes, and now you've got this bill in Colorado, which is about something different. There are so many of these issues to deal with; I just wish there was a more comprehensive strategy.

And from what I've heard at the federal level, and I'm not sitting in these meetings with Chuck Schumer or anybody, there isn't enough pressure to get around this whole thing. And also, there was a big story a couple of days ago, and I believe we're covering this later, that the lobbying efforts of the AI companies have skyrocketed. Right? So you had, at the federal level, this sense of, okay, we're going to figure out some sort of regulation.

Now, maybe they're a little bit more like, let's let this play out a little bit. This doesn't make me feel good. I don't think government functions that well in the first place. But now we've got non-technical people in different states making laws that are going to have to intertwine and interact in different ways. And this almost worries me more than

the alignment issue, right? Like, this feels like the fundamental problem we're going to be dealing with is how do non-technical people interpret the dangers and situational problems with AI? That feels like a big deal.

Andrey

For sure. And, yeah, we're not going to be going into it, but I did see that article about lobbying. It basically said that major companies, Meta, Nvidia, and so on, have started spending a lot on lobbyists, with a general direction of downplaying the danger of AI. It used to be that more lobbying happened from effective

altruists and so on. Now these companies are pushing in a much different direction, where AI is not necessarily super dangerous, certainly not extinction-level dangerous, but we do want to limit the ability of China to develop AI, things like that. And so, yeah, it will have a big impact on things like regulating copyright and the use of data. And we'll see what comes out of it.

Gavin

Isn't it funny how quickly narratives can change? And, you know, for all we know, it could be the fact that crypto crashed and Sam... you know, Sam... what is Sam's last name? I can't remember it.

Andrey

SBF.

Gavin

Bankman-Fried. Yeah, yeah. That's right. For all we know, it could be the fact that crypto crashed and SBF got out of the effective altruist lobbying game. But that's how things happen, you know what I mean? That's the weirdest thing to me, still, as an American who's lived here my whole life. It is a weird system. America is a weird system. And it will continue to be.

Andrey

Yeah. And, lightning round. First up, going to the federal level of the US: a bipartisan group of senators has proposed a $32 billion plan for annual spending, while also deferring regulation. The plan is titled Driving U.S. Innovation in AI, and they are calling for this $32 billion in annual funding by 2026 for both development of AI in government and the private sector. There are recommendations for the creation of federal data privacy laws and various things like that.

But they are not, as you said, really pushing for regulation very quickly on this front.

Gavin

Yeah, this was actually what I was just referring to. I heard a podcast earlier, I can't remember which it was, but they were talking specifically about this idea that AI is important to everybody, and everybody understands now how big a deal it is, but nobody is really pushing in one specific direction at the federal level. And it's a little worrisome. But also, maybe that's just where we're going to be for a second.

Andrey

Next up, Google DeepMind launches a new framework to assess the dangers of AI models. This is the Frontier Safety Framework by DeepMind, and the intent is to be a little bit less ad hoc and more principled in how they do evaluation of safety. One of the things it says is they want to reevaluate DeepMind's models every time the compute used to train a model increases sixfold, or whenever a model is fine-tuned for three months. That's really a sign of where we are at.

And they do say they want to collaborate with other companies, academia, and lawmakers to improve the framework, and they have plans to start implementing its auditing tools by 2025. So, yeah, it's a much more explicit detailing of how they plan, in particular for frontier models, every large next step in AI models, to have safety and innovation in mind.
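The re-evaluation trigger as summarized boils down to a simple rule; here is a toy version, with the thresholds taken from the framework as described above.

```python
def needs_reevaluation(effective_compute, compute_at_last_eval, months_of_finetuning):
    """Re-run safety evals on a 6x growth in training compute, or after
    three months of fine-tuning, per the policy as summarized here."""
    return (effective_compute >= 6 * compute_at_last_eval
            or months_of_finetuning >= 3)
```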

Gavin

Yeah. I mean, to me, if you want future job security in general, this is not a bad position to go into. You have to think, if you're listening to this show and it's interesting to you: AI safety is going to grow very fast as an opportunity for people to work in. Because, to your point, these models are going to come much quicker, and you're going to have a lot of people focused in on this. And even if there are not as many of them at OpenAI anymore, everywhere else it's

going to grow. And I think it's a big deal, and kudos to DeepMind for doing this.

Andrey

And speaking of safety frameworks, the next story is that tech giants have pledged to AI safety commitments, including a kill switch if they cannot mitigate risks. Some of these giants are Microsoft, Amazon, and OpenAI. This happened at the Seoul AI Safety Summit; as we've covered before, this is kind of a follow-up to the first AI safety summit that happened in the UK. So they have committed.

This is a voluntary commitment to publish safety frameworks that detail how they will handle challenges and have red lines that define intolerable risks associated with frontier models. And, yeah, as per the title, in extreme circumstances they plan to have a kill switch to halt the development of AI models if they cannot guarantee risk mitigation. As before, you know, these are words; we'll see if it converts to actions, but it still doesn't hurt.

Gavin

You know what shocks me about this story? I remember when it came up: the fact that the UK Prime Minister said that this is a world first in terms of global AI companies agreeing on safety commitments. Like, can you imagine, even three years ago, something at this scale, talking about the world of AI? It shows how fast we've progressed. For a little while, and maybe three years is too soon, but for a little while, a lot of people were like, oh, this is just not going to be the thing we thought it was. Too suddenly, now we have world leaders saying that this is one of the most important things that they are dealing with. It just makes me kind of step back and be like, oh my God, we really have gotten into a completely different sort of universe than we were in back then. It just feels weird.

Andrey

It feels very weird. And, yeah, I've been reflecting on this. You know, I kind of miss the days of 2020.

Gavin

Hearing as.

Andrey

Yeah, 2021, when it was like, AI development has been crazy fast for a long time now, for at least a decade. And, you know, it felt like we were making rapid progress years ago already. But in comparison to now... now it's every month that there's big news and big progress. We're seeing this with 3D, with audio. And it's a little tiring, honestly.

Gavin

It is. And also, you know, it's change, right? Humans are really not made for change happening this fast. And I think that's why, when people ask me, how do I prepare for the future if my job goes away, I tell them, first of all, you never know if your job is going away. But second of all, just try stuff. Don't get overwhelmed, because it's easy to get overwhelmed, but just continue to stay aware of things.

Listen to podcasts like this one, or like ours, and it does help. But things are going to change; we're just changing faster and faster. It's just the way time works. But it is crazy. This part really made me stand up and be like, wow, this is crazy.

Andrey

Yeah. I mean, you've got to at least start adapting, I think. Yes, it's going to get more and more crazy. Yeah, exactly. And onto our last section, synthetic media and art. First up, we have a story that Sony Music has warned tech companies over unauthorized use of its content to train AI. Apparently, they issued warnings to over 700 tech companies and music streaming services, cautioning them against using its music to train AI without explicit permission.

Sony Music does have a lot of notable figures, like Harry Styles, Beyonce, Adele, and Celine Dion. And I will say, it sounds like they believe some of these companies have already made unauthorized AI use of audio, artwork, and lyrics. And looking at some of these big text-to-song things like Suno and Udio, I would be surprised if they haven't used copyrighted data in training their models.

Gavin

I've spent a lot of time with these, I really have, because I'm a big music person. You know, we had the CEO of Suno on our show a while ago, maybe six months ago, actually, and they're really

magical. But there were stories that came out a couple weeks ago, when the mainstream was finally getting a hold of these tools. And I had this experience myself where, I swear to God, one of the times that I generated a song, it was Johnny Cash's voice. And I mean, more so than people think Scarlett Johansson is Sky, this was Johnny Cash singing a song. And I want to believe that these companies

are operating the right way, but I think it's pretty obvious they probably wouldn't have gotten this good without training on some real music. And kind of hand in hand with this: Suno just raised a big round, right? I think a $125 million round, at like a $1 billion valuation as well.

I am so curious to know how the legality of all this plays out, because if you're familiar with the history of the internet, the music companies are some of the most litigious companies that have ever existed in this space. They are going to come after it hard.

Now, the question will be: do they want to find a way to grow their pie, which might mean making a deal with these AI tools, or perhaps buying one of them, and saying, we are going to find a way to let our artists get paid for this, and we are going to get paid for this, and then we're going to allow this to exist? Or are they just going to go after them and, like, shut them down?

I think it's probably more likely the former, based on how Napster went and kind of screwed them over, the Napster problem. But I think this is a huge problem for those music-generating AIs. And I think they're going to have to come to the table and pony up a little bit. That's my

Andrey

Feeling, that's right. Yeah. This letter apparently asks recipients to provide details about which of the group's songs were used to train their AI systems and how they were accessed, and they were given a deadline to respond. And it sounds like there was a warning that if you don't respond, Sony is planning to enforce its copyright to the fullest extent

of the law. And I do agree. You know, we've seen this with text and image models, that companies did use copyrighted data and kind of got away with it so far. Yeah, there have been various lawsuits from authors against OpenAI and so on. But in music, it's a whole different ballgame to some extent.

Gavin

I think the big question will be, and maybe this will force the hand on the larger legal question about all this training data: do you see the AI system as a person, quote unquote, who is taking this data in and creating something unique? Because if that's the case, then you get a defense for a lot of this stuff. Now, that Johnny Cash voice I heard was very close, probably too close to be something that really went through a filter.

But I think maybe this will force that case forward, the case of: can an AI essentially create something original based on interpreting other people's art? So we get an answer on that sooner rather than later. And if the major AI companies lose that case, they are in deep doo-doo, right? Because if that goes into law and then, say, the Supreme Court upholds it, all of these companies are kind of screwed a little bit.

And I think that could set the AI universe back a lot. Now, will politicians let that happen? I don't know. There are so many complicated questions going on here, but music companies you don't want to mess with. And I think in this instance, as much fun as those music generation tools are, they're going to have some problems.

Andrey

Yeah, I think it's very true that we are still at the phase of trying to figure out the broad principles we agree upon. OpenAI has had this argument of fair use for data, and that applies to all of AI to some extent. But beyond the fair use question, there are also questions of generation: like, if you're generating deepfakes in people's likeness, is that allowed? Things like that. It's all very much still up in the air. But probably not parody.

Gavin

Right. My background is, I come from television and late-night comedy, but also the tech side of doing those things. Parody law is very specific, right? Like, you can parody if you're making a comment on something, and you have a certain amount of rights, but you can't just straight up use somebody's voice, like with the Scarlett Johansson thing. In fact, there is a legal precedent.

Bette Midler, the famous singer, sued Ford in the 80s. She was asked to do a commercial for them, she said no, and they used a sound-alike voice. And she won. That's why, whenever you hear a commercial with a celebrity impersonator now, they have to say "celebrity impersonated." So this is all of that same stuff kind of balled together in one.

Andrey

And speaking of celebrities and the land of L.A., the next story is about the Hollywood agency CAA aiming to help stars manage their own AI likenesses. This is the Creative Artists Agency, a leading entertainment and sports talent agency, and they developed the CAA Vault, a virtual media storage system for celebrities to store their digital assets, including AI clones. This was developed in partnership with the AI tech company Veritone.

And, yeah, it will securely store digital doubles and make it so they can only be accessed by authorized users, enabling these celebrities to control and monetize these doubles. So, yeah, I guess people are trying to monetize, and create infrastructure for, this likely future where people will have AI doubles that they can license out for commercials and stuff.

Gavin

Yeah. So, I used to be a CAA client; that's the world I come from. And now I'm a UTA client, which is United Talent Agency, a similar one. And I have a couple of things to say about this, because I have some insight. One: all of these agencies, the world of Hollywood, are struggling right now. And I think Hollywood in general is feeling kind of under threat, not only by AI but by a number of other issues that you can get into,

whether that's the streaming wars, people not spending as much money, churn, all sorts of other things. I don't know if an agency is the right entity to do this, but I understand why, because what they're trying to do is pivot out into another business, another way to make money, another way to help their clients. It is a client-based business.

So if I am an actor, say I'm the Rock, I have one of these agencies being my representation to the world at large and helping me do the deals that I do in the world. I think this is a big deal in the future. I actually just talked to a friend of mine, a guy I recently met who's running a company in stealth, and their whole goal is to create a simple licensing agreement for people to do this. Because the theory here is, say the Rock makes $5

million for a Doritos ad right now. Well, that kind of deal, at the celebrity level the Rock is at, is probably not going to exist for much longer, because fame is distributing across multiple groups of people. Many, many more people are getting semi-famous; there are very few people who are, like, uber-famous anymore. And that $5 million deal is probably going to be, like, $1 million or $500,000, but there might be 3 to 5 of them.

So if there are 3 to 5 of them, and say one of them is in, I don't know, India, another one is in Madagascar, and one's in America, well, is there a world where the Rock could just say to the Madagascar one, yeah, you can use my likeness and you can use my voice, I have approval over it, but I don't have to do anything?

I just get a licensing fee out of it. And that's a great deal for the Rock, because he can spread himself out more and maybe make as much as he was making originally by doing stuff in different places. So this is a big deal when it comes to the future of celebrity and licensing and all that stuff. I'm not convinced that an agency is the right place to own it, but I understand why they're doing it, if that makes sense.

Andrey

Yeah. And it seems like, if nothing else, for their clients it's probably a nice add-on.

Gavin

Yeah. It's valuable. Right?

Andrey

Yeah. It's a competitive advantage. Exactly. Yeah. I think it's pretty much inevitable; it seems like we've already seen examples of this being done in practice. And in the future it's going to be like, oh wow, they actually did this in person.

Gavin

They actually really did.

Andrey

It's going to be a big deal. Yeah, exactly. And one last story for this section and for this episode, coming from the New York Times: What Do You Do When A.I. Takes Your Voice? It's about two voice actors who listened to a podcast about the rise of AI and the threat it posed to actors and voice-over professionals. And as they listened to this episode, they heard their own voices being AI-generated in, like, a little interview segment. And they hadn't actually licensed their voices.

In fact, apparently they listened to it, and they say it sounded just like Mr. Lehrman, who is one of them. And apparently this was generated by a company called Lovo, which they did provide some clips to, without signing an agreement for voice generation. So now they're suing that company, and they are actually doing this as a class action lawsuit, so they're looking for other voice actors to also join in on that.

So, yeah, another example of seemingly unauthorized use of likeness, and some legal activity on this front.

Gavin

I mean, again, to me, this is like the early days of the internet, where you had people taking content from people; you had people uploading to YouTube the famous Lonely Island "Lazy Sunday" clip. It is the piracy stage of AI. And I can guarantee you, this is all going to get worked out. But you're going to have companies, and I don't know a lot about Lovo, but you're going to have companies that probably made a mistake early on.

And also, people don't remember this, but YouTube got saved by Google a little bit, right? Like, YouTube really did get saved when Google came in and bought them. It was great for the YouTube founders; they made a crapload of money. But also, they could have really been mired in deep, deep legal issues for a very long time. And a company the size of Google coming in, helping them navigate through that, and paying for a lot of the legal costs, was important.

So I think we're just in that stage of AI content. We're in the "move fast and break things... oh, I did something that might not have been okay" phase. Well, sometimes you're going to get caught, and maybe that company won't exist. Or maybe the technology is so good, like something like Suno or Udio, that you'll have somebody come in and be their backer in some form or another.

Andrey

Yeah, exactly. That's totally true; that's what happened with YouTube. One of the reasons they sold to Google is that this copyright challenge existed. And for Suno and Udio in particular, but later on also video generation, you're going to need these safeguards for copyright, and they are going to be added. You've seen this to some extent already with things like OpenAI, Character.AI, and Anthropic getting more restrictive, or more careful, with generations.

And many people on Reddit and elsewhere have complained that these models are getting useless because they won't allow them to do stuff they used to do. But these are the sorts of steps that companies have taken, or will have to take, to avoid some of the fallout of their technology doing things that people don't want it to do.

Gavin

That's right. Or doing things that the businesses that own the IP don't want other technologies to do. That's an important thing too.

Andrey

Yeah. And with that, we are done with this episode of Last Week in AI. Thank you, Gavin, for co-hosting. It was a lot of fun.

Gavin

Thank you so much for having me. Yeah. If you're interested in our podcast, you've probably heard us on here before, but it's called AI for Humans. You can find it in both the Apple and Spotify podcast players, but we also do a video podcast on YouTube, which has been kind of popping off. Lately we've been doing a couple of videos a week in addition to the main show that comes out on Thursday mornings. But yeah, please go listen to it.

And as always, thanks to Andrey for having me here. I love this show because it's such a much deeper dive. We do a news chunk in our show too, but we only go through, like, 3 to 5 stories. And I love going through and getting deeper on all these specific ones.

Andrey

Yeah, it's a lot of fun. And, yeah, we'll include a link in the description of the episode; do check out AI for Humans. And do keep listening to this one, hopefully. And as I always say, we appreciate it if you share it with friends and give us a review, things like that. But we definitely, more than anything, enjoy knowing that people listen and benefit. So keep tuning in and enjoy the.

Gavin

Leave a review for these guys. Leave a review for these guys. I'm going to go... I'm not even sure if I've left you a review, and I'm going to leave you one right now in case I haven't.

Andrey

Okay, great. Yeah. And enjoy the AI outro song that is coming after this statement. All right. Wrapped up a great show. Inside it to us. Stories. Everything. A shout out to all downloads. Listen. Try? Not sure. Yeah. Me and the crew. We did you? Good boy. Till next time, friends.

Unidentified

Keep your AI spirits high. Thanks for your.
