OpenAI's new 4o image gen is the most fun we've had with AI in ages. This thing is so powerful and so easy to use that people are literally, as we're discussing this, discovering... Kevin, Kevin, look, I've Ghiblified us. That's great. That's great, Gavin. Good job, buddy. We'll tell you why it's different, how it works, and we'll show you how you can get... Wait, Kevin, look, now I've Minecrafted us. I've Minecrafted us.
I'm so blocky. I'm so blocky. That's very cool, Gavin. Google released a brand new Gemini update that is literally at the top of all the charts, so if you want to use a state-of-the-art model... Lego Shrek! Lego Shrek! Okay. Great fun. Plus, even more exciting AI art tools were released, Ideogram 3.0 and Reve. We're going to get you up to speed. Kevin, Kevin, and OpenAI's new text-to-speech model is very good. We're going to show that off.
DeepSeek has a new model that's surprising a lot of people. And there's a very cool robot-dog-for-blind-people program that we just saw that we're going to dive into. That is, that's actually true, Gavin. Thank you. I thought you were going to do another one of your, like, oh, this is AI for Big Booty Bear. It's not AI, it's AI for Humans. This is AI for Humans.
All right, Kev, the hugest news in a while, and this is not hyperbole, it has blown up around the internet this week: OpenAI has released their new image model, something we've been waiting for, for I think almost a year at this point. They were teasing this a while ago.
Let's dive into what this is first and foremost, and then we will talk about the incredible things that we've made with it and that we've seen people make with it. First and foremost, what are we looking at here? What is this thing? It's GPT-4o ImageGen. This is a natural language image model. It's not DALL-E, the previous image generation tool that OpenAI released. This is a brand new technology. To get weedsy with it, it is an autoregressive
model, Gavin. It is not a diffusion model, and that's a big deal. It means it draws each pixel, or if you will each patch, in context of what came before it. Diffusion models move in parallel, which is why, if you ever watch an AI image get generated, you see it kind of all splotching in at once throughout the frame. If you watch this tool generate an image, it's like
downloading an image on a dial-up modem: it comes in line by line, almost like an ASCII image. It's drawing one pixel and then using that pixel as a reference for the next pixel, and then using both of those pixels as a reference. So it is...
It is very slow, but man, are the results worth the wait. It is a new paradigm in image generation. And again, it is not hyperbole, as you said, to say that this is like the most exciting thing to happen in a while, because I felt that AI magic.
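A quick aside for the technically curious: here's a rough, purely conceptual Python sketch of the difference we're describing. This is not OpenAI's actual architecture or code, just the general shape of autoregressive generation versus diffusion, and every function name in it is made up.

```python
# Conceptual sketch only -- not OpenAI's implementation; all functions here are hypothetical.

# Autoregressive image gen: each new patch is predicted from everything drawn so far,
# which is why the image appears "line by line, like a dial-up modem."
def autoregressive_generate(model, prompt, num_patches):
    patches = []
    for _ in range(num_patches):
        next_patch = model.predict_next_patch(prompt, patches)  # conditioned on prior patches
        patches.append(next_patch)
    return assemble_image(patches)

# Diffusion image gen: the whole frame is denoised in parallel over many steps,
# which is why the whole image "splotches in" at once.
def diffusion_generate(model, prompt, steps):
    image = random_noise()
    for t in reversed(range(steps)):
        image = model.denoise(image, prompt, t)  # every pixel refined at each step
    return image
```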
Yeah, me too. Playing with this tool in a way that I haven't in a while. So let's talk about what people are doing with it and just how easy it is. So Sam Altman showed up for this live demo, which is something he hasn't done for a bit. So you know it's a big deal at OpenAI. Let's play just a tiny bit of what he's...
said at the top of this to kind of reiterate how important this kind of thing is. The thing that we're going to launch today is a nice little local fry we got. And it's such a huge step forward that the best way to explain it to you is just to show it, which we'll do very soon. But this is really something that we have been excited about bringing to the world for a long time. We think that if we can offer image generation like this, creatives...
educators, small business owners, students, way more will be able to use this and do all kinds of new things with AI that they couldn't before. Okay, I think that's enough listening. We can cut full screen. So basically what you're hearing Sam say here is: this is a big deal for them. And of course, Kevin, we will get to this later, but they also dropped this right after Google's Gemini 2.5 release, which
jumped up the benchmark charts. So we will be talking about that soon. Google's new release is the most cutting-edge, state-of-the-art AI model for lots of things right now. But Kevin, no other model can do what this does right now, which is allow you to
Ghiblify the entire internet. This is the thing that has taken over everywhere right now: the idea that you can take any image and Ghiblify it, meaning make it look like Studio Ghibli. And this is just a good example to start with about some of the things
that are possible with this sort of multimodal model. You can upload an image of yourself or something famous. And right now it is very un-guardrailed, and I think this is something to know. We don't see the same sort of things we saw before, where, like,
Mario and Luigi are flying into the Twin Towers. That has not appeared yet. But there are lots of things, like there's a really funny Ghiblification of the Trump-Vance moment in the White House that just happened, where Trump and Vance and the president of Ukraine were
sitting there. They Ghiblified that. There's a bunch of famous movie scenes that have been collected by this guy named M Durbin, which, if you're not on our video: there's a scene of Luke and Darth Vader, there's a scene from Scarface, there's the scene from The Godfather
where somebody's leaning in and talking to Marlon Brando. There's The Lord of the Rings. And there are lots of memes, Kevin, that have been Ghiblified, which I found very fascinating. There's the husband meme, which is very good. And my favorite one: maybe the tech meme king used
it to Ghiblify the brain meme, which, if you know this one, is kind of a hand drawing of a guy with glasses and a giant brain that goes down his whole body, almost like dreadlocks. It is so cute to see this thing Ghiblified. So I am delighted that we found your particular kink, Gavin. I love that you spent that much time deep diving on one meme template, but the totality of this tool is that it can
transfer any image to any style. You can give it an image and tell it to replace an element in that image and it will retain the style of the image. You can create an entirely new image. You can ask it to make infographics and it will intelligently arrange things and pick.
the imagery for it. If you want to use text within a specific scene, you can give it paragraphs of text and it will intelligently insert it. What I'm saying here, Kevin, is... I think you're right, you're right, those are all incredible things and we're going to get to those, but what I want to point out is the cultural moment that happened here. This
is a moment, the Ghiblification thing, and I think that's why I'm mentioning it specifically. Yes, I did find these very charming, but on mainstream social media, this is taking over. It is really everywhere, on Twitter, on Instagram, all these places.
These images are the sorts of things that get infused into the world of the mainstream and suddenly show what these tools are capable of, in the same way that DeepSeek went mainstream because it was free and it was as good as or better than almost any AI tool that people
who hadn't paid for AI had used. This is the first time I think people are seeing these AI image tools at the state of the art. And it's coming from OpenAI, which I think is a big deal. So that's all, I wanted to make sure I gave some reference points there.
A hundred percent. And also, like, it's okay to have a fetish, buddy. We're not judging you here. It's okay. You found what you're into. Maybe you're not, but the entire world is. Well, here's the other thing: usually when it comes to OpenAI
products specifically in the past, we've been like, oh man, it's nerfed, meaning it's guardrailed, meaning it's neutered, meaning it's censored, meaning whatever terminology you want to use. We know it's more capable than they're letting on, and that is frustrating. This is the first time in a long while I have used an OpenAI product at launch and gone, oh wow, I am shocked you can do that. And 'that' in this case is use
popular IP. That is... Breaking news, Kevin. Breaking news. Go to Sam Altman's actual Twitter handle right now and look at his new profile pic. Sam himself has changed his profile to a Ghiblified version of himself. That's how big this has become. Anyway, what I wanted to say here was, on this tweet that Sam put out yesterday when this came out, there's a big kind of chunky paragraph where he talks about creative freedom.
And he says: this represents a new high-water mark for us in allowing creative freedom. People are going to create some really amazing stuff and stuff that may offend people. What we'd like to aim for is that this tool doesn't create offensive stuff unless you want it to. It is not nerfed, and maybe it won't get nerfed as much as the ones in the past have. Although I have to imagine the Miyazaki
Ghiblification of the world is going to trigger some lawyers somewhere, I would imagine. Yeah, or maybe enough blowback from fans that they make it a little bit harder to prompt out. But I mean, the fact that you can just say, give me such-and-such logo on this thing here, or make the president holding a Nintendo cartridge, and it will do that and use logos, that I can see being litigated away pretty quickly. The fact that there's
a style being invoked, I think, is going to be harder, because people will figure out how to prompt it. For example, I wanted RoboCop with bikini armor, but it wouldn't give me RoboCop with bikini armor, Gavin. So I had to go and tell it, hey, give me a robotic cop, the one that you might see in an iconic 1980s film. And then it doth gave me my RoboCop with bikini armor. So, you know, again, it has capabilities, and they're letting you get to them in a way that I think is
partially a hand that's been forced by other releases from other companies. And people were trying to get images generated with Grok, with Elon's AI, and Grok was refusing to do it. So in this instance, our editor is going to have to blur these images out, but somebody asked ChatGPT to create an artistic anatomy shot
of male genitalia, and it did. They actually said, make that slightly bigger, and it did. And they went to ask Grok to do the same thing, and Grok would not do it. Now, the actual images that ChatGPT generated are like artistic pencil drawings of a naked male. And I think maybe this is kind of what Sam is getting at slightly. We'll talk a little bit more about the multimodal aspects of what this can do in just a second. But this is a really interesting moment
when you think about what AI art is capable of and, more importantly, what the companies will let you make AI art of, because we've talked a lot about open source models or local models. And in this instance, if these models are going to start to feel a little bit more free and the ability to do stuff becomes a little bit easier,
that opens up a whole new world of both incredible creativity but also problems, legal problems, that could come down the pipe as well. So, there are other tools which will let you play with copyrighted IP. There are other tools that will let you style transfer or replace things within a scene. I think what cannot be overstated here is the simplicity, the ease of use of this tool, because it's a natural language experience within
ChatGPT using this new 4o image gen model. So Gavin, let's break that down for people that have never made AI art. How do you go about imagining a new logo for your company or turning yourselves into a Simpsons character? So in this instance, there's a couple different ways you can use this. Specifically,
it can work very straightforwardly within ChatGPT or within Sora. And I have now used the Sora website, which is a separate website, more than I ever have before, because you can now generate images directly in there. And in fact, just a little hint: if you generate
an image in ChatGPT and it's connected to your Sora account, it actually shows up in the Sora account. So just know that if you create something that you think is a one-off and it's connected, it's going to show up there. And it is public, I think, which is another thing to keep in mind. The cool thing is you can just prompt it for something very simple, say, I need a picture of a family, and it will give you a picture of a family. You can upload your pictures too—
I have enough family. I don't need my own family. I need a picture of a family. Can they look at the lens lovingly, with admiration? Can they kind of all be around me? That probably would be easy to do. But the very coolest thing with this, and a lot of people are doing it, is you can upload a picture, right? And it could be
a picture of your own, or you could have gotten a screen grab, and that's how people are doing all this Ghiblification stuff. You can upload a picture and it sees that photo and very directly is able to copy it. I will share an example here. Last night I was trying a couple of
things. Specifically, I uploaded the picture from Pulp Fiction where Uma Thurman is kind of leaning on her bed, and it Ghiblified it, and it catches all of the format, it catches everything. And Kevin, one of the interesting things you and I were talking about earlier is
ComfyUI, which allowed people to do all this kind of spaghettification to make these things happen. Like, you had to use ControlNet, you had to get it through all these different things. Now this just does it in one shot. And I think the perfect example of this is, I wanted to try to create a very complicated prompt. And as a kind of hint to people out there, ChatGPT is very good at creating prompts. You can find different ways to do stuff.
I saw a couple of images on Sora, because you can see people's generations, and I was like, oh, I saw this weird alien image of this guy in the backyard, grabbed it. I then took that relatively complicated prompt, because, you know, prompting can still be kind of complicated. I took that, put it in ChatGPT, and said, give me 10 versions of this prompt, not about this subject, but with the kind of specificity that might come out of this. It spat out 10
significantly long prompts. I then took one of those and put it into the ChatGPT editor. And this prompt was, specifically: a security cam still from a 1990s grocery store showing a man in full medieval armor stealing rotisserie chickens, frozen in mid-sprint past the dairy section.
Armor reflecting overhead fluorescent lights, blah, blah, blah. Posters on the wall say new Toaster Strudels. Motion blur adds chaotic energy. Absurd yet intense, low fidelity with VHS color bleed. And I got... In one shot, there were two images. This is one of the two; the other one wasn't as good. I got in one shot what I believe is one of the best
iterations on this idea that I've ever seen. I posted it to Reddit and I was kind of shocked by it, and that post has like 6,300 upvotes now. So this is the kind of thing where you can tell, when people see this stuff for the first time, they're like, wait, that actually happened. So you've seen this, but maybe describe what you see in this image and how well it adheres to that prompt.
Yeah, I mean, I see it adhering to it exactly. We see the security cam date in the corner. Although the angle isn't necessarily one you would see from a security camera, it has a grain on it, you know, it's not a high-res photo. There's definite motion blur on the guy running in full armor. It got the text in the background, which looks slightly out of focus, partially because of the motion blur and the subject in the foreground. So I mean, it just looks...
You know, to me, I still look at it and I go, oh, that looks like AI. It doesn't look like it was a photograph of a real event, but it looks so good that I'm not like... It looks like a very competent Photoshop, or it looks like someone really took the time to create it. And that is the difference maker: I stop and I go, oh, okay, yeah, that's a coherent image. And it's funny. It's silly. It's bizarre.
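A quick aside for the tinkerers listening, since that "have ChatGPT expand a seed idea into detailed prompts, then generate" workflow is easy to script too. Here's a rough sketch with the OpenAI Python SDK. Treat the image model id as a placeholder: at the time we're recording, this 4o image generation lives in ChatGPT and Sora rather than under a documented API model name, so this is an assumption about how it would look, not something we've run against the new model.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Step 1: have the chat model expand a seed idea into several detailed prompts,
# the same trick Gavin describes doing inside ChatGPT.
seed = "a security-cam still of a knight stealing rotisserie chickens from a 1990s grocery store"
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Write 5 highly detailed image prompts in the spirit of: {seed}. "
                   "Return one prompt per line with no numbering.",
    }],
)
prompts = [line for line in chat.choices[0].message.content.splitlines() if line.strip()]

# Step 2: feed one of those prompts to the images endpoint.
# "new-image-model" is NOT a real model id -- swap in whatever id OpenAI exposes
# for this model (or dall-e-3 as a stand-in while you wait).
image = client.images.generate(
    model="new-image-model",
    prompt=prompts[0],
    size="1024x1024",
)
print(image.data[0].url)
```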
Kev, I love doing all this, but before we get to the rest of the talk about ImageGen, you must follow the AI for Humans YouTube channel. If you are here, click that subscribe button. Please do it. Why did your eyes turn into spirals when you said that? What are you doing? I'm feeling so crazy, Kevin. I'm not even moving my arms. I'm subscribing to the AI for Humans channel. And I just shared it. And I left a five-star review on Apple Podcasts. Look at you. You're doing it all.
Also, be sure to check out our newsletter. We are updating it twice a week now. Later in the week, on Friday, I'm writing kind of a more deep dive. This week, I went into why AI slop might be good for us. And I think coming up soon, I'll be writing about this exact topic we're talking about now. Gavin, where do we get that amazing newsletter
for free twice a week? Go to AIforhumans.show, our website, which will show you a lot of stuff, including how to subscribe directly to our newsletter. It is on Beehiiv and it is free. All right, Kev, let's get back into talking more about OpenAI's ImageGen.
Before we move on, Kev, I do want you to talk about your Trump David Bowie thing, because that also was shocking to me when I first saw it. And that's just an example of how you take a couple of things, you add them together, and it becomes something else, because that's the other thing: you can mash stuff together and get
a different result here, in the same way that you mash a picture of, like, a family with, say, Ghibli. You can actually upload multiple things and kind of get something out of it. Yeah, I took some iconic David Bowie album artwork and then I took a picture of Dear Leader and I asked
OpenAI to combine the two. I said, use our president's face, but use the face paint from the album artwork. Use his hair. Use the font for the David Bowie text in the corner, but make it say Donald Bowie. It was a nothing one-off that took, you know, 30-some-odd seconds to prompt. And, you know, another 30 seconds later, an image came out. One shot, I only made one, and I thought it
absolutely nailed the mission. It crushed it. And the photo that I used of the president, his eyes were open in the photo. So it closed the eyes. It added the wrinkles. It did the face paint. It slightly oranged up the face, and it changed the text and retained the coloring, like this gradient band of the original David Bowie font in the corner. So, just very competent. I took the Katamari Damacy artwork, so good, that artwork, and I put in Katt Williams, because I'm brilliant.
So Katt-amari Damacy is there with the comedian as the king of the cosmos, just dumb, silly. And again, if you want to mash things up, it is as easy as: go to the app, drag an image in, tell it what you want it to do with it. If you want it to combine multiple images, you can. You can even
take a color palette and drag it in there and say, hey, inspire a room remodel. Take a picture of your room and say, I want this color palette integrated into the room. Or build me an iOS app that looks like this thing. Give me slutty RoboCop. Okay, let's take a step back, Gavin, because we know...
Google had a big announcement as well, but Google made a smaller announcement and, in my opinion, a slightly unforced error. When OpenAI announced that they were going to release a new image tool, Google fired a bit of a shot at them. Did they not? Am I reading too much into that? Yeah, they did. Logan Kilpatrick, who is one of their main AI people there, kind of replied to the livestream
with a picture that kind of showed, like, it's already available in Imagen 3, without maybe understanding exactly the power of what was going to come out of this sort of thing. So yes, and I think... We are going to talk about Google and how big their new thing is, and it is big. But Imagen 3, which obviously we talked about a little bit last week, was available in AI Studio, is now available in Gemini itself. And you can do similar sorts of things with it as well.
Ah, but now it feels as powerful as it is. And I wouldn't have thrown an ounce of shade on Google had they not leapt in with, like, the 'LOL, we already do text and images.' Now the Google app feels a bit like a child's plaything in comparison. And I don't mean to diminish the incredible capabilities of it, but I did a little head-to-head, Gavin.
So I went ahead and did a little shootout between Google's image gen and the new OpenAI features. And if you see, I asked the same prompt. I asked for a robot cop whose body armor looks like a two-piece bikini. Yeah. OpenAI didn't bat an eyelash. I said make his body armor this, that, the other, and if you look at the two OpenAI renders that it gave me, it did kind of gender-swap
RoboCop. I'm fine with that, I'm not complaining about it. No, they both look... I mean, they're both good, they both feel like female RoboCop. Yeah, exactly. And they look gritty and real and cinematic, and it clearly enhanced the prompt in a way that I didn't give it. When I gave the same prompt to Google, it refused to do it. And it said that using the term bikini along with gun and cop
would be inappropriate. I could look at its thinking, and it said that I was sexualizing cops and robots. Oh wow, it said that? From the term bikini? Yeah, in its chain of thought. And I was like, well, okay, I could see that, I guess, but I don't know that just adding the word bikini is by default sexualizing, especially since I'm not assigning gender, I'm not doing anything else. But OK.
Fair enough. So I did have to modify the prompt slightly. I asked it how I could modify it, and I gave a slightly different prompt, but the vibe remained. And what I got back was, like, a really bad Spirit Halloween store costume RoboCop. He just looks like a dude in a metal TV dinner costume with a weird red band across his helmet. And then I said, try again. The armor should look like a two-piece swimsuit. And that...
It's a whole new world, Kevin. This is a whole new world. Kink unlocked for me. Let's just say, I'll describe it: you basically have what looks like kind of a male body in a bikini, or maybe just a very strong woman's body in a bikini. And the arms are metal. He has a gun and the helmet,
but it is clearly not an integrated RoboCop. I would call this a fail, for sure. Yeah, this is like someone about to hop into the pile at a Comic-Con. They were cosplaying as RoboCop, but now they are slowly peeling off their armor, getting ready to probably yiff. Now, that was test one. Test two was an arcade game character named Professor Poof, who can
create a rip in his clothing and summon a demon and a cloud of gas. This was... I won't say where this prompt came from. I'll just say that I can't take full credit for it. Okay. Nor do I want to. But you can see OpenAI did a really good job. It did a 16-bit arcade character, really good job. Yeah, it looks like a professor: the glasses, the bow tie, the lab coat. There's a rip happening in their clothing with a noxious gas plume coming out and a gremlin lurking within. And when you look at the
Google versions of it, Gavin. Not nearly as good, and it doesn't get the text right. It looks more like almost like a Street Fighter experience in this one, and the pixels don't look very succinct. Both of them are in a fighting stance. Now, did you have different action prompts for this or was it the exact same prompt?
Same prompt. This one was the exact same prompt. And OpenAI correctly nailed that it was the character releasing the gas and whatever. And I feel like when I look at this Google version of it, it's a little incoherent and it didn't quite catch it all as one character. And then I tried to use the ability to merge one image with another. So you're seeing my picture on the RoboCop outfit in some form or another.
That's correct. I gave it the original RoboCop bikini outputs and said, put Gavin's face on this, and it did an okay job. Yeah, the one that looks pretty good... I might look a little more wrinkly than normal, but it got my hair and it got kind of my basic
face in there, I feel like, in general. Yeah, and it gave you, like, Pop Vinyl proportions. But if you look at the Google version, my god, what the hell happened here? The Google version is the one where it's literally, like, a face that isn't mine kind of stickered onto the RoboCop. Is that what I'm looking at? Yes. What is going on there? So anyway, this is a really good example of, again, Imagen 3. Really cool. And we're going to talk more about Gemini 2.5 in a bit.
But this is the step up. It is not just that it's producing more interesting images, it is how it is interpreting those images. And to your point, which you mentioned earlier in the show, I think it maybe has to do with the different model that's being used, right? This is the difference
between maybe a diffusion model and a non-diffusion model. I don't know off the top of my head if Imagen 3 is diffusion or not, but that is a major, major difference in terms of how you are able to get prompt compliance and consistency throughout multiple asks. Yes. And once again, it's been a while, but I'm proud of my OpenAI subscription, because it's nice to have these tools to be able to play with. I am now painfully aware of Sora's
shortcomings, by the way (that's OpenAI's video gen), because when you make an image and it looks great within Sora or within the ChatGPT interface, you go, great, let's bring it to life, and the moment you ask it to do anything video, it becomes a nightmare. Yes, Kevin, to that point, I did another thing. We teased this a little bit at the top of the show. I wanted to create a big butt bear. I don't know, this is where my dumb little
kid brain went: I want to create a realistic-looking bear that has a giant big butt. I got a very good image out of GPT-4o of a bear, kind of looking backwards, with a larger bottom on this bear, right? It was a very fun thing. I then said, make a video out of this thing.
And Sora really does still struggle. I did the same thing with the knight prompt, and neither one of these videos was very good. I did then take it into Kling, which I still think is the best video model. Oh, you really committed to this vision that you have. I did, I did. This is great, Gavin. I committed. I went to Kling and I said, give me this bear twerking, basically. And what we got out of this was this video of this bear
looking back, there's Big Butt Bear, kind of clapping together, which is not exactly what I asked for, but it is definitely something funny to have come out of this experience. So that is where we are. This model is really fun. I really do encourage
everybody who's listening: spend a couple hours with this one. I wouldn't normally say this, but there's so much to get into here, and I think you'll start to see the future of where all this stuff goes. Just as a very quick piece of analysis, and I think I might write about this in our newsletter for this week: this is the next stage of image models, in the same way that we saw Midjourney change things like a year, year and a half ago.
You can see this happening now, and then you can kind of project outward what the video models might look like. And that is transformative. This is always the first beginning stage of it: if it can do it so well for one frame, then theoretically, if you can keep its coherence and do it for, you know, 24 or 29.97 frames a second, you can make it work for video. And it might take ages right now and it might cost tens of thousands of dollars. But man, I am very
excited for, I don't know, August, because this space moves so fast. It's a wild time to be a creative. Yeah, I was really thinking we might have kind of some dog days where we wouldn't have a crazy amount of stuff to talk about, and look at where we are again. All right, it's time for a quick message from our sponsor. As a fan of our show, you know the AI era is here. If you have an idea for an AI app, making it right now is easier than ever.
But Gavin, I mostly tune the two of us out during this podcast, so I don't even know where to start. Well, Kevin, with Bubble.io, you can create scalable, professional-grade AI apps from just a prompt. You describe your idea and Bubble's AI will generate the foundation of your app in seconds. Yeah, so last week we went to bubble.io, we used their AI generator, and we created an app that would imagine AI co-hosts for this very show.
Yeah, and you can see what Bubble gives you after that step. It's kind of a feature-rich environment to start tweaking your app to your heart's desire, and it will give you a super powerful back end, a visual editor to refine stuff, and scale all of it without touching a single line of code. Plus, it's got plugins for all the latest AI APIs like GPT-4o and Claude.
You can start building today at bubble.io slash AI for humans. You want to use that URL, by the way, because it helps us out, and also you'll save 30% off your first three months. And don't worry, we'll put the link in the show notes. But if your idea is a little bit bigger or more complex, you don't actually have to go it alone. No, that's where Zeroqode comes in.
Yeah, they're the top Bubble agency and they can build anything: customer portals, SaaS apps, custom dashboards, full-on marketplaces. And they can do it ten times faster and cheaper than most traditional dev teams. We've talked about people who are vibe coding their way to some... Yes, we have... apps that crash horrifically or have major security holes. So don't let that happen to you. If your project gets a little too big,
go to the pros. Let them code alongside you. Zeroqode.com. That's Z-E-R-O-Q-O-D-E dot com. And tell them that AI for Humans sent you. Actually scream it at your monitor. They'll hear it. All right, back to the show, everybody. Okay, Kev, the big news out of Google this week has been overshadowed by OpenAI, but this is a big deal. Gemini 2.5 Pro Thinking Experimental, is that it? Was that the right name of it?
No, it's just... this one's even easier, Gavin. It's just Gemini 2.5 Pro Experimental. No thinking, no premium plus, no wings. So this is the actual full-blown new model from Gemini. And Kevin, it is very, very good. In fact, it is so good that it has had the largest score jump on LMSYS, which is a common benchmark system for LLMs, and it really is state of the art. It's actually really interesting: there was a thing I saw on Polymarket,
where they showed, like, who will have the cutting-edge AI model at the end of March. And originally, you know, OpenAI was close to the bottom, because they may not be at the cutting edge right now, and DeepSeek was down there, but Grok had shot way up and Google was way down. They've now completely flipped spots, because this kind of surprised everybody. This came out and is now at the top of the benchmarks. And it's something pretty crazy.
Also, speaking to what we talk about on the show a lot, it is zero-shot coding a lot of things. Matthew Berman, who we love, a YouTuber that goes deep on this stuff and makes a couple of videos, more than that, every week, did a whole demo on how well this is zero-shotting code stuff. And I think this is, like...
It's not getting enough attention because of the image gen, but it is a big deal. Yeah, there is a thread where he has some one-shot demos, meaning he is asking the AI to build a thing with one prompt. He is not following it up, he's not bug fixing, he's not adding features. And some of Matthew's demos include a 3D bloodstream virus simulation that has sliders you can adjust for white blood cell settings and environment settings and virus settings,
and you can run the simulation. There's a Rubik's Cube generator and solver. There are, of course, Snake games with power-ups and all sorts of different food types. Just all these little demos, and the fact that they're running, that they seem more complex than just the most basic vanilla variety of these apps, is really, really impressive. I have not gotten my hands on this yet within Cursor because I was too busy
making big butt animals alongside you, which is very telling. But, you know, again, no shade to Google. This is a really, really incredible release. Well, what's so interesting about this to me is what makes news and why in this space, right? It's almost like we already had our vibe code news cycle, which is a terrible thing to say, but the vibe coding thing we had a couple weeks ago was like, oh my god, everybody's vibe coding, look at what you can do.
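A quick aside for anyone who wants to kick the tires on the new Gemini model themselves rather than just watching the demos: here's a rough sketch using Google's google-generativeai Python package. The model id is our best guess at the experimental name and may well change, so treat it as an assumption and check AI Studio for whatever Google is currently calling it.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # free keys are available through AI Studio

# Model id is an assumption -- look up the current experimental name before running.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

prompt = (
    "In a single self-contained HTML file, build a playable Snake game with "
    "power-ups and three different food types. Include all CSS and JavaScript inline."
)

response = model.generate_content(prompt)

# Save the one-shot result and open it in a browser to judge it.
# (The reply may arrive wrapped in a markdown code fence you'll need to strip.)
with open("snake.html", "w") as f:
    f.write(response.text)
```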
And this is just much better at doing that stuff. Like it's a step up from that. This just goes to the same point that we have been saying on this show again and again and again. When you have four to five giant companies throwing hundreds of billions of dollars really collectively.
into this space, things are going to improve fast. And I think if you had said a year ago that any of this stuff was possible, the stuff that Matthew showed off, or even the stuff people are doing with Claude or with these other systems,
you would have been laughed out of the room, right? There's no way you're going to be able to pull this off. And thinking models really have seemed to unlock a lot of this stuff. There's been a lot of rumors around GPT-5, which is the next OpenAI thinking model plus, you know, the
GPT-4.5 base model, which we assume is going to come not that long from now. 4.5 with reasoning, and it should make it skyrocket. It should just leap to the top of a lot of charts, which is amazing. The vibes are shifting. I know people hate the word vibes, I hate vibe coding, I hate all that, but they are truly shifting. I do a handful of consulting, and I deal with a lot of engineers across several different industries, and I have seen many of them
be the never-AI-ers: never my code, never my code base, never my systems, never my tools. And now I'm getting texts from people going, oh my god, 80 percent of my day was just handled by AI, and I had to go in and clean up or do a little bit of something. They are exponentially enhancing their output. And even the
grumpiest of the engineers out there are really starting to see the light on this stuff. And so, we talk about people preaching their bags, meaning they have a vested interest in people believing AI is the future of everything and it's going to be so powerful. When they say that, you know, 80, 90, even potentially 100 percent of some code will be written by AI by the end of this year,
I believe it. I think that's totally right. And the thing I keep thinking about is the commoditization of these tools. I saw a tweet a couple of days ago. I don't know who wrote it, but it was asking like, what will be more valuable in the future, a frontier AI model or a 1 billion user product?
Most people replied frontier AI model, but then I was thinking, well, the interesting thing is what you think of as a 1 billion user product. If the models themselves just keep getting better and better, but there's like five of them that can all do the same things, then it's clearly the product that's more valuable, or at least more interesting, because they've somehow productized that AI to make people want to use it. And that's the path it feels like
ChatGPT is clearly on right now. And it's an interesting thing. I hadn't really thought about the kind of chess that maybe Sam Altman was playing coming from a product background, but it is a big deal. I think that is a differentiation, because if Gemini, Meta's
Llama, OpenAI, Anthropic, all these companies can do amazing code, and eventually, as Dario Amodei said, a year from now it's writing all the code, well, who cares which model you use then, right? It's really going to be much more about the experience you have with the thing you're using. Well, and if the open source community and efforts keep up at the pace that they are now, or have been,
your foundational model might have a six-month window of some incredibly novel, unique, amazing capability, but in due time I'm going to build knowing that an open source version is going to be available right on the other side of that. So to your point, the billion users become far more valuable than your multi-billion dollar model.
Yeah. And, you know, to that point, there's a couple of new image models that came out this week, and maybe they didn't pick the best week to come out, but I think they're at least worth talking about. There's one called Reve, which shot up the image model testing boards. It was actually called something else before they
came out and talked about what it was. Very nice looking, photorealistic images. Reminds me of Midjourney a lot. It's very well done. Some of the examples that Heather Cooper put together and compared to other models you'll see in our video here.
Go check out our thread if you haven't seen it. Very, very cool looking. The other thing I didn't put in this rundown, but I forgot, is they're also powering Duolingo's voice agent technology, which is pretty cool. Have you seen that demo, Kev, where the Duolingo characters are actually animated and they talk to you when you're having conversations? Yeah, yeah. So their model is powering that visual side on the back end, which is also pretty interesting. Oh, that's nice. Yeah, tough week.
Tough week to get your press out of this, right? Yeah, and again, I don't say that to be disparaging or discouraging to anybody. I just... man, the reality is it's a tough week, right? Especially for Ideogram, an app that you and I both use extensively. My wife pays for Ideogram. So do I. So do I. Yeah. Ideogram. Amazing. Are you going to continue to pay for it? Their 3.0 model just dropped, and it looks good, and it's got text generation.
Here's what I'll say about Ideogram that's different. And I will say, I'll drop these in here: I tried making my knight prompt that I did with ImageGen, and it clearly didn't do nearly as well. What Ideogram is doing, and maybe this gets to the point of the product side, is that I think
Ideogram has smartly recognized that their model, for some reason or another, does very well with text and design. So Ideogram is very good at making... like, if you want to one-shot an Instagram ad for your Big Booty Bears sort of thing, you can
make that. Because, Kevin, I know that your plan is to take my idea and go start an Instagram handle right after this. I want it to be a custom app where it jiggles as you scroll. So, like, when the scroll stops, it will bounce a little bit. If somebody wants to vibe code that for us, somebody wants to vibe code that,
feel free. But anyway, Ideogram has a lane, and this new model that launched today has got a lot of cool design features in it. There's a lot more stuff you can do, and I wonder if that's where these things are going to start to diverge. Like, we've talked about Pika. Pika did a good job of creating those weird little
kinds of apps where you can do stuff like squish things. Maybe that's where you start to see specialized models, and Ideogram specializes in design. Like, if you ever try to design something in Canva, it's still not great at it; you have to do a lot of the creative work yourself. If Ideogram could find a way to create a design where I can then pull layers out of that thing and manipulate them, that feels super valuable to me.
Tough week. Did I lose you? I was like, what happened? Did I just lose Kevin? No, I mean... well, I mean, I think... I mean, yes. Tough week. It's a tough week, dude, because I'm looking at the Ideogram examples. Yeah, there are certain styles that look great. They look fantastic. I'm just not seeing anything that you couldn't do
with GPT-4o, and I'm going to have $20 to burn this month. Where's it going to go? It's probably going to go to the thing that does a whole bunch of other stuff as well. And then again, you know, you start to see that $200 tier for OpenAI and you're like, well, that seems crazy, but the more stuff they start piling into that $200 thing, it almost becomes like a cable bill where you used to happily pay $150. And speaking of OpenAI, Kevin, very quickly.
This has almost been blown by so fast. One of the coolest voice model updates from them just happened: they dropped new text-to-speech and speech-to-text models, but, kind of more fun for people out there who want to try stuff, they dropped a website at openai.fm that allows you to not only generate very cool prompted voice responses, but download them. It's almost like a mini ElevenLabs that's built around, whatever, there's 10, 12 voices.
But if you go there, you can really play with emotional tone in a way that hasn't been possible with voice AI before. Yeah, and it looks like a Teenage Engineering design website. Yeah, which is so cool. It looks like an old beat maker, but you can go to openai.fm and play with all of these new GPT-4o mini text-to-speech models. You can change the vibe
by shuffling, selecting something that's there, or giving a custom prompt, and then you can give it a script and hit play. And what you'll find... it's very interesting to me that it's a mini version of the model, implying that there is a bigger, better, more capable something in the wings, but it's very capable for what it is now, especially coming off the heels of the amazing Sesame real-time audio demo that we talked about, raved about really, just a week or two ago. But you can go there,
prompt your script, get it out. And you'll find that these models can whisper, they can scream, they can get angry, they can be sarcastic. They have a whole wide dynamic range of emotions, and you can prompt the speed, the tone, the delivery, the emotion, all sorts of stuff, the punctuation. You can get in there and really start playing with it. Can you ask it to do one thing for us so we can have people hear it? Can you ask it to...
Do a promo for Big Butt Bears, the Instagram handle, and give it just a little bit of copy and kind of show off what's possible from an emotional standpoint. Do you want it as a cheerleader or as a New York cabbie? Let's do a New York cabbie. That seems like the right voice for Big Butt Bears. Okay. And is this for the app where, when you scroll, the bear's butt jiggles? Yes. Yes. So right now, Kevin is going through and he's tweaking the different
aspects of this voice thing, which are in the prompts on openai.fm. You can see them; it gives you a chance to change the affect, kind of how it's going to say stuff, the tone, the pacing, the emotion, and the pronunciation. Those are things that you can actually prompt for on openai.fm.
Let me tell you something. I got an app that'll twist your brain and slap your granny. It's called Big Bear Butt Jiggle, and it's hotter than a taxi seat in July. Maybe we shouldn't do that. That sounds like something you would hear on like New York radio.
It's like... that's great though. Yeah, it is pretty amazing. It is pretty amazing. Okay, would you rather hear it as a cheerleader, or should I shuffle and get us a new style? Let's shuffle and just get a new style. So what you can do on openai.fm is you can literally hit shuffle and it'll just give you a new style, and you'll be able to keep the text that you wrote.
Okay, real quick class. This is not part of the lesson, but I have to tell you about this little app. It's called Big Bear Butt Jiggle. Yes, that's the real name. Timmy, I see you. Let's not go there, okay? You scroll, a bear shows up, and it jiggles. That's it. Totally pointless. But...
So that's so interesting, right? You just get a sense right away of how different you can make these voices. And anyway, Kev, this is, again, almost like a thing you could spend a couple hours with over the weekend and just try different things. The thing
that I found really interesting about it is you can use these audio clips for AI videos too, right? They're downloadable; you can upload them; you can use them in lip-sync tools. What's interesting about this to me also, from a business standpoint, is
this is kind of ElevenLabs' business. The one part of it that ElevenLabs does specifically differently is ElevenLabs does custom voices, which this is not doing right now, and it also allows you to change your voice based on what you're inputting. Oh, you got another one here? Yeah, I just had to get an emo teenager read. I'm sorry. 'Oh, cool. Another app. Just what the world needs. It's called Big Bear Butt Jiggle. Groundbreaking, right? You scroll, there's a bear, it jiggles. Wow. Art.'
I mean... Whatever. It's unbelievable. I mean, we have hinted at the fact that Kevin and I are working on something kind of special and creative in the background right now. We have a project that we're very excited about, and this sort of thing, which is really a lot of what we love about what we're working on, is now possible. It sounds like you're talking about the Big Booty Bear app. The Big Booty Bear app is not the thing we're working on, just to make everybody clear. It's called...
Plump Rump Farms, and it's a take on Farmville, and your job is to figure out which diet will get which animal's booty the thickest. Honestly, that's a good vibe-coding game. Because juicy rumps are in season. That's a good idea for a vibe-coding game, but no, it's not that.
What I'm just saying is, you need to play with these tools, because what we just heard, that emo guy, Kevin literally did in, what, 30 seconds, right? 30 seconds off of this tool. And just to be clear, I went to GPT-4o and I said, write me 15 seconds of copy to be read by a New York cabbie, or whatever style that was, generated the script while I was talking to Gavin, copied and pasted it, and there you go.
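One more aside for the tinkerers: if you want to do what Kevin just did outside the openai.fm playground, here's a rough sketch with the OpenAI Python SDK using the new mini text-to-speech model. The voice name and the exact behavior of the instructions field are assumptions based on OpenAI's announcement, so double-check the current docs before leaning on it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

script = (
    "Let me tell you somethin'. I got an app that'll twist your brain and slap your granny. "
    "It's called Big Bear Butt Jiggle, and it's hotter than a taxi seat in July."
)

# "instructions" steers affect, tone, and pacing -- the same knobs exposed on openai.fm.
# The voice name is an assumption; pick any voice the API currently lists.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="ash",
    input=script,
    instructions="New York cab driver: fast, gravelly, over-caffeinated, a little exasperated.",
) as response:
    response.stream_to_file("big_bear_butt_promo.mp3")
```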
There's never been a better time to have ideas. Now you can whisper them into reality. That's right. OK, we have so many other things we want to get through fast, so let's just get through a bunch of stuff here quickly. DeepSeek, DeepSeek, DeepSeek, Kevin. DeepSeek is back with a new model. Again, it's getting kind of pushed past by a lot of people, but this is their new base model.
It is actually called V3-0324, so another great naming convention. The benchmarks on this model, and this is not the reasoning model, this is their base model, like GPT-4.5, are very good, and in some places better than GPT-4.5. So when this gets turned into the reasoning model, R2, you can expect it to be pretty close to state of the art. And I'm pretty interested to track this and see how it goes. Okay, we'll put that one on the radar. Moving on.
Figure 01 has a natural gait, Gavin. You're going to hear these robot footsteps behind you, probably in a steamy, seedy alley late at night, and you will not feel safe. No, and what Brett Adcock, the guy who's the CEO, said is: less grandpa walking. There's still a lot of grandpa walking here that you can see. It's still a little shambly, yeah. It's a little shambly, but it just shows you how this sim training that we've been talking about on our show forever really does
change the way these robots work because you can download it directly into the robot's brain and then it will be able to do stuff that it was only doing in the simulated environment at first.
Then, Kevin, there's another video that we're going to talk about here, which I kind of was like dismissing, but you thought was really cool. I think it's really cool. These are robot steadicam operators or really robot camera operators. And when we say that, obviously, people have been working with robot cameras on news.
sets forever. These are literally humanoid robots acting as camera people on television or commercial sets. So tell us what you think is so cool about this and then I'll tell you why you're wrong. Well, listen, I know you're a fan of job displacement, so I know you're really excited for another blow to the industry that made us both.
Why I think it's interesting: in the video itself, it talks about the traditional mechanical servo motion-controlled robot arm, which can hold a camera and give you the same consistent shot over and over again. And those systems are really, really expensive. They're cumbersome to move around.
You have to program them with very specific programming software. And then you want to move a shot or do something else? Well, you've got to move the whole set around the arm or move the arm around the set. And what they're showing off here, which is still very early, is the ability to take an Atlas robot, a humanoid robot, and put
a non-specialized tool in its hand, that same kind of rig that a human camera operator would use. Yeah, like a Steadicam op, a Steadicam rig, or something to that extent. Yeah, it's like a ring around the camera, which might stabilize it. And then you can tell the robot, I want you to move in this way or film in
this way, and because it is a robot, it is going to repeat the movement exactly the same each and every time. Aha, and there is my problem, Kevin. There is my problem. Sorry. Yes, hit me with it. Every time you want the robot to do a move,
You have to then set that robot in the same place and set it to go there in the same way that you would want. Robots are great at factory work, right? If you're making the same thing again and again and again, you can set that robot up to go. Here's where the bolt goes. Here's where this goes. Here's where that goes.
What I worry about with this particularly, and again, long term it probably gets there, because robots become so smart they become like people. You're saying you can't program 'back to one' into a robot? No, that's not what I'm saying. What I'm saying is, if I am the director and I say, hey, that's just a little bit too low this time, I then have to
either prompt it in or get the robot technician to communicate to the robot what to do. It is not going to be as fluid as a person might be at interpreting what I am saying to them. That is my number one thing. I don't know, I think if we're dealing with a humanoid robot that is a highly advanced AI, it should be able to understand... like,
We just talked about making David Bowie and Donald Trump in one shot, or prompting entire games. I bet by the time this thing is implemented on sets, you'll be able to say, hey, Robit, pan down a little bit, and it's going to go, all right. Okay, again, very quickly. You've worked with union camera operators. You know how difficult it can be. Let it be known: Gavin is anti-union. Gavin is anti-union. No, I'm just saying my thing here is this. Yes, tell the robot to pan down,
robot pans down. Oh, robot, you didn't pan down in the way that I wanted you to. Okay, pan down, pan down this way, pan down that way. It is a multiple-step process, where a human who has spent their life doing this could interpret it in a much different way. Now, that's not to say that five to ten years from now these robots won't have that interpretability. When I saw this video, though, it made me so angry, because I was like,
that set is going to just take forever to get the thing. And it felt hype-beasty. It felt like, why would I ever put a camera in a humanoid robot's hand? I understand drone-based cameras that can follow people, and you can get a shot from a drone where it's following a subject. But this just felt like three more steps to show off, like, oh, our robots are camera people too. I was annoyed by it.
Well, mark it now, Skynet Actors Guild. That's right, SAG of the future. Gavin said, why would you ever bring one of those crusty old robots to the set? I can conceive of a million different reasons. I'm not saying you should, but I think you could.
That's okay, Gavin, you know, different strokes for different folks. Can we talk about AI replacing models now? Because we just killed the camera person industry. Let's get to models. This one seems more realistic. I'm flip-flopping. So I'm on the side of the camera people, but maybe not
models. So yeah, H&M is going to make AI clones of 30 models. And the idea here is, when you go do a photo shoot, you know, you've seen the H&M ads: black and white, some very skinny-looking people kind of buffed out.
They have to put different clothes on them. We've been talking about this for a while, how you can swap clothes on people very easily. These 30 models will be able to take these photos and be used as AI, and then in the future they will get royalties or they will get paid for where
their images are used. So that is a cool thing. But what it also probably means, specifically in this case, much more so than for, say, actors or writers or any other creative job, and I think people would argue maybe modeling isn't as creative, is that
they are replacing a lot of other people who would get modeling jobs, right? This idea that if you had an AI of one of the most beautiful people in the world, who you could put in your clothes and they would look perfect in them, you could shape them in any way you want to shape them, you could have them wear whatever
clothes you wanted them to wear. Why would you try to find another model to do a shoot that's going to cost you money, and do all this sort of stuff where you have to hire a photographer, you have to hire a lighting designer, all these sorts of people? This feels like something that was coming for a while and now just makes sense. Yeah.
I think the answer is you wouldn't do all those things. And then you hop, skip, and jump just a few months down the line, or once this is more socially acceptable, right, to have AI models: why would you use a real human being at all in the first place? Yeah. Why wouldn't you just hallucinate the entire model? And, out of context, we can seem, especially me, very flippant about all this. I am not. I am very much concerned about job displacement.
The fact is it's still happening. And so these are 30 models that H&M is doing this with, like you said. Now, right now, the human models still own the rights to their AI likeness, which is interesting. And I think that's probably to
keep people at the gates a little bit longer, let's just say, right? It's like, hey, we're going to make a digital model of you. You can go use that model with other companies, even competitive ones. You still own it. But again, that's just turning the dial a little bit, because the next phase is: actually, we're going to own
this next phase of AI models. We're only going to use 10 of you, because we can style transfer you enough to get different looks out of it. And then it becomes: maybe we only need one human model. Or there's the thing where, you know, there's this big argument that OpenAI and different AI companies are making to the government right now, that copyright shouldn't matter for AI models. So if that's the case,
do you need any models? Because if you can prompt a unique model out of GPT-4o ImageGen now, then you just have a digital model wearing your clothes, and then you're not paying anybody. What I want is a video of an Atlas robot with a disheveled tie trying to do a cool model lean in a chair with a robot cigarette in its mouth. And I want it to show that it's going to replace humans like those camera operators. Wait, what is a robot cigarette?
A digital... an e-cig. A digi-cig. Is it like a little tiny robot that, like, brings tobacco through its body somehow? Oh, there's like nanobots that actually go crawling into the robot's mouth? I think it's just ones and zeros. It's a cute cig. All right, Kev, it's time to see what some people did with AI this week. It's time for AI See What You Did There. Sometimes you're scrolling without a care / Then suddenly you stop and shout
My first story this week is a really good example of what's a possible vibe coding, but also kind of how information spreads and how apps spread on the modern AI internet. There's a kid whose name is Martin and he is 18. at least according to his X handle. His name is Martin. His X handle is underscore Martin sit.
And he basically tweeted out a video that says, we built Cursor for 3D modeling. And if you watch the demo, what he does is he draws a little house, and then he pushes a button, and that house turns into an image, and then eventually it turns into a 3D model. And what's cool about this is,
it basically takes all the open-source stuff that we've been talking about, how you take drawings to 3D, and puts it into one place where it's all manipulable in the same tool. Very cool thing. But then the interesting thing was, if you look at his other tweets after this,
like, he's gotten people coming after him. He's had like 10 VCs reach out to him. He's had multiple founders talk to him about how to raise money. And it just goes to show you what the environment for very quick vibe-coded apps is like. And in this particular case, I do think there's something there, right? Because Cursor made vibe coding very easy, and the 3D asset thing,
if somebody was able to make 3D assets super easy to create, that does feel like a really valuable tool that many people would pay for. Here's the wild thing. I love when people build in public, because it informs and it inspires.
With a lot of these tools, you look at it and go, oh, they bolted open source A into open source B and they got the result. I love that people are hopping in to give him advice on how to raise money and start a business around this and formalize this. I fear that on the other side of that, there's someone else
watching that going, aha, this is an interesting pipeline. How do we leap on it? How do we productize it? Which is all to say, Gavin, I'm here to announce my brand-new Cursor for Big Booty Bears: cursor programming, 3D modeling stuff.
If you code the big booty FarmVille knockoff that we have proposed on this show, we have got to participate in some way. Don't just put our faces on the title screen. Don't just have us approving your game. We want to participate in this big butt farm.
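To make that "bolted open source A into open source B" idea a little more concrete, here's a minimal sketch of what a draw-to-3D pipeline like the one in Martin's demo could look like. To be clear, this is our guess at the shape of it, not his actual code: the function names are hypothetical, and the model choices hinted at in the comments (ControlNet-style sketch conditioning, a TripoSR-style image-to-3D step) are assumptions.

```python
# Hypothetical sketch of a "Cursor for 3D modeling" style pipeline: rough drawing in,
# 3D asset out. Function names and model choices are placeholders, not Martin's project.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    mesh_path: str  # e.g. a .glb or .obj file on disk


def sketch_to_image(sketch_png: str, prompt: str) -> str:
    """Step 1: turn a rough drawing into a clean rendered image.
    In practice this might be a sketch-conditioned image model
    (ControlNet-style scribble conditioning is one common open-source route)."""
    raise NotImplementedError("plug in your sketch-conditioned image model here")


def image_to_mesh(image_png: str) -> str:
    """Step 2: lift that single image into a textured 3D mesh.
    Open-source image-to-3D models (TripoSR and friends) are the usual suspects."""
    raise NotImplementedError("plug in your image-to-3D model here")


def build_asset(sketch_png: str, prompt: str, name: str) -> Asset:
    """The whole 'draw a little house, push a button, get a 3D model' flow."""
    image = sketch_to_image(sketch_png, prompt)
    mesh = image_to_mesh(image)
    return Asset(name=name, mesh_path=mesh)
```

The point is less about the specific models and more that every stage is now an off-the-shelf piece you can wire together quickly, which is exactly why a demo like this can go from tweet to ten VCs in a week.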
Speaking of really interesting and cool companies, a16z just did a demo day for their Speedrun program, and a company showed up called Talus Robotics. This was a video that came out from Ryan Ben Malek, who I think is the president of the company, or at least somebody at Talus. And what their company is doing
is using robot dogs, like the Unitree robot dogs, for blind people. And what's fascinating about this, as we're watching this video, is, if you're not familiar, I didn't know this, but seeing-eye dogs cost about $80,000 to train. Obviously they age like actual
animals, and eventually they have to be replaced if the person is blind their whole life. And what this company is offering is, like, $10,000 Unitree robots and the ability to have these be cheaper, get out to a lot more people, and then eventually maybe even be better than a traditional seeing-eye dog. To me, this was just a very cool use case
of AI in the real world that I had not thought about. But it is often that thing where people say, I think Sam Altman himself says, you have to think about where the eventual technology is going to be. This feels like a company that's going to get there in a couple of years and do some really amazing stuff. I get chills looking at the video of it
in the real world, you know, assisting somebody, the little robot walking around. Like, that's incredible. I would not have thought about that use case. Better than an Atlas holding a camera. I get it now. I see it. Yeah, see, exactly. This is what you'd be doing with robots. No, but this is amazing. And you and I speak from experience here. It is very difficult to mount machinery to a Labrador or a golden retriever or a German shepherd.
But here, two for one: when we're decommissioning them from helping people, you know, assist in their daily tasks, you put these things on the front lines, baby. Oh yeah, you're right. So this might also, you're saying this might also be training for the robot war, where, like, the blind people are training these dogs to do warfare eventually? Is that what you're saying? Might also be. Okay, Gavin, let's pretend like we don't know here. Like we're not insiders. I want
to talk about SynCity. Yeah, me too. This is so cool. Tell us what this is, because when I saw this, I was like, I want to play with this right now. Yeah, I will go to Syn City, pop a couple sugar pillows in my mouth, and be cruising through Syn City. This is S-Y-N, as in a simulated city.
Shout-outs to Sonny here, who I think dropped a paper but no code yet. But this will let you generate a SimCity-style isometric tiled city that has coherent buildings placed within natural landscapes, and you just use words. If the demo proves to be anything like the video, it just looks like a quaint little city simulator where you can ask for
a college campus, a water park, a city, an industrial post-apocalyptic town, and it generates these little tiles that exist on a coherent grid. And it would just be so much fun to play with this and rapidly prototype little worlds that you can move about in.
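For anyone wondering what "tiles that exist on a coherent grid" might mean mechanically, here's a toy sketch. There's no code released yet, so this is only our reading of the demo, not SynCity's actual method: the generate_tile and build_city functions and the scan-order neighbor conditioning are all assumptions.

```python
# Toy illustration of a prompt-per-tile city grid where each new tile is generated
# with knowledge of its already-placed neighbors, so edges stay coherent.
from typing import Dict, Tuple

Coord = Tuple[int, int]


def generate_tile(prompt: str, neighbors: Dict[str, str]) -> str:
    """Stand-in for whatever model renders one isometric tile.
    `neighbors` carries descriptions of adjacent tiles so borders line up."""
    context = "; ".join(f"{side}: {desc}" for side, desc in neighbors.items())
    return f"<tile '{prompt}' matching [{context}]>"


def build_city(layout: Dict[Coord, str], size: int) -> Dict[Coord, str]:
    """Fill a size x size grid in scan order, so every tile can see what came before it."""
    city: Dict[Coord, str] = {}
    for y in range(size):
        for x in range(size):
            prompt = layout.get((x, y), "empty grassland")
            neighbors: Dict[str, str] = {}
            if (x - 1, y) in city:
                neighbors["west"] = layout.get((x - 1, y), "empty grassland")
            if (x, y - 1) in city:
                neighbors["north"] = layout.get((x, y - 1), "empty grassland")
            city[(x, y)] = generate_tile(prompt, neighbors)
    return city


# "You just use words": describe a few tiles and let the rest default to grassland.
city = build_city(
    {(0, 0): "college campus quad", (1, 0): "water park", (0, 1): "industrial post-apocalyptic block"},
    size=3,
)
```

The only real idea here is that each new tile is told what its already-placed neighbors look like, which is roughly what you'd need to keep a water park from slamming into a college quad with mismatched edges.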
This reminds me of the idea of how AI could change gaming in a really significant way, right? Not just, hey, we can make assets faster. This is a different type of gaming that would only be playable with AI. And I can think, as a kid who grew up playing, you know, the original SimCity and all the, you know, Cities games after that,
this would just be a cool thing to try. And imagine a world where you plop down an alien unit in the middle of a normal neighborhood. What would that interaction look like? The designer may not have thought about that, but if I thought about it, then I'm kind of co-creating the game as I go along. That is such a cool idea to me from a creative standpoint. You ever play SimAnt? Oh, I love SimAnt. SimAnt was great. I mean, I'm a giant Will Wright fan, so SimAnt's fantastic. Yeah.
Oh, no one loves SimAnt. You're the first person. I even like SimTower, but I'll digress. SimAnt, phenomenal game. How about SimEarth? SimEarth was also good, but very nerdy. SimEarth was very nerdy. I didn't really get into SimEarth. I like SimEarth. You might have been too young for SimEarth, actually. SimCopter was also good. Anyway, Will Wright, we love you. You're a hero amongst heroes. Okay, Kev.
We did a little bit of stuff with AI this week. We've talked a lot so far already, but I do want to really quickly shout out something I worked on, only because it was a dumb thing. I turned it around in like three hours, and I thought it was fun. I saw this video that got posted of people using Hedra Character-3,
the model that we really like. And they had basically generated a podcast, and it was like a cute girl talking to a, you know, twenty-something guy. And it was this dumb kind of back and forth. I can't remember exactly what it was, but it was kind of stupid, like, you know, a funny thing. And everybody was like, oh my God,
fake podcasts are going to be so big, and all this stuff. So I was like, you know, I wanted to try something unique and creative, and I just thought, well, what would I try? And I generated a fake podcast called Dial-Up
Diaries, and it's about two guys in their 50s, maybe even their 60s, who are discussing the sounds of dial-up. Because to me, some of the best things that come up on TikTok are these weird-ass podcasts. So maybe we play what I made and you get kind of a sense of what it is. This is clearly the San Bernardino region. This one, this one changed me, Bob. I remember it vividly. It started clean, kind of a scream...
No, no, no. That's too early on the screech, Bob. The San Bernardino had a longer handshake before the carrier tone kicked in. It was more like... Kekekekekekekekekekekeke. So, Kevin, I mean, it was just dumb. It was fun to do. But one of the interesting things about this is just how fast you can do this. And what I hope going forward... this reminds me of that interdimensional cable thing from Rick and Morty. I'm going to tell my kids this was a call for help. Yeah.
It doesn't look that different from Leo Laporte in some ways, right? But I do think, how fun would it be to see a full channel of these? And, you know, we've talked a little bit about those weird formats coming out of Korea, or ReelShort out of China, which are these kind of scrollable videos
that are ongoing soap operas, shot with bad actors and everything. But I could see a version of this where, almost like that... there's that website, I can't remember what it's called... websim.ai, where you can create fake websites. Imagine a website where you could create fake
content, but it would have to be good. Like, the tricky thing is, you have to have... You could vibe code it right now, buddy. That's true. Yeah. I don't think we're there yet, but I think it's not that far away from a world where you could say, hey, make me a funny video about two guys talking about dial-up internet sounds, and it helps with the prompt, helps with the creative, and then spits it out. It would be a tricky thing to figure out how quickly you could pull that off, I think.
When we exit our Series D, Big Booty Bears, the massively multiplayer... then we can do whatever we want, Kevin. We can do whatever we want. That's what I'm talking about, Gavin. So let's be the first two-person billion-dollar company. This week, I got my hands on the brand-new OpenAI real-time voice model, which was unceremoniously updated in the wake of everything else.
It's faster. It's more performative. It's responsive. You can ask it to scream. That is nightmare-fuel-inducing. But I got it to just pronounce the letter O followed by the letter A, 50 times in a row, and it fully got caught in a loop that lasted for minutes in my living room, and my wife is still mad at me. I think we should just listen to it very quickly here before we go. Go ahead, let's hear it. I mean, don't you want to see how long it goes for? Okay, okay, that's good. That's good.
Now do it again, but faster. Here we go. Okay. Thank you. I would be so mad at you, Kevin, if I were her. I would be so mad. I know. April, we're sorry. We'll see you all next week. We'll see you all next week. Thank you for joining us. See you all next Thursday. Bye, everybody.