
Meta Podcasting

Jun 24, 2025 · 36 min

Summary

Andrew discusses his new role hosting the official OpenAI podcast, sharing insights from his interview with Sam Altman and the company's encouragement for tough questions. The conversation then shifts to Tesla's Robotaxi launch in Austin, debating Elon Musk's sensor strategy versus Waymo's approach. Finally, they explore the potential of ByteDance's remarkably cheap and fast video AI model for animation and creative projects.

Episode description

On this episode, Justin and Andrew break down Andrew’s new role as host of the official OpenAI podcast — including behind-the-scenes insights, why Sam Altman welcomed tough questions, and what it means for transparency in AI. Then, they dive into Tesla’s Robotaxi rollout in Austin, Elon’s sensor philosophy, and what autonomy could mean for accessibility. Finally, they geek out over ByteDance’s shockingly cheap and fast video model, and why the future of animation might cost just a few cents.


Chapters


00:00 - Intro

00:25 - OpenAI Podcast

12:05 - Tesla's Robotaxis

27:10 - ByteDance's Video Model

35:23 - Wrap-up



Transcript

Intro

Hello and welcome to The Attention Mechanism. My name is Justin Robert Young, joined as always by my friend, Andrew Mayne. How you doing? Hey, buddy. Good to be here. And we're going to do a little meta podcast analysis here because we're going to podcast about another podcast. Yes, you hear us talk on this podcast, The Attention Mechanism.

OpenAI Podcast

But there is another podcast that debuted this week. It is a top tech podcast, according to the charts. And it is the OpenAI podcast hosted by... I'm going to check my notes here. Andrew Mayne. Yes. You know, Justin, I made a promise to you. If I was ever going to do a podcast with another host, it would have to be somebody who would make you proud.

And in this case, my first guest on the other podcast was Sam Altman. Sam Altman. Yeah. Who's done a few things in the AI world. A great conversation and a great way to kick this off. What can we say about the OpenAI podcast? Well, for starters, there is now the official OpenAI podcast, and I was asked by OpenAI to be the host for that. That was very cool to have them ask me to do this. The reason they asked me, a former OpenAI employee, was that

I have a certain amount of freedom to ask questions. People there can ask questions too, but I'm not going to worry about a supervisor going, Andrew, you made it uncomfortable because you asked about this or whatever. I don't work at the company anymore. I can ask about anything. It's not going to create any kind of complications. So OpenAI is obviously in the news a lot. There's a lot of controversial stuff out there.

They encouraged me to bring that up and talk about it, which we did. We talked about the New York Times lawsuit, Elon Musk, a lot of the hot-button issues that companies normally don't proactively go out and engage with, but they wanted to. And so obviously I'm biased because they asked me to do it. For me to be the host of it was great, extremely flattering.

And Justin, as my witness, the only pushback we ever got was: don't be afraid to push more into these issues. Yeah, to ask more probing questions. And it's been an absolute honor to work with you on it. The reaction has been stellar, at the upper end of where I was hoping it could be, because I do think the idea of a company like this wanting to have a kind of

on-the-record series of conversations about where they are at this stage is unique. These companies kind of don't come along every day. The ones that explode tend to become... institutions, which means that they move beyond the initial exciting phase where they are building the product that they are really going to be known for. So I thought it was exceptional.

that they wanted to do it. And to their credit, like you said, the notes were, hey, push harder. Speaking from their perspective, they don't want it to look like this is too much of a thing where they're cutting around the edges of issues. They want to be able to answer these things honestly.

Yeah, and also because I don't work there, if I say something stupid or ask something stupid, it's on me and not the comms team or whatever. There's also this element of, hey, listen, he's the chimp that ran into the studio, we don't own him.

So, yeah, it's been very cool. We've been working on this for a while; in the corporate world, it takes a long time to get things ready. I think we've hinted at that a little bit and been, you know, probably annoyingly coy about it.

But it's good to finally have it be out there and for people to be able to see what we've been working on. We're still going to do The Attention Mechanism. That doesn't go away. This has just been fun. It's been great. It's been really, really cool to sit down

with people at OpenAI, both people I worked with and people I don't know, and ask some questions about the inside of the company, what made them make certain choices, where things are headed. We got Sam to give us a more specific timeframe for GPT-5, which was cool. Yeah. We'll probably be breaking some news about upcoming features. We got the most concrete answer yet to "GPT-5 when."

The question apparently asked most often of OpenAI's social media team is just "GPT-5 when," and we were given the answer of summer. Yeah. Well, now they have no reason to listen to the podcast, Justin. We also have some other interviews coming up with insights into some of the product stuff. I think...

This is the thing I've said before, and it's particularly true of OpenAI and true of other companies. A lot of people spend time trying to read the tea leaves or listening to gossip about where things are headed when, very often, it's right in front of you. You read the blog posts and stuff like this. We'd seen this before with some of the model capability stuff, like, oh, we heard they have this thing where the models can learn. I'm like, yeah, there was a paper they actually published

talking about this. Nobody paid attention or cared. Yeah, this was in the aftermath of all the Sam intrigue. It was like, oh, are they keeping the spectral ghost in the basement? Will Q* haunt us? And it's like, no, it was written about in this paper. You could pick that up. And so I think that

there are stories that OpenAI wants to talk about and also directions they want people to be aware of. One of the things that's come up a lot is agentic, agentic, agentic, and the idea that you have to reframe the way you think about these systems beyond just a chatbot that gives you an answer, and think about things that go off and continue to do work.

And I've had a couple of conversations about the fact that when I ask Deep Research a question now, it runs in the background. And that's what Sam talked about on the podcast. When I was there, if we'd said, a year or so from now people will be fine with letting an AI take its time to give an answer because the answer is going to be so much higher quality than what we expect... And I think the problem is,

in any scenario where we try to envision the future, we try to imagine a simpler future than our present. This is why we make a lot of mistakes. You see this in science fiction: we're going to have the one world government, we make contact with one alien race, we have one giant robot, because it's hard to think about how complicated things are. But the one thing we know about the flow of time, other than reset events like asteroids, is that

it heads towards increasing complexity. And one of the things we have to realize about how we'll all deal with AI in the future is that it isn't one specific way. There is the chatbot. There is also the thing you tell to go off and run loose in the background until it comes back to you. There are probably going to be omnipresent systems, much like the Echo and things like that we use for other kinds of tasks. And I think that's what people have to start realizing:

the future of AI is very big and hard to see, because it's going to be more complicated than our present with AI. Yeah. And the human expectations of it as well. That was something Sam touched on: his model of consumer behavior was always just faster, quicker, better, that's what the consumer wants. But he's been surprised that there's this latent desire to say, oh no, I want to watch this model thinking. I want to watch a loading bar, because now I'm going to trust

the product that comes out of it more, I'm going to feel better about it. There's this back and forth to it. I think Deep Research is such a groundbreaking product, probably more than people really understand right now, but the fact that it takes a little while to get there is pleasing. Weirdly, because I do agree with him, I wouldn't have expected that to be the customer preference. Yeah, I think

it's going to be great to listen to them talk about where things are headed and the choices they've made. I'm certainly getting feedback from people about topics and things like that, which is to be expected. So I'm excited. You know, my utility here, for us and for what I'm trying to do, is going to depend on my independence, on being able to ask interesting questions and continue to do that. That being said, I don't take lightly the fact that

they asked me to do this. To look at a company that was small when I was there, that's now a very big name, a brand name, and to have them say, hey, do you want to come do this, was very, very flattering. And having been working on this project for a while, I will just tell everybody: great names coming up. You are going to very much enjoy the conversations. Yeah, and Justin, I've got to say it.

This would not be possible without you helping out. Justin has been an integral part of this, both planning out the questions and working with OpenAI. Justin has done the brunt of the work of figuring out everything from, hey,

the logo needs to be here, to what are the best questions we can ask Sam Altman in the time we have. Our buddy Brett Rounsaville has been a producer on this. He's been amazing. We've got a really good production team helping us: cinematographer Matt Silos, we've got Jesse, we've got Eric, and we've had another DP come in and help out. And Will, absolutely, Will's been turning things around really fast. And my wife,

Rush, she's been doing everything from helping me figure out what to wear to giving really great editing notes. Yeah, stylist, makeup artist, editor. Lead consultant, I would say. She has been invaluable when it comes to helping shape the visual product in so many different ways, as well as being, of course, a tremendous and integral part of the human component for our illustrious host.

One of the lessons I've learned, probably a couple of decades later than when it would have been really useful, is that no matter how creative and how much of a self-starter you are, if you put yourself in a room with other creative self-starters, magic happens. Yeah. So everybody, go ahead and check it out. Download it.

View it wherever you do those things. They are on every platform. If there is a platform this podcast is not on, hit me up, Justin R. Young on Twitter, and I will make sure that it is there, be it a podcast directory or a video service you would like to see it on. We will make sure it shows up there. So go ahead and enjoy that. More coming.

More coming very, very quickly. So let's move on to the rest of the world of AI. Indeed, my home turf of South Austin, Texas is ground zero for a big...

Tesla's Robotaxis

and that is Tesla's robotaxis. It's an interesting way that they're framing it. They're saying that they are starting to take customers, which is... well, it's a fleet of 10 right now. They have a list of people they are greenlighting to use it, many of them influencers. So they want this out there.

I don't know where they are in terms of regulation with the city or the state, but there's a big bet by Tesla right now on the future of autonomous driving. And Elon Musk was out on television today talking up this release, saying that the solution put together by Waymo and some of the others, with multiple different sensors, LiDAR, radar, is not the way this should go. That instead,

neural network plus camera plus microphone is a simpler, and indeed Elon says safer, way to go. What are your thoughts? I have a lot of thoughts. On the technical side, I would say that I'm a big fan of Waymo and a big fan of Tesla Full Self-Driving. I have the update that lets me drive hands-free in my Tesla, and I can go from driveway to destination in San Francisco without

touching the wheel, like literally not touching the wheel. And it's amazing. I've been in the car with him. I've witnessed him do it. Yeah. I would say that Waymo uses a bit of a different technology stack, like we talked about. LiDAR, for those of you curious, the L is for laser. So basically it's a kind of laser radar. It has a

series of lasers that sweep around and map the area with precision, whereas the Tesla self-driving system uses, I think, eight cameras located around the car. I can make arguments for either, in that they both have strengths and limitations. I've been in situations with my Tesla where it didn't see things, like a railroad crossing arm, the block, whatever we call the thing that comes down over the rail. It did not notice that.

And I know that there are some limitations with LiDAR in certain kinds of weather and stuff. So I'm not prepared to say, ah, this is the clear winner. I think that, you know, at one point

Tesla actually had a forward radar in the car and they pulled it out. And I thought that was a mistake, because I know there's a certain kind of signal you should be able to get from it. I'd say I feel comfortable in both. But when I'm in my Waymo, I don't have to pay attention at all. I'm sitting in the back seat. Nobody's in the front seat. With the Robotaxi, I think...

One of the things both these systems do is map out ahead of time the area in which you're going to be driving. Now my personal Tesla can go to just about any destination on a map, but when you start running these taxi services, you make sure these

vehicles have actually mapped that area out. So if you ask me which is going to perform better in a pre-mapped area, a Tesla or a Waymo? I don't know. I'm not going to concern myself with it. I just want the availability of it. Waymo is incredibly smooth.

Waymos are very, very smooth. You watch them handle really interesting interactions. And we're going to get a lot more data now that people are going to be observing the robotaxis and how they behave. I've seen some anecdotal stuff online of them doing things

maybe they're not supposed to. But let's give it some time to sort that out, because people also like to collect videos of Waymos going into potholes and stuff. So I think it's a great moment. I have questions, though. I have questions. Yeah, real quick, because you brought something up about the Waymo stack.

Elon said today that the reason they pulled radar out of Teslas was that if the camera sees one thing and the radar sees another, then the question is, what's real? And that the glitches, the problems that arise from conflicting data, are part of the reason why Waymos sometimes get into situations where they glitch out or go the wrong way or whatever. Do you buy that?

I have questions. Yes, it comes down to a data problem; it comes down to how you then look at that data and evaluate it. And I think they put the radar back in. To be clear, I'm talking about the radar here, not LiDAR. As far as I understood, they did put it back into the vehicle, which suggests they may have realized pulling it was a mistake. That's my assumption.
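For the curious, the textbook answer to "the camera says one thing, the radar says another, what's real?" is not to throw a sensor away but to weight each reading by its confidence. Here is a minimal sketch of inverse-variance weighting, the core of a Kalman-style fusion step; this is purely an illustration, not anything from Tesla's or Waymo's actual stacks:

```python
# Minimal sketch of sensor fusion via inverse-variance weighting.
# Each sensor reports an estimate plus a variance (how uncertain it is);
# the fused estimate leans toward whichever sensor is more confident now.

def fuse(camera_dist_m: float, camera_var: float,
         radar_dist_m: float, radar_var: float) -> tuple[float, float]:
    """Fuse two noisy distance estimates by inverse-variance weighting."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_dist_m + w_rad * radar_dist_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)  # the fused estimate is MORE certain
    return fused, fused_var

# Camera thinks the car ahead is 42 m away but it's foggy (high variance);
# radar says 48 m and is barely affected by fog (low variance).
dist, var = fuse(42.0, camera_var=9.0, radar_dist_m=48.0, radar_var=1.0)
print(f"fused distance: {dist:.1f} m (variance {var:.2f})")  # ~47.4 m
```

The point of the sketch is that conflicting readings are not a paradox; they are exactly the input a fusion policy is trained or tuned to resolve.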

I think the whole point of using AI systems is they learn to handle these things. We saw a problem with Tesla vehicles where they didn't see emergency vehicles when their lights were flashing, because they're covered with this very reflective material.

And it was like, is it there? Is it not there? We have these problems in vision systems. A vision system is deciding what to believe and what not to believe all the time. And the Tesla system, by the way, only recently, within maybe the last two years, shifted over to a more modern

neural net architecture like ChatGPT; they started using transformers. And if you're telling me, oh, it doesn't know what to believe, well, that's what in AI we call the policy. You build a policy based upon data, and nobody has more data than Tesla on this stuff. So I kind of go,

I don't know if he's really convincing himself of a technical argument because he doesn't want to pay the cost of trying to equip cars with LiDAR. Because the moment he says, yeah, LiDAR is better, then people go, why aren't you putting $30,000 worth of LiDAR into these cars? In every car, yeah. Yeah, once the cost of LiDAR drops down considerably, I can see it.

Maybe it becomes another feature. Vision has been great. I trust my Tesla within certain parameters with vision. Yeah. But the "we don't need more sensors because it's too much noise" thing is a silly argument. No, you have to work with the architecture of your system, with how much data you process. The whole point of AI is to understand when and when not to use certain data. Getting outside the technical thing, here are my questions. His argument was camera plus...

Yeah, sorry. His thing was camera plus neural net was the way to go, because that is the way roads are designed: eyes plus brain. But I do see the argument that, look, if AI is supposed to parse through this data faster and easier and more effectively, then why not more data? Yeah, but, you know,

when you take another fast-moving form of transportation like aviation, it originally was eyes only. Then we added radio towers sending you signals to let you know where you are. Then we added early forms of GPS. Then we added radar, and we keep adding stuff. There's never been a point where we said, oh yeah, this thing that picks up other wavelengths and gives us even more data,

why would we want that? You might say, yeah, technically it's hard to handle. And he understands he has commercial vehicles on the road and he has to backfit things over years. I bought my first Tesla almost 10 years ago; it was supposed to be theoretically upgradable to full self-driving. We know that's a joke. I didn't think it was a joke when I paid 10 grand for that feature. But the reality is that anything he says, he then has to,

in some context, follow through on and say, well, therefore I have to make everything else capable of it. So I don't know when he's telling me a thing because he believes it's technically true, or because he's looking at the spreadsheet saying, well, this has to be true.

Yeah, because if I say this other thing is helpful, people are like, well, you sold me a car without it. I don't know. The point is, I think they can do a great job. Like I said, I trust, for the most part, the self-driving in my Tesla. I use it all the time, but I'm open to the idea that maybe more data is always good. But anyhow, here's the economic question I have for you.

And I haven't heard the answer to this, because I see these people very excitedly talking about, hey, I could have my Tesla, and then when I'm not using it, my Tesla could go around and work as a taxi and pick people up. Which, by the way, is not what the robotaxi is doing.

The original idea, Elon had talked about it, was that with every Tesla you bought, I could flip a switch when I'm not using it and it could go around and start servicing places. Now we're getting, oh no, Tesla is providing the robotaxi. Tesla will run fleets of robotaxis. Yeah. So my Model Y, which I never had any plans to use as a taxi,

I'm now going to be competing with Tesla. The vehicle I bought can't actually be used for that. I don't know how many people bought them with that expectation, but it was a bit of a switch: oh yeah, Tesla's going to build out its own fleet of robotaxis, and now they're operating it. Now, at some point maybe I can buy them; they've talked about the idea that I could buy robotaxis to go service areas.

How many do we need? What is the competition going to be like? In SF we have something like 600 Waymos, maybe fewer, I don't know the number, and the city could probably handle three to four times as many. Probably a lot more, but I wonder about that. Austin might well now have Waymo, which is operated through Uber there, Amazon's self-driving company Zoox, I believe, and now Robotaxi. So three different autonomous vehicle

companies servicing various parts of the city. Right now, Elon's robotaxi zone is in South Austin specifically, so it's not going to go downtown. But it will go through South Congress, which is a very high foot-traffic area. So you are going to deal with a lot of people and unpredictable things on the road. But, you know, look, I walk

South Lamar every week, and there's now a chance I could get run over by either a Waymo or a robotaxi. I mean, I'm probably more likely to get run over by a real human. But yeah, I've used Waymos downtown in Austin. Waymos have access downtown. Yeah, I know. I've used Waymos downtown, and I'm saying I don't know why they couldn't be there.

But yeah, I think it's neat. I don't know what the licensing situation is now, because they also still have a person in the car, but not behind the wheel, which is interesting. So I don't know what the purpose of that is. Waymo? No, I mean robotaxi. Robotaxis right now. Yeah, in the passenger seat. Well, wait. Who's riding in them then?

Right now, influencers who hype up Tesla. Yeah, but the robotaxi only seats like two people. No, they're Model Ys, so there are people in the back seat who are the passengers, and then there's a Tesla employee. They're just Model Ys with Robotaxi on the side. Oh, so it's not the actual Robotaxi vehicle. Okay. They are not. In that case, yeah. Very, very interesting. I'm excited overall because

these methods of transportation certainly are convenient for me, and they're predictable. I've had wonderful conversations with gig drivers, Uber and Lyft and whatnot, but I would say the pricing is always weird. You know, I got into one in

Palm Springs last week. Nice guy, but the interior just reeked of cigarette smoke. And I'm in this weird, uncomfortable position, because he was polite and got me from point A to point B,

but man, I would have loved not to smell like cigarettes when I got out of the car. It's this awkward situation where part of the experience is a thing I don't really want to experience. I may sound terrible for that. But that being said, I'm excited, because it's one thing to think about the convenience for you and me.

When you think about friends of ours: I have a friend who suffered a stroke and has very low vision. He was a guy who was extremely independent, got a lot of things done by himself. Then all of a sudden he's dealing with the idea that anytime he wants to get from point A to point

B, he is dependent upon another human and their schedule. When we get to the point where these things are widely available, somebody dealing with something like that can just press a button, get from point A to point B, go to work, go wherever else, do whatever they need to do, without having to feel like they're a burden, which they should never feel like.

That's what's exciting about this. That's why I'm very, very excited about this. My parents are getting older. They still drive, they're still capable, but there might be a point where I have to say, hey, do you guys need to drive as much? But then what, you're supposed to sit at home? I'm contemplating, do I get them a driver? The idea that you could have a lot of autonomous vehicles for the elderly, et cetera, is wonderful.

It makes that whole conversation a different one. And look, we both grew up in Florida, where it's more common for people to have the conversation with their family about what happens when your elders still have car keys. But in the world we're headed to, it's going to matter a lot less. In a lot of ways, Uber already solved a lot of those problems, and now this will be even more than that.

Yeah, the Uber situation brings up a challenge, which is that some people feel kind of vulnerable, and I think Uber goes to great lengths to mitigate that. I've been in Ubers where the driver didn't speak any English, which for me is not a problem, but if you're somebody who's not used to navigating those social interactions, that can be... A challenge. Awkward. A few AI things that we

ByteDance's Video Model

were talking about over the weekend. One of them is a new video model from ByteDance that you brought to my attention, and I played around with it a bunch over the weekend. It is a really, really remarkable model for how small and cheap it is. Yeah. So this is ByteDance's Seedance-1-Lite, that's L-I-T-E.

By the way, a shout-out to my favorite place to go play with different models: replicate.com, replicate like the word. You can create an account there and put a credit card in to try stuff, and you might get a 40-cent bill at the end of the month, because you're basically just using their GPUs to run things. I go there all the time to see new stuff. I didn't hear about this model anywhere; I just looked there and saw, what's this? And so,

ByteDance has released some cool stuff in the past; the first super-fast image model I loved was from ByteDance. So this model, holy cow: a five-second clip costs you something like 25 cents, compared to like six bucks or something on other systems.
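For anyone who wants to try this, a run on Replicate looks roughly like the sketch below, using their Python client. The model slug and input field names are assumptions pieced together from the conversation; check the model's page on replicate.com for its actual name and schema:

```python
# Sketch of running a video model via Replicate's Python client.
# pip install replicate, and set the REPLICATE_API_TOKEN env var first.
import replicate

output = replicate.run(
    "bytedance/seedance-1-lite",  # assumed slug for the model discussed here
    input={
        "prompt": "a space gladiator battling a robot lion, anime style",
        # Many video models also accept a starting image for image-to-video,
        # which is what made the fan-art clips discussed below work so well:
        # "image": open("fan_art.png", "rb"),
        "duration": 5,  # seconds; hypothetical parameter name
    },
)
print(output)  # typically a URL or file handle for the generated video
```

You pay per run against your card on file, which is where the "40-cent bill at the end of the month" experience comes from.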

That's the same as it costs to generate a high-quality image using OpenAI's best image generator, which is $0.25. And again, there's the complexity of variable binding and all that; there's a reason for the price differences. That being said, I thought the quality of this model was exceptional for the speed and the price. I've had projects where I was working on little things where I needed B-roll stuff, and I'm like,

oh, I could just use this. I'll tell you what, I got fascinated with using it for animation. And it was better than Sora. Sora Turbo, sorry. No, they call it Sora, but you're not wrong; the version of Sora I get for whatever I pay a month to OpenAI. But it was really, really good. I made a little anime clip of you and Sam on

the OpenAI podcast, our first fan art. Oh yeah. And I found a little clip to put the audio over, and it worked. I looked at that, and the animation was smooth, it was really, really good, and it made me think that with a little bit of time, effort, and API work, you could do something kind of

more active than that, because it obviously is trained on a lot of animation; the movements of things are incredibly smooth. Yeah, it's been fun to play with. I just did a thing in the background of a space gladiator fighting a lion that looks very much like a video game thing. It's perfect. It's what I asked for. The fidelity, the quality is good. The speed is good.

It took me 36 seconds to get this thing. And yeah, I think it's better than the Sora Turbo model, which is what's available; that being said, that model is very good for connecting stuff and more complex things. But some of these models have a level of sophistication I don't need when the thing I want is "me make pretty picture, me happy." And I showed you, if I take a photo or an image

and add it in there, it does a great job. I'll go use the best image model from OpenAI, describe something; that's what I was doing, I was using seed images. Randomly, I was going back and forth with the Australian sketch group Aunty Donna. They're launching a new tour, and there was a joke about one of them getting kicked in the balls. A fan account said it, and I wanted to make that into a Wojak, if you've never seen that meme style of art.

So I did that with 4o. I made a Wojak version, posted it on Twitter, and they liked it. And then when we were talking, I was like, oh, I wonder what happens if I throw that image into the ByteDance model and say, this guy is getting kicked in the balls. Guess what? It made it awesome. It animated it perfectly. And then I was like, all right, here's a podcast, these guys are talking, and

it looked like an anime of you guys talking. So I think it really sings if you give it an image to go with. And if you give it an animated image, the movies are, I mean, I think it's broadcast quality. Good. Yeah. I'm working on a project with some people in Hollywood right now, and part of the challenge is that there's the capabilities of the model, and then there's understanding

how you're supposed to program them and what they can and can't do. That's where it gets to be a bit of an art when it shouldn't be. And when you understand that, you understand there's a lot of cool stuff you can do. I showed you the quick Blade Runner-esque thing I made with myself, and I looked at that and I'm like, oh shoot, if I wanted to make a movie, I would actually go do that with my images, then yank my virtual self out, and I could composite.

I'm like, oh, I can think of a whole flow to go from concept to incredibly high-quality animation where the people are people and everything else is AI-generated and it matches. It's exciting, because if people know how to use the tool right, we are going to get great storytelling.

99% of people are going to use the tool in the most crude way possible and you're going to get crap. I keep seeing people like, ah, look at this AI, you could do a Harry Potter AI. I'm like, yes, you have characters that look like they're in Harry Potter land doing stuff, whatever, and it looks worse than a fan film. The eye of the director and the storyteller is extremely useful, and we have to understand there's a reason why we need them to do these things.

I've got to share this with you because this is awesome. I went in and had ChatGPT create the image on the left, and then I did the one on the right and said, a space gladiator battling a robot lion. I mean, kind of cool. Oh my God. Yeah, that looks amazing. And the fact that that runs that fast and that cheap is insane.

I mean, it just shows you that we are nowhere near the end of the hockey stick when it comes to these models getting more capable and cheaper. Yeah, that cost was something like five cents per second. The numbers right now that Hollywood and TV pay for VFX shots would be

multiple thousands of dollars, right? And now the idea that you can generate footage for a few dollars a minute. And again, there's a range of quality and stuff; it's not one-to-one. But think of where we're going to be next year. The realism, everything else, the control, I think, is going to be solved a year from now.
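Taking the figures quoted in the conversation at face value, the back-of-envelope math looks like this; these are conversational numbers, not official pricing:

```python
# Back-of-envelope generation costs using the numbers quoted above.
cost_per_clip = 0.25   # dollars for a ~5-second clip on the ByteDance model
clip_seconds = 5
per_second = cost_per_clip / clip_seconds   # $0.05 per generated second
per_minute = per_second * 60                # $3.00 per minute of raw footage
feature = per_minute * 90                   # ~$270 for 90 minutes, before retakes

print(f"${per_second:.2f}/sec, ${per_minute:.2f}/min, ~${feature:.0f} per 90-minute feature")
# Versus a single traditional VFX shot quoted at multiple thousands of dollars.
```

Even allowing for many retakes and the quality gap, that is orders of magnitude below current VFX pricing, which is the point being made here.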

And the limit is going to be how creatively people use these tools. 100%. All right, Mr. Andrew Mayne, where can people find more of you?

Wrap-up

The OpenAI Podcast. Go to youtube.com/OpenAI. Hell yeah, and that'll be my plug, too. Until next time, friends, this is your old pal Justin Robert Young saying, see you later. Diamond Club hopes you have enjoyed this program. Dog and Pony Show Audio.
