
You’re Using the Smartest GPT Model Wrong (GPT o1 Full Tutorial)

Jan 21, 2025 · 39 min · Season 1 · Ep. 42

Episode description

Episode 42: Are you truly unlocking the full potential of OpenAI's o1 models? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive deep into the capabilities of ChatGPT o1 and o1 Pro, offering insights to ensure you're not overlooking these powerful tools. In this episode, Matt showcases how to create short-form content from long-form transcripts, while Nathan discusses using o1 Pro to build a game from scratch. With specific workflows, practical examples, and mind-blowing insights, you won't want to miss how these advanced models can revolutionize your content creation and coding endeavors.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:
(00:00) Model Perception Disconnect
(05:50) Understanding ChatGPT and o1 Models
(07:58) AI Transforming Complex Problem Solving
(10:19) AI-Assisted Medical Diagnoses
(15:37) "o1 Outpaces o1 Pro"
(18:35) Efficient Podcast Clip Identification
(22:57) "Efficient Coding with XML Format"
(24:34) Optimize Instructions for Better Output
(28:07) AI Workflow Optimization: o1 Pro & Cursor
(32:17) Unexpected Gaming Industry Connections
(32:56) AI Empowering Creative Pursuits

Mentions:
OpenAI: https://openai.com
Whisper: https://openai.com/research/whisper
Perplexity: https://www.perplexity.ai
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow

Check Out Nathan's Stuff:
• Newsletter: https://news.lore.com/
• Blog - https://lore.com/

The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Transcript

Hey, welcome to The Next Wave podcast. I'm Matt Wolfe, I'm here with Nathan Lands, and today we want to dive deep into what ChatGPT's o1 models are capable of. We believe that a lot of people really misunderstand what they're actually capable of, and most people are probably using o1 wrong. And in this episode...

I think we might actually blow your mind with what they can do. I'm going to show you some simple workflows that I use to create short-form content from my long-form content using o1. But then Nathan...

he's going to break down his flow for actually building a game, and he's going to give you a little sneak peek at the game he's building. And, you know, stick with him when we get to it, because it does take a couple of minutes to understand it. But once you have that aha moment, which...

you're going to have that aha moment when he's showing it to you, you're going to have your mind blown by what o1 Pro is capable of. And Nathan's game is looking killer; you're going to see what I'm talking about if you're watching this one on video. So stick around for this one. We're going to dive in and show you what o1 from OpenAI is really capable of. Look, if you're curious about custom GPTs or are a pro that's looking to up your game,

listen up. I've actually built custom GPTs that help me do the research and planning for my YouTube videos, and building my own custom GPTs has truly given me a huge advantage. Well, I want to do the same for you. HubSpot has just dropped a full guide on how you can create your own custom GPT, and they've taken the guesswork out of it. They've included templates and a step-by-step guide to design and implement custom models so you can focus on the best part: actually building it.

If you want it, you can get it at the link in the description below. Now, back to the show. Yeah, I feel like it's the first model where there's a major disconnect between how good the model is and how good people perceive it to be. They think it's just like a slower version of a chatbot or something. They're like, how is this an upgrade? And so I think there's a major disconnect.

It also feels like it's the first time there's a model where people are kind of the limit, where the knowledge you bring to the table when you use it is kind of the limit, right? Because if you don't know how to properly use the tool,

the results you'll get back will be bad. And it takes several minutes to get the response back, and it's a horrible experience. You're like, I waited for a few minutes and you give me some crap back; why am I ever going to use that again? And so apparently a lot of people have tried it one time and just never used it again. The people who figure out how to actually properly use it are kind of blown away by what you can do with it.

Yeah, yeah. And in a few minutes here, we'll actually dig in a little bit and show off exactly how we're using it. But, you know, that's totally right. Like, I use regular ChatGPT and I'll dump a transcript from a video in, with the timestamps in it, and tell it to find

me clips from that video. And with regular ChatGPT, it'll find moments from the transcript, but the timestamps will be way off; it can't seem to line up the timestamp with the clip it found. When I do it with o1,

o1 does. It's like it's double-checking, triple-checking. And, you know, that's kind of what it's doing when it's doing all of this processing and it's taking so much longer. It's basically prompting itself behind the scenes, getting a response, and then double-, triple-, quadruple-checking and sort of

re-evaluating its response over and over and over again before it finally goes, okay, we think we got this right, here's our response. Well, we think that's what's going on. We think, we think, yeah. OpenAI's not being entirely transparent about what's going on. They're kind of alluding to having some other secret sauce, but yeah, that's probably the majority of what's going on. But yeah, I mean, I think that's the key, you know, and we'll show this later:

giving these models the proper context. With Claude or any of the other regular chat models, you know, LLMs that don't have a reasoning model attached to them, you can just kind of go back and forth. You can ask a simple question, it instantly responds back, and you just kind of go back and forth. But with o1

or o1 Pro, you know, you can throw so much context at them. You can copy and paste in, like, 10 pages, 100 pages, maybe even a thousand. I'm not sure the exact... you know what the token limit is on o1? o1 Pro, I believe, is 125.

Okay, 125,000. 125,000, but people who've tested it have said to kind of stick to like 50 to 75K. There seems to be some kind of thing; it's almost like RAM back in the day, or memory: yeah, you can hold that much, but don't fill it up. Well, according to Perplexity here, I actually did a quick search on it. According to Perplexity, the o1 model has

a 200,000-token context window with a maximum output of 100,000 tokens. So you can put up to, that's about 75,000 words, into a single prompt. The results you get, at least with coding, from sharing that much context versus just asking it a question, it's a night-and-day difference. And I found the same thing with writing. I've heard other people sharing examples too, that they've tried o1 for writing and they're like, oh,

this kind of sucks, or maybe it's slightly better than Claude, or it's about the same as Claude. But actually, with o1 Pro, if you give it tons of examples of, like, here's good writing, here's a good newsletter, or here's my best newsletter issues and some people who write

newsletters I really respect and wish I could write like, if you do that, the stuff it gives you back, even just for editing, is so good. And actually, before o1 Pro, I almost never used AI for my newsletter at all. And so I've been using Whisper Flow, basically where I can just press a button and talk to the computer, and then it just

uses AI to transcribe what I said. Wasn't that the one Riley was talking about on that episode we did with Riley? I think he might have brought that up and said he was actually using it to code with.

I believe so. I'm sure I've learned so many things from this show, like, subconsciously, where I'm like, oh yeah, I'm going to try that out. I don't know why I'm trying it out, but I am. Well, I also had to subtly slip that in, so people go, oh, they did an episode with Riley Brown, I've got to go listen to that one. I've been using it that way. And lately, for my newsletter, I will use Whisper Flow and just talk to it.

And, you know, I'll talk for like five to ten minutes about whatever I want my newsletter to be about for that day. And then I'll hand it off to o1 Pro. And I give o1 Pro, you know, I'm copying in examples of my favorite, my best newsletter issues, but also newsletters that I like,

things I don't like, and it's doing an incredible job of editing what I said to make it really presentable and professional. The previous models were nowhere near that caliber. Well, and I think what you're saying, too, is getting to the root of

why we mentioned that most people are probably using o1 wrong. The regular ChatGPT, the GPT-4o, is designed to be conversational, right? It's designed for you to ask it a one- or two-sentence question, it gives you a response, you give a reply, and it's designed to do that sort of back and forth, back and forth. You get into this long conversation, and it ideally remembers the context of the previous conversation.

o1 models, on the other hand, are not really designed to do that. o1 models are really designed for you to dump as much information as you can into that very first prompt. Like you mentioned, right? Dump in the newsletters you like. Dump in your own newsletters. Maybe the information that you want included in the newsletter that it's about to write for you. Give it all of that information, all of that context, in the very first prompt,

hit submit, and then let it go to town doing its processing, right? Let it spend 10 minutes, 15 minutes, however long it takes, yeah, processing all of that information, and then it's going to give you a nice, detailed output with all of that information. If you try to use it to, like,

chat and be conversational with it, ask it a one-sentence question and then wait for a reply, ask it a follow-up question, you're going to hate it, right? Because it's going to take forever with each question. A lot of people have noticed, too, that it seems like it's really great at one-shotting things,

in terms of, you want to give it that huge context right up front, and often after that, you're kind of done with the conversation. When you want to use it again, you open a new o1 Pro chat.

Right, right. You can kind of continue, but I've found, you know, the more and more you throw at it, eventually it kind of gets more confused. Whereas on the first prompt, if you just give it tons of context, it's able to reason about all of it and give you a great response. And I think most people don't realize that.

Like you said, they're waiting for several minutes and they're like, okay, that kind of sucks, let me talk to it some more. And then they wait more, and it just never goes anywhere. I think you pulled up some tweets about some of the interesting ways that scientists and people like that are using o1. So let's maybe talk about those real quick first, and then we'll show some of the ways that you and I have actually been playing around with it.

You know, this is something I've been noticing online too: there's a huge disconnect between the people who are just trying to chat with o1 and o1 Pro and the people who are trying to solve really hard problems with it. For example, here's this doctor on X

who's been sharing really great stuff, you know, and he's talking about how he thinks people don't realize how good this model is. Like, they've been using it to help create an incredibly complex composite biological framework, which, you know, there's a lot of technical stuff in here, but it sounds like basically this is

something that's actually helping them identify target drugs that you could create, and even giving them good information about how to possibly create the drugs and how to run tests on them, things that before you would need a whole staff of people to help you do. And he's saying that now,

instead of having that staff, he's basically able to do it himself. Which means, you know, if you gave this kind of technology to every doctor, how fast we discover new drugs is going to go up dramatically. I think it's not a surprise that you're seeing it more in complicated areas like, you know,

engineering, the medical field, things like that. You're noticing that people in those fields understand how good these models are, because their needs are more than the average person who's asking, you know, I'm shopping for this or whatever. Right, right. Here's another tweet from Deedy, who's a venture capitalist at Menlo Ventures, a really well-known Silicon Valley venture fund. And he's saying that, based on the data, already...

AI, like o1, like reasoning models, are doing better than doctors at solving hard-to-diagnose diseases. So as of right now, and I think the numbers were 80% versus 30%. What do those numbers mean? Like, for hard-to-diagnose diseases: when you tested whether a doctor could diagnose the disease, the doctors got it right 30% of the time. Wow. The AI got it right 80% of the time.

And this is not the new models; this is the very first preview of o1. And from a lot of stuff I've seen, o1 Pro is probably in the ballpark of three times smarter than that model. So probably when the new data comes out, it's going to be like, okay, it's not 80 percent, it's 95 percent or 90 percent, and the doctors are at 30 percent. It's a huge difference. I mean, this just shows one reason we started the podcast, right?

I think most people don't realize this is society-changing stuff. This should be where we restructure society, where, you know, you still need doctors, but you have doctors who are heavily relying on AI to help diagnose

diseases. Ideally, you have the doctors that sort of understand all of the different diseases and the ways to cure them and things like that. But, you know, maybe they're not always the best at actually diagnosing what the disease is; they're probably the best at telling you how to handle it

and, you know, work with you on the treatment of it. So I think we're going to get to a point, and I know I'm probably already going to start doing this, where if I have a checkup or a doctor's appointment, or something's bothering me and I'm going to go to the doctor, I'm going to put all of my symptoms into something like o1

and basically see if o1 can tell me what it thinks the problem is first, but then go and use that information and bring it to a doctor. I've been hearing stories lately about people who basically get to the root problem of their various ailments by using o1 and then going to the doctor, and the doctor essentially confirms it for them and then helps them with a course of treatment. So it's not eliminating doctors. It's just sort of,

all right, let's get an opinion here, and let's get a second opinion from a real doctor, and then let's overlap the two to figure out the best course of action. And I really think that's probably going to be the smartest way for people moving forward. I think it's going to get to a point where, for any doctors that refuse to also leverage AI to help with some of the diagnosis and stuff, it's like,

that's borderline going to be unethical, to not get a second opinion from AI, or at least, you know, get a first opinion from AI and then have a doctor confirm the opinion. Right, right. I think the best thing we should do now is, you know, you talked about using it for your newsletter, you've talked about using it for coding, I've talked about using it for doing shorts for some of my videos. So I think we can jump into one of those.

I actually have ChatGPT o1 Pro running right now in the background, because I knew it was going to take a while to process the transcript. So what I'm going to do right now is I'll go ahead and jump in and we'll run this through ChatGPT o1, see what kind of clips it finds for us, and then we can compare it with what o1 Pro gave us and see if we can spot any differences. Because I think with this whole strategy of

letting it find viral clips for us, I don't necessarily think you're going to need the Pro mode. I bet that regular o1 will probably do it just as well. So I'm going to go ahead and share my screen here. If you are listening on audio, you can check out the YouTube version of this and actually see it in action. So I've got ChatGPT o1 open, and here's one of our recent YouTube episodes, AI Predictions That Will Completely Change Life in 2025.

And if I go down to the bottom of the description here on YouTube, there's actually a button that says Show Transcript. So what I like to do, and I do this on a lot of the live streams that I do on my YouTube channel, is once the live stream's over, I go and click the Show Transcript button, and it puts the transcript over on the right side of your YouTube window. So we've got the entire transcript of this recent podcast episode that we did.

And I'm going to go ahead and just select it all, including the timestamps. You can see I'm selecting the actual times as well as the transcript, because OpenAI's o1 is going to need those times as well to know where to tell us to pull those clips from. So I'm going to go ahead and copy this whole thing, and then I'm going to jump into o1 and just paste this whole thing into o1. So you can see I've got the entire transcript loaded in here right now.

And if I just add a couple of lines, let me get all the way up to the top of our transcript here, and I'm going to add a couple of lines here, and I'm going to say: below is the transcript from a recent podcast episode. Please review it and find clips that have the potential to go viral. Clips should be roughly 60 seconds and make for good short-form video.

Right. So I'm just giving it this little prompt up here, and then I'm pasting in the entire transcript, and I'll go ahead and submit it. And you can see it's going to take a minute or so to process this whole thing. So I'll scroll down; you can see it's thinking right now. So it actually responded in less than a minute. So this is regular o1, and it's actually responding.

But I forgot to give it a little extra context here. I forgot to tell it to give me the actual timestamps. So you can see right here, it's giving us the clips: Is o3 Actually AGI? And then it actually gives me a little transcript section.

But what I would typically do is, let me actually start over real quick. I'm just going to go ahead and copy and paste the entire original prompt and do it one more time, because I want to tell it to actually give me the timestamps; that just makes life easier, and I kind of forgot to put that in. So let's go ahead and copy all of this. I'm going to create a new chat here, paste the same thing in, and then at the end of this prompt say: give me the timestamps

for each clip. And then we'll go ahead and run this one more time. Yeah, you've got to give it the right context. You know, it's kind of like, I think of The Hitchhiker's Guide to the Galaxy, where it's like, what's the meaning of life? And it comes back, you know, how many years was it later? A thousand years or whatever it was. And, you know, it's 42. It's like, okay, you've got to give it the right context, tell it what you expect to get back.
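If you would rather script this step than paste into the ChatGPT UI, a minimal sketch of the same workflow against the OpenAI API might look like the following. It assumes the openai Node package and that an o1-family model is enabled on your account; the model name, file name, and prompt wording are illustrative, and it folds in the rank-ordering instruction Matt and Nathan discuss a bit later.

```typescript
// Sketch: ask an o1-family model for clip candidates from a timestamped transcript.
// Assumes `npm install openai` and an OPENAI_API_KEY in the environment.
import OpenAI from "openai";
import { readFileSync } from "fs";

const client = new OpenAI();

async function findClips(transcriptPath: string): Promise<void> {
  // transcript.txt = the YouTube "Show transcript" text, timestamps included.
  const transcript = readFileSync(transcriptPath, "utf8");

  const prompt = [
    "Below is the transcript from a recent podcast episode.",
    "Please review it and find clips that have the potential to go viral.",
    "Clips should be roughly 60 seconds and make for good short-form video.",
    "Give me the start and end timestamps for each clip, and rank the clips",
    "from most to least likely to go viral.",
    "",
    transcript,
  ].join("\n");

  // o1-style models do best with one big, detailed prompt rather than a back-and-forth chat.
  const response = await client.chat.completions.create({
    model: "o1", // swap in whichever o1-family model your account exposes
    messages: [{ role: "user", content: prompt }],
  });

  console.log(response.choices[0].message.content);
}

findClips("transcript.txt").catch(console.error);
```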

o1 is actually quite a bit faster than o1 Pro. That's definitely something I'm noticing just comparing them side by side, because I actually ran o1 Pro while you were talking earlier, just to let it start going. This one takes... maybe you can see here, it thought for 38 seconds, and now it's actually giving me the timestamps. So we've got clip one, which is from zero to one minute,

"2025 Will Be Wild for AI Plus o3 IQ Levels." And then you can see it actually gave us a little transcript of the section it's telling us to clip. The next one is one minute and two seconds to two minutes and two seconds. This one actually feels like it's going very linear, where it's taking the first handful of minutes and giving us those clips. So you can see clip three is from three minutes to four minutes, "Basic AGI Might Be Here Already," and then it gives us a transcript.

But you can see it gave us a handful of clips here, and the final one is from 27:59 to 28:59, "The Rise of AI Email Agents for Everyone." These are all potential short-form clips that we can clip out and then use as YouTube Shorts. And this was using the basic o1. And again, let's take a look at the time: you can see it thought about it for 38 seconds. So actually pretty quick, but also it's kind of weird that the very first clip starts at zero seconds and goes to one minute. Like, that's

our intro, basically. I think there's probably something there where we could give it even more context of what's a good clip, and, you know, rank-order them too: don't just give them to us in sequential order, tell us the top five viral clips from this episode. That's actually usually my follow-up prompt to this, like, give me an ordering of which one is most likely to least likely to go viral. But now, if we look over here,

I ran this same thing through the o1 Pro model, and this time I definitely did tell it to give me the timestamps the first time around. But if I scroll down, let's see if it tells me how long it thought for. So this one thought for five minutes and 20 seconds, so, you know, about 10 times as long as the last one, but you can see here its timestamps are a little bit more dialed in. So 55 seconds to 1:55 is "Is o3 Already AGI?" From 3:22 to 4:22, "AGI Agents and Societal Shifts by 2025."

5:16 to 6:16, "AI Video Is About to Get Wild." It's actually suggesting a lot of the same exact clips that regular o1 gave us; it just seems a little bit more accurate on its timestamps. "o3's IQ Is Near Einstein Levels." And then look, we see one much, much deeper in the podcast than what the regular o1 gave us, which is from 45:55 to 46:55, "One-Person Startups and Multimodal Mastery." That's kind of hard to say, but

that's how I've been using it. And I've been plugging in, this was only a 40-, what was this, a 49-minute podcast episode. I've been plugging in transcripts from three-hour live streams, and it's been finding clips throughout the entire live stream. Like, it's finding clips at two hours and 20 minutes in, and clips at, you know, two hours, 48 minutes and 37

seconds, and another clip at, you know, three minutes in, just kind of all over the podcast. But you're right, a probably better prompt to use in the future would tell it right up front: find these timestamps and then give me a rank order of the ones most likely to go viral, and just put that in the first prompt. I've actually been doing it as a follow-up, but yeah, I think you've got a good point. It's probably going to work better if you just

include that in the original prompt. I believe so, because it definitely does with coding, so I assume that probably applies to everything. It is interesting: as of a year ago, I was definitely on the side of, oh, don't learn about prompting, it's not important, these models are just going to handle all that for you. And even OpenAI has kind of

said that they think that'll be the case eventually. But as of right now, we've kind of gotten to where the prompt is even more important than it was before. Well, I think it depends on the use case, right? I think for 95% of your use cases for AI, you don't have to stress too much about the prompt. Like, if you're using Perplexity to get some quick information, or you're asking ChatGPT about, you know, one of the things I've used ChatGPT and Claude

and things like that for is, you know, I'm on this medication, I'm thinking about taking this supplement, are there any interactions between the two? I don't really need to write up a complex prompt; it's going to get what I'm asking for, right? So for the most part, I would say 95% of the time,

prompt engineering or getting crazy with your prompts is not that necessary. But the higher the level of complexity of what you're asking for, the more necessary it is to be detailed in your prompt, I think.

There is this thing I'm using called Repo Prompt. Basically, before, when I was working on any kind of coding project, if you wanted to use o1 Pro, the only way you could get the context for your project in was to literally copy and paste

everything into it. And so what this does is, you know, you can see here I've got all these different files, tons of files, different directories, all the files inside the directories, and it lists them all here and tells you how much context it's currently taking up, how many context tokens, so you kind of know. Right now it's at 51.9

thousand. So for anybody listening, on the screen there's a left side that's got a folder structure. And that's the folder structure of, is that your entire computer, or is that just this software? That's a game I'm working on. But

the interesting thing is, even though they're calling it Repo Prompt, you totally could use this for other things. You could use this for writing or whatever use case, and just have files with text in them that you want to copy and paste every time.

So if you were trying to write a book or something, right, you could probably have different folders for each chapter of your book, and then it can help it understand the context of previous chapters you've already written in the book as well, right?

Yeah, totally. Or you could have that, plus you could have a thing on style, if you're a writer: what are the favorite things you've written before, or what are the things you like, right? You can provide all that context as well to make o1 Pro give you a better response back. Oh, okay. So let me clarify something. When you mentioned earlier that you use this for your newsletter, when you're doing the newsletters, you might have a folder that's like,

here's the style of newsletters that I really like, and those might be text files within a folder called, like, Newsletters I Like or something, right? And then you might have another folder that's like My Newsletter, and it has a whole bunch of entries

in it of your newsletter. And so now, when you go to prompt o1 Pro, it can actually look at all of that, and it's in a sort of nice, clean, structured way, right? Yeah, with notes, kind of in the text itself, saying here's things I like, here's things I don't like, or usually more detail than that, but I'm kind of simplifying it. Oh wait, so real quick, so Repo Prompt,

it doesn't actually do the prompting for you. You have to copy and paste something from Repo Prompt into ChatGPT. Yeah, yeah, you're just copying a massive amount of data. Oh, okay. I thought it was tapped into ChatGPT and submitting prompts. Okay, okay. No.

But let me show you why it's so good for coding. Before, when I would give o1 Pro something for code, what I was doing was creating a script that would look at my entire code base, and I would run the script, just a simple Node script.

It would take all the code and put it into a single text file, and then note, here's where this file is, just so the model knew where everything was, so it could help if it was pointing to a file or whatever.
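That "simple Node script" is roughly this pattern. Here is a hedged sketch of what such a flattening script might look like; the directory filters, file extensions, and output format are assumptions for illustration, not Repo Prompt's actual implementation.

```typescript
// Sketch of a "flatten the repo into one text file" script, in the spirit of the
// simple Node script Nathan describes. The filters and output format are illustrative.
import { readdirSync, readFileSync, statSync, writeFileSync } from "fs";
import { join, relative } from "path";

const ROOT = process.argv[2] ?? ".";
const KEEP = [".ts", ".js", ".gd", ".md"]; // whatever extensions your project uses
const SKIP = new Set(["node_modules", ".git", "build"]);

function collectFiles(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) {
      return SKIP.has(name) ? [] : collectFiles(full);
    }
    return KEEP.some((ext) => full.endsWith(ext)) ? [full] : [];
  });
}

// Label every file with its path so the model knows where everything lives.
const sections = collectFiles(ROOT).map(
  (file) => `===== FILE: ${relative(ROOT, file)} =====\n${readFileSync(file, "utf8")}`
);

writeFileSync("codebase.txt", sections.join("\n\n"));
console.log(`Wrote ${sections.length} files into codebase.txt`);
```

The same trick would work for the writing use case that comes up elsewhere in the episode: point it at a folder of newsletter issues or chapter drafts and you get one paste-able block of style context.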

But with this, with XML, Repo Prompt basically gives it to you in a format where you can just apply it to your code base, and it'll automatically change everything for you, which is incredible. It's so much faster coding with this now. But let me show you the other stuff that's cooler, too, that I think is not well explained when you use this tool. For example, here's the thing where, if you want it to architect something and not actually code it yet, you can add this, and you can dive into what that text is. It'll say:

"You are a senior software architect specializing in code design and implementation planning. Your role is..." and then it tells you all the stuff that it's expected to do. That's like a system prompt right there. That's essentially a system prompt, right? Yes. It acts dramatically different

if you do this versus telling it the engineer one, where it's, you know, you're an engineer, your job is to execute on the plans, and all that kind of stuff. And the interesting thing that engineers are starting to discover is it seems to be even better if you do both, and say, you're an architect, first do your architect work, and then, when you're done with your architect work, go and do your engineer work. Okay, yeah, yeah. So this is

borderline getting into agentic stuff, right? Because it's almost like it's better than people realize, and when you tell it its different roles and what order to do its roles in,

it's so much better. What you get back from this is so much better. If you tell it just to give you code, it'll give you some good code back. But it's better if you tell it to take its time to think as an architect first, and to really map out the feature and how it's going to interact with all the other parts of your code base,

which, like I'm saying, this is for code, but this could apply to so many different things. And then you can do other things too. For example, I have my own rules. I don't know if that stuff works or not, but people say, like,

tell it not to be lazy, because sometimes it'll try to be lazy and not give you all the code. I've got my own little notes in here of that kind of stuff, like don't be lazy, do this, and this is the kind of response I like. You know, kind of like custom instructions kind of stuff, right? I'm working on a feature for my game to add better special effects to the orbs when they're matched. And so when I click Copy, it's going to give me this giant mega prompt.

And then I'm going to get a mega prompt here. Let me see. And so, you know, I go into ChatGPT, and you're going to see this prompt is just nuts. I'm using o1 Pro here. And so you can see here, okay, you can see at the bottom,

you know, and I'm not sure why it puts it in this order. I'm not sure whether this is actually better or not, I kind of think maybe it's not, but it seems to work okay. It puts the instructions literally at the very end, like it puts "user instructions," you know, right, and then just whatever I said: I'm working on a feature for my game, blah blah, right? But you can see, if I start to scroll, just look at the bar over here as I'm scrolling...

As I'm scrolling, yeah, the bar's barely even moving. It's not even moving. I mean, we are talking so much information here. It is just wild how much information I'm sharing with it. I think most people would not realize that you can do this, hand it so much information, and it understands it. It is just mind-blowing what you can pass it. I mean, I don't know how many pages this is, but, God, this is probably over a thousand pages of stuff. Yeah, that's wild.
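To make the ordering Nathan describes a bit more concrete, here is a rough sketch of how a mega prompt like this might be assembled: role prompt and custom rules first, then the flattened code base, with the user's actual request at the very end. The section labels, file names, and wording are hypothetical, not Repo Prompt's real output format.

```typescript
// Hypothetical assembly of a Repo Prompt-style mega prompt (order mirrors what's described on screen).
import { readFileSync } from "fs";

const architectThenEngineer =
  "You are a senior software architect. First produce an implementation plan. " +
  "Only when the plan is complete, switch roles to engineer and write the code.";

const customRules =
  "Don't be lazy: return complete files, never placeholders or trimmed code.";

// codebase.txt comes from a flattening script like the one sketched earlier.
const codebase = readFileSync("codebase.txt", "utf8");

const userInstructions =
  "I'm working on a feature for my game to add better special effects " +
  "to the orbs when they're matched.";

// Roles and context go up front; the actual request goes at the very end.
const megaPrompt = [
  architectThenEngineer,
  customRules,
  codebase,
  `USER INSTRUCTIONS:\n${userInstructions}`,
].join("\n\n");

// Rough sanity check against the context window (chars / 4 is a crude token estimate).
console.log(`Mega prompt is roughly ${Math.round(megaPrompt.length / 4)} tokens.`);
```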

And then you press it. And, you know, if you're listening to the audio right now, it just scrolled for about a minute to get to the bottom of the prompt. Yeah, yeah. This is why Sam Altman tweeted that they're losing money on o1 Pro. It's people like me, because I'm literally doing this when I'm working. I'm doing this every five or ten minutes.

That's wild. That's crazy to me. And when I'm done, I get the response back; it gives me the code. And then my typical workflow, like I said, it'll give me XML back. And the XML, they have a thing in Repo Prompt now where you can copy and paste that in, and then it helps merge it into your code. So you can review all the changes and press, yeah, I'm okay with this bit, this bit I'm not sure about. Or, if you're lazy, just press accept all.

Using XML, it actually works all that out for you, because it tags the files and the directories, and it knows exactly where they are and exactly what code was changed. And you press a button and it's all done. Oh, wow.
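Repo Prompt's actual XML schema isn't shown in the episode, so the tags below are invented, but the idea of "files tagged by path, changes applied in one step" might look roughly like this if you scripted it by hand instead of using the built-in merge flow.

```typescript
// Sketch: apply a hypothetical <file path="...">new contents</file> change format
// to the working tree. The tag names are made up for illustration; Repo Prompt's
// real merge flow also lets you review each change before accepting it.
import { mkdirSync, readFileSync, writeFileSync } from "fs";
import { dirname } from "path";

function applyChanges(xml: string): void {
  const filePattern = /<file path="([^"]+)">([\s\S]*?)<\/file>/g;
  for (const match of xml.matchAll(filePattern)) {
    const [, path, contents] = match;
    mkdirSync(dirname(path), { recursive: true }); // create the directory if it's a new file
    writeFileSync(path, contents.trim() + "\n");   // overwrite with the model's version
    console.log(`Updated ${path}`);
  }
}

// Example: save the model's XML response as changes.xml, then run this script.
applyChanges(readFileSync("changes.xml", "utf8"));
```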

It's mind-blowing. But you see here, it's like, okay, green checkmark. It shows you the different files that are going to be changed, you know, like here's a turn manager and here's the enemy UI and stuff like that, right? So it shows you those, and if you click Merge Changes, you would then see the actual code

in the different files, and then line by line you can accept or deny the code. And if you want to just accept it all, you can, and sometimes I do that. Sometimes, for a major change, I back up my project and then accept all. Yeah. That's kind of the best workflow for engineering right now using AI, in my opinion: use o1 Pro for any new feature, anything that's a new thing that doesn't currently exist in your product or your website or whatever.

Use o1 Pro because it's way better at figuring out how to properly architect it and make sure it all performs well and everything else. And thinking through every possible scenario where something could go wrong, it's way better at that. And then, once you've got the feature implemented, for small changes

you totally can just use Cursor or something like that. If you're changing the color of a button, or you're changing a name, or anything super small, you know, obviously you could do it yourself. But if you want to use AI, you could use Cursor or something like that.

Yeah, yeah. So you wouldn't use this just to make small bug fixes or small tweaks. You'd use it more to build the overall bones of the product, right? Yeah. If I'm building a software application and there's a new feature, let's figure out what that feature looks like.

It's not going to look good design-wise; none of these are good at design yet. But in terms of it actually working, o1 Pro often gets it right the first time, and if it doesn't, it's usually very minor bugs. I suggest, for anyone who has a company with engineers right now: you're really missing out if you're not paying for o1 Pro for your engineering team and having them use something like Repo Prompt. Right. Really missing out. That's super cool. Yeah. Now,

is there something you can show us of what you've generated? Yeah, so this is the Godot editor, which is an open-source game engine. The game is kind of Final Fantasy style, right? But I mean, it still looks really, really good. If anybody's just listening on audio, it's a very colorful, visual game that you've built here. And AI helped me make

all of that. And so, like, the background right now, there's, I don't know, a cathedral-looking thing, but it's got a wavy animation. I'm sitting there thinking, okay. This is something where I really didn't even think I was going to do it, and, who knows, I may not do it eventually, but I thought it'd be great for the show for me to be really hands-on with all the different parts of AI, from

I'm using it for writing, I'm using it for coding, I'm using it for the art, I'm even using it for videos. I'm trying to think about, okay, a year from now, what's AI going to be really good at? Yeah, yeah. And so if this takes me a year to build as a hobby, it'll only get better from here, because

o3 will come out, the AI video models will get better, the AI art is going to get better. And so that's kind of what I've been doing: seeing what's currently possible, but then trying to set it up in a way where, as those things get better, I could possibly turn this into a real game.

But yeah, I've just been shocked by how good, I mean, AI can do all of this, o1 Pro especially. o1 was not able to do this, by the way. I've tried o1 for hard coding stuff, and it just fails a lot more often than o1 Pro. Yeah. So that tells me that once we get o3, and once you get o3 plus, you know, apparently the next version of Midjourney, they're saying it's going to be way better at being consistent with characters

and things like that. Apparently that's the next big thing coming. And then you get better AI video, so you can have cool cutscenes and stuff like that. You'll be able to make amazing experiences entirely with AI. Well, yeah. And if we can get consistent characters in video, I know there are some tools out there that claim they can do that too, but it's still a little wonky. But I mean,

by the end of this year, we'll be able to have consistent characters in images really, really well, probably pull those characters into videos and have consistent characters in videos. The o3 model will be out at some point, which is going to really, really improve the code that you're able to do, right? It's also going to really improve the writing and any sort of storytelling

elements in there. And it's like, yeah, we're kind of running out of stuff for the humans to actually do when it comes to making these games. But as a creator, you still get to be the, I mean, I'm piloting all of this. Yeah. For me, this is so fun, to

think that maybe it will be the thing in the future where it's so easy to make these games. In the past, you'd have the different coders that specialize in different things, and then you'd have the overall project manager who's kind of

telling them each what to do. And I always, I don't know, for whatever reason, I always use the symphony analogy: now you're going to become the conductor, where you're just sort of telling all the instruments what to do, but you're standing there conducting them, right? That's where it's going.

You know, different people are going to be able to use this to make their dreams come true. Because when I was a kid, my dream was to work at Blizzard. Yeah. Like, I wanted to be one of the top people at Blizzard. And then the weirdest thing happened, where I was one of the top players in the game EverQuest when I was a kid.

The number one player, Rob Pardo, who used to run Blizzard, at that time he was on EverQuest and he was running Legacy of Steel, the top guild on EverQuest. I ended up raising money for a startup; we raised several million for a startup called GameStreamer.

And with the combination of having that gaming background and having that startup, which had a huge corner at E3, I ended up getting to hang out with Rob and get to know him very well, along with a lot of top people in the game industry. So I had this weird situation where I never really got to fulfill my dream, but I was hanging out with all those people as good friends. I got to see that,

you know, it wasn't really the life I wanted, going to work for one of those companies. It was not; I wanted to do my own things. But then I still never got to do what I wanted in terms of making a game. And it's so wild to me, now that AI is getting so good, that a lot of people are probably going to have the same kind of, you know, awakening that I'm having.

I was like, I can do those things now. It doesn't matter if I'm 40, I can still do it, because AI is getting so good that it doesn't take as much time as it used to. And it's going to get even better. When o3 comes out, you'll probably be able to do any new feature without any bugs, you know? Yeah. One-shot, yeah.

One-shot. It's already close to one-shotting many things now, and it'll only become more so. And so it's exciting: for people who like creating things, the next 10 years is going to be a revolution in terms of creating things, not only art but, you know, even companies. Like, if

you wanted to create a company, now it's definitely going to be easier. Yeah, a hundred percent. No, it's super, super exciting. And, you know, I feel the same way. One of my things when I was a kid was I always wanted to be a game designer, a game developer, working in the gaming space. And I feel like

now we kind of have a way to live that childhood fantasy a little bit, but without all the negatives. Right, right, right. Yeah. I mean, me and you had even talked about it, and I was like, I'm just going to start playing with stuff and see what's possible. And

after a week, I was like, wow, it's actually possible. It can do the whole thing. And then just learning that you can do all of this, and you can control most of it with your voice, has just been... I'm hoping that people will listen to this podcast and think bigger about what they could be accomplishing with these tools.

I couldn't say it better myself. And so, with that being said, I'm not going to try to say it better myself. We'll just go ahead and wrap this one up. It's a really, really exciting time right now. You can pretty much build anything you can imagine, and it's only getting better and easier.

And we're going to keep on exploring and diving deeper into these rabbit holes to figure out what we can build, and as we learn, we're going to share it with you. So make sure, if you're not already subscribed to the show, to subscribe on YouTube and subscribe wherever you listen to your podcasts. You can find us in all of those places. And thank you so much for tuning in. Hopefully we'll see you in the next one. Thank you.

This transcript was generated by Metacast using AI and may contain inaccuracies.