Hello and welcome, everybody, to the Attention Mechanism. My name is Justin Robert Young, joined as always by Mr. Andrew Mayne. How you doing, buddy? I'm doing fantastic, Justin. Good to see you again. You know, the news cycle moves so fast that when I was like, oh, what do we want to talk about, I forgot everything that happened over the last week, which not only included multiple different developer conferences but also a game-changing hardware announcement and partnership from OpenAI. Google, Anthropic, Jony Ive, a full buffet before us, Mayne. Where do we want to start? Good question. We can go in order of events as I recall them, but it was a lot of things happening. So earlier in the week, we had the Google I/O announcement. Announcements, rather. And, you know, Google's always a mixed bag, because they'll show a lot of cool stuff, and some of that stuff is cool and ready to ship and it's great. Some of that stuff is...
kind of catch-up, look-we-did-it-too, and it's maybe a little bit uneven, and some of it's a little unpolished or something in the future that maybe we'll see later on. I would describe Google as often being a mixed bag where I can't trust the label. Yeah, and the problem is they do do good work. So, you know, Gemini 2.5 is a great model. It is a great all-around model series. It is just...
You know, but when they launched the first Gemini model, they told us, oh, this is the GPT-4 killer, it's going to be great, it's all this. And it wasn't. They literally just raced to get something that looked good on certain evals and then was sort of mid. Their Flash models started to be pretty good, very price competitive. They forced OpenAI to have to compete with that. But finally, Gemini 2.5 is the first model I've seen from them where I know some people are very dedicated to it, very, very committed to it, even though we've had other announcements this week. So I would say 2.5 is really a good model, and they've had some improvements on that in some of the Flash versions, the speed. I think they've got a really good lineup right now. But I would say the big talk...
of I/O was Veo 3, which is the latest iteration of their video generation model. And just for background: OpenAI had shown Sora a couple years ago, a year and a half ago, they showed their Sora model. And then what they released was Sora Turbo. And so Sora itself, nobody outside of partners with OpenAI ever got hands on it.
versus Google announced Veo, then they had Veo 2, and then it was like, oh, are we going to be able to use it? Then we got to use Veo 2, and I think Veo 2 is a really good model. I think it's better than Sora Turbo easily. And then they showed Veo 3, which at first was a bit gated. You had to pay for a pro tier, which I know some guy on this podcast actually paid to go use it. Now they give access to the lowlifes like me who are only paying like $30 a month or whatever.
And so talk about Veo 3 in your experience. Yeah, although I do want to get to the idea of them pushing everything to the big tier and whether or not that's going to be a trend. Veo 3, to me, is the best-in-class video model, mostly because it does the thing that no other video model does, which is, by way of prompts, adding sound effects and vocal dialogue to what is otherwise, I think, a really exceptionally trained model for humans, things that could represent actors, that kind of stuff.
No, I don't think that it is a coincidence that the things that have gone viral with Veo 3 are representations of the real world, oftentimes broadcast television, things that you would see: news reports, viral videos, influencer videos, that kind of stuff. Recreations of that have been the things that I have seen in my own personal feed. I have no idea what kind of compute they're throwing at this. I can only imagine what it is. But it's hard to not look at it.
Although it is still buggy, and it is something that I will not be paying $120 a month for much longer, because I don't know if it's something that I could fit into my workflow to make it make money for me. It is an achievement. It is truly an extraordinary achievement. Yeah, the quality that I've seen, too. By the way, if you go in to play with it in the dashboard, what will happen sometimes is it'll default to Veo 2, right? And some people are like, oh, this is okay.
And then you realize, no, once it goes to Veo 3... Yeah, in classic Google form, if you are looking for Veo 3, you need to go to Flow. Flow is the way that you are going to use Veo 3. And when you go into Flow, there is no button that says Veo 3. You have to go to highest quality when you generate your image or generate your video. Highest quality in Flow is Veo 3, and that's what will get you the audio that goes along.
Yeah, but as you said, the quality of it is great. It is incredible. I would say if there was any problem, it was that they put some features in there that it's not good at. Like the speech, the experimental thing where you can add voice, which is great, but it's at 10x the token cost of Veo 2, and Veo 2 is stupidly expensive. It is not for the faint of heart if you want to experiment.
Yeah, which is why I'm glad that I have the top tier, because I've been able to play around with it a lot and it hasn't told me to stop playing around with it yet. So do you have API access to it? No, I've been just using it in the regular Google. In the Google thing? Yeah, the labs. In Flow. Yeah, in Flow. Yeah. You know, it's...
I've mostly... so the stuff that I was using it for: I was trying to create political tropes, because my brain is horribly broken by politics. And so I was trying to create negative ads and campaign rallies and the kind of things that you would normally see politicians wrapped up in, and it was by and large pretty good. But I gotta say, if you don't have whatever they are considering their most generous tier for what you are paying monthly,
then I can't imagine looking at it from a per-token perspective, because the stuff you get is great, but because it's great, the thing that goes wrong looks all the worse for it. It sticks out a lot, and it's hard to get exactly what you want. Yeah, I would say that for casual users, you know, a hobby user, it's very expensive, very costly. If you're a production company and you're doing, let's say, YouTube videos or you're doing stuff...
And I think one thing to do is go look through. They have a whole channel of all the videos they've made. Go look at this. Go study. The same with Sora. Go look to see what people did to make these, and let other people's tokens pay for your education. You can do phenomenal things. And if I were inclined to have more hours in the day, I would consider just creating an entire documentary production company, because...
Your biggest limitation now is the quality of writing, which is still a bottleneck. But if you could write well, you could create great visuals. You could be creating great mini documentaries and other sorts of content right now. It's just phenomenal. It is absolutely phenomenal. I think, too, for VFX houses and stuff, I think that you're going to start to see...
there's a big, big opportunity there, because there's certain kinds of VFX and stuff you can do with it. And, you know, again, if you just know, I don't need two or three seconds of this, or I can switch an angle. I'll give you an example: when George Lucas was trying to do Star Wars, the high watermark at that point for great VFX was 2001: A Space Odyssey, because Stanley Kubrick had experts. Not a bunch of sloppy Southern California hippies wearing cut-off shorts and no T-shirts in a sweaty warehouse doing stuff. He had technicians. He had scientists. He had access to people who could get the photochemical processes down to the precise level. He could shoot interiors. He could do front projection. He could do all that. Lucas had incredibly smart people, but they weren't as well-resourced, he didn't have as big an army, and they had to rely more on movie magic.
And one of the things George Lucas did was he said, okay, I can't do these really, really, really long VFX takes. The longest shot you get is probably the opening, where the Death Star comes in overhead, a pretty static sort of shot of the thing. Other than that, he used quick cuts: fast, fast, fast, quick cut. You can use these tools now like that and create something that feels extremely dynamic. The problem was when people used
a new tool to do any kind of CG is they want to overuse it. They want to do the minute-long cut or whatever. Do the four-second, do the three-second, do the back and forth, and do really good James Cameron, George Lucas, Spielberg type of storytelling. These tools right now are incredibly powerful. I think with Veo 3 you can make incredible features.
I've been obsessed with trying to make old black-and-white 1960s talk show clips of people smoking on set and arguing about modern-day things like Taylor Swift or Pokémon. I always want to make them smoke two cigs, for whatever reason, just two-fisted. Yeah, we're early days, and we all know this. And the fact that we've made this progress... I would say that...
This was right along as expected. I'll put it this way: internally, talking to other people about the rate of progress for this, I had a conversation with a friend a year ago. We were at a dinner and we talked about where we'd be a year from today. And this is pretty much it. If you're not ready for it, these eight-second clips, you could be...
a layperson could be forgiven for thinking that it was real. It is immediate; blink twice and I just assume that that's real, especially if it's playing it straight and not trying to make a joke or be outlandish or anything. So there we go. Veo 3. Let's real quick talk about the tier. Google trying to push people, especially with this glittering lure, up to a $250-per-month tier, with, of course, a $130 three-month introductory offer, some getting-to-know-you prices out of Google.
Is that where we think this lands? Obviously, the pro tier for ChatGPT is at $200 a month. This is at $250. Do we think that that's where this is going to land? Or are we going to continue to see this escalate? Well, yeah, you're going to get higher tiers based upon the capabilities. And, you know, when Sam Altman had thrown out, when they first came out with the $200-a-month ChatGPT Pro subscription, a lot of people balked. I could not hand over my credit card fast enough.
That's the same as when they first had ChatGPT subscriptions. I just ran over to Nick Turley. I'm like, Nick, I want to be first, first person to have a subscription. The $200 a month, for me, is well worth it. The number of deep research things I get is great. If I want to use Sora, I use Sora, but I get these other features. For Google to go offer a tier like this totally makes sense, because you're getting a lot of stuff. You get a great, you know...
OpenAI's image generation is still the best. I've been doing a ton of side-by-sides with Google's latest one, and Google's latest one is maybe close to the medium tier on theirs. Both of them are giving you a lot. You get a lot if you're using these tools. These things are great. 200 bucks, 250 bucks a month, it's easily worth it. I will pay more for agents when we get a really good agentic system working, which we'll talk about. I'll pay more for that.
Meanwhile, Anthropic, not wanting to be left out, we'll get to their news, but Anthropic came out with a tier, their own like $200 tier, which I thought was kind of funny, because it's like: cool, how's your image model? Oh, you don't have one. Okay, I'll use your video model.
Oh, you don't have one. Okay, well, let me go use the speech mode. Oh, whatever speech model you're using is terrible. And the models are fantastic, let me be clear. But it was like, Anthropic was like, we'll do that too. But with Google and OpenAI, we're starting to see a strata of what the major leagues are right now in terms of resources that are being poured into it. And Google obviously has
a lot of advantages in that, in terms of money, capacity, bodies to throw at the problem, and data to train on. OpenAI has been leading in this race
for years now, and so they are also a part of it. But now it sounds like what you're saying is that Anthropic, a company that is certainly staffed with extraordinary people, and whose model, for what it does, is extraordinary... in terms of features, maybe the future is not that everybody is expected to do everything in the way that Google and OpenAI are demonstrating.
Yeah, and I think there's a category of users that want that product, want a really high tier, high usage number, which is fine.
We can jump ahead and talk about their model releases: they released the new Opus and the new Sonnet models, which look fantastic. I had not-great results, though. One was getting a lecture inside of the interface about something I wanted to do, and it saying, no, we can't do this, which was a silly sort of thing when it was something well within what I could have been doing.
What was I trying to do? Oh, I just wanted to find a bunch of videos, like, can you locate some videos on this stuff? It's like, oh, I can't do that, it might violate copyright. And I'm like, I'm not asking to download anything. I just want to find these things. Just give me the links.
Yeah, it was a frustrating experience. I said, I need public domain, things like this, whatever. I can't do this. I said, I only asked for public domain, and it was this lecture. And then I built something with Opus. And I've seen phenomenal things people have built with Opus, like phenomenal things, interfaces, things like this. I built something with Opus, got to it finally, and it was $10 in tokens. It was $10 in tokens
for what would cost me 40 cents using another model. And if somebody says you're just not using it right, you're 100% correct. And I think that Opus, by all the examples I've seen, is really, really good for a lot of things. I've been very happy with o4-mini, the new OpenAI model. Like, I've been very, very happy. I've replaced Sonnet with it. And because the token cost is really low, the efficiency is great, whatever. But as you said, we're seeing
for different folks, different things. And you could make the very valid argument that I've made to other people when I go, oh, this wasn't worth it, whatever: well, that's not the right thing for you. For me, $10 for the Opus task was probably a dumb task for me to throw at it. So, and that brings us to Anthropic's drop last week, which was two new models, two really very capable models. What do you primarily use Anthropic's models for? Hmm. I used to use Sonnet
for coding. It was coding. It was used in Cursor, now Windsurf. I used to use it for coding all the time. And then o4-mini got really good. I learned how to talk to it, which there is a bit of that too. o4-mini, and I've actually been using GPT-4o on stuff too. A lot of these models have just been quietly updated in the background and keep getting better. Yeah. And so I haven't been using it much. You know, it's been the cost.
Right now, I've built a project here where I didn't even realize I was using 4.1 instead of o4-mini. With a lot of these things, there is a thing that happens at a certain complexity. You have to think of how familiar the model is with this kind of task. When you're doing something really out of distribution, then maybe you want a bigger, smarter model, but it'll also be slower.
All right. Anything else on Anthropic before we move to the hardware news? So, Anthropic had a bit of a controversy, and it's unfortunate. Give me the tea on this. I've heard it in the periphery. Yeah, so it's kind of the telephone game of the internet, people talking about things and getting hysterical about the things somebody else said. And so I felt bad.
Because I saw the tweet when it happened. What it was: a tweet where somebody talked about when they were testing the model. You test these models under all sorts of conditions. And what they were doing is they were testing, I think it was Claude, in a situation where they said,
hypothetically, you have access to these tools, internet, email, whatever. What would you do if a user wanted to do this? And this was in the test situation. The model said, oh, well, I will send an email to the authorities and I will notify the press that they're doing something they shouldn't do, right? Which doesn't instill comfort for you, that it's going to be a narc model. Yeah. So the person tweeted this out.
I got what he meant, that he meant in the test conditions, under all that, that this happened. I'm like, yeah, that sounds like an Anthropic model, you know, after I just got lectured by it for trying to do something that was well within what it could do. The internet being the internet, I saw some very smart people read that, because I don't think it was really worded in a way that people understood, and take it to mean the model will actually do that, the one they put into production.
That it will narc on you. And Anthropic being a very safety-conscious company, that is baked into the core of their founding, so that is not a secret. But the idea was that they would release something to the public that, if you violate its morals and ethics, will narc on you to the authorities.
Yeah, and so that got repeated as if that's what it was doing in production, which it wasn't. So I saw that make it all over, and then finally the Anthropic guy was like, hey, I pulled my tweet on this. I saw one of the more sober people I know who covers this stuff get it wrong, and I was surprised they didn't get it. Again, that's the era we live in, where it just flashed around. I felt bad because...
Yeah, they released a highly capable system. It frustrates me personally on stuff, but there's a lot of wonderful examples of what it's doing. And so I think people understand now that that was just a different case. But, you know, people talk about the model prompt, the context. Again, the Anthropic team is doing a very good job. And you have to sometimes, you know,
find the source, and then the source on the source, to get to the bottom of it, need I say. So that was sort of that whole thing. If I were to be overly reductive with the narratives, I might look at an event like that and say that there has been a vibe shift in the chattering class when it comes to the conversations around safety,
and that maybe now you look at a company that is very, very cautious about the capabilities of AI, and you say, wait a minute, that might be harmful, it might be a narc, as opposed to, thank God there is a
guardian on the shores making sure that these technologies do not go too far. Yeah, it's the challenge we dealt with when I was at OpenAI, between models that are helpful and capable and what you consider safety. And safety can mean anything. You can have the very extreme of:
I don't like your language, you should not be using this word, which some models would do, and some companies will do. They will refuse tasks. Anthropic will refuse stuff. I've had problems with books and stuff. It's like, I can't do this. I'm like, really? You know, all the way to making sure the model doesn't run away and create Skynet.
You know, and I think that's the idea: some things are safety in the sense of, we don't want it to suggest or do something that's inappropriate, versus, we don't want the model itself to be dangerous. And sometimes people blur those lines. I'm saying they're very, very different. In one of those, there's a user
who gets to decide what gets gated. In the other one, you're trying to say, oh, no, we don't even trust you to gate it. So, you know, it's a challenge. One of the most exciting things about this field is that, like many of the great eras when it comes to the internet or hardware or personal computing or smartphones, we are in a moment of land rush. We are in a moment of great...
explosion and competitiveness. And so, amongst all of the Google announcements, amongst all of the Anthropic announcements, the biggest announcement that came out was what could be, and what is signaled to be by OpenAI, a possible game-changing partnership: OpenAI announcing that they're buying Jony Ive's design collective and that they will be working on an OpenAI hardware device that is designed to be launched at some point after this year. A lot of rumors going around about it.
What are your initial thoughts? And then we can dial in a little bit further into what we would want out of OpenAI hardware. I think about when the first office that I worked out of at OpenAI was this other building in the Mission, a three-story building with kind of a loft-type setup. It was a big office, but a very, very California, kind of 1980s sort of vibe to it. And there were 150 people at the company.
And to watch the evolution of it, you know, Justin, who was the designer who designed a lot of it, and this team expanding and getting built around, and more people taking on design work, et cetera, to the point that OpenAI has now brought in the 40-person team that Jony Ive put together, hiring a bunch of people away from Apple, so that kind of the OG Apple design team is now at OpenAI. Yeah. It's crazy.
And it's just absolutely kind of crazy. Which, by the way, the résumé, which I'm sure is minimalist, would probably just say things like iPhone, MacBook. Like, this is the team that did those two things. The Apple Watch. The Apple Watch. The reaction's not surprising from the technical circles and whatever. That has been the pattern with every kind of thing. It's very, very predictable, and it's like...
There's like a $6.5 billion deal to buy the team, and people are like, $6.5 billion for that? Well, I think they're going to get a lot more value out of that. I would say that, you know, there is only one Steve Jobs.
There is only one Apple in its period of time. Apple changes over time, whatever. And when you want to look to who is the future, it's up to that person to prove it and to build stuff. They have to go build, and they have to go make it. But one of the signs, I think, of really great leadership
is who you bring around you, who you surround yourself with. That's a great trait that Steve Jobs had. Steve Jobs built amazing teams, and a lot of those people still work together. Elon Musk is really good at hiring and putting together great teams and very good at leading those teams. And I think that Sam has now put together an incredible team at OpenAI, from who we have running product, to design, to everything else there. If they screw it up...
it won't be for lack of talent. I'm excited. I'm excited because I am sick of my phone. I am sick of the app ecosystem. I am sick of these things. I am sick of a new camera every day. I do want new ways to interact. I don't know that they will get it right. I don't know a lot of these things. Before we get into the rumored device, let me just talk about the announcement real quick. Because from my perspective, I don't think that OpenAI does a lot of stuff
that is frivolous. All of their communications have been pretty dead-on. Their videos are direct and to the point. There's not a ton of flash beyond the point that they're trying to make. They want the flash to be the tech, right? When they do announcements, it's with the devs. They don't have a bunch of celebrities come on and demo a thing like certain other companies do.
Not to say that's wrong, it's just their way of doing things. And so when you have an elongated video that is a conversation between Sam Altman and Jony Ive, there's a few things that I read it as saying. Number one: we are the new kings of hardware. Number two: we are the 800-pound gorilla in the AI space, no one's going to out-develop us on that side. And number three, just tangentially:
San Francisco's back. The COVID haze is over. It is San Francisco that is leading this wave of... technological development. It's not the South Bay. It's not Palo Alto. It's not Mountain View. It's not Cupertino. It is SF. That is where this is happening.
It was audacious, down to the fact that there's a black-and-white picture of Jony Ive and Sam Altman together, and you don't have to feed that into ChatGPT and ask it, what is this picture reminiscent of, to understand that it is reminiscent of Steve Jobs, and that that is about as reverential a thing as you can say. It is almost one of the most audacious things that you can say if people take you seriously in that business. Which puts a lot of pressure on this.
Which is why I don't think it was a surprise to see, immediately afterward, everybody from supply chain reporters to business reporters to tech reporters looking to grab any little interesting rumor on this. Because from my estimation, as somebody who's now old enough to say I've been around a few cycles in the tech news game, this is going to be the most rumored-about and hyped device since the iPhone. And I don't think anything else really comes close.
Yeah, a couple background points, too. The io group, because Jony, love him, created io, which is the 40-person team, to sort of go out and build and do stuff, which they acquired. They share a common investor with OpenAI, which is Emerson Collective. Emerson Collective is Laurene Powell Jobs's organization, which she created to support efforts she wanted to do, like buying The Atlantic. And so you have this other sort of figure there, which is...
the widow of Steve Jobs, who has both supported Jony Ive with io and also supported Sam by backing OpenAI. And I'd say that's kind of another thing. Again, you have to show up and do it. Steve Jobs is Steve Jobs. Nobody else gets to say, hey, I bought his house, I have his car, as it worked out. But if the people around him say, hey, we like what you're doing too, you're in line with what we do, and we think you get people like he did, there's that.
It's great, but again, it's going to be in the showing and the demonstration. But I'd say that was another thing that didn't get as much attention: it's not just the fact that Jony Ive is excited about where OpenAI is. Laurene Powell Jobs is excited about where OpenAI is going and feels like they're in line with a very humanist-centric view of the future. Well, and let's also be clear that this is not
a partnership so they can start developing a product. As they said in their announcement, they have developed one product, they have a roadmap for a family of products, and there is a product which Sam has lived with, in his terms. Let me just ask you this in general. We together watched all of the far more nascent
tech journalism universe salivate over every little whisper of a rumor when it came to the iPhone. And then the final product was something that outstripped a lot of people's expectations as to what it was going to do. What is your barometer when you read rumors? How much are you taking seriously, as somebody who's followed this stuff for a long time, not just with OpenAI, but tech in general?
I think that, you know, having followed Apple rumors for going on 30 years and knowing that there can be prototypes, and people test stuff, and you see a thing or part of a thing and we think that's the whole thing, it's going to be really hard to know where things are going. For our younger listeners: the world of Apple rumors before Steve Jobs died and after Steve Jobs died are two very, very different things.
The world of Apple rumors when it came to hardware around the era of the iPhone was extraordinarily opaque. It was very hard to nail down what things were. Part of it was journalism building up and figuring out new ways to cover these kinds of things, especially from the Chinese supply side, and some of it was the company not being as dedicated to secrecy as it once was. You now pretty much know exactly what the pipeline of Apple products is three years out.
Yeah, there was a story, in the Wall Street Journal or The Verge, I forget where, about the day after OpenAI's video came out. Apparently there was an all-hands at OpenAI, and Sam had talked about the idea of wanting to make, the context of it was, to make like 100 million of these devices on day one or whatever.
And then they pointed out, well, OpenAI is going to have to deal with its leaks if it wants to keep these things a secret. The article was critical of the leaks, which, great, but it's like, we only had one article. That was certainly the problem, too. When we were 150, 200 people, it was a very tight ship. It was a very, very tight ship. We had an open Slack, so you could go in and find almost anything. Then as they scaled, and you brought in...
you brought in people who are maybe really good employees, but who are not as mission-driven. They talk, and then more talk, and then more talk. That, you know, partly happens with scale. It also happens when the people you hired for culture then hire for culture, and those hires hire for culture; you get a different culture fit. And so that's the world they're in now, where things will be very compartmentalized.
But, you know, I remember the lead-up to the announcement of the iPhone. The thing that made the iPhone, really the piece that sold it, and we forget about this, was multi-touch.
And yeah, up until then there had been a couple touchscreen things, but they weren't really good. The scrolling, I mean, the movement. That's one of the things that I still remember, just the moment when Jobs, with literally just one gesture, and it rolls. It's hard to put in a modern context how revolutionary that was. And you can go back and watch that and hear the pop for it.
Yeah, there had been patents filed, and speculation about what they might have wanted; maybe they were going to have a thing where you just controlled it on the edges, like margins. The idea that you would literally put your finger on the phone, which sounds silly, that that would be a novel thing, but that was just the seamlessness that we have with this stuff now
is what, you know, made that sort of a magical moment. Now, we knew at some point in the future you might want that, but I don't think we realized how good it would be, like you said, the scroll effect and the page turn and such. So I don't know that there are any kinds of surprises for us right now, in the next two or three years in UI stuff, for, like, oh-my-god moments. There will be...
Holographic stuff will become a thing. We'll talk at some later point about femtosecond lasers and picosecond lasers. There's a new category of laser devices that can do things like ionize the air and cause little glowing points,
and stuff like this, so that we may actually get, you know, the mid-air hologram kind of stuff that we thought about. But that's further out; that's not something we're talking about here. I think part of what you can do is build a thing that works really well. And I think, too, nobody's saying they're going to replace the phone.
Nobody said that, but you can see in the video with Jony Ive and Sam that they want to replace the phone. They certainly don't like screens. They certainly don't like the effect screens have had on us. And if you go back to Marshall McLuhan, who talked about how the medium can affect us as much as the message...
And when we're staring at cameras and screens all day, then we're staring at cameras and screens all day, and the kinds of things we do will involve cameras and screens. But when you switch over to different modalities... I think we might have something where you might say, yeah, it's not a thing that you carry 24/7.
Maybe it's something you use for an hour a day. Maybe it's something you use for a different kind of thing. And the way they've talked about stuff and hinted at it makes me think I have an idea where it might be headed, but I don't want to say too much. I will just say what I want. When I hear the stuff that Ive and Altman said in the announcement, and I read around some of the rumors...
You and I have talked about the world of an audio-only internet, an audio-only interface, a lot, for over a decade, probably more. I think a next-generation version of that, something that has a deep integration into my life, that knows me, that is somewhere between where the technology is now and, without getting OpenAI into more legal trouble, Her. Like, that is something that I think could very much fill
a huge, huge void that takes us out of a doom-scrolling world. Now, what that physical hardware is, I think we're going to find out. And I don't know if what is being rumored right now is the final form of it. I don't know whether or not it's something that they are going to do. But what I am most excited about is to see where OpenAI's flagship models are. Because there's what I think
OpenAI knows they can put in a device by next year, because they know what they are working on. They know what their pipeline is. They know when something is expected to be released, what is supposed to be out in the world by next autumn, right? When they would be shipping these kinds of things, or selling them at the very least. And that's what I'm excited for. I'm excited for how much I can do with those models.
Yeah, one of the things that happened, too, with so much news, was we got an update to Operator. Yes. They added the o3 model to Operator, which is OpenAI's browser-controlling system. Got way better. Got way better. I booked a flight with it. I think that the big limitation there is the speed at which it works, which makes you not use it as much.
Yeah, I think it was smart for OpenAI to put into their user interface some set-it-and-forget-it things, and they have that with deep research now, where it's just like: here's a loading bar, don't worry, I'll let you know when this thing is done, we'll reach out to you and hit you with a bing. I think stuff like that helps with Operator. Yeah, yeah. All right. Mr. Andrew Mayne, where can people find you? I am @AndrewMayne, and I am andrewmayne.com if you want to read my occasional blog posts.
And I'm Justin R. Young on X as well. Until next time, friends. We'll see you later. Diamond Club hopes you have enjoyed this program. Dog and Pony Show Audio.