Hello and welcome everybody to the Attention Mechanism. I'm Justin Robert Young, joined as always by my friend, your friend, the world's friend, Andrew Mayne. How you doing, buddy? I am doing great, Justin. A lot of stuff going on in the world of AI. Let's start with NVIDIA. It's been a while, but it has been happier times for the NVIDIA stock since we last got together. We have rebounded back to around where I think we got on the train. So for those of you keeping...
uh, keeping track yourself. I don't know whether or not you're playing the home version of this game, but we've more than come back from the drop. We got in when it was around the one-tens or whatever. Yeah. Not that we timed this podcast away for NVIDIA to bounce back so we didn't have to be like, what was that?
We weren't on strike until NVIDIA got back to 133. We were not. So the rumors that we were not going to do an episode until we got back to the mid-130s, that is dispelled. That's not true. Yeah, and again, we are not investment advisors. Don't come to us for investment advice. And these are always going to be volatile. And the reason we got in was because the DeepSeek thing didn't mean what Wall Street thought it meant. Yeah.
You know, yeah. The big dip happened with tariffs and the idea that a big part of NVIDIA's ability to export stuff was going to be affected, and we saw an even bigger dip. So if you were waiting and bought off of that,
you're like super extra smart, which is great. And if you didn't buy, it's fine. It's all good. We're not trying to push this, but that kind of came back to our thesis: people don't realize how the demand for AI is just going to get bigger and bigger and bigger. And Justin sent me an article today, a Bloomberg piece, a pretty well-detailed piece about Apple and AI. And you realize just how caught off guard they were, even long after they should have been aware.
So let's get into that. And I think we've talked about it even on this show before, but one of the stocks, one of the companies, one of the things we have talked the most about in our friendship has probably been Apple Computer. And I remember you telling me a story a long time ago about
a moment in which you realized that Apple Computer was similar to NVIDIA, just being fundamentally misunderstood at a point before it became the world-beater that it is today. And as we talk, and I sent you the article this morning, it appears that it is a company in flux. And it seems like, if we are to understand the point of this podcast, talking about artificial intelligence and the large language model revolution that has defined our age,
Apple kind of missed the boat on, or at least they had different decisions that were made in very, very key positions as to exactly how much they should take it seriously. I'll say, to lead off the conversation, I think it echoed a lot of things that we were hearing kind of just rumbling around about Apple, that it is a very divided company. And I don't mean that they necessarily hate each other, but they're siloed. They have warlords that kind of control elements.
of their company. And it confirmed a lot of worries that I had, just as a consumer, around where this company is. It's been a long time since they released something that was magic in the way that they seemingly routinely did in the past. Apple Intelligence was something that has not materialized in even the way that they
demoed it, and there was not a lot of hope that I saw in that article, which was pretty deeply sourced by Mark Gurman, who has had a line into Apple for a very, very long time. What did you take away from this article? I think part of the challenge comes down to the time frame in which companies work.
And the article pointed out that Apple was used to having more time to sort of adapt and evolve. Apple could be more conservative about what they're trying to do, take their time to get into a market, figure out next steps. Because a big part of Apple's brand is reputation. You trust FaceTime, you trust a lot of these things, because
they're secure when they implement features. Things don't always work, but it's surprising when they don't. And that's the thing: the Apple brand was that methodical. And I think part of it is you can think of Apple as a hardware company that does hardware and software, but they often operate in hardware time cycles.
And hardware time cycles are very different than software cycles. You know, hardware updates come when you can literally fix your circuit boards or your hardware or whatever, and it takes a lot longer, and so you can think in those time frames. AI is not operating under hardware time cycles. AI is not even operating in traditional software development cycles, which, paradoxically, would sort of get longer. If you followed Apple during the 1990s,
one of the biggest problems they had was getting their new operating system out the door, and they went through different variations on this, competing products. They ended up buying NeXT from Steve Jobs, and that became the operating system that we use today on our Macs and even in our phones and whatnot.
I was at an event with some students and pointed out, you know, that's how you code in iOS. You ever notice the letters NS in NSString? It's a little function in there. And the NS is NeXTSTEP. That was NeXT, the company Steve Jobs created after Apple, and then Apple bought it. And so I'd say that's been the problem for them: you realize that they didn't pay attention to generative AI in the LLM space until ChatGPT came out.
Yes. And it was literally the executives using this: oh, wow, this is cool. And I remember in 2020, with GPT-3, thinking, imagine what Apple could do with this, even then, with smaller things for text messaging. You could say, this is actually fine-tuning GPT-3 to get my words right, to get this right. I trained my own tiny little LLM, a GPT-scale model, that I had running on an Apple phone
for doing grammar correction that was better than the ones Apple was using. And I assumed, because I was able to make this thing, that Apple was going to be training things like this. And it turned out no, it would take them another three years before they even bothered to really do that in earnest. Yeah, one of the anecdotes in the article is Craig Federighi, who had been cool on AI development,
only changed his tune when he used ChatGPT to help him code something that he had been working on personally. And that was it. That's how late in the game it was, compared to how much time, effort, and, to your point with NVIDIA, chips had already been purchased by companies like OpenAI and Microsoft and Amazon. Yeah, that's the other part: for them to catch up, they needed massive compute to train.
There are people that lend that to you. You can do partnerships. You can do things like that. I was just surprised by how late in the game they got into it. And I understand, too, at this point, I think they were finally winding down Project Titan, which was their autonomous car. By the way, apparently their reason for abandoning that was they didn't think AI was capable of self-driving,
which was a problem because, I told you, there was an article that came out by some Apple researchers that kind of showed you, I think, and these people I'm sure are very, very smart, smarter than me, whatever, but they were talking about benchmarks and stuff like this, and, like, oh, look,
it can't solve this sort of thing. And I'm like, no, with one prompt I can solve that problem, and then I can train a model on it and never have to give it that prompt again. And you just realize that there's a large number of people there that were involved in AI that really didn't know much about LLMs. Yeah.
And that's a lot of companies. Plug for our company, Interdimensional: we often go in there, we show people, this is what's not in the documents, this is actually how you do the thing, and we find out that
billion-dollar companies are making decisions because they have three or four researchers go play with a thing, and they come back and give them a report like, no, it didn't do the thing, no, it's hype, whatever, and, like, oh, okay, moving right along. Which is catastrophic in the case of Apple, because I've said this before: I love my iPhone because I've got a hot button that takes me straight to ChatGPT. But my phone, my iPhone, is where I go for iMessage.
ChatGPT. Those are my two requirements. And if somebody came out with another super secure form of communications that I knew they would defend to their dying breath from government intervention or whatever, which I believe Apple will do, and Apple's the only company right now that I think will do that, I would switch.
And that was the other big thing: a lot of time is spent in this article talking about the long-form history of Siri, the fact that Steve Jobs very much believed in it and fought very, very hard to acquire the initial startup, but also the fact that it has dwindled. It's not as capable as it was when it was acquired. It hasn't kept pace
with Google's assistant, with Amazon's assistant, let alone what you can do now with the LLM revolution, with ChatGPT and Gemini and everything. But more than that, they haven't been able to leverage Apple's greatest strength, which is what you just said: I trust putting data into my phone. I trust the contacts I have in there. I trust my emails that I have in there, and I trust my text messages.
I need for there to be a better way for me to maximize those things that I want to keep on those devices. Because I agree with you. This was the first time that I looked at Apple and said, they're vulnerable. Like, I don't think that they're ever going to go away, but Eddy Cue said it on the stand at the Google trial: you might not need an iPhone ten years from now.
And I mean, he said that under different circumstances, but I think it's true. You know, there are companies right now that you don't think of as hardware companies that could very well be hardware companies. OpenAI is one of them. Perplexity could be one of them. Anthropic could be one of them. There's a lot of different people that could all of a sudden come out with a device
that does what you want it to do based on this technology, and it could be really compelling. I had lunch about a month or so back with Adam Cheyer, who's one of the creators of Siri, talking to him about where they saw this before Apple acquired it. We remember when Siri was first in the App Store, before Apple bought it. It was great for the day and the capabilities. I've said this: it is an application that became less useful over time.
And part of the reason that happened is that for the people that were building Siri and working on the Echo devices, the ML was often just trying to recognize words. They were spending a lot of time trying to understand word recognition. There really wasn't as much deep neural network stuff, which is now kind of a solved problem, and everything else was basically a rules-based approach. But that being said,
they just had a lot of capability, and it was sort of modular. And they took the guts, but left the cool part behind. And Siri was the thing that Jobs had said, this is the future, this is where we need to be going. And then nothing happened. It finally got officially released after he passed away. And I would think that had Steve Jobs not passed,
he would have been in the offices at OpenAI in 2019. He would have been playing with GPT-3. He would have been playing with GPT-4. He would have been pestering Sam with questions about this and where it was, and then would have decided this had to be part of the iPhone and made some sort of strategic deal,
and Apple would have been in a very different place. And I think his advice to Tim Cook before he passed was, don't spend your time guessing what I would do. But I would say there was this forwardness to stuff. And Apple had been, one, they were going to sell a self-driving car. Whatever it takes, in the person of an Elon Musk, to get through and actually deliver hardware like that, they didn't have.
They then pivoted to the Apple Vision Pro, which is the most expensive desk weight I've ever had. It is a camel, just a series of compromises. I was at a dinner party two weeks ago and we were talking about Apple, and I was like, look, I'm worried about the company that I love, that I bought so much Apple stuff from.
And Apple Vision Pro was the first time that I was worried about it. And these are people that are adjacently involved in tech, but they're not, like, tech journalists or working in tech companies. They're just working in the greater tech world. They're like, well, yeah, but that was like a research thing. And I'm like, it was $4,000. It was not a research thing. And they put a huge amount of time, effort, and money into it. They gave it their big rollout.
I think they were hoping that it'd be so popular at $4,000 that they could then come out with a light version that people would be so into. Instead, it's died on the vine. It just doesn't have any footprint culturally. There's no conversation about when the next version's coming out, light, heavy, or otherwise.
Yeah, it's a thing that when I got it, I thought that some of the solutions were just absolutely elegant, what they did in there. I thought the hardware, for what it was, was very thought out, but as a whole, terrible. I literally have a battery pack, you have a heavy unit on your head, I have a cord. Like, why do I have a weight on my head if I have to have a cord anyway?
You knew they wanted to build a thing, they couldn't build it the way they wanted, and you could tell that there was a turf war between the hardware people, the software people, the usability people, and nobody won. And they created a product that was so bad in implementation that nobody talks about it. It's a joke. Meanwhile, this AI thing is happening. And then Apple Intelligence came out, which I thought was a really good first way to sort of put this stuff in there.
And I used some of it, but it's not like it continued to become more embedded. I'm just back to using ChatGPT, and I want ChatGPT to control more things. The thing that, again, I was hoping I'd be able to do is leverage every conversation I've ever had in iMessage, which is the vast majority of my conversations. I've wanted it to be able to do that. It'd be worth it to switch back to Mail
so it could refer to everything. I would switch to Calendar if it could leverage it or create things in a quick way. That's one of the elements that I still don't think is fully realized with AI: just quick interaction. I'm thinking about it, I can go to a thing that records it on a level of detail that would otherwise be impossible if not for that kind of LLM capacity and the voice technology to do it. And they have failed.
They have failed utterly. And this Bloomberg article explains they're not even going to mention it at WWDC coming up. It's so far down the road. They're building a kludgy version of a smarter Siri that has to use the old code, and it doesn't work when you actually start using it. They have a new
team that's apparently out in Europe making an LLM version of Siri that is just now starting to get to the level of some of these chatbots. But you have people that are the head of machine learning and AI that didn't believe in chatbots and thought that they were a waste of time. And it's like, dude, for a flagship company, these are warning signs.
I get it if you're some edgelord commentator who wants to say these things aren't working and you're not paying attention to the numbers. But if you're a company that's run by a guy notorious for the numbers, and you're looking at the adoption rate,
it was clear. It was absolutely clear. And it's weird, because you get opinionated takes from certain people in AI that are like, well, it doesn't work because it doesn't satisfy my personal definition of whatever it's going to be. And it's like, it's a tool. Are people using the tool? Leaps and bounds, yes. And so we saw what happened: there was some Senate testimony talking about Google and antitrust and other companies. And there have been basic conversations about
Google's dominance, and one of the things that was brought up as a point of vulnerability is that, for the first time ever, search on the iPhone has gone down. They're seeing Google search usage go down. Why was that? Yeah, this was in testimony from Eddy Cue saying that for the first time in the history of the iPhone,
mobile search went down. It has never gone down before. And that is because of OpenAI. It is because of ChatGPT. It is because of large language models that that has happened. It is a sea change. And look, that part of the testimony is coming out by way of an antitrust case. One of the things that it is believed could happen is that, right now, Google pays Apple, I think somewhere in the neighborhood of $22 billion a year, to make sure that Google is the default search engine
on iPhones. It looks like, by way of government intervention, that might not be allowed to happen anymore. And obviously, for any company to lose $22 billion a year, that's going to be money that would otherwise be missed. Who knows if it would have stayed $22 billion a year when that number is going down. They might have wanted to renegotiate exactly what that was going to cost. Or Apple might have wanted to just say, hey, look, it's better for us to have a fresh start here, because
Google might not be the brand. Web search might not be the thing that people want to do with their iPhones. Yeah, if you notice, Google's been putting a bunch of new features into web search. If you do a search now, because I just had this thing pop up and I'm like, what is this thing, it'll take over if you highlight something.
In some cases you get a pop-up or something that'll come up, like the AI Overview. Like many things Google, it works sporadically, but you've had these things pop up. We've talked about when the bill comes due, both with Facebook and with Google: when you're an ad company, an ad tech company, and the rate of people being added to the world is slowing down, it means
your ability to make money is dependent entirely upon either showing people more advertisements or raising your advertising rates. As we've seen, rates aren't going up; the number of ads is increasing. And so these companies are in a very weird bind where they're ultimately going to be constrained by our
faith in them and how much we trust them. And Apple has an advantage, because Apple's foray into ads, iAd, was terrible. And Apple, by not being very good at ads, just didn't really destroy much of the trust that we had in them. But I'd say that we're in this position where Google is extremely vulnerable right now. And I'd say that Apple is the thing that I use to get stuff done. But what do you use a phone for? And I use my phone for...
iMessage, certainly. I get notifications; 98% of notifications I don't want, by the way. Like, every app wants to shove things through. And so I would say that I use it for iMessage. I use it for Slack and some other things. But if you ask me what are the Apple-like features that I use it most for... It's not a great environment to develop for now, by the way, either. It's a pain. It's always been a pain.
Developing for iOS has always been terrible, but because they had dominance, because it was the platform to be on, people jumped through hoops, jumped backwards through hoops to do it. By the way, just so it's mentioned, when you talk about the dominance in hardware: there was the 2023 announcement that OpenAI was working with former legendary Apple chief design officer Jony Ive on some kind of hardware. That was a little while ago. Who knows if and when anything will ever happen with that.
I would not be shocked to see one of these young Turk AI companies that have really come to prominence over the last five years drop hardware in the next two years. Wouldn't shock me. Yeah, I think there's been public talk about that. I think that... oh, but I can't. You understand, it's the company I'm closest to, and I have a continuing relationship with them. They've been wonderfully supportive of me, providing inbounds for my company and referrals and stuff.
Everybody should know, full disclosure, that my friends are still there, and I love the company, and I've also got to be biased toward that. But I will be happy to talk about other great things in other places too. I had to ask a friend who's there, who's head of iOS development. I said, hey, I have a question about trying to help somebody out with a project. He's like, I've hired every good iOS developer I can find.
And that's one of the things I know: the teams there are incredible. The number of people building applications and working at OpenAI is incredible. They have a huge number of people building stuff. And I think that, should they, were they, and I have no inside knowledge about hardware, I'm going to be very clear, I have no inside knowledge about anything like that,
should they go do something like that, given the fact that one of OpenAI's advantages right now, by the way, is the number of people they have that are the best consumer-facing engineers they can hire. They've been hiring them from Google and Apple and from Facebook, et cetera. I think they'd be in a great position should they do something.
I think it would be the hottest device that tech has seen in a very long time. It would come with a lot of hype, which is part of the reason why I think they'd probably want to be really careful with it, because they wouldn't want to put something out that has a bad reputation immediately. Well, let's talk about something that began with a company that's not OpenAI, that got a little shine at Microsoft Build today. Can you explain,
in a way that my limited capacity could grok, what MCP is? Okay. Model Context Protocol. Okay. A lot of innovations often aren't major leaps forward in technology. They're basically drawing a framework or drawing a circle around something and saying, we're going to call this this,
and this is the way we should interact with it. We've talked a bit before about an API, an application programming interface. What is that? That's the way one application talks to another, whether it's on a server or whatever.
You know, when you're on the web and you're on YouTube and you click a button, it goes to an API in the Google cloud, which says, send this video, do this, whatever. Okay. So LLMs, language models, and again, we call them language models, although they're really everything models now.
A couple of big things have happened. One was early on with GPT-3: we saw, oh, shoot, this thing understands code. We didn't train it explicitly on code, and it can write code, usable code. It made a button. I remember, hey, I just had it make this button, and the button worked. So OpenAI went and trained a model called... In a year and a half, this is going to blow Craig Federighi's hair back.
Yeah, so they trained a model, I almost forgot the name because it was just a short word, called code-davinci, which was a variant of GPT-3 trained specifically on code, and it was really good. And they created a model called Codex, which powered GitHub Copilot in VS Code and whatnot. And that was the first AI model really aimed at code. It was the first time we went beyond language into code.
It was super useful. And there were other models, people worked on stuff too, Tabnine, whatever. I don't want to take anything away from other people who worked in the space, but this was a big leap forward for language models. So the next step was not just training them to write code, but training them to use tools or functions. The idea is you could tell ChatGPT, hey, if you need to get something from the web,
use the web search tool and go search for this. And then what will happen is, when it spits out its output, a little thing will say, oh, you want to search the web; I will connect to an API, I'll get that information, and now I'll give it back to the model. The model will know where it came from and understand how to use that. So that was a big one: tool use. Tool use was a big paradigm. You trained models and made them better at it. It's called function calling. It's great.
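The loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in, not any vendor's actual API: `fake_model` plays the role of the LLM deciding when to call a tool, and `web_search` is a stub instead of a real search service.

```python
# A minimal sketch of the "function calling" loop: the model asks for a
# tool, the runtime executes it, and the result goes back to the model.

def web_search(query: str) -> str:
    # Stand-in for a real search API call.
    return f"Top result for {query!r}: NVIDIA closed higher today."

TOOLS = {"web_search": web_search}

def fake_model(messages):
    # A real LLM decides this; here we hard-code the behavior:
    # if no tool result has come back yet, request the tool.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": "NVIDIA stock"}}}
    # Otherwise, answer using the latest tool output.
    tool_output = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"Based on a search: {tool_output}"}

def run(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
            continue  # hand the result back to the model
        return reply["content"]

print(run("How did NVIDIA do today?"))
```

The key design point is that the model never touches the network itself; it only emits a structured request, and the surrounding code decides whether and how to execute it.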
So Anthropic, which is an amazing company, they've been building great models, and their models, Claude 3.5 and 3.7, were my go-to code models. Now I use o4-mini, but I'm a fan of the stuff they did. They took this a big step further and said, okay, what if we take... there's a protocol called OpenAPI. Having worked for OpenAI, OpenAI versus OpenAPI gets very confusing, but OpenAPI is basically: if I build an API
that has these different endpoints you connect to and get things from, I can produce a spec that says, hey, if I need to send an email, use this endpoint; if I need to do this, do this; this is the structure of the data, whatever. So you can take that OpenAPI spec, which tells you what the API can do. And now Anthropic said, oh, well, what if we, one, made models implicitly understand that kind of spec, and two, gave them access to a server?
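To make the "spec that says what the API can do" concrete, here is a toy OpenAPI-style spec written as a Python dict for illustration. A real spec is usually YAML or JSON, and the paths and operation names here are invented:

```python
# A toy OpenAPI 3-style spec. The "Toy Mail API", its paths, and its
# operationIds are all made up for illustration.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Toy Mail API", "version": "1.0.0"},
    "paths": {
        "/email": {
            "post": {
                "operationId": "sendEmail",
                "summary": "Send an email",
                "requestBody": {"content": {"application/json": {
                    "schema": {"type": "object",
                               "properties": {"to": {"type": "string"},
                                              "body": {"type": "string"}}}}}},
            }
        },
        "/contacts": {
            "get": {"operationId": "listContacts",
                    "summary": "List contacts"}
        },
    },
}

def operations(spec: dict):
    """Flatten the spec into (method, path, operationId) tuples,
    which is exactly the menu of actions a model could be shown."""
    out = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            out.append((method.upper(), path, op["operationId"]))
    return out

print(operations(spec))
```

The point is that the spec is machine-readable: a model (or any program) can walk it and discover, without human help, that this API offers "send an email" and "list contacts" and what data each one expects.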
So all of a sudden, if I let the model have access to a server, it can interact with that API. Not just call a tool, but: here's your server, now you can make the API calls and do what you need to do. So what does this mean? I'm using Windsurf right now to code, which I love. Windsurf is apparently getting acquired by OpenAI. And OpenAI came out and supported MCP because they said this is a great protocol. OpenAI is like, we embrace it. We think it's great.
So Windsurf is a tool I use to code. Right now I'm working on a project and I have a database sitting up there in the cloud. It's on Supabase, right? Supabase has embraced the MCP protocol, and Windsurf has embraced it inside of there. So what I can do is tell my code editor, here are the credentials for my Supabase database.
Now I want to build a thing. Go talk to the database. You can now query it. You can write rules for it. You can change it. You can also do this inside of models like Claude. There are a few hoops you have to jump through, but Claude, their chatbot, will let you basically connect it to servers to do this, and this is a thing coming to everywhere else.
So it's great. Basically, your little chatbot can now talk to these servers in the cloud directly. And MCP was a protocol that simplified it, made it very easy for everybody to get on board. And it was great from Anthropic. I went to an event last week at the Exploratorium in SF. It was MCP Night, sponsored by WorkOS, because people are so excited about this paradigm. And you'd say, like, what's the deal? It's like, well,
we just decided to agree on a set of rules to make it easier for chatbots to talk to servers in the cloud. And that little step... there are some things that just get overlooked. If you drive through the Port of Oakland, you see all these shipping containers, and we see these every day. Those didn't always exist.
They weren't a major piece of technology in the sense that they weren't stronger or sturdier or anything else. It was literally: let's have a uniform size and shape for this thing. So if I order a hundred bicycles from China, they know how many fit into a container, and they'll shove them into a container, seal the container, and ship it. And then I can get that container delivered to Ohio. Think of MCP the same way. By itself, it's not a big technological leap.
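The "uniform size and shape" MCP agrees on is, roughly, JSON-RPC messages with standard method names like `tools/list` and `tools/call`. This toy dispatcher handles those two methods in-process to show the shapes; a real MCP server speaks the same shapes over stdio or HTTP, and the `get_row_count` tool here is made up:

```python
# A rough sketch of MCP's standardized message shapes. The client
# (a chatbot) first asks what tools exist, then calls one by name.

def get_row_count(table: str) -> int:
    fake_db = {"posts": 3, "users": 2}  # stand-in for a real database
    return fake_db.get(table, 0)

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request the way an MCP server would."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": "get_row_count",
                             "description": "Count rows in a table"}]}
    elif request["method"] == "tools/call":
        params = request["params"]
        count = get_row_count(**params["arguments"])
        result = {"content": [{"type": "text", "text": str(count)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "unknown method"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

print(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
print(handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
              "params": {"name": "get_row_count",
                         "arguments": {"table": "posts"}}}))
```

Because every server answers the same two questions in the same format, any MCP-aware client, Windsurf, Claude, or anything else, can connect to any server without custom integration code, which is the shipping-container trick.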
But as a framing and a standardization of something, it has made chatbots even more useful. And it shows you there are a lot more things out there just waiting. So Microsoft talked a lot about it during their Build conference today. And by the way, Google I/O is starting in the next week. We've mentioned WWDC before, but this is the season for a lot of these developer conferences. Microsoft Build took a lot of time to talk about MCP, and specifically,
the way that they framed it was really how big of an effect it's going to have on the web going forward, on what we would look at as the agentic AI future that we're going to have. And it really made me think that the future is here a lot quicker than people from a mainstream perspective are understanding. And by that, I think that the line that's going to be crossed is somebody's going to put out a browser that doesn't feel like any browser you've ever
dealt with before, one that is using agents, where the first thing you do, either through some graphical interface or a chat field or something, is just interact with this thing that is bringing you things, as opposed to you going out to all these different sites. It is going to blend information on a level that we have not seen before.
And I don't think it's far away. I mean, I would hazard a guess that somewhere in one of these companies, this browser is being used right now as we speak. Yeah, there's The Browser Company and others that have been building versions of these kinds of things. And I think that, finally, when somebody gets the right formula together, it's wonderful. Like, for me,
I appreciated MCP, and explained MCP to people, because it's not some new piece of technology. It's literally just a different framework to understand something. It's great. One of the things that's really hard when you're building an application, and you'll hear horror stories about security rules, is you have a database where all of your auth and everything else is in there.
And you can start getting really complex rules, where I can create a blog post, but somebody can't see it unless I switch the visibility or whatever. And that's where a lot of disasters happen. Language models are actually really good at those security rules. We found out early on I could have them write my rules. They could write a better rule than a person, because they were very good at all those if-then statements. And I was in the process of working on an app yesterday,
and I noticed that, oh, I did a change. I went back and made sure I had super tight security rules and everything, and something wasn't working. But because I had MCP working in Windsurf and it was connected to my database, I could literally ask it, hey, what's going on with my security rules? Why isn't this working? It's like, oh, well, you set this for one thing,
but I think you wanted to do blank; do you want me to change this? I said, yes, change it. It did it, and it did it right. And that was the thing to me. One, you can make things more secure. You have a lot more opportunities when your chat has the ability to interact with the world, or the world of the web, and do stuff, or talk to APIs. And, you know, one of the things you think about, the future we could have had: do you remember Sherlock?
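The kind of visibility rule being described, a post is hidden unless it's published or you own it, is small enough to show. Here it is as a plain Python predicate; in Supabase this logic would live in a Postgres row-level-security policy, and the field names here are invented for illustration:

```python
# A toy version of the blog-post visibility rule: the if-then logic
# that LLMs turn out to be good at writing and auditing.

def can_view(post: dict, viewer_id: str) -> bool:
    """A post is visible if it's published, or if the viewer owns it."""
    return post["published"] or post["owner_id"] == viewer_id

draft = {"owner_id": "alice", "published": False, "title": "WIP"}
public = {"owner_id": "alice", "published": True, "title": "Launch"}

print(can_view(draft, "alice"))   # the owner can see their own draft
print(can_view(draft, "bob"))     # everyone else is blocked
print(can_view(public, "bob"))    # published posts are visible to all
```

The disasters happen when rules like this get layered (roles, teams, sharing links) and one branch is wrong; a model that can read both the rule and the database state, as in the MCP anecdote above, can spot the mismatch.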
That was back in 1998, when Apple introduced an application called Sherlock. It was considered competition for a program from another company called... which had Watson, which eventually... you know, that's where the whole idea of having your application get Sherlocked comes from: the fear that a big company might take on all the features your application was going to have.
But it was a very neat idea: not everything needed to go through a browser. For some things, like search, there are things I want to do where I just don't need to browse the web. We've created a layer of complexity by doing things through a UI when really what I want is an API. If I want to book a ticket somewhere, I don't need a ton of images and ads and all this other stuff.
I need tables. I need five tables, and then I need something intelligent enough to understand them. And so much of the web has been all this cruft we've put on top of it. I think the exciting thing about where we're going with chatbots and MCP and related protocols is that we can start rethinking how these things work. I was saying this the other day: right now, Google and Apple are under threat. The next big one is Amazon. Because somebody could create a database where you can say...
It could be run by Wikipedia, whoever, and there are ways to prevent people from using and abusing it. But anytime you want to sell a product, the way you do with ISBNs, or with universal product codes, you create a big database. And I don't want to use the term blockchain, because you don't need that to do it. But if I create a big graph that I want to list things on, and I get a unique code for each item,
I can then introduce companies like Consumer Reports and so on. And instead of going to Amazon and having to go through all their advertising and all the other things they're using to capitalize on my attention, I could just go buy something. Maybe Amazon's back end is still doing the fulfillment, but there's an entirely different way in which we could consume things, buy things, and sell things that gets away from the web.
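Stripped to its bones, the registry being imagined above is just a shared mapping from unique codes to product records, the way ISBNs map to books. A toy sketch in Python (every name here is invented for illustration, and a real registry would of course be a shared service, not an in-memory dict):

```python
import uuid

# Toy sketch of an open product registry: anyone can register an item
# and receive a unique code (like an ISBN or UPC), and anyone else
# (a reviewer, an agent, a storefront) can resolve that code back
# to the product record.

REGISTRY = {}

def register_product(name, maker):
    """Add a product to the shared registry and return its unique code."""
    code = str(uuid.uuid4())
    REGISTRY[code] = {"name": name, "maker": maker}
    return code

def lookup(code):
    """Resolve a code to its product record, or None if unregistered."""
    return REGISTRY.get(code)
```

Anything that can speak the lookup protocol, a chatbot, a Consumer Reports-style reviewer, or a fulfillment back end, can then build on top of it without going through a storefront's UI.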
Have you noticed this thing with founders getting back involved in their companies because of AI? I've seen this twice now. I saw a clip that Bezos is working with Amazon on their AI stuff, and apparently Sergey and Larry are both involved. Apparently Sergey is, like, coding for Gemini. Doesn't sound efficient, but okay. Yeah.
But it does sound like some of these legends in the world of technology, and look, they've made jillions of dollars between them and have the right to do whatever they want, are truly jazzed about this. They had found other interests previously, and now they're back at their companies trying to be a part of it.
I think part of it is that it is so new. The goal of a founder is to eventually replace yourself in just about every capacity you can and then just supervise your replacement. And because this is so new, you can be a founder and come back in, and your ideas are going to be just as fresh and interesting as anybody else's.
I think also part of the problem is that some of these companies had trusted that their AI people knew what was going on, but it turns out that just because somebody has AI in their title doesn't mean they know. And I see this a lot. I consult for firms that have AI teams, but these are very traditional AI people who are very good at certain skills. But when it comes to using language models,
they're no better than somebody picked at random. And I think that's happening at some companies. I think they've realized, oh, this person who's headed AI research and development for the last 12 years is no more qualified for where we are now. And I think that took a while. That was why there was a lot of resistance in some places. I could see it in the papers. I could see it in some of the stuff being done at the companies that were supposed to be known for AI. We were like, oh, the people talking to the CEO don't understand. Yeah. And now, like you said, why did Craig Federighi have to be the guy to take a look at ChatGPT? Why weren't his AI people running to him beforehand and saying, boss, you need to take a look at this? Why did he find out when your mom found out?
Yes, exactly. Why wasn't somebody saying, here's GPT-3.5 in the sandbox, look what it can do, here's something that really needs to be paid attention to? As opposed to, oh, it's on South Park, let's pay attention to it. That's pretty late to the game. Pretty late. But you are not late to the game if you're listening to this podcast. You are on top of it. You are a smart person. And you want more from Andrew Mayne. Andrew, where can they find you? I'm at Andrew Mayne on X.
You can find me, Justin R. Young, on X as well. Until next time, this is The Attention Mechanism. See you later. Diamond Club hopes you have enjoyed this program. Dog and Pony Show Audio.