Bryan. Hey, Adam.

Hey, Simon. How are you?
I'm very good. Thanks.
I'm gonna put myself up here because I was here last year.

Grandfathered in.

Steve is grandfathered in. Adding to the confusion,
we have also Brian number 2.

Brian number 2, also known as Steve Tuck, here in the litter box. And then keep an eye out for Lyndon Spades Johnson. Yes. Yes. Mike Caffarello from last year is also gonna join us. So he's our we we've got all of our our our, distinguished guests from past years and our distinguished guests this year, Simon Wilson.
Yeah. I've I've never done predictions before. This is gonna be interesting.

Oh, this is so much fun. So I and I don't know, Simon, I don't know if you, if you if you listened to any of our
I did. I listened to last year's,

this morning
just to get get an idea of how it goes. And I was very pleased to see that the, the goal is not to be accurate with the prediction.

The goal is not to be accurate.
Reassured. That's right.

Yeah. Accuracy is I mean, like, there's there's no thrill in accuracy.

There's no thrill in accuracy. No. That's it. It's one and I also feel, I mean, especially, Adam, having gone through the the just listening to because we've now done this we did it in in 22 and 23 and 24. So, and I mean, I we've said this over and over again, but God, just like relistening to those, I'm really reminded yet again, predictions tell you more about the present than they do about the future.
And and be like, I mean, God, those years had such like, 2022 was the year of web 3. And Yeah. The God, it was so hyped and overhyped that it created this huge overhang over that year where with, Simon, we did something that actually I regret doing and I would never do again, which is we I limited everyone to 1 web 3 prediction because everyone was just like tripping over themselves to to to demise them.

Yeah. I mean, that was that was a kindness, Brian. I think like No. Okay. So we're saying this all from ourselves.

No. No. No. Here's why I think that was a bad idea. Because the the fact that you have to put that limitation, and obviously, this was this is the policing mind versus the criminal mind.
Clearly, I was putting that limitation in it for myself. I I clearly was the one who, like, I was the criminal. Yeah. And the but the fact that you need that limitation says that, like, actually there's a desire, like, this is so overwhelming and that people wanna talk about it. And as a result, I think, Adam, our other predictions that you were not that good.
My other predictions that you were terrible because the I really wanted to make 3 web 3 predictions. I just wanna make that nonstop with 3 predictions. But Simon, that year, Adam had a great prediction, which is that web 3 falls out of the lexicon. And Simon, for years, you know, we would we've done this every year since 2000. We'd be missing some years in the middle, but but we did this for a long, long time together, and we didn't record the sessions.
And one of the the challenges that we had is when a prediction was right, we and Adam, I I remember vividly this happening to you, but I think this happened to a bunch of folks where a prediction was correct, and then someone would look back and be like, well, no, everyone knew that at the time. And you're like, screw you. I was arguing with the entire table that night, like the
and we can go back in time and listen to to Adam's prediction in 2022, 1 year prediction that web 3 drops out of the Lexicon. And it was like everyone's like, oh my god. Like and Adam's like, oh, this is like, I'm predicting this with my heart, not my head. I know it's gonna be wrong, and, and it was spot on. So, and it's a lot more fun when it's like that.
Well, we're not actually did the accuracy is not actually that interesting. It's much more interesting when, we can, and then because when they are accurate, it's it's it's pretty interesting. So in that spirit, Adam, I do think in I do wanna revisit just briefly because I know we we got a this is gonna be a very lively year. I do wanna revisit some past predictions. In particular, and I know I think you listened to this one as well, and now this is a 6 year prediction from Ian in 2023 that Apple goes into and out of the VR business, which I love.
So for Simon, another thing that we that we love is parlays to make it just in case you think a prediction is gonna be accurate, you know, add a parlay to it to make it.
Gotcha. Okay.

And so, Ian predicted that this is, you know, this is in 20 January of 2023 when, there were kind of rumored stuff, but no one really knew. And Apple was indeed I mean, he is that it was amazing that that prediction is, that prediction might be true on 3 years, not 6, which is crazy. Yeah. Mine mine
from a year ago was Apple VR related as well, which obviously 3 years is or the 6th is much bigger than 1. But it's very interesting, I think, specifically because I said Apple VR will do well, but not take over the world. So that means, like, do a second revision. And they just announced last week they're stopping production on the current Apple VR. Like so it's, like, almost to the week they've ended.
So I'm not gonna say I'm right or, like, wrong, you know, but it's, like, interesting because I was, like, yeah. I think it's gonna be do fine, and I think maybe it did less than fine. I don't know.

I think it did maybe a little less than fine, but I think you're right. No. I I I Steve, you had a good prediction. It's, like, this is, like, look. This is not gonna be not gonna be the Newton, but they and Steve, I love the way you phrase it. It's like, they'll make another one.
Still a little Yeah.
I would try to make
it, like, you know, you need actionable metrics for success. There's new Chris there's new Chris Am video out today, so I went and

watched all their old other old ones.
So that's stupid sketches on my brain. But, yeah. No. I was trying to, like, figure out how to quantify what I meant by, like, meh, not amazing, not terrible. And so, yeah, we'll see.

On that note, Adam, we've got to revisit because now our 3 year predictions from 2022 are now up. And, we've got 2, we got, the the, 3 year that became kinda famous around here where Laura predicted that that risk 5 would be, would be present and meaningful in the data center, and I would say that one is not wrong. You can actually spin up a risk 5 instance on Scaleway right now. So I I think

I was I was listening to that the other night and thinking, oh, well, near miss or whatever, but sounds like

I did that one is holding on. That one is not wrong, I would say. And and, you know, you had kind of made that you had made a 6 year prediction in that same year that you'd be able to spin up, AWS instances, risk 5 AWS instances. So, in 6. So that's got 3 to run. Feels like that's got, some plausibility to it.
Plausibility.

I had I accurately predicted the demise of web 3 like everybody else that year, and then my other predictions were absolutely terrible and embarrassing. The, open EDA, we are no closer to open EDA. You're

you're you're you're you're KiCad, family KiCad,

you know, and this has happened to me a couple times over the years where I get, like, something, like, I just I I I get some bit set, and I believe that better things are possible. And I just, you know, it, it clouds my, I, I blame web 3, web 3 cloud, if I touch on.

It, it, it really explained your recent Intel suggestions, which was to open source their entire EDA toolchain. Clearly, you were trying to put your thumb on the scale of this particular
I was.

I I was. I yeah. Exactly. I was try I I was actually trying to make sure that I was not included in the CEO search by making clear what I had intended to do as CEO. So, mission accomplished on that one.
Okay. Then the other one, Adam, that at least I wanna talk about, because, I had a way I had a prediction a year ago that, that AI doomerism has dropped out of the lexicon, a la your web 3 prediction. We don't talk about p doom and x risk. And, Adam, I'm giving myself full marks on this one. I I gotta tell you.

Feels pretty right. I I mean, I think that, like, there are certainly the the niche, hardcore doomers who are still holding on to it, but I think, people are mostly letting go of it. I agree.
I'm gonna push back at the one slightly. Not on the doomerism. I think the doomerism's gone. But the, the AI skepticism, the, that the the argument that this whole thing is use is useless and it's all gonna blow over, that's still very strong.

Oh. You know? Yeah. That is still present. And my prediction, just to be clear, was purely around AI based doomerism.
And I did wanna make sure that my prediction was wrong by turning it into a parlay that the doomerists would claim credit for the fact that doomerism is no longer in the zeitgeist. And I I think that that is I I have not seen that as much. I've not seen Leon Shapira claiming that it was, his doomerism that has allowed AI to be safer. So I don't I don't think that part has come true. No.
But I I am gonna I I I would like to grant myself full marks on AI Doomerism dropping out of Lexicon. I feel that that is and it's not good, Simon. When we had you on almost exactly a year ago
Yeah. You know, we had episode after this one a year ago.

A year ago, and we had this scary IEEE spectrum article about the boogeyman of of open source AI, which we would now call open weight AI, I think.
And Yep.

That already feels like that has not aged well, that piece. So, yeah, I mean, we were we we Yeah. As you said, your eyebrows were flying off your head when you read it, and I don't I don't think that one looks back on that piece and thinks like, well, boy, then maybe that maybe that it actually did raise some good points. It's like, no. That that
is My my absolute favorite thing for the last 2 weeks was when when DeepSeq in China dropped the best available open weights model on Christmas Day without any documentation. It turns out they'd spent 5 and a half $1,000,000 training it, and that was it. Like, it was it was such a great sort of microphone drop moment for the year.

It it was. And, actually, I I I could foreshadowing because I think that is actually one of the biggest stories of last year. It it was what only happened, you know, whatever it was, 2 weeks ago, less than 2 weeks ago because you're right, Simon. That was the, amazing what what DeepSake has done. So I'd add many of the past predictions that we need to revisit before we get going on looking forward. Are you,

Not nothing really stood out. Ben wanted to to get credit for predicting a significant portion of the commercial office space was converted to housing. Depends on what you call significant, but we'll give it to you then.
Right.

Is it

what if you

it's if it's significant to you, you know?

That's right. That's right. And, it let's get Mike to, Mike Efrau, I think, is here, so maybe he can raise his hand. We'll get him up on stage. The, so, yeah.
We and, oh, it's actually so Mike's LBJ avatar, did remind me, Adam, one thing I did wanna go back from re listening to our predictions episode is, my prediction of recall my prediction of omnocracy. The, we record every meeting and it turns into we we automate away middle management. The and he said, like, ask how it worked out for for Richard Nixon, was I think your you had you had a quip. And, I don't know. And, Mike, I'm not sure if you know this, why Nixon recorded conversations in the oval office.
When Nixon first came in, this is a story that was told to Doris Kearns Goodwin that when he, ascended to the presidency, he really wanted to make sure he had great memoirs. So he dispatched an aide to go to Austin to visit LBJ on his ranch and get his perspective having just left the the Oval Office. And LBJ said, you know, I I was beginning to work on on on his memoirs and he said, you know, I'm I'm very grateful that I've recorded all these conversations and tell Nixon that if he wants to write great memoirs, he should record every conversation.

Well, if that was the goal, like, mission accomplished.

There's people who experience it out.
More good books out of it than anyone else. So, yeah, sure. Exactly.

Yeah. There I mean, it was, I remember at the I think it was a year ago or whatever that I was reading the the Watergate, a new history. And, folks visiting the Oval Office at the time would say that occasionally, Nixon would move to a corner of the office and speak as if into history. Like, so really really telegraphing that, like, this was his intention with the recordings.

You know, we've got more in common with Nixon than we thought. So, anyway, so that was, last year. Obviously, Mike, we love your predictions from last year. Still waiting. I'm I'm still, I'm in sunglasses as we speak and a hoodie to make sure that no one can pull AI related details from my my irises. But I think we're ready

to get going. Hasn't happened yet, but it's mad
as though.

Not yet. But actually, Mike, you very occasionally said if it happens anytime after, this is also gonna make some credit. This is why you should. So I think we're ready to get going. And so, Simon, let's kick off with you.
And, I kinda like what we did last year. We did kind of our one everyone did their 1 years and then we got to their 3 years, then we got to our 6 years. So, Simon, let's kick off with you and your 1 years, and I I I love that you you thought, like, you know what? I may maybe I'll go like a a gloomy 1 year and an optimistic 1 year. So I'm very, very curious what your predictions are for for the coming year.
My original idea was gonna I was gonna go utopian and dystopian, and it turns out I'm just too optimistic. I had trouble coming up with dystopian things that sounded like they'd be more than just sort of black blank sci fi. But for the 1 year one, I've just got a really easy one. I think age AI this whole idea of AI agents, I think, is gonna be a complete flop. Lots of people Yeah.
Lots of people lose will lose their shirts on it. I don't think agents are going to happen. Yes. Again, they didn't happen last year. I don't think they're gonna happen this year either.

That is really, really interesting. Okay. Could you elaborate on that? Because I
Well

I I I was I I was biting my tongue to not make the same So I definitely agree with you. What what's your perspective on why this
So my usual disclaimer, when I when I think about agents, I hate the term because whenever somebody says they're building agents or they like agents or they're excited about agents and then you ask them, oh, what's an agent? They give you a slightly different definition from everyone else, but that

everyone is convinced that their definition
is the one true definition that everyone else understands already. So it's it's a completely information free term. If you tell me I'm building agents, I I'm no more informed than I was beforehand. You know?

Well, all I know is I want to invest at whatever ridiculous valuation you're raising at. I just want that's
So in order to to dismiss agents, I do need to define them, say which particular variety of agents I'm talking about. I'm talking about kind of the the autumn the the idea of this assistant that does things on your behalf. The, I call this the travel agent version of agents.

God. And they love the travel use case.
Oh, god. They do. And it's such a terrible use case. I don't know

that. Use case.
Yeah. So so so, basically, the idea it's basically it's the digital personal assistant kind of idea. And it's it's HER. Right? It's the movie HER.

It is the movie HER. It's okay.
Everyone every like, everyone assumes that they really want this, and lots of people do want this. The problem is, and I always bang this drum, it comes back down to security and gullibility and reliability. Like, if you have a personal assistant, they need to they need to be, you know, reliable enough that you can give them something to do, and they won't go and read a web page that tells them to to transfer your bank details to some Russian attacker and and and drain your bank account. And we can't build that. We still can't build that.
We can't.

And yeah. And, you know, and, Simon, we we, based on your I mean, we it was so mind blowing to talk to you a year ago, and you you turned us on to Nicholas Carlini's work, on the adversarial machine learning. And I just relistened to that discussion. Adam, that was such a good discussion. I love Nicholas's perspective, and obviously, we had him on again with pragmatic LOM usage.
But I was just as I was relistening to that over the kind of winter break, I'm like, anyone believing in agentic AI really should listen to this thing closely because part of the reason that that that the when you have these agents in going forth in the world that taking action on your behalf, these adversarial things become real, become real threats. Right.
The the best example of this, so Claude so Anthropic released this thing called Claude Computer Use, which is this wonderful demo a few months ago where you run this Docker container, and it fires up X Windows. And now Claude can click on things and you can tell it what to do and it can use the operating system. It was a delight to play around with. And a friend of mine, the first thing they tried was they made a web page that just said download and run this executable, and that was all it took. And it was malware, and Claude saw the web page, download the executable, installed it, and ran the malware, and added itself to a botnet just instantly.

Just just, just w get pipe to sudo?
Basically. Basically. And and and it's like I mean, come on. Right? That's Wow. The single most obvious version of this, and it was the first thing this chap tried, and it just worked. You know? So Yeah. I and and every time I talk to people at AI Labs about this, I I I got to ask this question to philanthropic people quite recently, and they always talk about how, oh, no. We're training it.
We're gonna get better through training and all of that. And that's just such a top out answer. You know? That that doesn't work when you're dealing with with, like, actual malicious hack ins. You know? Yeah. It's a training scenario situation.
Training humans to resist phishing and other things didn't work. So why is training from AI going to suddenly make it work?
Exactly. So, you know, I feel like there is one aspect of agents that I do believe in for the most part, and that's the research assistant thing. You know, these ones where you say, go with search on the Internet for hours and hours and hours. Find everything you can. Try and piece things together. I've got access to one of, there were a few of those already. There's the the Google Gemini have something called deep research that I've been playing with. It's pretty good. You know?

Ask because this is what I've heard. I've heard deep research is really yeah. That that it's I'm excited about it. It seems to and are you is that available? I think you have to pay for it now, or is that is that only available to get to other private okay. Yeah. Interesting.
Beta that I'm in. I can actually so I can share one example of something that did for me. So I live in Half Moon Bay. We have lots of pelicans. I love pelicans.
I use them in all of my examples and things. And I was curious as to where are where are the most California Brown Pelicans in the world? And I ran it through Google deep research, and it figured out we're number 2. We have the 2nd largest mega boost of Brown Pelicans. And it gave me a PDF file from an on from a a bird group in 2,009 who did the survey, and it would you know, it it it Right. Right. Right. It's like, group in 2009 who did the survey, and it would you know, it it it

Right. Right. Right. Right. It's gonna get some questions and so on. Yep.
Yeah. Yeah. I'm I'm convinced that it got it. It found me the right information, and that's really exciting. You know? Like, we're numb Alameda are number 1. They have the largest mega roost

Oh, my god. We we am I in, like, a Pelican connection? I've got number 1 and number 2. Represented. Yeah. Represented here.
That's right. Yeah.

That's the only thing I would have Bay. Yeah. Well, I I think, Austin, you can sit down here. I don't think yeah. I I I don't I don't think Austin is number 3 there, Steve.
Play. Yeah. That's insane.
Scott, point being, agent that that the research assistant that goes away and digs up information and gives you back the citations and the quotes and everything, that already works to a certain extent. Right now, I think that's over the year, course of the year, I expect that to get really, really good. I think we'll all be using those.

Yeah.
The ones that go out and spend money on your behalf, that's Luminess. Okay. Like, I I

It's also like the travel use case. Stop it. You're talking about something you're talking about spending, like, a lot of money making a consequential decision on something that's already, by the way, pretty easy to do. You know, I can't actually book travel online. It does take me about 4 minutes to go do if it the and I I just feel that, like, the the putting agents in charge of it, it's like, what do you mean?
I'm like, oh, wait. Why I'm flying Ryanair around the globe. I mean, you're just gonna have, like, a lot of, So,

Simon, I love this prediction, in particular, being short, short agents. This reminds me of even more dystopian, prediction I read along these lines. I'm gonna read out loud. It said by the end of 2025, at least 20% of c level executives will regularly send AI avatars to attend routine meetings on their behalf, allowing them to focus on strategic tasks while still participating and maintaining a presence and making decisions through their digital counterparts. And I read that That's
way too low.
No. I hate that one. I hate that one so much that, sometimes I call that digital twins, which is an abuse of a term that actually does exist. Right? A digital twin is when you have, like, a simulation of your your your hybrid or cam or whatever. But, yeah, it's the biggest pile of bullshit I've ever heard. The idea that you can get an LM and and give it access to all of your, like, notes, your emails and stuff. That can go and make decisions on your behalf in meetings based on

being there Oh, it's like I mean,
if this comes to pass simulation of you?

At least one of these agents will be held hostage in a prediction years ago of the unionization of tech. There'll be a, a hostage standoff where the bot will be held against its will if it had any. I mean, I've

been I've been at a place where, like, the chief of staff, for the CEO was sent off on a similar mission, and we gave that person as as much sort of, credence and patience as you might imagine. So try it try it now with a robot. We'll see how that goes.

It's not gonna yeah. Exactly. Try it now with someone who doesn't actually fire mirror neurons for people who that are pretty upset that there's then, also, it's like, you've already like, you've started off the meeting by saying, this meeting is not important enough for me to attend, so I've sent this this shell script.
Right.

It's like cocks. Cocks. Yeah. That is

Please bow to the master cog. I have sent in my stead.
That's right.

That Stein, that is a great one year prediction. Adam, would you do you do you have,
do you have 1 year? I have another dystopian one, and this goes counter to my one from a few years ago. I think crypto is back, baby. I think web 3 is back, and I think that through a bunch of factors this year, we're gonna see, like, you know, like, Chris Dixon's, horrible, horrible book that he pumped to the top of the New York Times bestseller list by forcing all of his, all of the portfolio companies to buy tons of copies for all of their customers. Like, that's gonna be back on the bestseller list, maybe organically.
Yes. And doing web and it's very transparent. It it it is like It could be You you you you are just you you are worried that you predicted your hopes one too many time. Your heart's been broken. And you know what?
You're like, you know what? I'm gonna lock up 2025 because one of 2 things is gonna happen. Either Chris Dixon's book will be a best seller and at least my prediction will be right, or it will actually be a does it will continue to be a wreck and, my prediction be wrong, but that has a small part price to pay.

So I mean, part of it so true. Not you're not wrong, but, also, I think Bitcoin's, like, over a 100,000 or something like that. Right? That's something bananas, and we've got a bunch of lunatics coming into power.

Bunch of lunatics. Yeah.

So, anyway, that that's what informed the this one.

Now, Adam, I have to ask you, is this does the term web 3 come back?

Yes. I'm I'm putting I'm stacking my chips on the web 3 Square and spinning the gorilla.

Oh my god. It's cutting. Oh my God. You know, never meet your heroes, kids. Oh, wow. Okay. That is dark. Joshua Yeah.

But you're right. And maybe it is more telling not just about the present, but also about my present state of mind. So

that's right. There And and did you did you say you have a dystopian one and a non dystopian one? Or No. No. No. That's it. No. No. A 100% dystopian. Yeah. Exactly. That's it. That's fine. Mike, how about you? Do you have a 1 year?

Alright. So 1 year, I'm I'm a little bit chagrined by last year's, you know, prediction for 1 year, which imagined a cyberpunk future in an unreasonable 12 month time span. It's probably not gonna happen. So I wanna make it a little more modest. I'm gonna take the opposite side of Simon's and say that the strong agent vision is as ludicrous as everyone says.
But weak agents, some weak version of this kind of squishy thing is actually here to stay, by which I mean inference time, like post LLM inference procedures to improve or to have a whole sequence of LLM requests. I think that's actually gonna be around for a long time. And it means that, like, previous LLM interactions that were a little bit lengthy, a little bit annoying, but basically okay, are now gonna stretch to minutes long.

Okay. Now it now per Simon's, also his criticism of AgenTic AI that it can mean anything. Okay. This to me would be agents declaring victory over something that's got nothing to do with AgenTic AI, but it's gonna happen anyway. Is that

So I totally agree. Calling it an agent is insane. Letting it have arbitrary it's it's like a software module that has no expected termination time and no budget of anything. Anyway, that is crazy. But some of the agent programming frameworks exist basically to chain optional sequence.

Oh, interesting. Okay. So

like, hey, I'm gonna I'm gonna write some code on your behalf, then I'm gonna try to lint it. And if the linting fails, then I'm gonna rewrite the prompt, and I'm gonna ask someone else for input. Yeah.
And if that happens I believe in. And to be fair, I think we've had that exact time as a kind of agent for 2 years, almost. It's the ChatGPT code interpreter, right, was the very first version of a thing where Chatterjee writes code, runs it in the Python interpreter, gets the error message, reruns the code. They got that working in, like, I think, March of of 2023, and it's kind of weird that other systems are just beginning to do what they've been doing for 2 years. Like, some of the, like, some of those sort of the things that call themselves agents that are, like, IDEs and so forth, they're getting to that point.
And that pattern just works, and it's pretty safe. You know? You want to be able to have it run the code in a sandbox so it can't accidentally delete everything in your computer, but sandboxing isn't that difficult these days. So, yeah, that that I do buy. I think, I I think it's a very productive way of getting these machines to solve.
Any problem where you can have automated feedback, where the the negative situation isn't it spending all of your money on on flights to to Brazil or whatever? That that feels sensible

to me.
Yeah.

So, you know, I agree it's been around, like, in in real world examples for some time. But I think in the last year, we saw a kind of abstraction of that pattern for the first time.

So Totally. Yeah.

This paper that they call mixture of agents, which I hate the name, but, like, the basic idea is that you farm it out to either 10 different models or the same model with very high temperature settings. You get multiple candidate answers, and then you try to integrate them. It it definitely does better
on some

tasks. Right? It
you that that's also tied to the l one, the o the the the these new inference scaling, l, language models that we're getting, which, like, the one that did well in the AGI the AGI test, 03. That was basically brute force. Right? It tries loads and loads and loads and loads of different potential strategies, solving a puzzle, and it figures out which one works, and it spends a $1,000,000 on electricity to do it. But it did kinda work. You know?

Right. Yeah. I was gonna ask Mike in terms of, like, how this compares to, like, test time compute. Sorry. Go ahead.

Yeah. So I guess what I'm saying is that $1,000,000 were now gonna burn it at inference time.

Right. Well, it it means that this is this test time compute. Right? This whole idea that, of these thing these models when they begin to kind of, yap as as one YouTuber kind of I love this. The this is the BiCloud had a great explainer on this, where these things that we begin to kind of, like, think through their process a little bit, and that allows them to get better results.
But it sounds like the the the the the multi agent the mixture of agents is not is is disjoint from test time compute, Mike.

I guess I would say it's one pattern of test time compute.

Interesting. Okay.
Okay. Because I've got one thing I do want to recommend for test time compute. I I've been calling it inference scaling. It's the same idea. There is a Alibaba model from Quen, their Quen research team called QWQ, which you can run on your laptop.
I've run it on my Mac, and it does the thing. It does the give it a puzzle, and it thinks a very it it outputs, like, sometimes dozens of paragraphs of text about how it's thinking before it gets to an answer. And so watching it do that is incredibly entertaining. You know? But the best thing about it is that, occasionally, it switches into Chinese.
I've had my laptop think out loud in Chinese before it got to an answer. So I asked that question in England. It thought in Chinese for quite a while, and then it gave me an English answer. And that is just delightful.

That is so great and so disturbing.

This English is a second language. It's like, look, I can I can speak English, but I have to actually think in Chinese?
Right. So What's not to love about seeing your laptop just do that on its own?

Absolutely. It it and it actually does remind me, you know, we worked at, Samsung Bot Giant, and our VP of marketing at the time really wanted to be a great Samsung Patriot. So he threw out his iPhone and he got the latest Samsung phone. And to prove his patriotism, he was gonna use Bixby, which it was there. As they say, you haven't heard of for a reason.
And, Bixby and Steve, did I am I, like, making this up because this sounds so crazy. Bixby would go would start to to spout off in Korean. Yeah. It would come alive apropos of nothing and start saying things in Korean during our executive staff meeting. Yeah. Was it was not confidence inspiring.
And you didn't even have to say Bixby. That was like, nobody would do it, one other word would do it. All of a sudden, it would just start blaring on the table.

And and he's, like, not able to turn it off and, like, shoving it into his backpack. Like, that actually happened. Right? Yeah. That's basically.
So, actually, along these lines, I actually do have a a 1 year prediction. So I think that we are seeing a a big shift. We are seeing a bunch of these scaling limits on pretraining, and and I think this is gonna be the year of AI efficiency. And this it's funny because I was actually thinking this before the deep seek result dropped, and the deep seek result is astonishing. So folks have not seen this.
This is a Chinese hedge fund that that trained a model that, by all accounts, Simon, like, looks pretty good.
It is scoring higher than any of the other open weights models. It is also it's like 685,000,000,000 parameters, so it's not easy to run. Like, this needs data center hardware to run it. But, yeah, it's score the benchmarks are all very impressive. It's beating the previous best one, I think, was was Meta's, llama 405 b. This one's, what, 685 b or something. It's it's very good.

And the thing that I that I found to be so amazing is that that they they did this with, H 800 that were, they did this because they they did not have H 100 or H 200. So they had to do this on basically older hardware. And they did it on a with on a shoestring budget because they were forced to because of of export regulations. And I think it's got a lot I mean, Simon, that was I I assume that was as surprising to you. That that was a very surprising result, I think, to a lot of these.
Shocks. The the thing that shocks could DeepSeq have a good reputation? They've released some good models in the past. The fact that they did it for 5 and a half $1,000,000, that's, like, an an eleventh of the price of the closest, like, Meta model that Meta have documented they're spending on. It's just astonishing.

Yeah. So I think this is gonna become a trend this year, because I think I've been really, troubled, for lack of a better word, by the kind of 10 x growth in cluster training sizes because it just doesn't make sense with Technological revolutions, have an advantage that accrues to the user always. And the idea that like we're gonna have we're gonna have to spend 10 times as much money to get something that is only twice as good, I'm like, that's not that just doesn't make sense. And I think that there's gonna be a lot of folks are gonna really begin to look at their training build out. And now I think that that build out could be could be kind of rephrased as inference time compute, test time compute, or or mixture of agents spike.
I mean I guess the judging guy I wanted
One thing I do want to highlight is that last year was the year of inference compute efficiency. Like, at the beginning of the year, we had, like like, the the OpenAI models were about literally a 100 times less expensive to run a prompt through than they were 2 2 and a half years ago. Like, they all of the all of the providers have they're in this race to the bottom in terms of how much they charge per per token, but it's a race based on efficiency. Like, I checked in Google Gemini and Amazon Nova are both the cheapest hosted models or 2 of the cheapest, and they're not spent they're not doing the loss. They are at least charging you more than it costs them in electricity to run your prompt.
And that's pretty that's pretty like, that that's very meaningful that that's that's the case. Likewise, the ones that run on my laptop, like, 2 years ago, I was running, like, the first LAMA model, and it was not quite as good as GPT 3.5. It just about worked. Same hardware today. I've not upgraded the memory or anything.
It's now running a GPT 4 class model. So Wow. There was so much low hanging fruit for optimization for these things. And I think there's probably still quite a lot left, but it's pretty extraordinary. The oh, here's my favorite number for this.
Google Gemini Flash, 8 b, which is Google's cheapest of the Gemini models. And it's still a vision audio model. You can pipe audio and and images into it and get responses. If I was to run that against 68,000 photographs in my personal photo collection to generate captions, it would cost me less than $2 to do 68,000 photos, which is completely nonsensical. Like,

no. And and that's the kind of economic advantage. So we and this is where it's, like, a lot easier for me to be, like, no. This actually is gonna change everything because now that economic advantage is accruing to the user. It's the user that's able to do this really ridiculously powerful thing with not much money in terms of compute, which is really, really interesting.
The, so on that note, another okay. It's another 1 year that I've got, and this is not investment advice. I think Blackwell is gonna struggle. I think that the and and we'll see. Steve and I are actually just about to head down to CES, and, we'll see how prominently Blackwell features down there.
But and I know that they've, you know, sold out their supply for the next year, but they've got these thermal issues that are kicking around. They had this thermal issue that they said was a design issue that they fixed with a mask, which, doesn't make any sense to me. I'm not a I'm not an ASIC designer, but, that that would be very surprising. I think they're gonna have yield issues, like, they're kind of reliability issues, like, they're gonna have price point issues. I think it's really expensive.
And I think you couple all of that with the changing market conditions, and I think Blackwell is gonna be something we haven't seen from NVIDIA in a while, which is, a part that does not do well. And I no. I I it is not investment advice because I think there's every reason to believe that the h one hundred and the h two hundred will continue to thrive. I I I would I I mean, there is no way I would, I I'm not taking a short position on NVIDIA, like, ever, because like AWS, they've executed so well. But,
But given how much capacity of h one hundred is out there, it's gotta not just be better. That's gonna be a lot, lot better.

It's gonna be a lot better. And the price point is so high, and I think the availability is gonna be tough, and I think the yield issues are gonna be I I we'll we'll see. But I think we'll know a lot more in a year on on on Blackwell.
Like you, I'm not nearly brave enough to shorten video, but at the same I don't understand how being able to do matrix multiplication at scale is is emote. You know? I I I just don't you're you're hardware people, I'm not, so maybe I'm missing something. But it feels like all of this stuff comes down to who can multiply matrices the faster. Are NVIDIA really, like, so far ahead of everybody else?
You've got Sarah Brass and, and Grok have been doing incredible things recently. Apple's like, Apple Silicon could run matrix multiplications incredibly quickly. Like, where is where is NVIDIA's moat here other than CUDA being really difficult to to get away from?

I think that's it. I think you just did that. That's that's the remote as as perceived. So, either the Steves, do you have any, any one of your predictions?
Steve talk. Maybe on the, on the topic of the coup de mote, I think in 2025, AMD buys a software company. My head at the last acquisition they made for $5,000,000,000 which was a manufacturing and design company. But, I think they finally buy a software company to get developer access and someone actually working on the software layer, outside in.

And do you have any idea yeah.

Yeah. How I mean, what kind of scale I mean, this is like, Broadcom buying VMware scale or, you
I don't I don't know because I think they would probably go to try to buy, like, a modular AI or one one one of these companies that's got open source models that has a bunch of developer interest. And Interesting. And they're these companies have raised money at at preposterous, seemingly preposterous levels very quickly. So it could be at a big scale, but for a company not people are not as familiar with yet. Yeah.
Interesting. And maybe those companies aren't for sale. I mean, they they also might be in this I
think it might be. Philosophy.
I I

I mean, we know that the market's pretty flooded, and it feels like there's a lot of those companies are gonna be looking for what they're different it would not surprise me if a company is looking for the light boat.
Yeah. I think there's enough pressure. I think there's enough opportunity on for AMD and Oh,

Janet in the chat says that humane is for sale. That's a bold prediction. That is a great one. Yeah. Exactly. Calm AI. The, yeah, George Hotz, Geo Hotz. Yeah. But but but some someone like that, a small player who has got a reputation for software expertise.
And it would be north of $1,000,000,000. Yeah.

And north of a billion. Wow. Yeah. So the the the tiny yeah. That's a that's a that's a great prediction. Much better than Adam's prediction, which we really we we can all agree. It's just an absolutely terrible prediction. Okay. Klavnick, do you have a 1 year?
Yeah. I got I got all 3, but I like the 1 year, 3 year, and the 6 year format. So I think that this one perfectly embodies the, like, prediction is more about the present than it is about the future. And, this one maybe sounds simple, but it's a little spicier than that, which is congestion pricing in Manhattan will be an unambiguous success. That's my one.

Yeah. It it it feels that way based on the the weekend traffic. It definitely feels that way.
So the reason why, like, I think that this counts even though we've had 2 good days is that, like, both there's still some lawsuits from New Jersey, which didn't manage to stop it from happening, but my understanding they're still kind of in play. And secondly, Trump has said that he wants to make it illegal, and they've been talking about passing a law in congress that would make it illegal. And so I'm not even sure that, like, with these 2 days being pretty clearly accomplishing the goal that it will, like, survive, but that's kind of the, like, my my I think it will serve the legislation or it will survive as a thing. Like, it won't get a law made against it, and that sentiment will be more positive about it now than it or that in a year than it is currently right now.

It's great. Yeah. A good prediction. When I've got a, I'm just gonna come back to myself for one other one year prediction. And this is this is a prediction that could be wrong by tonight.
But, I think that Intel's CEO search is gonna be an absolute wall to wall, unmitigated disaster. I think you've got warring constituencies in terms of you've got employees, you've got shareholders, you've got the board, and you've got the future candidate themselves, all of whom have got slightly different agendas. And I think that this will or the next year is gonna be an absolute wreck in this regard with a bunch of missteps. I think that they will name at least one CEO who had not yet agreed to it. And then it has to be walked back because they were like, Intel got ahead of it and thought that they could announce it and rush it.
And then it's a it's a total black eye. And I think the end at the end of the year, Intel has their co CEOs in place. So I think Intel does not have a new CEO. I like
the prediction that they're gonna name someone who has not agreed to it.

Yes. They're gonna name
so that's the better prediction than the co CEOs remaining in place.

Yeah. They no. I think that we you know what I've been doing is I've actually been, because I I I've seen this kind of incompetence before in terms of John Fisher and the manager of the a's. So I just have to like, it does, like, what have the a's done with new stadium deals? And I can, I I can just, like, superimpose that on the CEO search?
Again, this could be wrong by tonight and maybe that you've got a lip bootan that that agrees to do it, but I think, the longer this thing is out there without a CEO, the more of a basket case it is, and then you're gonna have this problem. Did you read, in the garden of beasts, Mike, in particular, Adam, did you read the the, really interesting book. It's the or, it captures the, so in 1933, Roosevelt becomes president and they need an ambassador to Germany. And anybody who knew anything about Germany knew that this thing was on was was just heading at top speed into the wall of of a Nazi takeover of Germany. And no one want and so they had to find someone who would be flattered by being the ambassador to Germany.
They had to, like, go into, like, this kind of 4th tier of picks and the book is about his daughter then kind of falls in love with a Nazi and kind of her diary. The but I always thought it was kind of interesting where they had this problem of, like, no. No one wanted to be the ambassador to Germany in 1933 because anyone that you would want is smart enough to know this thing is a disaster. And I think it's I guess I the Intel CEO. Made the comparison between Intel and Germany in 1933.
I'm not sure. I'm, sorry, Intel. It's still hard at investors, though. I guess, you know, I, but, I I think that that this is gonna be a real problem from Ifrita is that the the person that you would want to run this is gonna be cagey enough to know that, like, no, this thing is an absolute wreck. And, and they end with the the the co CEO still in place. So that's my

But but still in place, not acquired, not sold off, but

sort of this is Not sold off. This is such a standoff between these constituencies because I actually think that the and I think that the board and shareholders in particular are are are not, that the board does not represent shareholders right now. And I think that is gonna be the real battle that happens over the next year. And obviously, that'll be a legal battle, and that's gonna be, it's gonna be gory. And I think that lots of people are just gonna like opt out and people are gonna be, because you need an activist shareholder and they're gonna be like, why?
Why would I be an activist shareholder in Intel when I can go do, there's like so many other ways to make money, you know? And this is, and I just think that it will, end up being this kind of the the the status quo will, this kind of the thing that everyone knows they don't want, which is these co CEOs, ends up being the least objectionable thing. And I don't think they're gonna wanna make it permanent, but I guess what they're gonna be in a year. Again, could be wrong by tonight, so who knows?

It's a a very exciting prediction. If they try
to Only only because I saw it in the

Brian, if they try to spin off the foundry business, does the prediction change?

I don't think that they I I think that they, I would I've got a 3 year prediction about that, but I, I don't

think That's fine.

Yeah, exactly. So we'll, we'll get to the 3 year. I've actually got a I've got a price on the for the foundry business. I've got a prediction about how much it's gonna sell for. So, Steve, sorry, you No.
I'd I'd seen someone mentioning Enron further back in the chat and that evoked another 1 year prediction, which is Enron in its current parody of a company will be a revenue generating company once again this year.

Alright. So you In the theme of
the onion, it'll be, you know, it'll generate revenue based on media content but, the the egg was brilliant today. You're you're long Enron. Long Enron. Alright.

Alright. On to, on to 3 years. Simon, do you what what are your, what are your 3 year predictions?
So I've got a self serving 3 year prediction. I think somebody is going to perform a piece of Pulitzer Prize winning, investigative journalism using AI and LLMs as part of the tooling that they used for that report. And I partly wanted to raise this spot partly because I'm but I my day job that I have assigned myself is building software to help journalists do this kind of work. But more importantly, I think it's illustrative of the larger concept that I think AI assistance in that kind of information work will almost be expected. Like, I think it it won't be surprising when you hear that somebody achieved a great piece of, like in this case, we it's sort of combining research with, with journalism and so forth.
Pieces of work done like that where an LLM was part of the mix feels like it's it's not even gonna be surprising anymore.

Simon, you know what it reminds me of is, was it in the seventies or the eighties where they had a proof of the 4 color theorem, a computer assisted proof of the 4 color theorem, which was very kinda groundbreaking at the time. And now, I mean, computing and math just became,
the same thing. Right?

Yeah. Became the same thing. Right. It just feels like that is a great prediction. And, that that feels very, very plausible, where you've got so this is just just to repeat it back to you. This is someone whose research has been made possible from that using an LOM, they were able either to do much more research or much deeper research, and they were able to discover something that they would not have discovered otherwise just by the
Yes. Yeah. Yeah. And most specifically, the angle here is, like, this is actually possible today. Like, if you think about what investigative journalism, any kind of deep research often involves going through tens of thousands of sources of information and trying to make sense of those.
And that's a lot of work. Right? That's a lot of of of trudging through documents. If you can find use an LM to review every page of 10,000 pages of police, abuse reports to pull out, like like, vital details. It no.
It doesn't give you the story, but it gives you the leads. Like, it gives you the leads to know, okay. Which of these 10,000 reports should I go and spend my boot investigating? The problem but the thing is you could do that today, but I feel like the knowledge of how to do that is still not at all distributed. Like, people get these things are very difficult to use.
People get very confused about what they're good at, what they're bad at. Like, will it just hallucinate details at me? All of that kind of thing. I think 3 years is long enough that we can learn to use these things and broadcast that knowledge out effectively to the point that the kinds of reporters who are doing, like in the investigative reporting, will be able to confidently use this stuff without any of that fear and doubt over how what is is it appropriate to use it in this way? So, yeah, this is my sort of optimistic version of we're actually going to know how to use these tools properly, and we're going to be able to use them to take on interesting interesting and and notable projects.

That is a great prediction. I love it. I love it. And you you and you've talked about this in the past in terms of, like, just the sheer amount of public records that are out there that are that you can't an individual just can't go through. There's just too much to be able to actually, and to to get this kind of assistance, to be able to to quickly take you to things that are to act as a stringer for you and find the leads and allow them you to do the traditional journalism. I yeah. I love it.
And on top of that, if you want to do that kind of thing, you need to be able to do data analysis. Today, you still kinda need most of a computer science degree to be a data analyst. That goes away. Like, LLMs are so good at helping build out like, they can write SQL queries for you that actually make sense. You know?
They can do all of that kind of stuff. So I think the level of technical ability of of non programmers goes up. And as a result, they can take on problems where normally you'd have had to tap a programmer on the shoulder and get them to come and collaborate with you.

Love it. Absolutely love it. Alright. Did you have another did that's that's it feels like a very utopian 3 year. Do I dare I ask if there's a dystopian 3 year?

Is that the
It's not only just

it's not so much dystopian, but
I think we I I I think we're gonna get privacy legislation with teeth in the next 3 years, not from the federal government because I don't expect that government to pass any laws at all. You know? But, but, like California states like California, things like that because the privacy side of the stuff gets so dark so quickly. The fact that we've now got universal facial recognition and all of this kind of stuff. And I feel like the legislation there needs to be on the way this stuff is used.
Right? You need to there are things like and in fact, the AI industry itself needs this because the greatest fear people have in working with these things right now is it's going to train a model on my data. And it doesn't matter what you put in your terms and conditions saying we will train a model on your data. Nobody believes them. The only thing I think that gets that I think that's why you need legislation even to say, we are following California bill x y zed, and as a result, we will not be training on your data.
At that point, maybe people start trusting it. And so that that if I was a in a position to do so, I'd be lobbying on behalf of the AI companies for stricter rules on how the privacy stuff works just to help win that trust back?

Yes. I mean, of course, what you're what you're advocating is the sensible thing, where someone realizes that, like, actually, this regulation, it is in my interest for this regulation to be done in in the right way and to get off the back foot and on the front foot and actually construct something that is reasonable that we can all adhere to. But, that that common sense feels like it's, it's fleeting. I mean, it's rare, I would say. The but that's a that's a I think that's a great prediction.
That's that's a and I think as and just you say that, like, when when people say I'm not training on your data, not only do does no one believe it, but I've got no way of really knowing if you've trained on my data or not. You know, maybe, you know, the New York Times can figure it out because they can prompt you to regurgitate the story, but it's very hard for me to prove that you've trained on my data.
But the Exactly.

Yeah. No. But the the challenge here, though, is that the tech companies themselves can't know if they're training on your data. Like, some data some some log shows up at Google and, like, one of however many people touched it. Google can't make a trustworthy claim even to itself that they didn't train a model on it.

That's right. Yeah. Right. So, Mike, do you have a do you have a 3 year?

Okay. My 3 year is, that the hottest, you know, VC startup, financing sector is manufacturing in 3 years. And here's the reason. So so you've built huge companies out of, like, kind of comparatively piddling industries like retail and advertising. These are a much smaller fraction of GDP than manufacturing.
Manufacturing is one of the few areas where we don't have some straddling tech colossus touching it yet. And also Yeah. True. If you think about where the AI stuff can really strut its stuff, you need a very large number of, like, good but not perfect verdicts and you need a process that can survive some fraction of them being wrong. So as everyone said, like, buying my vacation airline tickets is a bad example, but, like, doing sensing on quality control for some widget going off the end of the assembly line is a great example of that.
Right? Where, like, the increase in sensing and perception could really strut at stuff. And, you know, there's also, like, various, like, national security issues that might be involved, but I don't even think you need that. I think just making stuff as, like, a great area to apply AI and one of the few areas that software hasn't totally beaten to the ground yet is, why it's gonna come back.

Alright. Well, I you know, we've, we've always discovered that there are many more venture capitalists that think they're interested in hard tech than are actually interested in hard tech. You know, everyone actually wants to go whaling until they actually learn that it's a 3 year voyage to a far flung ocean, you're likely gonna sink, but, who knows? You know, I I I welcome our our new whalers. I think it'll be, it would be it definitely would be good for the industry, good for us all to have more people manufacturing.
Obviously, we're, very much believe in the physicality of what we're doing. So I like it.

Brian, let let me let me sharpen it a little bit then. For hard tech, I don't necessarily mean that they are building, like, science fiction objects that did not exist before. I mean that they are competing with, like, an overseas factory churning out chunks of steel on something.

Yeah. I think what we have learned is that, for even the things that you think are pretty basic, they're actually very, very sophisticated in terms of, the there's there's a lot that and the, there's a there's a lot of of art and craft that goes into a lot. And yeah. But I think it's interesting. I mean, I think it's the the

And, Mike, we we honor your inability to take yes for an answer on that. Like, definitely keep refining that
Yeah.

Yeah. Prediction. Exactly. No. It sounds outlandish.

Yeah. That's right. Adam, what are the what what's your what basically, mister

What what doom and gloom

What what what what awful thing is gonna come back now? What do I mean? What what terrible prediction do you have for us now?

Well, you were I mean, you were likening the intel to the Nazis. So, my my prediction, maybe a part

of the 1980 likening until the 1930 Germany, I would like the record to reflect. I mean, there's there there is a difference here. It's a

it's a complex melange of political facts. Yes.

Melange. Exactly. 1930, this is still Weimar, Germany in 1930. Okay. Like That's

the spirit. Yeah. I predict a chips crisis. So a confluence of things here, shortages maybe due to geopolitics, to tariffs, to natural disasters, perhaps, to certainly Intel and their their Weimar, leavings or whatever, their their inability to execute. But all of this culminates in, chips being incredibly scarce, failures of of batches, yield yield problems, maybe not necessarily due to to the fabs, but perhaps to the designs.
But all of this leading to a real shortage, even even more extreme to the point where only, you know, the the chosen few are able to get access to all of the chips that they're interested in, and this impacts consumers. It impacts all kinds of devices and certainly impacts kind of the kinds of servers and devices that that we're used to attaining.

And I like that you left it open to natural disasters. This could be a a major slip on the Chi Shang fault in Taiwan. Okay.

For for sure. Or or, you know, a missile lobbed over or or a shipment being destroyed, it'll yeah. I'm I'm not not pinning down the don't pin me down to the cause, but just that there is a chips crisis. I mean, could be over our own creations. Could be could be we jack up tariffs on all this stuff without realizing that, that we're shooting ourselves in the foot.

Without realizing that they're all made in Taiwan. So the this will be and this is a this is 3 years. So this is 3 years. How long does the crisis go on? What do we, the does there

I I We'll know we'll know when we see it. I mean, it's a crisis by its nature is not like a blip. Right? The the the fuel shortage of the seventies wasn't wasn't like a 1 week affair. I I mean, I don't think. So, you know, go well, you'll know it when you see it.

Like, look, pal. I'm not telling you how to get out of it. I'm telling you're going into it. That's that's that's my job. My job's done here. Your your job's pretty good.
Adam, if in your professional capacity as a CPA,
are you gonna be
worried about my oxide options if there's gonna be a storage?

No. No. No. No. We're we we've got strong relationships with AMD, decreasingly with Intel apparently. And, but but no, we're we're gonna be among the chosen few. Like, we we're we'll get the ships we need. Don't worry, Steve.

Oh, that's excellent. Yeah. That's a the the
Yeah. Yeah.

I think that's In fact, it's gonna help us. We're gonna have a lot of wind in our sails because we're gonna be one of the few places that people can get the the modern architectures.

It's very exciting. I look I look forward to this catastrophe, I guess. The it's to say, this this major slip in in Taiwan. Yeah. Alright. Well, every crisis is an opportunity. Klavnik, do you have a a 3 year?
Yeah. So I'm I'm turning this into a parlay with my 6 year from last year. So, my 3 year is some government contracts were gonna require a memory safety road map included in their procurement process.

Oh, interesting. Yeah.
So, like, the the government has currently suggested that by next year, like, software vendors should have 1. And so I think, like, the next step after that becomes the, like, you need to have it whenever the government is procuring software from something. And not necessarily all of it, but, like, a little bit. And so and that's because, like, my 6 year from last year was c plus plus is considered a legacy programming language. And so I think that that step is the thing that, like, really accelerates that, occurring.

And I gotta tell you, between the chip crisis and the requirement that everyone bidding on a federal contract has a memory safety story, oxides I am long oxide. Oxides looking really good in this scenario. No. That's that's great, though, Steve. I think that that's and feels very, very plausible. Steve, do you have a
Yeah. I mean, this is gonna be kind of a lurch from the current topic, but maybe it related to our upcoming travel this evening on Spirit Airlines. I, I think we are headed into an era of optimization, thriftiness. And, I think in 3 years, Fox Rent A Car is going to be bigger than Hertz.
Jay Jacobs (zero zero six:fifty six):

You know, I have always said that predictions tell us more about the present than they do about the future. And the present that we are in is that you and I are about to get on Spirit Airways to go to CES because there was no other way to get a flight down there. That's right. And so we are in the kind of stream we we are in in that that that brief period of time where we have purchased our ticket on Spirit, but have not yet traveled on Spirit. So as far as we're concerned, the future is all Spirit and Fox as far as the eye can see.
Well, I mean, I think
that the fact that people are not willing to spend 10 x the money for 2 x the benefit in AI right now is something on rental cars. Sorry. I I got pushed into the Fox corner.

You're you're a long Fox.

You're a long Fox. Alright.

I I'm
gonna be a household name.

I feel like this is maybe an intervention, but have you rented from Fox in the past? Because I have several times, and I feel like I'm still waiting in line.
Like, from Okay.

From 2,000 Steve, I would run through Fox. So we are actually if this is gonna turn, like, I will sit here and defend Fox. I've run
it through Fox, like, 40 times.

Okay.
Yeah. What do

they call what do they call their affinity program?

Foxers? Fox yeah. He he's in he's in Fox Club. He's he's he's in Fox Club Gold.
You hit the premier status, you're the Fox deal.

You're a silver fox.

Silver Fox is sad. That's it. This is you know, I I decide I was in the, the the Super 8 MVP club. I was a card carrying member of the Super 8 MVP club, and it was always I'm always a source of pride. So I like it. Yeah. Long Fox. That's, that that it's definitely good. So, in terms of of my own 3 years, the got a couple. One is that the the Cybertruck is no longer being manufactured in 3 years.
So I think that this thing is, is is, it's got, too much headwind, and I think will will no longer be manufactured. It's got the issues are too deep.

So, Brian, I I think it's a great prediction and, obviously, like, terrible tragedies with with the Cybertruck. And, I mean, I think that that also predicts some really entertaining falling out between Musk and the Trump administration. So I I love this prediction.
Oh, well,

so actually, no. To be clear, it's the Cybertruck is no longer being manufactured. I think it's gonna be a commercial flop. I don't think it's that that a regulatory body is necessarily gonna do anything, but it would not surprise me if a state regulatory body does something that causes a lot of I think it wouldn't it wouldn't surprise me if all of California tries to put some some regulation in place. The thing has never been crash tested.
I mean, I think the reality is the Cybertruck is operating already without any regulatory regime. So the idea of, like, total absence of regulatory regime is what it was already manufactured in. So that's not going to be a change. I don't think it's going to be insurable. I don't think the I think that you are, in the within 3 years, it will be, and again, there's a lot of decisions that they have made that are going to
be So So where's the cyber cab in that scenario?

The the I Yeah. I think that it I'll be, it'll be interesting. I think that there are a bunch of mistakes that that will not be repeated. So the cyber truck was, that that is my 3 year prediction. I I promised a an Intel Foundry Services prediction. So I think after in 3 years, after much tumult, IFS has been spun out of Intel. No commentary whether the co CEOs are still in charge or not. I can't see that far into the future. Crystal ball is murky on that one. But I think IFS will be spun out.
I think that ultimately, its future has to be separate. But it does not bear the Intel name. And it has been purchased for the purchase price of $1, by a a deep pocketed Maverick. And I would normally say that this would be a deal broker to the US government, but I'm really not sure, because I do think it's in the next 3 years, so I'm not sure what the dispositions on that is gonna be. But I think it's gonna be, you're gonna have someone who is perhaps has perhaps has domain expertise, perhaps doesn't.
This could be a maybe it's a is it a Bezos type or is it a Mark Cuban type or is it someone is it a is is it a, TJ Rogers type? It could be a it could be a lot of different kinds of of folks, but, that have basically taken it off of Intel's hands, and, changed the name, and whether that's success or not, I think is is very hard to predict. But so that is my that is my IFS prediction.

That's a great one.

And then my my final 3 year prediction and this is a perhaps I am predicting my heart on this one, Adam. I I do think and I saw someone in the chat saying that, you know, we're gonna see something, some new product that's that's totally revolutionary based on AI or LOMs that's but but where that's not the interface. It's not a chatbot. It's something else. I definitely agree with that.
And I think that we could see a couple different of these. So I'm gonna attack into my heart on this one, Adam. And I think that we the the state of podcast search right now is absolutely woeful. There are people predicting that are not me, that the the podcast has a new relevance, that with the the that the, you know, with the role that the Rogan podcast did or didn't play and the kind of the the crumbling of some traditional media that podcasts have a new relevance. I want to believe that, so I'm not sure if I do believe it or not, or I'm not sure if that's how valid the prediction is, but I definitely want to believe that.
And podcast search is absolutely positively atrocious, and I think LLMs could actually do something really interesting here, where there is no YouTube because it's it's RSS. There is no YouTube equivalent for podcasts. And things like, how do you use no podcast? What did you use Spotify or do you use

I use, Apple

Podcasts. Apple Podcasts. Yeah. Yeah.

In person. Yeah.

And it's not I mean, I It's not that bad. It sucks too about podcasts.
Like, the lowest hanging fruit podcast search is you subscribe to all of them. You run all of them through Whisper to get transcripts. You make the transcript searchable. Because you, at least, people have started building those things already. Like, it feels like it's just sat there waiting for someone to do it.

This is why I think Simon I think this is why someone will do it because I think that you and then but be able to do that much more broadly because I think that the the cost of a podcast is basically 0. And I am convinced that there's a lot of great stuff out there that I haven't found that I that I can't find because I'm sitting there on, like, listen notes or whatever, just being vectored to popular things. And it's like, I don't want popular things. I want interesting things. I want great conversations, and I think LLMs can find that

for me. Would you pay for this either with money or with listening to ads? Yeah. Okay.

I I I would, and I know and you're right to be, like, have a cocked eyebrow on that one. First of all, you're I the I I I like your prosecutorial tone here, Adam. I think that you're I be because you you you've got me I mean, that that that is, like, the the the the key question is, you know, would I pay for it? And I would pay for it. I think that I would pay I I would pay for it because I spend I think that, you know, part of the reason that podcasts are I think that they're relevant is because it just, you know, we've talked about here before, Adam, the ability to listen while you do something else, while you're walking the dog, while you're washing the dishes, while you're you're walking or what have you, while you're commuting perhaps.
And I think that that that is something that's a good fit for kinda where we are, and I I think people want that. I think I would pay for it if it's good. I mean, I'm not gonna it it needs to, like, deliver real value, but if it delivers value, I absolutely would pay for it.
Okay. Make sense? Another. I I'm gonna check-in the pricing observation. Again, Google Gemini 1.5 Flash 8b, these things all have the worst names.
I transcribe I used it just a straight up transcription date minute 8 minute long audio clip, and it cost naught point naught 8¢. So less than 10th percent less than the 10th percent of processed 8 minutes. Like, that was just a transcription, but I could absolutely ask questions about you know, tag give me the tags of the things they were talking about. The the the analyzing podcasts or audio is now so inexpensive.

That's yeah. And and so I think that that'll be and and, you know, Nick in the chat is saying that Apple Podcasts have got searchable transcripts. I'd be curious to check that out, Yeah. Because I I, you know, we I I you know, because there are these terms that are, like, pretty easy to search on and but I want to do more than just, like, searching on terms. I want, like, I I want someone to to, you know, Mike, this is what you called in your prediction last year about the, you know, the presidential daily brief.
And I want, like, the presidential daily brief of podcasts, and I want it to be, like, tied into other aspects of my life. You know, I want it like, this is the the kind of thing where you want something to be like, oh, you know, Adam, like, you were recommending the acquired episode on Intel from a couple years ago. Right? Like, I want someone who's gonna pull that content for me when I am it's like, oh, that's interesting. Other people who thought that that Intel was, like, 1933 Germany include, you know
Right. You you want to be able to say give me a give me a debate between credible professionals talking about subject x exploring these things.

And maybe not the
kind of thing that you can't do with full text search, but you can do with weird vibe based search.

That's right. And maybe not even you don't want the full episode or whatever, but you want something that leads you in, something that gives you the parts that you're interested in or whatever. And, obviously, you can look for more, but, but something that's helping to to curate that.

Yeah. So I mean, it's helping to curate that. Exactly. It's weird vibes based search, Simon. I love your I mean, you that that's exact I I I want to search on the vibes and not, and I I so I think that there's gonna be, think that that there's a gap there and some just for as you say, it's like, boy, that doesn't seem very hard, and I don't think it is very hard. That's one of those why I think something will fill it.
I'll I'll join you on that prediction. I'd be shocked if in 3 years' time we didn't have some form of

I'll put you on that.
Really well built. Like,

yeah. You and Dan, that's funny something you said because because a year ago, I predicted that within 3 years, we would be using LLMs for search and search before LLMs with you antiquated. And, man, I was, like, 2 blocks ahead of the band on that one, Adam. I feel like I feel like now you're like, that was a prediction. Like, wasn't that just a statement of fact?
It was

like, no. No. I was just like I mean, just barely not a statement of fact a year ago. But I did.
I've gotta put a call I've I've gotta put a shout out to Google's AI overviews for the most hilariously awful making shit up implementation I've ever seen. The other day, I was talking to somebody about the plan for Half Moon Bay to have a gondola from Half Moon Bay over Highway 92 to the Caltrain station, and they searched Google for Half Moon Bay gondola, and it told them in the AI view that it existed. And it does exist. It did summarize the story about the plan and turned that into, yes. Half Moon Bay has a gondola system running from Crystal Springs West.
Wow. That's Yeah. I do

not know how they screwed that up so badly. And, Simon, it is what you know, you call this the gullibility problem, which I think is a very apt description. And, you know, I saw this on the just this past weekend where, Google the the the AI assisted search, believes that adjunct professors in a college average $133,000 in salary in Ohio. And you had people who were just like, I'm actually, like, genuinely concerned that people think, like, when I was an adjunct in Ohio, I was living at the poverty line. Now, that is not and, you know, you kinda trace back how it got there, and it got there because of, you know, mistaken information that it then treated as authoritative.

So, a little closer to home, we searched up how to clean the grout in, the tile in our bathroom, and got a recommendation from the Google AI summary that turned out to cause massive damage and was a very expensive mess to clean up. So PSA for for natural stone folks, like, don't use anything acidic. Turns out.

Oh, I so I so you're saying that, like, the, home repairs, LLM surge plus I feel lucky could result in devastating consequences.

Turns out turns out you should, like, click through the link and, like, check the source and read the whole thing. Yeah.

That that's great. Well, the so it'll be and and it's something but I like your prediction that, like, you'd be surprised if this doesn't happen within within 3 years.
Honestly, it feels like it's all of the technology is aligned right now that you could build a really good version of this. And that means inevitably several people are gonna try.

So we'll we'll see which
one bubbles to the top.

And whoever succeeds, they'll say that they took an agent approach. They would
it was

it was those agents had allowed them to do it, at least in their pitch deck. Alright. Are we on the are we on the 6 years now? Are we are we at kinda 6 years? Are we ready for Simon, you ready to take us deep into the future here in your,
Yeah. Yeah. Go on then. I've got I've got a utopian one and a dystopian one here. So Utopian, I'm gonna go with the art is going to be amazing.
And this is based like generative art. I have not seen a single piece of generative art really that's been actually interesting or like like, so far, it's been mostly garbage. Right? But I feel like 6 years is long enough for the genuinely creative people to get over their sort of initial hesitation of using this thing, to poke at it, to improve to the point that you can actually guide them. You know, the problem with prompt driven art right now is that it's rolling the dice and lord only knows what you've got, what you'll get.
You don't get much control over it. And the example I want to use here is, the movie Everything Everywhere All at Once, which did not use AI stuff at all, but it did the VFX team on that were 5 people. So I believe some of them were just, like, following YouTube tutorials, like, incredibly talented 5 but they they pulled off a movie, which it won, like, most of the Oscars that year. You know? That that movie is so creative.
It was done on a shoestring budget. The the VFX were just 5 people. Imagine what a team like that could do with the versions of, like, movie and image generation tools that we'll have in in 6 years' time. I think we're going to see unbelievably wonderful, like, TV and moo movies made by much small teams, much lower budgets, incredible creativity, and that I'm really excited about.

And this is so getting out from the idea of, like, okay. This is just, like, regurgitating art that it's that it's trained on, and we're we're kind of, absconding with the with the copyrighted work of artists and actually begin to think of, like, this is actually a tool for artists. We're not actually misappropriating anyone's work, but allowing them to achieve their artistic vision with many fewer people.
I think you'll get yeah. You teams who have a very strong creative vision will have the tools that they can tools that will let them achieve that vision without spending much money, which matters a lot right now because the entire film industry appears to be still completely collapsing. You know? The Netflix destroyed their business model. They've not figured out the new thing.
Everyone in Hollywood is out of work. It's all diabolical at the moment. But maybe that like, the dotcom crash back in the 2000, led to a whole bunch of great companies that sort of raised out rose out of the ashes. I'd love to see that happening in the entertainment industry. I'd love to see a new wave of, like, incredibly high quality independent, like, film and cinema enabled by a new wave of tools.
And I think the tools we have today are not those tools at all, but I feel like 6 years is long enough for us to figure out the sort of the the the the tools that actually do let that happen.

Yeah. Interesting. And that's exciting. I love it. The and so we we will have art that we could never have before because it was Right. It was just too wouldn't too expensive to create.
And I'll do the prediction. The prediction is the film a film will win an Oscar in that year, and that film will have used generative AI tools as part of the production process. And it won't even be a big deal at all. It'll almost be expected. Like, nobody will be surprised that a tool a film where one of the tools that it used were were based on generative AI was was an Oscar winner.

I love it. In fact, that's so Utopian that that that now has me bracing for impact on a potential dystopian.
Okay. I'm I'm gonna go straight up, but Larry and Jihad. Right? This is some. So all of the the the dream of these big AI labs, the genuine dream really is AGI.
They all talk about it. They all seem to be true believers. I absolutely cannot under I cannot imagine a world in which basically all forms of, like, knowledge work and large amounts of manual work and stuff as well, are replaced by automations where the economy functions and and people are happy. That just doesn't I don't see the path to it. Like, Sam Altman talks about UBI. This country can't even do universal healthcare. Right? The the

Right. The idea of
pulling off UBI in the next 6 years is is a total joke. So if we assume that these people manage to build these artificial superintelligence that can do anything that a human worker could do. That seems horrific to me. That's and I think that, like, that's full blown but Larry and Jahad's, like, set all of the computers on fire and go back to to to to working without them.

So so what is the prediction, and I'm also trying to square this with the Oscar winner. Does the Oscar winner happen right before we set them on fire and go out without them? Or
These are parallel universes. Like, I

don't think anyone's taking nobody
is making amazing art when when nobody's nobody's got a job anymore. There was an amazing there was a a post on Blue Sky the other day where somebody said, what $1,000,000,000,000 program is program problem is AI trying to solve? It's wages. They're trying to use it to solve having to pay people wages. That's the dystopia for me.
I have no interest in the AI replacing people stuff at all. I'm all about the tools. I love the idea of giving like, the artist's example. Right? Giving people tools that let them take on more ambitious things and and do more stuff. The the AGI ASI thing feels like that's almost a dystopia without any further details. You know?

And so in this dystopia, in this parallel universe, so do you believe that that this that that we are able to attain the the vision that these these folks have in terms of AGI and ASI? Or
I mean, I I'm personally not not really. No. But you asked me to predict 6 years in advance and in this space Yeah. Predict the the way things are going right now. Who knows? Right? So there's my my thing is more that if we achieve AGI and ASI, I think it will go very poorly and everyone will be very you know, I think there will be massive disruptions. There will be civil unrest. I I think the world will look pretty pretty shoddy if we do manage to pull that off.

Interesting. I do think that the AGI that I mean, this is gonna be and even this year, there's gonna be a lot of talk about AGI because of this very strange contract term that OpenAI has with Microsoft that
I think they might get AGI there. I wouldn't rule against the management. What? It's a $100,000,000,000 in revenue, and then they say AGI. Right? That's

their Well, supposedly, yeah. That's what the information report is about. That that they they've got different definitions of AGI. And, apparently, one of them is, if we can generate a $100,000,000,000, we've achieved AGI. You're just like, so much what. It's like, what?
What is the funniest thing about AGI is open open AI structure as a nonprofit is that they've got a nonprofit board, and the board's only job is to spot when they've got to AGI and then click a button, which means everyone's investments now worthless.

But but also, like, a g AGI is a $100,000,000,000 worth of profit. It's like, you you are all like, this is like the capitalist rapture or whatever. Like, Jesus Christ. I I mean, I I I it's it's so and so I but I wonder sometime, do you I wonder if they're gonna try to make claims, especially this coming year of like, no. No. No. We've achieved AGI. No. We've already achieved AGI. It's already it's actually, you know what? GPT 3.5 actually is AGI. That is so sorry, Microsoft. Me.
But yeah. No. My my my my my dystopian prediction is the version of AGI which just replace means everyone's out of a job. Like, that that sucks. You know? So so, yeah, that's that's my that's my dystopian version.

Yeah. That is dystopian. Well, I the the I think my yeah. My I would take the other side of of of the likelihood of that, but that is definitely dystopian.

Brad, I do like your suggestion that they just declare victory on on GPT 35 or something because there are these moments in chat where I'm I'm sure everyone feels themselves like they're just kind of fancy autocomplete. Right? Like, the people have predicted the thing you're about to say. So maybe they just decide that actually general intelligence is mostly just autocomplete anyway. So mission Yeah.

Yeah. The the you Adam, I I Rick shared I love this where they try to rule monger by being like, hey. Have you looked around you? Like, people are pretty dumb, actually. And the I mean, you you're kind of a knucklehead. Like, you forget stuff all the time and, like, you you got a lot of stuff wrong and, like, yeah, we don't call it hallucinations. We just call it, you know, you forgetful or whatever. So, yeah. We've we've we've achieved that. I mean, is that intelligence?
Yeah. We definitely have achieved that. Like, that's a that's a mission accomplished. And now, by the way, Microsoft, per our agreement, we, OpenAI, are you are not entitled to any of our our our breakthroughs. Actually, I don't the you know, we did not no one had a 1 or 3 year related OpenAI prediction. I'm not sure if there is an OpenAI but OpenAI is argue Yeah. Go for it, Tommy.
I think in 3 years' time, I think they are greatly diminished as a Yeah. Influential player in the space. You know? I I I don't think like, the it's already happening now, to be honest. Like, 6 months ago, they were still in the lead.
Today, they're in the top sort of 4 companies, but they don't have that same they've kind of pulled ahead again with the o three stuff, but, yeah, I don't I don't see them holding on to their position as the the leading the leading, entity in in the whole of this space now. I

don't either. And, especially, I think if they end up because I think it's also conceivable that they end up at a time when pretraining is becoming is hitting real scaling limits that they just do that they continue to double and triple and quadruple down and end up with just a because they they are operating at a massive, massive, massive loss right now. And I kinda think that if they it'd be kind of interesting if if, you know, I wonder if if, OpenAI will start to tell you, like, hey, by the way, yeah, I know you paid us $20 a month. By the way, your compute cost us $85 last month or it it cost, I mean, it'd be kind of interesting if they begin to tell you, because I I feel
that if Sam Altman So Sam Altman said on the record the other day that they're losing money on the $200 a month, plans they've got for o one pro.

And Easy.
I I don't know if I believe him or not, but that's what he said. You know?

You know, is that because okay. So I honest question. Is that because that, like, that $200 a month, is that and that's the o one pro or the, Simon, the the the thing that
It gives you unlimited o one, I think oh, well, mostly unlimited o one. It gives you access to o one pro. It gives you Soarer as well. And it I think the indication he was giving was that the people who are paying for it are using it so heavily that they're they're blowing through the the amount of money. This is

what I was asked. It's like MoviePass for compute. We're we're, like, the only people that actually MoviePass being this this

I'm glad that you know that you need to explain that. I'm glad that you know that MoviePass is not like the the Harvard Business case study that everybody knows.

But but but but I'm and I'm glad that you agree that it should be. I like your your implicit judgment of others that it that it needs explanation. But MoviePass was this idea that sounds great that, like, oh, no. Like, we'll charge you $30 a month. You can go see as many movies as you want.
But as it turns out, like, the people that are most interested in that, like, wanna go see a movie every night at a movie theater, and they were they were literally losing money on every transaction. It was just like no way to Why did they just invented it?
They just gave their members a credit card to go to the cinema with. That was it. They made up the volume ticket for that.

Yeah. And so you you

There was a time when, like, cosmo.com was the canonical example of a To call. That burned money, and you're lost on every single transaction. And at some point, it was replaced by by MoviePass. Maybe the $200 OpenAI product is now gonna, like, push MoviePass into the the dustbin of history and take its rightful place as, like, the product that loses money on every transaction.

I I I can't believe that Adam has to, like, blow the whistle on MoviePass, but you're able to walk right past cosmo.com, and Adam's got no problem with it. Cosmo.com, like, a the I mean, but this is a famously a dot com this was a a real artifact of the the dotcom bubble. And, Mike, as I recall, it was, like, when people were having a Snickers bar delivered, and little did we know we would be it was our teenagers would be DoorDashing a Snickers bar some 20 years later, but, you know Yeah. It

would you could you could door you could basically DoorDash a Snickers bar for zero delivery cost.

That's right.
If only they were advanced enough to describe the, the taxi for your your Snickers bar, then it would have been more successful and

turned into DoorDash for real. That's right. It was just ahead of its time as it turns out.
I'll I'll say one more thing about OpenAI. It's, they just they've lost so much talent. They keep on losing top researchers because if you're a top researcher at AI, a VC will give you a $100,000,000 for your own thing. And and they don't see they seem to have a retention problem. You know?
They've lost a lot of the I mean, my favorite fact about, Anthropic, the company, they were formed by OpenAI Splinter Group who split off, it turns out, because they tried to get Sam Altman fired a year before that other incident where everyone tried to get Sam Altman fired, And and that failed, and so they left and started Anthropic. Like, that that seems to be a running pattern for that company now.

Alright, Simon. I'm gonna put a parlay on your 3 year prediction. I think someone wins the Pulitzer for using an LLM to tell the true story of what happened at OpenAI and that the the the the the the boardroom fight. I mean, there's clearly a story that has not been told there. There's clearly rampant mismanagement that that boardroom fight, I feel like we got we kinda got the the the the surface of that.
There's a lot going on underneath clearly, and I look forward to the Pulitzer prize winning journalist who's able to use NOLA to tell the whole story.
Nice.

Mike, do you have a do you have do you have a 6 year?

Alright. My 6 year, which I think is optimistic, a lot like Simon's, is, the first gene therapy that uses a DNA sequence suggested by an LOM is actually deployed at least in a research hospital, maybe not wide.

That is yeah.

Like like a a CGTA sequence that came from the model goes into a human body.

In Mike, how well informed is that prediction?

You know, I would say that my rough reading of the models that have been designed for genetic sequence prediction is that, like, they're able to achieve kind of remarkable things. I I I I'm in particular thinking of this EVO model that was released kinda early in 24. I don't know if Simon or others are are familiar with this thing. To me, they they do this experiment in that model, which is really jaw dropping. Okay.
So so the core the core technical idea here is that the model architecture is a little bit different because when you're predicting genetic sequences, the alphabet is small, but the sequences are much long longer than in human or than in natural language. Right? So the the model architecture is a little bit different. But the experiment that they performed that was really stunning to me was the following. So imagine you have, like, a a genetic sequence, and this was just in, like, single celled organisms.
Alright? These are not they're not doing this on, like, mammals or anything. Imagine you have a genetic sequence, and you intentionally mutate it. So you've got a bunch of different versions of that sequence. And then you try to evaluate its fitness in 2 different ways.
One is that you try to grow it in the lab and see how much it grows. The other is that you look at the probability of that sequence as evaluated by one of these trained models. Okay? And now let's imagine you take all of those sequences and you sort them according to those two scores. Like, you sort them according to the observed fitness in the lab, like, when you try to grow it in a petri dish, and you also sort it according to, like, the, you know, the inverse of the probability, meaning, like, high probable strings go on the top, low probability strings go on the bottom.
And what's stunning is that those two sort orders are remarkably highly correlated. So, like, the ability to just stare at a genetic sequence and actually say something with maybe some predictive accuracy about its real world fitness, to me, is just absolutely stunning.

Amazing. Amazing. And that would be I mean, it's part of the reason I asked because, I mean, I I obviously want this to be I very much want this prediction to be true. And I think that this is now you I mean, just like like Simon's prediction about revolutionizing art, I mean, I feel there is just it and to our, how life works episode, with Greg Costa, Adam, earlier in the year and, and, you know, there is so much that we still don't understand. And, boy, the ability to allow the computer's ability to sift through data or generate data, test data, allowing that to allow for new gene sequences, which are gene therapies, Mike, would be amazing.
So I love it. That's great. Is that your I and I and I, you know, for whatever reason, I I I feel like there's not a dystopian one on the other side of this one, but maybe there is.

Let's see. Could I come up with 1?

I I That's right.

I mean, it feels like an easy parlay from where you got there. Is it I don't know if the following is is optimistic or pessimistic. The PlayStation 6 is the last PlayStation. There's never a
PlayStation 7.
Oh. I like this one.

I love that we're going kinda like wall to wall on, like, you know, this this revolutionary gene therapy, you know, saves the lives of 1,000,000 and just and then also, by the way, some more PlayStation 6. Oh, my god.
Do you think the PlayStation 5 will have a double digit number of games by the time they come out with the PlayStation 6? That's the Oh,

that's that's spicy. Yeah. They'll probably get to they'll probably get to 2 digits. Yes.

And, Adam, I think you you're, do you have a 6 year?

I do. Yes. My 6 year is that and this is from the deep ignorance I hold that, AI will mostly not be done on GPUs. You know, Simon was talking earlier about, NVIDIA not having a hammer lock on matrix multiplication, but, we'll have more specific hardware tailored potentially even tailored for models. It becomes much more economical, and there are many more players. And in particular, we we mentioned CUDA earlier. Like, it's not driven by CUDA or Rock them or or some of these existing platforms.

So we've got something that is completely new that is and maybe it's some of these new I mean, we got there are bunch of folks that bunch of companies that are looking at new silicon or new abstractions, but what what one of these gets traction in the next 6 years?

Yeah. I mean, I I would maybe I I should stop while I'm ahead, but I think even multiple of them do. That it's that it is not Interesting. A single company having a good insight, but rather, you know, many folks, maybe even incumbent players, maybe even existing GPU manufacturers, but building things that really don't look like GPUs, that increasingly don't look like GPUs, and most of that, both training and inference happens outside of the domain of GPUs.

So something positive came from the great chips crisis of 2027, which is actually a relief.

Yeah. And most of well, and also, Intel spinning out the foundry and this this rogue entrepreneur, you know, buying it for a dollar, or or perhaps the US government, you know, taking taking ownership of it. Yes. But all of those things have resulted in this, diversity of of silicon.

I like it. Cloud Next, do you have a do you have a 6 year?
Yeah. So my 6 year is, so, basically, AI is not going to be the hot thing. And what I mean by this is the same way we started this episode in many ways talking about how Web 3 was, like, the thing everybody talked about the whole time. And so we had to, like, you know, can it. It's, like, pretty clear that I think Web 3, like, gave way to, like, AI now being the cool technology du jour that a ton of money gets thrown into.
And so I'm not I'm not saying AI won't exist or won't be useful or whatever, but the cycle will have finally happened where some other thing becomes the thing that you just get a blank check for having a vague association of an idea of what a company might do in that space.
And I

I would like to say that VR is very upset that it doesn't even merit a hype bubble. It's like, yo, I was a I was a hype bubble. Better read the the the Facebook renamed themselves for me. It's like, no. Sorry. VR, you don't even merit Steve's shortlist straight from Web 3 to AI. Okay. So we were going
the comments, like, what's the other thing? And I I'm explicitly not. I have no idea. I'm not a fashion predict predictor. I have no idea what will be the next thing. Just that something will be.

That, we will. I, you know, I I feel we also had a 3 year prediction in, 2022 that that we will have moved on to a there'll be a new hype boom, and maybe it was for a 6 year. And I I when I was listening to the time anyway, I'm like, oh my god. We we, we didn't realize that it was gonna be AI that was gonna be that that that next boom. All right. Steve, do you have a do do you have a 6 year?
I did. Wasn't very bold, though. It got taken in the 3 years because it was Intel out of the foundry business. And

I this could no. I honor your foundry business.
I I
add them out of the foundry business
in the

In 6 years. So you
think 6 years. So you

think this could take a while?
And then small enough to be acquired in that same 6 year period.

Oh, okay. So I've got a couple questions for you. So the because one thing I was thinking about in terms of an Intel that's split up, who is left with the Intel name? Does anyone want the Intel name, or has the brand been so tarnished at this point that they all give themselves, chat DPT suggested names to avoid calling themselves Intel?
Yeah. That's a good question.

I think AMD buys it and puts it in their down market brand.
No.
I'll take Oracle over AMD.

Oracle buying the design side now. So not the foundry side. That's right. Oracle buying the design side. Over AMD. So, what do you think happens to Habana? Because I actually did wonder, you know, we had talked about this a couple episodes ago, whether I wondered if if Meta or or Microsoft or someone else would actually try to buy Habana. I actually think that, like
I I don't think so.

I don't think so either. I think that you kinda, like, go deep into it and you're like, I think I'd rather actually buy I'd
rather put the money into GPUs. Yeah.

Yeah. But I know it's I think it's like, I I think there's gonna be like this kind of process of like, God, if the number of GPUs we're talking about, we could just buy Habana. And then someone will do they'll decide to be like, actually, go back. Go go buy the GPUs, actually. The GPUs don't have a culture problem, actually.
Yeah. I I don't think anyone anyone buys that. Just staying on brand and transportation, if I had to come up with another a different 6 year, and this is colored by a bunch of conversations over the holidays with a bunch of extended family members that live in different cities that have traveled via Waymo.

I was gonna ask, does does does Fox do a self driving taxi? Well, that's
a that's a 12 year. The, I think Waymo will be a more common means of transportation than Uber and Lyft

in 6 years. That feels like that might be a 3 year or even a 1 year. I I agree. Yeah. Yeah. Yeah.
Yeah. I think it's a Yeah. Yeah. Yeah. For sure.

I well, and I gotta say that, like, I
I've never traveled in 1, but I mean, hearing the descriptions of folks that have now you have to understand the pricing is extremely subsidized right now.

But it is. But I also think that Waymo has really and I really tried to encourage those folks to talk more publicly about some of the engineering discipline they've had because they've done a lot of things the right way in contrast to a bunch of these other folks that have kinda come in and burned out on self driving taxis. Like, wait. What like, there's real, real engineering there. I'm
I'm gonna have to rave about Waymo for a moment because, it is if you're in San Francisco, it is the best tourist attraction in the city is is an $11 Waymo ride. It is just it's it's ultimate living in the future. We had, my my wife's parents were visiting, and we did the thing where you book a Waymo and don't tell them that it's going to be a Waymo. And so you just go, oh, here's our car to take us to to to to lunch. And the self driving

Yeah. And some what I've heard from folks who've done that is, like, everyone that's in there is, like, this is obviously the future. It just feels like
The Waymo moment is you sit in a Waymo, and for the first two minutes, you're terrified and you're hyper vigilant looking at everything. And after about 5 minutes, you've forgotten. You're just relaxed and and and enjoying the fact that it's not swearing at people and swerving across lanes and driving incredibly slowly and incredibly safely. Yeah. No. I'm I'm really impressed by them.

When I gotta tell you again, I got I got the privilege of of watching a a a presentation from one of their engineering leaders on the kind of their approach to things. And sometimes, you know, you kinda look behind the curtain It's like, oh my god. It's all being delivered out of someone's home directory. But in this case, it was really, really impressive about what they've done, and I think that they've really taken a kinda a very deliberate approach that deliberately so I, you know, I absolutely agree with you, Steve. So you think yeah.
I think, and so, Ian, that was your, the, Ian, you said you that was your 6 year prediction. What did now we've got you on stage, what are your sum what you let's do your 1 and 3 in addition to to any 6 year prediction that Steve didn't hoover up.
Yeah. My 6 year was, like, slightly less, optimistic than Steve because I said, Waymo overtakes Uber in rider miles per day, so I didn't lump lift in to kinda, like, hedge my bets a little bit. My 1 year prediction, I had 2. 1 was open a OpenAI pricing or usage limit changes to prevent losing money on power users of their current flat monthly pricing schemes, which I think has already been discussed. The other I had was a ban on new sales of TP Link routers in the USA, 1 year.

Okay. So, what's take those one at a time. So on the OpenAI so is is that a 1 year or a 3 year on the OpenAI production?
That's a 1 year. I think that they Okay. They're they're kinda still experimenting with pricing, and it's very clear that they set the pricing based on Sam open trying 2 different price points and being like, yep. That'll do. And they haven't they haven't really run the numbers or seen how the users actually you utilize the product.
And I think that they may keep this current pricing schemes, but just put, a kind of usage cap, at which point you have to start paying for additional credits, sort of like, I don't know, how audiobooks work on Spotify where you can run out of minutes within a month and have to buy more. I think the same Yeah. Will happen for for like GPT pro, where power users are currently, spending more compute than they're bringing in in revenue, so it's it doesn't make financial sense for them to continue to set money on fire at that kind of scale.

Someone in the chat asking, you should ask chat gpt how much it should cost, which I just love the idea of, like, asking, like, then the different, like, 03 and o one. They're the o one pro and have that thing, like, really grind on it generating all, you know, thousands of hidden intermediate tokens to ask how much it should cost, see if it's see if it thinks it should cost itself less or more. If it actually that q query costs more, does it feel it should cost less or more? The okay. So that is your one I I agree with that.
I I think they're gonna have to do something in that regard. And I'll be very interested to see kinda I'm curious about what my usage because I I think I'm not a power user, so I I would be curious about where my own usage kind of ends out there. And then, this would then what is the TP Link prediction, Ian?
Yeah. This is a a a ban on new sales of TP Link router hardware within the USA. Interesting.

Okay.
This is this is So TP Link

getting, like, the the the Huawei treatment, in other words?
Correct. Yeah. There there has been some pretty recent news stories about this, and I feel like the incoming administration, kind of stance on Chinese companies is going to be potentially even more restrictive than the the outgoing administration. So I feel like the stage is set for this to happen, and that plus the kinda, network level intrusion into some, some of the large scale telecommunication companies has kind of heightened fears of large scale, intrusion into the network stack within homes. So I feel like there's
That's right.
A few, like, things that are kind of pointing in that direction.

Yeah. So the and you feel that that that's within a year. And then what, what was your, do you have any other 6 years other than the Waymo and and, so you got Waymo exceeding Uber rides. Is that right?
Yeah. In rider miles per day, which means that it may be that they're not in all the cities that Uber is, but they drastically outcompete, Uber in the cities that they are present in. So this means that they don't necessarily have to get they they need a large scale deployment, obviously, but, I think that they will massively outcompete Uber in any market that they're in, because the product is superior.

Yeah. Interesting. Simon, before I give my own 6 years, I got I got a question for you because we did, in 2022, we had Steven O'Grady on, and he had some pretty, he had some, like, pretty dark open source predictions for for 6 years. And then I think, he's probably on track to not be totally wrong about it anyway. I mean, I don't think open source is like I think, we've kind of tracked we have tracked negatively on open source for sure, as we've seen more and more relicensing and so on.
What is your view on kinda where open weights are tracking? Because it feels like that's just been positive in the last year. We've got more and more I mean, I think Llama 3 is been extraordinary. We've got a bunch of these things that are open weights. What's your view on what the trajectory is there for 6 years for open models?
That's a really interesting question. I mean, the big problem here is that the the the what what is the financial incentive to release an open model? You know? At the moment, it's all about effectively like, you can use it to establish yourself as a a a force within the AI industry, and that's worth blowing some money on. But at what point do at what point do people want to get a return on their their 1,000,000 of dollars of training costs they're using to release these models?
Yeah. I don't know. I mean, it has been some of the models are actually, like, real open source license now. There are, I think, the Microsoft PHY models are MIT licensed. At least some of the QUEN models from China under Apache 2 license.
So we've actually got real open source licenses being used at least for the weights. There were also the other really interesting thing is the underlying training data. Like, the criticism of these AM mod models has always been happen to even pull itself open source if you can't get it the source code, which is the training data. And because the source code is all like, it's all ripped off, you can't slap an Apache license on that. That just doesn't work.
There is at least one significant model now where the training data is at least open as in you can download a copy of the training data. It includes stuff from the common crawl, so it's includes a bunch of copyrighted websites they've scraped. But, but that has but there is at least one model now that that has completely transparent licensing transparent transparency on the training data itself, which is a it's good. You know? One of the other things that I've been tracking is I love this idea of a vegan model.
Right? An an, an LLM Yeah. Right. Which really was trained entirely on openly licensed, material such that all of the holdouts on ethical grounds over the training, which is a position I fully respect. You know, if you're going to look at these things and say, I'm not using them.
I don't agree with the ethics of how they were trained, that's a perfectly rational decision for you to make. I I want those people to be able to use this technology. So, actually, one of my potential, guesses for the next year was I think we will get to see a vegan model release. Somebody will put out an openly licensed model that was trained entirely on licensed or, like, public domain work. I think when that happens, it will be a complete flop.
I think what will happen is it won't be as good as the as the it'll be notably not as useful. But more importantly, I think a lot of the holdouts will reject it because Yeah. We've already seen these people saying, no. It's got GPL code, and the GPL says that you have to attribute the the you know, there's attribution requirements not being met, which is entirely true. That is a, again, a rational position to take.
But I think that it's it's it's it's both true, and it's it it makes sense to me, but it's also a case of moving the goalpost. So I think what would happen with a a vegan model is the people who it was aimed at will find reasons not to use it. And I'm not gonna say those are bad reasons, but I think that will happen. In the meantime, it's just not gonna be very good because it won't know anything about modern pop modern culture or anything where it would have had to ripped off a newspaper article to learn about something that happened.

Look. We all know folks who are vegans who also eat bacon. It's like, what is okay. What? Oh, and okay. You're a vegan unless it's really delicious, I guess. Like, okay. I mean,
I I guess

I love the LLM that's all Steamboat Willie references and public domain songs and stuff.

Well, this is our it and, you know, saying Tog Sarnet and, you know, the I this is our kind of, like, the Abraham Simpson, like, kind of the the the mister Smithers isms. The I I I definitely love the idea. The old timey model, that is all all public domain work. And it may also be inter I mean, maybe those will get better and better as more and more stuff enters the public domain because we are on the cusp of a lot of stuff now entering the public domain, as we are what at 1929, I think?
Or are

we even and so we've obviously got, like, and, hey, you know, 1930 Germany wasn't just it's only a couple of years away. Yeah. You'll be entering the public domain. Alright. So the, in terms of my own, my own six year predictions, so I I, I'm really glad, again, we've recorded these, Adam, because I I had a prediction that I was really, like, I felt was a really great prediction.
Whereas I basically made the same prediction last year. So I'm gonna restate this prediction. I'm gonna I'm gonna tweak it just a tad. I I think that, I because I I I've been wondering about, you know, where are the I think, LOMs are going to completely revolutionize some domains, and I've been trying to think about, like, some of the the the and I and, like, certainly, software engineering has been is being revolutionized, has been revolutionized. I think that a another one, and Simon, agreeing with you and with Mike about letting people do more, I I've I've always believed that, like, that's the real revolution here is not actually having people putting people out of work.
It's about allowing people to do more of their job that they couldn't do previously. And I watch my own kids with respect to LLMs. And, you know, right now, at least at the at, like, you know, I've got a kid in I've got a kid in college and in high school and in middle school, and at the high school and the middle school, you know, their, their AI policy is basically like abstinence. You basically can't use it at all. And I think that that's nonsensical, and they the kids think it's nonsensical.
And whenever they are kind of doing intellectual endeavor outside of school, they are using, they're they're using LLMs in a great way. It's like, you know, we are using it to, you know, learn more about a sports figure or learn more about doing the things
that kids do. Right? Troll next door.

Troll next door. Exactly. And I I continue to believe so my prediction last year was a 6 year prediction that k eight education was gonna be revolutionized. I actually think it is it is, 9 through 12 education that's gonna be more revolutionized by LLMs. And I think when we begin to tack into this and we stop viewing it as just cheating, and, you know, how can we do?
I think there's gonna be a lot more in class assessment, which I think is gonna be a good thing. But I think, you know, it's like, you know, you remember Quizlet from back in the day, and like Chat gbt has absolutely replaced Quizlet. My my my my senior in high school needs to study for an exam. He sits down with Chat GPT and has Chat GPT help him study. Mhmm.
And he goes and sets the exam. He's not he's not, you know, he's not he's using it to actually, like, you know, god forbid, learn. And I and I think that there is I think we can do a lot more, I think, especially in secondary education. So That's
a great prediction. I'm very sold on that with one one sort of edge case, and that's the thing about writing. Like, this is the the most tedious part of of of of learning is learning to write essays. Mhmm. That's the thing that people cheat on, and that's the thing where I don't see how you learn those writing skills without the miserable slog, without the tedium.
And so that's the one part of education I'm most nervous about is is how do people learn the tedious slog of writing when they've got this this tempting devil on their shoulder that will just write it for them?

Well, so and here's what so here's what I think. I think that, 1, Chat gpt is a great editor, and maybe it's a little too great because Chat GPT tends to praise my work to me when I because my wife had decided that she's like no longer interested in reading drafts of my blog entries, which is like understandably. I think this has been a little arcane. So So I just have, like, you know what? I actually I'll just have Chat GPT read it.
And, it's interesting. Chat GPT, again, I'm probably a sucker for it's like, this is a very interesting blog entry. I think you are writing on a very important topic. So I'm like, you know, I'm glad someone around here gets the importance of what I'm doing. But it it gives me good feedback and it asks, like, do you want me to give you, like, deeper feedback?
What kind of feedback do you want? And I'm able to guide it to give me and so I actually it does what my mother used to do with my papers when I was in high school. I think that's really valuable. I think you gotta get out from like, you're gonna it's gonna write it for you. I would personally, if I were in high school, I would have, I think an interesting experiment to do would be like, no.
I what you're gonna I want you to write on this topic. I want you to write a great essay on it. If you use chat gpt, use chat gpt like, do whatever you need to. If you just have chat gpt spit out an answer, it's gonna be like copying the Wikipedia article. It's probably not gonna be, you know, and actually ask people to do more with their writing, and then I would have them read it aloud.
And, because I think that you I mean, it's really interesting to have people read their own work aloud. If you suspect a kid, by the way, has used chat gpt to write something, have them read it aloud, and it will become very obvious, whether it's their own work or not. Yeah. That's

great. My my, son's high school English teacher, his senior year, last year, had them do all their writing, pen and pens pen and paper in class. So she was, like, not not an issue for us.

That's what we're they're doing in the high school as well. And I think that's good too. I mean, that's like the I think that that that that's but I also feel that you're also missing a really important part of writing is revising and
Absolutely.

And you can't and that

teacher that teacher also was the teacher who didn't hand back assignments for weeks weeks weeks, so the kids weren't getting feedback. So you're right that, like, chat GPT is a way to get that feedback instantaneously, where otherwise, it's, you know, you may never be able to improve because you're not getting that feedback.

Well, I think it'd be interesting to have a class with, like, a a LLM maximalist high school English class where it's like, hey, class, You're gonna use LLMs to write, and I'm gonna use LLMs to grade, by the way. And we're all that's not gonna be an excuse for us not using our brains. We're gonna really use these things as tools.
Yeah. I will say one thing about LLMs for feedback, They can't do spell checking. I only recent noticed this recently. Claude, amazing model, it can't spot spelling mistakes. If I ask it for spell checking, it hallucinates words that I didn't misspell, and it misses words that I did.
And it it's it's because of the tokenization, presumably. But that was a bit of a surprise. It's like it's a language model. You would have thought that spelling spell checking would work. Anything they output is spelled correctly, but they actually have difficulty spelling spelling states. Chip, it was interesting.

You know, that's that is really interesting. Simon, and that didn't even occur to me. I tend to do, like a a heavy review of my own before I give it to GPT. But then I would notice that like, that's kinda strange that it didn't notice this kind of grievous error. I I gotta say, actually, I I mean, I don't wanna speak about him in the 3rd person because he's in the he's in the room, but man, Steve Tuck does a very close read on things.
A very, like you were able to channel, I think, your own mother when you do a read on things where I I I I'd handed you things that I have, like, reviewed a lot on my own, and you find things that I that that I and the many other people have missed. Couldn't you tell
me an early age.

Yeah. It did. Exactly. But it's that's really interesting, Simon, that it that it can't capture, because I have found that, like, it doesn't necessarily find errors. The thing that it finds are kind of like structural. Like, the things it'll say is, like, I think you need a transition sentence here. And it will be, like, I actually you know, I have been thinking to myself I need a transition sentence here. And then it will make a suggestion that is terrible that I discard, that I, of course.
I yeah. I like drafting it for I I ask it to look for logical inconsistencies or, you know, points that I made and didn't go back to, and and that is great for. But it's another one of those things where it's all about the prompting. You have to it's quite difficult to come up with a really good prompt for the proofreading that it does. I'd love to see more people share their proofreading prompts that work.

Yes. Absolutely. And I the only thing I have done is I I mean, the notebook l m podcast manufacturing, I think, is so mesmerizingly good. I know
that thing. Yeah.

I mean, I know it's using tricks on me of, like, you know, it's the umms, the ahs, and the laughing at their own jokes, but, man, I just fall right for it. I just think it's just that is insanely good. And I think it's kinda easy proofreading.
Yeah. I dump my blog entries into that, and I'm like, I do a podcast about this. And then you can tell which bits of the message came through, and that's kind of interesting. The other thing that's fun about that is you give it custom instructions. So I I say things like, your banana slugs, read this essay, and they discuss it from the perspective of banana slugs and how it will affect your society, and they just go all in. And it is priceless funny.

That is amazing. Meanwhile, someone someone at notebook.com is like, I told you we cannot have all you can eat compute. We've gotta start charging for this thing. Like, the guy spent the the $1200 in compute having the banana slugs offer their perspective on the pelicans. But that is that is great, Simon.
I will say that I also got one other 6 year prediction. I think that post secondary degrees in computer science and and related disciplines, information science and so on, go into absolute free fall. And in 6 years, they are below. So I don't know if folks are unaware, but the degrees in computer science have skyrocketed in the last, even in the last 7, 8 years. And we're talking like factors of 3 higher.
And I mean, Adam, you've been on the kind of the pointy end of this with a kid who's interested in computer science and having everything be oversubscribed everywhere. Yeah. And, I think we I think there are gonna be a lot I think a whole bunch of factors are gonna come together. And I think CS degrees are gonna be way off the mark. I think that it's there's been there've been some folks, I mean, Adam, not your son, but there have been plenty of people who no, because there have been plenty of people because who've who've done computer science because mom and dad have told me this is what I needed to go do to get work.
Like, this is not something that's in my heart. And I've always felt that that's kind of cruel to the folks for whom it is in their heart, they're at a disadvantage. And, and I think that that is So I think there will be some good things that come out of it, because it's not going to be a lock on post undergraduate education or employment. And I just, I think that it's going to fall, I mean, I think in 6 years, it's going to be below 70,000 a year. That only puts it actually back to 2015 levels.
I think it could actually fall a lot further than that. And if you the reason that's a 6 year prediction is because there's a 4 year lag on, and I think people are gonna have are gonna realize that, like, this is, if there's a job that's gonna be where where LOM based automation is gonna really affect the, the the kind of the demand for full time folks in this, it's gonna be computer science. And I think also to put an optimistic spin on it, Simon, you said this earlier, but people are gonna realize like, wait a minute, I don't need to get a degree in computer science to you, like, I actually wanna be a journalist. I can actually take it, take some computer science courses and then use this stuff to generate these things to get the rest of the way there, to use this as a tool to do my other work?
My my ultimate utopian version of this is it means that regular human beings can automate things in their lives with computers, which they can't do right now. You know, that blowing that open feels like such a absolute win for for our species. And and we're we're most of the way there. We need to figure out what the tools and UI is on top of LLMs look like, but let regular human beings automate things in their lives. We're gonna crack that, and it's gonna be fantastic.

Yeah. And I would say that, like, I think Linux audio is still a hill on a distant horizon. I don't have the guts to make that a 6 year prediction, but I did use Chat gpt to resolve a Linux printing issue the other day, and that felt like, felt like the future is here. The future is now. I can actually use Chat gpt gave me some very good things to go do and and, ultimately, it worked. I got this goddamn thing printing, but it was pretty frustrating. But the fact that it
was FFmpeg? I use FFmpeg multiple times.

Oh, god. Yeah. Well yes. Yeah.
I yeah. It's it's great.

Yeah. For like, I will never again read an FFmpeg. I I have read an FFmpeg manual for the last time in my life. I will only generate, FFmpeg invocations with chat GPT. I'm just like, there's no way. I'm I'm not gonna sully myself with it anymore. Alright. Well, that's a that's a that's a good roundup. The, you

know, Brian, I would say that I asked chat gbt to evaluate my predictions because you don't want them to be, like, too obvious or or whatever. And I think that this applies for everyone. It told me none of these predictions are obviously wrong yet, and they all fall within reasonable expectations for their time frames. So, think think we can take that one to the bank.

The the dreaded neutral zone. What is it that that that makes an LOM go neutral? The, well, that's good. So I I I think that's good. I I guess I guess the the the I mean, the LM stands for milk toast apparently. It feels like you know? It's it's very, you know, Mike, you'd I I I you probably use this term regularly, but I know I think Adam and I both heard it for the first time last year from you. It's a very normcore answer, Chad.

Sorry. Say that again, Brian? You looked out.

The the the you you were you described, chat gbt or LOMs as being very good to give you a norm core answer to any problem you have.

Yeah. That's a go to.

So I I I think we've got the very the very norm core, interpretation of our predictions.
Yep. Alright.

What do you any last predictions from anybody? I've got one. Yeah.
I've got one last 3 year prediction. On the 3 year, I predict, the Apple's Xserve line returns, and Apple sells server hardware again.

Okay. Wow. That's exciting. It it that that's exciting. We got a Xserve, the return of Xserve from the from Ian who's who, again, your your 6 year prediction in 23 is was was pretty good. So we take that one seriously. So alright. The return of Xserve. Rack scale compute from from our our friends at Apple, perhaps. Don't worry. I feel like With the coming chips crisis, we're sitting pretty here at Oxide. So
I mean, it's unlikely that they're doing right scale design, and and I feel like Oxide is still going to have a pretty attractive niche in that market. It's just, I feel like Apple were definitely developing hardware to be able to do the, private cloud compute stuff, and they're not racking Mac Pros. And I feel like it's unlikely that they're going to not sell that hardware in addition to making it for their internal usage.

Well, I I think it's a little too logical, I think, for Apple. I I I agree on the logic, but, we shall see, but a good 3 year predictions. Folks in the chat, definitely wanna get your predictions. So, Adam, should the folks put out PRs against the show notes for their for their predictions?

That would be awesome.

So, if you, please, give us some PRs, get your predictions in there, and, looking forward to a a a great 2025. Simon, thank you so much, especially for joining us, and really loved the conversation we had with you a year ago, and it's been great to keep up on your stuff. I again, you continue to really, I think, serve the the practitioner and, like, I think, the the the broader industry by really capturing what is possible, with with what is, impregnable. So I'll be thinking of you anytime anyone mentions agents over the next year. I'm just gonna be like, anytime there are conflicting definitions of agents, I will I will be thinking of you.
So
Excellent. Thanks for having me. This has been really fun. Alright.

Thanks, everyone. Happy New Year.