Welcome to the first... bonus episode of the Techmeme Ride Home for the year 2025. I'm your host, as always, Brian McCullough. Listeners to the pod over the last year know that I have made a habit of quoting from Simon Willison's blog whenever new stuff happens in AI. Simon has become a go-to for many folks in terms of analyzing things, criticizing things in the AI space. I've wanted to talk to you for a long time, Simon, so thank you for coming on the show. No, it's a privilege to be here.
And the person that made this connection happen is our friend swyx, who has been on the show going back even to the Twitter Spaces days, and is an AI guru in his own right. swyx, thanks for coming on the show also. Thanks. Happy to be on, and I have been a regular listener, so just happy to contribute as well.
And a good friend of the pod, as they say. All right, let's go right into it. Simon, I'm going to do the most unfair broad question first, so let's get it out of the way. The year 2025, broadly: what is the state of AI as we begin this year? Whatever you want to say. I don't want to lead the witness. Wow. So many things, right? I mean, the big thing is everything's got really good and fast and cheap.
Like that was the trend throughout all of 2024. The good models got so much cheaper. They got so much faster. They got multimodal, right? The image stuff isn't even a surprise anymore. They're growing video, all of that kind of stuff. So that's all really exciting. At the same time, they didn't get massively better than GPT-4, which was a bit of a surprise. So that's sort of one of the open questions: are we going to see that next leap? But I kind of feel like that's a bit of a distraction, because GPT-4, but way cheaper,
with much larger context lengths, and able to do multimodal, is better, right? That's a better model, even if it's not... What people were expecting, or hoping, maybe not expecting is not the right word, but hoping, is that we would see another step change, right? Right. Where, like from GPT-2 to 3 to 4, we were expecting or hoping that maybe we were going to see the next step
evolution in that sort of thing. Yeah, we did see that, but not in the way we expected. We thought the model was just going to get smarter, and instead we got massive drops in price. We got all of these new capabilities. You can talk to the things now, right? They can do simulated audio input, all of that kind of stuff. And so it's interesting to me that the models improved in all of these ways we weren't necessarily
expecting. I didn't know it would be able to do an impersonation of Santa Claus, and that I could talk to it through my phone and show it what I was seeing, by the end of 2024. But yeah, we didn't get that GPT-5 step, and that's one of the big open questions: is that actually just around the corner, and we'll have a bunch of GPT-5 class models drop in the next few months? Or is there a limit? If you were a betting man and wanted to put money on it, do you expect to see a phase change, step change in 2025?
I don't, particularly. Not for the models getting that much smarter. I think all of the trends we're seeing right now are going to keep on going, especially the inference-time compute, right? The trick that o1 and o3 are doing, which means that you can solve harder problems, but that costs more and it churns away for longer. I think that's going to happen, because that's already proven to work.
I don't know. I don't know. Maybe there will be a step change to a GPT-5 level, but honestly, I'd be completely happy if we got what we've got right now, but cheaper and faster, with more capabilities and longer context and so forth. That would be thrilling to me. Digging in to
what you've just said: one of the things that, by the way, I hope to link in the show notes is Simon's year-end post about what we learned about LLMs in 2024. Look for that in the show notes. One of the things that you alluded to even right there was that in the last year, you felt like the GPT-4 barrier was broken, i.e., other models, even open source ones, are now regularly matching sort of the state of the art.
Well, it's interesting. So the GPT-4 barrier: a year ago, the best available model was OpenAI's GPT-4, and nobody else had even come close to it. And they'd been in the lead for like nine months, right? That thing came out in, what, February, March of 2023, and for the rest of 2023 nobody else came close. And so at the start of last year, like a year ago, the big question was:
why has nobody beaten them yet? Like, what do they know that the rest of the industry doesn't know? And today, I've counted 18 organisations other than OpenAI who've put out a model which clearly beats that GPT-4 from a year ago. Like, maybe they're not better than GPT-4o, but that barrier got completely smashed. And yeah, a few of those I've run on my laptop, which is wild to me. Like, it was very...
It felt very clear to me a year ago that if you want GPT-4, you need a rack of $40,000 GPUs just to run the thing. And that turned out not to be true. This is that big trend from last year of the models getting more efficient, cheaper to run, just as capable with smaller weights and so forth.
I ran another GPT-4 class model on my laptop this morning, right? Microsoft's Phi-4 just came out. And that, if you look at the benchmarks, it's up there with GPT-4o. It's probably not as good when you actually get into the vibes of the thing, but it's a 14 gigabyte download and I can run it on a MacBook Pro. Who saw that coming? The most exciting, like the close of the year, on Christmas Day, just a few weeks ago, was when DeepSeek dropped their DeepSeek V3 model
on Hugging Face, without even a README file. It was just like a giant binary blob that I can't run on my laptop, it's too big, but in all of the benchmarks it's now by far the best available open weights model. Like, it's beating the Meta Llamas and so forth. And that was trained for five and a half million dollars, which is a tenth of the price that people thought it cost to train these things. So everything's trending smaller and faster and more efficient.
Well, OK, I kind of was going to get to that later, but let's combine this with what I was going to ask you next, which is, you know, you're talking also in the piece about the LLM prices crashing, which I've even seen in projects that I'm working on. But explain
that to a general audience, because we hear all the time that LLMs are eye-wateringly expensive to run. But what you're suggesting, and we'll come back to the cheap Chinese LLM, but first of all, for the end user, is that we're starting to see the cost come down sort of in the traditional technology way of cost coming down over time.
Yes, but very aggressively. I mean, my favorite example here is if you look at GPT-3, so OpenAI's GPT-3, which was the best available model in 2022 and into 2023: the models that we have today, the OpenAI models, are 100 times cheaper. So there was a 100x drop in price for OpenAI from their best available model like two and a half years ago to today. And just to be clear, not to train the model, but for...
the use of tokens and things. Exactly. For running prompts through them. And then, when you look at the top tier model providers right now, I think they are OpenAI, Anthropic, Google, and Meta. And there are a bunch of others that I could list there as well. Mistral are very good. The DeepSeek and Qwen models have got great. There's a whole bunch of providers serving really good models. But even if you just look at the sort of big brand name providers, they all offer models now that are
a fraction of the price of the models we were using last year. I think I've got some numbers that I threw into my blog entry here. Yeah, like Gemini 1.5 Flash, Google's fast, high quality model: how much is that? It's $0.075 per million tokens. Like, these numbers are getting so small we just do cents per million now. Cents per million.
Cents per million makes a lot more sense, yeah. They have one model, 1.5 Flash 8B, the absolute cheapest of the Google models, that is 27 times cheaper than GPT-3.5 Turbo was a year ago. And GPT-3.5 Turbo, that was the cheap model, right? Now we've got something 27 times cheaper. And this Google one can do image recognition, it can do million token context, all of those tricks. It really is startling how inexpensive some of this stuff has got. Now, are we assuming that that happening is
directly the result of competition? Because, again, you know, OpenAI, probably for their own almost political reasons, strategic reasons, keep saying, we're losing money on everything, even the $200 plan. So the prices probably wouldn't be coming down if there wasn't intense competition in this space.
The competition is absolutely part of it, but I have it on good authority from sources I trust that Google Gemini is not operating at a loss. Like the amount of electricity to run a prompt is less than they charge you. And the same thing for Amazon Nova, like somebody...
found an Amazon executive and got them to say, yeah, we're not losing money on this. I don't know about Anthropic and OpenAI, but clearly that demonstrates it is possible to run these things at these ludicrously low prices and still not be running at a loss, if you discount the army of PhDs and the training costs and all of that kind of stuff. One more for me before I let swyx jump in here. To come back to DeepSeek and this idea that you could train
a cutting edge model for $6 million: I was saying on the show like six months ago that if we are getting to the point where each new model costs a billion, 10 billion, 100 billion to train, then at some point almost only nation states would be able to train the new models. Do you expect what DeepSeek and maybe others
are proving to sort of blow that up? Or is there like some sort of a parallel track here that maybe I don't have the nous to understand the difference? Are the models going to go
up to a hundred billion dollars, or can we get them down, sort of like DeepSeek has proven? So I am the wrong person to answer that, because I don't work in a lab training these models. So I can give you my completely uninformed opinion, which is, I feel like the DeepSeek thing, that was a bombshell. That was an absolute bombshell when they came out and said, hey, look, we've trained
one of the best available models, and it cost us $5.5 million to do it. And one of the reasons it's so efficient is that we put all of these export controls in to stop Chinese companies from buying GPUs, so they were forced to go as efficient as possible. And the fact that they've demonstrated that that's possible, I think it does completely
tear apart this mental model we had before, that the training runs just keep on getting more and more expensive, and the number of organizations that can afford to run these training runs keeps on shrinking. That's been blown out of the water. So, yeah, that's, again, this was our Christmas gift. This was the thing they dropped on Christmas Day.
Yeah, it makes me really optimistic. It feels like there was so much low hanging fruit in terms of the efficiency of both inference and training, and we spent a whole bunch of last year exploring that and getting results from it. I think there's probably a lot left. I would not be surprised to see even better models trained spending even less money over the next six months.
Yeah, so I think there's an unspoken angle here on what exactly the Chinese labs are trying to do, because DeepSeek made a lot of noise around the fact that they trained their model for $6 million. And nobody quite believes them. It's very, very rare for a lab to trumpet the fact that they're doing it for so cheap. They're not trying to get anyone to buy them. So why are they doing this? They make it very, very obvious that...
Their lab, you know, DeepSeek is about 150 employees. It's an order of magnitude smaller than at least Anthropic and maybe more so for OpenAI. So what's the end game here? Are they just trying to show that the Chinese are better than us? So DeepSeek, it's the arm of a quant fund, right? It's an algorithmic quant trading thing.
So I would love to get more insight into how that organization works. My assumption from what I've seen is it looks like they're basically just flexing. They're like, hey, look at how utterly brilliant we are with this amazing thing that we've done.
And it's working, right? So is that it? Is this just their kind of, this is why our company is so amazing, look at this thing that we've done? I don't know. I'd love to get some insight from within that industry as to how that's all playing out. The prevailing theory among the LocalLLaMA crew and the Twitter crew that I index for my newsletter is that there is some amount of copying going on. It's like Sam Altman, you know, tweeting about how they're being
copied, and then also there are other OpenAI employees that have said similar stuff: that DeepSeek's rate of progress is how US intelligence estimates the number of foreign spies embedded in top labs. Because a lot of these ideas do spread around, but they surprisingly have a very high density of them in the DeepSeek V3 technical report.
So it's interesting. We don't know how much of it came from other models. I think people have run analysis on how often DeepSeek thinks it is Claude or thinks it is GPT-4. And we don't know. I think, for me, we basically will never know as external commentators. I think what's interesting is where does this go? Is there a logical floor or bottom? By my estimations, for the same amount of Elo,
from the start of last year to the end of last year, cost went down by 1,000x for GPT-4 intelligence. Do they go down 1,000x this year? That's a fascinating question. Yeah. Is there a Moore's Law going on, or did we just get a one-off benefit last year for some weird reason? My uninformed hunch is low hanging fruit. I feel like up until a year ago, people hadn't been focusing on efficiency at all. You know, it was all about,
what can we get these weird shaped things to do? And now, once we've sort of hit that, okay, we know that we can get them to do what GPT-4 can do, thousands of researchers around the world all focus on, okay, how do we make this more efficient? What are the most important things?
How do we strip out all of the weights that don't really matter, all of that kind of thing? So yeah, maybe that was it. Maybe 2024 was a freak year of all of the low-hanging fruit coming out at once, and we'll actually see a reduction in that rate of improvement in terms of efficiency. I wonder. I mean, I think we'll know for sure in about three months' time if that trend is going to continue or not.
I agree. You know, the other thing you mentioned is that DeepSeek V3 was the gift that was given from DeepSeek over Christmas, but I feel like the other thing that might be underrated was DeepSeek R1, which is a reasoning model you can run on your laptop. And I think that's something that a lot of people are looking ahead to this year. Oh, did they release the weights for that one? Yeah.
Oh my goodness, I missed that. I've been playing with... so the other big Chinese lab is Alibaba's Qwen, actually. Yeah, I'm sorry, R1 is API-available. Exactly. Qwen, that's really cool. So Alibaba's Qwen have released two reasoning models that I've run on my laptop now. The first one was QwQ, and then the second one was QVQ, because the second one's a vision model. So you can give it vision
puzzles and a prompt, and these things, they are so much fun to run, because they think out loud. It's like, OpenAI's o1 sort of hides its thinking process. The Qwen ones don't. They just churn away. And so you'll give it a problem and it will output literally dozens of paragraphs of text about how it's thinking. My favorite thing that happened with QwQ is I asked it to draw me a pelican on a bicycle in SVG. That's like my standard stupid prompt.
And for some reason, it thought in Chinese. It spat out a whole bunch of Chinese text onto my terminal on my laptop, and then at the end it gave me quite a good, sort of artistic, pelican on a bicycle. And I ran it all through Google Translate, and yeah, it was contemplating the nature of SVG files as a starting point. And the fact that my laptop can think in Chinese now is so delightful. It's so much fun watching it do that.
Yeah, I think Andrej Karpathy was saying, you know, we know that we have achieved proper reasoning inside of these models when they stop thinking in English. And perhaps the best form of thought is in Chinese. But yeah, for listeners who don't know Simon's blog: whenever a new model comes out, I don't know how you do it, but you're always the first to run the pelican benchmark on these models, and you post up the results.
Yeah. So I really appreciate that. Yeah, you should check it out. These are not theoretical; Simon's blog actually shows them. Let me put on the investor hat for a second, because from the investor side of things, a lot of
the VCs that I know are really hot on agents, and this is the year of agents, but last year was supposed to be the year of agents as well. Lots of money flowing towards agentic startups. But in your piece that, again, we're hopefully going to have linked in the show notes, you sort of suggest there's a fundamental flaw in AI agents as they exist right now. Let me quote you, and then I'd love to dive into this. "I remain skeptical as to their ability, based once again on the
challenge of gullibility. LLMs believe anything you tell them. Any systems that attempt to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can't distinguish truth from fiction?"
So essentially what you're suggesting is that the state of the art now that allows agents is still that sort of 90% problem, the edge problem, getting to the... or is there a deeper flaw? What are you saying there? So this is the fundamental challenge here. And honestly, my frustration with agents is mainly around definitions. Like,
if you ask anyone who says they're working on agents to define agents, you will get a subtly different definition from each person. But everyone always assumes that their definition is the one true one that everyone else understands. So I feel like a lot of these agent conversations are people talking past each other, because
one person's talking about the sort of travel agent idea of something that books things on your behalf. Somebody else is talking about LLMs with tools running in a loop with a cron job somewhere, and all of these different things. You ask academics and they'll laugh at you, because they've been debating what agents mean for over 30 years at this point. It's like this long-running, almost sort of in-joke in that community.
But if we assume, for the purposes of this conversation, that an agent is something which you can give a job and it goes off and does that thing for you,
like booking travel or things like that, the fundamental challenge is the reliability thing, which comes from this gullibility problem. And a lot of my interest in this originally came from when I was thinking about prompt injection as sort of this form of
attack against LLM systems, where you deliberately lay traps out there for this LLM to stumble across. And, I should say, you have been banging this drum, and no one's gotten very far, at least on solving this, that I'm aware of, right? Like, that's still an open problem. Yeah, we've been talking about this problem, and, like,
a great illustration of this was Claude. So Anthropic released Claude Computer Use a few months ago. Fantastic demo. You could fire up a Docker container, and you could literally tell it to do something and watch it open a web browser, navigate to a web page, click around, and so forth. Really, really, really interesting and fun to play with. And then one of the first demos somebody tried was, what if you give it a web page that says "download and run this executable"?
And it did, and the executable was malware that added it to a botnet. So the very first, most obvious, dumb trick that you could play on this thing just worked, right? So that's obviously a really big problem. If I'm going to send something out to book travel on my behalf, I mean, it's hard enough for me to figure out which airlines are trying to scam me and which ones aren't. Do I really trust a language model that believes the literal truth of anything
that's presented to it to go out and do those things? Yeah, I definitely think there's... And it's interesting to see Anthropic doing this, because they used to be the safety arm of OpenAI that split out and said, you know, we're worried about letting this thing out in the wild. And here they are enabling computer use for agents.
It feels like things have merged. I'm also fairly skeptical about this, the way it was always going to be the year of Linux on the desktop, and this is the equivalent of this being the year of agents. People are not predicting so much as wishfully thinking and hoping and praying for their companies and agents to work. But I feel like things are coming along a little bit. To me it's kind of like self-driving. I remember in 2014 saying that self-driving was just around the corner.
And I mean, it kind of is, you know, like in the Bay Area. And then you get in the Waymo and you're like, oh, this works. Yeah, but it's a slow cook. It's a slow cook. Over the next 10 years, we're going to hammer out these things. The cynical people can just point to all the flaws, but there are measurable or concrete progress steps that are being made by these builders.
There is one form of agent that I believe in. I mostly believe in the research assistant form of agents. Yes, I was going to say. You've got a difficult problem, and I'm on the beta for Google Gemini 1.5 Pro with Deep Research, I think it's called. These names. These names, right? But
I've been using that. It's good, right? You can give it a difficult problem, and it tells you, okay, I've got to look at 56 different websites, and it goes away and it dumps everything into its context, and it comes up with a report for you. And it's not...
It won't work against adversarial websites, right? If there were websites with deliberate lies in them, it might well get caught out. Most things don't have that as a problem. And so I've had some answers from that which were genuinely really valuable to me. It feels to me like I can see how, given existing LLM tech, especially Google Gemini with its million token context, and Google with their crawl of the entire web,
they've got a cache of every page and so forth. That makes sense to me. And what they've got right now, it's obviously not as good as it can be, but it's a real useful thing which they're going to start rolling out. So, you know, Perplexity have been building the same thing for a couple of years. That I believe in. If you tell me that you're going to have an agent that's a research assistant agent, great.
The coding agents, I mean, ChatGPT Code Interpreter, nearly two years ago, that thing started writing Python code, executing the code, getting errors, rewriting it to fix the errors. That pattern obviously works. That works really, really well. So yeah, coding agents that do that sort of error message loop thing, those are proven to work.
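For readers, here's a minimal, hedged sketch of that error-message loop; it is an illustration of the pattern, not OpenAI's actual implementation, and `call_model` is a hypothetical stand-in for any code-writing LLM.

```python
# Minimal sketch of the "write code, run it, feed errors back" loop.
import subprocess

def call_model(prompt: str) -> str:
    # Hypothetical stand-in: returns a Python script as a string.
    raise NotImplementedError

def solve(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write a Python script that does: {task}"
    for _ in range(max_attempts):
        code = call_model(prompt)
        result = subprocess.run(
            ["python", "-c", code], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return result.stdout  # the code ran cleanly; return its output
        # It failed: show the model its own code plus the traceback and retry.
        prompt = f"This code:\n{code}\nfailed with:\n{result.stderr}\nFix it."
    raise RuntimeError("model never produced working code")
```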
And they're going to keep on getting better, and that's going to be great. The research assistant agents are just beginning to get there. The things I'm critical of are the ones where you trust this thing to go out and act autonomously on your behalf and make decisions on your behalf, especially involving spending money. That, I don't see working for a very long time. That feels to me like an AGI-level problem.
It's funny, because I think Stripe actually released an agent toolkit, which is one of the things I featured, that is trying to enable these agents each to have a wallet that they can go and spend from. Basically, it's a virtual card. It's not that difficult with modern infrastructure. If I can stick a $50 cap on it, then at least it can't lose more than $50. I don't know if either of you
know Rafat Ali. He runs Skift, which is a travel news vertical. And he constantly laughs at the fact that every agent
demo is, we're going to get rid of booking a plane flight for you. And I would point out that historically, when the web started, the first thing everyone talked about is, you can go online and book a trip, right? So it's funny: for each generation of technological advance, the thing they always want to kill is the travel agent, and now they want to kill the web page travel agent. Like, I use Google Flights search. It's great, right? If you gave me an agent to do that for me, it would have
saved me, I mean, maybe 15 seconds of typing in my things, but I still want to see what my options are and go, yeah, I'm not flying on that airline no matter how cheap they are. Yeah. For listeners, I think both of you are pretty positive on NotebookLM. We actually interviewed the NotebookLM creators, and there are actually two agents going on internally. The reason it takes so long is because they're running an agent
loop inside that is fairly autonomous, which is kind of interesting, for one definition of agent loop, if you pick that particular one. And you're talking about the podcast side of this, right? Yeah, the podcast side of things.
There's going to be a new version coming out that we'll be featuring at our conference. That one's fascinating to me. Like, NotebookLM, I think it's two products, right? On the one hand, it's actually a very good RAG product, right? You dump a bunch of things in. You can run searches.
That's what it always was. And then they added the podcast thing. It's a total gimmick, right? But that gimmick got them attention, because they had a great product that nobody paid any attention to at all. And then you add the unfeasibly good voice synthesis of the podcast. It's just spookily brilliant. It's the lesson of Midjourney and stuff like that. If you can
create something that people can post on socials, you don't have to lift a finger again to do any marketing for what you're doing. Let me dig into NotebookLM just for a second as a podcaster. As a gimmick, it makes sense, and then obviously, you know, you dig into it, it sort of
has problems around the edges. Like, it does the thing that all sorts of LLMs kind of do, where it's like, oh, we want to wrap up with a conclusion. I always call that the eighth grade book report problem, where it has to have an intro and a conclusion, you know. But that's sort of a thing where, because I think you spoke about this again in your piece at the year end, about how things are going multimodal, and that you didn't expect
vision and especially audio. So that's another thing where, at least over the last year, there's been progress made that maybe you didn't think was coming as quick as it came. I don't know. I mean, a year ago, we had one really good vision model. We had GPT-4 Vision. It was very impressive. And Google Gemini had just dropped Gemini 1.0, which had vision, but nobody had really played with it yet. Like, Google hadn't...
People weren't taking Gemini seriously at that point. I feel like it was 1.5 Pro when it became apparent that actually they'd got over their hump and they were building really good models. And yeah, to be honest, the video models are mostly still using the same trick: the thing where you divide the video up into one image per second and you dump that all into the context. So maybe it shouldn't have been so surprising to us that long context models plus
vision meant that video was starting to be solved. Of course, what you really want with video is to be able to do the audio and the images at the same time, and I think the models are beginning to do that now. Like, originally, Gemini 1.5 Pro ignored the audio. It just did the one frame per second video trick.
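For readers curious what that "one frame per second" trick looks like in practice, here's a minimal sketch using OpenCV (`pip install opencv-python`); the labs haven't published their exact pipelines, so treat this as an illustration of the idea only.

```python
# Sample roughly one frame per second from a video file; the resulting
# images are what you'd dump into a long-context multimodal model.
import cv2

def one_frame_per_second(path: str) -> list:
    video = cv2.VideoCapture(path)
    fps = video.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing
    frames, index = [], 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if index % int(fps) == 0:  # keep roughly one frame per second
            frames.append(frame)
        index += 1
    video.release()
    return frames
```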
As far as I can tell, the most recent ones are actually doing pure multimodal. But the things that opens up are just extraordinary. Like, the ChatGPT iPhone app feature that they shipped as one of their 12 Days of OpenAI: I really can be having a conversation and just turn on my video camera and go, hey, what kind of tree is this? And so forth. And it works. And for all I know, that's just snapping a picture once a second and feeding it into the model.
The things that you can do with that as an end user are extraordinary. I don't think most people have cottoned on to the fact that you can now stream video directly into a model, because it's only a few weeks old. But wow, that's a big boost in terms of what kinds of things you can do with this stuff. Yeah, for people who are not that close, I think Gemini Flash's free tier
allows you to do something like capture a photo, one photo every second or every minute, and leave it on 24/7, and you can prompt it to do whatever. And so you can effectively have your own camera app, a monitoring app, that you just prompt, and it detects when things change, it detects alerts or anything like that, or describes your day. And the fact that this is free, I think, also leads into the previous point about the prices having come down a lot.
And even if you're paying for this stuff, a thing I put in my blog entry is, I ran a calculation on what it would cost to process 68,000 photographs in my photo collection and, for each one, just generate a caption. Using Gemini 1.5 Flash 8B, it would cost me one dollar and 68 cents to process all 68,000 images.
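Checking that arithmetic (my own back-of-envelope, using only the figures quoted above):

```python
# Per-image cost implied by $1.68 for 68,000 captions.
total_usd = 1.68
images = 68_000
cents_per_image = total_usd / images * 100
print(f"{cents_per_image:.5f} cents per image")        # ~0.00247 cents
print(f"about 1/{1 / cents_per_image:.0f} of a cent")  # ~1/405th of a cent
```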
I mean, that doesn't make sense. None of that makes sense. Like, it's one four-hundredth of a cent per image to generate captions now. So you can see why feeding in a day's worth of video just isn't even very expensive to process. Yeah, I'll tell you what is expensive: it's the other direction. So here we're talking about consuming video. And this year we also had a lot of progress. Probably one of the most anticipated launches of the year was Sora. We actually got Sora.
And it was less exciting. We did. And then Veo 2, Google's Sora, came out like three days later and upstaged it. Like, Sora was exciting until Veo 2 landed, which was just better. In general, I feel the media, or the social media, has been very unfair to Sora, because what was released to the world,
generally available, was Sora Turbo, the distilled version of Sora, right? I did not realize that. You're absolutely comparing the most cherry-picked version of Veo 2, the one that they published on the marketing page, to the most embarrassing version of Sora.
So of course it's going to look bad. Well, I've got access to Veo 2. I'm in the Veo 2 beta, and I've been poking around with it and getting it to generate pelicans on bicycles and stuff. I would absolutely believe that Veo 2 is actually better. So is full-fat Sora coming soon? Do you know? When do we get to play with that one? No one's mentioned anything. I think basically the strategy is,
let people play around with the distilled Sora and get info there, but keep developing Sora with the Hollywood studios. That's what they actually care about. Gotcha. The rest of us don't really know what to do with the video anyway. Right. I mean, that's my thing: I realized that for generative images and video, like, images we've had for a few years, and I don't feel like they've broken out into the talented artists'
community yet. Like, lots of people are having fun with them and producing stuff that's kind of cool to look at, but what I want... You know that movie Everything Everywhere All at Once, right?
Won a ton of Oscars, utterly amazing film. The VFX team for that were five people, some of whom were watching YouTube videos to figure out what to do. My big question for Sora and Midjourney and stuff is, what happens when a creative team like that starts using these tools? I want the creative geniuses behind Everything Everywhere All at Once. What are they going to be able to do with this stuff in, like, a few years' time? Because that's really exciting to me. That's where you take
artists who are at the very peak of their game, give them these new capabilities, and see what they can do with them. I know a little bit here, so I should mention that that team actually used Runway ML. So there was AI in that movie; I don't know how much, so it's possible to overstate this. But there are people integrating
generated video within their workflow, even pre-Sora. Right, because it's not the thing where it's like, okay, tomorrow we'll be able to do a full two-hour movie that you prompt with three sentences. It is like the very first days of, you know, video effects in film. It's like,
if you can get that three-second clip, if you can get that 20-second thing that they did in The Matrix that blew everyone's minds and took a million dollars or whatever to do, it's the little bits and pieces that they can fill in now that is probably already there. Yeah, I think actually having a layered view of what assets people need, and letting AI fill in the low-value assets, right, like the background video, the background music, and, you know, sometimes the sound effects,
that may be more palatable. It maybe also changes the way that you evaluate the stuff that's coming out, because people on social media tend to emphasize foreground stuff, main character stuff. So you really care about consistency, and you really are bothered when, for example, Sora botches an image generation of a gymnast doing flips, which is horrible. It's horrible. But for background crowds, like...
Who cares? And by the way, again, I was a film major way, way back in the day. That's how it started. Things like Braveheart, where they filmed 10 people on a field and then the computer could turn it into 1,000 people on a field. That's always been the
way: it's around the margins and in the background that it first comes in, right? Yeah, The Lord of the Rings movies were over 20 years ago, although they had those giant battle sequences, which were very early. I mean, you could almost call it a generative AI approach, right? They were using very sophisticated algorithms to model out those different battles and all of that kind of stuff.
Yeah, I know very little, I know basically nothing about film production, so I try not to commentate on it, but I am fascinated to see what happens when these tools start being used by the people at the top of their game. I would say there's a cultural war being fought here more than a technology war. Most of the Hollywood people are against any form of AI anyway, so they're busy
fighting that battle instead of thinking about how to adopt it. And it's very fringe. I participated here in San Francisco in one generative AI video creative hackathon, where the AI-positive artists actually met with technologists like myself, and then we collaborated together to build short films. And that was really nice. And I think, you know, I'll be hosting some of those at my events going forward.
One thing that I want to give people a sense of is, like, this is a recap of last year, but then sometimes it's useful to walk away as well with what we can expect in the future. I don't know if you've got anything. I would also call out that the Chinese models here have made a lot of progress,
and God knows who else in the video arena, also making a lot of progress. I think maybe actually China is surprisingly ahead with regards to open weights, at least, but also just specific forms of video generation. Wouldn't it be interesting if a film industry sprung up in a country that we don't normally think of as having a really strong film industry, that was using these tools? Like, that would be a fascinating sort of angle on this. Agreed.
Go ahead. Just to put it on people's radar as well: HeyGen. There's a category of video avatar companies that don't specialize in general video. They only do talking heads, let's just say. And HeyGen's doing very well. swyx, you know that that's what I've been using, right? Yeah. Right, so if you see some of my recent YouTube videos and things like that: the beauty part of the HeyGen thing is I don't want to use the robot voice, so I record the MP3 file for my
clips every single day, and then I put that into HeyGen with the avatar that I've trained it on, and all it does is the lip sync. So it doesn't 100% beat the uncanny valley, but it's good enough that if you weren't looking for it, it's just me sitting there doing one of my clips from the show. And yeah, so by the way, HeyGen, shout out to them. And so, in terms of the look ahead, reviewing 2024, looking at trends for 2025, they basically called this out.
Meta tried to introduce AI influencers and failed horribly, because they were just bad at it. But at some point, there will be more and more, basically, AI influencers, not in the way that Simon is, but in the way that they are not human.
Like, the few of those that have done well, I always feel like they're doing well because it's a gimmick, right? It's novel and fun. So like the AI Seinfeld thing from last year, the Twitch stream, you know. If you're the only one, or one of just a few, doing that, you'll attract an audience, because it's an interesting new thing. But I don't know if that's going to be sustainable longer term or not. I'm going to tell you,
because I've had discussions, I can't name the companies or whatever, but think about the workflow for this. Like, now we all know that on TikTok and Instagram, holding up a phone to your face and doing, like, an in-my-car video, or a walk-and-talk, you know, that's very common. But also,
if you want to do a professional sort of talking head video, you still have to sit in front of a camera. You still have to do the lighting. You still have to do the video editing. Versus if you can just record what I'm saying right now, the last 30 seconds, if you clip that out as an MP3,
and you have a good enough avatar, then you can put that avatar in front of Times Square, on a beach, or wherever. So again, for creators, the reason I think, Simon, we're on the verge of something: it's not, oh, we're going to have AI avatars take over. It'll be one of those things where it takes another piece of the workflow out and simplifies it. I'm all for that. I always love this. I like tools.
Tools that help human beings do more ambitious things, I'm always in favor of. That's what excites me about this entire field. Yeah, we're looking into basically creating one for my podcast. We have this guy, Charlie. He's Australian. He's not real, but he opens every show, and we're going to have him present all the shorts. Yeah, go ahead. The thing that I keep coming back to is this idea of credibility. Like, in a world that is full of
AI-generated everything and so forth, it becomes even more important that people find the sources of information they trust, and find people and find sources that are credible. And I feel like that's the one thing that LLMs and AI can never have: credibility. ChatGPT
can never stake its reputation on telling you something useful and interesting, because that means nothing, right? It's a matrix multiplication. It depends on who prompted it and so forth. So I'm always, and this is when I'm blogging as well, I'm always looking for, okay, who are the reliable people who will
tell me useful, interesting information? Who aren't just going to tell me whatever somebody's paying them to tell me? Who aren't going to, like, type a one-sentence prompt into an LLM and spit out an essay and stick it online? Earning that credibility is really important. That's why
a lot of my ethics around the way that I publish are based on the idea that I want people to trust me. I want to do things that gain credibility in people's eyes so they will come to me for information as a trustworthy source. And it's the same for the sources that I'm consulting as well.
I've been thinking a lot about that sort of credibility focus for a while now. Yeah, you can layer or structure credibility, or decompose it. So one thing I would put in front of you, and I'm not saying that you should agree with this or accept this at all, is that you can use AI to generate different
variations, and then you, as the final sort of last-mile person, pick the final output, and you put your stamp of credibility behind that. Like, everything's human-reviewed instead of human-originated. That's the thing: if you publish something, you need to be proud of publishing it. You need to say, I will put my name to this. I will attach my credibility to this thing. And if you're willing to do that, then that's great.
For creators, this is huge, because there's a fundamental asymmetry between starting with a blank slate versus choosing from five different variations. Right. And also, the key thing that you just said is, if everything that I do, if all of the words were generated by an LLM, if the voice is generated by an LLM, if the video is also generated by the LLM, then I haven't done anything, right? But if one or two of those...
You take a shortcut, but it's still something I'm willing to sign off on. Like, I feel like that's where people are coming around to: this is maybe acceptable, sort of. This is where I've been pushing the definition. I love the term slop. I've been pushing the definition of slop as AI-generated content that is both unrequested and unreviewed. And the unreviewed thing is really important. That's the thing that elevates something from slop to not-slop: if
a human being has reviewed it and said, you know what, this is actually worth other people's time. And again, I'm willing to attach my credibility to it and say, hey, this is worthwhile. It's the curatorial and editorial part of it. No matter what the tools are to do shortcuts, to do, as swyx is saying, choose between different edits or different cuts. But in the end, if there's a
curatorial mind or editorial mind behind it... I want to wedge this in before we start to close. One of the things, coming back to your year-end piece, that has been something that I've been banging the drum about is when you're talking about LLMs getting harder to use. Oh, wow, yeah. You said most users are thrown in at the deep end. The default LLM chat UI is like
taking brand new computer users, dropping them into a Linux terminal, and expecting them to figure it all out. I mean, it's literally going back to the command line. The command line was defeated by the GUI. And this is what I've been banging the drum about: this cannot be the user interface. What we have now cannot be the end result. Do you see any hints or seeds of a GUI moment for LLM interfaces?
I mean, it has to happen. It absolutely has to happen. The usability of these things is turning into a bit of a crisis. And we are at least seeing some really interesting innovation in little directions, just like OpenAI's ChatGPT Canvas
thing that they just launched. That is at least going somewhere a little bit more interesting than just chats and responses, you know. Exploring that space where you're collaborating with an LLM, you're both working on the same document: that makes a lot of sense to me. That feels really smart. One of the best things is still, who was it who did the UI where they had a drawing UI where you draw an interface and click a button?
tldraw, with the "make real" thing? That was spectacular. Absolutely spectacular alternative vision of how you'd interact with these models. So I feel like there is so much scope for innovation there, and it is beginning to happen. I feel like most people do understand that we need to do better in terms of interfaces that both help explain what's going on and give people better tools for working with models. I was going to say, I want to
dig a little deeper into this, because think of the conceptual idea behind the GUI: instead of typing into a command line, open word.exe, you click an icon, right? So that's abstracting away, again, the programming stuff, so that, you know, a child can tap on an iPad and make a program open, right? But
the problem, it seems to me, right now with how we're interacting with LLMs is it's sort of like, you know, a dumb robot, where you poke it and it goes over here, but no, I want to go over here, so you poke it this way, and you can't get it exactly right. What can we abstract away from what's currently going on that makes it more fine-tuned and easier to get more precise? You see what I'm saying? Yes.
And this is the other trend that I've been following from the last year, which I think is super interesting. It's the prompt-driven UI development thing. Basically, this is the pattern where Claude Artifacts was the first thing to do this really well: you type in a prompt and it goes, oh, I should answer that by writing a custom HTML and JavaScript application for you that does a certain thing. And when you think about that, it turns out
this is easy, right? Every decent LLM can produce HTML and JavaScript that does something useful. So we've actually got this alternative way of interacting, where they can respond to your prompt with an interactive custom interface that you can work with. People haven't quite wired those back up again. Ideally, I'd want the LLM to be able to ask me a question by building me a custom little UI
for that question, and then it gets to see how I interacted with it. I don't know why, but that's just such a small step from where we are right now, and it feels like such an obvious next step. An LLM, why should you just be communicating with text, when it can build interfaces on the fly that let you select a point on a map, or move sliders up and down? Right, knobs and dials. I keep saying knobs and dials. We can do that.
And the LLMs can build, and Claude Artifacts will build you, a knobs-and-dials interface. But at the moment, they haven't closed the loop. When you twiddle those knobs, Claude doesn't see what you were doing. They're going to close that loop. I'm shocked that they haven't done it yet. So yeah, I think there's so much scope for innovation, and so much scope for doing interesting stuff with that model, where anything you can represent in HTML, JavaScript, and SVG, which is almost everything, can now be part of that ongoing conversation.
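Here's a minimal sketch of what "closing the loop" could look like; everything in it (the widget spec format, `render_widget`, the terminal rendering) is hypothetical, invented just to make the idea concrete.

```python
# The model emits a tiny widget spec instead of prose, the host app
# renders it, and the user's interaction goes back into the chat.
import json

def render_widget(spec: dict) -> str:
    """Render a widget spec (here: in the terminal) and collect input."""
    if spec["type"] == "slider":
        return input(f'{spec["label"]} ({spec["min"]}-{spec["max"]}): ')
    if spec["type"] == "choice":
        for i, option in enumerate(spec["options"]):
            print(f"  {i}: {option}")
        return spec["options"][int(input(f'{spec["label"]} pick a number: '))]
    raise ValueError(f"unknown widget type {spec['type']!r}")

# Imagine the model replied with this spec instead of plain text...
model_turn = {"type": "slider", "label": "How detailed a report?", "min": 1, "max": 10}
value = render_widget(model_turn)
# ...and the twiddled knob is sent back as the next message in the chat.
reply = {"role": "user", "content": json.dumps({"widget_value": value})}
print(reply)
```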
Yeah, I would say the best executed version of this I've seen so far is Bolt, where you can literally type in, make a Spotify clone, make an Airbnb clone, and it actually just does that for you, zero-shot,
with a nice design. There's a benchmark for that now. The LM Arena people now have a benchmark that is zero-shot app generation, because all of the models can do it. I've started figuring out how I'm building my own version of this for my own
project, because I think within six months it'll just be an expected feature. Like, if you have a web application, why wouldn't you have a thing where, oh look, you can add a custom... So for my dataset data exploration project, I want you to be able to do things like conjure up a dashboard,
just via prompt. You say, oh, I need a pie chart and a bar chart, and put them next to each other, and then have a form where submitting the form inserts a row into my database table. And this is all suddenly feasible. It's not even particularly difficult to do, which is utterly bizarre, that these things are now easy.
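To give readers a flavor of how little code that last idea needs, here's a hedged sketch of the sort of thing an LLM can now generate on demand: a tiny Flask app where submitting a form inserts a row into a SQLite table. The "notes" table and field names are purely illustrative, not from Simon's project.

```python
import sqlite3
from flask import Flask, redirect, request

app = Flask(__name__)
DB = "data.db"

with sqlite3.connect(DB) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")

@app.route("/")
def form():
    # A minimal form; an LLM would typically add the charts around it.
    return '<form method="post" action="/add"><input name="body"><button>Add</button></form>'

@app.route("/add", methods=["POST"])
def add():
    with sqlite3.connect(DB) as conn:
        conn.execute("INSERT INTO notes (body) VALUES (?)", (request.form["body"],))
    return redirect("/")

if __name__ == "__main__":
    app.run(debug=True)
```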
I think, for a general audience, that is what I would highlight: software creation is becoming easier and easier. Gemini is now available in Gmail and Google Sheets. I don't write my own Google Sheets formulas anymore; I just tell Gemini to do it. And so I basically somewhat disagree with your assertion that LLMs got harder to use. Yes, we expose more capabilities, but they're in minor forms, like using Canvas, like web search in ChatGPT, and like Gemini being in Google Sheets. No, no, no, no, no. Those are the things that make it harder,
because the problem is that, for each of those features, they're amazing if you understand the edges of the feature. If you're like, okay, so with Gemini in Google Sheets formulas, I can get it to do a certain amount of things, but I can't get it to go and read a web page, you probably can't get it to read a web page, right? But,
you know, there are things that it can do and things that it can't do, which are completely undocumented. If you ask it what it can and can't do, they're terrible at answering questions about that. So my favorite example is Claude Artifacts. You can't build a Claude Artifact that can hit an API somewhere else, because the CORS headers on that iframe prevent accessing anything outside of cdnjs. So good luck learning CORS headers as an end user in order to understand why...
I've seen people saying, oh, this is rubbish, I tried building an artifact that would run a prompt, and it couldn't, because Claude didn't expose an API with CORS headers. All of this stuff is so weird and complicated. The more tools we add, the more expertise you need to really understand the full scope of what you can do. And so the question really comes down to,
what does it take to understand the full extent of what's possible? And honestly, that's just getting more and more involved over time. Yeah. I have one more topic that I think you're kind of a champion of, and we've touched on it a little bit, which is local LLMs and running
AI applications on your desktop. I feel like you are an early adopter of many, many things. I had an interesting experience with that over the past year. Six months ago, I almost completely lost interest. And the reason is that, six months ago, there was no point in using them at all, because the best hosted models were so much better. Like, there was no point at which I'd choose to run a model on my laptop if I had API access to Claude 3.5 Sonnet. They
weren't even comparable. And that changed basically in the past three months, as the local models had this step change in capability, where now I can run some of these local models, and they're not as good as Claude 3.5 Sonnet, but they're not so far away that it's not worth me even using them. The continuing problem is I've only got 64 gigabytes of RAM, and if you run, like, Llama 3 70B, most of my RAM is gone.
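A quick back-of-envelope (my numbers, not Simon's) shows why a 70B model strains a 64 GB machine:

```python
# Rough weight sizes for a 70-billion-parameter model at common quantizations.
params = 70e9
for bits in (16, 8, 4):
    gb = params * bits / 8 / 1e9
    print(f"{bits}-bit quantization: ~{gb:.0f} GB of weights")
# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB, all before KV cache and OS
# overhead, so even a 4-bit quantization eats over half of a 64 GB laptop.
```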
So now I have to shut down my Firefox tabs and my Chrome and my VS Code windows in order to run it. But it's got me interested again. The efficiency improvements are such that now, if you were to stick me on a desert island with my laptop, I'd be very productive using those local models. And that's pretty exciting. And if those trends continue, and also,
Like, I think my next laptop, when I buy one, is going to have twice the amount of RAM, at which point maybe I can run the almost the top tier... like open-weight models, and still be able to use it as a computer as well. NVIDIA just announced their $3,000, 128 gigabyte monstrosity. That's a pretty good price. You know, that's... You're going to buy it. Customers and all. If I get a job...
If I have enough of an income that I can justify blowing $3,000 on it, then yes. Let's do a GoFundMe to get Simon one of these. Come on, you know you can get a job anytime you want; this is purely discretionary. I want a job that pays me to do exactly what I'm doing already and doesn't tell me what else to do. That's the challenge. I think Ethan Mollick does pretty well, whatever it is he's doing.
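For a sense of why 64 gigabytes is tight and 128 is comfortable, here is a rough back-of-envelope sketch of weight-only memory at common quantization levels. It deliberately ignores the KV cache and runtime overhead, which add more on top, so treat the numbers as floors rather than exact figures.

```python
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate RAM needed just for the model weights, in gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (3, 70):
    for bits in (16, 8, 4):
        print(f"{params}B model at {bits}-bit: ~{weights_gb(params, bits):.0f} GB")

# A 70B model is ~35 GB even at 4-bit quantization, which is why it crowds
# out Firefox and VS Code on a 64 GB machine, while a 3B model fits in
# roughly a 2 GB file.
```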
But yeah, basically, I was trying to bring in not just local models, but also, you know, Apple Intelligence is on every Mac machine. You seem skeptical. It's rubbish. Apple Intelligence is so bad. It does one thing well. Oh yeah, what's that? It summarizes notifications, and sometimes it's humorous. Are you sure it does that well? And also, by the way, again from a sort of normie point of view, there's no indication from Apple
of when to use it. Everybody upgrades their thing, and it's like, okay, now you have Apple Intelligence, and you never know when to use it ever again. Oh yeah, you consult the Apple docs, which is MKBHD. The one thing I'll say about Apple Intelligence is that one of the reasons it's so disappointing is that the models are just weak. But now, like, Llama 3.2 3B
is such a good model in a two-gigabyte file. Give Apple six months and hopefully they'll catch up to the state of the art in small models, and then maybe it'll start being a lot more interesting. Yeah. Anyway, this was year one. And, you know, just like the first year of the iPhone, maybe not that much of a hit, and then year three they had the App Store.
I would say give it some time. And Chrome is also shipping Gemini Nano this year, I think, which means that every web app will have free access to a local model that just ships in the browser, which is kind of interesting. And then I also wanted to open the floor for any of us: what are the AI applications that we've adopted that we really recommend?
Because these are all apps that run in a browser or run locally that other people should be trying. I feel like that's always one thing that is helpful at the start of the year.
Okay, so for running local models, my top picks. Firstly, on the iPhone there's this thing called MLC Chat, which works, is easy to install, and runs Llama 3.2 3B, and it's so much fun. It's not necessarily a model capable enough that I'd use it for real things, but my party trick right now is I get my phone to write a Netflix Christmas movie plot outline where, like, a
jeweler falls in love with the King of Sweden or whatever, and it does a good job, and it comes up with pun names for the movies. That's deeply entertaining. On my laptop, most recently I've been getting heavily into Ollama, because the Ollama team are very, very good at finding the good models and packaging them up and making them work well. It gives you an API. My little LLM command-line tool has a plugin that talks to Ollama, which works really well. So Ollama
is, I think, the easiest on-ramp to running models locally. If you want a nice user interface, LM Studio is, I think, the best user interface for this. It's not open source, but it's good and worth playing with. The other one I've been trying recently is this thing called Open WebUI. The UI is fantastic: if you've got Ollama running and you fire this thing up, it spots Ollama and gives you an interface onto your Ollama models.
And that's really nicely done. That's my current favorite open-source UI for these things. But yeah, there are lots of good options. You do need a lot of disk space: the models start at 2 gigabytes for the 3B models that are actually worth playing with, and the really impressive ones tend to be in the 20-to-30-gigabyte range, in my experience. I think my struggle here is I'm not that much of an absolutist in terms of running things locally. Like, I'm happy to call an API. Same here.
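For anyone who wants to try the Ollama route, here's a minimal sketch of calling its local HTTP API from Python. It assumes Ollama is running on its default port and that you've already pulled a model; the model name and prompt are just examples.

```python
import requests

# Assumes `ollama serve` is running locally and the model has been pulled,
# e.g. with `ollama pull llama3.2`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # example model name
        "prompt": "Suggest three pun titles for a Netflix Christmas movie.",
        "stream": False,      # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```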
Okay, yeah. I do it to play; it's my research interest, yeah. When people get so excited... Answer your own question, then: give us more apps that you'd recommend. Yeah, sometimes it's just nice to recommend apps. So, I use Superwhisper now. I tried Wispr Flow; it didn't really work for me. Superwhisper is one of these tools, which basically
replaces typing. You should just talk most of the time, especially if you're doing anything long-form. I hold down caps lock and I talk, and then when I'm done I lift it up. And it's not just about writing down your transcript, because I make ums and ahs all the time, I restate myself all the time, but it uses GPT-4 to rewrite. That's what these guys are doing; they're all doing some form of state-of-the-art
ASR, automatic speech recognition, and then an LLM to rewrite.
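As a rough sketch of that transcribe-then-rewrite pipeline, here's one way to wire it up with open-source pieces: the openai-whisper package for the ASR step and Simon's llm library for the rewrite. This illustrates the pattern, not how Superwhisper itself is implemented; the file name and model choice are placeholders.

```python
import llm      # Simon Willison's llm library, with an API key configured
import whisper  # the open-source openai-whisper package (needs ffmpeg)

# Step 1: ASR. Transcribe the raw dictation, ums and ahs included.
raw_text = whisper.load_model("base").transcribe("dictation.m4a")["text"]

# Step 2: LLM rewrite. Clean up fillers and restatements.
model = llm.get_model("gpt-4o-mini")  # example; any capable model works
cleaned = model.prompt(
    "Rewrite this dictated text as clean prose. Remove filler words and "
    "repeated restatements, but preserve the meaning:\n\n" + raw_text
).text()
print(cleaned)
```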
I would also recommend people check out Rosebud for journaling. I think AI for mental health is quite unexplored, and it's not because we're trying to build AI therapists. I think therapists really hate that; you'll never be on the level of a therapist. That gets back to the human thing we were discussing: on some level there are certain things and disciplines that require the human touch. That might be, sure, but the human touch costs me three hundred dollars an hour, yes, right?
And this thing's $3 a month. Like, you know. So there's a spectrum of people for whom that will work. And I think it's cheap now to try all these things. I'm going to throw in a quick recommendation for an app. Mac Whisper is my favorite desktop app. I love that thing.
It runs Whisper, and you can do things like paste in the URL of a YouTube video and it'll pull the audio and give you a transcript. That's how I watch YouTube now: I slap it into MacWhisper, then I hit copy, paste into Claude, and use the Claude web app to do things.
And MacWhisper works with MP3 files. Every time I'm on a podcast, I dump the MP3 into MacWhisper, then I dump the transcript into Claude and say, what should I put in the show notes? And it spits out a bullet-point list where it says, oh, you mentioned Datasette, you should link to that, that kind of thing. MacWhisper, I use it several times a day, to be honest. It's great.
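You can approximate that YouTube workflow with open-source pieces too: yt-dlp to pull the audio and Whisper to transcribe it. This is a minimal sketch, not what MacWhisper does internally; the URL is a placeholder and both steps need ffmpeg available.

```python
import whisper
from yt_dlp import YoutubeDL

url = "https://www.youtube.com/watch?v=..."  # placeholder video URL

# Download just the audio track and convert it to MP3 (requires ffmpeg).
opts = {
    "format": "bestaudio",
    "outtmpl": "episode.%(ext)s",
    "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
}
with YoutubeDL(opts) as ydl:
    ydl.download([url])

# Transcribe the downloaded audio; result["text"] is the full transcript.
result = whisper.load_model("base").transcribe("episode.mp3")
print(result["text"])
```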
I'm actually going to say one that is incredibly basic, and again coming back to my workflow, but we are currently recording this on Riverside.fm. Riverside is a great tool for recording video and audio, things like what we're doing right now. But I always use this as an example when folks ask, well, what will AI do for me? When I first started using Riverside... we're recording three different channels right now, right?
You guys are recording locally, so there are three audio files and three video files. When I first started using Riverside, you had to pump three tracks into Adobe and then edit: okay, now we focus on Simon, now we focus on Swix, now we focus on Brian, now we do all three. And then one day a tool popped up that says, hit this button, and it's smart edit. And the AI determines, okay, Simon has been talking for 30 minutes, so go to the full shot of him.
And Brian is now talking, or there's overtalk, so let's have all three talking heads, with one button. For anything I posted, it saved me three or four hours' worth of work. That to me is, again, if normies are listening... Riverside has that feature now? Yeah. Damn, I don't use it; I still use a human editor. Oh, that sounds fantastic. The day it came out, I was running around the house telling my wife, telling anyone that would listen, you don't know, I just saved
three hours because they had a new feature. That's exciting. Brian's basically crying with joy right now. All right, let's try to bring this in for a landing. Simon, I have maybe two or three more; we can do these rapid fire. One of the things of my show is that it's sort of Silicon Valley writ large, the horse race of who's up and who's down. To the degree that you're interested in pontificating on this:
OpenAI as a company in 2025: do you see challenges coming? Are you bearish, bullish? I'm almost doing a CNBC sort of thing, but how do you feel about OpenAI this year?
I think they're in a bit of trouble. They seem to have lost a lot of talent. If it wasn't for o3, they'd be in massive trouble, because they'd have lost that top-of-the-pile thing. I think o3 clawed them back up again. But one of the big stories of 2024 is that OpenAI started as the clear leader, and now Google Gemini is really good; Gemini had an amazing year. Anthropic's Claude 3.5 Sonnet is still my personal favorite model.
And that feels notable. A year ago, nobody would argue that OpenAI was not the leader in all of this stuff. Today they're still doing great, but they're not as far ahead as they were. Next question, and maybe this can't be as rapid fire, but I loved, finally, from your piece, the idea that LLMs need better criticism, which I'd love you to expand on, because I straddle this world of tech journalism and
creator and investor and all that stuff. I thought you had a really interesting thing to say about how, and we even alluded to this with Hollywood being against it, better criticism in the sense that, as I took it, everybody has their hackles up. They're trying to defend their livelihoods and things like that.
But it's either "this is going to destroy my job and destroy the world" or... I'm sorry, I'm again leading the witness. What did you mean by LLMs need better criticism? So this is a frustration I have.
If I read a discussion thread somewhere on this topic, I can predict exactly what everyone's going to say. People talk about the environmental impact. They talk about the plagiarism of the training data, the unlicensed training data. And there's often this sort of "oh, and these things are completely useless" thing.
That's the one I will push back against. The other things are true, right? Against the idea that LLMs are just completely useless, the argument I will always make is that they are very useful if you understand how to use them, which is distinctly unintuitive. You have to learn how to deal with something that will wildly hallucinate and make things up, and all of those kinds of things. If you can learn what they're good at and what they're bad at,
I use them dozens of times a day and I get enormous value out of them. So I'll push back on people who say, no, they're just useless. But the other things, you know, the environmental impact, the way the training data works... I feel like the training data one's interesting, because
it's probably legal under fair use, but it's clearly unfair if somebody takes your work without your permission and trains a model which then competes with you in the marketplace. Legal or not, I understand why people are upset about that; it's a reasonable thing to be upset by.
And I also feel like the impact this stuff can have on society matters, especially as it starts undermining all sorts of jobs that we never thought were going to be undermined by technology. Who thought it would come for artists and lawyers first, right? That's bizarre.
What I want is really high-quality conversations where we help people figure out what works and what doesn't. We need people to be able to make good decisions about what to do with their careers, about embracing this stuff, and all of that sort of stuff. And if we just get distracted by saying, yeah, but it's useless, plagiarism-driven, environmentally catastrophic...
Even though those things represent quite a lot of truth, I don't think that's a useful message to lead with. I want to be having the much more interesting high-level conversations. Okay, well, if there are negatives, what do we do to counter those negatives? If there are positives, how do we encourage those? How do we help people make good decisions about how to use this technology?
Where I see this the most is with people who are very internal to this, like you and I, immersed in it every single day and frankly tired of the same debates being recycled again and again. I think what might be more useful or more impactful is the level at which it starts to hit regulation. Last year, we had
a couple of very notable attempts to regulate AI, at the White House level and at the California level, and those did not come to pass. But at some point these criticisms bubble up to law, to matters of national concern. And I feel like there needs to be more information, or enlightenment, there. If only because
regulation tends to be very trailing. My favorite example to pick on, which is very unfair of me, but whatever: the California SB 1047 act tried to cap compute at 10 to the power of 25. Which is exactly DeepSeek. Exactly. Well, it's also exactly the point at which we pivoted from training GPT-5 to o1, where you're no longer scaling pre-training compute. What I'm saying is we're always trying to
regulate the last war, and I don't think that works in a field that is basically eight years old. There are two areas of regulation I'm super interested in. One of them: I do think that regulating the way these things are used can work. The big example is I don't want somebody's insurance claim denied by a black box when nobody can explain what it did.
We have laws for that; this is like redlining. Take those laws, reinforce them, update them for modern capabilities. And then the other one: there's some really interesting stuff around privacy. We've got this huge problem right now where people will refuse to use any of these tools because they don't trust that the things they say won't be trained on and then exposed to other people.
And there are lots of terms and conditions that you can read through and try to navigate. I would love there to be really straightforward laws that people understand, where they know it's not going to train on their input because there's a law that says under these circumstances that can't happen.
That sort of stuff. It's basically taking our existing privacy laws, giving them a few more teeth, and reinforcing them, without introducing cookie banners a la the European Union, right? It's very risky to try and get this stuff right, because you can have all sorts of bad results if you don't design these laws correctly. But there's space for that, I think.
Yeah, when I read that piece, and then when you just said, you know, Swix said, we're in the weeds on this every single day, so we're tired of hearing these arguments... It reminds me of folks that are always into politics, and then they're like...
They're mad at the people that don't care about politics until it's an election year. And then they're like, well, you're a low information voter because all you know is that the factory in your town got shut down or there's inflation or whatever. And so you vote one way or the other, but you haven't been paying attention.
That's kind of the point: you shouldn't expect normal people to pay attention, except for the fact that, oh, this might lose me my job. So you can't blame them for being... I don't know if reactionary is the word, or emotional. But right, if you're in the weeds, it's harder to keep everybody informed, and this is going to touch everybody, so, I don't know. Okay, so this is the very last one, and then we can wrap and
do plugs and everything. But Simon, this is for you. It was kind of alluded to a little bit, and you might not have one, but if there's something coming down the pike this year that a generalist like me is not aware of, that you think is going to be big in the AI space, and maybe, Sean, if you've got one too: what do you think it would be?
I think for most people who haven't been paying attention, these are things we know already. We know that the models are now almost free to run things against. The fact that you can now stream video to a model. The one that I've not played with nearly as much, but the thing where you can share
your entire screen with a model and get feedback there, that's going to be really useful. Though again, the privacy side of things really matters; I do not want some model just training on everything that it sees on my screen. But no, I feel like the stuff that became possible as of a few months ago is enough. I don't need anything new; that's going to keep me busy all year. Swix, go on.
Simon's always too content, and then he sees the next thing and he's like, oh yeah, that's great too. Yep. Okay, I love trying to be contrarian by asking, what does everyone hate right now? Remember, this time last year we'd just had CES: the Rabbit R1, the Humane Pin. Wearables, wearables, yep. Those are completely in the gutter; no one will touch them; they're toxic nuclear waste. Okay: this year is the year of wearables. Yep, yep, I agree with you.
By the way, that cycle always works out where you go to a CES and it's everything, hype, hype, hype, and then three years later it becomes the thing. Unless it's 3D TVs, in which case that was a mistake anyway. But yeah, transparent TVs have been the big thing for the last couple of years. What the hell?
Yeah. So I think Simon may have got one of these, but there are a lot of people working on AI wearables here in SF. They are surprisingly cheap, surprisingly capable, with decent battery life, and they do useful things. We have to work out the privacy aspect, of course. But people like Limitless, which used to be called Rewind, I think, are shipping one of these wearables that, based on your voice, only records your voice. So you opt in.
Interesting, right? And so you can have perfect memory if you want. You can have perfect memory at work: your employer can buy these for you, it only applies at work, and it's fine, it's just a meeting aid. Lots of people use Granola or some kind of
Fireflies or some of these meeting recorders, but only for online meetings. What about in-person meetings? What about conversations, and locations that you've been to? Some of that should be a choice; right now you have zero choice. And
I think these wearables will enable some of that, and it's up to us as a society to determine what's acceptable and what's not. I really like these gray areas where we still don't know yet. Whenever I tell people about this, they're like, I don't know... I guess it's as though you have perfect memory, but some people have better memory than others, so where's the line?
There will be a lot more of these. I would add to that, because, Swix, as you know, because you listen to my show: AI has taken smart glasses and completely changed everyone's mind about that as a product category and form factor. And I should say this from things that I've been looking at investing in: wait till you see what they can add on to earbuds.
The earbuds in your ear can do a lot more things than they're doing now. And then you combine that with smart glasses, and you combine that with an LLM that you can access, maybe with a phone as the mothership, and you get some interesting things. CES next year is going to be crazy if you think wearables, or AI wearables, are a thing. Anyway, this year they were not a thing; there were very much no wearables at CES.
This one's interesting as well, because the thing that makes these interesting is multimodal, right? Audio input, video input, image input, which a year ago was hardly a thing and now is dirt cheap. So yeah, we're in a much better position now than we were 12 months ago to build the software behind this stuff. All right, let's bring this in for a landing. Swix, go first. Tell everybody about, obviously, your podcast, which hopefully
we're simulcasting, but also your conferences, events, everything. Sure. Yeah, you can find my work at latent.space. It's the AI Engineer podcast, much more focused on serving engineers and developers than a general audience. But, you know, feel free to dive into the deep end with us.
And we are also hosting a conference in New York in February, the AI Engineer Summit, where we gather people, and this one is entirely focused on agents. As much as people like to make fun of the idea that every year is the year of agents, I think people at least want to gather to figure out what the open problems to solve are.
And so this is the community of builders that gets together, and they show their latest work. I have Instacart coming to show how they use agents for their recommendation system and their background and internal jobs. I have a whole bunch of fintech and finance companies also showing off work that I cannot name yet. But it'll be lots of fun. We do high-quality events that sometimes people like Simon speak at.
Right, as I said, or I think I said online or on air, I saw Simon speak at one of your events last year. Wait, Swix, just say it again: it's in February, it's in New York City, and I'm going to be there, if that matters to anybody, if that's an attraction. But what are the dates, and how do people apply?
I'm horrible at this. February 20th and 21st. The 20th is the leadership day for management, like VPs of AI and CTOs, and the 21st is the engineer day for individual contributors, the hands-on-keyboard people. That's when I'll have the big labs, Anthropic, Meta, OpenAI, all coming to share their agents work. And then we'll have some new launches as well that you haven't heard of. And to sign up to attend, what website can people go to? Yeah, it's apply.ai.engineer.
All right, Simon, I'm going to handhold you even more. Your weblog is simonwillison.net, but what else would you like us to know or go find out about what you're doing? Yeah, I was going to say my blog. My job, I call it a job, is working on open-source tools for data journalism. That's my project Datasette, spelled like the word cassette but with data: datasette.io. And that's beginning to grow some interesting AI tools. Originally it
was all about data publishing and exploration and analysis, and now I'm like, okay, what plugins can I build that let you use LLMs to craft queries and build dashboards and all sorts of bits and pieces like that? So I'm expecting to have some really interesting product features along those lines in the next few months.
And I'll end by saying, if anyone's listening to this on Swix's show: I do the Tech Meme Ride Home every single weekday, a 15-minute-long tech news podcast. Look up Ride Home in your podcast app of choice: Techmeme Ride Home. Gentlemen, thank you for your time. Thank you, this was fantastic, what a great way to start the year for this show. Cool, thanks a lot for having me, this has been really fun. Yeah, thanks for having us, an honor to be on. Thank you.