Cool Zone Media.
Chosen by God, perfected by science. I'm Ed Zitron. This is Better Offline. And as I've written about many, many, many, many times and argued on this very podcast just as often, the large language models run by companies like OpenAI, Anthropic, Google, and Meta are unprofitable and unsustainable, and the transformer-based architecture they run on has peaked. They're running out of training data, and the actual capabilities of these models were peaking as far back as March twenty twenty four. Nevertheless, I'd assumed, incorrectly, by the way, that there would be no way to make them more efficient, because I had assumed, also incorrectly, that the hyperscalers, along with OpenAI and Anthropic, would be constantly looking for ways to bring down the ruinous cost of their services. After all, OpenAI lost five billion dollars last year, and that's after three point seven billion dollars in revenue too. And Anthropic lost just under three billion dollars in twenty
twenty four. And in the last episode I told you a little bit about DeepSeek. By the way, in this one we're going to get into, well, how fucked things might actually be. But what I didn't wager was that potentially nobody was actually trying to make these models more efficient. My mistake was, if you can believe this, being too generous to the AI companies, assuming that they didn't pursue efficiency because they couldn't, and not because they couldn't be bothered. But then, as I just hinted at, a little-known Chinese company released a product that was broadly equivalent to OpenAI's latest reasoning models, but cost a fraction as much to train and run. And now the conventional understanding of how generative AI should work has been fundamentally upended. You see, the pre-DeepSeek status quo was one where several truths, and I'd say that in the loosest sense of the word, allowed the party to keep going. So the first one is that
these models were incredibly expensive to train. GPT-4o cost one hundred million dollars in the middle of twenty twenty four, and future models, according to Dario Amodei of Anthropic, might cost as much as one billion dollars or more to train. And training future models, by the way, as a result of this, would necessitate spending billions of dollars on both data centers and the GPUs necessary to keep training these bigger, huger models. Now, another thing was that these models had to be large, because making them large, pumping them full of training data, and throwing masses of compute at them would unlock new features, such as an AI that helps us accomplish much more than we ever could without AI, which is a Sam Altman quote. And in the words of Sam again, you'd be able to get a personal AI team full of virtual experts in different areas working together to create almost anything we can imagine. I don't know, mate,
you ever try creating a functional fucking business? Dipshit. Anyway, here's another one. These models were incredibly expensive to run. They had to be this way, but it was all worth it, because making these models powerful was way more important than making them efficient. Because once the price of silicon comes down, and this is a refrain I've heard from multiple different people as a defense of the costs of generative AI, we would then have these powerful models that were cheaper, somehow, because of silicon. Now you may think, Ed, that sounds like not a real argument. That just sounds like something someone said once. And it is. It is something someone said once. Anyone who knows anything about chips knows how hard it is to make a new chip. And remember one of the CES episodes, when I asked Max Cherney about this? You should go back and
listen to him. Anyway, another thing, another part of this, was that as a result of this need to make bigger, huger, even bigger models, the most powerful ones, these big beautiful models, we love them, we look at the big beautiful models, we would of course need to keep buying bigger, more powerful GPUs, which would continue the American excellence of burning a bunch of money on nothing. And by following this roadmap, everybody wins. The hyperscalers get the justification they needed to create more sprawling data centers and spend massive amounts of money, OpenAI and their ilk get to continue building powerful models, and Nvidia continues to make money selling GPUs. Remember I've said in the past that things were kind of a death cult? This is what this is. It's a capitalist death cult. It runs on plagiarism and hubris and the assumption that at
some point all of this would turn into something meaningful. Now, I've argued for a while that the latter part of the plan was insane, that there was no profitability for these large language models. And I believed there simply wasn't a way to make these models more efficient. In a way, I was wrong. The current models developed by both the hyperscalers, so Gemini from Google or Llama from Meta, and so on and so forth, and the multi-billion-dollar startups, if you can even fucking call them that, OpenAI and Anthropic, they're horribly inefficient. And I just made the mistake of assuming that they tried to make them more efficient and they couldn't. But what we're witnessing right now isn't some sort of weird China situation. This isn't China being Chinese and doing scary Chinese things to us. No, what we're witnessing is the American tech
industry's greatest act of hubris. It's a monument to the barely conscious stewards of so-called innovation, who are incapable of breaking the kayfabe of the fake competition where everybody makes the same products, charges about the same amount
of money, and mostly innovates in the same direction. Somehow, nobody, not Google, not Microsoft, not OpenAI, not Meta, not Amazon, not Oracle, thought to try, or was capable of, creating something like DeepSeek. Which doesn't mean that DeepSeek's team is particularly remarkable or found anything super new, but that for all the talent, trillions of dollars of market capitalization, and supposed expertise in America's tech oligarchs, not one bright spark thought to try the things that DeepSeek had tried, which appear to be: what if we didn't use as much memory, and what if we tried synthetic data? And because the cost of model development and inference was so astronomical in the case of American models, they never assumed that anyone would try to undercut their position. This is especially bad considering that China's focus on AI as a strategic part of its industrial priority was really no secret, even if the ways it supported domestic companies kind of are. In the same way that the automotive industry was blindsided by China's EV manufacturers, the same is happening with AI. Fat,
happy, and lazy, and most of all oblivious, America's most powerful tech companies sat back and built bigger, messier models, powered by sprawling data centers full of billions of dollars of GPUs from Nvidia, a bacchanalia of spending that strains our energy grid and depletes our fucking water reserves, without, it appears, much consideration of whether an alternative was possible. I refuse to believe that none of these companies could have done what DeepSeek has done, which means that they either chose not to, or they were so utterly myopic, so excited to burn so much money scorching the earth, boiling lakes, and stealing from people in pursuit of further growth, that they didn't think to try. This isn't about China. It's so much fucking
easier if we let it be about China. No, no, no, no. It's about how the American tech industry is incurious, lazy, entitled, directionless, and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They're incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving actual fucking problems, even the problems that they themselves created with their large language models. By making this about China, we ignore the root of the problem: that the American tech industry is no longer interested in making good software that actually helps people. DeepSeek shouldn't be scary to Silicon Valley, because Silicon Valley should have come up with this first. It uses less memory, fewer resources, and several kind of quirky workarounds to adapt to the limited compute resources available. All things that you'd previously associate with Silicon Valley, except now Silicon Valley's only interest, like the rest of the American tech industry's, is the Rot Economy. It only cares about growing, growing at all costs, even if said costs were really things you could mitigate, or if the costs themselves were self-defeating. To be clear, if the alternative is that all of these companies simply did not come up with this idea, that in and of itself is a damning indictment of the Valley. Was nobody thinking of this stuff?
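To give a rough sense of why the "uses less memory" lever matters, here's some back-of-envelope Python. The FP8 mixed-precision detail comes from DeepSeek's own V3 technical report, and the 671 billion total parameter count is their reported figure; everything here is illustrative arithmetic about weight storage only, not a claim about full training footprints (activations, optimizer state, and KV caches add a lot on top).

```python
# Back-of-envelope: memory needed just to store model weights, at different
# numeric precisions. Lower-precision formats shrink the footprint roughly
# linearly with bytes per value.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_memory_gb(n_params: float, dtype: str) -> float:
    """Gigabytes required to hold the raw weights at a given precision."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

N_PARAMS = 671e9  # DeepSeek V3's reported total parameter count
for dtype in ("fp32", "fp16", "fp8"):
    print(f"{dtype}: {weight_memory_gb(N_PARAMS, dtype):,.0f} GB")
# fp32: 2,684 GB / fp16: 1,342 GB / fp8: 671 GB
```

Halving or quartering bytes per weight is exactly the kind of unglamorous efficiency work the big labs had every resource to pursue.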
If they were, why didn't Sam Altman or Dario Amodei or Satya Nadella or anyone else put serious resources into efficiency? Was it because there was no reason to? Was it because there was, if we're honest, no real competition between any of these companies? Did anybody try anything other than throwing as much compute and training data at the model as possible? It's all just so cynical and
antithetical to innovation itself. Surely, if any of this shit mattered, if generative AI truly was valid and viable in the eyes of these companies, they would have actively worked to do something like DeepSeek has done. Don't get me wrong, it appears DeepSeek employed all sorts of weird tricks to make this work, including taking advantage of distinct parts of both CPUs and GPUs to create something called a digital processing unit, essentially redefining how data is communicated within the servers running training and inference. And just as a reminder, inference is the thing where, when you type something in, the model infers the meaning and generates a response. I could have specified that earlier. DeepSeek had to do things that a company with unrestrained access to capital and equipment wouldn't have to do, and it often used impractical and quirky methods to do so. Nevertheless, OpenAI and Anthropic both have enough money and hiring power to have tried, and succeeded, in creating a model this efficient and capable of running on older GPUs, except what they wanted, what they actually wanted, was more goddamn growth, and the chance to build even bigger data centers with even more compute that they would own. OpenAI is as much a lazy, cumbersome incumbent as Google or Microsoft,
and it's about as innovative too. The launch of its Operator agent was a joke, a barely functional product that's allegedly meant to control your computer and take distinct actions like ordering stuff off Instacart, you know, things you could do with your hands. But, just to be clear, it doesn't work. You'll never guess who was really into it, though. His name is Casey Newton. He writes a blog called Platformer, and he's a man so gratingly credulous that it makes me want to fucking scream. And of course he wrote that Operator, when he used it, was a compelling demonstration that represented an extraordinary technological achievement, which also, somehow, was significantly slower, more frustrating, and more expensive than simply doing any of these tasks yourself. Casey, of course, not to worry, had some extra thoughts about DeepSeek: that there were reasons to be worried, but that American AI labs were still in the lead, saying that DeepSeek was only optimizing technology that OpenAI and others had invented first, before saying that it was only last week that OpenAI made available to Pro plan users a computer that can use itself. This statement is
bordering on factually incorrect. It is fucking insane that Casey is still doing this. I don't know what to do with this guy. That's a fucking lie. The computer can't use itself. This shit can't. Just to explain what Operator is: you're meant to type in something like, hey, order me some milk, order me some milk off of Instacart. And when Casey tried this, it tried to find milk in Des Moines, Iowa. Just fucking insane. This is how these companies have got big. It's people like Casey, people who, at anything they show, are just like, goddamn, that's the most impressive thing I've seen in my life. It's a fucking farce. But let's be frank,
these companies aren't building shit. OpenAI and Anthropic are both limply throwing around the idea that agents are possible in an attempt to raise more money to burn, and after the launch of DeepSeek, I have to wonder what any investor thinks they're investing in, other than certain ones I'll get into in a bit. And to be clear, an agent is meant to be this autonomous thing to which you say, hey, go and do this action, go and sell things for me, go and email people for me. They don't really work. There are some that kind of do, that are really expensive, but large language models are not built for this kind of thing. But let's be honest about DeepSeek. As I said in the last episode, they've built a more efficient reasoning model, so like OpenAI's o1. And you'd think, well, okay, couldn't OpenAI simply add DeepSeek onto its models? Not really. First of all, with the way these models work, you can't just, like, plug it in. It's just not how it works. They could train a new model using DeepSeek's techniques, but the optics of that aren't brilliant. It would be a concession. It'd be admitting that OpenAI slipped and needs to catch up, and not to its main rival, pretend rival, I mean, Anthropic, or to, like, another big tech firm, but to an outgrowth of a hedge fund in China, a company that few had heard of before December, and, like, really not that many people had heard of before January twenty fifth.
It's very embarrassing, and this, in turn, I think, will make any serious investor think twice about writing the company a blank check. They're going to have to dip into some very bothersome pockets, and as I've said ad nauseam, this is potentially fatal, as OpenAI needs to continually raise money, more money than any startup has ever raised in the history of anything, and it really doesn't have a path to breaking even, even if they copy what DeepSeek did. Because right now, though DeepSeek is thirty times cheaper than o1, we don't know whether that pricing is profitable or sustainable. We haven't found out.
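For a sense of where "thirty times cheaper" comes from, you can compare published per-token API list prices. The figures below are approximate snapshots of early 2025 pricing and may well have changed since; treat them as illustrative, not authoritative.

```python
# Approximate API list prices, dollars per million OUTPUT tokens (early 2025).
# Illustrative snapshots only; check the providers' current pricing pages.
O1_PER_M_OUTPUT = 60.00   # OpenAI o1
R1_PER_M_OUTPUT = 2.19    # DeepSeek R1

ratio = O1_PER_M_OUTPUT / R1_PER_M_OUTPUT
print(f"o1 output tokens cost roughly {ratio:.0f}x more than R1's")  # ~27x
```

Whether that gap reflects genuine cost structure or subsidized pricing on DeepSeek's side is exactly the open question.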
And if OpenAI wants to do its own cheaper, more efficient model, it's likely to have to create it from scratch, like I said. And while it could use distillation to make a smaller model out of its own models, by the way, DeepSeek taught itself using OpenAI's outputs, like I mentioned in the last episode, so that's kind of what DeepSeek already did. It has already been fed OpenAI bullshit. Even with OpenAI's much larger team and more powerful hardware, it's hard to see how creating a smaller, more efficient, and almost as powerful version of o1 benefits them in any way, because said version has, well, already been beaten to market by DeepSeek, and, thanks to DeepSeek, will almost certainly have a great deal of competition for a product that to this day lacks any killer apps.
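Since distillation keeps coming up: mechanically, it means training a smaller "student" model to match the output distribution of a bigger "teacher," typically by minimizing a KL divergence between their softened predictions. Here's a minimal, self-contained sketch of that loss, with toy logits and no real models involved:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; a higher temperature
    flattens it, exposing more of the teacher's 'dark knowledge'."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student): zero when the student exactly mimics the
    teacher's softened next-token distribution, larger as they diverge."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]  # toy next-token scores from the big model
print(distillation_loss(teacher, teacher))          # 0.0: perfect mimic
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # > 0: distributions differ
```

Training on a teacher's outputs, which is roughly what the "fed OpenAI bullshit" jab describes, is a cruder cousin of the same idea: the student learns from the teacher's generations rather than its raw probability distributions.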
Anyway. It's just very frustrating to me. It drives me a little insane. Reading all this stuff makes me feel crazy. I think you can hear it in my voice, the sanity stripping away. But I'm here to podcast, so don't worry. But seriously, though, anyone can build on top of what DeepSeek has already built. Where is OpenAI's moat, exactly? And where's Anthropic's?
What are the things that make these companies worth sixty billion or one hundred and fifty billion, or oh my god, as we'll discuss in a bit, three hundred and forty billion dollars?
What is the technology they own, or the talent they have, that justifies these valuations? Because it's kind of hard to argue that their models are particularly valuable anymore. Celebrity? The cult of personality around Sam Altman? He's an artful bullshitter, and he's built a career out of being in the right places at the right times, having the right connections, knowing exactly what to say, especially to credulous tech media ponces without the spine or inclination to push back on his more stupid claims. And already, Altman has tried to shrug off DeepSeek's rise, admitting that while DeepSeek's R1 model is impressive, particularly when it comes to its efficiency, OpenAI will obviously deliver much better models, and also that it's, quote, legit invigorating for OpenAI to have a new competitor. Yeah, mate, sure, I'll bet you're loving this. Altman ended the tweet that quote came from with, look forward to bringing you all AGI and beyond, something which, I'd add, has always been close on the horizon in Altman's world but has never really materialized. The timeline keeps moving and there's no actual proof they can do it. AGI is not fucking happening, and if it's possible in any way, it's not coming out of
probabilistic models. I'm fucking sick of this. And OpenAI can't even lean on its relationship with Microsoft, which on Wednesday, January twenty ninth, started offering DeepSeek's models through its own cloud services. OpenAI hasn't got shit. DeepSeek has commoditized the large language model, publishing both the source code and the guide to building your own. Whether or not someone chooses to pay DeepSeek is largely irrelevant. Someone else will take what they have created and build their own, or people will start running their own DeepSeek instances, renting GPUs from one of the various cloud computing firms, who don't give a shit and will take the money. And while Nvidia will always find other ways to make money, Jensen Huang is amazing at this, it's going to be a hard sell for any hyperscaler to justify spending billions more on GPUs to markets that now know that near-identical models can be built for a fraction of the cost with older hardware. Why do you need Blackwell, the latest Nvidia GPU? The narrative that this is the only way to build powerful models doesn't really hold water anymore, and the only other selling point is, what if China does something? Well, the Chinese did something, and they've now proven that they can not only compete with American AI companies, but that doing so is possible in an efficient way that can effectively crash
the market. And while there's been a recovery, this is still very worrying. I also want to address something real quick. A few people on Twitter have been suggesting that talking about DeepSeek positively in any way is some sort of Chinese op. If you believe this, you're a fucking moron. I really must be clear: take your weird xenophobia and go eat your own shit. I don't fucking care anymore. Yes, there are problems with China. Yes, China does things to America.
This is an open source thing. You can remove China from the equation, because it's open source; someone else is going to use this. If your only defense is that the sneaky Chinese are doing something, go to therapy and talk to the therapist about you being paranoid or racist, because it's one of them. I should also be clear: concerns about China are very realistic. The Chinese government has tried to interfere with America. It's happened many times. But even if China is funding DeepSeek's models, the fact that they are open sourced means that anyone can run them and anyone can build their own. They can look under the hood. We don't have the training data, but that is it. You cannot win on a xenophobic argument here. You can have realistic concerns about another foreign power, I'm not saying not to, but what I am saying is that you have to look at this realistically, and you have to take this seriously. And dismissing this as Chinese magic is stupid. It's very goddamn stupid. But like I said earlier, it also isn't clear whether these models are actually going to be profitable. It's unclear who funds DeepSeek, like I just said, and whether its current pricing is actually sustainable. But they're likely going to be a damn sight more profitable than anything OpenAI is
currently selling. After all, OpenAI loses money on every single transaction, even their two hundred dollars a month ChatGPT Pro subscription. And if OpenAI cuts its prices to compete with DeepSeek, its losses are only going to deepen. And as I've said again and again, this is also deeply cynical, because it's obvious that none of this was ever about the proliferation of generative AI, or making sure that generative AI was accessible. Putting aside my very obvious personal beliefs for a second, it's fairly obvious why these companies, the big hyperscalers and OpenAI and Anthropic, wouldn't want to create something like DeepSeek, because creating an open source model that uses fewer resources means that OpenAI, Anthropic, and their associated hyperscaler findom clients would lose their soft monopoly on large language models. Now, what does that mean? I'll explain. Before DeepSeek, making a competitive large language model like GPT-4o, as in one that you can actually commercialize, required exceedingly large amounts of capital, and making larger ones effectively required you to kiss the ring, or the ass, of Microsoft,
Google, or Amazon. While it isn't clear what it cost to train OpenAI's o1 reasoning model, we know that GPT-4o cost in excess of one hundred million dollars, and o1, being a more complex model, likely cost even more. We also know that OpenAI's training and inference costs in twenty twenty four were around seven billion dollars, meaning that either refining current models or building new ones is quite costly. The mythology of both OpenAI and Anthropic is that these large amounts of capital weren't just necessary, but the only way to do this. While these companies ostensibly compete, neither of them seemed concerned about doing so as actual businesses that made products that were, say, cheaper and more efficient to run, you know, that made more money than they cost, because in doing so, these companies would break the illusion that the only way to create powerful artificial intelligence was to hand billions of dollars to one of two companies and build giant data centers to build even larger language models.
This is AI's Rot Economy: two lumbering companies claiming that they're startups, creating a narrative that the only way to build the future is to keep growing, to build more data centers, to build larger language models, to consume more training data, with each infusion of capital, GPU purchase, and data center buildout creating an infrastructural moat that always leads back to one of a few tech hyperscalers. OpenAI and Anthropic need the narrative to say, buy more GPUs and build more data centers, because in doing so they create the conditions of that infrastructural monopoly. Because the terms, forget about building software that does stuff for a second, were implicitly that smaller players cannot enter the market, because the market is defined as large language models that cost hundreds of millions of dollars and require access to more compute than any startup could ever reasonably access without the infrastructure that
a public tech company delivers. Remember, neither Anthropic nor OpenAI has ever marketed themselves based on the products they actually build. Large language models are, in and of themselves, fairly bland software products, which is why we've yet to see any killer apps. This isn't a particularly exciting pitch to investors or the public markets, because there's no product, innovation, or business model to point to, and if they'd actually tried to productize it and turn it into a business, it's quite obvious at this point that there really isn't a multi-trillion-dollar industry for generative AI. Look at Microsoft and their attempts to strong-arm Copilot into Microsoft 365, both personally and commercially. Nobody said, wow, this is great, when they were made to use Copilot in Word. Lots of people, however, asked, why am I being charged significantly more for a product that I don't want or care about? OpenAI only makes twenty seven percent of its revenue from selling access to its models, so allowing people to use their models to build products. That's about a billion dollars of annual recurring revenue, by the way, with the rest of their money, about two point seven billion dollars last year, coming from subscriptions to ChatGPT.
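That revenue split can be sanity-checked with trivial arithmetic (figures as reported, in billions of dollars; rounding is mine):

```python
# OpenAI's reported 2024 revenue mix: ~27% from API access to its models,
# the rest from ChatGPT subscriptions. Figures rounded, billions of dollars.
total_revenue = 3.7
api_share = 0.27

api_revenue = total_revenue * api_share        # the "~billion dollars of ARR"
subscriptions = total_revenue - api_revenue    # the ChatGPT subscription side

print(f"API: ~${api_revenue:.1f}B, subscriptions: ~${subscriptions:.1f}B")
# API: ~$1.0B, subscriptions: ~$2.7B
```

Which is the point: the business selling the models is a fraction of the business selling the chatbot.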
If you ignore the hype, OpenAI and Anthropic are actually deeply boring software businesses with unprofitable, unreliable products prone to hallucinations, and new products, such as OpenAI's Sora, that cost way too much money to both run and train only to get results that, well, suck. They're not good. Even OpenAI's push into the federal government with the release of ChatGPT Gov is unlikely to reverse its dismal fortunes. Seriously, think about it. I'm sure some of you are going to say, well, Trump will just give them money. These motherfuckers need way more money than Trump is going to give them, and why would Trump bet on a loser? Why would Trump be like, oh yeah, I'm going to give more money to this company that does the same thing? He doesn't understand this shit, and he probably just looks at Sam Altman and goes, now that's the kind of new money I don't like. But to make this more than a deeply boring software business, OpenAI and Anthropic needed larger models, and they needed them to get larger, generally in perpetuity, and for the story to always be that there was only one way to build the future, and that the future cost hundreds of billions of dollars, and that only the biggest geniuses, who all happened to work in the same two or three places,
were capable of doing it. Post-DeepSeek, there really isn't a compelling argument for investing hundreds of billions of dollars of capex in data centers, or buying new GPUs, or even pursuing large language models as they currently stand. It's possible, and DeepSeek, through its research papers, explained in detail how to build models competitive with both of OpenAI's leading models, and that's assuming you don't simply build on top of the ones that DeepSeek released. It also seriously calls into question what it is you're paying OpenAI for in its various subscriptions, most of which, other than the two hundred dollars a month Pro subscription, have hard limits on how much you can use their most advanced reasoning models. One thing we do know, though, is that OpenAI and Anthropic will now have to drop the price of accessing their models, and potentially even the cost of their subscriptions too. I'd argue that despite the significant price difference between o1 and DeepSeek's R1 reasoning model, the real danger to both OpenAI and Anthropic is DeepSeek V3, which competes with GPT-4o, their general purpose model.
By the way, as of recording this episode, news broke that Alibaba, a behemoth out of China in its own right, has created its own model that outperforms DeepSeek. I've yet to definitively dive into it, but if it's true, it's only going to pile on the price pressure, though I kind of wonder what they could possibly do. Is it going to be cheaper? Because if it's just more powerful, that doesn't really change shit. Anyway, though, DeepSeek's narrative shift isn't just about commoditizing LLMs at large, but about commoditizing the most expensive ones, run by two monopolists backed by three other monopolists. I mean, the magic's died. There's no halo around Sam Altman's or Dario Amodei's head anymore, as their only real argument was, we're the only ones that can do this, something that nobody should have believed in the first place.
Up until this point, people believed that the reason these models were so expensive was because they had to be, and that we had to build more data centers and buy more silicon because that's just how things worked. They believed that reasoning models were the future, even if members of the media didn't really seem to understand what reasoning models did or why they mattered, and that as a result they had to be expensive, because OpenAI and their ilk were just so fucking smart, even if it wasn't obvious what reasoning meant, or what it allowed you to do, or what the products were. It's just very annoying. And now we're going to find out, by the way, because reasoning is now commoditized, along with large language models in general. Funnily enough, the way that DeepSeek may have been trained, using at least in part synthetic data, also pushes against the paradigm that these companies even need to use other people's training data, though their argument, of course, will be that they need more training data, always. We also don't know the environmental effects of DeepSeek, by the way, because even if it's cheaper, these models still require those energy-guzzling GPUs to run, and they're running at full tilt. In any case, if I had to guess, the result will be that the markets are going to be far less tolerant of generative AI, and of the idea that generative AI is the future. OpenAI and Anthropic no longer
really have moats. Unless, well, there's another idea. What if there was a huge fucking idiot with a lot of money? How about billionaire dipshit Masayoshi Son, the CEO of SoftBank, a multinational investment firm that's rumored to be investing anywhere from fifteen to twenty five billion dollars in OpenAI, in a round of up to forty billion dollars that values the company at an astonishing three hundred and forty billion dollars. Now, you may think, damn, this is a sign that OpenAI is going to make it. But I must remind you how bad SoftBank is at investing. They put sixteen billion dollars into famously awful real estate company WeWork, and managed to lose, I think, eight hundred million dollars on the DoorDash IPO and one point eight billion dollars on their investment in Uber, and in both cases did so because they were desperate to bandwagon onto supposedly surefire bets, either just before they crashed or way after an investment made sense.
According to the Wall Street Journal, SoftBank would lead this insane forty billion dollar round in the company and would, and I quote, help assemble investors for the rest of the round. In doing so, SoftBank would also become OpenAI's largest investor, replacing Microsoft. And yes, SoftBank was the largest investor in WeWork before it went tits up. It's also important to remember that OpenAI has pledged to put eighteen to nineteen billion dollars into funding the Stargate data center project, along with, you guessed it, SoftBank, which will be committing the same amount. This is, on some level, SoftBank handing money to itself to invest in data centers to prop up an industry that's dying. Now, this is a developing story, but it's hard to imagine any serious contribution from any respectable investor at this point. My money is on a few VC firms desperately scraping at the bottom of the barrel, a quarter of a billion here, a quarter of a billion there. Maybe Nvidia chucks in some. And on the subject of barrels of stuff, I also expect money from the Kingdom of Saudi Arabia and its associated venture arms. You're going to see a few. Maybe Andreessen Horowitz gets involved, though I don't think they'll put in much. It's just very fucking silly, and I don't know how this works out. OpenAI burns money, and even if they somehow make more efficient models, the actual total addressable
market of generative AI is actually pretty small. Microsoft said in their recent earnings they made twelve to thirteen billion dollars of arr on AI. Just to be clear, that's not profit and that's not a business unit. There's no AI business unit, which means that that is just spread across delivering cloud compute for AI copilot on Microsoft three sixty five products, which by the way, no one likes. They're having trouble selling and other associated copilot products they sell.
I don't know, and like, twelve to thirteen billion dollars across four quarters? That's not actually good at all. It's just very silly. All of this is so silly, and when I think about it too hard I feel a little crazy. OpenAI makes three point seven billion dollars in revenue, and they do so, as I mentioned, primarily from ChatGPT subscriptions. Even if that somehow, and it won't, by the way, turns into three point seven billion dollars of profit, or even ten billion dollars of profit a year, that's less than the profits in a single quarter of any given hyperscaler. It would be respectable, sure. But a three hundred and forty billion dollar valuation? I guess it makes sense if that was profit. It doesn't make sense if it's
not profitable, though. It also isn't obvious how OpenAI would actually provide any liquidity to investors, by which I mean allow them to sell their stock, beyond selling shares of people that work at OpenAI, like people who work there who have been given stock grants selling that to another investor, a really dumb guy maybe. And as an aside, SoftBank bought one point five billion dollars of stock from OpenAI employees in a tender at the end of November twenty twenty four. Just a note
for you. They could also take the company public, but with the unit economics of this fucking company, which boil down, by the way, to "our products lose billions of dollars and are extremely commoditized," I'm not really sure what the plan is here. The fundamental problems that OpenAI has are not solved by throwing more money at the problem. This hasn't worked before and it won't work this time. They're burning cash, and in SoftBank's case, it isn't
obvious what it is they're getting from OpenAI. Is it the chance to continue an industry-wide con? The chance to participate in a capitalist death cult? I don't know. Maybe it's the chance to burn money at a faster rate than WeWork ever could have dreamed of. Will this be the time that Microsoft, Amazon, and Google just drop OpenAI and Anthropic and make their own models based on DeepSeek's work? What incentive is there for the hyperscalers to keep funding OpenAI and Anthropic? They hold
all the cards: the GPUs, the infrastructure, and, in the case of Microsoft, non-revocable licenses that permit them unfettered use and access to OpenAI's tech. And there's little stopping the hyperscalers from building their own models and just dumping them entirely. In fact, Microsoft might actually be a little glad to see SoftBank become the biggest investor and pick up the tab for OpenAI's expenses. I can imagine Satya Nadella texting Sam Altman and being like, no, don't take that money. Hell no, don't do it. Oh, I'd hate that. I'd hate if this was someone else's problem. And the Stargate thing, by the way, that up-to-five-hundred-billion-dollar thing, it's just bollocks, whatever. It's an attempt to remove themselves from Microsoft. And Microsoft actually allowed OpenAI to alter their deal so that they could get cloud compute from others. Now, at the time, people were like, yeah, man, this is
a good thing. This shows OpenAI will be independent. No, it doesn't. It just means that they're gonna be under Masayoshi Son, now the funniest, dumbest man in investing. I love Masayoshi Son. I think it's nice that we have an insane guy who isn't instantly murderous in our
lives anyway. Anyway, though, as I've said before, I believe we're at peak AI, and now that generative AI has been commoditized, the only thing that OpenAI and Anthropic have left, other than a pile of cash, is their ability to innovate,
and I don't think they're capable of doing so. And because we sit in the ruins of Silicon Valley, with our biggest startups all doing the same thing in the least efficient way possible, living at the beck and call of public companies with multi-trillion dollar market caps, everyone is trying to do the same thing in the same way, based on the fantastical marketing nonsense of a succession of directionless rich guys that all want to create America's next
top monopoly. It's time to wake up and accept that there was never any kind of AI arms race, and that the only reason the hyperscalers built so many data centers and bought so many GPUs is because they're run by people that don't experience real problems and thus don't know what problems real people face. Generative AI does not solve any trillion-dollar problems, nor does it create outcomes that are profitable for any particular business. DeepSeek's
models are cheaper to run, but the real magic trick they pulled is that they showed how utterly replaceable a company like OpenAI, and by extension any LLM company, really is. There really isn't anything special about any of these companies. They have no moat, their infrastructural advantage is moot, and their hordes of talent are relatively irrelevant. What DeepSeek has proven isn't just technological, it's philosophical. It shows that the scrappy spirit of Silicon Valley builders is dead, replaced by a series of different management consultants that lead teams of engineers to do things based on vibes. You may ask if all of this means generative AI suddenly gets more prevalent. After all, Satya Nadella of Microsoft cited Jevons paradox, which posits that when resources are made more efficient, their use increases. Sadly, I hypothesize that something
else happens. Right now, I do not believe that there are companies that are stymied by the pricing that OpenAI and their ilk offer, nor do I think there are many companies or use cases that don't exist because large language models are too expensive. AI companies took up a third of all venture capital funding last year, and on top of that, it's fairly trivial to try reasoning models like o1 and make a proof of concept without having to make an entire operational company. Hell, OpenAI barely
has one. I don't think anyone has been on the sidelines of generative AI due to costs. And remember, few seem to be able to come up with a great use case for o1 or other reasoning models anyway, and DeepSeek's models, while cheaper, don't have any new functionality. As a result, I don't really see anything changing beyond the eventual collapse of the API market, which is the way
you plug these models into things. For companies like Anthropic and OpenAI, large language models and reasoning models are niche. The only reason that ChatGPT became such a big deal is because the tech industry has no other growth ideas, and despite the entire industry and public markets screaming about it, I can't think of any mass market product that really matters.
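A quick illustrative aside of my own, not something from the episode: "plugging these models into things" via the API typically means sending an OpenAI-style chat-completions request. The URLs and model names below are just the publicly documented examples, and the helper function is hypothetical, but the point stands on its own: the request shape is identical across providers, which is exactly why the API market is so commoditized.

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions request payload.

    Providers with OpenAI-compatible endpoints (DeepSeek among them)
    accept the same shape; only the base URL and model name differ.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

# Same request builder, two different providers: the switching cost is
# a base URL and a model name, which is the "no moat" problem in miniature.
openai_req = build_chat_request("https://api.openai.com/v1", "gpt-4o", "Hello")
deepseek_req = build_chat_request("https://api.deepseek.com/v1", "deepseek-chat", "Hello")
```

If the only thing an application developer has to change to swap vendors is two strings, price is the whole ballgame, and the cheapest adequate model wins.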
Even if DeepSeek doesn't land the fatal blow, it could set the foundations for another company to drag OpenAI's carcass out behind the barn and hit it with a big stick. One way in which this entire farce could fall is if nasty Mark Zuckerberg decides he wants to simply destroy the entire market for LLMs. Meta has already formed four separate war rooms to break down how DeepSeek did it, and apparently, to quote The Information,
in pursuing Llama, which is their large language model, "CEO Mark Zuckerberg wants to commoditize AI models so that the applications that use such models, including Meta's, generate more money than the sales of the AI models themselves. That could hurt Meta's AI rivals such as OpenAI and Anthropic, which were on pace to generate billions of dollars in revenue from such sales." And lose billions! Fucking hell, I love The Information, but can you add the most important bit?
But I could absolutely see Meta releasing its own version of DeepSeek's models. They've got the GPUs, and Mark Zuckerberg can never be fired, meaning that if he simply decided to throw billions of dollars into specifically creating his own deeply discounted LLMs to wipe out OpenAI, he absolutely could. After all, a few weeks back, Mark Zuckerberg said that Meta would spend between sixty and sixty-five billion dollars in capital expenditures in twenty twenty five. And
this was before the DeepSeek situation. And I imagine the markets would love a more modest proposal that involves Meta offering a ChatGPT competitor simply to fuck over Sam Altman. And that's the thing. ChatGPT is big because everybody's talking about AI, and ChatGPT is the big brand in AI. It is not essential, and it's only been treated as such because the media and the markets ran away with a narrative that they barely understood.
DeepSeek pierced that narrative, because believing said narrative also required you to believe that Sam Altman is a magician versus an extremely shitty CEO that burned a bunch of money. And I don't believe, even before DeepSeek, that Altman's peers really bought into the hype. Sure, you can argue that DeepSeek just built on top of software
that already existed thanks to OpenAI, thank you, Casey, by the way, but this begs a fairly obvious question: why didn't OpenAI build on top of software invented by OpenAI? And here's another question: why does it goddamn matter? In any case, the massive expense of running generative models hasn't been the limiter on their deployment or
their success. You can blame that on the fact that they are, as a piece of technology, neither artificial intelligence nor capable of providing the kind of meaningful outcomes that would make them the next smartphone or cloud computing. Honestly, it's all been a con, and a painfully obvious one, one I've been screaming about since February, when I started this podcast, trying to explain that beneath the hype was an industry that provided modest-at-best outcomes
rather than any kind of next big thing. Without reasoning as its magical new creation, OpenAI really doesn't have anything left. Agents aren't coming; large language models aren't going to build them. AGI isn't coming either. There's no proof it's possible. All of this is fucking flim-flam to cover up how mediocre and unreliable the foundation of the
supposed AI revolution really was. All of this money, all of this energy, and all of this talent was wasted thanks to markets that don't actually do anything, markets that don't make for good companies, just growth hogs, and a media industry that fails to hold the powerful to account. And it looks like everything got broken by some random outgrowth of a Chinese hedge fund. It's so ridiculous, it's
so sickening. I can't believe it. Well, I can totally believe it. I'm actually surprised I didn't come up with this idea myself. Just the idea that someone could do this cheaper, it makes me go insane. And what's more insane is that OpenAI is still going to be able to raise that round. But I think we're approaching the end of days. I'm not calling the end of the bubble yet. I refuse to do that. I'm not going to do that. What I am going to say is it's deflating, and I am going to say I
have no idea how they reinflate it. A bunch more money isn't going to change anything. These companies are washed. Sam Altman's washed. He's the Mark Sanchez of the tech industry, and he's so sickening. All of them are so sickening. Imagine if this money had gone anywhere else. Imagine if it had gone into batteries. Imagine if it had gone
into climate stuff. Imagine if it had gone somewhere useful. Imagine if instead of spending billions on this dogshit, they actually fixed their problems, actually fixed the products that they've made worse. But the problem is that the rot economy is in control, that the growth-at-all-costs mindset is all that you see in the tech industry, and
Silicon Valley needs to repent. Silicon Valley needs to change its ways because when the bubble bursts, and I really think it will, the destruction that follows will be horrifying and it will hit workers. It will hit tens of thousands of tech workers, and it will affect the markets. And after that, the markets are going to realize something. The tech industry doesn't have anything left, they don't have
another growth market. They're out. They're all out. And I look forward to talking about it when it happens. I'm so grateful for you listening. Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M-A-T-T-O-S-O-W-S-K-I dot com. You can email me at EZ at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline
to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.