Cool Zone Media. Hello, and welcome to Better Offline. I'm your host and chief romance officer, Ed Zitron. In the last episode, I dug into the fundamental weaknesses of OpenAI, the supposed leader of the generative AI boom, and today I'm going to get into a much larger, more systemic, more terminal problem, and the signs that things
are really, really falling apart. And as ever, I will have links to everything I'm talking about in the episode notes, so you know I'm not making it up, which one person suggested I did once, and it bothered me a great deal. But back to the actual stuff. The problems that OpenAI is facing are those faced by the
entire generative AI industry, ones born of its sole focus on the transformer-based architecture underlying large language models like ChatGPT. OpenAI's issue, besides the fact that it's in a terrible business as discussed in the last episode, is that generative AI, and by extension the model GPT and the product ChatGPT, doesn't really solve complex problems that would justify the massive costs behind it. These massive, intractable challenges are a result of these
models being probabilistic, meaning that they don't know anything. They're just generating an answer based on maths and training data, something that model developers are running out of at an incredible pace. Hallucinations, which occur when models authoritatively state something that isn't true, or, in the case of an image
or a video, make something that just looks wrong? Well, they're impossible to resolve without new branches of maths, and while you might be able to reduce or mitigate them, their existence makes it hard for business-critical applications to truly rely on this kind of AI. I don't even know if I'd call it AI. But regardless, we go forward, and even tech's most dominant players can't seem to turn generative AI into any kind of real business line.
The Information reported in early September that customers of Microsoft's 365 suite are barely adopting its AI-powered Copilot products, with somewhere between zero point one percent and one percent of the four hundred and forty million people who pay for Microsoft 365, which costs about thirty to fifty dollars a person, by the way, willing to pay for AI. And just to be clear, I muddled that a little: it's thirty to fifty bucks per
person, per head, to add this stuff. I'll get into it in a minute. One firm, according to The Information, was testing the AI features and was quoted as saying that most people don't find it that valuable right now, and others are saying that many businesses haven't seen breakthroughs in productivity or other benefits, and that they're not sure that they will.
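To put that adoption range in context, here's a quick back-of-envelope calculation. The seat count, the price, and the adoption range come from the reporting above; the arithmetic and the variable names are mine, not Microsoft's or The Information's:

```python
# Back-of-envelope: what 0.1%-1% adoption of Microsoft 365 Copilot implies.
# Inputs come from The Information's reporting as discussed above; the
# arithmetic and variable names are mine, for illustration only.
paying_users = 440_000_000          # Microsoft 365 paid seats
copilot_price_per_month = 30        # USD per seat per month (entry price)

for adoption in (0.001, 0.01):      # 0.1% and 1%
    seats = int(paying_users * adoption)
    monthly_revenue = seats * copilot_price_per_month
    print(f"{adoption:.1%} adoption -> {seats:,} seats, "
          f"~${monthly_revenue / 1e6:,.0f}M/month, "
          f"~${monthly_revenue * 12 / 1e9:,.2f}B/year")

# 0.1% adoption ->   440,000 seats, ~$13M/month, ~$0.16B/year
# 1.0% adoption -> 4,400,000 seats, ~$132M/month, ~$1.58B/year
```

Even at the top of that range, that's maybe a billion and a half a year, real money, but a rounding error against the capital expenditure we'll get to later. And hold on to that four-hundred-thousand-to-four-million-seat figure, because it lines up with a number that comes back near the end of this episode.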
that they will. In an internal presentation provided to be by a source, users of Microsoft SharePoint copilot complained that Microsoft chatbot kept getting questions wrong, sometimes failing to provide references even for correct answers, with another complaining that the copilot was and I quote using content not connected as a document resource to answer questions. And by the way, the whole point of share point is that it's your
data informing everything. I assume it was drawing from its training data or perhaps the internet anyway, genuinely not useful. And you'd think that with these new services that don't seem that useful, that are questionably useful, that Microsoft would be doing people a deal right wrong. How much is
Microsoft charging for these services? Thirty dollars a seat per person on top of what you are already paying, or as much as fifty dollars a month extra for specialist products like Copilot for Sales. Microsoft is effectively asking customers to double their spend, and by the way, that's with an annual commitment, for products that don't seem to
be that helpful. And really, that is kind of the state of generative AI: the literal leader in productivity and business software cannot seem to find a product that will make people more productive and that they will then pay for. And it's in part because the results are kind of mediocre, and also that the costs are so burdensome that there's
no way for Microsoft to avoid charging a premium. And really, if Microsoft needs to charge this much, it's either because Satya Nadella is really desperate to hit half a trillion dollars in revenue by twenty thirty, or the costs are too high to charge much less. Maybe it's a little bit of both. And this all only serves to shed further light on just the mediocrity of generative AI
and how limited large language models are. And all of this, by the way, is existentially threatening to OpenAI, because they've coasted to a one-hundred-and-fifty-seven-billion-dollar valuation almost entirely based on hype. That company has always tried to tell us that the future of AI will blow us away, that the next generation of large language models is imminent and is going to be incredible, and that artificial general intelligence, where machines can reason and
act beyond human capabilities, is just around the corner. And by the way, all of that is in part thanks to the media slurping it down and just assuming that they'll get it right. Until now, that's all they've really had to do. But I think we're finally
getting to the rubber meeting the road with this. I previously said one of the pale horses of the AI apocalypse is a big stupid magic trick becoming necessary, a product that someone shoves out the door in hopes it will impress people and keep them believing in the magical future.
And you'd think that they'd have something really good right now, because OpenAI just raised all this money and the practical applications are just obviously not there. Except, well, you know, no, no, no, no, this is OpenAI. They wouldn't make a big, stupid mistake, would they? I mean, one of the things I always tell clients of mine in PR is not to shove a product out the door before it's ready, and to also make sure it's
really obvious why people should pay for it. Otherwise you're just kind of launching something into the ether and hoping people will find a reason to sell it for you. And yeah, that's exactly what they did. It happened. On September twelfth, OpenAI launched o1, which had been code-named Strawberry, with all of the excitement of a
trip to the proctologist. Across a series of tweets, CEO Sam Altman described o1 as OpenAI's, quote, most capable and aligned models yet, then immediately conceded that o1 was still flawed, still limited, and that it, quote, still seems more impressive on first use than it does after you spend more time with it. Oh my god, he admitted it. He then promised it would deliver more accurate results when performing the kinds of activities where there's a definitive right answer,
like coding, maths, or answering science questions. One might think that he'd walk in with, I don't know, a product built on top of o1, or a use case, or a thing that would make the audience go, wow, I could build something with this. He didn't. I don't think he wants to try. I don't think he's had to try that hard. So far people have been slurping down his slop happily. This boy may not have any tricks left. But let's talk about how o1 works.
And I'm going to introduce you to a bunch of new concepts here, but I promise I won't get too deep into the weeds. And I really want you to know how these machines work. It's critical for critiquing these companies. And the big way they take advantage of you is that they claim all of this is black magic, that you could never possibly understand it. You absolutely can. And if you want their explanation, I'm going to have it
in the show notes. Okay. When presented with a problem, o1 breaks it down into individual steps that hopefully lead to a correct answer, in a process called chain of thought. Again, these things are not thinking. They're not thinking, but this is the term. It's also a little easier if you think of o1 as two
parts of one model. At each step, one part of the model applies something called reinforcement learning to the other one, which is the part actually outputting things, rewarding or punishing it based on the perceived correctness of its progress. And this is what is called reasoning, by the way, even though it really doesn't match human reasoning at all. Then, based on the rewards and punishments, it generates a final answer from this chain-of-thought consideration. This is different to how other large language models work, in the sense that the model is generating outputs, then actually looking back at them, then ignoring or approving what it thinks are good steps to get to an answer, rather than just generating one and saying, here's the answer.
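Now, to make that loop a bit more concrete, here's a tiny sketch in Python. And let me be loud about this: OpenAI has not published how o1 actually works, so this is not their code or their algorithm, just a toy version of the generate, score, keep-or-discard loop I've just described, with a deliberately fallible scorer standing in for the verifier part of the model:

```python
import random

# A toy, runnable sketch of "chain of thought with a verifier".
# NOTE: this is NOT OpenAI's implementation, which is unpublished. It's a
# deliberately tiny stand-in showing the generate/score/keep loop described
# above, including the key flaw: the verifier is itself fallible.

TARGET = 24  # toy "problem": reach 24 from 1 by repeatedly adding numbers

def generate_candidate_steps(total: int) -> list[int]:
    """Stand-in generator: propose a few possible next steps."""
    return [random.randint(1, 10) for _ in range(5)]

def score_step(total: int, step: int) -> float:
    """Stand-in verifier: reward steps that get closer to the target.
    It's noisy on purpose -- a model grading a model can also be wrong."""
    closeness = -abs(TARGET - (total + step))
    return closeness + random.gauss(0, 2)  # noise = a hallucinating verifier

def chain_of_thought(max_steps: int = 20) -> tuple[int, list[int]]:
    total, steps = 1, []
    for _ in range(max_steps):
        candidates = generate_candidate_steps(total)
        # Keep the step the verifier likes best; discard the rest.
        best = max(candidates, key=lambda s: score_step(total, s))
        total += best
        steps.append(best)  # every intermediate step gets generated (and billed)
        if total == TARGET:
            break
    return total, steps

final, steps = chain_of_thought()
print(f"answer: {final} (target {TARGET}) via {len(steps)} generated steps")
```

Run that a few times and you'll notice it sometimes lands on the wrong total anyway, because the thing doing the grading is just as capable of being wrong as the thing doing the generating. Hold that thought, it's about to matter for both accuracy and price.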
This may seem like a big breakthrough, or even another step towards artificial general intelligence. It isn't, and you can tell by the fact that OpenAI opted to release o1 as its own standalone product rather than something built into GPT. It's also telling that the examples demonstrated by OpenAI, like maths and science problems, are the ones where the answer can be known ahead of time and a solution is either correct or false, thus allowing the model to guide the chain of thought through each step towards that answer, rather than actually having to produce
something where there might not necessarily be one. OpenAI didn't show the o1 model trying to tackle complex problems, such as high-end mathematical equations or anything else where the solution isn't known in advance. By its own admission, OpenAI has heard reports that o1 is actually more prone to hallucinations than GPT-4o, and the model is less inclined to admit when it doesn't have the answer
to a question when compared to previous models. This is because, despite there being a model that checks the work of the model, the work-checking part of the model is still capable of hallucinations. It's kind of like a kid being taught something by a teacher who just occasionally gets things horribly wrong. That child, though they may
mostly get right answers, will learn bad things. Now, learning isn't really what's happening here, but the output at the end will be informed by a model that makes hallucinations. It's like, I don't know, you've got a town full of dogs. You get a bunch of baboons in to get rid of the dogs. The baboons succeed in getting rid of the dogs. Now you've just got a bunch of baboons, so you get in, I don't know, robots? Robots destroy the baboons. At this point, you've got robots. If the robots are autonomous,
they start taking over the town, so you need to find a bigger robot to take the town back from the robots. Now you've just got an escalating problem where things are only going to get worse. And if you work at OpenAI and that sounds accurate, please email me. Anyway, according to OpenAI, o1 also, thanks to this chain-of-thought process, feels more convincing to human users, because it provides more detailed answers, and thus people are
more inclined to trust the outputs even when they're completely wrong. Now, if you think I'm being overly hard on OpenAI, consider the ways in which the company has marketed o1. OpenAI described o1's reinforcement training as thinking and reasoning, when it's making guesses and then guessing on the correctness of those guesses at each step, where the end destination is often something that can be known in advance. Generative AI does not know anything. These are still probabilistic models.
This thing is not thinking at all. There is no reasoning. It's got a model reading a model, giving a model answers. It's a mess, and it's an insult to people, actual human beings who, when they think, are acting based on many, many complex factors: their experience, the knowledge they've accumulated over years of experiences, their brain chemistry, so on and so forth. While we may guess about the correctness of each thing we're guessing at, and
we may reason through a complex problem, all of this is based on something concrete. Even when we get something wrong, it's based on actual experience, versus training data and probabilistic models. This shit is not thinking at all, and by god is it expensive. Pricing for o1-preview, which is the first model, is fifteen dollars per million input tokens and sixty dollars per million output tokens. In essence, it's three times as expensive as their most expensive model, GPT-4o,
for input, and four times as expensive for output. And then there's a hidden cost. Data scientist Max Woolf reported that OpenAI's reasoning tokens, the output it uses to get you to the final answer, where it says, okay, I need to find the solution to this problem, so here are the thirty steps I've gone through, yeah, those are actually generated using the most expensive tokens, the output tokens. So the more it has to think, the
more expensive it gets. All of the things it generates to consider an answer are also charged for, which means the more complex the question, the more expensive it's going to be. Worse still, if you integrate this model, OpenAI does not show you its reasoning. All of that calculation happens in the background, and they still charge you for it. You just don't know how much; every o1 request is charged to you in an indeterminate way, and OpenAI claims that they can't show you the reasoning because of competitive reasons.
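Here's what that billing shape looks like, by the way. This is a sketch using the list prices I just gave you; the token counts are completely made up for illustration, because, again, OpenAI won't show you the real ones:

```python
# Sketch of o1-preview billing using the published list prices above.
# The token counts are invented for illustration; only the rates are real.
INPUT_RATE = 15 / 1_000_000    # $15 per million input tokens
OUTPUT_RATE = 60 / 1_000_000   # $60 per million output tokens

prompt_tokens = 500            # what you send
answer_tokens = 800            # the answer you actually see
reasoning_tokens = 8_000       # hidden chain of thought, billed as output

visible_cost = prompt_tokens * INPUT_RATE + answer_tokens * OUTPUT_RATE
hidden_cost = reasoning_tokens * OUTPUT_RATE

print(f"cost of what you see:  ${visible_cost:.4f}")   # $0.0555
print(f"cost of hidden tokens: ${hidden_cost:.4f}")    # $0.4800
print(f"hidden share of total: {hidden_cost / (visible_cost + hidden_cost):.0%}")  # 90%
```

In that made-up example, roughly ninety percent of the bill is for tokens you will never see and can never audit.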
Ugh. Nasty company, really greasy. And they're still going to burn. Okay, okay, though: it's different from GPT-4o, and it's really expensive, but is it better? Of course it must be better, right? Right? It sounds great. It's thinking, right? It's reasoning, right? No. No, it's not. It's not. It's worse. This crap's worse. Let's talk about accuracy.
On Hacker News, the Reddit-style site owned by Sam Altman's former employer, Y Combinator, one person complained about o1 hallucinating libraries and functions when presented with a programming task, and making mistakes when asked questions where the answer isn't readily available on the Internet. On Twitter, Henrik Kniberg, a startup founder and former game developer, asked o1 to write a Python program that multiplied two numbers, then
calculated the expected output of said program. While o1 correctly wrote the code, although said code could have been more succinct, the actual result was wildly incorrect. Karthik Kannan, himself a founder of an AI company, tried a programming task on o1, where it also hallucinated a nonexistent command for the API he was using. Another person, Sasha Yanshin, tried to play a game of chess with o1, and it hallucinated an entire piece onto the board and
then it lost. And because I'm a little shit, I also tried asking o1 to list a number of states with A in the name. After contemplating for eighteen seconds, it provided the names of thirty-seven states, including Mississippi, you know, the classic state with an A in it. By the way, there are thirty-six states that have an A
in them, just in case you're curious. I then asked for a list of states with the letter W in the name, and it sat and thought for eleven seconds, and then included North Carolina and North Dakota. Great stuff. By the way, I also asked o1 to count the number of times the letter R appears in the word strawberry, which, remember, was the pre-release code name for this thing. It said two. I would have hard-coded that one, personally. You can't give me that kind of joy now.
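And for contrast, here's how a boring, deterministic program, the kind of thing we've been able to write for decades, handles all three questions. No eighteen seconds of contemplation required:

```python
# The deterministic versions of the questions o1 flubbed above.
US_STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_a = [s for s in US_STATES if "a" in s.lower()]
with_w = [s for s in US_STATES if "w" in s.lower()]

print(len(with_a))               # 36 -- not 37, and no Mississippi
print("Mississippi" in with_a)   # False
print(with_w)                    # no North Carolina or North Dakota here
print("strawberry".count("r"))   # 3 -- not 2
```

Eleven states with a W, if you're curious, and neither Carolina nor Dakota among them.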
OpenAI claims that o1 performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. Just not in geography, it seems, or basic elementary-level English, or maths, or programming. Also, I mean, sorry to the PhD listeners, but I've met a few PhD people who authoritatively state things that are completely untrue, that they know nothing about. This is not a broad-stroke thing,
but I get the sense that it's true. Anyway, this is, as you should know by now, the big stupid magic trick I predicted in the past. OpenAI is shoving Strawberry out the door as a means of proving to investors and the greater public that they've still got it, that the AI revolution is still here, that this thing is thinking. And what they actually have is a clunky, unexciting, and expensive model that doesn't really seem to have any measurable improvement. Okay, I'm sorry. It has a measurable improvement:
you can measure it on the weird, rigged tests they do for all of these things. And the thing is, at this point: even Apple, when they pulled together a new thing, even when they had the first Apple Watch and it was not obvious why you had to own it, they still had apps that were connected to it. They still had things you could point at and go, oh, that's cool, I've got Foursquare on this. Foursquare was on there at the time. Nevertheless,
they had apps to show. I just feel like OpenAI has this deep contempt for Silicon Valley and for the world at large. They don't even have it in them to be like, okay, we have this new model, and here is the new thing we built with it, and this thing does this, and now you will see how important this company is. Instead, we get this crap. We just get this very boring crap. And sure, I'm sure someone technical is going to email me and say, Ed, wow,
chain-of-thought reasoning, there are other companies that have been doing it already. Anthropic already had something like this. And even then, they didn't do shit with it. Where's the product, man? Where's the thing I'm meant to care about? Why should anybody give a shit about this? Well, Sam Altman is
likely trying to trump up the reasoning abilities of o1 so that people, you know, such as the people bankrolling him, will actually believe in it. What you actually get is a ten-to-twenty-second waiting time for an answer which may or may not be correct, with a bit more detail, which isn't even the reasoning happening, because OpenAI hides that bit. Nobody gives a shit about better answers anymore. They want generative AI to do something new, and I don't think OpenAI has any idea how to make that happen.
Sam Altman's limp, shitty attempts to anthropomorphize o1, by claiming it can think and use reasoning, are obvious attempts to suggest that this is somehow part of the path to AGI. But even the most staunch AI advocates, well, they can't seem to get excited about this. In fact, I'd kind of argue that o1 shows that OpenAI is desperate and out of ideas. Now, if you don't have any ideas, though, the following advertisements will be more than happy to fill your empty little brain with new ideas
that involve giving someone money or downloading something. And I must implore you to just accept everything that follows. I don't endorse any of it, because I don't know what it's going to be, but you must. And we're back. So I think now is a good time to get
back to the root of the generative AI problem. Generative AI is being sold to you on multiple lies: that it's AI, that it's actually artificial intelligence, that it's going to get better, that this will become artificial general intelligence, that this will become the thinking computer, and that all of this is inevitable. Putting aside terms like performance, as they're largely used to mean generating things more accurately or faster rather than
being good at anything, large language models have effectively plateaued. More powerful never seems to mean does more, and more powerful often means more expensive to run, or more expensive for you as the user to access, meaning that you've just made something that doesn't do more and costs more to run. If the combined forces of every venture capitalist and big tech hyperscaler have yet to come up with a meaningful use case that lots of people will
actually pay for, I just don't see one coming. Large language models, and yes, that's where all of these billions of dollars are going, are not going to magically sprout new capabilities as Big Tech and OpenAI burn another hundred and fifty billion dollars. And yes, that number isn't hyperbole. It's actually pretty close to the amount being plowed into these companies when you include things like investments
in companies like Anthropic and OpenAI, and the genuinely insane amount of capex from the likes of Google, Amazon, and Microsoft going into expanding data centers and buying GPUs. Nobody seems to be trying to make these things more efficient, or at the very least, nobody's succeeded in doing so, because I think if they had, they'd be shouting it
from the rooftops. And as an aside, by the way, the biggest sign that no one's actually making money from this is that no one's talking about how much money they're making. Microsoft and all of these companies, they love talking about making profit. They love doing that on earnings calls. They'd never shut up about it. Instead, whenever they're asked, they go, oh, hey, we'll do some things in the future, I need to take a phone call, and then they kind of
disappear from the room. Amy Hood, CFO of Microsoft, classic bullshit artist, dancing around it: yeah, oh, net revenue increase, checking
a watch. It's just really sad. It's really sad, because what we have here is a shared delusion, a shared delusion about a dead-end technology that runs on copyright theft, one that requires a continuous supply of capital to keep running, as it provides services that are at best inessential, sold to us dressed up as a kind of automation that does not exist and that it doesn't provide, costing billions and billions of dollars and continuing to do so
in perpetuity. Generative AI doesn't run on money or cloud credits so much as it does on faith. And the problem is that faith, like investor capital, is actually a finite resource. And that's where I bring you one of my biggest anxieties about this industry, because I think we're in the midst of a subprime AI crisis, where thousands of companies have integrated this stuff into their software at prices that are far from stable and even further from profitable
for the services providing them. This concern, by the way, isn't unfounded. At the latest OpenAI DevDay, they said that they'd slashed prices for their APIs by ninety-nine percent over the previous two years, largely, as TechCrunch's Maxwell Zeff theorized, due to price pressure from Meta and Google, both of whom want to take that API business for,
I assume, some reason. Anyway, almost every AI-powered startup that uses large language model features is based on some combination of GPT or Claude, so OpenAI's or Anthropic's models. These models are built by two companies that are deeply unprofitable. OpenAI, they could lose five billion this year. Anthropic is on course to lose two point seven billion this year on much less revenue. And they all have pricing designed to get more customers through the door rather than make
any kind of profit. OpenAI, as mentioned, is subsidized by Microsoft, both in the cloud credits it received in the twenty twenty three investment and in the preferential pricing Microsoft offers for its cloud services, about a quarter of the price of what everyone else pays. And these companies, well, OpenAI and Anthropic, their pricing is entirely dependent on the support of Big Tech: in the case of OpenAI, Microsoft's continued support; in the case of Anthropic, Amazon and
Google, both as investors and service providers. Based on how unprofitable these companies are, I'd hypothesize that if OpenAI or Anthropic charged prices closer to their actual costs, there'd be a ten-to-one-hundred-times increase in the price of API calls, though it's impossible to say how
much without the actual numbers of direct burn from these companies. However, let's consider for a moment the numbers reported by The Information, which estimate that OpenAI's server costs with Microsoft will be four billion dollars in twenty twenty four, at rates which, I'd add, are over two and a half times cheaper than what Microsoft charges others. It's about four dollars and something an hour for everyone else, and OpenAI pays a dollar and something
per GPU per hour. And then consider, after knowing that they're getting this massive discount, that OpenAI still loses over five billion dollars a year.
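Here's the back-of-envelope version of that, by the way. The four billion, the discount multiple, and the five-billion-dollar loss come from the reporting I just described; the arithmetic, which is crude on purpose, is mine:

```python
# Crude arithmetic on OpenAI's reported compute subsidy. The inputs come
# from The Information's reporting as described above; the model is mine
# and deliberately simplistic.
server_costs_2024 = 4.0e9       # USD, at Microsoft's discounted rate
discount_multiple = 2.5         # roughly $1-and-something vs $4-and-something per GPU-hour
annual_loss = 5.0e9             # USD, reported losses despite the discount

# What the same compute would cost at the rates everyone else pays:
undiscounted_compute = server_costs_2024 * discount_multiple
print(f"compute at list price: ~${undiscounted_compute / 1e9:.0f}B")  # ~$10B

# The extra burn OpenAI would absorb if the discount disappeared:
extra_cost = undiscounted_compute - server_costs_2024
print(f"extra burn without discount: ~${extra_cost / 1e9:.0f}B")      # ~$6B
print(f"implied annual loss: ~${(annual_loss + extra_cost) / 1e9:.0f}B")  # ~$11B
```

So strip away Microsoft's discount and, on this very crude maths, you're staring at something like an eleven-billion-dollar annual hole, before you even try to price the API at a level that covers it.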
OpenAI is more than likely charging only a small percentage of what it actually costs to run its models, and can only continue to do so if it's able to continually raise more venture funding than has ever been raised, ever, and continue to receive preferential pricing from Microsoft, a company that recently mentioned that it considers OpenAI a competitor, and that has
complete access to its IP and research. While I can't say for certain, I would think it's reasonable to believe that Anthropic receives a similarly preferential pricing package from both Amazon Web Services and Google Cloud. Both of those companies, by the way, put billions into them. Assuming that Microsoft gave OpenAI ten billion dollars of cloud credits, and it spent four billion on server costs and, let's say, two to three billion dollars on training costs, both of which are
sure to increase with new models, OpenAI will either need more credits or will have to pay actual cash to Microsoft sometime in twenty twenty five. And Microsoft did participate in the latest round, by the way, but it's not obvious how much, and it was much less than last time, which was, I believe, ten billion, mostly in cloud credits. While it might be possible that Microsoft, Amazon, and Google extend their preferred pricing indefinitely, the question is whether these
transactions are profitable for them in any way. As we saw following Microsoft's most recent quarterly earnings, there's growing investor concern over how capex is being spent and the amount that's being required to build the infrastructure for generative AI, with many voicing skepticism about the potential profitability of the technology,
including Jim Covello of Goldman Sachs. And what we really don't know is how unprofitable generative AI is for the hyperscalers, because they bake those costs into other parts of their earnings. What we can know for sure, I imagine, is that if this stuff was in any way profitable, they'd be talking about it all the time. They would never shut up. This would be their new golden goose. And
they're not. In fact, the most concrete information we have about OpenAI's balance sheet comes from leaked reports, well-sourced reporters at places like The New York Times and The Information, and investor prospectuses that found a wider audience
than Altman perhaps would have liked. So, you may remember from a few months ago that the markets became a little skeptical of the generative AI boom, and Nvidia CEO Jensen Huang had no real answers about AI's return on investment on his latest earnings call, which led to a historic two-hundred-and-seventy-nine-billion-dollar drop in Nvidia's market cap in a single day. This, by the way,
was the largest rout in US market history. The total value lost is equivalent to nearly five Lehman Brothers at its peak value. They've recovered some of it, but nevertheless, that's what we in the business call a not-so-good.
At the beginning of August, Microsoft, Amazon, and Google all took a similar beating from the markets for their massive capital expenditures related to AI, and all three of them will face the wheel next quarter, in a couple of weeks in fact, if they can't show a significant increase in revenue from the combined hundred and fifty billion dollars or more in capex that they've put into new data centers
and Nvidia GPUs. What's important to remember here is that other than AI, Big Tech really doesn't have any other ideas. There are no more hypergrowth markets left, and as firms like Microsoft and Amazon begin to show signs of declining growth, so too grows their desperation to show the markets that they've still got it. Google, a company almost entirely sustained by multiple at-risk monopolies in search and advertising, also needs something new and sexy to wave in front of
the Street. Except none of this is working, because the products aren't that useful, and it appears most of this revenue comes from companies trying out AI and then realizing it wasn't worth it. And if you think back to what I was saying about OpenAI's cloud costs, they're making, what, eight hundred million to a billion on this? How much does Google make? Probably much less, considering the multiple stories about people not really caring about Gemini. But at
this point there are really two eventualities. One: Big Tech realizes that they've gotten in way too deep on this, and out of a deep fear of pissing off the Street, chooses to reduce capital expenditures related to AI. Or the second one: Big Tech, desperate to find a new growth hog, decides instead to cut costs to sustain their stupid fucking ideas, laying off workers and reallocating capital from other operations as a means of sustaining this death march to nowhere. It's unclear
which will happen. If Big Tech accepts that generative AI isn't the future, they don't really have anything else to wave at Wall Street, but they could do their own version of what Meta did in twenty twenty two, the Year of Efficiency thing, which involved reducing capital expenditures and laying off thousands of people while also promising to slow
down a little with investment. This, by the way, is the most likely path for Amazon and Google, who, while desperate to make Wall Street happy, still kind of have their profitable monopolies, for now at least. Nevertheless, there really needs to be some kind of revenue growth from AI in the next few quarters, and it has to be material.
It can't just be this vague thing about AI being a maturing market, or how annualized run rates have improved, and said material contribution will have to be magnitudes higher if capex has increased along with it. I just don't think it's going to be there. Whether it's Q4 twenty twenty four or Q1 twenty twenty five, or maybe
a little later, Wall Street's going to punish Big Tech for this, the sin of lust, and the punishment is going to be to savage these companies even more harshly than Nvidia, which, despite Jensen Huang's bluster and empty platitudes, is pretty much the only company that's actually making money on AI, and that's because you do need their chips to do all this. But I worry more than anything
that option two is more possible. I think these companies are really capable of committing to AI as the future, and their cultures are so disconnected from the creation of actual value, or, like, software, or solving problems that actual people face, that they'll willingly start laying people off if it means bankrolling these operations. I really, really worry about that.
By the way, the mass layoffs that could come from this will be horrifying, because otherwise it's just going to be feeding profit into this, and at this point they're feeding in pretty much all their profits. And all of this, by the way, could have been stopped if the media
had actually held the leaders of tech companies accountable. This narrative was sold through the same con as the previous hype cycles, and the media assumed that these companies would just work it out, like they did with crypto and the metaverse, despite the fact that it was blatantly obvious that they wouldn't work this out. You think I'm a doomer? Well, answer me this: what's the plan? What does generative
AI do next? If your answer is that they'll work it out, or that they have something behind the scenes that is incredible, you're an absolute mark. You're a participant in a marketing scheme. It's time to wake up. It is time to wake up to how stupid this is. And I'm sure some of you will say, oh, oh, you're going to look so stupid in six months. People were telling me that six months ago, and I still don't look stupid, other than in the ways I do, and
they're unrelated to the podcast. But let's get back to the real problem, and let's get back to the really worrying stuff, because I believe that at the very least, Microsoft will begin reducing costs in other areas of its business
as a means of sustaining the AI boom. In an email shared with me by a source, from earlier this year, Microsoft's senior leadership team requested, in a plan that was eventually scrapped, reducing power requirements from multiple areas within the company as a means of freeing up power for GPUs, including moving other services' compute to other countries as a
means of freeing up said capacity specifically for AI. On the Microsoft section of the anonymous social network Blind, where you're required to verify that you have a corporate email of the company in question, one Microsoft worker complained in mid-December twenty twenty three that AI was taking their money, saying that the cost of AI is so much that it's eating up pay raises, and that things will not
get better. In mid-July twenty twenty four, another shared their anxiety about how it was apparent to them that Microsoft had, and I quote, a borderline addiction to cutting costs in order to fund Nvidia's stock price with operational cash flows, and that doing so had, and I
quote, damaged Microsoft's culture deeply. Another added that they believe that Copilot is going to ruin Microsoft's FY twenty-five, referring of course to their financial year twenty twenty five, adding that the FY twenty-five Copilot focus is going to massively fail, and that they knew of big Copilot deals in their country that have less than twenty percent usage after almost a year of integration, adding that the cost is too much and
that Microsoft's huge AI investments are not going to be realized. While Blind is anonymous, it's kind of hard to ignore the fact that there are many, many posts that tell a tale of a kind of cultural cancer in Microsoft, with a disconnected senior leadership that only funds projects if they have AI tacked onto the side. Many posts lament Satya Nadella's word-salad approach and complain of a lack of bonuses or upward mobility, and of an organization focused
on chasing an AI boom that may not exist. And at the very least, there's a deep cultural sadness there, with the many posts I've seen oscillating between I don't like working at Microsoft and I don't know why we're putting so much into AI, and then someone replying with, get used to it, Satya doesn't give a shit. And it all feels so ridiculous, because there are so many signs
that these products don't have product-market fit. At the start of this episode, I mentioned an article from The Information about a lack of adoption of Microsoft's AI features. Buried within that one was a particularly worrying thought about the actual utilization of their data centers for this AI, and it said, and I quote: around March of this year, Microsoft had set aside enough server capacity in its data centers for 365 Copilot to handle daily users
of the AI system in the low millions, according to someone with direct knowledge of those plans. It couldn't be learned how much of that capacity was used at the time. End quote. Based on The Information's estimates elsewhere, Microsoft has somewhere between four hundred thousand and four million users of its Office Copilot features, meaning that there's a decent chance that Microsoft has built out capacity that isn't getting used. Now, one could argue that it's building with the belief that the
product category will grow. But here's another idea: what if it doesn't? Huh? Ah, what do you think? What if, and this is crazy, Microsoft, Google, and Amazon built out these massive data centers to capture demand that may never arrive? I realize I sound a little crazy saying this, but back in March I made the point that I could find no companies that had integrated generative AI in a way that truly benefited their bottom line. And just
under six months later, I'm still looking. The best that I can find is that big companies appear to have stapled AI onto existing products in the hope that it helps them shift them, something that does not seem to be working either. It doesn't work for Microsoft, it doesn't work for Box, it doesn't seem to be working anywhere, as I'm not sure any of these AI upgrades deliver
any kind of significant business value. Now, while there may be companies integrating AI that are driving some degree of spend on Microsoft Azure, Amazon Web Services, and Google Cloud, I don't know how much it is, considering what I said last episode about how OpenAI was only making about a billion dollars licensing out their models, and I hypothesize that most of this demand is driven by investor sentiment, because companies everywhere in the economy are right now being
pushed to invest in AI without really knowing if it will work, or whether it's useful, or whether their users will like it. Nevertheless, these companies have spent a great deal of time and money baking generative AI features into their products, and I think they're going to face one
of a few different scenarios. Scenario the first: after developing and launching these features, these companies are going to find customers don't want to pay for them, as Microsoft's finding with 365 Copilot, and if they can't find a way to make them pay for it now, they're going to be really hard-pressed once nobody's telling them to get in on AI. And then there's the second scenario:
after developing and launching these features, these companies can't find a way to get users to pay for them, or at least pay extra for them, which means that everyone is going to have to bake the same thing into their products. Everyone's going to have to do this, because none of these companies are able to function without copying their competitors, which will turn generative AI into a kind
of parasite. Now, just to broaden out what I mean here: I looked across most of the software-as-a-service industry in a previous newsletter, and most of them are doing much the same thing. It's document summarization, document search, generation of stuff, so emails and the like, and summarization, which can be of emails, can be of documents. For the most part, that's what everyone is doing. The problem is that everyone doing the same thing means that
no one can really make money off of it. And Jim Covello out of Goldman Sachs made the same worrying point, had the same thought as me, which probably makes him smarter than me. I shouldn't think about that too much. Anyway, I mentioned in the last episode the commoditization effect of these large language models, and I think there's going to be a further commoditization of these
features themselves. If everyone summarizes email, now you have to do it too, because otherwise the customer can go, there's another product with that feature, I'm going to pay for that one because it's got more stuff in it. Except the feature in question is more expensive to offer. It's very worrying, but in general, what I fear is a kind of cascade effect.
I believe that a lot of businesses right now are trying AI, and once those trials end, and Gartner predicts that thirty percent of generative AI projects will be abandoned after the proof of concept by the end of twenty twenty five, these companies are going to stop paying for the extra
features, or stop integrating generative AI into their products. If this happens, it will reduce the already kind of shitty revenue flowing to the hyperscalers providing cloud compute, or access to models, for generative AI, which in turn could create more price pressure on these companies as their already negative margins sour. At that point, OpenAI and Anthropic will almost certainly have to raise prices. And what's fun is, they're already not
making that much money from this. So we're in this weird situation where it isn't obvious which it's going to be. Is it that they're going to have to raise prices, or that no one wants to pay them, or some combination of both? It's also important to note that the hyperscalers are also terrified of pissing off Wall Street, and I
really do mean that one of them will eventually blink. And while they could theoretically do the layoffs and cost-cutting measures I've mentioned, these are short-term solutions that don't really work against burning billions, tens of billions, like more than fifty billion a year for each of them. How are you going to cut enough to
bankroll that? But in any case, putting aside the amount of money they're having to invest, it might be time to accept that there really isn't money here in Generative AI.
It might be time to stop and take stock of the fact that we're in the midst of what is our third delusional epoch, our third stupid idea that everyone claims is the future. But unlike cryptocurrency and the metaverse, everyone seems to have piled in on this one, and everyone's decided to burn as much money as humanly possible on this unsustainable, unreliable, unprofitable, environmentally destructive bullshit, sold to customers and businesses as artificial
intelligence that will automate everything, without ever having a path to do so. Because that's the thing: none of this is even AI. This is automation. It's generation, generation in different hats, and it burns the world around us to provide it. But you know, I don't think the following is going to burn the world. In fact, I think it could really make your life better. And I need you to directly and voraciously engage with the following advertisements. And we're back. So you might ask, why does this
keep happening? Why do we keep getting these stupid movements? Why did they tell us that cryptocurrency was the future? Why did they tell us the metaverse was the future? Why are they telling us that generative AI is the future, when none of these things, from the very beginning, looked like the future? There were signs with GPT, sure, like, oh cool, you can generate entire things in, like, a minute, wow, that's crazy. But past that point, past that moment of, oh, you can do that, I guess,
what was there? And why does this keep happening? It's the natural result of a tech industry that's become entirely focused on making each customer more valuable, rather than providing more value to the customer in exchange for, I don't know, money or attention. The products you're being sold today almost certainly try to wed you to a particular ecosystem, one owned by Microsoft, Apple, Amazon, or Google, as a consumer at least, and incrementally increase the burden of leaving
said ecosystem. Imagine trying to move all of your Subscribe and Save shit off of Amazon. Imagine trying, I mean, moving from iOS to Android. It's not that easy. And
that's by design. Everything is about further monetization, about increasing the dollar-per-head value of each customer, be it through keeping them doing stuff on the platform to show them more advertising, upselling them new features that are only kind of useful or that previously were free, or creating some new monopoly or oligopoly where only those with the massive war chests of Big Tech can really play. And very, very little of this is about delivering any kind of
real value or utility, or a thing that you, the customer, might like. Generative AI might not be super useful, but it's really easy to integrate into stuff and make new things happen, creating all sorts of new things that a company could theoretically charge for, both for a consumer and an enterprise customer. Sam Altman was smart enough to realize that the tech industry needed a new thing, a new technology that everybody could take a piece of and sell.
And while he might not really understand technology, Altman understands growth and the lust that the economy has for growth, and he's productized transformer-based architecture as something that everybody could sell, a magical tool that could plug into things and kind of connect to an ephemeral concept like AI. The problem is that the desperation to integrate generative AI everywhere has shone a pretty nasty light on how disconnected these companies are from actual consumer needs, or even from
running good companies. Like, really, I'm not even being facetious. I would genuinely like it if this stuff was useful. I like useful things. There would be ethical concerns about the copyright theft and such, but I would at least tip my hat to them if I could find something, anything, that I looked at and could say, wow, that's really useful in my daily life. I've got nothing, and I've really looked. You can email me, easy, that's ez
at betteroffline dot com, if you have one. But I've yet to be impressed by one of those emails, so please try harder. And the really worrying part is that other than AI, many of these companies don't seem to have any other new products. What else is there? What other things do they have to grow their companies? No, really, what do they have? The new iPhone? I bought the new iPhone. I'm a little pig, oink oink. I bought the iPhone. I bought the new one, and I've bought it every year.
I am that guy. I sell the old one, I buy the new one. This is the first year, I think, from the beginning, where I bought it and was like, why did I do that, man? What does this do? And that's because I think we're hitting a wall. This is the rot-com bubble I talked about a few months ago. They've not got anything. There's nothing. They've got nothing. And that really is the problem, because when everything falls, when everyone realizes, when the markets look at tech and say, wow,
you're not going to grow forever, you're not going to come up with a new whizzbang that you can market to everyone and make billions in returns, you're not going to do that. No, they're not going to react well at all, because when you take away the massive growth that tech has, you have a very annoying industry full of annoying young people that will piss off the markets. They will piss off those with the money. The tech industry has a terrible rep with the government and a
terrible rep with society. The reevaluation of these companies will be merciless, and there are very few friends left, and I think there will be a cascade down to the other companies in the tech space, just in the same way that it will hit workers who will get laid off when all of this falls apart, despite none of these people doing anything wrong, other than the people up top having no creativity, no real innovation, and no understanding
of real people's problems. I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at this insanely discounted rate, heavily centralized and subsidized by big tech companies like Microsoft, Amazon, and Google. At some point, this incredibly toxic burn rate is going to burn through generative AI, and it's going to
catch up with them. And when the price increases come, or companies realize that these features are not that useful and they see the lack of user adoption, they're going to start getting nervous. Right now we're in the piss-take section of the economy. Right now we're seeing the egregious stuff, like Salesforce charging two dollars a conversation for their new Agentforce product. But eventually the markets
will catch up, because the money isn't there. And when these prices go up, I'm not confident there will be much of a generative AI industry left. And that's assuming that these companies still have enough money. It's assuming that OpenAI is able to raise another six-and-a-half-billion-dollar round in the next six to eight months. How long can they do that for? How many times? How many years are VCs willing to prop up
OpenAI? How many years is Microsoft ready to burn capital to make, what, a billion or two on generative AI? This is embarrassing. It's bad business and it's bad product. Satya Nadella, Sundar Pichai, Sam Altman, the whole lot of them, they should be absolutely fucking ashamed of themselves. They're an insult to innovation, an insult to Silicon Valley, and an insult to their consumers.
And what happens, you tell me this, when the tech industry, the entire tech industry, relies on the success of a kind of software that only loses money and doesn't create much value when it does so? And what happens when the heat gets too hot and these products become impossible to reconcile with, and everyone realizes that none of these companies have anything else to sell? I really don't know.
I'm scared. I'm not trying to do FUD, to do a FUD, fear, uncertainty, and doubt, I'm told to spell these things out, but I am worried, because really, the only other alternative to what I'm saying is that they magically make this profitable, that they just keep doing this until it goes into the green, despite no one appearing to know how, despite there not being a path there. How willing are you to believe them after they've lied to you for so many years?
How ridiculous is this, really? How ridiculous have you been thinking this is? How much can you let them coast on "they'll work it out"? Because they haven't. They haven't worked it out for a while. It's been over a decade since the last significant consumer tech innovation. There's been a ton on the chip side, but what is there for you
and I? Not really much. And I don't think there's much in this industry either, and I worry that the tech industry is building towards a really grotesque reckoning, with a total lack of creativity, enabled by an economy that rewards growth over innovation, monopolization over loyalty, and management over those who actually build things. The people in control of the tech industry are not the ones who built it. These people are management consultants. Even Sam Altman is one of them.
These people are superficially interesting and superficially smart, just like ChatGPT. And I worry, I worry so much. So promise me, dear listener, that the next time someone tells you they'll work it out, that this stuff is the future, tell them some of this shit. Send them the podcast, or just yell at them at the top of your voice. You don't even need to use words. But I'm so grateful to have you as listeners. Thank you for listening to Better Offline. The editor and composer of the Better Offline
theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, that's M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out our subreddit. Thank you so much
for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.