
Sam Altman Is Dangerous To Silicon Valley

Jun 14, 2024 · 55 min

Episode description

In this episode, Ed Zitron walks through how Sam Altman's ridiculous promises about the future of artificial intelligence could be ruinous for Silicon Valley, and speaks with Bloomberg's Ellen Huet about how Sam Altman - a non-technical founder with little business success - accumulated so much power.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Cool Zone Media.

Speaker 2

Hello and welcome to Better Offline. I'm your host, Ed Zitron. As I've discussed in the last episode, Sam Altman has spent more than a decade accumulating power and wealth in Silicon Valley without ever having to actually build anything, using a network of tech industry all-stars like LinkedIn co-founder and investor Reid Hoffman and Airbnb CEO Brian Chesky to insulate himself from responsibility and accountability.

Yet things are beginning to fall apart, as years of half-baked ideas and terrible, terrible product decisions have kind of made society sour on the tech industry, and the last month has been particularly difficult for Sam, starting with the chaos caused by OpenAI blatantly mimicking Scarlett Johansson's voice for the new version of ChatGPT, followed by the resignation of researchers who claimed that OpenAI prioritized, and I quote, shiny products over AI safety, after the dissolution of OpenAI's safety team. I know, it's just, it's almost cliche.

Shortly thereafter, former OpenAI board member Helen Toner revealed that Sam Altman was fired from OpenAI because of a regular pattern of deception, one where Altman would give inaccurate info about the company's safety processes on multiple occasions, and his deceit was so severe that the OpenAI board only found out about the launch of ChatGPT, which, by the way, is OpenAI's first product that really made money, arguably the biggest product in tech.

Do you want to know how they found out about it? Well, they found out when they were browsing Twitter. They found out then, not from the CEO of OpenAI, the company of which they were the board. Very weird. Toner also noted that Altman was an aggressive political player with the board, correctly, by the way, worrying, and I quote again, that if Sam Altman had any inkling that the board might do something that went against him, he'd pull out all the stops, do everything in his power to undermine the board and to prevent them from even getting to the point of being able to fire him.

As a reminder, by the way, the board succeeded in firing Sam Altman in November last year, but not for long, with Altman returning as CEO a few days later, kicking Helen Toner off of the board along with Ilya Sutskever, a technical co-founder that Altman manipulated long enough to build ChatGPT and abandoned the moment that he chose to complain. Sutskever, by the way, has resigned now. He's also one of the biggest technical minds there, so how is OpenAI going to continue?

Anyway, last week, a group of insiders at various AI companies published an open letter asking their overlords, the heads of these companies, for the right to warn about advanced artificial intelligence, in a genuinely impressive monument to the bullshit machine that Sam Altman has created. While there are genuine safety concerns with AI, there really are, there are many of them to consider, these people are desperately afraid of the computer coming alive and killing them, when they should fear the non-technical asshole manipulator getting rich making egregious promises about what AI can do. AI researchers, you have to live up to Sam Altman's promises. Sam Altman doesn't. This is not your friend. The problem is not the boogeyman computer coming alive. That's not happening, man. What's happening is this guy is leading your industry to ruin, and the bigger concern they should have should be about what Leopold Aschenbrenner, a former safety researcher at OpenAI, had to say on the Dwarkesh Patel podcast, where he claimed that the security processes of OpenAI were, and I quote, egregiously insufficient, and that the priorities of the company were focused on growth over stability or security.

These people are afraid of OpenAI potentially creating a computer that can think for itself, that will come and kill them, at a time where they should be far more concerned about the manipulative con artist that's running OpenAI. Sam Altman is dangerous to artificial intelligence, not because he's building artificial general intelligence, which is a kind of AI that meets or surpasses human cognitive capabilities, by the way, kind of like Data from Star Trek. They're afraid of that happening when they should be afraid of Altman's focus. What does Sam Altman care about? Because the only thing I can find reading about what Sam Altman cares about is Sam bloody Altman. And right now, the progress attached to Sam Altman actually isn't looking that great.

OpenAI's growth is stalling, with Alex Kantrowitz reporting that user growth has effectively come to a halt, based on a recent release claiming that ChatGPT had one hundred million users a couple of weeks ago, which is, by the way, the exact same number that the company claimed ChatGPT had in November twenty twenty three. ChatGPT is also a goddamn expensive product to operate, with the company burning through capital at this insane rate. It's definitely more than seven hundred thousand dollars a day. It's got to be in the millions, if not more. It's insane. And while OpenAI is aggressively monetizing ChatGPT, both to customers and to businesses, it's so obviously far from crossing the break-even rubicon. They keep leaking, and they'll claim, oh, I didn't put that out there. They keep telling people, oh, it's making billions in revenue, but they never say profit. And eventually someone's going to turn to them and say, hey, man, you can't just do this for free, or for negative.

At some point, Satya Nadella is going to call Sam Altman and say, Sammy, Sammy, it's time, Sammy, it's got to be a real business. I assume he calls him that because the supernatural. But as things get desperate, Sam Altman is going to use the only approach he really has, sheer force of will. He's going to push OpenAI to grow and sell into as many industries as possible. And he's a specious hype man. He's going to be selling to other specious hype men. The Jim Cramers of the world are going to eat it up. And they're all, all of them, the Marc Benioffs, the Satya Nadellas, the Sundar Pichais, they're all desperate to connect themselves with the future and with generative AI, and those that he's selling to, the companies brokering deals, yes, even Apple, they're desperate to connect their companies to another company which is building a bubble, a bubble inflated by Sam Altman. And I'd argue that this is exceedingly dangerous for Silicon Valley and for the tech industry writ large, as executives that have become disconnected from the process of creating software and hardware follow yet another non-technical founder hawking unprofitable, unsustainable, and hallucination-prone software.

It's just very frustrating. If there was a very technical mind at these companies, they might walk away. And I'm not going to give Tim Cook much credit, but looking into it, I can't find any evidence that Apple is buying a bunch of GPUs, the things that you use to power these generative AI models. I found some researchers and analysts suggesting that they would buy a lot. But now OpenAI is doing a deal with Apple to power the next iOS, and it's interesting. It is interesting that Apple isn't doing this themselves, Apple, a company with hundreds of billions of dollars in the bank, that I believe pretty much prints money. That alone makes me think it's a bubble. Now, I might look like an asshole if it comes out they have. But also, why are they subcontracting this to OpenAI when they could build it themselves, as Apple has always done? Very strange. It's all so peculiar.

But I wanted to get a little bit deeper into the Sam Altman story. And as I discussed last episode, Ellen Huet of Bloomberg has been doing this excellent reporting on the man, and joins me today to talk about the subject of her recent podcast, Sam Altman's rise to power. So tell me a little bit about the show you're working on.

Speaker 1

The show is the new season of Foundering, which is a serialized podcast from Bloomberg Technology. So this is season five, and in every season we've told one story of a high-stakes drama in Silicon Valley. I was also the host of season one, which came out several years ago and was about WeWork, and we've done other companies since then. Season five is about OpenAI and Sam Altman, and I think we really tried to cover the arc

of the company's creation and where it is now. But in doing so, we really tried to do a character study of Sam Altman, like he's a very important person in the tech industry right now with a lot of power, and we really wanted to ask ourselves a question and to help listeners ask themselves a question. Should we trust him?

Should we trust this person who is currently in a position of a lot of influence, and about whom there have been very serious, you know, allegations and questions raised about, to put it in the words of the OpenAI board, his not consistently candid behavior? And I think, you know, my hope is that we give listeners a chance to hear kind of the whole story and its broader context. You know, when there's news that's happening, it can happen so quickly it's hard to take a step back. And I think what the show really does is it collects a lot of information in one place, and we also have lots of new information that you won't hear anywhere else, and interviews with people who, you know, have worked with Sam, who knew him when he was younger. We have an interview with Sam's sister Annie, from whom he is estranged, and there's a lot of material in there that I think tries to get closer to this answer of, like, what should we make of this person? How should we think about checks and balances of power when we have these companies that are, by all appearances, gathering a lot of power, and therefore the people who are running them have a lot of power as well?

So it's a five-episode arc, a five-episode season, and the first three episodes are out now to the general public, and the last two will come out on subsequent Thursdays. And if you would like to binge the whole season right away, the episodes are available early to Bloomberg dot com subscribers.

Speaker 2

So you've just started this series about Sam Altman and his upbringing, and the growth of OpenAI and Loopt and everything. Who are the people that have helped him get to where he is today?

Speaker 1

So the making of Sam Altman is a really interesting part of the overall story of Sam Altman. You know, many people know him as the CEO of OpenAI because that's the role he's been in as he has risen to prominence, you know, beyond Silicon Valley. Like, I think for many years he was well known in Silicon Valley, but now he's kind of a household name, and so it's important to understand where Sam came from.

You know, he's been in the valley since two thousand and five. I think two thousand and four, two thousand and five is when he started college at Stanford. Then he dropped out, and then he joined Y Combinator, the now famous startup accelerator. But he was actually part of the first cohort of founders ever in YC, along with, right, yes, the co-founders of Twitch and of Reddit. And so Emmett Shear, you know, for those who know Emmett Shear, has a, like, very short, seventy-two-hour cameo in the OpenAI Sam-being-fired saga. But yes, Emmett and Sam were both in the same YC batch.

So when we think about Sam's early career in Silicon Valley, I think what's important to know is that he rose very quickly, in part because he was very successful in making these strategic, advantageous friendships and connections with already established people in the valley. The most important one is Paul Graham, who is, you know, one of the founders of Y Combinator, and who basically, like, immediately took Sam under his wing when Sam joined this first batch of YC.

And yeah, Paul's a really important mentor to Sam. He's kind of the first person who really sees in Sam this, you know, ambition, this hunger for power, this, like, drive to really build bigger and bigger companies. You know, they met when Sam Altman was nineteen, so Paul sees him as a teenager and sees this future potential. And so yes, not only did Paul become a mentor to him, he sort of helped build Sam's profile over those early years, because, you know, Paul Graham is very famous for writing these essays about how to build startups and how to build the best startups, and if you're at all interested in building startups, you've read many of them. They're kind of, like, almost like a startup bible. And in many of them he extols the virtues of Sam Altman. He talks about Sam's ambition, he talks about Sam's cunning, his ability to, like, you know, make deals and, like, think big and never.

Speaker 2

Actually think Sam Altman is done, is what I've found.

Speaker 1

Yeah, there are some interesting ones. You know, I've read many of the things Paul has written about Sam. Some of my favorite ones include Paul writing that within three minutes of meeting Sam, this was when Sam was nineteen, Paul thought to himself, ah, so this is what a young Bill Gates is like, or, you know, this is what Bill Gates was like at nineteen, I think is the exact quote. So, you know, he really built him up in that way. And I do think Paul had, like, unique insight into Sam, like they were close. They in many ways I'm sure still are. But it is this interesting role where, you know, Paul met Sam when he really didn't have much to his name, and he really elevated him early on through his writings as this, like, startup founder to emulate, right, that other founders should be emulating Sam. And then of course, as Sam progresses in the valley, he also starts to write these, like, startup wisdom essays in a similar style to Paul.

And then, of course, the most important thing that happens is in twenty fourteen, when Paul decides he no longer wants to run Y Combinator, which at this point is a much bigger vehicle than it was when Sam first started. It's no longer just a few startups, totally. It has produced Stripe, Dropbox, Airbnb. This is a big job, right, like running Y Combinator. And when Paul wants to hand it off to someone, you know, he has said that the only person he considered giving this to was Sam. So in twenty fourteen, when Sam is, I believe, twenty-eight years old, he becomes the president of Y Combinator. And this is, you know, he had started a startup, it didn't really work, he sold it and was starting to dabble in angel investing, and at that point Paul really elevated Sam to this new position of power. And then he ran YC for a while and then started OpenAI.

And in starting OpenAI, he also leveraged these, like, very useful connections with particularly powerful people who could help him, such as Elon Musk, who was able to give the vast majority of the pledged funding to start OpenAI. Later, when Elon Musk splits from OpenAI, Sam makes his very powerful partnership with Satya Nadella to help fund OpenAI. Another important partnership that Sam made, you know, much earlier on was his friendship with Peter Thiel, and one of the things Peter Thiel does is also, you know, give him millions of dollars to start investing. This is, like, before Sam takes over at YC. Yeah. And, you know, another thing that Paul Graham did that really helped Sam: Paul had the opportunity to be one of the first investors in Stripe. He was offered the chance to invest thirty thousand dollars for four percent of Stripe, which, of course, now that Stripe is enormous, we all know how valuable that was, and Paul split it with Sam. He was like, oh, I might as well share this with Sam. So Sam has said that that fifteen thousand dollars for two percent of Stripe has been, you know, one of his best performing angel investments ever.

Speaker 2

That was something. The question is always where he got fifteen grand from. He was still working on Loopt at the time. It's funny how that worked out. Anyway.

Speaker 1

Yeah, my guess is, I don't actually know this, but my guess is fifteen grand was not hard for him to pull up. And it's one of those things where it really is, you know, access and relationships are the sorts of things that can build a career and can lead to great wealth, right. Like, Sam is now, you know, by our own internal accounts and by other lists, a billionaire. And this money comes, you know, not from OpenAI, but from these angel investments that he made early on that have been enormously successful.

Speaker 2

So you called him in one of the titles, the Most Silicon Valley Man Alive. Is this what you're getting at, this kind of power player mentality?

Speaker 1

Yeah, I think it reflects a few things. One, that even though he's, you know, in his late thirties, he's been a player in Silicon Valley for such a long time, you know, close to two decades. And also that he's just someone who is extremely well connected. So even before he took over Y Combinator, which I think you could argue is, like, kind of king of the startup world in some sense, like Y Combinator is, you know, the top, totally, even before he took over Y Combinator, I think he was extremely well connected. He's very social, he's very helpful, he's very efficient. Like, many people have told me stories in which, you know, calling Sam and talking for five minutes has solved their problem, because he knows exactly the right person to call to fix it, or, you know, he's really good at making deals. I think it's just clear he's extremely well integrated into this world and has very successfully moved up the Silicon Valley status ladder to the point where he is now, which is, you know, he's the CEO of one of the arguably hottest companies in the valley right now. And I think that that's not luck, right. Like, he's not like a nobody who came up with an idea. It's like he has the connections and has parlayed his connections into power to bring him to the point he is now.

Speaker 2

So in your experience talking to people about Sam Altman, how technical is he, do you think? What have you heard? Because you say there he wasn't lucky, but he also does not appear to have successfully run a business, because Loopt shut down, two people, well, two executives, tried to get him fired from there, he got fired from Y Combinator, which did very well, but at the same time, YC was basically a conveyor belt for money at one point, not so much recently. Yeah, it just feels weird that this completely non-technical, semi-non-technical guy has ascended so far.

Speaker 1

My sense is that's maybe not the most fair description. Like, I think Sam is incredibly smart, and people say this a lot, and, you know, I believe them. I think his special skill, you know, he obviously knows how to, like, he's an engineer, he has training. I'm sure

he can build a lot of stuff. It seems like his comparative advantage, his special skill is relationships, deal making, figuring out who exactly is the right person to help him in whatever he's really trying to get done, and figuring out the best way to get something to happen.

You know, one of the people I spoke to is someone who knows Sam from when he was younger and knows him personally, and said that his superpower is figuring out who's in charge or figuring out who is in the best position to help him, and then charming them so that they help him with whatever goal he's trying

to get done. And I think that, like, yeah, one could argue that that's actually a really good skill set if you want to build a very big company, which, you know, I think at this current moment he has, right. Like, with OpenAI, really, you know, there's a lot that you can say about whether they're upholding their original mission or not, that's up for debate, but I think that they've obviously been commercially successful so far, so.

Speaker 2

It feels like Silicon Valley on some level. And just to give some thoughts here, within the two episodes I'm doing, the pattern I've seen with Sam Altman is that everyone seems to want him to win, and there's almost a degree of they will make it so. Have you seen anyone who's really a detractor, or anyone who's not pro Sam Altman? Because it's interesting how few people in tech are.

Speaker 1

Well, there is. I won't get too into it, because this is in some of the future episodes which will drop in future weeks, but, you know, I would say, in some of the conversations I've had off the record with people about Sam, my general impressions are that people often do find him impressive in terms of what he has gotten done, you know, the size and scale of his ambitions and the way that he has generally been able to make that happen. I think there are also a lot of people who, you know, are willing to privately share some gripes that they might have about him. Also, you know, in recent weeks we've seen people be a lot more public about some of those gripes. We have Helen Toner, a former board member at OpenAI who voted to remove Sam last November, saying publicly in the last few weeks that Sam lied to her and the other former board members, that his, you know, misdirection made them feel like they couldn't do their jobs. And she has also said that people were intimidated to the point where they did not feel comfortable speaking more publicly about negative experiences they'd had with Sam, that they are afraid to speak more publicly about, you know, times that he has not been honest with them, or, you know, in which they've had challenging experiences.

And that has also been reflected in some of the private conversations I've had, in which people, you know, they might have complaints or they might have had, like, challenging situations with him, and I think they just feel like the risk calculus is not worth it to come out and say something like that. But, you know, there have been bits and pieces where people have come out and said things. Another thing that the board members have said was that Sam had been deceptive and manipulative. And there was also, I think back in November, a former OpenAI employee who had tweeted something publicly, you know, saying that Sam had lied to him on occasion, even though he had also always been nice to him, which I think is a very interesting combination of.

Speaker 2

That's Silicon Valley though. I'm afraid of dealing with them, but they were so nice to me.

Speaker 1

And yeah, of course, you know, that person has not elaborated more publicly about what they meant. But I think, you know, this is why people are asking themselves these questions, which is, like, the more that we hear about what the board was thinking before they decided to fire Sam, the more people are wondering about what are the patterns of behavior that he shows that, you know, led to the board trying to make this drastic move.

Speaker 2

Yeah, that's actually an interesting point. So when Sam Altman was fired from OpenAI, there was this very strange reaction from Silicon Valley, including some in the media, where it was almost like The Hunger Games, everyone doing the symbol thing, where everyone's like, oh, we've got to put Sam Altman back in. Isn't it kind of strange? We still don't know why he was actually fired, though. I mean, Helen Toner has elaborated, but I've never seen anything like this. Have you seen anything like this in your career?

Speaker 1

I think that it has been surprising that there has not been more of a clear answer. I think, you know, as time has gone on, we have heard a little bit more. Like, I think Helen Toner has, you know, to her credit, tried to give more information in recent weeks about what happened. I think, you know, people were obviously asking this question six months ago, and so I think there's been a little bit of a delay in trying to get this answer, and I wonder if maybe there just isn't, like, a very neat answer to it, and then in that absence we get this kind of more murky, multifaceted, multi-voiced answer. But yes, I agree that it is sort of surprising that there hasn't been more clarification on what exactly happened, or a little bit more granular detail about what led up to it.

Speaker 2

So, on to AI hype in general. I said that a bit weird, I'll keep going. Why do you think there's such a gulf between what Sam Altman says and what ChatGPT can actually do?

Speaker 1

What Sam Altman says? What are you talking about specifically?

Speaker 2

As in he says it will be a super smart company. Yeah, yeah, yeah, that it'll be all of these things.

Speaker 1

Well, this is something that we get into in episode three, which is a personal interest of mine, which is kind of the psychology of the AI industry right now. And, you know, what I find so interesting about this, and what we try to delve into in episode three and kind of throughout the series, is these kind of, like, extreme projections about AI. In the industry you see both positive ones and negative ones, and I think, you know, the negative ones, that's what looks like AI doomerism, AI existential risk, sometimes called AI safety depending on your point of view. But, you know, it's these projections that, you know, superintelligence might very quickly and very soon learn to self-improve in a way that allows it to rapidly outstrip our control and our capabilities, and could lead to the extinction of humanity. There are so many interesting things to say about the psychology of believing that our human race might either be wiped out or incredibly changed within our lifetimes, and we get into that in episode three.

I think I really wanted to get into the psychology of someone who believes that AI doom is just around the corner. And so we talked to someone who sort of became convinced of this belief soon after the twenty sixteen AlphaGo matches, in which the Go-playing AI beat, you know, the world champion in Go. And he talks about, yeah, deciding not to make a retirement account, because he was like, what is the point? By the time I reach retirement age, either the world will be dramatically different and money won't matter, or we'll all be dead. And I think that even though some people might scoff at that, that's a real belief, that people believe these extreme possible scenarios are in our near future. And on the other hand, we also see extreme projections in a positive direction. You know, this idea that AI is going to unlock a whole new era of human flourishing, that we might expand beyond our planet, that we might be able to, what's the word, abundance, right, exactly.

You know, one of the things we do, I believe in episode three, is do a little bit of a supercut of Sam Altman talking about abundance. It's pretty clear that this is a way that he likes to frame it: our AI future is going to be this future in which everyone has plenty, right, everyone has, you know, access to intelligence, abundant energy, abundant access to superintelligence that can help us live kind of our best lives and beyond our wildest dreams. Right. And, you know, obviously Silicon Valley is a place where people like to make grandiose statements. But this is beyond that, right. This is not just, like, you know, we joked about WeWork. WeWork's mission statement was to elevate the world's consciousness. Like, well, galaxies of human flourishing for eons beyond us, that is on another scale, right. Like, we're talking about something that is sort of at an unprecedented level of extreme rhetoric. And I think that's really interesting. I think it is a very powerful motivator, both in, you know, the doomer sense and also in the abundance sense. People believing that what they're working on is the most important technological leap forward for humanity, talk about a motivating reason to work on this technology, right, talk about a way to feel powerful, feel like you're making a huge difference. I think that's a really key part of what's driving a lot of work in AI right now.

Speaker 2

Driving a lot of work, sure, but with Altman himself, there is this gulf. It is a million-mile gulf between the things he says and what ChatGPT is, even on the most basic level, capable of doing and will be capable of. And it just feels like, it almost feels like he's become the propagandist for the tech industry. And it's very strange to me how far that distance is. Because you've got the AI doomers and the AI optimists, I guess you'd call them. But Altman doesn't even feel like he's in with either. He's just kind of, he'll say one day that he doesn't think it's a creature, the next he'll say it's going to kill us. It all just feels like a PR campaign, but for nothing.

Speaker 1

Yeah, it has been interesting to try to answer the question, you know, one of the questions we tried to answer in the podcast, which is, does Sam actually believe this? Because, as you mentioned, there are some early clips of him, and when I say early, I mean around the time of founding OpenAI, twenty fifteen or so, there are some clips of him talking about, you know, saying somewhat jokingly that AI might kill us all. But there's also this, uh, you know, very famous blog post that he wrote in twenty fifteen in which he says that, you know, basically superintelligence is one of the most serious risks to humanity, full stop. And so it's clear that at some point in his life he believed kind of what we might now call a more doomer-y outlook. But as time has gone on, he has, you know, offered views that are a little bit more measured and more positive. You know, in his big media tour of twenty twenty three, he tended to talk about how, his projection was that AI would radically transform society, but that it would be net good, right, that, like, overall we would be glad that this happened, and that it would improve lives, even if in the short term, or for some people, it might prove to bring a lot of challenges as well.

And so, you know, I think one of the interesting things about him is it is a little hard to pin down exactly what he thinks. I think you're right that I wouldn't consider him, like, a gung-ho effective accelerationist. I would not consider him a doomer. He is, like, somewhere in this large gulf in between. But I think he's also smart enough to know that making grandiose projections about what AI could bring is a compelling story, right. Like, it is a story that he can help sell by being, like, a spokesman for it. And often that is the role of a CEO, to be a really good storyteller, to bring the pitch of the company to the public, to investors, to potential employees, to customers, to try to sell them on this vision of the future.

And I do think Sam is good at that. There is an interesting tidbit in episode three in which we interview a fiction writer who was actually hired on contract by OpenAI to write, like, a novella about AI futures and things like that. And yeah, he talks a little bit about, you know, the novella is not, I think, in active use within OpenAI, but they did at some point see value in commissioning it. And I think, you know, something that the author, Patrick House, explains to us is, you know, that OpenAI, just like many other startups, is really motivated by story, right, and that Sam Altman is inspired by fiction. You know, he's inspired by certain kinds of sci-fi. I think this is not unique to Sam. Many founders in Silicon Valley, you know, Elon Musk has talked about this as well, are driven to create things in part because of what they read about when they were younger, you know, these dreams of the future. And so it's just interesting to get his perspective on how motivating a story can be, and how motivating this compelling story of, like, oh, we're building something that's going to change the course of human history. Like, you just couldn't ask for a more powerful motivating force.

Speaker 2

So as Altman accumulates power, and as he kind of ascends to the top of OpenAI, do you think he's done there? Do you think there's going to be another thing he starts? Because it feels like you've discussed, like, UBI and all these other things. Do you think he has grander ideas that he wants to pursue?

Speaker 1

Well, obviously I can't speak to what's inside Sam's head, I don't know the man's mind, but I mean, past indicators would suggest yes. Like, I think he has proven pretty consistently that he's someone who, you know, as much as he might focus on one project with a lot of effort, he is cooking things on the side. Like, this is a man, this is gonna be an extended metaphor, but this is a man working at a stove that has, like, six burners, not one. And, you know, we already know that.

Speaker 2

What can I say, sorry, he's got a big house.

Speaker 1

He's got multiple houses. You know, we already know that, in addition to running OpenAI, he has funded, or helped prompt the founding of, or has, you know, been very involved in investing in or supporting other startups that are part of this kind of ecosystem of businesses that are connected to an AI future or might benefit in an AI future. So, for example, Helion, which is a nuclear fusion company which he has invested a ton of money into. I think he has said publicly that, you know, his vision is that this is a potential way to provide abundant energy that could then power the technology that we need to, you know, improve AI to the level that we're hoping it can get to, or that he's hoping it can get to.

At the same time, you know, we've talked a little bit about universal basic income. This has been something that Sam has been a proponent of and an advocate of since at least twenty sixteen, when he was running Y Combinator, and they started a side research project to study universal basic income by giving cash payments to families in Oakland of, I believe, a thousand dollars a month. That research project is still ongoing. It's now moved away from Y Combinator and is associated with OpenResearch, which is, I believe, funded by OpenAI, and so it has kind of moved with Sam to his new role. And of course he also co-founded this company called Worldcoin, which used these silver orb machines to scan, to take pictures of your iris, and register every individual human as, like, a unique human individual, and to create this eyeball registry by which one could in the future distribute a universal basic income.

So he's funding these energy companies, he's involved in this sort of crypto eyeball registry project that will help distribute UBI in this future that he's imagining. Like, I think it's safe to say he's definitely thinking about things beyond just OpenAI for the future, and imagining, like, okay, well, if we have this piece that's growing, what else would we need to support it? And I'm sure there are other things he's working on that we don't even know about, right. Like, I know he has also funded some, like, longevity bioscience projects and things like that. I guarantee he's thinking about stuff beyond what we know about.

Speaker 2

Final question: why do you think the entire tech industry has become so fascinated with AI? Do you think it's just Altman, or is it something more?

Speaker 1

I do think ChatGPT started heating up this interest that was already percolating a little bit in the tech industry. But it does seem like something about ChatGPT captured the public imagination, made people imagine very seriously, for the first time, how AI could affect their lives, their lives individually. It used to be kind of this abstract thing that was a little farther away, or maybe you understood that, like, you were interacting with AI sometimes, like when you would look at, like, flight price predictors, or, yeah, exactly. But, you know, we talk about this in episode three, but, you know, ChatGPT wasn't even new technology. It was actually just a different user interface on a model that already existed, GPT three point five.

And so, to me, that actually speaks, I guess, to the power of, like, making a technology accessible to everyone in a way that was, like, easy to use, and, you know, for better or worse, that kind of got a lot of people into this, like, public momentum of people thinking about AI, feeling, you know, just feeling like it had rapidly increased its capabilities in a short period of time.

And yeah, something about that really captured not just, you know, the minds, but also the hearts of people, and, like, got them really thinking about, like, what could a future like this look like. And I think while some people were excited, a lot of people also reacted with fear, right. And, like, I think in the valley, you will hear a lot of people more openly discussing their fears of, uh, sort of, like, job loss, or just, like, dramatic social change that might come about in the next ten or twenty years. The feeling I get in conversations that I have in and around San Francisco is, you know, even people who are pretty deep in this technology are uncertain about whether it's going to be overall good or bad. Like, they're just uncertain of how they'll look back on this time, like whether it will have ended up being a leap forward for humanity or something different.

Speaker 2

Altman has taken advantage of the fact that the tech industry might not have any hypergrowth markets left, knowing that ChatGPT is, much like Sam Altman, incredibly adept at mimicking depth and experience by parroting the experiences of those that have actually done things. Like Sam Altman, ChatGPT consumes information and feeds it back to the people using it in a way that feels superficially satisfying, and it's quite impressive to those who don't really care about creativity or depth. And like I've said, it takes advantage of the fact that the tech ecosystem has been dominated and funded by people who don't really build tech.

As I've said before, generative AI, things like ChatGPT, Anthropic's Claude, Microsoft's Copilot, which is also powered by ChatGPT, is not going to become the incredible supercomputer that Sam Altman is promising. It will not be a virtual brain, or eminently human-like, or a super smart person that knows everything about you, because it is, at its deepest complexity, a fundamentally different technology, based on mathematics and the probabilistic answer to what you have asked it, rather than anything resembling how human beings think, or act, or even know things. Generative AI does not know anything. How can a thing think when it doesn't know anything? I want to ask Brad Lightcap, Mira Murati, or Sam Altman one of these questions just once, to hear what they fart out.

Now, while ChatGPT isn't inherently useless, Altman realizes that it's impossible to generate the kind of funding and hype he needs based on its actual achievements, and that to continue to accumulate power and money, which is his only goal, he has to speciously hype it, and he has to hype it to wealthy and powerful people who also do not participate in the creation of anything.

And that's who he is. I've been pretty mean about this guy, I really have, but he does have a skill. He knows a mark. He knows how to say the right things and get in the right rooms with the people who aren't really touching the software or the hardware. He knows what they need to hear. He knows what the VCs need to hear. He knows quite aptly what this needs to sound like. But if he had to say what ChatGPT does today, what would he say? Yeah, yeah, it's really good at generating a bunch of text that's kind of shitty. Yeah, sometimes it does math right and sometimes it does it really wrong. Sometimes you ask it to, it can draw a picture. Hey, what do you think of that? These are all things, by the way, that if, like, a six-year-old did them, you'd be like, wow, that's really impressive, or like a ten-year-old perhaps, because that's a living being. ChatGPT does these things, and it does it, I know it's cheesy to say, in a soulless way, but it really does.

And the reason all of this, the writing and the horrible video and the images, the reason it feels so empty, is because even the most manure-adjacent press release has still gone through someone's manure-adjacent brain. Even the most pallid, empty copy you've read has gone through someone. A person has put thought and intention in, even if they're not great with the English language. What ChatGPT does is use math to generate the next thing, and sometimes it gets it pretty right. But pretty right is not enough to mimic human creation.

But look at Sam Altman. Look who he is. What has he created other than wealth for him and other people? What about Sam Altman is particularly exciting? Well, he's been rich before, and his money made him even richer. That's pretty good.

He was at Y Combinator. Don't ask too much about what happened there. It just feels like sometimes Silicon Valley can't wipe its own ass. It can't see when there's a wolf amongst the sheep. It can't see when someone isn't really part of the system, other than finding new ways to manipulate it and extract value from it. And Sam Altman is a monster created by Silicon Valley's sin, and their sin, by the way, is empowering and elevating those who don't build software, which in turn has led to the greater sin of allowing the tech industry to drift away from fixing the problems of actual human beings.

Sam Altman's manipulative little power plays have been so effective because so many of the power players in venture capital and the public markets and even tech companies are disconnected from the process of building things, of building software and hardware, and that makes them incapable, or perhaps unwilling, to understand that Sam Altman is leading them to a deeply desolate place. And on some level, it's kind of impressive how he succeeded in bending these fools to his whims, to the point that executives like Sundar Pichai of Google are willing to break Google Search in pursuit of this next big hype cycle created by Sam Altman. He might not create anything, but he's excellent at spotting market opportunities, even if these opportunities involve him transparently lying about the technology he creates, all while having his nasty little boosters further propagate this bullshit, mostly because they don't know, or perhaps they don't care, if Sam Altman's full of shit. It doesn't matter to them.

It doesn't matter that Google Search is still plagued with nonsensical AI answers that sometimes steal other people's work, or that AI in legal research has been proven to regularly hallucinate, which, by the way, is a problem that's impossible to fix. It's all happening because AI is the new thing that can be sold to the markets. And it's all happening because Sam Altman, intentionally or otherwise, has created a totally hollow hype cycle. And all of this is thanks to Sam Altman and a tech industry that's lost its ability to create things worthy of an actual hype cycle, to the point that this specious, non-technical manipulator can lead it down this nasty, ugly, offensive, anti-tech path.

The tech industry has spent years pissing off customers, with platforms like Facebook and Google actively making their products worse in the pursuit of perpetual growth, unashamedly turning their backs on the people that made them rich, and acting with this horrifying contempt for their users. And I believe the result will be that tech is going to face a harsh reprimand from society. As I mentioned in the Rot-Com Bubble, things are already falling apart. Web traffic is already dropping. And what sucks is the people around Sam Altman should have been able to see this, even putting aside his resume. I've listened to an alarming amount of Sam Altman talk. And I'm a public relations person. Who the hell am I?

I'm someone who's been around a lot of people who make shit up. I've been around a lot of people whose job it is to kind of obfuscate things, and quite frankly, he's really obvious. I'm not going to do any weird Lie to Me-esque ways of proving he's lying. He just doesn't ever get pushed into any depth. No one ever asks him really technical questions, or even just a question like, hey, Sam, did you work on any of the code at OpenAI? What did you work on? Yeah, I know you can't talk about the future, Sam, but how close are we actually to AGI? And if he says, ah, a few years, that's not specific enough, Sam, how about you give me a ballpark? And then when he lies again, you say, okay, Sam, how do we get from generative AI to AGI? And when he starts waffling, say no, no, no, be specific, Sam. This is how you actually ask questions. And when you say things like this, by the way, to technical founders, they don't get worried. They don't obfuscate. They may say, I can't talk about this due to legal things, which is fine, but they'll generally try and talk to you. Listen to any interview with any other technical AI person, listen to them, and then listen to Sam Altman. He's full of it. It's so obvious.

And one deeply unfair thing with the Valley is there are people that get held to these standards. Early stage startups generally do, the ones that aren't handed to people like Altman, or Alexis Ohanian of Reddit, or Paul Graham, or Reid Hoffman. They don't get those chances, because they're not saying the things that need to be said to the venture capitalists. They're not in the circles. They're not doing the right things, because the right things are no longer the right thing for the tech industry.

And when all of this falls apart, Sam Altman's going to be fine. When this all collapses, he'll find something to blame it on: market forces, a lack of energy breakthroughs, unfortunate economic things, all of that nonsense, and he'll remain a billionaire, capable of doing anything he wants. The people that are going to suffer are the people working in Silicon Valley who aren't Sam Altman. The people that did not get born with a silver spoon in each hand and then handed further silver spoons as they walk the streets of San Francisco. People that don't live in nine-and-a-half-thousand-square-foot mansions. The people trying to raise money who can't right now, because all the VCs are obsessed with AI. The people that will get fired from public tech companies when a depression hits, because the markets realize that the generative AI boom was a bubble, when they realize that the most famous people in tech have been making these promises for nobody other than the markets. Well, the markets need you to do something eventually, and I just don't think it's going to happen.

And I think that we need to really ask: why was Sam Altman allowed to get to this point? Why did so many people, like Paul Graham, like Reid Hoffman, like Brian Chesky, like Satya Nadella, back up this obvious con artist, who has acted like this forever? And what sucks is I don't know if the Valley is going to learn anything unless it's really bad, and I don't want it to be. By the way, I would love to be wrong. I would love for all of this to just be, like, Sam Altman's actually a genius, turns out. The whole thing is, no, it's not going to happen. And I worry that there is no smooth way out of this, that there is no way to just casually integrate OpenAI with Microsoft, because now there's an antitrust thing going on with Microsoft acquiring Inflection AI, another AI company.

And that's the thing. It feels like we are approaching a precipice here, and the only way to avoid it is for people to come clean, which is never going to happen, or of course for Sam Altman not to be lying, for AGI to actually come out of OpenAI. And by the way, it's going to need to be in the next year. I don't think they've got even three quarters left. I think that once this falls apart, once the markets realize, oh shit, this is not profitable, this is not sustainable, they're going to walk away from it. When companies realize that generative AI has given them a couple percent of profit, maybe, they're going to be pissed, because this is not a stock-rally-worthy boondoggle. This is not going to be pretty when things fall apart, even for Nvidia. You're still at over one thousand dollars, and when those orders stop coming in quite as fast, what do you think is going to happen to tech stocks? Startups are already having trouble raising money, and they're having trouble raising money because the people giving out the money are too disconnected from the creation of software and hardware.

The only way to fix Silicon Valley, perhaps, is an apocalypse. Perhaps it's people like Sam Altman getting washed out. I don't want it to happen, I really must be bloody clear. But maybe it won't be apocalyptic. Maybe it would just be a brutal realignment. And maybe Silicon Valley needs that realignment, because this industry desperately needs a big bathful of ice. It needs to dunk its head in it aggressively and wake the hell up.

Venture capital needs to put money back into real things. The largest tech companies need to realign and build for sustainability, so they're not binging and purging staff with every boom. And if we really are at the end of the hypergrowth era, every tech company needs to be thinking profit and sustainability again. And that's a better Silicon Valley, because a better Silicon Valley builds things for people. It solves real problems. It doesn't have to lie about what the thing could do in the future so that it can sell a thing today. And I realize that sounds like the foundation of most venture capital. That's fine at the seed stage, that's fine at the moonshot stage where you're in your early, early days. It is not befitting the most famous company in tech. It is not befitting a multi-billionaire. It is not befitting anyone, and it is insulting to the people actually building things, both in and outside of technology.

The people I hear from after every episode, they are angry, they are frustrated, because there are good people in tech. There are people building real things. There are people that remember a time when the tech industry was exciting, when people were talking about cool shit in the future, and then they'd actually do it. Returning to that is better for society and the tech industry. I just don't know when it's gonna happen. Thank you for listening to Better Offline.

The editor and composer of the Better Offline theme song is Mattosowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the discord, and go to r slash betteroffline to check out our Reddit. Thank you so much for listening.

Speaker 3

Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
