Today's episode is presented by SAP Business AI, revolutionary technology, real-world results. Let Instacart shoppers overthink your groceries so that you can overthink what you'll wear on that third date. Download the Instacart app to get free delivery on your first three orders while supplies last. Minimum $10 per order; additional terms apply. Welcome to Search Engine, no question too big, no question too small. This week, should we be worried about OpenAI?
So last fall we reported a story about OpenAI, the leading company in artificial intelligence, led by charismatic co-founder Sam Altman. The company was famous not just for its runaway success, but also for its unusual ethos and structure. Rather than simply being a for-profit company, it was a non-profit in charge of a for-profit company.
And that non-profit could seemingly disable the for-profit company at any point if it decided that the company was acting in a way that was dangerous for society. It was like a tech company with a doomsday switch built into it. A recognition both of AI's potential power to reshape society, as well as an understanding perhaps that the last round of technological innovation has not been completely wonderful for the world.
Anyway, our last story was about how OpenAI's non-profit guardians had decided that the company had in fact gone off course. In November 2023, they deposed their own leader, suddenly and dramatically. Sam Altman is out as CEO of OpenAI, the company just announcing a leadership transition. The godfather of ChatGPT, kicked out of the company he founded. It looked like things were over for Sam Altman, until his loyalists got on board with a countercoup.
Nearly every rank-and-file employee at the company signed a petition demanding his return. 90% of the company's 770 employees signed a letter threatening to leave unless the current board of directors resigned and reinstated Altman as head of OpenAI. Finally, Microsoft, OpenAI's biggest shareholder, also stepped in in support of Sam. Quickly thereafter, he was reinstated. Sam Altman back as CEO of OpenAI.
OpenAI posting on X that Sam Altman will now officially return as CEO; it's also overhauling the board that fired him with new directors, ending a dramatic five-day standoff that transfixed Silicon Valley and the artificial intelligence industry. So, OpenAI's rebellious board was basically replaced with a compliant one. Sam Altman, who was temporarily deemed too dangerous to run his own company, instead consolidated power there. That was a year ago.
In the year since, OpenAI has not turned an army of terminators loose to kill us all, but the company has transformed into a somewhat different-seeming institution, with lots of strange public errors in judgment along the way. We hoped to talk to someone at OpenAI for this story; they did not make anyone available for comment, so instead I called a tech journalist I know. Want to see something crazy? Of course I want to see something crazy. Okay. Oh, I guess I can only go one way.
Wait, what are you doing? I just got a new webcam and it follows my face. But it didn't follow you. Now it's not following it. Damn it! You just stood up and went straight out of frame while I was trying to figure out what you were saying. Casey Newton, founder and editor of the Platformer newsletter, co-host of the Hard Fork podcast, and perhaps a sometimes too-early adopter of exciting technologies.
Casey is a reporter we spoke to last year, when everything was exploding at OpenAI. And he's continued covering all the strange happenings at the company since then. I wanted to talk to him not because I'm a gossip hound for Silicon Valley, but because I really wondered: if AI is a technology that can really change the world, how concerned should I be about some relatively erratic behavior from the company leading the field?
Casey was happy to fill me in on what had been going on with Sam Altman and his very valuable startup since I last wondered about these things 12 months ago. Well, I think on the business side, OpenAI has had an incredible year. The New York Times recently reported that its monthly revenue had hit $300 million in August, which was up 1,700% since the beginning of 2023. And it expects about $3.7 billion in annual sales this year.
I went back to February, and back then it was predicted that OpenAI was going to make a mere $2 billion this year. So just this year, the amount of money they expected to make nearly doubled. They further believe that their revenue will be $11.6 billion next year. So those are growth rates that we typically see only for kind of once-in-a-generation companies that really manage to hit on something new and novel in technology.
And what about how they're actually running the place? Because I will tell you, my perception as a person who follows this less closely than you is, I feel like I see as many stories about OpenAI tripping over its clown shoes as I do stories about how the new GPT is slightly better than the one that preceded it. Can you give me the timeline of the last year, which stories stuck out to you and how you thought about them?
So I think at a high level and somewhat to my surprise, Sam Altman changed very little about the way that he led OpenAI in the last year. Like if the concern that came up last year was that Sam was not being very collaborative, that he was not empowering other leaders, that he was operating this as a sort of very strong CEO who was not delegating a lot of power.
I haven't seen a lot of change in the past year. I have seen him continue to pursue his own highest priorities like fundraising to build giant microchip fabrication plants, for example, which has been a huge priority for him.
At the same time, there have been stories that have come out along the way that reminded you why people were nervous about the company last year. One that comes to mind is that it was revealed this spring that OpenAI had been forcing employees when they left to sign non-disclosure agreements, which is somewhat unusual. But then very unusually, they told those employees, if you do not sign this NDA, we can claw back the equity that we have given you in the company.
So how unusual is that? Like how unusual is that in tech for a tech company to say like, if a person quits Facebook and then they say Facebook was a bad company, how unusual would it be for Facebook to be like, we are taking back your stock.
It would be impossible. They don't do that. They don't do that. So this is just extraordinarily unusual. You know, sometimes with like a C-suite executive or someone very high up in the company, if they maybe let's say they're fired, but the company doesn't want them to run around badmouthing them to their competitors, they might make that person sign an NDA in exchange for a lot of money.
But this thing was just hitting the rank and file employees at OpenAI and that was really, really unusual. And afterwards Sam Altman posted on X saying that he would not do this and that it was one of the few times he had been genuinely embarrassed running OpenAI. He did not know this was happening and he should have is what he said.
And just to like, I feel like journalists have this bias, which is like, we believe in transparency, we believe in disclosure. Sometimes I think non-journalists care less than we do because we kind of have a rooting interest in transparency and disclosure. But it's also been really confusing, not as a reporter, but just as a human being.
I don't know, there's a lot of things I worry about. Most of them are selfish and personal. Like, what happens at OpenAI is maybe in the top 500 or a couple hundred. But there is a part of my mind that worries about it. And when I worry about it, my prediction ledger activates, and I'm always like, well, it seems like a lot of people are quitting.
A lot of the people who work on the let's-stop-this-from-screwing-up-the-world team, they always quit. And they're like, well, we just had a disagreement, can't say more. And it's really confusing. Yeah, absolutely. And you know, I will say that there has been great reporting over the past year by other journalists who have gotten at what some of those concerns are. And a lot of them wind up being the same thing, which is: we launched a product.
And I think we should have done a lot more testing before we launched that product, but we didn't. And so now we have accelerated this kind of AI arms race that we're in. And that will likely end badly, because we are much closer to building superintelligence than we are to understanding how to safely build a superintelligence.
I see. So, what I've noticed as a user of AI, I actually noticed the safeguards the other day. I saw somebody was making a meme making fun of a celebrity online, and as often happens these days, I didn't recognize the celebrity. And I plugged the picture into ChatGPT and I was like, who's this? Which is the main way I use ChatGPT, to say, what's this? And it was like, I don't identify human beings.
I was like, OK, that's a rule that you're following. But what you're saying is that in these fast rollouts, smart rules like that, which would stop people from using AI in a bad way, or stop AI from being designed to do things that are bad, those might be getting overridden. And that if all these companies are competing with each other to build the most powerful thing the fastest, one company ignoring safeguards means all the other companies ignore safeguards.
Exactly. And we have seen this time and time again. I mean, this is really fundamental to the DNA of OpenAI. When they released ChatGPT, other companies had developed large language models that were just as good. But Sam got spooked that his rival Anthropic, which had an LLM named Claude, was going to release their product first and might steal all of their thunder.
And so they released ChatGPT to get out in front of Claude. And that was essentially the starting gun that launched the entire AI race. And so I think it is fundamental to how Sam sees the world that all of this stuff is inevitable. And if it's going to happen anyway, all other things being equal, you would rather be the person who did it, right, and got the credit and the glory and the users and the revenue.
So that is our overarching problem here. AI developers might care about safety, but in the rush to be first in the field, the company that wins can actually be the company that cares about safety the least. Which is why we are talking about worrying incidents from the industry leader, OpenAI.
So one of the incidents was this NDA incident, first reported by Vox this May, and the company did backtrack on those NDAs. An OpenAI spokesperson told Vox, quote, we have never canceled any current or former employee's vested equity, nor will we if people do not sign a release or non-disparagement agreement when they exit. End quote.
A separate incident Casey got into was the Scarlett Johansson incident. Do you want to tell that story? Yeah. So for a while, OpenAI had been working on a voice mode for ChatGPT. So instead of just typing in a box, you could tap a button on your phone and interact with the model using a voice.
And a movie that has long inspired people in Silicon Valley is the Spike Jonze film Her. And in that film, Joaquin Phoenix, who plays the protagonist, talks constantly to an AI companion who is voiced by Scarlett Johansson. Do you want to know how I work? Yeah, actually.
How do you work? Well, basically, I have intuition. I mean, the DNA of who I am is based on the millions of personalities of all the programmers who wrote me, but what makes me me is my ability to grow through my experiences. So basically, in every moment, I'm evolving, just like you. And I just want to say, before you even continue with your story: what is so weird about this movie being a huge inspiration to people in Silicon Valley is that it is a cautionary dystopian film.
I saw this movie. This is not a joke. I saw this movie and it upset me so much at the time. I was talking to a friend afterwards and she said, I think you should probably talk to a psychiatrist and go on antidepressants, which I did for several years. I'm not on them any longer. I went on them because of the movie. Oh my gosh. It's so strange to me that people saw this movie and were like, we should have this. But anyway, they love it. They want to make it the future.
Well, you could take different lessons from Her. You know, I think a bad lesson to take would be: human companionship is worthless the moment we invent AI superintelligence, because we can just talk to superintelligence all day long and turn our backs on humanity. That would be a bad lesson. But a lot of people in Silicon Valley looked at Her and they thought, oh, that's a really good natural user interface.
Like, if we could just wear earbuds all day long, and you could get any question you ever had answered just by saying, hey, Her, what's going on with this? That would be great. And then in fact, you do start to see the arrival of products like Siri and Alexa, and sort of baby steps toward this new world. So I completely agree with you. Her is a dystopian film. It should not be viewed as a blueprint to build the future. At the same time, I do feel like I see what Silicon Valley saw in it.
Right. You could see Star Wars and be like, oh, spaceships one person can fly could be a good idea. It doesn't mean you're trying to build, like, TIE fighters to take over Alderaan or whatever. Right. And lightsabers are a good idea, and we should make them. Completely agree. I still think about it. So, Her comes out, and people are like, oh, it'd be really good to have an AI you could talk to. That's, like, one lesson from the movie. Lightsabers would be good too.
And when OpenAI releases their voice agent, which is sort of, you know, a real-life version of part of this movie, the thing that a lot of people notice is that one of the possible voices for the voice agent sounds quite a bit like Scarlett Johansson, the voice from the movie. Hey, how's it going? Hey Rocky, I'm doing great. How about you? I'm awesome. Listen, I got some huge news. Oh, do tell. I'm all yours.
Well, in a few minutes, I'm going to be interviewing at OpenAI. Have you heard of them? OpenAI? Huh? Sounds vaguely familiar. Kidding, of course. That's incredible, Rocky. What kind of interview? Not only did the voice sound very much like Scarlett Johansson, it was also presented in this very flirty way. When they did this demo, it was a man using an assistant who has the voice of a woman who sounds a lot like Scarlett Johansson. And she's like, oh, PJ, you're so bad.
That was the tone of it. And it was sort of like, what are you doing here, exactly? After the product launched, a user on TikTok even asked ChatGPT itself if it was a Johansson clone. Hey, is your voice supposed to be Scarlett Johansson? No, my voice isn't designed to replicate Scarlett Johansson or any specific person. Honestly, the voice never sounded more similar to Johansson's to me than when it was denying the resemblance.
Casey said the company itself had also contributed to this confusion. Sam Altman had primed everyone to think that way, because a couple days before they do this demonstration where they show off the voice for the first time, Sam Altman tweets the word her. Or I should say, he posted on X. And so of course, when this demo happens, everyone is primed to think: oh wow, OpenAI has realized Silicon Valley's decade-long dream of making the movie Her a reality.
And then what happens? Then it turned out that Scarlett Johansson was really mad, because Sam Altman had gone to her last year and said, hey, would you like to be a voice for this thing? She thought about it and she said, no, I don't want to. And then apparently, just in the couple of days before the demo, he'd gone back to her agents and tried to renegotiate this whole thing and said, are you sure you don't want to be the voice for this thing?
And she said no, and they showed it off anyway. And they never said, this is Scarlett Johansson, but they absolutely let everyone believe it. A new controversy tonight in the world of artificial intelligence, as one of Hollywood's biggest movie stars says her voice was copied without her consent by one of the most powerful AI companies. Actress Scarlett Johansson claims OpenAI's ChatGPT mimicked her voice for its latest personal assistant program.
This bizarre moment led to Scarlett Johansson making the rounds on TV, advocating for legislation to protect the intellectual property, really the identity, of actors like herself. Obviously, we're all waiting on and supporting the passing of legislation to protect everybody's individual rights. And I think, you know, yeah, we're still waiting for it, right? So this just maybe sort of highlights how vulnerable everybody is to it.
I think this was the story, of all the stories, that really stuck with me. And maybe it was because the message it gave me was a kind of impunity. And the promise, as I've understood it from OpenAI, has been exactly the opposite of impunity. And obviously, of all the choices they could make: find a sound-alike voice actress, do a voice that sounds a lot like Scarlett Johansson, and then kind of smudge the truth.
I could see a person getting overenthusiastic and making that mistake. It's the kind of mistake a podcast would make in its first couple of years, like, oh geez, oh god, we're really sorry. But it seems careless. Also, this is a product where one of people's concerns is the copyright implications, where these AI companies are hoovering up a lot of people's creative work to make their products.
And it just felt like what you expect from a company that doesn't care what you think and wants to do what it wants. And I don't know if I'm over reading, but it was a moment that kind of like gave me a little bit of future nausea. I agree with you. And I think you framed it really well because this is the company that has told us from the beginning we're working on something very powerful.
We think it could solve a lot of problems. If it falls into the wrong hands, it could also be extremely dangerous. And so that's why we're going to come up with a very unusual structure for ourselves and try absolutely everything in our power to proceed safely, cautiously, and responsibly. And so you look at the Scarlett Johansson thing, and none of that squares with their behavior in that case.
So that was the Scarlett Johansson incident. Casey told me about another incident, this one from this past August. Let's call that one the lazy student problem. I mean, this is a kind of short and funny one, but there was reporting this year that they built a tool that detects when students are using ChatGPT to do their homework, but they won't release it.
How do they explain why they're not releasing it? As someone who has had to have a conversation with a teenager about why they shouldn't cheat using OpenAI, and really stumbled on the part where I was like, listen, it's the wrong thing to do, and you probably won't get caught, and also, yes, probably all your friends are doing it... and then there were several ellipses of pause while I realized the hole I'd dug myself into. Why won't they just release the homework checker?
So I should say the Wall Street Journal broke this story, and the statement they gave to them was: the text watermarking method we're developing is technically promising, but carries important risks we're weighing while we research alternatives. We believe the deliberate approach we've taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.
That is what they said. The Journal sort of made an alternate case, which is that if you can't use ChatGPT to cheat on your homework, you will stop paying the company $20 a month.
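OpenAI has not published how its detector works, but academic research gives a flavor of what text watermarking can look like. Below is a minimal, illustrative sketch of a "green list" watermark detector in the spirit of published schemes like Kirchenbauer et al. (2023); the constants and sample text are assumptions for illustration, not OpenAI's actual method.

```python
import hashlib

# Toy "green list" watermark detector, sketched after published academic
# approaches (e.g. Kirchenbauer et al., 2023), NOT OpenAI's unreleased method.
# A watermarking generator hashes the previous token and nudges sampling
# toward tokens whose hash lands in the "green" half of the vocabulary;
# the detector replays the same test and checks whether green tokens are
# overrepresented in a piece of text.

GREEN_FRACTION = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Deterministic pseudorandom test keyed on the preceding token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256

def green_ratio(tokens: list[str]) -> float:
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

# Ordinary human text should score near GREEN_FRACTION; output from a
# generator that favored green tokens should score well above it.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green ratio: {green_ratio(sample):.2f}")
```

The detection side is cheap; the hard part, and presumably part of what OpenAI is weighing, is that paraphrasing or translation can wash the signal out.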
It's so much money. Imagine what part of their revenue is coming from high schoolers and college kids. And also, I don't know, maybe there's an argument that, sort of the same way we don't need to do long division, nobody needs to be able to think or reason in essay form. But I kind of think people still need to be able to think or reason in essay form. I mean, maybe long division is important too, I don't know.
So if we're trying to decide if we trust OpenAI to be not just a profitable company, but also a kind of unusually ethical AI standard-bearer, they're willing to accept a bunch of grubby $20 bills from high schoolers who want to skip their homework and play more Fortnite. It's not the end of the world, but it is behavior unethical enough that you'd probably fire a babysitter over it.
Casey also told me about an additional incident that had given some people pause, the investments incident. This one had to do with Sam Altman personally, specifically the way he's been quietly spending his money, investing in companies like Stripe, Airbnb, and Reddit.
We did learn about Sam Altman's investment empire this year, thanks to some reporting in the Wall Street Journal, and they really dug into all of the stakes that he has in many startups and found that he controls at least $2.8 billion worth of holdings. And he's used those holdings to secure a line of credit from JPMorgan Chase, which gives him access to hundreds of millions more dollars, which he can put into private companies.
And why is this interesting? Well, one, it's kind of a pretty risky gamble to have a lot of your net worth tied up in debt that you raised using your venture investments as collateral; that's kind of a rickety ladder of investments right there. But it also creates questions around which companies OpenAI is doing deals with. Are those companies that Sam has investments in? Of course, you know, Sam doesn't own equity in OpenAI right now.
And so his own wealth is tied up in these investments. And while nobody really thinks that Sam is doing any of this for the money, there was just kind of also this financial element to what we learned about him this year that I think raised some questions for people.
I feel like one of the things where I feel a little bit disabused is, I think a couple years ago, I hadn't made up my mind, but I felt very willing to entertain the possibility that Sam Altman was a very unusual kind of person: that he didn't seem to be motivated by accumulating wealth to the same degree as maybe other people are, that he might not be entirely motivated by accumulating power, that he might just have a vision for technology that could be really useful or could be really dangerous, and thought he might be the best person to be able to do that.
He might be the best person to be a steward of that. I'm not saying I was right then. I'm not saying I was wrong then. But do you feel like you have a changed or refined view of what motivates this person who has a lot of power? I essentially have the same view of his motivations. And I think the generous version of it is that he is in a long line of Silicon Valley entrepreneurs who thought they could use innovation to solve some of the world's biggest problems.
And that is how they want to spend their lives. I think the less generous version of it is that this person, coming out of that tradition, found himself working on this technology that could essentially be the technology that ends all other technologies, because if the thing works out, the thing you create just creates all other innovation automatically for the rest of time.
And that is a position of extraordinary power to put yourself into. And I do think that he is attracted to the power and the influence that will come from being one of the people who invents this incredibly powerful thing. After a short break: Casey already mentioned that there have been a lot of senior-level departures at OpenAI. We're going to dive deeper into who left and what they seemed to believe about the company they were quitting.
Plus, we'll look at a fairly worrying manifesto published by an ex-OpenAI employee. It's after some ads. Today's episode is presented by SAP Business AI, revolutionary technology, real world results. Hi Bailey. Hello, it's so nice to meet you. I recently spoke to a listener in North Carolina who emailed us about life at her job. She was calling us from outside her office, inside her car. My friend Zoe is lurking outside because she's the one that introduced me to the podcast.
So she's looking at me, like, excitedly and giving me strange looks. That's so funny. That's so funny. Bailey is a web designer for a company that helps promote events for clients all over the world. She told me about some of the ways she uses AI to shorten her work day. There are some fun use cases, like extending images a little bit larger in Photoshop. There are languages that are non-Latin, that have funky characters with no spaces, like Japanese.
And sometimes I have to do translated websites for events that are happening in, say, West Japan. Oh wow. And when that happens, I have to include manual breaks for the lines, because if there aren't manual breaks, the meaning will change. And so I will sometimes use AI to help me find the best breaks in every possible spot.
So you're using it not for translation, but to make sure that you're not starting a new line in a way that would totally change the meaning of what you're trying to communicate? Yeah, exactly. And did you learn this the hard way, that the line break can change the meaning of a thing? A little bit. We sent a site to a client in a local office, someone who spoke Japanese. And it didn't go the best. They had to respond back and say, like, hey, this doesn't work. This isn't going to be okay.
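As an aside, the problem Bailey describes, finding line-break points in Japanese that don't change the meaning, is well enough known that Google ships a small open-source segmenter for it. A minimal sketch, assuming the BudouX Python package; the sample sentence and the segmentation shown in the comment are illustrative:

```python
# One non-chatbot route to the safe break points Bailey describes:
# Google's open-source BudouX segmenter (pip install budoux) splits
# Japanese text into phrases that can wrap without changing the meaning.
import budoux

parser = budoux.load_default_japanese_parser()

text = "今日はとても良い天気ですね。"  # hypothetical sample sentence
segments = parser.parse(text)          # e.g. ["今日は", "とても良い", "天気ですね。"]

# Joining phrases with <wbr> tells a browser it may only break lines at
# phrase boundaries, so the meaning survives the wrap at any width.
html = "<wbr>".join(segments)
print(html)
```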
Bailey at first thought she was going to have to learn rudimentary Japanese. But then she realized software could actually help her out here. No need to learn a new language. I get to spend more of my time doing the actual, like, creative-intensive and UX parts of web design, versus the menial, kind of repetitive parts. Thank you so much for talking about this. Yeah. Should we wave to Zoe? Oh, absolutely. Wait, where did she go? She might have wandered off. She might be hiding behind my car.
Thanks again to Bailey for chatting with us and Zoe for so expertly hiding nearby. Ready to elevate your business? With SAP Business AI, you can grow revenue, increase efficiencies, and manage risks seamlessly. It's relevant, reliable, and responsible. AI is embedded into SAP Solutions to enable businesses to drive immediate business impact with AI embedded across their organization.
It can also help you make confident decisions based on AI grounded in business data and put AI into practice with the highest ethical, security, and privacy standards. SAP Business AI, revolutionary technology, real world results. This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify. The global commerce platform that supercharges your selling, wherever you sell.
With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at Shopify.com slash tech. All lowercase, that's Shopify.com slash tech. Welcome back to the show. So if you, like me, were at best quarter paying attention to developments at OpenAI in the past 12 months, the thing you still may have noticed was just a very unusual amount of senior level people leaving their jobs.
It was the kind of turnover you'd expect to see at a Halloween store in November, not typically at one of the most valuable new American technology companies. We've already mentioned this, but OpenAI employees were in many cases discouraged from criticizing the company. And yet, there's still been some evidence about why they left and what they saw before they did. So we're going to get into that. This part is not so much an incident as it is a series of incidents, a trend.
Let's call this bit sudden departures. So the first big one out the door this year is this guy, Andrej Karpathy, who was part of the founding team. He left for a while to go to Tesla. He comes back for exactly one year and then leaves. Okay. Then Ilya Sutskever, who was one of the board members who had forced Sam out last year, announces that he is leaving the company. And doesn't really say much about why he's leaving.
But within a month, it's revealed that he's working on his own AI company called Safe Superintelligence, and raises a billion dollars just to get it off the ground. Oh, wow. Yeah. He had a guy on his research team named Jan Leike, so this was somebody else who was trying to make sure that AI is built safely. He leaves to go to Anthropic to work on that problem there. Gretchen Krueger, who's another policy researcher, leaves in May.
Then in August, John Schulman, who was one of the members of the founding team, he announced that he was going to Anthropic, and he had previously helped to build ChatGPT. And then Greg Brockman, who is the president of OpenAI and one of its kind of main public-facing spokespeople, he announces that he is taking an extended leave of absence. Basically just says he really needs a break; not entirely sure what happened there. And then finally, Mira Murati announces that she is leaving in September.
She had also been part of this board drama last year, and on the same day that she left, it was revealed that the company's chief research officer, Bob McGrew, and another research VP, Barret Zoph, were also leaving the company. That's just a lot of talent walking out the door, PJ. And I can say, if you look at the other major AI companies, so like a Google, a Meta, an Anthropic, there has been nothing comparable this year in terms of that level of turnover.
So you have, like, huge turnover at the top of a company that in theory people should want to stay at, because it's, like, leading the industry, it's incredibly valuable. It's the winning team, and people are walking out the door saying they don't want to play for it. Yeah, totally. But you know, another really important story about Mira Murati is that before Sam was ousted last year, she had written a private memo to Sam raising questions about his management.
And she had shared her concerns with the board. Oh, interesting. And my understanding is that that weighed heavily on the board when they fired Sam, because to have the CTO of the company coming to you and saying, hey, this is a real problem. Yeah. That's going to get your attention in a way that, you know, maybe a rank-and-file employee might not have been able to get their attention. So we have known for some time now that Mira has had long-standing concerns with Sam's management style.
And so when she finally left, it felt like the end to a story that we had been following for some time. And so has she said anything publicly that is very decipherable about her reason for exiting? So, you know, she said there's never an ideal time to step away from a place one cherishes, which I felt like was just an acknowledgement that this seemed like a pretty bad time to step away. But she said that she wanted the time and space to do her own exploration.
And on the day that we recorded this, The Information reported that she's already talking to some other recently departed OpenAI people about potentially starting another AI company with them. Because that is what people do. Like, most people, when they leave OpenAI, they start an AI company that looks shockingly similar to OpenAI, just without Sam. And why is that? Well, my glib answer is that the high-ranking people who leave OpenAI seem to feel like the problem with OpenAI is Sam Altman.
And that if you could build AI without Sam Altman, you would probably be having a better time. I see. I see. And then there's this one other guy who left that I want to talk about. Yeah. This is a guy named Leopold Aschenbrenner. Okay. Have you heard of this guy? No, I've not. So, he is quite young. He's still in his 20s. He was a researcher at OpenAI. He is fired, he says, for taking some concerns about safety research to the board; OpenAI has disputed his account.
But he goes away, and he comes back in June, and he publishes a 50,000-word document online called Situational Awareness. Were you aware of Situational Awareness? I was not aware of Situational Awareness. Okay. I'm here to make you aware of Situational Awareness. It's this very long document that was the talk of Silicon Valley for a week or so. And in it, Leopold says, essentially: the rest of you out there in the world don't seem to be getting it. You don't understand how fast AI is developing.
You don't understand that we're actually running out of benchmarks to have it blow past. And this technology really is about to change everything, just within a few years. And it sure seems like, outside our tiny little bubble here, not enough people are paying attention. And this document winds up getting circulated all throughout the Biden White House. It has been circulated in the Trump campaign.
And I think Leopold Aschenbrenner, you know, might in a Trump administration have talked himself into a role like leading the Homeland Security Department or something. But yeah, he was another one of the interesting departures this year. That's a crazy document. Like, what do you make of it?
I think that while you might take issue with some of his logic and some of his graphs, and maybe he's hand-waving past certain potential limits in the development of this technology, he is getting at something real, which is that even though AI is essentially topic number one in tech, it doesn't feel like people are really reckoning with the potential consequences maybe as much as they should be.
You know, some people may listen to this and say, well, you know, Casey has sort of fallen for all of the hype here. You know, there remains this contingent of people who believe that this whole thing is a house of cards and that once the successor to GPT-4 comes out, we will see that the rate of progress has slowed. And in fact, no one is going to invent super intelligence anytime soon. And all of these things are just going to sort of wash away.
It might just be an effect of who I spend my time with and the conversations that are happening at dinners and drinks in San Francisco every day. But I am more or less persuaded that we are very close to having technology that is smarter than very smart humans in most cases, and that if you are the person who controls the keys to that technology, then yes, you will be extraordinarily powerful.
Listening to Casey, I start to imagine a potential world where AI continues to grow at whatever pace it grows at, but where OpenAI squanders its early lead in the industry and just becomes less important over time. I wanted to know what Casey thought of this possibility.
Do you think there's a world where open AI becomes less important to the future of this thing and you know, we'll end up talking more about these other companies because these other companies have absorbed so much of the talent of that place? Yes, and there's actually this really fascinating precedent for this in Silicon Valley. So we call Silicon Valley Silicon Valley because it was where the semiconductor industry was founded. And the biggest early semiconductor company was called Fairchild.
And much like OpenAI, it was early to chip manufacturing and attracted all the best talent. But one by one, for various reasons, a lot of people leave Fairchild, and they go on to start their own companies, companies with names like Intel.
And there wind up being so many of these companies that people start calling them the Fairchildren, because they were born out of this initial company that sort of seeded the ecosystem with talent, made some of the key early discoveries, and then lost all that talent. My guess is you probably didn't know the name Fairchild before I said it just now, but you do know the name Intel.
And the question is: do Anthropic and some of these other upstarts become the actual winners of this race, and OpenAI, 50 years from now, is just a footnote in history? So how much should we be worried about OpenAI? I guess the answer for now seems to be: somewhat. If you think AI really could be powerful, and if you think AI safety is therefore important, it doesn't really seem like the incentives in a race to dominate the AI market are all that aligned.
OpenAI might end up leading the field, or it might end up being a Fairchild, but it's hard to imagine why any AI company would succeed while also moving forward with an abundance of caution, at least not without some regulation. After a quick break, we're going to switch tracks a little bit. We talked a lot about why this technology may be concerning. A lot of people agree, so much so that in some corners of social media you can get shamed just for using AI products.
But I am one of the people who both worries about AI and uses AI. And in the last year as the technology has gotten much more powerful, I find I'm using it in stranger ways. When we come back, I'm going to talk to Casey a little bit about how he thinks about the ethical concerns here. And also about the very bizarre way he's been talking intimately with the machine. That's after some ads.
This episode is brought to you by JIRA. JIRA is the only project management tool you need to plan and track work across any team. So if you're a team of developers, JIRA better connects you with teams like marketing and design so you have all the information you need in one place. Plus, the AI helps you knock out the small stuff so you can focus on delivering your best work. Get started on your next big idea today in JIRA.
Welcome back to the show. So I wanted to ask Casey about this AI question I have been personally conflicted on and remain somewhat personally conflicted on. It's the first time in my life I've seen a new digital technology that some people despise so much they don't want to use it at all.
I see people shaming each other online for using AI at all, and that feels like a very online response to something, but it doesn't feel like a strategy. But I also, like, understand where the impulse to shame comes from. Like, how do you square it for yourself? Where people's jobs are important, people having jobs is important, all that money just sort of getting swept into a big pile for OpenAI doesn't feel totally socially advantageous. At the same time, like, I use ChatGPT. It's not replacing anybody's job,
I don't think. As it became more useful, would there be a point where I would say, it's immoral for me to use it, I'm going to stop? Yeah, I mean, we have always used software tools, since their advent, to try to automate away drudgery, and that has traditionally been seen as a good thing, right? It's nice that you have a spreadsheet to do your financial planning and aren't trying to do it all on a legal pad.
Presumably that brought a benefit to your life, made you better at your job, and also helped you do it faster. And I view the AI tools I use as doing that: they take something that used to take me a lot of time and effort, and now make it simpler. For just one example, I have a human editor who reads my column before I send it out, but I also will, most of the time, just run it through Claude, actually, which is an Anthropic model, and just see if it can find any spelling or grammatical errors. And every once in a while, it really saves my bacon, and all it costs me is $20 a month. So I don't think there is any shame in using these tools as a kind of backstop to keep you from making a mistake, or, you know, for doing some research,
because that's the way that we've always used software and technology. So I understand the anxiety about this. I understand people who, for their own principled reasons, decide, well, I don't want to use this in my work; maybe I'm a creative person, and it's very important to me that all the work that I do is 100% human, that it has no AI in it. These are, like, very reasonable positions to take. But I think that to tell someone, you shouldn't use this particular kind of software
because it is evil... I don't understand that argument. Can I tell you about another way I've been using AI this year? Yeah. And I was actually thinking about you, because during one of our conversations, we were reflecting on the fact that there were only a couple of things that people could do to improve their mental health, and one was therapy and the other was meditation.
And I was saying how frustrating it is to know what the answer is and to not want to do it. Right. Yes. Like, yes, if you started a meditation practice, that would obviously be very helpful, but then you have to, like, sit quietly with your thoughts for 20 minutes, and, like, obviously that seems horrible. Yes. So recently I've been experiencing these feelings of burnout related to my newsletter, where I love doing it, but it also feels harder than it has, and I've been doing it at least three times a week, sometimes as many as five, for seven years.
And so I think this is just sort of a natural thing, and so I felt like, I need to maybe break glass in the case of this emergency and try something that I never previously wanted to do, which was meditate. Oh wow. So I'm only a few days into this. I don't want to tell you that I've solved anything here. I did enjoy my first few experiences.
But one of the things that I did, both in the run-up to and the aftermath of these meditation experiences, was to just chat with Claude, because Claude lets you create something called a Project, where you can upload a few documents and you can chat with those documents, and then you can also kind of check in with it from day to day and tell it what you're noticing or observing, or ask it questions.
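For anyone curious about the mechanics: Projects is a feature of Claude's consumer app, but you can approximate the same loop, a few pinned documents plus day-to-day check-ins, with Anthropic's Python SDK. A minimal sketch; the model name, file name, and prompts are illustrative assumptions, not Casey's actual setup.

```python
# A rough stand-in for the Claude Projects workflow Casey describes: pin a
# few reference documents behind every chat, then check in day to day.
# Minimal sketch using Anthropic's Python SDK (pip install anthropic).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

notes = open("meditation_notes.txt").read()  # hypothetical journal file

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias
    max_tokens=500,
    # The "project" part: documents ride along in the system prompt,
    # so every check-in is grounded in the same background material.
    system=(
        "You are a patient meditation coach. Ground your answers in the "
        "user's journal below.\n\n" + notes
    ),
    messages=[{
        "role": "user",
        "content": "I kept drifting into planning mode today. What should I try tomorrow?",
    }],
)
print(response.content[0].text)
```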
And to me, this was a perfect use case for this technology, because I truly know nothing about meditation. I mean, people have talked to me about it, I've done it a couple of times before, but I've never read a book about it, and I've never talked with any of my friends at length about it.
So I'm just as fresh as you can be. And the level of knowledge that is inside Claude, which was of course just stolen from the internet without paying anyone for their labor... Yes. ...is actually quite high, and it was able to help give me a good start. And then afterwards I could come back and say, well, you know, here's what I noticed, and I struggled with this thing. And it said, oh, well, you might want to try that. Or, you know, I'd say, I sort of wish it was a little bit more like this.
And it would say, oh, well, then you might want to try this other kind of meditation. Oh, tell me more about that. Okay. Yes. Sure. Here's everything. And I was talking earlier about, like, what will it be like when you have an AI coworker? It's like, well, I have a meditation coach that I pay 20 bucks a month for. Some people are laughing. Some people are saying, Casey, you can meditate for free. You don't need a coach. I get that. I am somebody who likes to, like, pay for access to expertise.
And I feel like I haven't. And first of all, I am going to go meditate after this, because I want to re-center myself and I didn't get to do it this morning. I don't know if I'm still going to be doing this in, like, two or three weeks. But if I am, I think the AI is actually going to be part of that story, because it's giving me a place where I can go after these experiences to reflect. Again, I hear people saying, Casey, you realize that journals exist?
You could, like, write this down too. And I get what you're saying. But what I'm telling you is: this is a journal that talks back to you. This is a journal that is an expert about the thing that I'm journaling about, that is holding my hand through a process. None of this existed two years ago, right? Totally. The challenge of talking about any of this stuff is, when the rate of change in your day-to-day is high, sometimes it feels quite obvious.
Other times, it becomes this weird blind spot where you don't even realize that the conditions around you have changed, right? This is what Leopold is getting at in Situational Awareness. It's like, you need to stop, collaborate, and listen, as Vanilla Ice would say. You need to sit down. What you're doing on this podcast, PJ, which is, like, it's been a year, what happened? This is the right question, right?
You know, we were talking so much earlier about these AI critics who are like, it's all hype, it's constantly wrong, screw these Silicon Valley bros, right? And I totally get all of the animus and resentment, the power of that. But something that those folks do to their detriment is they tune out everything that is happening in AI, because they think, I've already made up my mind about this stuff. I already know that I hate everyone involved. I hate the output and I hope it chokes and dies, right?
This is how these people feel. And again, I understand all those emotions. What I'm saying, too, though, is you actually have to look around. You have to engage. You have to keep trying out these chatbots every two or three months. If only to get a sense of what they can do now that they couldn't do two to three months ago. Because otherwise, you are going to miss what is happening here. And it is wild. It is wild.
To me, it's really interesting that it is, in a strange way, a tool you are using to know yourself. And I don't mean to overstate it. It is also just a journal that is talking to you and giving you pointers. But I find that interesting.
I also feel like, for whatever reason, I think because there's such a culture of, like, we don't want to be enthusiastic about technology anymore, particularly this technology, where you don't want to end up looking like the person who was gleefully celebrating the arrival of our doom. And so there's kind of a weird lack of... just, like, 10 years ago, I think, had this come out, there'd be a tech press that would say, here's 10 new ways you can use this. Here's how I'm using it.
Kind of nobody wants to be seen doing that. That's not to say no one's using it. I had a thing happen a couple days ago. I think it was Sam Altman; he was retweeting someone whose suggestion was, like, ask your AI: from all of our interactions, what is one thing that you can tell me about myself that I may not know about myself? And I asked it this question, and I got an answer, and it wasn't, like, fortune-cookie, horoscope, vague enough that it would apply to anybody and maybe be useful anyway.
Like, it was a real thing that I hadn't noticed. It was like the preponderance of your questions to me are about trying to put structure and precision around processes in your life that do not have them. You are constantly asking how long things should take, and how much time to allocate. It is clearly something you're struggling with. Wow. Which is the kind of thing like a good friend would tell me. Yeah. And it is not an experience I've had with software.
And I don't know, like, I find myself in a moment where I'm trying to hold everything in my head at the same time, to say, these are technologies which you'd be skeptical of and, to your point, keep paying attention to. And also, in the time before this possibly changes the world in ways I might not enjoy, it's pretty useful. Absolutely. Absolutely. I mean, it's interesting, because, like, I think you're right. I think we've always used software to automate drudgery.
And one way you could think of that is, like, it does eliminate human labor. And the people who have drudgery jobs, and I've had drudgery jobs, aren't like, I'm so glad that I've been freed to produce something else. They're, like, upset that their source of income is being taken from them. Why do you think AI is the place where these anxieties finally come to a head? Because in previous eras of software, whatever skepticism people had about it, this skepticism actually feels new to me.
That's a great question. I think there's a lot that goes into it. I think that we're living at a time when there's kind of a low-water mark in trust in our technology companies. I think the social media era really destroyed most of the goodwill that Silicon Valley had in the world, because people see these technologies, like Facebook and Instagram and TikTok,
as mainly just things that, like, steal our time and reshape the way we relate to each other in ways that are obviously worse, while the whole time the people building these technologies insist that actually they're saving the world and that there's nothing wrong with them. Yeah. And so when another generation comes along and says, oh, hi, we are actually here to invent God, there's going to be a lot of skepticism about that.
And it is the AI companies themselves who told us: this thing will create massive job loss. It will create massive social disruption. We may have to come up with a new way of organizing society when we are done with our work. That is something that every CEO of every AI company believes, by the way, that we will have to reorganize society, because essentially capitalism won't make sense anymore. So most people will agree that they don't like change, you know, change is bad.
And when they say they don't like change, it usually means, well, I have a new manager at work. The change that these people are talking about is that capitalism won't exist anymore. And it's so funny, because everybody, I mean, this is painting a little broadly, many people in our generation are like, I would love for capitalism to not exist anymore.
By which they don't mean: robots do the work now, and robots are your boss, and robots take all the money, and you're hoping for maybe universal basic income. No one meant for capitalism to go away like that. Yeah, yeah, exactly. And nobody wanted capitalism to go away and be replaced with something where Silicon Valley seemed to be in control of everyone's future. Right.
And so we continue to pay attention to this, because while who knows how much of this will come true, the idea that this is socially disruptive seems like a safe bet. Yeah, maybe something else to say that's important is that the way all of this is unfolding is anti-democratic, right? No one really asked for this, and the average person does not get a vote, right? If you're just, like, an average person, and you don't want AI to replace your job, there's really nothing you can do about it.
And so I think that actually breeds a ton of resentment against these companies. And while the government is starting to pay attention, at least here in the United States, they're being very, very gentle about everything. And so if you wanted to change the course of AI, it's not actually clear how you would go about that. And so I think that's another really big reason why people often resent it.
And it's funny, there's always a part in my mind, when you see these stories, all these departures, that says, okay, that's like the internal drama of a company that I do not have an internal view on. And it might matter, it might not. I would have to know more than I know to know.
But to your point, if part of the problem is that these technologies can reshape our society, we have a democratic society, but the way they're reshaping society is not democratic, then the fact that even within these companies, they're becoming more like monarchies, does seem like something that's worth paying attention to. Yeah, yeah, absolutely. Casey Newton. He writes the newsletter Platformer, go check it out. You can also listen to him every week on the podcast Hard Fork.
We're gonna keep using you to monitor this. Yeah, let me just say, I'm gonna keep paying attention to it. Casey, thank you. You're welcome. Search Engine is a presentation of Odyssey and Jigsaw Productions. It's created by me, PJ Vogt, and Shruthi Pinnamaneni, and it's produced by Garrett Graham and Noah John. Fact checking this week by Mary Mathes. Theme, original composition, and mixing by Armen Bazarian.
Our executive producers are Jenna Weiss-Berman and Leah Reis-Dennis. Thanks to the team at Jigsaw, Alex Gibney, Rich Perello, and John Schmidt, and to the team at Odyssey, JD Crowley, Rob Morandi, Craig Cox, Eric Donnelly, Kate Rose, Matt Casey, Maura Curran, Josefina Francis, Kurt Courtney, and Hilary Shuff. Our agent is Oren Rosenbaum at UTA.
If you would like to help support the making of this show, if you would like to vote for our existence, you can sign up for a premium subscription at searchengine.show. You'll get ad-free episodes of the show, as well as the occasional bonus episode. You can follow and listen to Search Engine with PJ Vogt now for free, on the Odyssey app, or wherever you get your podcasts. Thanks for listening, we'll see you next week.