
'Hard Fork': An Interview With Sam Altman

Nov 24, 2023 · 59 min

Episode description

It was a head-spinning week in the tech world with the abrupt firing and rehiring of OpenAI’s chief executive, Sam Altman. 

The hosts of “Hard Fork,” Kevin Roose and Casey Newton, interviewed Altman only two days before he was fired. Over the course of their conversation, Altman laid out his worldview and his vision for the future of A.I. Today, we’re bringing you that interview to shed light on how Altman has quickly come to be seen as a figure of controversy inside the company he co-founded.

“Hard Fork” is a podcast about the future of technology that's already here. You can search for it wherever you get your podcasts. Visit nytimes.com/hardfork for more.


Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

Hey, it's Michael. I hope you're having a wonderful Thanksgiving holiday. If you didn't catch us yesterday, let me just say, in addition to you and everyone who listens to The Daily, one of the things we are so grateful for here at the show is our amazing colleagues, reporters and editors throughout the newsroom and also throughout the Times' audio department. So yesterday and today, we're doing something a little bit different. We're turning the stage over

to those colleagues in the audio department to showcase their terrific work. Today, it's our friends at Hard Fork. If you're not familiar, Hard Fork is a weekly tech conversation hosted by Kevin Roose and Casey Newton. It's excellent. Case in point, for today's show, we're going to play you an interview that Kevin and Casey did with Sam Altman, the CEO of OpenAI, just

two days before Altman was abruptly ousted and later reinstated by his board. If you're a Daily listener, earlier this week, we covered the entire saga in our episode Inside the Coup at OpenAI. Anyway, here's Kevin and here's Casey, who are going to say a little bit more about their interview with Altman and about their show, Hard Fork. Take a listen. Hello, Daily listeners. Hope you had a good Thanksgiving. I'm Kevin Roose, a tech columnist

for The New York Times. I'm Casey Newton from Platformer. As you just heard, we are the hosts of Hard Fork. It's a weekly podcast from The New York Times about technology, Silicon Valley, AI, the future, all that stuff. Casey, how would you describe our show Hard Fork to the Daily audience? I would say if you're somebody who is curious about the future, but you're also the sort of person who likes to get a drink at the end of the week with

your friends, we are the show for you. We're going to tell you what is going on in this crazy world, but we're also going to try to make sure you have a good time while we do it. Yep. So this week, for example, we have been talking about the never-ending saga at OpenAI that Michael just mentioned. If you haven't been following this news, let's just summarize

what's going on in the quickest way possible. So last Friday, Sam Altman, the CEO of OpenAI, and arguably one of the most important people in the tech industry, was fired by his board. This firing shocked everyone: investors, employees, seemingly Sam Altman himself, who seemed not to know what was coming. Then over the next few days, there was a wild campaign by investors, employees, and eventually some of the board members to bring back Sam Altman

as CEO. And late on Tuesday night, that campaign was successful. The company announced that Sam was coming back and that the board was going to be reconstituted, and things would basically go back to normal. Yeah. On one hand, Kevin, a shocking turn of events, and on the other, by the time we got here, basically the only turn of events possible, I think. Yeah. So it's been a totally insane few days. We've done several emergency podcasts about

this. And today, we are going to bring you something that I think is really important, which is an interview with Sam Altman. Now, this interview predates Sam Altman's firing. We recorded it on Wednesday of last week, just two days before he was fired. We obviously

weren't able to ask him about the firing or the events that followed. But I think this conversation lays out Sam's worldview and is really important to understanding why he's been such a controversial figure inside OpenAI and how he's thinking about the way that AI is developing and how it's going to influence the future. Yeah. In a way, Kevin, it's almost as if the firing never happened because what we were curious about was, how are you going

to be leading OpenAI into the future? And as of Tuesday evening, he now will once again be leading OpenAI into the future. Totally. So when we come back, our conversation with Sam Altman, the CEO of OpenAI, recorded just two days before all of this drama started. Sam Altman, welcome back to Hard Fork. Thank you. Sam, it has been just about a year since

ChatGPT was released. And I wonder if you have been doing some reflecting over the past year and kind of where it has brought us in the development of AI. Frankly, it has been such a busy year. There has, like, not been a ton of time for reflection. Well, that's what we brought you in for. We want you to reflect here. Great. I can do it now. I mean, I definitely think this was the year so far, there will be maybe more in the future, but the year so far where the general average tech person went

from taking AI not that seriously to taking it pretty seriously, and the sort of recalibrating of expectations given that. So I think in some sense, that's like the most significant update of the year. I would imagine that for you, a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way? Yeah, it does. We kind of always thought on, like, the inside of OpenAI that it was strange

that the rest of the world didn't take this more seriously, wasn't more excited about it. I mean, I think if, five years ago, you had explained what ChatGPT was going to be, I would have thought, wow, that sounds pretty cool. And presumably I could have just looked into it more, and I would have smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was.

Yeah, I actually think we could have explained it and it wouldn't have made that much of a difference. We tried. Yeah. Like, people are busy with their lives. They don't have a lot of time to sit there and listen to some tech people prognosticate about something that may or may not happen. But when you have a product that people use, like, get real value out of, then it's different. Yeah. I remember reading about the early days of the run-up

to the launch of ChatGPT. And I think you all have said that you did not expect it to be a hit when it launched. No, we thought it would be a hit. We didn't think it would be like this. We did it because we thought it was going to be a hit. We didn't think it was going to be this big of a hit. Right. As we're sitting here today, I believe it's the case that you can't actually sign up for ChatGPT Plus right now. Is that

right? Correct. Yeah. So what's that all about? We have, like, not enough capacity always. But at some point, it gets really bad. So over the last 10 days or so, we have done, you know, we've done everything we can. We've rolled out new optimizations. We've disabled some features. And then people just keep signing up. It keeps getting slower and slower. And there's a limit at some point to what you can do there. And you can't,

we just don't want to offer, like, a bad quality of service. And so it gets slow enough that we just say, you know what, until we can make more progress, either with more GPUs or more optimizations, we're going to put this on hold. Not a great place to be in, to be honest, but, you know, it's like the least of several bad options. Sure. And I feel like in the history of tech development, there often is a moment with really popular products where

you just have to close signups for a little while. Right. The thing that's different about this than others is it's just so much more compute intensive than the world is used to for internet services. So you don't usually have to do this. Like, usually by the time

you're at this scale, you've like solved your scaling bottlenecks. Yeah. One of the interesting things for me about covering all the AI changes over the past year is that it often feels like journalists and researchers and companies are discovering properties of these systems sort of at the same time altogether. I mean, I remember when we had you and Kevin Scott

from Microsoft on the show earlier this year around the big Bing relaunch. And you both said something to the effect of, well, to discover what these models are or what they're capable of, you kind of have to put them out into the world and have millions of people using them. Then we saw, you know, all kinds of crazy but also inspiring things. You had Bing Sydney, but you also had people starting to use these things in their lives.

So I guess I'm curious what you feel like you have learned about language models, and your language models specifically, from putting them out into the world. What we don't want to be surprised by is the capabilities of the model. That would be bad. And we were not. You know, with GPT-4, for example, we took a long time between finishing that model and releasing it, red-

teamed it heavily, really studied it, did all of this work internally and externally. And I'd say, at least so far, and maybe now it's been long enough that we would have, we have not been surprised by any capabilities the model had that we just didn't know about at all, in a way that we were, frankly, for GPT-3, where sometimes people found stuff. But what I think you can't do in the lab is understand how technology and society are going to co-evolve.

So you can say, here's what the model can do and not do, but you can't say like, and here's exactly how society is going to progress given that. And that's where you just have to see what people are doing, how they're using it. And that, like, well, one thing is they use it a lot. Like that's one takeaway that we did not, clearly we did not appropriately

plan for. But more interesting than that is the way in which this is transforming people's productivity, personal lives, how they're learning. And, like, you know, one example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT in education. Days, or at least weeks, but I think days after the release of ChatGPT, school districts were falling all over themselves to ban ChatGPT. And

that didn't really surprise us. Like, we could have predicted that, didn't predict it. The thing that happened after that quickly, you know, like weeks to months, was school districts and teachers saying, hey, actually we made a mistake. And this is a really important part of the future of education, and the benefits far outweigh the downsides. And not only are we

unbanning it, we're encouraging our teachers to make use of it in the classroom. We're encouraging our students to get really good at this tool, because it's going to be part of the way people live. And, you know, then there was a big discussion about what the kind of path forward should be. And that is just not something that could have happened

without releasing it. And, can I say one more thing? Yeah. Part of the decision that we made with the ChatGPT release: the original plan had been to do the ChatGPT interface and GPT-4 together in March. And we really believe in this idea of iterative deployment. And we had realized that the chat interface plus GPT-4 was a lot. I don't think we realized quite how much it was because we split it. Like, too much

for society to take in. And we put it out with GPT-3.5 first, which we thought was a much weaker model; it turned out to still be powerful enough for a lot of use cases. But I think that in retrospect was a really good decision and helped with that process of gradual adaptation for society. Looking back, do you wish that you had done more to sort of, I don't know, give people some sort of a manual to say, here's how you can use this

at school or at work? Two things. One, I wish we had done something intermediate between the release of 3.5 in the API and ChatGPT. Now, I don't know how well that would have worked, because I think there was just going to be some moment where it went, like, viral in the mind of society. And I don't know how incremental that could have been. That's sort of a, like, either it goes like this or it doesn't kind of thing. And I think, I

have reflected on this question a lot. I think the world was going to have to have that moment. It was better sooner than later. It was good we did it when we did. Maybe we should have tried to push it even a little earlier. But it's a little chancy about when it hits. And I think only a consumer product could have done what happened there. Now, the second thing is, should we have released more of a how-to manual? And I honestly don't

know. I think we could have done some things there that would have been helpful. But I really believe that it's not optimal for tech companies to tell people, like, here is how to use this technology and here's how to do whatever. And the organic thing that happened there actually was pretty good. Yeah. I'm curious about the thing that you just said about how we thought it was important to get this stuff into folks' hands sooner rather than later. Say more about what that is.

More time to adapt, for our institutions and leaders to understand, for people to think about what the next version of the model should do, what they'd like, what would be useful, what would not be useful, what would be really bad, how society and the economy need to co-evolve. Like, the thing that many people in the field or adjacent to the field have advocated, or used to advocate for, which I always thought was super bad, was, you know, this is so disruptive,

such a big deal. It's got to be done in secret by the small group of us that can understand it. And then we will fully build the AGI and push a button all at once when it's ready. And I think that would be quite quite bad. Yeah, because it would just be way too much change too fast. Yeah, again, society and technology have to co-evolve and people have to decide what's going to work for them and not and how they want to use it. And we're, you know, you

can criticize OpenAI for many, many things. But we do try to really listen to people and adapt it in ways that make it better or more useful, and we're able to do that. But we wouldn't get it right without that feedback. Yeah. I want to talk about AGI and the path to AGI later on. But first I want to just define AGI and have you talk about sort of where we are on the continuum. So I think it's a ridiculous and meaningless term. Yeah, I saw that. I apologize. I just never know

what people are talking about when they're talking about it. They mean, like, really smart AI. So it stands for artificial general intelligence, and you could probably ask 100 different AI researchers and they would give you 100 different definitions of what AGI is. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have five levels, I guess, ranging from level zero, which is

no AI, all the way up to level five, which is superhuman. And they suggest that currently ChatGPT, Bard, and Llama 2 are all at level one, which is sort of equal to or slightly better than an unskilled human. Would you agree with that? Like, where are we? If you would, if you'd say this is a term that means something and you sort of define it that way, how close are we? I think the thing that matters is the curve and the rate of progress, and

there's not going to be some milestone that we all agree on, like, okay, we've passed it, now it's called AGI. Like, what I would say is, there will be researchers who will write papers like that, and, you know, academics will debate it, and people need to be able to debate it. But most of the world just cares, like, is this thing useful to me or not. And we currently have systems that are somewhat useful,

clearly. And, you know, whether we want to say it's a level one or two, I don't know, but people use it a lot and they really love it. There are huge weaknesses in the current systems, but it doesn't mean that, like... I'm, you know, a little embarrassed by GPTs, but people still like them. And that's good. Like, it's nice to do useful stuff for people. So yeah, call it a level one. It doesn't bother me at all. I am embarrassed by

it. We will make them much better. But at their current state, they're still, like, delighting people. I mean, useful to people. Yeah. I also think it underrates them slightly to say that they're just better than unskilled humans. Like, when I use ChatGPT, it is better than a skilled human for some things, and worse than an unskilled human, worse than any human, at many other things. But I guess that this is one of the questions that people

ask me the most, and I imagine ask you, is: what are today's AI systems useful and not useful for doing? I would say the main thing that they're bad at, well, many things, but one that is on my mind is that they're bad at reasoning. And a lot of the valuable human things require some degree of complex reasoning. But they're good at a lot of other things. Like, you know, GPT-4 is vastly superhuman in terms of its world knowledge.

It knows a lot; there's a lot of things in there. And it's just very different than how we think about evaluating human intelligence. So it can't do these basic reasoning tasks. On the other hand, it knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not. But, you know, if you're using it to be a coder, for example, it can hugely increase your productivity.

And there's value there, even though it has all of these other weak points. If you're a student, you can learn a lot more than you could without using this tool; in some ways, you're there too. Let's talk about GPTs, which you announced at your recent developer conference. For those who haven't had a chance to use one yet, Sam, what's a GPT? It's like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited

ability to do actions. You can give it knowledge to refer to. You can say, like, act this way. But it's super easy to make. And it's a first step towards more powerful AI systems and agents. We've had some fun with them on the show. There is a Hard Fork bot that you can sort of ask about anything that's happened on any episode of the show. It works pretty well, we found, when we did some testing. But I want to talk about where this is going.
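
For readers who want something concrete: the GPTs Sam describes are configured inside ChatGPT itself, but the core idea, a custom-behaving assistant with its own instructions and reference knowledge, can be roughly sketched against OpenAI's public chat completions API. This is a minimal illustration only; the model name, instructions, and knowledge text below are placeholders, not OpenAI's actual GPTs implementation.

# Illustrative sketch only: approximating "a custom version of ChatGPT" by
# pinning behavior with a system prompt and attaching reference knowledge.
# Uses the public chat completions API, not the GPTs feature itself; the
# model name, instructions, and knowledge text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "You are the Hard Fork episode assistant. Answer questions about the show "
    "in a friendly tone, and say so plainly when you don't know something."
)
REFERENCE_KNOWLEDGE = "Paste episode notes here; this stands in for 'knowledge to refer to'."

def ask_custom_gpt(question: str) -> str:
    """Send one question to the custom-behaving assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "system", "content": f"Reference material:\n{REFERENCE_KNOWLEDGE}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_custom_gpt("What did the hosts say about OpenAI this week?"))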

What are the GPTs that you've released a first step toward? AIs that can accomplish useful tasks. I mean, we need to move towards this with great care. You know, I think it would be a bad idea to, like, turn powerful agents loose on the internet. But AIs that can act on your behalf, do something with a company, access your data, that can, like, help you be good at a task. I

think that's, that's going to be an exciting way we use computers. Like, we have this belief that we're heading towards a vision where there are new interfaces, new user experiences possible because finally the computer can understand you and think. And so the sci-fi vision of a computer that you just like, tell what you want and it figures out how to do it.

This is a step towards that. Right now, I think what's holding a lot of people back, a lot of companies and organizations back, in sort of using this kind of AI in their work is that it can be unreliable. It can make up things. It can give wrong answers, which is fine if you're doing creative writing assignments, but not if you're a hospital or a law firm

or something else with big stakes. How do we solve this problem of reliability? And do you think we'll ever get to the sort of low fault tolerance that is needed for these really high stakes applications? So, first of all, I think this is like a great example of people understanding the technology, making smart decisions with it, society and the technology co-evolving together. Like, what you see is that people are using it where appropriate

and where it's helpful, and not using it where you shouldn't. And for all of this sort of fear that people have had, both users and companies seem to really understand the limitations and are making appropriate decisions about where to roll it out. The kind of controllability, reliability, whatever you want to call it, that is going to get much better. I think we'll see a big step forward there over the coming years. And I think

that there will be a time, I don't know if it's like 2026, 2028, 2030, whatever, but there will be a time where we just don't talk about this anymore. Yeah. It seems to me, though, that that is something that becomes very important to get right as you build these more powerful GPTs, right? Like, I would love to have a GPT be my assistant, go through my emails: hey, don't forget to respond to this before the

end of the day. The reliability's got to be way up before that happens. Yeah. Yeah. That makes sense. You mentioned, as we started to talk about GPTs, that you have to do this carefully. For folks who haven't spent as much time reading about this, explain: what are some things that could go wrong? You guys are obviously going to be very careful with this. Other people are going to build GPT-like things and might not put the same kind of controls

in place. So what can you imagine other people doing that, like, you as the CEO would say, whoa, folks, hey, it's not going to be able to do that? Well, that example that you just gave: if you let it act as your assistant and go, like, you know, send emails, do financial transfers for you, it's very easy to imagine how that could go wrong. But I think most people who would use this don't want that to happen on their behalf either. And so there's more resilience

to this sort of stuff than people think. I think that's, I mean, for what it's worth, on the whole hallucination thing, which does feel like it has maybe been the longest conversation that we've had about ChatGPT in general since it launched, I just always think about Wikipedia, as a resource I use all the time. And I don't want Wikipedia to be wrong, but 100 percent of the time, it doesn't matter if it is. I am not relying on it for life-saving information, right? ChatGPT

for me is the same, right? It's like, hey, you know, it's, I mean, it's great at just kind of bar trivia. Like, hey, you know, what's the history of this conflict in the world? Yeah, I mean, we want to get that a lot better. And we will. Like, I think the next model will just hallucinate much less. Is there an optimal level of hallucination in an AI model? Because I've heard researchers say, well, you actually don't want it to never hallucinate, because that would

mean making it not creative. That new ideas come from making stuff up that's not necessarily tethered to reality. This is why I tend to use the word controllability and not reliability. You want it to be reliable when you want. You want it to either, you instruct it, or it just knows based off of the context, that you're asking a factual query and you want the 100 percent black-and-white answer. But you also want it to know when you want it to hallucinate, or you want it to make stuff up. As you just said,

like, new discovery happens because you come up with new ideas, most of which are wrong, and you discard those and keep the good ones and sort of add those to your understanding of reality. Or if you're telling a creative story, you want that. So if these models didn't hallucinate at all, ever, they wouldn't be so exciting. They wouldn't do a lot of the things that they can do. But you only want them to do that when you want them to do that.
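
As an aside for readers: one existing knob that loosely maps onto this idea in today's APIs is sampling temperature, kept low for factual queries and raised when you want the model to invent. This is only a rough proxy for the controllability Sam describes, not OpenAI's internal approach, and the model name below is a placeholder.

# Rough proxy for "hallucinate when you want it, not when you don't":
# low sampling temperature for factual queries, higher for creative ones.
# Illustrative only; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def answer(prompt: str, creative: bool) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0 if creative else 0.0,  # more randomness when inventing
    )
    return response.choices[0].message.content

print(answer("In what year did Isaac Newton publish the Principia?", creative=False))
print(answer("Make up a short bedtime story about a telescope.", creative=True))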

And so the way I think about it is: model capability, personalization, and controllability. Those are the three axes we have to push on. And controllability means no hallucinations when you don't want them, and lots of them when you're trying to invent something new. Let's maybe start moving into some of the debates that we've been having about AI over the

past year. I actually want to start with something that I haven't heard as much, but that I do bump into when I use your products, which is that they can be quite restrictive in how you use them. I think mostly for great reasons, right? Like, I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I feel like I've tried to ask ChatGPT a question about sexual health and I feel like it's going to call the police on me, right?

So I'm just curious how you've approached this. Yeah, look, one thing: no one wants to be scolded by a computer, ever. Like, that is not a good feeling. And so you should never feel like you're going to have the police called on you; that's, like, horrible, horrible, horrible. We have started very conservative,

which I think is a defensible choice. Other people may have made a different one. But again, that principle of controllability, what we'd like to get to is a world where if you want some of the guardrails relaxed a lot, and that's like you're not like a child or something, then fine, we'll relax the guardrails. It should be up to you. But I think starting super conservative here, although annoying is a defensible decision, and I wouldn't have gone back and made it differently.

We have relaxed it already. We will relax it much more, but we want to do it in a way where it's user controlled. Yeah, are there certain red lines you won't cross, things that you will never let your models be used for, other than things that are obviously illegal or dangerous? Yeah, certainly things that are illegal and dangerous we won't. There are a lot of other things that I could say, but where those red lines will be so depends on

how the technology evolves that it's hard to say right now, like, here's the exhaustive set. Like, we really try to just study the models and predict capabilities as we go, but, you know, if we learn something new, we change our plans. Yeah. One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you'd asked the average congressperson, what do you think of AI? They would have said, what's that?

Get out of my office. Right. We just recently saw the Biden White House put out an executive order about AI. You have obviously been meeting a lot with lawmakers and regulators, not just in the US, but around the world. What's your view of how AI regulation is shaping up? It's a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation there, but heading into overreach and regulatory capture would be really bad.

There's a lot of amazing work that's going to happen with smaller models, smaller companies, open source efforts, and it's really important that regulation not strangle that. It's like, I've sort of become a villain for this, but... You have. How do you feel about that? I know it, but I have bigger problems in my life. But this message of, regulate us, regulate the really capable models that can have significant consequences, but

leave the rest of the industry alone. It's just a hard message to get across. Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there's essentially no harms that these models can have that the internet itself doesn't enable.

Right? And that to do any sort of work, like it is proposed in this executive order, and have to inform the Biden administration, is just essentially pulling up the ladder behind you and ensuring that the folks who've already raised the money can reap all of the profits of this new world and will leave the little people behind. So I'm curious what you make of that argument.

I disagree with it on a bunch of levels. First of all, I wish the threshold for when you do have to report was set differently and based off of, like, you know, evals and capability thresholds. Not flops. Not flops. Okay. But there's no small company training with that many flops anyway. So that's a little bit... You know, for the listener who maybe didn't listen to our last episode, our flops episode: the flops are the sort of measure of the amount of computing that

is used to train these models. The executive order says if you're above a certain computing threshold, you have to tell the government that you're training a model that big. But no small effort is training at 10 to the 26 flops. Currently no big effort is either. So that's like a dishonest comment. Second of all, the burden of just saying, like, here's what we're doing, is not that great. But third of all, the underlying thing there, there's nothing you can do here that you couldn't

already do on the internet. That's the real either dishonesty or lack of understanding. You could maybe say with GPT-4, you can't do anything you can't do on the internet, but I don't think that's really true even at GPT-4. Like, there are some new things. And with GPT-5 and 6, there will be very new things. And saying that we're going to be cautious and responsible and have some testing around that, I think that's going to look more prudent in retrospect than it

maybe sounds right now. I'll just say, for me, these seem like the absolute gentlest regulations you could imagine. It's like, tell the government and report on any safety testing you did. Seems reasonable. Yeah. I mean, people are not just saying that these fears of AI

and sort of existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said that OpenAI, that you specifically, are lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng, who I think was one of your professors at Stanford, recently said something to this effect. What's your response to that?

I'm curious if you have thoughts about that. Yeah. Like, I actually don't think we're all going to go extinct. I think it's going to be great. I think we're like heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. And that's like a pretty

consensus thing. I don't think that's like a radical position. You can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And Andrew also said not that long ago that he thought it was like totally irresponsible to talk about AGI because it was just never happening. I think he compared it to worrying about overpopulation on Mars. And I think now he might say something different. So like

it's, humans are very bad at having intuition for exponentials. Again, I think it's going to be great. I wouldn't work on this if I didn't think it was going to be great. People love it already. I think they're going to love it a lot more. But that doesn't mean we don't need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad. And that doesn't go

well either. The exponential thing is real. I have dealt with this. I've talked about the fact that I was only using GPT-3.5 until a few months ago and finally, at the urging of a friend, upgraded. Oh, I would have given you a free account. Sorry. I'm sorry, I should have asked. But it's a real improvement. It is a real improvement. And not just in the sense that the copy that it generates is better. It actually transformed my sense of how quickly the industry

was moving. It made me think, oh, like, the next generation of this thing is going to be sort of radically better. And so I think that part of what we're dealing with is just that it has not been widely distributed enough to get people to reckon with the implications. I disagree with that. I mean, I think that, like, you know, maybe the tech experts say, like, this is, you know, not a big deal, whatever. But most of the world that has used

even the free version is like, oh man, this is, like, real AI. Yeah. Yeah. And you went around the world this year talking to people in a lot of different countries. I'd be curious, you know, to what extent that informed what you just said. Significantly. I mean, I had a little bit of a sample bias, right, because the people that wanted to meet me were probably pretty excited. But you do get a sense. And there's quite a lot of excitement, maybe more excitement in the rest of the

world than the US. Sam, I want to ask you about something else that people are not happy about when it comes to these language and image models, which is this issue of copyright. I think a lot of people view what OpenAI and other companies did, which is sort of, you know, hoovering up work from across the internet, using it to train these models that can in some cases output things that are similar to the work of living authors or writers or artists, and they just think, like, this is

the original sin of the AI industry, and we are never going to forgive them for doing this. What do you think about that? And what would you say to artists or writers who just think that this was a moral lapse? Forget about the legal question, whether you're allowed to do it or not; that it was just unethical for you and other companies to do that in the first place. Well, we block that stuff. Like, you can't go to, like, DALL-E and generate that. I mean, speaking of being annoyed,

we may be too aggressive on that. But I think it's the right thing to do until we figure out some sort of economic model that works for people. And, you know, we're doing some things there now, but we've got more to do. Other people in the industry do allow quite a lot of that. And I get why artists are annoyed. I guess I'm talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who

created it, and using it to train these models. What would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we'll never forgive you for it? Um, well, first of all, I always have empathy for people who are like,

hey, you did this thing and it's affecting me, and, you know, we didn't talk about it first, or it was just like a new thing. But I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. It shouldn't be regurgitating, shouldn't be, you know, violating any copyright laws. But if we're really going to say that, like,

AI doesn't get to read the internet and learn... Um, if you read a physics textbook and learn how to do a physics calculation, it's not like every time you do that in the rest of your life, you've got to, like, figure out how to, uh... that seems like not a good solution to me. But on individuals' private work, yeah, we try not to train on that stuff. We really don't want to be here upsetting people. Again, I think other people in the industry have taken different approaches.

And we've also done some things that, now that we understand more, I think we will do differently in the future. Like what? Like what would we do differently? We want to figure out new economic models so that, say, if you're an artist, we don't just totally block you, we don't just not train on your data, which a lot of artists also say: no, I want this in here, I want, like, whatever.

But we have a way to, like, help share revenue with you. GPTs are maybe going to be an interesting first example of this, because people will be able to put private data in there and say, hey, use this version, and there's going to be a revenue share around it. Well, I had one question about the future that kind of came out of what we were talking about, which is: what is the future of the

internet as ChatGPT rises? And the reason I ask is, I now have a hotkey on my computer that I type when I want to know something, and it just acts as ChatGPT directly, through software called Raycast. And because of this, I am using Google search not nearly as much. I am visiting websites

not nearly as much. That has implications for all the publishers and, frankly, for the model itself, because presumably if the economics change, there will be fewer web pages created, and there's less data for ChatGPT to access. So I'm just curious what you have thought about the internet in a world where your product succeeds in the way you want it to. Um, I do think if this all works, it should really change how we use the internet. There are a lot of things that the current interface is,

like, perfect for. Like, if you want to mindlessly watch TikTok videos, perfect. But if you're trying to get information or get a task accomplished, it's actually quite bad relative to what we should all aspire to. And you can totally imagine a world where you have a task that right now takes, like, hours of clicking around on the internet and bringing stuff together, and you just ask ChatGPT to do one thing and it goes off and computes and you get the answer back. And, uh,

I'll be disappointed if we don't use the internet differently. Yeah. Um, do you think that the economics of the internet as it is today are robust enough to withstand the challenges that AI poses? Probably. Okay. Well, I worry in particular about the publishers. The publishers have

been having a hard time already for, you know, a million other reasons. Um, but to the extent that they're driven by advertising and visits to web pages, and to the extent that the visits to the web pages are driven by Google search in particular, a world where web search is just no longer the front page to most of the internet, I think, does require a different kind of web economics. I think it does require a shift, but I think the value is... so what I thought you were asking

about was, like, is there going to be enough value there for some economic model to work? And I think that's definitely going to be the case. Yeah, the model may have to shift. I would love it if ads become less a part of the internet. Like, I was thinking the other day, for whatever reason I just had this thought in my head as I was browsing around the internet, being

like, there's more ads than content everywhere. I was reading a story today, scrolling on my phone, and, wait, man, it got to a point where, between all of the ads on my relatively large phone screen, there was one line of text from the article visible. You know, one of the reasons I think people like ChatGPT, even if they can't articulate it, is we don't do ads. Yeah, like, as an intentional choice, because there's plenty of ways you could imagine us putting in ads. Totally.

But we made the choice that ads plus AI can get a little dystopic. We're not saying never, like, we do want to offer a free service. But a big part of our mission fulfillment, I think, is if we can continue to offer ChatGPT for free at a high quality of service to anybody who wants it, and just say, like, hey, here's free AI, and good AI. And no ads, because I think that really does, especially as the AI gets really smart, that really does get a little strange. Yeah, yeah.

I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as sort of a benchmark or a milestone or something that they're aiming for. And I'm curious what you think the barriers between here and AGI are. Maybe let's define AGI as sort of a computer that can do any cognitive task that a human can. Like, let's say we make an AI that is really good, but it can't go discover novel physics.

Would you call that an AGI? I probably would. Yeah. Would you? Well, again, I don't like the term, but I wouldn't call that, we're done with the mission. I'd say we still have a lot more work to do. The vision is to create something that is better than humans at doing original science, that can invent, can discover. Well, I am a believer that all real sustainable human progress comes from scientific and technological progress.

And if we can have a lot more of that, I think it's great. And if the system can do things that we, unaided, on our own can't, even just as a tool that helps us go do that, then I will consider that a massive triumph, and happily, you know, I can happily retire at that point. But before that, I can imagine that we do something that creates incredible economic value but is not the kind of AGI, superintelligence, whatever you want to call it, thing that we should aspire to. Right.

What are some of the barriers to getting to that place where we're doing novel physics research? And keep in mind, Kevin, I don't know anything about technology. That seems unlikely to be true. If you start talking about, you know, like, retrieval augmented generation or anything, like, I might follow along, but you'll lose Casey. Yeah. We talked earlier about just the model's limited ability to reason.

And I think that's one thing that needs to be better. The model needs to be better at reasoning. Like GPT-4. An example of this that my co-founder Ilya uses sometimes, that has really stuck in my mind, is that there was a time in Newton's life when the right thing for him to do... You're talking, of course, about Isaac Newton, not my life. Isaac Newton. Yeah. Okay. Well, maybe you might want to find out. Stay tuned.

Where the right thing for him to do was to read every math textbook he could get his hands on. He should, like, talk to every smart professor, talk to his peers, you know, do problem sets, whatever. And that's kind of what our models do today. And at some point, he was never going to invent calculus doing that, because it didn't exist in any textbook. At some point, he had to go think of new ideas and then test them out and build them

or whatever else. And that phase, that second phase, we don't do yet. And I think you need that before it's something we want to call an AGI. Yeah. One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, call it five years, in this type of AI has been just the result of things getting bigger, right? Bigger models, more compute. Obviously, there's work around the edges in how you build these things that makes them

more useful. But there hasn't really been a shift on the architectural level, you know, of the systems that these models are built on. Do you think that that is going to remain true? Or do you think that we need to invent some new process or new, you know, new mode or new technique to get through some of these barriers? We will need new research ideas. And we have

needed them. I don't think it's fair to say there haven't been any here. I think a lot of the people who say that are not the people building GPT-4, but the people sort of opining from the sidelines. But there is some kernel of truth to it. And the answer is, OpenAI has a philosophy of: we will just do whatever works. Like, if it's time to scale the models and work on the engineering challenges, we'll go do that. If now we need a new algorithmic

breakthrough, we'll go work on that. If now we need a different kind of data mix, we'll go work on that. So, like, we just do the thing in front of us and then the next one and then the next one and then the next one. And there are a lot of other people who want to write papers about, you know, level one, two, three and whatever. And there are a lot of other people who want to say, well, it's not real progress, they just made this incredible thing that people are using and

love and it's not real science, or whatever. But our belief is, we will just do whatever we can to usefully drive the progress forward, and we're kind of open-minded about how we do that. What is superalignment? You all just recently announced that you are devoting a lot of resources and time and computing power to superalignment. And I don't know what it is. So can you help me understand? That's alignment that comes with sour cream and guacamole.

There you go, a San Francisco taqueria. That's a very San Francisco specific joke, but it's pretty good. I'm sorry. Go ahead, Sam. I kind of want to leave it at that. I don't really want to follow... I mean, that was such a good answer. No. So alignment is how you sort of get these models to behave in accordance with the human who's using them, what they want. And superalignment is how you do that for super capable systems. So we know how to align GPT-4 pretty well. But, like, better than

people thought we were going to be able to do. You know, when we put out GPT-2 and 3, people were like, oh, it's irresponsible research because this is always going to just spew toxic shit. You're never going to get it. And it actually turns out we're able to align GPT-4 reasonably well. Maybe too well. Yeah. I mean, good luck getting it to talk about sex is my official comment about GPT-4.

But that's, you know, in some sense, an alignment failure, because it's not doing what you wanted there. Yeah. So, but now we have that. Now we have, like, the social part of the problem. We can technically do it. But we don't yet know what the new challenges will be for much more capable systems. And so that's what that team researches. So, like, what kinds of questions are they

investigating, or what research are they doing? Because I confess I sort of lose my grounding in reality when you start talking about super capable systems and the problems that can emerge with them. Is this sort of a theoretical future-forecasting team? Well, they try to do work that is useful today, but for the theoretical systems of the future. So they have their first result coming out, I think, pretty soon. But yeah, they're interested in these questions of,

as the systems get more capable than humans, what is it going to take to reliably solve the alignment challenge? Yeah. And I mean, this is the stuff where my brain does feel like it starts to melt as I ponder the implications, right? Because you've made something that is smarter than every human, but you, the human, have to be smart enough to ensure that it always acts in your interest, even though

by definition it's way smarter. Yeah, we need some help there. Yeah. I do want to stick on this issue of alignment or superalignment, because I think there's an unspoken assumption in there. You just put it as: alignment is sort of what the user wants it to behave like. And obviously, there are a lot of users with good intentions. No, no, yeah, it has to be what society and the user can intersect on. There are going to have to be some rules here.

And I guess, where do you derive those rules? Because, you know, if you're Anthropic, you use, you know, the UN Declaration of Human Rights and the Apple terms of service, and those become your most important documents. If you're not just going to borrow someone else's rules, how do you decide which values these things should align themselves with? So we're doing this thing, we've been doing this thing, these, like, democratic input governance grants.

We're giving different research teams money to go off and try different proposals. There are some very interesting ideas in there about how to kind of fairly decide that. Then there's an approach that I have always been interested in, that maybe we'll try at some point, which is: what if you had hundreds of millions of ChatGPT users spend an hour or a few hours a year answering questions about what they thought the default settings should be, what the wide bounds should be?

Eventually, you need more than just ChatGPT users. You need the whole world to be represented in some way, because even if you don't happen to be using it, you're still impacted by it. But to start, what if you literally just had ChatGPT chat with its users? It would be very important in this case to let the users make the final decisions, of course. But you could imagine it saying, like, hey, you answered this question this way.

Here's how this would impact other users in a way you might not have thought of. If you want to stick with your answer, that's totally up to you. But are you sure, given this new data? Yeah. Then you could imagine, like, GPT-5 or whatever just learning that collective preference set. I think that's interesting to consider. Yeah. I wonder if that's in the Apple terms of service. I want to ask you about this feeling. Kevin and I call it AI vertigo. Is this a widespread term

or is it just sort of us? There is this moment when you contemplate even just the medium AI futures, start to think about what it might mean for the job market, your own job, your daily life, for society, and there is this kind of dizziness that I find sets in. This year I actually had a nightmare about AGI. And then I sort of asked around, and I feel like for people who work on this

stuff, that's not uncommon. I wonder if you have had these moments of AI vertigo, if you continue to have them, or is there at some point where you think about it long enough that you feel like you get your legs underneath you? I used to have, I mean, at some point, these moments. There were some very strange, extreme vertigo moments, particularly around the launch of GPT-3. But you do get your legs under you.

Yeah. And I think the future will somehow be less different than we think. It's amazing to say: we invent AGI and it matters less than we think. It doesn't sound like a sentence that parses. And yet it's what I expect to happen. Why is that? There's a lot of inertia in society, and humans are remarkably adaptable to any amount of change. One question I get a lot, that I imagine you do too, is from people who want to know what they can do.

You mentioned adaptation as being necessary on the societal level. I think for many years, the conventional wisdom was that if you wanted to adapt to a changing world, you should learn how to code. That was like the classic advice. That may not be such good advice anymore. Exactly. Now AI systems can code pretty well. For a long time, the conventional wisdom was that creative work was untouchable by machines. If you were a factory worker, you might get

automated out of your job. But if you were an artist or a writer, that was impossible for computers to do. Now we see that's no longer safe. Where is the high ground here? Where can people focus their energy if they want skills and abilities that AI is not going to be able to replace? My answer is, it's always the right bet to just get good at the most powerful new tools, the most capable new tools. And so when computers were the new thing, you did want to become

a programmer. And now that AI tools, like, totally change what one person can do, you want to get really good at using AI tools. And so, like, having a sense for how to work with ChatGPT and other things, that is the high ground. And, like, we're not going back. That's going to be part of the world. And you can use it in all sorts of ways, but getting fluent at it, I think, is really important.

I want to challenge that, because I think you're partially right, in that I think there is an opportunity for people to embrace AI and sort of become more resilient to disruption that way. But I also think if you look back through history, it's not like we learn how to do something new and then the old way just goes away, right? We still make things by hand; there's still an artisanal market. So do you think there's going to be people who just decide, you know what, I

don't want to use this stuff? Totally. And they're going to, like, there's going to be something valuable in their sort of, I don't know, non-AI-assisted work. I expect, if we look forward to the future, that things that we want to be cheap can get much cheaper, and things that we want to be expensive are going to be astronomically expensive. Like what? Real estate, like, handmade goods, art.

And so, totally, there'll be a huge premium on things like that. And there'll be many people who, like, really, you know... even when machine-made products have been much better, there has always been a premium on handmade products. And I'd expect that to intensify. This is also a bit of a curveball; very curious to get your thoughts. Where do you come down on the idea of AI romances? Are these net good for society? I don't know one person. You don't know one?

Okay. But it's clear that there's a huge demand for this, right? Yeah. Like, I think that, I mean, you know, Replika is building these. They seem like they're doing very well. I would be shocked if this is not a multi-billion-dollar company, right? Someone will make a whole lot of money. Yeah. Somebody will. Yeah. For sure. Yeah. I just personally think we're going to have a big culture war. Like, I think Fox News is going to be doing segments about the generation lost

to AI girlfriends and boyfriends, like, at some point within the next few years. But at the same time, you look at all the data on loneliness, and it seems like, well, if we can give people companions that make them happy during the day, it could be a net good thing. It's complicated. Yeah. I have misgivings, but I don't... This is not a place where I think I get to, like, impose what I think is good on other people. Got it. Okay. But it sounds like this is not at the top of your

product roadmap, building the boyfriend API. Yeah. All right. You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes. Can you expand on that? Like, what are some things that AI might become very good at persuading us to do? And what are some of those strange outcomes you're worried about? The thing I was thinking about at that moment was the upcoming election.

There's a huge focus on the US 2024 election. There's a huge focus on deepfakes and the impact of AI there. And I think that's reasonable to worry about, good to worry about. But we already have some societal antibodies toward seeing, like, doctored photos or whatever. And yeah, they're going to get more compelling, it's going to be more believable, but everyone kind of knows those are out there. There's a lot of discussion about that. There's almost no discussion about what are, like, the new

things AI can do to influence an election, what AI tools can do to influence an election. And one of those is to, like, carefully, you know, one-on-one persuade individual people with tailored messages. That's a new thing that the content farms couldn't quite do. Right. And that's not AGI, but that could still be pretty harmful. I think so. Yeah. I know we are running out of time, but I do want to push us a little bit further into the future than this sort of, I don't know, maybe five-year horizon

we've been talking about. If you can imagine a good post-AGI world, a world in which we have reached this threshold, whatever it is, what does that world look like? Does it have a government? Does it have companies? What do people do all day? Like, a lot of material abundance. People continue to be very busy, but the way we define work always moves. Like, our jobs would not have seemed like real jobs to people several hundred years ago. Right. This would have seemed like

incredibly silly entertainment. It's important to me. It's important to you. And hopefully it has some value to other people as well. The jobs of the future may seem, I hope they seem, even sillier to us, but I hope people get even more fulfillment, I hope society gets even more fulfillment, out of them. But everybody can have a really great quality of life, to a degree that I think we probably just can't imagine now. Of course, we'll still have governments. Of course,

people will still squabble over whatever they squabble over. You know, less different in all of these ways than someone would think, and then, like, unbelievably different in terms of what you can get a computer to do for you. One fun thing about becoming a very prominent person in the tech industry, as you are, is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured and careful.

I don't anymore. I did for a while. I decided I just couldn't keep up with the opsec. It's so hard to lead a double life. What was your secret Twitter account? Obviously, I can't say. I mean, I had a good alt. A lot of people have good alts. But, you know, your name is literally Sam Altman. I mean, it would have been weird if you didn't have one. But I think I just got,

yeah, too well known or something to be doing that. Yeah. Well, the sort of theory that I heard attached to this was that you are secretly an accelerationist, a person who wants AI to go as fast as possible, and that all this careful diplomacy that you're doing and asking for regulation, this is really just the sort of polite face that you put on for society, but deep down, you just think we should go all gas, no brakes, toward the future. No, I certainly don't think all

gas, no brakes, toward the future. But I do think we should go toward the future. And that probably is what differentiates me from, like, most of the AI companies, is I think AI is good. Like, I don't secretly hate what I do all day. I think it's going to be awesome. Like, I want to see this get built. I want people to benefit from this. So all gas, no brakes, certainly not. And I don't even think most people who say it mean it. But I am a believer that this is a tremendously beneficial technology

and that we have got to find a way, safely and responsibly, to get it into the hands of the people, to confront the risks so that we get to enjoy the huge rewards. And, like, you know, maybe relative to the prior of most people who work on AI, that does make me an accelerationist. But compared to those, like, accelerationist people, I'm clearly not them. So, you know, I'm like somewhere... I think you want the CEO of this company to be somewhere... You're accelerating somewhere in the

middle. I think I am your gas and your brakes. I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we're not careful about it, it can be quite disastrous. And so we have to navigate it carefully. Yeah. Yeah. Sam, thanks for coming on Hard Fork. Thank you, guys. Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by

Jen Poyant. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano and Dan Powell. Our audience editor is Nell Gallogly, video production by Ryan Manning and Dylan Bergeson, special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.

This transcript was generated by Metacast using AI and may contain inaccuracies.