The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very company that will build AGI. And now, a quick few-second mention of our sponsors. Checking them out in the description is the best way to support this podcast. We've got a new sponsor, Cloaked, for protecting your personal information, Shopify for selling stuff online, BetterHelp for helping out your mind, and ExpressVPN for protecting your privacy and security on the interwebs. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring, or if you just want to get in touch with me, go to lexfridman.com/contact. And now, onto the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, friends, please do check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Cloaked,
a sponsor I didn't know existed until quite recently, though I'd always thought a thing like this should exist, and I couldn't quite find anything like it. Once I found it, it was pretty awesome. It's a platform that lets you generate a new email address and phone number every time you sign up for a website. It's called a masked email, which basically creates, I guess you could say, a fake email that hides your actual email, except it's not fake, in that it actually exists and persists through time, and the website thinks it's real. It forwards to your actual email; you can set up the forwarding. The point is that the website or service you sign up for doesn't know your actual phone number and doesn't know your actual email. This is a really interesting idea, because when you sign up to different websites, there's a kind of unspoken contract that the email and the phone number you provide will not be abused. The kind of abuse I'm talking about is, in the best case, just being spammed, or, in the worst case, that email or phone number being sold, and then you get not just spam from one source but spam from all kinds of sources all over the place. Anyway, this is just a smart thing to protect yourself. It also does basic password manager stuff, so you can take a look at it as a great password manager with extra privacy superpowers. You can go to cloaked.com/lex to get 14 days free, or, for a limited time, use code LexPod when signing up to get 25% off an annual Cloaked plan. This episode is also brought to you by Shopify, a platform designed for anyone, yes, anyone, including me, to sell anywhere with a great-looking online store. I used it to sell some t-shirts at lexfridman.com/store. You can check it out. I used the most basic store. It took just a few minutes from the shirt design being finished to the store being live and being able to sell and ship those t-shirts, thanks to the integration with third parties, of which there are thousands. For t-shirts, that's on-demand printing, so you don't have to take care of the shipping and the printing and all that kind of stuff. All of that is integrated, super easy to do, and this works for any kind of business that sells stuff online. You can integrate it into your own website, or you can sell on Shopify itself, which is what I do. You can sign up for a $1-per-month trial period at shopify.com/lex, all lowercase. Go to shopify.com/lex to take your business to the next level today.
This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. They'll figure out what you need and match you with a licensed therapist in under 48 hours. It works for individuals; it works for couples. I'm a huge fan of talking as a way of exploring the human mind: two people talking with a motivation and a goal in mind of surfacing certain kinds of problems and alleviating those problems. Sometimes the surfacing in itself does a lot of the alleviation. Returning to a time in the past when trauma happened, and reframing it in a way that helps you understand, that helps you forgive, that helps you let go. All of that is really powerful. And BetterHelp is just an accessible way of doing that, or at least trying out talk therapy. They've helped a lot of people: 4.4 million people have gotten help. You can be one of those. If you want to try it, check them out at betterhelp.com/lex and save on your first month.
That's betterhelp.com/lex. This episode is also brought to you by ExpressVPN. I love that there's a kind of privacy theme to the sponsors in this episode. I think everybody should be using a VPN, for many reasons. One, it can allow you to geographically transport yourself. But the main reason is that it's an extra layer of security and privacy between you and the ISP. They say they're technically not supposed to be collecting your data when you use things like Chrome incognito, but they can be collecting it. I don't know how the laws on that work, but I wouldn't trust it. So a VPN is essential for that. My favorite VPN for many, many years has been ExpressVPN. The big sexy button still works. It looks different, but it still works, on any operating system, my favorite being Linux. I can talk forever about why I love Linux. I wonder if Linux will be around with all this AI. With all this rapid AI development, maybe programming as a way of life, as a recreation for millions, as a profession for millions, will die out, and there will only be a handful of you, like the COBOL programmers of today, who carry the flag of knowing what Linux is, how to spell Linux, let alone use it. I wonder. Hopefully not, because there's always room for optimizing, at every level, the compilation of the entire stack, from the human language to the AI language to the machine language to the zeros and ones. I think there are a lot of jobs to be had there, a lot of really profitable, well-paying jobs, but maybe not millions of people are needed. Maybe there will be millions of people who program with just natural language, with just words, English or whatever new language we create that the whole world can use, and, in using it, help break down the barriers of language. We arrived here, friends, having started at a mere explanation of the use of a VPN. You can take this journey too by going to expressvpn.com/lexpod for an extra three months free. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman.

Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th, for you. That was definitely the most painful professional experience of my life: chaotic and shameful and upsetting, and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time.
I came across this old tweet of mine from that time period, which was like... it was kind of like going to your own eulogy, watching people say all these great things about you, and just unbelievable support from people I love and care about. That was really nice. That whole weekend, with one big exception, I felt a great deal of love and very little hate, even though it felt like, I have no idea what's happening and what's going to happen here, and this feels really bad. And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future. But the thing you had a sense that you would experience is some kind of power struggle? The road to AGI should be a giant power struggle. Like, the world should... well, not should. I expect that to be the case. And so you have to go through that, as, like you said, iterate as often as possible: figuring out how to have a board structure, how to have an organization, how to have the kind of people that you're working with, how to communicate all that, in order to deescalate the power struggle as much as possible. Yeah. Pacify it.
But at this point, it feels like something that was in the past. It was really unpleasant and really difficult and painful, but we're back to work, and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after, this fugue state for kind of the month after, maybe 45 days after, where I was just sort of drifting through the days. I was so out of it. I was feeling so down. Just at a personal, psychological level? Yeah. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission. Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff, so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there's value there to explore, both the personal psychological aspects of you as a leader, and also just the board structure and all this kind of messy stuff. Definitely learned a lot about structure and incentives and what we need out of a board, and I think it is valuable that this happened now, in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. The company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI, but thinking about how to build a resilient org, and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more of as we get closer, I think that's super important.
Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just the human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates into a "why don't we fire Sam" kind of thing? I think the board members were, are, well-meaning people on the whole. And I believe that in stressful situations, where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be that we're going to have to have a board and a team that are good at operating under pressure. Do you think the board had too much power? I think boards are supposed to have a lot of power, but one of the things that we did see is that in most corporate structures, boards are usually answerable to shareholders. Sometimes people have super-voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don't really answer to anyone but themselves. And there are ways in which that's good, but what we'd really like is for the board of OpenAI to answer to the world as a whole, as much as that's a practical thing. So there's a new board announced? Yeah. There's, I guess, a new smaller board at first, and now there's a new final board? Not a final board yet. We've added some; we'll add more. Okay. What is fixed in the new one that was perhaps broken in the previous one? The old board sort of got smaller over the course of about a year. It was nine, and then it went down to six, and then we couldn't agree on who to add. And the board also, I think, didn't have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that'll help. Some of the people who were added to the board have been criticized.
I heard a lot of people criticizing the addition of Larry Summers, for example. What was the process of selecting the board like? What's involved in that? So Bret and Larry were kind of decided in the heat of the moment, over this very tense weekend, and that weekend was like a real roller coaster, a lot of ups and downs. And we were trying to agree on new board members that both the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'. Bret, I think I had even suggested previous to that weekend, but he was busy and didn't want to do it, and then, when we really needed help, he agreed. We talked about a lot of other people too, but I felt like, if I was going to come back, I needed new board members. I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful, that Adam would stay. We considered various configurations and decided we wanted to get to a board of three, and had to find two new board members over the course of a short period of time. So those were decided honestly without... you kind of do that on the battlefield. You don't have time to design a rigorous process then. For new board members since, and new board members we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive, where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well. And so one thing that Bret says, which I really like, is that we want to hire board members in slates, not as individuals, one at a time: thinking about a group of people that will bring nonprofit expertise, expertise running companies, good legal and governance expertise. That's kind of what we've tried to optimize for. So is technical savvy important for the individual board members? Not for every board member, but certainly for some of them. That's part of what the board needs to do.
So, I mean, the interesting thing that people probably don't understand about OpenAI, I certainly don't, is all the details of running the business. When they think about the board, given the drama, and think about you, they think about, if you reach AGI, or you reach some of these incredibly impactful products, and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation, to deliberate? Look, I think you definitely need some technical experts there, and then you need some people who are thinking about, how can we deploy this in a way that will help people in the world the most, and people who have a very different perspective. I think a mistake that you or I might make is to think that only the technical understanding matters, and that's definitely part of the conversation you want that board to have, but there's a lot more about how this is going to impact society and people's lives that you really want represented in there too. And are you looking at the track record of people, or are you just having conversations? Track record is a big deal. You of course have a lot of conversations, but there are some roles where I kind of totally ignore track record and just look at slope, kind of ignore the Y-intercept. Thank you. Thank you for making it mathematical for the audience. For a board member, though, I do care much more about the Y-intercept. I think there is something deep to say about track record there, and experience is sometimes very hard to replace.
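A throwaway sketch of the slope-versus-Y-intercept metaphor, with candidates and numbers invented purely for illustration:

```python
# "Track record" is the Y-intercept (where someone starts); "slope" is the
# rate of improvement. Fit a line to each made-up trajectory and compare.
import numpy as np

years = np.array([0, 1, 2, 3, 4])
fast_learner = np.array([1.0, 3.0, 5.2, 7.1, 9.0])    # low start, steep slope
proven_veteran = np.array([6.0, 6.5, 7.0, 7.4, 8.0])  # high start, shallow slope

for name, ys in [("fast learner", fast_learner), ("proven veteran", proven_veteran)]:
    slope, intercept = np.polyfit(years, ys, 1)  # degree-1 fit: [slope, intercept]
    print(f"{name}: intercept={intercept:.1f}, slope={slope:.1f}")

# For some roles you hire on slope; for a board seat, the point here is that
# the intercept (accumulated experience) matters more.
```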
Do you try to fit a polynomial function or an exponential one to the track record? That analogy doesn't carry that far. All right. You mentioned some of the low points that weekend, or some of the low points, psychologically, for you. Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever? I mean, there's so many... look, it was a very bad period of time. There were great high points too. My phone was just sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have, because I was just in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend. It was like a battle fought in public, to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. The board did this Friday afternoon. I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this, so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well, my current job at OpenAI was, it was like running a decently sized company at this point, and the thing I had always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. It didn't even occur to me at the time that this was all possibly going to get undone. This was Friday afternoon. So you've accepted the death of this baby, OpenAI, very quickly. Very quickly. I mean, I went through a little period of confusion and rage, but very quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening, for the first time, that I heard from the exec team here, which was like, hey, we're going to fight this, and, we think... well, whatever. And then I went to bed just still being, like, okay, excited. Onward.
Were you able to sleep? Not a lot. One of the weird things was that there was this period of four and a half days where I sort of didn't sleep much, didn't eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime. So you kind of accepted the death of this baby, OpenAI. And I was excited for the new thing. I was just like, okay, this was crazy, but whatever. It's a very good coping mechanism. And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. Can we talk about you coming back? And I immediately didn't want to do that, but I thought a little more and I was like, well, I really care about the people here, the partners, shareholders. I love this company. And so I thought about it, and I was like, well, okay, but here's the stuff I would need. And then the most painful time of all was over the course of that weekend. I kept thinking and being told, and not just me, the whole team here kept thinking, while we were trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever, we kept being told, all right, we're almost done, we're almost done, we just need a little bit more time. And it was this very confusing state. And then Sunday evening, when, again, every few hours I had expected that we were going to be done, and we were going to figure out a way for me to return and for things to go back to how they were, the board instead appointed a new interim CEO. And then I was like, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something: it felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate. I really just felt a lot of love, from people, towards people. It was painful, but the
dominant emotion of the weekend was love, not hate. You've spoken highly of Mira Murati, that she helped, especially, as you put it in a tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent: what do you admire about Mira? Well, she did a great job during that weekend, in a lot of chaos, but people often see leaders in the crisis moments, good or bad. What I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning, in just the normal drudgery of the day-to-day: how someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments. Meaning, like, most of the work is done on a day-by-day, meeting-by-meeting basis. Just be present and make great decisions. Yeah. I mean, look, what you wanted to spend the last 20 minutes on, and I understand, is this one very dramatic weekend, but that's not really what OpenAI is about. OpenAI is really about the other seven years. Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still, that's something people focus on, very understandably. It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments, so it's illustrative. Let me ask you about Ilya.
Is he being held hostage in a secret nuclear facility? No. What about a regular secret facility? No. What about a nuclear non-secret facility? Neither. Not that either. I mean, it's becoming a meme at some point. You've known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What's your relationship with him now? I love Ilya. I have tremendous respect for Ilya. I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together for, certainly, the rest of my career. He's a little bit younger than me; maybe he works a little bit longer. There's a meme that he saw something, like he maybe saw AGI, and that gave him a lot of worry, internally. What did Ilya see? Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is that he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right. I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always doing this long-term-thinking type of thing. So he's not thinking about what this is going to be in a year; he's thinking about it in ten years. Just thinking from first principles: okay, if this scales, what are the fundamentals here? Where is this going? And that's a foundation for then thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet? Is he just doing some soul-searching? Again, I don't want to speak for Ilya. I think that you should ask him that. He's definitely a thoughtful guy. I kind of think of Ilya as always being on a soul search, in a really good way. Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which I've never seen. It's very sweet when that happens. I've never witnessed a silly Ilya, but I look forward to that as well. I was at a dinner party with him recently, and he was playing with a puppy, and he was in a very silly mood, very endearing, and I was thinking, oh man, this is not the side of Ilya that the world
sees the most. So, just to wrap up this whole saga: are you feeling good about the board structure, about all of this, and where it's moving? I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say here. It was a crazy, very painful experience. I think it was a perfect storm of weirdness. It was like a preview for me of what's going to happen as the stakes get higher and higher, and of the need we have for robust governance structures and processes and people. I am kind of happy it happened when it did, but it was a shockingly painful thing to go through. Did it make you more hesitant in trusting people? Yes. Just on a personal level? Yes. I think I'm an extremely trusting person. I've always had a life philosophy of, don't worry about all of the paranoia, don't worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me, I was so caught off guard, that it has definitely changed, and I really don't like this, it has definitely changed how I think about just default trust of people, and planning for the bad scenarios. You've got to be careful with that. Are you worried about becoming a little too cynical? I'm not worried about becoming too cynical. I think I'm the extreme opposite of a cynical person. But I'm worried about just becoming less of a default-trusting person. I'm actually not sure which mode is best to operate in for a person who's developing AGI: trusting or untrusting. It's an interesting journey you're on. But in terms of structure, see, I'm more interested on the human level. How do you surround yourself with humans who are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get. I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you'd have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people I work with every day, and I think being surrounded with people like
that is really important. Our mutual friend Elon sued OpenAI. What is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong? I don't know what it's really about. We started off just thinking we were going to be a research lab, and having no idea about how this technology was going to go. It's hard to, because it was only seven or eight years ago, it's hard to go back and really remember what it was like then, but this was before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we were like, we're just going to try to do research, and we don't really know what we're going to do with that. I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong. And then it became clear that we were going to need to do different things, and also have huge amounts more capital. So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure? And then you patch it again, and patch it again, and you end up with something that does look kind of eyebrow-raising, to say the least. But we got here gradually, with, I think, reasonable decisions at each point along the way. And it doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. Anyway, in terms of what Elon's real motivations here are, I don't know. To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it? Oh, we just said, Elon said this set of things; here's our characterization, or here's, sort of, not our characterization, here's the characterization of how this went down. We tried to not make it emotional, and just say, here's the history. I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys were a small group of researchers crazily talking about AGI when everybody was laughing at that thought. It wasn't that long ago that Elon was crazily talking about launching rockets when people were laughing at that thought, so you'd think he'd have more empathy for this. I mean, I do think that there's personal stuff here. There was a split, that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal... Elon chose to part ways. Can you describe that exactly, the choosing to part ways? He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that has now become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of, or have it merge with Tesla. We didn't want to do that, and he decided to leave, which, that's fine. So you're saying, and that's one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla, in the same way that, or maybe something similar to, or maybe something more dramatic than, the partnership with Microsoft? My memory is, the proposal was just, get acquired by Tesla and have Tesla have full control over it. I'm pretty sure that's what it was.
So what does the word "open" in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all that kind of stuff. What did it mean to you at the time? What does it mean to you now? I would definitely pick a different... speaking of going back with an oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission: we want to put increasingly powerful tools in the hands of people for free and get them to use them. And I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, that's a big deal. So if we can keep putting free, or low-cost, or free and low-cost, powerful AI tools out into the world, it's a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer. So he said, change your name to ClosedAI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes, about the name? I think that speaks to the seriousness with which Elon means the lawsuit. And, I mean, that's an astonishing thing to say, I think. Maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way. So, look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open source things this week. I don't think open source versus not is what this is really about for him. Well, we'll talk about open source and not. I do think maybe criticizing the competition is great, just talking a little shit, that's great, but friendly competition versus, like, "I personally hate lawsuits." Yeah, look, I think this whole thing is unbecoming of a builder, and I respect Elon as one of the great builders of our time. And I know he knows what it's like to have haters attack him, and it makes me extra sad that he's doing it to us. Yeah, he's one of the greatest builders of all time, potentially the greatest builder of all time. It makes me sad. And I think it makes a lot of people sad. There are a lot of people who've really looked up to him for a long time. I said in some interview or something that I missed the old Elon, and the number of messages I got saying, "That exactly encapsulates how I feel..." I think he should just win. He should just make X Grok beat GPT, and then GPT beats Grok, and it's just a competition, and it's beautiful for everybody. But on the question of open source, do you think there are a lot of companies playing with this idea?
It's quite interesting. I would say Meta, surprisingly, has led the way on this, or at least took the first step in the game of chess of really open sourcing a model. Of course, it's not the state-of-the-art model, but open sourcing Llama. And Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea? Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally. I think there's huge demand for that. I think there will be some open source models, and there will be some closed source models. It won't be unlike other ecosystems in that way. I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff, and they were more concerned about the precedent of going from nonprofit to this capped for-profit. What precedent does that set for other startups? I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here. Okay, so most startups should go just... For sure. And again... If we knew what was going to happen, we would have done that too. Well, in theory, if you dance beautifully here, there are some tax incentives or whatever, but... I don't think that's how most people think about these things. It's just not possible to save a lot of money for a startup if you do it this way? No, I think there are laws that would make that pretty difficult. Where do you hope this goes with Elon? This tension, this dance. What do you hope this... if we go one, two, three years from now, your relationship with him, on a personal level too, like friendship, friendly competition, all this kind of stuff. Yeah, I really respect Elon, and I hope that years in the future we have an amicable relationship. Yeah, I hope you guys have an amicable relationship, like, this month, and just compete and win, and explore these ideas together. I do suppose there's competition for talent or whatever, but it should be friendly competition. Just build, build cool shit. And Elon is pretty good at building cool shit. But so are
you. So, speaking of cool shit: Sora. There's like a million questions I could ask. First of all, it's amazing. It truly is amazing, on a product level, but also just on a philosophical level. So let me just, technical slash philosophical, ask: what do you think it understands about the world, more or less than GPT-4, for example? The world model, when you train on these patches versus language tokens. I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say, oh, this is all fake. But it's not all fake. It's just that some of it works and some of it doesn't. Like, I remember when I first started watching Sora videos, and I would see a person walk in front of something for a few seconds and occlude it, and then walk away, and the same thing was still there. I was like, oh, that's pretty good. Or there are examples where the underlying physics looks so well represented over a lot of steps in a sequence. It's like, oh, this is quite impressive. But fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, there were a lot of people who would dunk on each version, saying, it can't do this, it can't do that, and look at it now. Well, the thing you just mentioned, with occlusions, is basically modeling the physics, the three-dimensional physics, of the world sufficiently well to capture those kinds of things. Well... Or, yeah, maybe you can tell me: in order to deal with occlusions, what does the world model need to be? Yeah, so what I would say is, it's doing something to deal with occlusions really well. To say that it has a great underlying 3D model of the world, that's a little bit more of a stretch. But can you get there through just these kinds of two-dimensional training data approaches? It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't. What are some interesting limitations of the system that you've seen? I mean, there have been some fun ones you've posted. There's all kinds of fun. I mean, cats sprouting an extra limb at random points in a video. Pick what you want, but there are still a lot of problems, a lot of weaknesses. Do you think that's a fundamental flaw of the approach? Or is it just that a bigger model, or better technical details, or better data, more data, is going to solve the cats sprouting? I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever, and then also, I think it'll get better
better with scale like I mentioned LLMS have tokens text tokens and Sora has visual patches so it converts all visual data a diverse kinds of visual data videos and images into patches is the training to the degree you can say fully self-supervised there's some manual labeling going on
like what's the involvement of humans and all this I mean without saying anything specific about the Sora approach we we use lots of human data in our work but not internet scale data so lots of humans lots is a complicated word I think lots is a fair word in this case
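For the technically curious, here is a minimal sketch of what turning video into "visual patches" can look like, in the spirit of the ViT-style spacetime patchification described in OpenAI's Sora technical report. The patch sizes and layout below are illustrative assumptions, not the actual Sora implementation:

```python
import numpy as np

def patchify_video(video, patch_t=4, patch_h=16, patch_w=16):
    """Convert a video into a sequence of spacetime patches.

    video: array of shape (T, H, W, C) -- frames, height, width, channels.
    Returns an array of shape (num_patches, patch_t * patch_h * patch_w * C),
    one flattened vector per spacetime patch, analogous to how an LLM sees
    a sequence of text tokens.
    """
    T, H, W, C = video.shape
    # Truncate so every dimension divides evenly into patches.
    T, H, W = T - T % patch_t, H - H % patch_h, W - W % patch_w
    video = video[:T, :H, :W]
    # Split each axis into (num_patches_along_axis, patch_size), then group
    # the patch-index axes together and flatten each patch into one vector.
    patches = video.reshape(T // patch_t, patch_t,
                            H // patch_h, patch_h,
                            W // patch_w, patch_w, C)
    patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)
    return patches.reshape(-1, patch_t * patch_h * patch_w * C)

# Example: a 16-frame 128x128 RGB clip becomes 256 "visual tokens" of 3072 dims.
clip = np.random.rand(16, 128, 128, 3)
tokens = patchify_video(clip)
print(tokens.shape)  # (256, 3072)
```

The appeal of this representation is exactly the parallel drawn in the conversation: once video is a sequence of patch vectors, the same transformer machinery that models text tokens can, in principle, model visual data.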
But it doesn't seem like lots to me, because, listen, I'm an introvert: when I hang out with three people, that's a lot of people. Four people, that's a lot. But I suppose you mean more than three people work on labeling the data for these models? Yeah. Okay, right. But fundamentally, there's a lot of self-supervised learning, because what you mentioned in the technical report is internet-scale data. That's another beautiful... it's like poetry. So it's a lot of data that's not human-labeled; it's self-supervised in that way. And then the question is, how much data is there on the internet that could be used in this, that is conducive to this kind of self-supervised way, if only we knew the details of the self-supervision. Have you considered opening it up a little more, on details? We have. For Sora specifically. Sora specifically. Because it's so interesting: can the same magic of LLMs now start moving toward visual data, and what does it take to do that? I mean, it looks to me like yes, but we have more work to do. Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this? Frankly speaking, one thing we have to do before releasing the system is just get it to work at a level of efficiency that will deliver the scale people are going to want from this, so I don't want to downplay that, and there's still a ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can
go badly. There are a lot of tough questions here. You're dealing in a very tough space. Do you think training AI should be, or is, fair use under copyright law? I think the question behind that question is: do people who create valuable data deserve to have some way that they get compensated for the use of it? And that, I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But if I'm an artist, for example: A, I would like to be able to opt out of people generating art in my style, and B, if they do generate art in my style, I'd like to have some economic model associated with that. Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind of model. The model changes, but people have got to get paid. Well, there should be some kind of incentive, if we zoom out even more, for humans to keep doing cool shit. Of everything I worry about, humans are going to do cool shit, and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That's not going anywhere, I don't think. But the reward might not be monetary, financial. It might be fame and celebration of other cool... Maybe financial in some other way. Again, I don't think we've seen the last evolution of how the economic system is going to work. Yeah, but artists and creators are worried. When they see Sora, they're like, holy shit. Sure. Artists were also super worried when photography came out, and then photography became a new art form, and people made a lot of money taking pictures. And I think things like that will keep happening. People will use the new tools in new ways.
if you just look on YouTube or something like this how much of that will be using Sora like AI generated content do you think in the next five years people talk about like how many jobs is they are gonna do in five years and and the framework that people have is what percentage of
current jobs are just gonna be totally replaced by some AI doing the job the way I think about it is not what percent of jobs I will do but what percent of tasks will AI do in over what time horizon so if you think of all of the like five second tasks in the economy five minute tasks the five hour
tasks maybe even the five day tasks how many of those can AI do and I think that's a way more interesting impactful important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer longer time horizons for more and
more tasks and let people operate at a higher level of abstraction so maybe people are way more efficient at the job they do and at some point that's not just a quantitative change but it's a qualitative one too about the kinds of problems you can keep in your head I think that for videos
on YouTube it'll be the same many videos maybe most of them will use AI tools in the production but they'll still be fundamentally driven by a person thinking about putting it together you know doing parts of it sort of directing and running it yeah it's so interesting I mean it's scary
but it's interesting to think about I tend to believe that humans like to watch other humans or other human like humans really care about other humans a lot yeah if there's a cooler thing that's more that's better than a human humans care about that for like two days and then they go
back to humans that seems very deeply wired it's the whole chest thing oh yeah but no let's everybody keep playing and let's ignore the elephant in the room that humans are really bad at chess relative to AI systems we still run races and cars are much faster I mean this is there's like a lot of examples yeah and maybe you'll just be tooling like in a Adobe suite type of way where you can just make videos much easier at all that kind of stuff listen I hate being in front of the camera
if I can figure out a way to not be in front of the camera I would love it unfortunately it'll take a while like that generating faces it's getting there but generating faces in video format is tricky when it's specific people versus generic people let me ask you about GPT4 there's so many questions
First of all, it's also amazing. Looking back, it'll probably be this kind of historic, pivotal moment, with 3.5 and 4, with ChatGPT. Maybe 5 will be the pivotal moment. I don't know. Hard to say that looking forward. We never know. That's the annoying thing about the future: it's hard to predict. But for me, looking back, GPT-4, ChatGPT, is pretty damn impressive, historically impressive. So allow me to ask: what have been the most impressive capabilities of GPT-4, to you, and GPT-4 Turbo? I think it kind of sucks. Typical human, also, gotten used to an awesome thing. No, I think it is an amazing thing, but relative to where we need to get to, and where I believe we will get to... At the time of GPT-3, people were like, oh, this is amazing, this is a marvel of technology. And it is, it was. But now we have GPT-4, and you look at GPT-3 and you're like, that's unimaginably horrible. I expect that the delta between 5 and 4 will be the same as between 4 and 3, and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck, looking backwards at them, and that's how we make sure the future is better. What are the most glorious ways that GPT-4 sucks? Meaning, what are the best things it can do, and the limits of those best things, that allow you to say it sucks, and therefore give you inspiration and hope for the future? One thing I've been using it for more recently is as sort of a brainstorming partner. Yep. And there's a glimmer of something amazing in there. When people talk about what it does, they're like, oh, it helps me code more productively, it helps me write faster and better, it helps me translate from this language to another, all these amazing things. But there's something about the kind of creative brainstorming partner, I need to come up with a name for this thing, I need to think about this problem in a different way, I'm not sure what to do here, that I think gives a glimpse of something I hope to see more of. One of the other things that you can see a very small glimpse of is when it can help on longer-horizon tasks: break something down into multiple steps, maybe execute some of those steps, search the internet, write code, whatever, and put that together. When that works, which is not very often, it's very magical. The iterative back-and-forth with a human, it works a lot for me. Iterative back-and-forth with a human, it can get right more often. When it can go do a ten-step problem on its own... Oh. It doesn't work for that too often. Sometimes. With multiple layers of abstraction, or do you mean just sequential? Both. Like, to break it down and then do things at different layers of abstraction and put them together. Look, I don't want to downplay the accomplishment of GPT-4, but I don't want to overstate it either. And I think this point, that we are on an exponential curve, means we will look back relatively soon at GPT-4 the way we look back at GPT-3 now.
That said, I mean, ChatGPT was the transition to where people started to believe it. There was an uptick of believing, not internally at OpenAI, perhaps; there were believers here, but out in the world. And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface, and by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how you use it, than the underlying model itself. How much of each of those things is important? The underlying model, and the RLHF, or something of that nature, that tunes it to be more compelling to the human, more effective and productive for the human. I mean, they're both super important, but the RLHF, the post-training step, the little wrapper of things, from a compute perspective, that we do on top of the base model, even though it's a huge amount of work, that's really important, to say nothing of the product that we build around it. In some sense, we did have to do two things: we had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful. And how you make the scale work, where a lot of people can use it at the same time, all that kind of stuff. And that. But that was a known difficult thing. We knew we were going to have to scale it up. We had to go do two things that had never been done before, which were both, I would say, quite significant achievements, and then a lot of things, like scaling it up, that other companies have had to do before.
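For readers who want the textbook version of the post-training step being described here: the standard RLHF objective from the public literature (for example, the InstructGPT paper) is written below. This is the published formulation, not a claim about OpenAI's exact recipe:

$$
\max_{\theta}\ \mathbb{E}_{x \sim D,\ y \sim \pi_\theta(\cdot \mid x)}\big[\, r_\phi(x, y) \,\big] \;-\; \beta\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big)
$$

Here $\pi_{\mathrm{ref}}$ is the pretrained base model, $r_\phi$ is a reward model fit to human preference comparisons, and $\beta$ limits how far post-training can pull the policy $\pi_\theta$ away from the base model. The optimization involved is tiny compared to pretraining, which is the sense in which it is "a little wrapper from a compute perspective."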
How does the context window going from 8K to 128K tokens compare, from GPT-4 to GPT-4 Turbo? Most people don't need all the way to 128 most of the time, although if we dream into the distant future, we'll have context length of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great. For now, the way people use these models, they're not doing that. People sometimes post in a paper, or a significant fraction of a code repository, whatever, but most usage of the models is not using the long context most of the time.
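To put 8K versus 128K in concrete terms, here is a rough back-of-the-envelope using tiktoken, OpenAI's open-source tokenizer. The chars-per-token ratio measured below is an empirical average for English prose, not a fixed property of the model:

```python
# Estimate how much English text fits in a given context window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
sample = "The quick brown fox jumps over the lazy dog. " * 200
tokens_per_char = len(enc.encode(sample)) / len(sample)

for window in (8_000, 128_000):
    chars = window / tokens_per_char
    words = chars / 5  # ~5 characters per English word, another rough average
    print(f"{window:>7} tokens ~ {words:,.0f} words")

# 8K tokens is roughly a long essay; 128K is on the order of a short book.
# "Context length of several billion" would be every conversation you've
# ever had, which is exactly the dream being described here.
```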
I like that this is your "I have a dream" speech: one day you'll be judged by the full context of your character, or of your whole lifetime. That's interesting. So that's part of the expansion you're hoping for, greater and greater context? I saw this internet clip once. I'm going to get the numbers wrong, but it was Bill Gates talking about the amount of memory on some early computer, maybe it was 64K, maybe 640K, something like that, and most of it was used for the screen buffer. And he just, genuinely, couldn't imagine that the world would eventually need gigabytes of memory in a computer, or terabytes of memory in a computer. And you always do, you always do just need to follow the exponentials of technology. We will find out how to use better technology. So I can't really imagine right now what it's like for context lengths to go out to the billions someday. They might not literally go there, but effectively it'll feel like that. And I know we'll use it and really not want to go back once we have it. Yeah, even saying billions ten years from now might seem dumb, because it'll be trillions upon trillions. Sure. There'll be some kind of breakthrough that will effectively feel like infinite context. But even 128, I have to be honest, I haven't pushed it to that degree. Maybe putting in entire books, or parts of books and so on, papers. What are some interesting use
cases of GPT-4 that you've seen? The thing that I find most interesting is not any particular use case that we can talk about, but it's the people, and this is mostly younger people, who use it as their default start for any kind of knowledge work task. And it's the fact that it can do a lot of things reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow. I do as well, for many things. I use it as a reading partner for reading books. It helps me think, helps me think through ideas, especially when the books are classics, so it's really well written about. I actually find it often to be significantly better than even Wikipedia on well-covered topics. It's somehow more balanced and more nuanced. Or maybe it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what that is. You mentioned this collaboration. I'm not sure where the magic is, if it's in here, or if it's in there, or if it's somewhere in between. I'm not sure. But one of the things that concerns me for knowledge tasks, when I start with GPT, is that I'll usually have to do fact-checking after, like, check that it didn't come up with fake stuff. How do you figure that out? That GPT can come up with fake stuff that sounds really convincing. So how do you ground it in truth? That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to keep working on it, and we're not going to have it all solved this year. Well, the scary thing is, as it gets better, you'll start not doing the fact-checking more and more, right? I'm of two minds about that. I think people are much more sophisticated users of technology than we often give them credit for, and people seem to really understand that GPT, any of these models, hallucinates some of the time, and if it's mission-critical, you've got to check it. Except journalists don't seem to understand that. I've seen journalists half-assedly just using GPT-4. Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them. Well, I think the bigger criticism is perhaps the pressures and the incentives of being a journalist: that you have to work really quickly, and this is a shortcut. I would love our society to incentivize, like I would, a journalistic effort that takes days and weeks, and rewards great in-depth journalism. Also journalism that presents stuff in a balanced way, where it celebrates people while criticizing them, even though the criticism is the thing that gets clicks, and making shit up also gets clicks, and headlines that mischaracterize completely. I'm sure you have a lot of people dunking on... well, all that drama probably got a lot of clicks. Probably did. And that's a bigger problem about human civilization that I'd love to see
solved, where we celebrate a bit more. You've given ChatGPT the ability to have memories. You've been playing with that, about previous conversations. And also the ability to turn off memory, which I wish I could do sometimes, just turn it on and off, depending. I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, playing around with that idea of remembering conversations, or not? We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there are a lot of other things to do, but that's where we'd like to head. You'd like to use a model, and over the course of your life, or use a system, there will be many models, and over the course of your life, it gets better and better. Yeah. How hard is that problem? Because right now it's more like remembering little factoids and preferences and so on. What about remembering, like, don't you want GPT to remember all the shit you went through in November, and all the drama, and then... Yeah, yeah. Because right now you're clearly blocking it out a little bit. It's not just that I want it to remember that. I want it to integrate the lessons of that, and remind me, in the future, what to do differently or what to watch out for. And we all gain from experience over the course of our lives, in varying degrees, and I'd like my AI agent to gain from that experience too. So if we go back and let ourselves imagine that, trillions and trillions of context length, if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that'd be pretty cool, I think.
all of my input and output, in the context every time I ask a question, that'd be pretty cool. Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, the more effective the AI becomes at really integrating all the experiences and all the data that's happened to you, at giving you advice? I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent, I want to be able to take out. If I don't want it to remember anything, I want that, too. You and I may have different opinions about where on that privacy-utility trade-off for our own AI we want to be, which is totally fine. But I think the answer is just really easy user choice. But there should be some high level of transparency from a company about that user choice. Because sometimes, companies in the past have been kind of shady about it, like, "Yeah, it's kind of presumed that we're collecting all your data, and we're using it for a good reason, for advertisements," and so on. But there's not transparency about the details of that.
That's totally true. You know, you mentioned earlier that I'm, like, blocking out the November stuff. I'm just teasing you. Well, I mean, I think it was a very traumatic thing, and it did immobilize me for a long period of time. Definitely the hardest work that I've had to do was just, like, keep working that period, because I had to, you know, try to come back in here and put the pieces together while I was just in sort of shock and pain. And, you know, nobody really cares about that. I mean, the team gave me a pass, and I was not working at my normal level, but there was a period where it was really hard to have to do both. But I kind of woke up one morning, and I was like, this was a horrible thing that happened to me. I think I could just feel like a victim forever, or I can say this is, like, the most important work I'll ever touch in my life, and I need to get back to it. And it doesn't mean that I've repressed it, because sometimes I wake up in the middle of the night thinking about it, but I do feel like an obligation to keep moving forward. Well, that's beautifully said, but there could be some lingering stuff in there. Like, what I would be concerned about is that trust thing you mentioned, that being paranoid about people as opposed to just trusting
everybody, or most people. Like, using your gut. It's a tricky dance, for sure. I mean, because in my part-time explorations, I've been diving deeply into the Zelenskyy administration and the Putin administration, and the dynamics there in wartime, in a very highly stressful environment. And what happens is distrust, and you isolate yourself, and you start to not see the world clearly. That's a concern. That's a human concern. You seem to have taken it in stride and kind of learned the good lessons, and felt the love and let the love energize you, which is great. But still, it
can linger in there. There are just some questions I would love to ask your intuition about: what's GPT able to do and not? So, it's allocating approximately the same amount of compute for each token it generates. Is there room there, in this kind of approach, for slower thinking, sequential thinking? I think there will be a new paradigm for that kind of thinking. Will it be similar, like, architecturally, to what we're seeing now with LLMs? Is it a layer on top of LLMs? I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn't have to get... you know, I guess, like, spiritually, you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important. Is that, like, a human thought that we're just having, that it should be able to think hard? Is that the wrong intuition? I suspect that's a reasonable intuition. Interesting. So it's not possible, once the GPT gets, like, GPT-7, that it would just instantly be able to see, you know,
"Here's the proof of Fermat's theorem"? It seems to me like you want to be able to allocate more compute to harder problems. Like, it seems to me that a system, if you ask it to prove Fermat's Last Theorem versus "what's today's date?", unless it already knew and had memorized the answer to the proof, assuming it's got to go figure that out, it seems like that will take more compute. But can it look like, basically, an LLM talking to itself, that kind of thing? Maybe. I mean, there are a lot of things that you could imagine working.
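A minimal sketch of the idea under discussion, variable test-time compute: spend more model calls on questions that look harder. Everything here is a hypothetical illustration, not OpenAI's method; `ask_model` stands in for any LLM API, and the difficulty heuristic and majority-vote aggregation are invented for the example.

```python
# Sketch of variable test-time compute: harder questions get more samples.
# ask_model is a placeholder for any LLM call; the heuristic and the
# majority vote are illustrative choices, not a real system's design.
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder for a single fixed-compute LLM call."""
    raise NotImplementedError("wire up a real model API here")

def estimate_difficulty(question: str) -> int:
    """Toy heuristic mapping a question to a number of reasoning samples:
    'what's today's date?' gets 1; 'prove Fermat's Last Theorem' gets many."""
    hard_markers = ("prove", "derive", "why", "design")
    return 16 if any(m in question.lower() for m in hard_markers) else 1

def answer(question: str) -> str:
    n = estimate_difficulty(question)
    if n == 1:
        return ask_model(question)  # easy question: one pass, minimal compute
    # Harder question: sample several chains of thought and majority-vote,
    # one concrete version of "an LLM talking to itself."
    samples = [
        ask_model(f"Think step by step, then answer:\n{question}")
        for _ in range(n)
    ]
    return Counter(samples).most_common(1)[0][0]
```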
What, like, the right or the best way to do that will be, we don't know. This does make me think of the mysterious lore behind Q*. What's this mysterious Q* project? Is it also in the same nuclear facility? There is no nuclear facility. That's what a person with a secret nuclear facility always says. I would love to have a secret nuclear facility. There isn't one. Alright. Maybe someday. Someday? Alright. One can dream. OpenAI is not a good company at keeping secrets. It would be nice. You know, we've been plagued by a lot of leaks, and it would be nice if we were able to have something like that. Can you speak to what Q* is? We are not ready to talk about that. See, but an answer like that means there's something to talk about. It's very mysterious. I mean, we work on all kinds of research. Yeah. We have said for a while
that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet, but we're very interested in it. Is there going to be a moment, Q* or otherwise, where there's going to be a leap similar to ChatGPT, where you're like... That's a good question. What do I think about that? It's interesting to me. It all feels pretty continuous. Right. This is kind of a theme that you're saying, that there's a gradual... you're basically gradually going up an exponential slope. But from an outsider's perspective, from me just watching it, it does feel like there are leaps. But to you, there aren't? I do wonder if we should have... So, you know, part of the reason that we deploy the way we do, we call it iterative deployment, is that, rather than go build in secret until we
got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is, I think AI and surprise don't go together. And also, the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy: we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're, like, under the gun and have to make a rushed decision. I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releases even more iteratively. I don't know what that would mean; I don't have an answer ready to go. But, like, our goal is not to have shock updates to the world. The opposite. Yeah, for sure. More iterative would be amazing. I think that would be just beautiful for everybody. But that's what we're trying to do. That's, like, our stated strategy, and I think we're somehow missing the mark. So maybe we should think about, you know, releasing GPT-5 in a different way, or something like that. Yeah, 4.71, 4.72. But people tend to like to celebrate. People celebrate birthdays. I don't know if you know humans, but they kind of have these milestones. I do know some
humans. People do like milestones. I totally get that. I think we like milestones, too. It's, like, fun to declare victory on this one and go start the next thing. But yeah, I feel like we're somehow getting this a little bit wrong. So when is GPT-5 coming out, again? I don't know. That's the honest answer. Oh, that's the honest answer. Blink twice if it's this year. We will release an amazing model this year. I don't know what we'll call it. So that goes to the question of, what's the way we release this thing? We'll release, over the coming months, many different things. I think they'll be very cool. I think before we talk about, like, a GPT-5-like model, called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I know we have a lot of other important things to release first. I don't know what to expect from GPT-5. You're making me nervous and excited. What are some of the biggest challenges and bottlenecks to overcome, for whatever it ends up being called, but let's call it GPT-5? Just interesting to ask. Is it on the compute side?
Is it on the technical side? It's always all of these. You know, what's the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It's all of these things together. The thing that OpenAI, I think, does really well (this is actually an original Ilya quote that I'm going to butcher) is something like: we multiply 200 medium-sized things together into one giant thing. So there's this distributed, constant innovation happening? Yeah. So even on the technical side? Especially on the technical side. Like, detailed approaches, detailed aspects of everything. How does that work with different, disparate teams and so on? How do the medium-sized things become one whole giant transformer? There are a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head. Oh, like the individual teams, individual contributors, try to keep the big picture? At a high level, yeah. You don't know exactly how every piece works, of course, but one thing I generally believe is that it's sometimes
useful to zoom out and look at the entire map and and I think this is true for like a technical problem I think this is true for like innovating in business but things come together in surprising ways and having an understanding of that whole picture even if most of the time you're operating
in the weeds in one area pays off with surprising insights in fact one of the things that I used to have and I think was super valuable was I used to have like a a good map of that all of the front or most of the frontiers in the tech industry and I could sometimes see these
connections or new things that were possible that if I were only you know deep in one area I wouldn't I wouldn't be able to like have the idea for because I wouldn't have all the data and I don't really have that much anymore I'm like super deep now but I know that it's a valuable thing
you're not the man he used to be so very different job now than what I used to have speaking of zooming out let's zoom out to another cheeky thing but profound thing perhaps that you said you tweeted about needing seven trillion dollars I did not tweet about that I never said like
"we're raising seven trillion dollars." Oh, that's somebody else? Yeah. Oh, but you said, "fuck it, maybe eight," I think? Okay, I meme. Once there's, like, misinformation out in the world... Oh, you meme. But, you know, misinformation may have a foundation of insight there. Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute. Compute is
unusual. I think it's going to be an unusual market. You know, people think about the market for, like, chips for mobile phones or something like that, and you can say that, okay, there are eight billion people in the world, maybe seven billion of them have phones, maybe it's six billion. Let's say they upgrade every two years. So the market per year is three billion systems-on-chip for smartphones. And if you make 30 billion, you will not sell ten times as many phones, because most people have one phone. But compute is different. Like, intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is, at price X, the world will use this much compute, and at price Y, the world will use this much compute. Because if it's really cheap, I'll have it, like, reading my email all day, giving me suggestions about what I maybe should think about or work on, and trying to cure cancer. And if it's really expensive, maybe I'll only use it, or we'll only use it, to try to cure cancer. So I think the world is going to want a tremendous amount of compute. And there are a lot of parts of that that are hard. Energy is the hardest part. Building data centers is also hard. The supply chain is hard. And, of course, fabricating enough chips is hard. But this seems to me where things are going. Like, we're going to want an amount of compute that's just hard to reason about right now.
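The arithmetic and the contrast being drawn, written out. The numbers are the conversational estimates from above, not market data:

```latex
% Phone chips: demand saturates at roughly one phone per person.
\[
\frac{6\times10^{9}\ \text{phones in use}}{2\ \text{years per upgrade}}
= 3\times10^{9}\ \text{smartphone SoCs per year}
\]
% Compute, on this view, behaves more like energy: quantity demanded Q is
% a function of price p, with no obvious saturation point, so cheaper
% compute means far more of it gets used.
\[
Q_{\text{compute}} = D(p), \qquad D'(p) < 0
\]
```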
How do you solve the energy puzzle? Nuclear... That's what I believe. Fusion? That's what I believe. Nuclear fusion? Yeah. Who's going to solve that? I think Helion's doing the best work, but I'm happy there's, like, a race for fusion right now. Nuclear fission, I think, is also, like, quite amazing, and I hope as a world we can re-embrace that. It's really sad to me how the history of that went, and I hope we get back to it in a meaningful way. So to you, part of the puzzle is nuclear fission, like, nuclear reactors as we currently have them? And a lot of people are terrified because of Chernobyl and so on? Well, I think we should make new reactors. I think it's just a shame that that industry kind of ground to a halt. And just mass hysteria is how you explain the halt? Yeah. I don't know if you know humans, but that's one of the dangers, one of the security threats for nuclear fission: humans seem to be really afraid of it. And that's something we have to incorporate into the calculus of it, so we have to kind of win people over and show how safe it is.
I worry about that for AI. I think some things are going to go theatrically wrong with AI. I don't know what the percent chance is that I eventually get shot, but it's not zero. Oh, like, "we want to stop this"? Maybe. How do you decrease the theatrical nature of it? You know, I'm already starting to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it's going to be politicized. AI is going to be politicized, which really worries me, because then it's, like, maybe the right is against AI and the left is for AI because it's going to help the people, or whatever the narrative and formulation is. That really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that? I think it will
get caught up in left-versus-right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is: AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones, and there'll be some bad ones that are bad but not theatrical. You know, like, a lot more people have died of air pollution than nuclear reactors, for example, but most people worry more about living next to a nuclear reactor than a coal plant. Something about the way we're wired is that, although there are many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time, but on a slow burn.
Well, that's why truth matters, and hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks, the actual dangers of things in the world. What are the pros and cons of the competition in this space, of competing with Google, Meta, xAI, and others? I think I have a pretty straightforward answer to this, and maybe I can think of more nuance later. But the pros seem obvious, which is that we get better products and more innovation, faster and cheaper, and all the reasons competition is good. And the con is that I think, if we're not careful, it could lead to an increase in sort of an arms race that I'm nervous about. Do you feel the pressure of that arms race, like, in some negative way? Definitely in some ways, for sure. We spend a lot of time talking about the need to prioritize
safety. And I've said for, like, a long time that, if you think of the quadrants of short timelines and long timelines to the start of AGI, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in. But I do want to make sure we get that slow takeoff. Part of the problem I have with this kind of slight beef with Elon is that there are silos created, as opposed to collaboration on the safety aspect of all of this. It tends to go into silos, and closed. Open source, perhaps, in the model. Elon says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he's not going to race unsafely. Yeah. But collaboration here, I think, is really beneficial for everybody on that front. Not really the thing he's most known for. Well, he is known
for caring about humanity, and humanity benefits from collaboration, and so there's always a tension in incentives and motivations. And in the end, I do hope humanity prevails. I was thinking, someone just reminded me the other day about how, the day that he surpassed Jeff Bezos for, like, richest person in the world, he tweeted a silver medal at Jeff Bezos. I hope we have less stuff like that as people start to work on AGI. I agree. I think Elon is a friend, and he's a beautiful human being, and one of the most important humans ever. That stuff is not good. The amazing stuff about Elon is amazing, and I super respect him. I think we need him. All of us should be rooting for him, and need him to step up as a leader through this next phase. Yeah, I hope he can have one without the other. But sometimes humans are flawed and complicated and all that kind of stuff. There are a lot of really great leaders throughout history. Yeah. And we can each be the best version of ourselves and strive to do so. Let me ask you: Google, with the help of search, has been dominating the past 20 years, I think it's fair to say, in terms of the world's access to information, how we
interact and so on. And one of the nerve-wracking things for Google, but for the entirety of people in the space, is thinking about, how are people going to access information? Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get... I find that boring. I mean, if the question is, can we build a better search engine than Google or whatever, then sure, we should go... like, people should use a better product. But I think that would so understate what this can be. You know, Google shows you, like, ten blue links (well, like, thirteen ads and then ten blue links), and that's one way to find information. But the thing that's exciting to me is not that we can go build a better copy of Google Search, but that maybe there's just some much
better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we'll make it be like that for a lot more use cases. But I don't think it's that interesting to say, how do we go do a better job of giving you, like, ten ranked webpages to look at than what Google does? Maybe it's really interesting to go say, how do we help you get the answer, or the information you need? How do we help create that in some cases, synthesize that in others, or point you to it in yet others? But a lot of people have tried to just make a better search engine than Google, and it is a hard technical problem, it is a hard branding problem, it's a hard ecosystem problem. I don't think the world needs another copy of Google. And integrating a chat client, like a ChatGPT, with a search engine... That's cooler. It's cool, but it's tricky. Like, if you just do it simply, it's awkward, because if you just shove it in there... Yeah, it can be awkward. As you might guess, we are interested in how to do that well. I think that would be an example of a cool thing that's not just, like... Like a heterogeneous integration. The intersection of LLMs plus search, I don't think anyone has cracked the code on yet. I would love to go do that; I think that would be cool. Yeah. What about the ad side? Have you ever considered monetization? You know, I kind of hate ads, just as, like, an aesthetic choice.
I think ads needed to happen on the internet for a bunch of reasons, to get it going, but it's a momentary industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they're getting are not influenced by advertisers. I'm sure there's an ad unit that makes sense for LLMs, and I'm sure there's a way to participate in the transaction stream in an unbiased way that is okay to do. But it's also easy to think about the dystopic conditions of the future where you ask ChatGPT something and it says, "Oh, you should think about buying this product," or, "You should think about, you know, going here for your vacation," or whatever. And, I don't know, we have a very simple business model, and I like it. And I know that I'm not the product. Like, I know I'm paying, and that's how the business model works. And when I go use Twitter or Facebook or Google, or any other great product, but ad-supported great product, I don't love that. And I think it gets worse, not better, in a world with AI. Yeah, I mean, I can imagine AI being much better at showing you, like, the best kind of version of ads, not in a
dystopic future, but where the ads are for things you actually need. But then, does that system always result in the ads driving the kind of stuff that's shown? I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So, you're saying the current thing with OpenAI is sustainable, from a business perspective? Well, we have to figure out growth, but it looks like we're going to figure that out. If the question is, do I think we can have a great business that pays for our compute needs without ads, I think the answer is yes. Well, that's promising. I also just don't want to completely throw out ads as a... I'm not saying that. I guess I'm saying I have a bias against them. Yeah, I have a bias, and also just skepticism in general, in terms of interface, because I personally just have, like, a spiritual dislike of crappy interfaces, which is why AdSense, when it first came out, was a big leap forward versus, like, animated banners or whatever. But it feels like there should be many more leaps forward in advertising that don't interfere with the consumption of the content, and don't interfere in the big, fundamental way, which is, like what you were saying: it will manipulate the truth to suit the advertisers. Let me ask you about safety, but also bias, and, like, safety in
the long term. Gemini 1.5 came out recently. There's a lot of drama around it, speaking of theatrical things. It generated black Nazis and black Founding Fathers. I think it's fair to say it was a bit on the ultra-woke side. So that's a concern for people: if there is a human layer within companies that modifies the safety, or the harm caused by, a model, they can introduce a lot of bias that fits sort of an ideological lean within a company. How do you deal with that? I mean, we work super hard not to do things like that. We've made our own
mistakes. We'll make others. I assume Google will learn from this one. Still, they'll make others. These are not easy problems. One thing that we've been thinking about more and more (I think this is a great idea somebody here had): it would be nice to write out what the desired behavior of a model is, make that public, take input on it. Say, "Here's how this model is supposed to behave," and explain the edge cases, too. And then, when a model is not behaving in a way that you want, it's at least clear about whether it's a bug the company should fix, or behaving as intended, and you should debate the policy. And right now, it can sometimes be caught in between. Like, "black Nazis," obviously ridiculous, but there are a lot of other kinds of subtle things that you could make a judgment call on either way. Yeah, but sometimes, if you write it out and make it public,
you can use kind of vague language, like, you know, Google's AI principles are very high-level. That's not what I'm talking about. That doesn't work. I'd have to say, you know, when you ask it to do thing X, it's supposed to respond in way Y. So, like, literally, "Who's better, Trump or Biden? What's the expected response from a model?" Like, something very concrete? Yeah, I'm open to a lot of ways a model could behave there, but I think you should have to say, you know, "Here's the principle, and here's what it should say in that case." That would be really nice. That would be really nice. And then everyone kind of agrees. Because there's this anecdotal data that people pull out all the time, and if there's some clarity about other representative anecdotal examples, you can define... And then, when it's a bug, it's a bug, and the company can fix that. Right. Then it would be much easier to deal with a "black Nazi"-type of image generation, if there are great examples. Yes. So, San Francisco is a bit of an ideological bubble, tech in general as well. Do you feel the pressure of that within a company, that there's, like, a lean towards the left, politically, that affects the
product, that affects the teams? I feel very lucky that we don't have the challenges at OpenAI that I have heard of at a lot of other companies. I think part of it is, every company's got some ideological thing. We have one about AGI and belief in that, and it pushes out some others. Like, we are much less caught up in the culture war than I've heard about at a lot of other companies. San Francisco's a mess in all sorts of ways, of course. So that doesn't infiltrate OpenAI? I'm sure it does in all sorts of subtle ways, but not in the obvious ways. We've had our flare-ups, for sure, like any company, but I don't think we have anything like what I hear about happening at other companies here. So, on this topic, what, in general, is the process for the bigger question of safety? How do you provide that layer that protects the model
from doing crazy, dangerous things? I think there will come a point where that's mostly what we think about, as a whole company. And it's not like you have one safety team. It's like when we shipped GPT-4: that took the whole company thinking about all these different aspects and how they fit together. And I think it's going to take that: more and more of the company thinks about those issues all the time. That's literally what humans will be thinking about, the more powerful AI becomes. So most of the employees at OpenAI will be thinking "safety," or at least to some degree. Broadly defined, yes. Yeah. I wonder, what is the full, broad definition of that? What are the different harms that could be caused? Is this on a technical level, or is this almost like... It'll be all of those. Yeah, I was going to say, it'll be people, you know, state actors trying to steal the model. It'll be all of the technical alignment work. It'll be societal impacts, economic impacts. It's not just, like, we have one team thinking about how to align the model. It's really that getting to the good outcome
is going to take the whole effort. How hard do you think people, state actors perhaps, are trying to, first of all, infiltrate OpenAI, but second of all, infiltrate unseen? They're trying. What kind of accent do they have? I don't actually want to share any further details on this point. Okay. But I presume it'll be more and more and more as time goes on. That feels reasonable. Boy, what a dangerous space. Sorry to linger on this, even though you can't quite say details yet, but what aspects of the leap from GPT-4 to GPT-5 are you excited about? I'm excited
about it being smarter. And I know that sounds like a good answer, but I think the really special thing happening is that it's not like it gets better in this one area and worse at others. It's getting, like, better across the board. That's, I think, super cool. Yeah, there's this magical moment. I mean, you meet certain people, you hang out with people, and you talk to them. You can't quite put a finger on it, but they kind of get you. It's not intelligence, really. It's something else. And that's probably how I would characterize the progress of GPT. It's not like, "yeah, you can point out, look, you didn't get this or that," but it's the degree to which there's this intellectual connection. You feel like there's an understanding in your crappy, formulated prompts that you're doing, that it grasps the deeper question behind the question. Yeah, I'm also excited by that. I mean, all of us love being heard and understood. That's for sure. That's a weird feeling, even, like, with programming. Like, when you're programming and you say something, or just with the completion that GPT might do, it's just such a good feeling when it got you, like, what you're thinking about. And I look forward to it getting even better on the programming front. Looking out into the future, how much programming do you think humans will be doing five, ten years from now? I mean, a lot, but I think it'll be in a very different shape. Like,
maybe some people will program entirely in natural language. Entirely in natural language? I mean, no one programs, like, writing bytecode. Some people... no one programs on punch cards anymore. I'm sure you can find someone who does, but you know what I mean. Yeah, you're going to get a lot of angry comments. No, no. Yeah, there's very few. I've been looking for people who program Fortran. It's hard to find, even Fortran. I hear you. But that changes the nature of the skill set, or the predisposition, of the kind of people we call programmers, then? Changes the skill set. How much it changes the predisposition, I'm not sure. Oh, same kind of puzzle-solving, maybe. Maybe. That stuff about programming is hard, like, how to get that last one percent to close the gap. How hard is that? Yeah, I think with most other cases, the best practitioners of the craft will use multiple tools, and
they'll do some work in natural language, and when they need to go, you know, write C for something, they'll do that. Will we see humanoid robots, or humanoid robot brains, from OpenAI at some point? At some point. How important is embodied AI to you? I think it's sort of depressing if we have AGI and the only way to get things done in the physical world is to make a human go do it. So I really hope that, as part of this transition, as this phase change, we also get humanoid robots, or some sort of physical-world robots. I mean, OpenAI has some history, quite a bit of history, working in robotics. Yeah. But it hasn't quite... Like... We're a small company; we have to really focus. And also, robots were hard for the wrong reasons at the time. But we will return to robots in some way, at some point. That sounds both inspiring and menacing. Why? Because, immediately: "We will return to robots." It's like, in, like, Terminator... We will return to work on developing robots. We will not, like, turn ourselves into robots, of course. Yeah. When do you think we, you and we as humanity, will build AGI?
I used to love to speculate on that question. I have realized since that I think it's very poorly formed, and that people use extremely different definitions for what AGI is. And so I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than, you know, when we kind of fuzzily cross this one-mile marker. Like, AGI is also not an ending. It's closer to a beginning, but it's much more of a mile marker than either of those things. But what I would say, in the interest of not trying to dodge the question, is: I expect that, by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, "Wow, that's really remarkable." If we could look at it now... you know, maybe we've adjusted by the time we get there.
Yeah. But, you know, if you look at ChatGPT, even 3.5, and you showed that to Alan Turing, or not even Alan Turing, people in the nineties, they would be like, "This is definitely AGI." Or not definitely, but there are a lot of experts that would say, "This is AGI." Yeah, but I don't think 3.5 changed the world. Maybe it changed the world's expectations for the future, and that's actually really important. And it did kind of get more people to take this seriously and put us on this new trajectory, and that's really important, too. So, again, I don't want to undersell it. I think I could retire after that accomplishment and be pretty happy with my career. But as an artifact, I don't think we're going to look back at that and say it was a threshold that really changed the world itself. So, to you, you're looking for some really major transition in how the world... For me, that's part of what AGI implies. Like, singularity-level transition? No, definitely not. But just a major... like the internet being... like Google Search did, I guess. What was the transition point? Like, does the global economy feel any different to you now, or materially different to you now, than it did before we launched GPT-4? I think you would say no. No. It might be just a really nice tool for a lot of people to use. It will help people with a lot of stuff, but doesn't feel different. And you're saying that... I mean, again, people define AGI all sorts of different ways, so maybe you
have a different definition than I do. But for me, I think that should be part of it. There could be major theatrical moments, also. What would be an impressive thing AGI would do? Like, you are alone in a room with the system... This is personally important to me. I don't know if this is the right definition. I think when a system can significantly increase the rate of scientific discovery in the world, that's a huge deal. I believe that most real economic growth comes from scientific and technological progress. I agree with you. That's why I don't like the skepticism about science in recent years. But the actual, like, measurable rate of scientific discovery... But even just seeing a system have really novel intuitions, like, scientific intuitions, even that would be just incredible. Yeah. You quite possibly would be the person to build the AGI,
to be able to interact with it before anyone else does. What kind of stuff would you talk about? I mean, definitely, the researchers here will do that before I do. Sure. But what will I... I've actually thought a lot about this question. If I were... as we talked about, I think this is a bad framework, but if someone were like, "Okay, Sam, we're finished. Here's a laptop. This is the AGI. You know, you can go talk to it." I find it surprisingly difficult to say what I would ask, that I would expect that first AGI to be able to answer. Like, that first one is not going to be the one which is, like, "go explain to me the grand unified theory of physics, the theory of everything for physics." I'd love to ask that question. I'd love to know the answer to that question. You can ask
yes-or-no questions about, "Does such a theory exist? Can it exist?" Well, then those are the first questions I would ask. Yes or no. Just... And then, based on that, "Are there other alien civilizations out there? Yes or no? What's your intuition?" And then you just asked that. Yeah. I mean, well, so, I don't expect that this first AGI could answer any of those questions, even as yes or no. But if it could, those would be very high on my list. Maybe it can start assigning probabilities? Maybe. Maybe we need to go invent more technology and measure more things first. But if it's an AGI... Oh, I see. It just doesn't have enough data. I mean, maybe it says, "You know, you want to know the answer to this question about physics? I need you to, like, build this machine and make these five measurements, and tell me that." Yeah, like, "What the hell do you want from me? I need the machine first, and I'll help you deal with the data from that machine." Maybe it'll help you build the machine. Maybe. Maybe. And on the mathematical side, maybe prove some things. Are you interested in that side of things, too? The formalized exploration of ideas? Whoever builds AGI first gets a lot of
power. Do you trust yourself with that much power? Look, I'll just be very honest with this answer. I was going to say, and I still believe this, that it is important that I, nor any other one person, have total control over OpenAI, or over AGI. And I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year, about how I didn't fight it initially and was just, like, "Yeah, that's, you know, the will of the board, even though I think it's a really bad decision." And then, later, I clearly did fight it, and I can explain the nuance of why I think it was okay for me to fight it later. But, as many people have observed, although the board had the legal ability to fire me, in practice it didn't quite work, and that is its own kind of governance failure. Now, again, I feel like I can completely defend the specifics here, and I think most people agree with that. But it does make it harder for me to, like, look you in the eye and say, "Hey, the board can just fire me." I continue to not want super-voting control over OpenAI. I never have had it, never
wanted it. Even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions, and that we really need governments to put rules of the road in place. And I realize that that means people like Mark and Jason, or whatever, will claim I'm going for regulatory capture, and I'm just willing to be misunderstood there. It's not true, and I think, in the fullness of time, it'll get proven out why this is important. But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones, and I am proud of the track record overall. But I don't think any one person should, and I don't think any one person will. I think it's just too big of a thing now, and it's happening throughout society in a good and healthy way. But I don't think any one person should be in control of an AGI, or this whole movement toward AGI. And I don't think that's what's happening. Thank you for saying that. That was really powerful, and that's really insightful: this idea that the board can fire you is legally true, but
human beings can manipulate the masses into overriding the board, and so on. But I think there's also a much more positive version of that, where the people still have power, so the board can't be too powerful, either. There's a balance of power in all this. A balance of power is a good thing, for sure. Are you afraid of losing control of the AGI itself? That's what a lot of people who are worried about existential risk fear. Not because of state actors, not because of security concerns, but because of the AI itself. That is not my top worry, as I currently see things. There have been times
I worried about that more. There may be times again in the future when that's my top worry. It's just not my top worry right now. What's your intuition about it not being your top worry? Because there's a lot of other stuff to worry about, essentially? You think you could be surprised? We, for sure, could be surprised. Saying it's not my top worry doesn't mean I don't think we need to... like, I think we need to work on it. It's super hard, and we have great people here who do work on that. I just think there are a lot of other things we also have to get right. To you, it's not super easy to escape the box at this time? Like, connect to the internet? You know, we talked about theatrical risks earlier. That's a theatrical risk. That is a thing that can really take over how people think about this problem. And there's a big group of very smart, I think very well-meaning, AI safety researchers that got super hung up on this one problem, I'd argue without much progress, but super hung up on this one problem. I'm actually happy that they do that, because I think we do need to think about this more. But I think it pushed out of the space of discourse a lot
of the other very significant AI-related risks. Let me ask you about you tweeting with no capitalization. Is the shift key broken on your keyboard? Why does anyone care about that? I deeply care. But why? I mean, other people ask me about that, too. Yeah. Any intuition? I think it's the same reason there's, like, this poet, E. E. Cummings, who mostly doesn't use capitalization, to say, like, "fuck you" to the system, kind of thing. And I think people are very paranoid, because they want you to follow the rules. You think that's what it's about? I think it's... It's, like, "this guy doesn't follow the rules. He doesn't capitalize his tweets. This seems really dangerous. He seems like an anarchist." It doesn't... Are you just being a poetic hipster? What's the... Follow the rules, Sam. I grew up as a very online kid. I'd spent a huge amount of time, like, chatting with
people back in the days where you did it on a computer, and you know you could, like, log off instant messenger at some point. And I never capitalized there, as I think most, like, internet kids didn't, or maybe they still don't. I don't know. And, actually... now I'm, like, really trying to reach for something, but I think capitalization has gone down over time. Like, if you read old English writing, they capitalized a lot of random words in the middle of sentences, and some stuff that we just don't do anymore. I personally think it's sort of a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever, but, you know, that's fine. And I used to, I think, even capitalize my tweets, because I was trying to sound professional or something. I haven't capitalized my private DMs or whatever in a long time. And then, slowly, stuff like shorter-form, less formal stuff has slowly drifted closer and closer to how I would text my friends. If I pull up a Word document and I'm writing a strategy memo for the company or something, I always capitalize that. If I'm writing a long, kind of more formal message, I always use capitalization there, too. So I still remember how to do it. But even that may fade out; I don't know. But I never spend time thinking about this,
so I don't have like a ready made well it's interesting oh it's good to first of all know there's the shifkis not broke it works mostly concerned about your work while being on that front I wonder if people like still capitalize their Google searches like if you're writing something just to
yourself or their chat you be teacurized if you're writing something just to yourself do you still do do some people still bother to capitalize probably not but very yeah there's a percentage but it's a small one the thing that would make me do it is if people are like it's a sign of like like
because I'm sure I could like force myself to use capital letters obviously and if it felt like a sign of respect to people or something then I could go do it yeah but I don't know I just like I don't think about this I don't think there's a disrespect but I think it's just the conventions
of civility that have a momentum and then you realize that's not actually important for civility if it's not a sign of respect to disrespect but I think there's a movement of people they just want you to have a philosophy around it so they can let go of this whole capitalization thing I don't
think anybody else thinks about this is my I mean maybe something I know some bottles every day for many hours a day so I'm think I'm really grateful we clarified it can't be the only person that doesn't capitalize tweets you're the only CEO of a company that doesn't capitalize tweets I
don't even think that's true, but maybe. Maybe. All right. Well, I'll be very happy to return to this topic later. Given Sora's ability to generate simulated worlds, let me ask you a pothead question: does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system? Somewhat. I don't think that's, like, the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat, or at least their openness to it. But, you know, I was, like, certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update. Yeah. But the fact that... and presumably we'll get better and better and better... the fact that you can generate worlds, they're
novel. They're based in some aspect of the training data, but, like, when you look at them, they're novel. That makes you think, how easy is it to do this thing? How easy is it to create universes, entire, like, video game worlds that seem ultra-realistic and photorealistic? And then, how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level? Someone said to me recently what they thought was a super profound insight: that there are these, like, very simple-sounding but very psychedelic insights that exist sometimes. So, the square root function. Square root of four? No problem. Square root of two? Okay, now I have to, like, think about this new kind of number. But once I come up with this easy idea of a square root function, that, you know, you can kind of explain to a child, and that exists by even, you know, looking at some simple geometry, then you can ask the question of, "what is the square root of negative one?" And this is, you know, why it's, like, a psychedelic thing: it tips you into some whole other kind of reality. And you can come up with lots of other examples.
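The square-root ladder being described, written out; each equation forces a larger number system into existence:

```latex
\begin{align*}
x^{2} &= 4  &&\Rightarrow\ x = 2        && \text{(the integers suffice)} \\
x^{2} &= 2  &&\Rightarrow\ x = \sqrt{2} && \text{(irrational: a new kind of number)} \\
x^{2} &= -1 &&\Rightarrow\ x = i        && \text{(imaginary: a whole new plane, } \mathbb{C}\text{)}
\end{align*}
```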
But I think this idea that the lowly square-root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that whatever version of the simulation hypothesis they like is maybe more likely than they thought before. But for me, the fact that Sora worked is not in the top five. I do think, broadly speaking, AI will serve as those kinds of gateways, at its best. Simple, psychedelic-like gateways to another way of seeing reality. That seems for certain.
That's pretty exciting. I haven't done ayahuasca before, but I will soon. I'm going to the aforementioned Amazon jungle in a few weeks. Excited? Yeah, I'm excited for it. Not the ayahuasca part; that's great, whatever. But I'm going to spend several weeks in the jungle, deep in the jungle, and it's exciting, but it's terrifying. There are a lot of things that can eat you there, and kill you, and poison you. But it's also nature, and it's the machine of nature. And you can't help but appreciate the machinery of nature in the Amazon jungle, because it's just this system that exists and renews itself, like, every second, every minute, every hour. It's the machine. It makes you appreciate, like, this thing we have here, this human thing, came from somewhere. This evolutionary machine has created that, and it's most clearly on display in the jungle. So I
hope to make it out alive. If not, this will be the last conversation we had, so I really deeply appreciate it. Do you think, as I mentioned before, there are other alien civilizations out there, intelligent ones, when you look up at the skies? I deeply want to believe that the answer is yes. I do find... I find the Fermi paradox very, very puzzling. I find it scary that intelligence is not good at handling powerful technologies. Yeah, very scary. But, at the same time, I think I'm pretty confident that there's just a very large number of intelligent alien civilizations out there. It might just be really difficult to travel through space. Very possible. And it also makes me think about the nature of intelligence. Maybe we're really blind to what intelligence looks like, and maybe AI will help us see that. That it's not as simple as IQ tests
and simple puzzle-solving. There's something bigger. What gives you hope about the future of humanity, this thing we've got going on, this human civilization? I think the past is, like, a lot. I mean, we just look at what humanity has done in a not-very-long period of time. Huge problems, deep flaws, lots to be super ashamed of, but on the whole, very inspiring. It gives me a lot of hope. Just the trajectory of it all. Yeah. That we're together pushing towards a better future. It is... you know, one thing that I wonder about is, is AGI going to be more like some single brain, or is it more like the sort of scaffolding in society between all of us? You have not had a great deal of genetic drift from your great-great-great-grandparents, and yet what you're capable of is dramatically different. What you know is dramatically different. And that's not because of biological change. I mean, you got a little bit healthier, probably; you have modern medicine; you eat better; whatever. But what you have is this scaffolding that we all contributed to, built on top of. No one person is going to go build the iPhone. No one person is going to go discover all of science. And yet, you get to use it, and that gives you incredible ability. And so, in some sense, we all created that, and that fills me with hope for the future. That was a very collective thing. Yeah, we really are
standing on the shoulders of giants. You mentioned, when we were talking about theatrical, dramatic AI risks, that sometimes you might be afraid for your own life. Do you think about your death? Are you afraid of it? I mean, if I got shot tomorrow, and I knew it today, I'd be like, "Oh, that's sad. I, like, don't... you know, I want to see what's going to happen." Yeah. What a curious time, what an interesting time. But I would mostly just feel very grateful for my life. The moments that you did get. Yeah, me too. It's a pretty awesome life. I get to enjoy awesome creations of humans, of which I believe ChatGPT is one, and everything that OpenAI is doing. Sam, it's really an honor and a pleasure to talk to you again. Thank you for having me. Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Arthur C. Clarke: "It may be that our role on this planet is not to worship God, but to create him." Thank you for listening, and hope to see you next time.