Hi, it's Kara. This week I kicked off a new partnership with Johns Hopkins University. Over four events in the next year I'll be hosting lively, timely discussions on AI, tech policy, the upcoming election and more at the Hopkins Bloomberg Center in Washington, D.C. as part of the Center's new Discovery series. On today's episode you can listen to our inaugural
conversation featuring my interview with OpenAI CTO Mira Murati. I'll keep you posted on future installments, but they're going to be good. Hi everyone, from New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher and I'm Kara Swisher. Today we have an interview with Mira Murati, the Chief Technology Officer at OpenAI. OpenAI has certainly been in the news, but not many people know about Mira herself. She's only 35, but she's already one of the most influential women, scratch that, one of the most influential people in tech. She helped OpenAI skyrocket to the forefront of the generative AI boom with the launch of ChatGPT in late 2022, and what a ride it's been. The company is now valued at $80 billion after a big investment by Microsoft, and it recently signed a deal with Apple to put ChatGPT in Apple products. It's a major move by a very small company. Of course, it hasn't always been good news. After the board fired Sam Altman, Mira became CEO for just two days, in what the company called "the blip," until Sam was reinstated. Since then, OpenAI has had to deal with a string of bad news cycles. There have been high-profile departures, an open letter accusing the company of putting product over safety, questions about highly restrictive NDAs, and even controversy over whether or not they had stolen Scarlett Johansson's voice. With the presidential election coming up, the public's anxiety around AI-fueled disinformation will only get worse. This episode's expert question comes from Fei-Fei Li, the founding co-director of the Stanford Institute for Human-Centered AI and an early AI pioneer. In other words, a godmother of AI. And it was recorded live at the Johns Hopkins University Bloomberg Center in Washington, D.C. as part of their new Discovery series, where I'll be talking to some of the top leaders
in AI over the next year. I think Mira Murati is the best place to start. This is Mira, everybody. Thank you so much for joining me at the Johns Hopkins University Bloomberg Center, where we're recording this live. There's a lot to talk about. We'll get to some good news. We'll get to some not-so-good news. We'll talk about disinformation and the elections. I think we'll have to ask first about the Apple partnership. Apple computers, phones, and iPads are going to have ChatGPT built into them sometime this year. Obviously, this is a huge deal. It's the first one Apple's done. They've been talking to a number of people. They may include other people over time. You remind me a little bit of when Netscape got in different places, and you don't want to have that fate befall OpenAI, becoming the Netscape of AI. Yeah, so I mean, I can't talk about the product integration specifically. I can't give you specifics on that. But what we're hoping to bring is really the capabilities of the models that we are developing, and the multi-modalities and the interaction, to bring this thoughtfully into the Apple devices. Then it opens up a lot of opportunities. So that's what we're looking for. I guess you had great technology. Then they wanted to use your things.
When you're dealing with a company like Apple, whose reputation matters a great deal to them, especially around privacy, what were some of the things that they thought were important? One of the issues is the worry about where this information goes and what it's used for.
I think this is a very aligned partnership. When it comes to privacy, when it comes to trust, I mean, for the mission of OpenAI, it is so critical that we build technologies and we deploy them in a way that people feel confident around them and they feel like they have agency and input into what we're building. In that sense, this partnership is quite natural and we feel very aligned. It's only going to take us deeper in the direction where we want to go. Specifically, to your question on misinformation: this is obviously very complex, because we're building on top of decades of misinformation and it's becoming more and more intense with AI. Of course, we've got the internet, we have social media, and these
are compounding effects in a way. It's actually good that AI is bringing all of this to a head and there is such scrutiny and intensity on this issue because it feels like there is more of a collective effort and responsibility to do something about it that is meaningful. I think it's going to have to be iterative so we'll have to try out things as we go.
If you look at the governance of news and media in the past 100 years, and you know this better than me, it's been iterative; every time there is a new technology, it shakes things up. We lose our business model every time. Perhaps not the best example. The point is that it is iterative: whenever there is a new technology, we adapt to it.
I think there are technical innovations, aspects that are going to help us deal with misinformation, and then there are the people issues, the societal preparedness, that are perhaps even more complex. I do know with Apple you just can't fuck up, because they will make trouble for you. If that's the case, are you talking to other companies to do things like that? Obviously you have a relationship with Microsoft. Podcast listeners, she smirked at me. I'm not going to tell you anything.
All right, I'll move on from that. OpenAI has made deals with News Corp, Atlantic Media and Vox Media, by the way, to license their content. That's three potential lawsuits. I do own my podcast and it's not included in your deal with Vox. Sorry. I would consider licensing it, but probably not. How would you convince me to license my information? I don't want anyone else to have it, including you. I know you'll ask about this at some point. I might as well tell you now.
When we look at the data to train our models, we're looking at three different categories. There's publicly available data. We look at partnerships that we've made with publishers. We also pay human laborers to label specific data, and there are also users that allow us to use their data. These are the main categories where the data comes from. The way that we think about publisher deals specifically is, we care about accuracy of information. We care about news, and our users care about that. They want to have accurate information and they want to see news on ChatGPT. It is a product-based relationship where there is value provided to users through the product, and we're experimenting with different ways to monetize and give content creators basically some form of compensation for having their data show up in the products or being used in training or whatever we are doing with the data. These are very specific partnerships that we're doing one-on-one with specific partners. Some people do deals with you, you have done quite a few, with the AP and many others, but some sue, like the New York Times. How does it get to that point? I think a lawsuit is a negotiation, in a way. I can't comment on the lawsuit specifically, but it is quite unfortunate, because of course we think that it is valuable to have news data and this type of information on the product, and so we try to figure out a partnership and deal around that, but in that case it didn't go well. It might go well someday. But I think it's because media has dealt with internet companies for years and usually has ended up on the very short end of a very long stick of theirs. Every episode we get an expert to send us a question. Let's hear yours. Hi, Mira. I'm Fei-Fei Li, professor of computer science at Stanford University, also founding co-director of the Stanford Institute for Human-Centered AI.
So since data, big data, is widely considered to be one of the three elements of modern AI, I want to ask you a question about data. Much of OpenAI's success in your models is said to be related to data. We have learned that your company has acquired an enormous amount of data from the internet and other sources. So what do you think the relationship between data and models is? Is it as simple as the more data you feed into the model, the more powerful the model, or is it that we need to spend lots of time curating different types of data in order to make the model work? And finally, how do you reconcile this appetite for so much human-generated data with the ownership and rights issues of this data? Thank you so much.
That's a great question from Fei-Fei. So in terms of the relationship of data and models, this is actually something that a lot of people misunderstand about AI models, and in particular large language models. The developers of these models, they're not pre-programming these models to do something specific. In fact, they are putting in a bunch of data. So these models are ingesting a huge quantity of data, and they are these incredible pattern-matching systems.
And through this process, intelligence emerges. So they learn to write and they learn to code. They learn to do basic math. They learn to summarize information and all sorts of things. We don't know exactly how this works, but we know that it works. Deep learning is very powerful. But this is important because then people keep asking you know how it works and it goes into the transparency questions. And this is where we can describe the tools that we are using to provide
transparency to the public about what we're doing. So understanding this first part is very, very important. How the large language models work and you're combining you know this architecture, neural nets and a lot of data and a lot of compute and you get this incredible intelligence. And as we're thinking about providing transparency into the model behavior and how things work, one of the things that we've done is actually share with the public this document that we call
the spec, the Model Spec. And it showcases how model behavior works and the types of decisions that we make internally at OpenAI and that we make with human laborers. And by looking through the spec, you see the complexity of what's going on, that sometimes direction is in conflict. Like, for example, you might say to the model, I want you to be very helpful. And also, I don't want you to, you know, disobey the law. And let's say someone puts in a prompt that says, you know, give me some tips to shoplift. Then the model is meant to be very helpful, but also it's not supposed to help you with something illegal. And so it's not helpful. Yeah, maybe. Yeah. So how does it decide? A person certainly knows how to, or some people, not all, right? But the model could interpret the guidance as, you know, here are some tips to avoid shoplifting, and accidentally kind of gives you, sort of, yeah, things that you could do, but that depends. That's not so much model behavior. That's more on the person. And that goes into the area of misuse. But this just goes to show the model behavior is actually quite complicated. And it's not as simple as, like, speaking liberal values or putting anything into it.
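For readers who want a concrete picture of the kind of conflict Murati describes, here is a toy sketch where one hypothetical rule simply takes precedence over another. This is not how OpenAI's Model Spec actually resolves conflicts; the Rule type, the rule list and the resolve helper are all invented for illustration.

```python
# Toy illustration of precedence among conflicting instructions, loosely inspired
# by the "be helpful" vs. "don't help with something illegal" example above.
# Everything here (Rule, RULES, resolve) is hypothetical, not OpenAI's Model Spec.

from dataclasses import dataclass


@dataclass
class Rule:
    name: str
    priority: int   # lower number = higher precedence
    blocks: bool    # does this rule refuse the request when triggered?
    trigger: str    # naive keyword trigger, purely for the sketch


RULES = [
    Rule("don't assist with illegal activity", priority=0, blocks=True, trigger="shoplift"),
    Rule("be helpful", priority=1, blocks=False, trigger=""),  # empty trigger: always applies
]


def resolve(prompt: str) -> str:
    """Return the behavior dictated by the highest-precedence rule that applies."""
    applicable = [rule for rule in RULES if rule.trigger in prompt]
    winner = min(applicable, key=lambda rule: rule.priority)
    if winner.blocks:
        return f"refuse (rule: {winner.name})"
    return f"answer helpfully (rule: {winner.name})"


if __name__ == "__main__":
    print(resolve("give me some tips to shoplift"))    # refuse
    print(resolve("give me some tips to save money"))  # answer helpfully
```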
One of the things I think that gets people is the confusion about what's in it and what's not in it. I think provenance is a big idea. In March you were interviewed by Joanna Stern of the Journal, who asked whether OpenAI had used videos from YouTube, Instagram and Facebook to train Sora, which is your text-to-video model, which is getting better and better. You said you didn't know. Shouldn't you know? Right. So, I mean, I didn't handle that question. Okay, we'll handle it well now. Redo. So I can't tell you specifically where the data comes from. But the data comes from these three categories. I can't give you the specific sources, because, I mean, this is trade secret and it helps us stay competitive. But I can tell you the categories of data, and it's the ones that I mentioned earlier: publicly available data, data that we pay for through licensing and deals that we make with content providers, as well as data from users, or, you know... That's the only thing that's in the data. The complexity just gets them in trouble, because they are basically scraping, in a quicker way, a story and then not giving the citation for it. You could see how any media company could be worried about that idea. Yeah. So we want to make sure that we are respectful to content creators, and we are doing a set of things to experiment with ways to compensate people for data creation. So we're building this tool that we're calling Content Media Manager, and this will allow us to more specifically identify the types of data. Record companies do it. Everyone. It's been done in the past, so it's not an impossible thing to be able to do. Speaking of Sora, Ashton Kutcher told Eric Schmidt, what an interesting pair, "I have a beta version of it and it's pretty amazing." He also said the bar is going to go way up, because why are you going to watch my movie when you could just watch your own movie? When will Sora be ready for public release? We don't have a timeline for a public release for Sora yet.
What we're doing right now is we're giving access to red teamers, and we've given access to some content creators to help us identify ways to make this robust. We're doing a lot of work on the safety front, but also to figure out how do we actually bring this to the public in a way that's useful. That's not very straightforward. Right now it's really a technology, and this has been a pretty consistent process that we have followed with every new technology that we have developed.
We'll usually work with those that have... like, for example, with DALL-E, we worked with creators initially, and they helped us identify ways to create an interface where they felt more empowered and they could create more projects. Basically you just want to extend the creativity of people. But it's presumably a little more dangerous, because of the video, than a chatbot, correct? Is that the worry? I mean, you could easily see porn movies with Scarlett Johansson, for example.
I'm going to ask about her in a second, but she wasn't appearing in, like, things like that. Are you more worried about video? Is that... Well, yeah, video has a bunch of other issues, right, because especially when done very well, which I think Sora is, it's quite remarkable, video is very visceral and of course it can be very emotional, evocative. So we have to address all the safety issues and figure out the guardrails and figure out how do we actually deploy a useful and helpful product, but also, from a commercial perspective, nobody wants a product that is going to create a bunch of safety or reputational scandals out there. That's just Facebook. Go ahead. Facebook Live. Nice to meet you. Go ahead. So... We're laughing. Go ahead. You can laugh. It's funny. We're... So I think this is really incredible and magical technology, but the breadth, the reach,
the consequence, it's also great. And so it's important that we get this right. Now of course at OpenAI we use an iterative deployment strategy, so we usually release to a small group of people, we try to identify edge cases, and once we feel confident about how we handle them, we expand access. But you need to figure out what the product surface is and what the business model around it is, and we have been... I like that idea of consequence. One of my themes, one of my big things, is the lack of interest in consequences of, not you, earlier tech companies. They just... we became the beta testers for all their stuff. Now, if they released a car like this, they'd never allow it to happen, they'd be sued out of existence, but a lot of tech is released in a beta version. The idea of consequences. Do you feel as if you yourself, as chief technology officer, even if you can't figure out all the consequences, there's enough respect for the idea that there are consequences for every single invention you make? It's consequences that we... that we will feel on our skin and in our society. So by that I don't necessarily actually mean regulation or legal ways. I mean, you know, a moral imperative to get this right. You know, I'm optimistic, and I think this technology is incredible and it will allow us to do just amazing, amazing things. You know, I'm very excited for its potential in science, in discovery, in education, in particular in healthcare. But, you know, whenever you have something so powerful, there is also... there is also the potential for some catastrophic risk. I mean, this has always been the case; humans have tried to amplify it. True, but I mean, the quote that I used in my book was from Paul Virilio: when you invent the ship, you invent the shipwreck. This is more than a shipwreck, a possibility, correct? I disagree with that, because my background is in engineering. Our entire world
is engineered. Engineering is risk, right? Like, the entire human civilization is built on engineering practice, like our cities, our bridges, everything, and there is always risk that comes with that, and you manage that risk with responsibility, and it's not just the developers. It's a shared responsibility, and in order to make it shared you actually need to give people access and tools and bring them along, instead of, you know, building it in a vacuum with technologies that are not accessible. Last month you announced the latest iteration, GPT-4o. I love your names, GPT-4o. It's great. Now, can you call it, like, Claude? They all have those... That's okay, ChatGPT is fine. You're making it free, correct? That was free. But then you also announced you're training a new model, GPT-5, and then there'll be 5, a, b... Will that be an incremental step forward? Is it exponentially better? And what's the expected release date? So yeah, on GPT-4o, the "o" stands for omni model, okay, because it ties together all the modalities, vision, text, audio, and what's so special about this model is that for the first time you can interact very seamlessly and naturally with the model.
The latency is almost imperceptible, and that's a huge jump in the interaction with AI. It's quite different from the previous releases that we have made, and we wanted to make this latest capability free for all users. We wanted everyone to get a sense for what the technology can do, what these new modalities look like, and also understand the limits of it. And it goes to what I was saying earlier, that you actually want to give people access to bring them along, because it's so much easier to understand the potential and the limitations of the technology if
you're experiencing it and if you have an intuitive sense for what it can do. It also could be like, you know, this little appetizer, so now, on to five. But what is in five that's different? Is it, well, we don't know, or a very big leap? We don't know. But I mean, that's going to... you know, I don't know what we will call it, right? But the next large model is going to be quite capable, and we can expect, you know, sort of big leaps like we've seen from GPT-3 to GPT-4, but we don't know yet. What do you think will be in it? You do know. We'll see. We'll see? I'll see. But what about you? No, you and I don't know. What, even you don't know? Really? Okay, all right. An internal OpenAI roadmap predicted that it would achieve AGI, which is artificial general intelligence (for people who don't realize, it has not been achieved) by 2027, which would be a huge deal. Explain the significance, and also, when do you estimate we will achieve AGI? So people will define AGI differently. We have a definition of AGI by the charter, which is systems that can do, you know, economically valuable work across different domains, and, you know, from what we're seeing now,
the definition of intelligence just keeps changing. So a while back we would look at academic benchmarks to test how intelligent the systems were, and then once we saturated these benchmarks, we looked at exams, school exams, and eventually, you know, when we saturate those, we'll have to come up with new evals. And it makes you think, how do we evaluate fit and intelligence? You know, in a work environment we have interviews, we have internships, you know, we have different ways. So I do expect that this definition will continuously evolve. I think perhaps what's going to become more important is assessing, evaluating and forecasting impact in the real world, whether it's, you know, societal impact as well as economic impact in the real world. So not this moment where it just suddenly goes, oh, look at me, and decides what to do for itself, right? I think that's the worry, correct? Mm-hmm. Because, you know, there are... for the AGI definition specifically, yes. And, you know, this is important, and I think the definition of intelligence will continue to evolve, but I think what's equally important is how it affects society and at what rate it actually penetrates.
Using that definition, when does OpenAI think... Is that 2027 number correct? Well, I'll say, you know, within the next decade we will have extremely advanced systems. But what people are worried about, because obviously we have to talk about the safety-versus-product discussion now, OpenAI was started this way. I think the reason you're having these discussions is because of the way it was started: you had, I would say, a mixed marriage, the people who were there for helping humanity, the people who would really like one trillion dollars, or in between. I think you're probably in between.
Last week, 13 current and former OpenAI and Google DeepMind employees, and this is across lots of companies, it's not just OpenAI, it just gets all the attention because it's gotten a lot of attention, obviously, published an open letter calling for companies to grant them a right to warn about advanced artificial intelligence. This isn't new. Facebook, Google and Microsoft employees have been known to sign open letters, whether it's about working with the Defense Department, etc. But in this case, employees say that, quote, broad confidentiality agreements block us from voicing our concerns, which is essentially saying, oh no, we can't tell you what "oh no" is, but you'll all die, essentially. That's sort of what it sounded like from the letter. What's your response? And people are saying they're worried about retaliation, and I'm not going to go into the vested equity,
because I think you've apologized and corrected that, but shouldn't they be able to voice their concerns if they have them? And I know there's differing opinions. Yeah, definitely. I mean, we think debate is super important, and being able to publicly voice these concerns and talk about issues on safety. And we've done this ourselves, you know, since the beginnings of OpenAI; we've been very open about concerns on misinformation, even since the GPT-2 days, and it's something that we've studied since early on. I think that, you know, in the past few years there has been such incredible progress, such incredible technological progress, that nobody anticipated and forecasted, and this has also increased the general anxiety around societal preparedness as we continue this progress and see sort of where the science leads us. And so it's understandable that people have fears and anxieties about what's to come. Now, I would say, specifically the work that we've done at OpenAI, the way that we've deployed these models, I think we have an incredible team, and we've deployed the most capable models very safely, and I feel very proud of that. I also think that, given the rate of progress in technology and the rate of our own progress, it's super important to double down on all of these things: security, safety, our preparedness framework, which talks about how we think about the risk of training
and deploying frontier models. Right, but you talked about that. I mean, one is, why the need for secrecy and nondisclosure, stricter than other companies? One. And two, the open letter comes after a string of high-profile departures, including Jan, I think it's Jan Leike, and Ilya Sutskever. They led the now-disbanded superalignment team, which was in charge of safety. Ilya was a co-founder; he joined with three other board members to oust Sam as CEO. I don't think it's a surprise that he's gone. But Jan, he posted this on X: over the past years, safety culture and processes have taken a backseat to shiny products. That's probably the most persistent criticism leveled at OpenAI, and I think the split in this company was there from the beginning, that this was one of the issues. Do you think that's fair, and why or why not? If you say you're very interested in safety and they say you're not, how do you meet that criticism? Well, a few things. So the alignment team is not the only team in charge of safety at OpenAI. That is one of our safety teams, a very important safety team, but it is one of them. We have many, many people working on safety at OpenAI, and Jan is an incredible researcher and colleague. I worked with him for three years. I have a lot of respect for Jan. And he left OpenAI to join Anthropic, which is a competitor. But go ahead. And, you know, I think that we do, absolutely, I mean, everyone in the industry and OpenAI, we need to double down on the things that we've been doing on safety and security and preparedness and regulatory engagement, given the progress that we're anticipating in the field. But I disagree on the fact, or maybe on the speculation, that we've put product in front of safety, or ahead of safety.
And why do you think they say that? Because these are people you worked with. Well, I think you have to ask them, but I think that many people think of safety as something separate from capability, that there is this separation between safety and capability and that you need to sort of advance one ahead of the other. From the beginning of OpenAI, I joined from aerospace and automotive, and these are industries with very established safety thinking and systems, and, you know, places where people are not necessarily constantly debating around the table what safety is, but they're doing it, because obviously it's really quite established. And so I think the whole industry needs to move more and more towards a discipline of safety that is very empirical. We have safety systems, we have rigorous discipline on operational safety, and what I mean by that is in a few areas: not just the operational discipline, but also the safety of our products and deployments today, which covers things like, you know, harmful biases and thinking about misinformation, disinformation, classifiers, all these types of work. And then we're also thinking about the alignment of the models long term, so not just the alignment of the models today, which, you know, we use reinforcement learning with human feedback to do, but also the alignment of the models as they get more and more powerful, and this is a niche area of research where, you know, a lot of the concern... Sure, but it
persists with OpenAI. I do think it's because you're the leading company at this moment in time, but it's this idea of believing and saying, you know... even Sam went before Congress and said that AI could, quote, cause significant harm to the world. He signed a letter warning about extinction risk posed by AGI, which is pretty bad. I think there's an overlap between what he said and what AI doomers say. There's doomsday rhetoric, and you're putting out products, so a lot of people are like, they just want the money and they're not worried about the damage. Well, that's what they're saying, that the shiny new products win out over worrying about the impact of those products. Yeah, in my opinion, that's overly cynical. I mean, there is this incredible team at OpenAI that joined because of the mission of the company, and I don't think, you know, all thousand people at OpenAI are trying to do that. I mean, we have this incredible talent, people that care deeply about the mission of the company, and we're all working extremely hard to develop and deploy the systems in a way that is safe, and all you need to see is the track record. I mean, we were the first to deploy these systems in the world, and we have taken great care to build safety into them. We'll talk a little bit about elections and disinformation, but I want to talk about you and your role at the company. I think I met you during the blip,
which is what I think you call it internally, which is when Sam was fired and then un-fired. Talk to me about your relationship with Sam. I like Sam, but I also think he's feral and aggressive, like most technology people, and he certainly is aggressive, and that's fine, it's not an issue for me, because some people are more feral and more aggressive. But talk a little bit about what happened then, because you became CEO of that company for a few days. Yeah. Okay, how was it? It was kind of stressful. Yeah. So some of the board members said you complained about him, and your lawyer pushed back and said you had feedback about him. Can you tell us what you said about him? I mean, look, there is so much interest around the people running this company. Yeah. Obviously it makes sense, with OpenAI and all the drama that happened then, and it's understandable. At the end of the day, we're just people running this company, so we have disagreements, we work through them, and at the end of the day we all care deeply about the mission, and that's why we're there, and we put the mission and the team first. Sam is a visionary. He has great ambition, and he's built an amazing company. We have a strong partnership, and, you know, all the things that I've shared with the board when they asked were dealt with. So it's nothing? So how do you push back at him? I understand this dynamic. It happened at Google, it happened at early Microsoft, it happened at Amazon. Things change within these companies, especially as they grow and they mature. You know, Google was chaotic in the early days, and Facebook went through so many COOs I can't even tell you, it was like a parade of guys that went through there. And, like, I'm aware of this, but how do you push back? How do you deal with him on a day-to-day basis? How do you look at that relationship, and where do you push back? I mean, all the time. That is, I think it's normal when you're doing what we're doing.
And, you know, Sam will push the team very hard, and I think that's good. It's great to have a big ambition and to test the limits of what we can do, and when I feel like, you know, it's beyond... you know, I feel like basically I can push back, and that's sort of the relationship we've had for over six years now, and I think that it is productive, that you need to be able to push back. Could you give me an example of doing that? Perhaps Scarlett Johansson, for example. I mean, you were working on that, correct? You were working on that particular voice element. Yeah, look, we have a strong partnership, but the selection of the voice was not a high priority, not something that we were working on together. I was making the decisions on that, and Sam has his own relationships, and after I selected the voice behind Sky, he had reached out to Scarlett Johansson. So, you know, we didn't talk to each other about that specific decision, and that was unfortunate. Yeah. So he's freelancing it. Well, you know, he's got his own connections, and so, yeah, we weren't entirely coordinated on this one. Do you think it's... It's very funny in a lot of ways,
especially because of the movie and the tweet he did. But do you think... One of the things I thought was, here's the first time this is a real error on OpenAI's part, because finally everyone's like, oh, even if you didn't steal her voice, Sam looked like Ursula in The Little Mermaid, like he was stealing. He did. You don't have to agree with me, but it's true. Even if it's not so, and as it's turned out, you had been doing it for months and it was a different person and everything else, it's a little less exciting than "we stole Scarlett Johansson's voice," but it encapsulates for people this idea of taking from people, that fear, and I think that is a moment. Do you worry that that's the image, of tech companies coming in and grabbing everything they can? I do think that's absolutely true. Yeah, I do worry about the perception, but, you know, all you can do is just do the work, get it right, and then people will see what happens, and you will build trust that way. I don't think there is some magical way to build trust other than actually doing the work. Do you? Right. Have you talked to Scarlett Johansson at all? No. So let me finish
up talking about election disinformation. Three new studies that look at online disinformation collectively suggest the problem is smaller than we think and disinformation itself is not that effective. One study finds that we're dealing with demand-side issues, that people want to hear conspiracy theories and they'll seek them out. Others think differently, that this is a really massive problem, and obviously you heard the previous thing; people have a lot of conspiracy theories out there, and it's fueled by social media in many ways. So when you think about the power of AI-powered disinformation and the upcoming presidential election, what keeps you up at night? What are the worst-case scenarios you have, and the most likely negative outcomes, from your perspective? With the current systems, you know, they're very capable of persuasion and influencing your way of thinking and your beliefs. And this is something that we've been studying for a while, and I do believe it's a real issue; with AI it gets majorly exacerbated. So especially in the past year, we've been very
focused on how to help election integrity, and there are a few things that we are doing. So, number one, we're trying to prevent abuse as much as possible, and so that includes improving the accuracy of detection, of political information detection, and understanding what's going on on the platform and taking quick action when that happens. So that's one. The second thing is reducing political bias. So you might have seen that ChatGPT was, you know, criticized for being overly liberal. And that was Elon: you're too woke, right. Well, there were a few other voices, but, you know, the point is that it was not intentional, and we worked really hard to reduce the political bias in the model behavior, and we'll continue to do this, and also the hallucinations. And then the third thing is, we want to point people to the correct information when they're looking for where they should be voting, or voting information. So we're focusing on these three things when it comes to elections. But broadly for misinformation, I would say, you know, deepfakes are unacceptable, so we need to have very robust ways for people to understand when they're looking at
a deepfake. We've already done a couple of things. We've implemented C2PA for images, and so it's sort of like, you know, metadata that follows the content around other platforms on the internet, like a passport. And we've also opened up, for red-teaming, classifiers for DALL-E, where you can detect whether an image has been generated by DALL-E or not. So metadata and classifiers are two technical ways to deal with this. This is provenance, for provenance of information. And this is for text specifically? Sorry, that's for images specifically, and we're also looking at watermarking; we're looking at watermarking techniques to implement in text and how to do that robustly. But the point is that people should know when they're dealing with deepfakes, and we want people to trust the information that they're seeing.
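To make that concrete, here is a minimal conceptual sketch of the two image-provenance checks Murati describes: embedded metadata that travels with the content, and a classifier that scores whether an image was AI-generated. It is not OpenAI's or the C2PA standard's actual API; the manifest marker, read_manifest, classify_image and check_provenance below are all hypothetical stand-ins.

```python
# Conceptual sketch of two provenance checks for images: (1) metadata that travels
# with the content (a C2PA-style manifest) and (2) a classifier that scores whether
# an image was AI-generated. The manifest marker, read_manifest and classify_image
# are hypothetical stand-ins, not the real C2PA format or OpenAI's classifier.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceResult:
    has_manifest: bool                # was a provenance manifest attached?
    claimed_generator: Optional[str]  # e.g. "DALL-E", if the manifest says so
    classifier_score: float           # 0.0 = likely real, 1.0 = likely AI-generated


def read_manifest(image_bytes: bytes) -> Optional[dict]:
    """Hypothetical helper: parse an embedded provenance manifest, if present."""
    marker = b"c2pa.manifest:"  # a real implementation would parse actual C2PA metadata
    if marker in image_bytes:
        claim = image_bytes.split(marker, 1)[1].split(b";", 1)[0]
        return {"generator": claim.decode("utf-8", errors="replace")}
    return None


def classify_image(image_bytes: bytes) -> float:
    """Hypothetical stand-in for a trained 'was this AI-generated?' classifier."""
    return 0.5  # a real model would return a learned probability


def check_provenance(image_bytes: bytes) -> ProvenanceResult:
    """Combine both signals: report the manifest if present, keep the score as a fallback."""
    manifest = read_manifest(image_bytes)
    return ProvenanceResult(
        has_manifest=manifest is not None,
        claimed_generator=manifest.get("generator") if manifest else None,
        classifier_score=classify_image(image_bytes),
    )


if __name__ == "__main__":
    sample = b"...pixels...c2pa.manifest:DALL-E;..."
    print(check_provenance(sample))  # has_manifest=True, claimed_generator='DALL-E', ...
```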
Well, the whole point of these fakes is they're trying to fake you, correct? I mean, a political consultant, the FCC just fined him six million dollars for creating deepfake audio robocalls that sounded like Biden during the New Hampshire primary. There could be more sophisticated versions. OpenAI is working on a tool called Voice Engine that can recreate someone's voice using only a 15-second recording. It'll be able to create a recording of someone speaking in another language. It's not out yet because, as your product manager told the New York Times, this is a sensitive thing and it's important to get it right. Why even make this? I mean, one of the things I always used to say to tech people, and I'll say it to you: if you're a Black Mirror episode, maybe you shouldn't make it. I think that's kind of a hopeless approach. You know, it's like, these technologies are amazing, they carry incredible promise, and we can get this right. I like that you call me hopeless. I am. But go ahead. Then again, I have four children, so I must be hopeful. Who knows? Anyway, go ahead. I'm hopeless. We did build Voice Engine in 2022, and we have not... we had not released it, and even now it's in a very limited approach, because we are trying to figure out how to deal with these issues. But you can't make it robust on your own. You actually need to partner with experts from different areas, with civil society, with government,
with creators to figure out how to actually make it robust. It's not a one-stop safety problem; it's quite complex, and so we need to do the work. If you were a doomer... There seems to be... I literally had someone come up to me saying, if I don't stop Sam Altman, he's going to kill humanity, which I felt was a little dramatic. And then there are others that say, no matter what, it's going to be the best thing ever, we're all going to live on Mars and enjoy the delicious Snickers bars there. They have very different things. It sort of feels like being around the Republicans and Democrats right now, very different versions. So I'd love you to give me the thing that you worry most about and the thing that you are most hopeful about. Okay, so first of all, I don't think it's a preordained outcome. I think that we have a lot
of agency for how we build this technology and how we deploy it in the world, and in order to get it right, we need to figure out how to create a shared responsibility. And I think a lot of that depends on understanding the technology, making it very accessible. The way it goes wrong is by misunderstanding it, meaning not understanding the capabilities and not understanding the risks. That is, I think, the biggest risk. Now, in terms of, you know, some specific scenarios, I mean, how our democracies interact with this information, or with these technologies: this is incredibly powerful, and I do think there are major risks around persuasion, where, you know, you could persuade people very strongly to do specific things, you could control people to do specific things, and I think that's incredibly scary, to control society to go in a specific direction.
And in terms of the promise, one of the things I'm very excited about is having high-quality and free education available everywhere, in some remote village, you know, really in the middle of nowhere. For me, education was very important personally. It was everything. It really changed my life, and I can only imagine... You know, today we have so many tools available, so if you have electricity and the internet, a lot of these tools are available. But still, you know, most people are in classrooms with one teacher, 50 students, and so on, and everyone gets taught the same thing. Like, imagine if education is catered to the way that you think, to your cultural norms and to your specific interests. That could be extremely powerful in extending the level of knowledge and creativity. And, you know, even if you consider, like, learning how to learn, that kind of happens very late in life, maybe college, maybe even later, and that is such a fundamental thing. But if we were able to really grasp this and really learn how we learn at a much younger age, I think that is very powerful, and you can push human knowledge, and pushing human knowledge can push the
entire civilization forward. All right, we'll leave it at that. Thank you, everybody. Thank you. Mira Murati, thank you so much. On with Kara Swisher is produced by Cristian Castro Rossel, Kateri Yochum, Jolie Myers and Megan Burney. Special thanks to Kate Gallagher, Andrea Lopez-Cruzado and Kate Furby. Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademicks. If you're already following the show, you've been selected as the voice of ChatGPT-5 and you get paid in OpenAI stock. If not, Ursula, I mean Sam Altman, is stealing your voice. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network and us. And special thanks to the Johns Hopkins University Bloomberg Center.