Hey, Bilawal here. This episode is a bit different. Today I'm interviewing Helen Toner, a researcher who works on AI regulation. She's also a former board member at OpenAI. In my interview with Helen, she reveals for the first time what really went down at OpenAI late last year when the CEO Sam Altman was fired. And she makes some pretty serious criticisms of him. We've reached out to Sam for comment, and if he responds, we'll include that update at the end of the episode. But first,
let's get to the show. I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything. The OpenAI saga is still unfolding. So let's get up to speed. In case you missed it, on a Friday in November 2023, the board of directors at OpenAI fired Sam Altman. This ouster remained a top news item over that weekend, with the board saying that he hadn't been, quote, consistently candid in his communications, unquote. The Monday after,
Microsoft announced that they had hired Sam to head up their AI department. Many OpenAI employees rallied behind Sam and threatened to join him. Meanwhile, OpenAI announced an interim CEO and then, a day later, plot twist: Sam was rehired at OpenAI. Several of the board members were removed or resigned and were replaced. Since then, there's been a steady fallout. On May 15th, 2024, just last week as of recording this episode, OpenAI's chief scientist, Ilya Sutskever, formally resigned.
Not only was Ilya a member of the board that fired Sam, he was also part of the superalignment team, which focuses on mitigating the long-term risks of AI. With the departure of another executive, Jan Leike, many of the original safety-conscious folks in leadership positions have either departed OpenAI or moved on to other teams. So what's going on here? Well, OpenAI started as a non-profit in 2015, self-described as an artificial intelligence research company. They had one mission: to create
AI for the good of humanity. They wanted to approach AI responsibly, to study the risks up close and to figure out how to minimize them. This was going to be the company that showed us AI done right. Fast forward to November 17th, 2023, the day Sam was fired, and OpenAI looked a bit different. They'd released DALL-E, and ChatGPT had taken the world by storm. With hefty investments from Microsoft, it now seemed that OpenAI was in something of a tech arms race with Google. The release
of ChatGPT prompted Google to scramble and release their own chatbot, Bard. Over time, OpenAI became closed AI. Starting in 2020 with the release of GPT-3, OpenAI stopped sharing their code. And I'm not saying that was a mistake. There are good reasons for keeping your code private. But OpenAI somehow changed, drifting away from a mission-minded non-profit with altruistic goals to a run-of-the-mill tech company shipping new products at an astronomical pace.
This trajectory shows you just how powerful economic incentives can be. There's a lot of money to be made in AI right now. But it's also crucial that profit isn't the only factor driving decision-making. Artificial general intelligence, or AGI, has the potential to be very, very disruptive. And that's where Helen Toner comes in. Less than two weeks after OpenAI fired and rehired Sam Altman, Helen Toner resigned from the board.
She was one of the board members who had voted to remove him. And at the time, she couldn't say much. There was an internal investigation still ongoing, and she was advised to keep mum. And oh man, she got so much flak for all of this. Looking at the news coverage and the tweets, I got the impression she was this techno pessimist who was standing in the way of progress, or a kind of maniacal power seeker using safety policy as her cudgel.
But then, I met Helen at this year's Ted Conference. And I got to hear her side of the story. And it made me think a lot about the difference between governance and regulation. To me, the OpenAI saga is all about AI board governance, and incentives being misaligned amongst some really smart people. It also shows us why trusting tech companies to govern themselves may not always go beautifully, which is why we need external rules and regulations. It's a balance.
Helen's been thinking and writing about AI policy for about seven years. She's the director of strategy at CSET, the Center for Security and Emerging Technology at Georgetown, where she works with policymakers in DC on all sorts of AI issues. Welcome to the show. Hey, good to be here. So Helen, a few weeks back at TED in Vancouver, I got the short version of what happened at OpenAI last year. I'm wondering, can you give us the
long version? As a quick refresher on sort of the context here, the OpenAI board was not a normal board. It's not a normal company. The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's public good mission was primary, was coming first over profits, investor interests, and other things. But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were
happening at the company, and in some cases outright lying to the board. At this point, everyone always says, like, what? Give me some examples. I can't share all the examples, but to give a sense of the kind of thing that I'm talking about, it's things like, you know, when ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT
on Twitter. Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he, you know, constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was, you know, basically impossible for the board to know how well those safety processes were working or
what might need to change. And then, you know, a last example that I can share because it's been very widely reported relates to this paper that I wrote, which has been, you know, I think way overplayed in the press. For listeners who didn't follow this in the press, Helen had co-written a research paper last fall intended for policymakers. I'm not going to get into the details, but what you need to know is that Sam Altman wasn't happy about it. It seemed like Helen's paper was critical of
OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products. Essentially, OpenAI was getting a lot of heat and scrutiny all at once. The way that played into what happened in November is pretty simple. It had nothing to do with the substance of this paper. The problem was that after the paper came out, Sam started lying to
other board members in order to try and push me off the board. So it was another example that just, like, really damaged our ability to trust him. And it actually only happened in late October last year, when we were already talking pretty seriously about whether we needed to fire him. And so, you know, there are kind of more individual examples. And for any individual case, Sam could always come up with some kind of, like, innocuous-sounding explanation of why it wasn't a big
deal, or was misinterpreted, or whatever. But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. And that's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just, like, helping the CEO to raise more money. Not trusting the word of the CEO, who is your main conduit to the company,
your main source of information about the company, is just totally impossible. So that was kind of the background, the state of affairs, coming into last fall. And we had been working at the board level as best we could to set up better structures, processes, all that kind of thing, to try
and improve these issues that we had been having at the board level. But then, mostly in October of last year, we had this series of conversations with these two executives, where they suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before: telling us how they couldn't trust him, about the toxic atmosphere he was creating, they used the phrase psychological abuse, telling us they didn't think he was the right person to
lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues. I mean, they've since tried to kind of minimize what they told us, but these were not casual conversations. They were really serious, to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in
different situations. So, you know, this was a huge deal. This was a lot. And we talked it all over very intensively over the course of several weeks. And ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI's organization would be to bring on
a different CEO. And, you know, once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he, you know, would pull out all the stops, do everything in his power to undermine the board, to prevent us from, you know, even getting to the point of being able to fire him. So, we were very careful, very deliberate about who we told, which was essentially almost no one in advance, other than,
you know, obviously, our legal team. And so, that's kind of what took us to November 17th. Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff supporting his return. What exactly happened there? Why was there so much pressure to bring him back? Yeah, this is obviously the elephant in the room. And unfortunately, I think there's been a lot of misreporting on this. I think there were three big things going on that help make sense of
kind of what happened here. The first is that really pretty early on, the way the situation was being portrayed to people inside the company was you have two options. Either Sam comes back immediately with no accountability, you know, a totally new board of his choosing, or the company will be destroyed. And, you know, those weren't actually the only two options and the outcome that
we eventually landed on was neither of those two options. But I get why, you know, not wanting the company to be destroyed, got a lot of people to fall in line, whether because they were, in some cases, about to make a lot of money from this upcoming tender offer, or just because they love their team, they didn't want to lose their job, they cared about the work they were doing, and of course,
a lot of people didn't want the company to fall apart, you know, us included. The second thing I think it's really important to know, that has really gone underreported, is how scared people were to go against Sam. They had experienced him retaliating against people, retaliating against them for past instances of being critical. They were really afraid of, you know, what might happen to them. So when some employees started to say, you know, wait, I don't want the company to fall apart,
like, let's bring back Sam, it was very hard for those people who had had terrible experiences to actually say that, for fear that, you know, if Sam did stay in power, as he ultimately did, that would make their lives miserable. And I guess the last thing I would say about this is that this actually isn't a new problem for Sam. And if you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job
at Y Combinator, which was hushed up at the time. And then at, you know, his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board there twice and asked the board to fire him for what they called, you know, deceptive and chaotic behavior. If you actually look at his track record, he doesn't, you know,
exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the board, as much as he would love to kind of portray it that way. So I had to ask you about that, but this actually does tie into what we're going to talk about today. OpenAI is an example of a company that started off trying to do good.
But now it's moved on to a for-profit model, and it's really racing to the front of this AI game, along with all of these, like, ethical issues that are raised in the wake of this progress. And you could argue that the OpenAI saga shows that trying to do good and regulating yourself isn't enough. So let's talk about why we need regulations.
Great. Let's do it. So from my perspective, AI went from this sci-fi thing that seemed far away to something that's pretty much everywhere, and regulators are suddenly trying to catch up. But I think for some people, it might not be obvious why exactly we need regulations at all. Like, for the average person, it might seem like, oh, we just have these cool new tools, like DALL-E and ChatGPT, that do these amazing things. What exactly are we worried about in concrete terms?
There's very basic stuff for very basic forms of the technology. Like, if people are using it to decide who gets a loan, to decide who gets parole, to decide who gets to buy a house, like, you need that technology to work well. If that technology is going to be discriminatory, which AI often is, it turns out, you need to make sure that people have recourse. They can go back and say, hey, why was this decision made? If we're talking AI being used in the military,
that's a whole other kettle of fish. And it's not, I don't know if we would say, like, regulation for that, but you certainly need to have guidance, rules, processes in place. And then kind of looking forward and thinking about more advanced AI systems, I think there's a pretty wide range of potential harms that we could well see if AI keeps getting increasingly sophisticated. Letting every little script kiddie in their parents' basement
have the hacking capabilities of a crack NSA cell, that's a problem. I think something that really makes AI hard for regulators to think about is that it is so many different things. And plenty of the things don't need regulation. I don't know how Spotify decides how to make your playlist, the AI that they use for that. I'm happy for Spotify to just pick whatever songs they
want for me, and if they get it wrong, who cares. But for many, many other use cases, you want to have at least some kind of basic common sense guardrails about it. I want to talk about a few specific examples that we might want to worry about, not in some battle space overseas, but at home in our day-to-day lives. Let's talk about surveillance. AI has gotten really good at perception, essentially understanding the contents of images,
video, and audio. And we've got a growing number of surveillance cameras in public and private spaces. And now companies are infusing AI into this fleet, essentially breathing intelligence into these otherwise dumb sensors that are almost everywhere. Madison Square Garden in New York City is an example. They've been using facial recognition technology to bar lawyers involved in lawsuits against their parent company, MSG Entertainment, from attending events at their venue.
This controversial practice obviously raised concerns about privacy, due process, and potential for abuse of this technology. Can we talk about why this is problematic? Yeah, I mean, I think this is a pretty common thing that comes up in the history of technology, is you have some existing thing in society, and then technology makes it much faster,
much cheaper, much more widely available. Like surveillance where it goes from, like, oh, it used to be the case that your neighbor could see you doing something bad and go talk to the police about it. It's one step up to go to, well, there's a camera, a CCTV camera,
and the police can go back and check at any time. And then another step up to, like, oh, actually, it's just running all the time, and there's an AI facial recognition detector on there, and maybe in the future, an AI activity detector that's also flagging, this looks suspicious. In some ways, there's no qualitative change in what's happened. It's just like you could be seen doing something, but I think you do also need to grapple with the fact that if it's
much more ubiquitous, much cheaper, then the situation is different. I mean, I think with surveillance, people immediately go to the kind of law enforcement use cases, and I think it is really important to figure out what the right trade-offs are between achieving law enforcement objectives and being able to catch criminals and prevent bad things from happening, while also recognizing the huge issues that you can get if this technology is used with overreach. For example,
facial recognition works better and worse on different demographic groups. And so if police are, as they have been in some parts of the country, going and arresting people purely on a facial recognition match and no other evidence, there's a story about a woman who was eight months pregnant, having contractions in a jail cell, after having done absolutely nothing wrong and being arrested only
on the basis of a bad facial recognition match. So I personally don't go for "it needs to be totally banned and no one should ever use it in any way for anything," but I think you really need to be looking at how are people using it, what happens when it goes wrong, what recourse do people have, what kind of access to due process do they have. And then when it comes to private use, I really
think we should probably be a bit more restrictive. I don't know, it just seems pretty clearly against, I don't know, freedom of expression, freedom of movement, for somewhere like Madison Square Garden to be kicking these lawyers out. I don't know, I'm not a lawyer myself, so I don't know what exactly the state of the law around that is, but I think the sort of civil liberties and privacy concerns
there are pretty clear. I think this problem of an existing set of technology getting infused with more advanced capabilities, sort of unbeknownst to the population at large, is certainly a trend. And one example that shook me up is a video that went viral recently of a security camera feed from a coffee shop, which showed a view of a cafe full of people and baristas, and basically over the heads of the customers, it showed the amount of time they'd spent at the cafe. And then over
the baristas, it showed how many drinks they had made. So what does this mean? Like, ostensibly the business can, one, track who is staying on their premises and for how long, and learn a lot about customer behavior without the customer's knowledge or consent. And, number two, the business can track how productive their workers are and could potentially fire, let's say, less productive baristas. So what are the problems and the risks here? And like, how is this legal?
I mean, the short version is, and this comes up again and again and again if you're doing AI policy: the US has no federal privacy law. Like, there are no rules on the books for, you know, how companies can use data. The US is pretty unique in terms of how few protections there are, of what kinds of personal data are protected in what ways. Efforts to make laws have just failed over and over and over again. But there's now this sudden new effort that people
think might actually have a chance. So who knows, maybe this problem is on the way to getting solved. But at the moment, it's a big, big hole, for sure. And I think step one is making people aware of this, right? Because people have, to your point, heard about online tracking. But having that same set of analytics in, like, physical space and reality, it just feels like the Rubicon has been crossed, and we don't even really know that's what's happening when we walk into whatever grocery
store. I mean, yeah. Again, it's about sort of the scale and the ubiquity of this. Because it could be, like, your favorite barista knows that you always come in and you sit there for a few hours on your laptop, because they've seen you do that a few weeks in a row. That's very different from this data being collected systematically and then sold to, you know, data vendors all around the country, or outside the country, and used for all kinds of other things.
So again, I think we have these sort of intuitions based on our real-world, person-to-person interactions that really just break down when it comes to sort of the size of data that we're talking about here. I also want to talk about scams. So folks are being targeted by phone scams. They get a call from their loved ones. It sounds like their family members have been kidnapped and are being
held for ransom. In reality, some bad actor just used off-the-shelf AI to scrape their social media feeds for these folks' voices, which scammers can then use to make these very believable hoax calls where people sound like they're in distress and being held captive somewhere. So we have reporting on this particular hoax now, but what's on the horizon? What's, like, keeping you up at night?
I mean, I think that the obvious next step would be with video as well. I mean, definitely if you haven't already gone and talked to your parents or grandparents, anyone in your life who is not super tech savvy and told them like, you need to be on the lookout for this, you should go do that. I talk a lot about kind of policy and what kind of government involvement or regulation we might need for AI.
I do think a lot of things we can just adapt to, and we don't necessarily need new rules for. So I think we've been through a lot of different waves of online scams, and I think this is the newest one, and it really sucks for the people who get targeted by it. But I also expect that five years from now, it will be something that people are pretty familiar with, and it will be a pretty small number of people who are still vulnerable to it. So I think the main thing is, yeah, be super suspicious of any
voice. Definitely don't use voice recognition for your bank accounts or things like that. I'm pretty sure some banks will offer that. Ditch that. Definitely use something more secure. And yeah, be on the lookout for video scamming as well, and for people on video calls who look real. I think there was recently, just the other day, a case of a guy who was on a whole conference call where there were a bunch of different AI-generated people all on the call, and he was the only real person. He
got scammed out of a bunch of money. So that's coming. Totally, content-based authentication is on its last legs, it seems. Definitely. It's always worth, like, checking in with what is the baseline that we're starting with. I mean, for instance, a lot of things are already public, and they don't seem to get misused. So I think a lot of people's addresses are listed publicly. We used to have the little white pages where you could look up someone's address. And that
mostly didn't result in, you know, terrible things happening. Or even, to give silly examples, I think it's really nice that, with delivery drivers, or when you go to a restaurant to pick up food that you ordered, it's just there. All right. So let's talk about what we can actually do. It's one thing to regulate businesses like cafes and restaurants. It's another thing to rein in all the bad actors that could abuse this technology. Can laws and regulations actually protect us?
Yeah, they definitely can. I mean, and they already are. Again, AI is so many different things that there's no one set of AI regulations. There are plenty of laws and regulations that already apply to AI. So there's a lot of concern about AI algorithmic discrimination, with good reason. But in a lot of cases, there are already laws on the books saying you can't discriminate on the basis of race or gender or sexuality or whatever it might be. And so in those cases,
you don't even need to pass new laws or make new regulations. You just need to make sure that the agencies in question have the staffing they need. Maybe they need to have the exact authorities they have tweaked, in terms of who they are allowed to investigate or who they are allowed to penalize, or things like that. There are already rules. For things like self-driving cars, the Department of Transportation is handling that. It makes sense
for them to handle that. For AI in banking, there's a bunch of banking regulators that have a bunch of rules. So I think there's a lot of places where AI actually isn't fundamentally new, and the existing systems that we have in place are doing an okay job at handling it; they may need, again, more staff or slight changes to what they can do. And then I think there are a few different places where there are kind of new challenges emerging at sort of the cutting edge of AI, where you have systems
that can really do things that computers have never been able to do before, and where there's a question of whether there should be rules around making sure that those systems are being developed and deployed responsibly. I'm particularly curious if there's something that you've come across that's really clever, or like a model for what good regulation looks like. I think this is mostly still a work in progress.
So I don't know that I've seen anything that I think really absolutely nails it. I think a lot of the challenge that we have with AI right now relates to how much uncertainty there is about what the technology can do, what it's going to be able to do in five years. You know, experts disagree enormously about those questions, which makes it really hard to make policy. So a lot of the policies that I'm most excited about are about shedding light on those kind of questions,
giving us a better understanding of where the technology is. So some examples of that are things like: President Biden issued this big executive order last October that had all kinds of things in there. One example was a requirement that companies that are training especially advanced systems have to report certain information about those systems to the government. And so that's a requirement where you're not saying you can't build that model, can't train that model. You're not saying the
government has to approve something. You're really just sharing information and creating kind of awareness and more ability to respond as the technology changes over time, which is such a challenge for government keeping up with this fast-moving technology. There's also been a lot of good movement towards funding, like the science of measuring and evaluating AI. A huge part of the challenge with figuring out what's happening with AI is that we're really bad at actually
just measuring how good is this AI system? How do these two AI systems compare to each other? Is one of them sort of quote unquote smarter? So I think there's been a lot of attention over the last year or two into funding and establishing within government better capabilities on that front. I think that's really productive. Okay, so policymakers are definitely aware of AI if they weren't before. And plenty of people are worried about it. They want to make sure it's safe, right? But that's
not necessarily easy to do. And you've talked about this, how it's hard to regulate AI. So why is that? What makes it so hard? Yeah, I think there are at least three things that make it very hard. One thing is AI is so many different things, like we've talked about. It cuts across sectors, you know; it has so many different use cases. It's really hard to get your arms around, you know, what it is,
what it can do, what impacts it will have. A second thing is it's a moving target. So what the technology can do is different now than it was even two years ago, let alone five years ago, 10 years ago. And, you know, policymakers are not good at sort of agile policymaking. They're not like software developers. And then the third thing is no one can agree on how it's changing or how it's
going to change in the future. If you ask five experts, you know, where the technology is going, you'll get five completely different answers, often five very confident, completely different answers. So that makes it really difficult for policymakers as well, because they can't just go get the scientific
consensus and, like, take that and run with it. So I think maybe this third factor is the one that I think is the biggest challenge for making policy for AI, which is that for policymakers, it's very hard to tell who they should listen to, what problems they should be worried about, and how that is going to change over time. Speaking of who you should listen to, obviously, you know, the very large companies in the space have an incentive here, and there's been a lot of talk
about regulatory capture. When you ask for transparency, why would companies give a peek under the hood of what they're building? They'll just cite it as proprietary. On the other hand, you know, these companies might want to set up, you know, policy and institutional frameworks that are actually beneficial for them and sort of prevent any future competition. How do you get these powerful companies
to, like, participate and play nice? Yeah, it's definitely very challenging for policymakers to figure out how to interact with those companies, in part because they're lacking the expertise and the time to really dig into things in depth themselves. Like, a typical Senate staffer might cover, you know, technology issues and trade issues and veterans affairs and agriculture and education, and that's, like, their portfolio. So they are scrambling,
like they have to, they need outside help. So I think it's very natural that the companies do come in and play a role. And I also think there are plenty of ways that policymakers can really mess things up if they don't, you know, know how the technology works and they're not talking to the companies they're regulating about what's going to happen. The challenge, of course, is how do you balance that with external voices who are going to point out the places where the companies are
actually being self-serving. And so I think that's where it's really important that civil society has resources to also be in these conversations. That's certainly what we try to do at CSET, the organization I work at, where we're totally independent and, you know, really just trying to work in the best interest of, you know, making good policy. The big companies obviously do need to have a seat at the table, but you would hope that they have, you know, a seat at the table, not 99 seats out of 100, in terms of
who policymakers are talking to and listening to. There also seems to be a challenge with enforcement, right? You've got all these AI models already out there. A lot of them are open source. You can't really put that genie back in the bottle, nor can you really start, you know, moderating how this technology is used without, I don't know, like, going full 1984 and having a
process on every single computer monitoring what they're doing. So how do we deal with this landscape where you do have, you know, closed source and open source, like, various ways to access and build upon this technology? Yeah, I mean, I think there are a lot of intermediate things between just total anarchy and full 1984. There's things like, you know, Hugging Face, for example,
is a very popular platform for open-source AI models. Hugging Face in the past has delisted models that are, you know, considered to be offensive or dangerous or whatever it might be. And that actually does meaningfully reduce kind of the usage of those models, because Hugging Face's whole deal is to make them more accessible, easier to use, easier to find. You know, depending on the specific problem we're talking about, there are things that, for example, you know, social
media platforms can do. So if we're talking about, as you said, child pornography, or also, you know, political disinformation, things like that, maybe you can't control that at the point of creation, but if you have the Facebooks, the Instagrams of the world, you know, working on it, they already have methods in place for how to kind of detect that material, suppress it, report it. And so,
you know, there are other mechanisms that you can use. And then of course, specifically on the, kind of image and audio generation side, there are some really interesting initiatives underway, mostly being led by industry around what gets called content provenance or content authentication, which is basically, how do you know where this piece of content came from? How do you know if it's real? And that's a very rapidly evolving space and a lot of interesting stuff happening there.
I think there's a good amount of promise, not for perfect solutions, where we'll always know, is this real or is it fake, but for making it easier for individuals and platforms to recognize, okay, this is fake, it was AI generated by this particular model, or this is real, it was taken on this kind of camera, and we have the cryptographic signature for that. I don't think we'll ever have perfect solutions. And again, I think, you know, societal adaptation is just going to be a big part
of the story. But I do think there are pretty interesting technical and policy options that can make a difference. Definitely. And even if you can't completely control, you know, the generation of this material, there are ways to drastically cap the distribution of it. And so, like, I think that reduces some of the harms there. Yeah, and at the same time, labeling content that is synthetically generated, a bunch of platforms have started doing that. That's exciting because, like,
I don't think the average consumer should be a deepfake detection expert, right? But really, like, if there could be a technology solution to this, that feels a lot more exciting. Which brings me to the future. I'm kind of curious, in your mind, what's, like, the dystopian scenario and the utopian scenario in all of this? Let's start with the dystopian one. What does a world look like with inadequate or bad regulations? Paint a picture for us. So many possibilities.
I mean, I think there are worlds not that different from now, where you just have automated systems doing a lot of things, playing a lot of important roles in society, in some cases doing them badly, and people not having the ability to go in and question those decisions.
There's obviously this whole discourse around existential risk from AI, et cetera, et cetera. Kamala Harris had a whole speech about, like, you know, I forget the exact examples, but if someone loses access to Medicare because of an algorithmic issue, like, is that not existential
for that, you know, elderly person? So there are already people who are being directly impacted by algorithmic systems and AI in really serious ways, even some of the reporting we've seen over the last couple of months of how AI is being used in warfare, like, you know, videos of a drone chasing a Russian soldier around a tank and then shooting him. I don't think we're in a full dystopia,
but there's sort of plenty of things to be worried about already. Something I worry about quite a bit, or that feels intuitively to me to be a particularly plausible way things could go, is sort of what I think of as the, um, WALL-E future. I don't know if you remember that movie. Oh, absolutely. With the little robot. And the piece that I'm talking about is not the,
like, junk Earth and whatever. The piece I'm talking about is the people in that movie. They just sit in their soft, roll-around wheelchairs all day and, you know, have content and food and whatever to keep them happy. And I think what worries me about that is I do think there's a really natural gradient to go towards what people want in the moment and will, you know, choose in the moment, which is different from what they will, you know, really
find fulfilling or what will build kind of a meaningful life. And I think there are just really natural commercial incentives to build things that people sort of superficially want. But then you end up with this really kind of meaningless, shallow, superficial world, and potentially one where kind of most of the consequential decisions are being made by machines that have no concept of what it means to lead a meaningful life. Because, you know, how would we program that into them? Because
we struggle to kind of put our finger on it ourselves. So I think about those kinds of futures, where it's not that there's some, you know, dramatic big event, but just where we kind of gradually hand over more and more control of the future to computers that are more and more sophisticated, but that don't really have any concept of meaning or beauty or joy or fulfillment or, you know, flourishing or whatever it might be. I hope we don't go down those paths, but it
definitely seems possible that we will. They can play to our hopes, wishes, anxieties, worries, all of that, and just give us, like, the junk food all the time, whether that's in terms of nutrition or in terms of just, like, audiovisual content. And that could certainly end badly. Let's talk about the opposite of that, the utopian scenario. What does a world look like where we've got this perfect balance of innovation and regulation, and society is thriving?
I mean, I think a very basic place to start is, can we solve some of the big problems in the world? And I do think that AI could help with those. So can we have a world without climate change, a world with much more abundant energy that is much cheaper, and therefore more people can have access to it, where we have better agriculture, so there's greater access to food?
And beyond that, I think what I'm more interested in is setting our kids and our grandkids and our great-grandkids up to be deciding for themselves what they want the future to look like from there, rather than having some particular vision of where it should go. But I absolutely think that AI has the potential to really contribute to solving some of the biggest
problems that we face as a civilization. It's hard to say that sentence without sounding kind of grandiose and, you know, trite, but I think it's true. So maybe to close things out, just, like, what can we do? You mentioned some examples of being aware of synthetically generated content. What can we, as individuals, do when we encounter, use, or even discuss AI? Any recommendations? I think my biggest suggestion here is just not to be intimidated
by the technology and not to be intimidated by technologists. Like, this is really a technology where we don't know what we're doing, the best experts in the world don't understand how it works. And so I think just, you know, if you find it interesting, being interested, if you think of fun
ways to use it, use them. If you're worried about it, feel free to be worried. Like, you know, I think the main thing is just feeling like you have a right to your own take on what you want to happen with the technology and no regulator, no, you know, CEO is ever going to have full visibility into all of the different ways that it's affecting, you know, millions and billions of
people around the world. And so kind of, I don't know, trusting your own experience and exploring for yourself and seeing what you think is, I think, the main suggestion I would have. It was a pleasure having you on, Helen. Thank you for coming on the show. Thanks so much. This was fun. So maybe I bought into the story that played out on the news and on X, but I went into that interview expecting Helen Toner to be more of an AI policy maximalist, you know, the more laws
the better, which wasn't at all the person I found her to be. Helen sees a place for rules, a place for techno-optimism, and a place for society to just roll with it, adapting to the changes as they come. It's a balance. Policy doesn't have to mean being heavy-handed and hamstringing innovation. It can just be a check against perverse economic incentives that are really not good for society. And I think you'll agree. But how do you get good rules? A lot of people in tech
are going to say, you don't know shit. They know the technology the best, the pitfalls, not the lawmakers. And Helen talked about the average Washington staffer who isn't an expert, doesn't even have the time to become an expert. And yet it's on them to craft regulations that govern AI for the benefit of all of us. Technologists have the expertise, but they've also got that profit motive. Their interests aren't always going to be the same as the rest of ours.
You know, in tech you'll hear a lot of regulation bad, don't engage with regulators. And I get the distrust. Sometimes regulators do not know what they're doing. India recently put out an advisory saying every AI model deployed in India first had to be approved by regulators. Totally unrealistic. There was a huge backlash there, and they've since reversed that decision. But not engaging with government is only going to give us more bad laws.
So we've got to start talking, if only to avoid that WALL-E dystopia. Okay, before we sign off for today, I want to turn your attention back to the top of our episode. I told you we were going to reach out to Sam Altman for comment. A couple of hours ago, we shared a transcript of this recording with Sam and invited him to respond. We've just received a response from Bret Taylor, chair of the OpenAI board. Here's the statement in full. Quote: We are disappointed that Ms. Toner continues
to revisit these issues. An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.
Additionally, over 95 percent of employees, including senior leadership, asked for Sam's reinstatement as CEO and the resignation of the prior board. Our focus remains on moving forward and pursuing OpenAI's mission to ensure AGI benefits all of humanity. End quote. We'll keep you posted if anything unfolds. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ella Federer and Sarah McCray. Our editors are Banban
Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact-checkers are Julia Dickerson and Dan Kalachi. And I'm your host, Bilawal Sidhu. See y'all on the next one.