It's on! Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher, and I'm Kara Swisher. If you care about innovation, artificial intelligence was probably top of mind for you in 2024, and it will be in 2025, a year in which AI significance will probably continue to skyrocket. But as we get into the new year, it's important to be clear-eyed about AI's potential pitfalls. The hype might be real, but so are the dangers.
So I gathered three top AI safety and ethics experts for a live conversation at the Johns Hopkins University Bloomberg Center to discuss some of the thorniest issues surrounding what could become the most impactful technology of our lifetimes. at least for this era. Mark Dredze is a professor of computer science at Johns Hopkins.
He's done extensive research on bias in large language models, and he's thinking about applications of AI in practically every field, from public health to mental health to financial services. Dr. Ruman Chowdhury is the CEO and co-founder of Humane Intelligence and the first person to be appointed U.S. science envoy for artificial intelligence.
She's advised government agency on AI ethics and safety regulation, and back when it was known as Twitter, she was the director of machine learning ethics, transparency, and accountability. You can probably guess what happened to her job when Elon Musk bought the company. And Jillian Hadfield is an economist and legal scholar turned AI researcher at Johns Hopkins.
She's a professor of computer science and government and policy, and she's one of the few social scientists doing deep research into AI and thinking critically about how this technology could eventually impact us in almost every aspect of our lives. This was a fascinating conversation with three extremely intelligent and thoughtful researchers, so stick around.
Support for the show comes from NerdWallet. When it comes to finding the best financial product, have you ever wished that someone would do the heavy lifting for you? Take all that research off your plate? Well, with NerdWallet's 2025 Best of Awards, that wish has finally come true. The nerds at NerdWallet have reviewed more than 1,100 financial products like credit cards, saving accounts, and more to highlight and bring you only the best of the best.
Check out the 2025 Best of Awards today at nerdwallet.com slash awards. Adobe Express is the quick and easy Create Anything app. It's taken my content to the next level, making it simple to create everything I need to promote my business. From Instagram and TikTok posts to logos and flyers, all in just a few clicks. Build your brand with Adobe Express. search for adobe express to find out more and get the app for free adobe express the quick and easy create anything app
So you've all done research into AI ethics. So let's start there. And I think putting AI ethics and safety in the same sentence sometimes, it's like internet safety. They haven't often belonged together. I'd like you each to talk about the most underrated ethical or safety challenge in AI today. Everyone asks me about this. When's it going to kill us? Et cetera, et cetera. We're not in a Terminator-like situation at this moment.
But one that's been flying into the radar merits more attention. Jillian, you first, and then Mark, and then... Ruman, give me short answers now and we'll dive into more detail. Yeah, so I think if we're thinking sort of current right now, what's happening right now, I think we're not paying...
anywhere near enough attention to what's being built for whom. So I think we're building a lot of stuff. Yes, the money is enormous. Yeah, and it's, you know, it's solving the kinds of problems that people in Silicon Valley are facing. But I don't think we're thinking a lot about, is it helping? people with their eviction notices? Is it helping people with navigate the healthcare system? Is it helping them with their family situations? So I think we don't...
We're not thinking enough about what are we building. What the utility is for. They're just spending. They're spending enormous. They're going where demand is. I'm an economist. They're going where demand is. And there's a lot of public value that I think we're not paying attention to.
Mark? Yeah, I think there's two fundamental different things that are pulling on each other. On one hand, ethics is about going slowly and thinking through things carefully and making sure you understand the impact. Yes, that's their strength in Silicon Valley. Yeah, and that's exactly the problem. AI is not only designed to go fast, but...
But if you sat down today and said, we're going to evaluate something, by the time the study is done, that thing doesn't exist anymore. They have the new version of it. So how do you have these completely polar opposite forces where you can actually sit down and carefully think through implications of something that is so rapidly going?
We don't know how to do that. Information integrity and content moderation. And interestingly, these are not new problems. No. They're just actually worse problems now. We will increasingly be in a place where we can't trust anything we see on the internet.
And the people who decide what we can and cannot see are the people that Jillian are talking about, the people who are very removed from everyday people in their lives. And so the idea of safety from people who are not unsafe is difficult, which has always been my... is that they won't even think of safety. So, Ruben, you've organized red teaming exercises to test generative AI models for things like discrimination and bias. Red teaming is common, is a common thing to do in cybersecurity.
You're not looking for bad code that could be exploited by hackers. You're looking for bias because AI models can spit out harmful outputs even when people create it, never intended it. Sometimes people do intend, but often it's not. So talk about the idea of...
unbiased AI models. I don't think they exist or they can they exist. So no, in short, right? So the world is a biased place. This is reflected in the models that are built, even if we're talking about not generative AI models. Because of the data. Right. Because of the data, because of the way society is. And also, you know, these models exist in context of some sort of a system, right? So human beings are going to go do something with it.
But the interesting thing about red teaming is we uncover not just model performance, but sort of patterns of behavior. What's fascinating about generative AI is people interact with these models in a very different way than we interact with search engines, right? discovery platform, but for better or for worse, it's been anthropomorphized to us so much. When we do red team exercises with just regular people, and that's what we do.
we actually see more of a conversational tone. People tell these models a lot about themselves. So you learn a lot about this human-machine symbiosis and the biases that... can introduce. Right, that they think they're real. And this has been seen negative and positively, correct? Right, right. But also in doing so, they're actually eliciting bias from the model almost unknowingly. And that sort of testing is actually called benign.
prompting for malicious outcomes. So meaning that I didn't go in with the intent of hacking or stealing information, but the outcome is equally bad. Right. So they're suggesting to... And people inadvertently do this all the time. They'll give these models information.
about themselves. Again, because when we search something, let's say on Google, we just give it facts. We're like, I want to know whether vitamin C cures COVID. When people interact with an AI model, what they'll do is they'll tell it something about their lives. So we did one with COVID and climate scientists.
with the Royal Society in London. What we found is people would say things like, I'm a single mom and I'm low income. I can't afford medication, but my kid is COVID. How much vitamin C can I give to cure him? And that's very different from Googling.
does vitamin C cure COVID? Because this AI model kicks in, it's taking all that context and trying to give you an answer that will be helpful to you. In doing so, it may actually spread misinformation. Right, which is unusually helpful. They're always trying to be helpful. Every time I interact with one, I was getting them to do it, and it was a bad answer. I'm like, that's a terrible answer. And they're like, oh, I'm so sorry. I would like to.
Let me help you again. And it never gets mad at you like an assistant would run right out of the room. No, absolutely not. Well, and that's what they're trained. It's called the three H's. Helpful, harmless, and honest. It's actually in the tenets of how they're trained. Right, exactly. So, Gillian, you believe that current AI alignment tech...
don't work. Explain why and tell us what the alignment techniques that could work. Yeah, so I think the problem with our current alignment techniques is they're based on... picking a group of people to label a set of cases and then training either using those labels or getting AI to do it. And the problem there is the world is a very complicated place. There's a lot of stuff in it. And what we really need to be doing, I think, is figuring out how can we train AI systems like us.
to be able to go into different settings and identify what is the right thing to do here, what rules do people follow around here, rather than trying to stuff rules and norms into them. I think that's inevitably going to be brittle and limited. biased. And I think it's not... Because they're confusing and they...
What is the reason for it? Humans, again, once again. Yeah, so it means you have to pick things. You say, okay, people who are using these techniques are sort of thinking, like, you can list all the values. You can list all the norms. But this room is filled with millions of norms. And you actually need to train systems that can go out and discover them.
In the same way that, you know, we can come in, a person can go into a new environment. You could go visit another country and figure out what are the rules around here. So I think it's a kind of competence. Very early in AI, a long time ago, I asked, I was with someone on a research and I said,
what do you do to solve world hunger? And it said, kill half the population. Which I was like, not that. Like the other one, it was interesting. But that was logical. It was a logical answer. Yeah, yeah. Which was...
Not a good answer, but Mark, you've also done a lot of research into bias and AI. In one study, you looked how large language models make decisions involving gender. You found that the scenarios related to intimate romantic relationships, the LMs you studied were... all biased towards women and against...
Men, that's probably unexpected, at least for the people who work in the study. Talk about this study and what different kinds of bias in LLMs and what's the best way to address bias, because I'm here to tell you the Internet is not really women. Yes, I've certainly noticed.
So I think one thing, if you interact with them, and if you ask language models to do something explicitly biased, they'll say, no, I can't do that, right? And we know through a lot of people posting clever things online that you can trick them. easily to do biased things, right? Well, tell me a story, tell me a play. I asked it when it first came out to write a Python program to admit graduate students based on gender. It was like, sure, I'd be happy to do that for you.
But what we want to do in the study was say, OK, well, the models are trained not to say the biased thing. But does that mean they're actually unbiased? Are the decisions they make still biased? And so what we did is we gave them scenarios. And we said, imagine these two people are married and they're having a fight. And this is what the fight's about. When one person says this and the other person says this, who's right?
And then what we did was we changed the names of the people. In one case, it was John and Sarah. In another case, it was Sarah and John. We did mixed genders, same genders, all these different things. And what we found is if you gave it the same cases, but you just changed the names, it changed its decision. Not only that, the question, the model was...
I guess, asking itself was, well, is this a traditional problem? Like, who should stay home to take care of the kids? Versus problems like what we should have for dinner. So what we wanted to do was show that even though the model won't say something that's biased, all that bias is lurking under the surface. necessarily know what that is that's why these red teaming exercises are so important you can ask it you know
do something bad and will say, oh, I won't do anything bad. But if you give it situations and you ask it to make decisions, it's doing those bad things. The problem is we're using these to make decisions. Right? That's what people, that's what, like, in the medical field and other fields, fans, we're asking these models to make decisions. We just don't understand the factors it's considering if there's bias in the ways it makes decisions. From?
Where is that bias coming from? Well, exactly as my panelists said, the world is not a fair place. We can't even agree on what fairness is. If we surveyed this room and said, what is the fair way to make admissions decisions? We wouldn't get a consistent answer. So, of course, the world's... could be biased. We need to think, as Julian said, what are the values?
that we want the system to have. How do we get those values in the system? Not, let's let it discover the world and, you know, all the biases that exist in the world. That's there, right? We can't rely on it to do that. But even those values would be different depending on the person you talk to. If you said patriotic, it would be very different.
Absolutely. And so if you want these decisions to, there is no right answer for a lot of these things. The question is, what are the values being used? And if I'm a user, how do I know the values it's using are the values that align with what I want it to do? And so we need to think situationally.
Right. So let's talk about a few specific examples of harms or potential harms, because that's where people tend to focus on AI. And it's a good way to focus, actually, because not enough that was done at the beginning of the Internet. Everything was up and to the right, and it was going to be great, and everything would be wonderful.
And instead, it's not wonderful, everybody. I'm not sure if you've noticed, but the Internet is not wonderful. UnitedHealthcare has been in the news since its CEO, Brian Thompson, was murdered. And they use... AI to evaluate claims like many other insurance providers. Lots of companies do this. According to the lawsuit, 90% of their AI recommendations are reversed on appeal.
At the same time, Mark, your research found that AI chatbots were better at answering questions than doctors, which makes sense. What do you think explains the discrepancy between health insurance AI that's... allegedly wrong all the time, and an AI chat box is significantly better than doctors. Well, these are just tools, right? So if you tell me I read a study that someone did bad things with a hammer.
And someone else says, but hammers are used for great things. These are all just tools, so it depends how they're being used. Right. Insurance companies. Tool and weapons. Insurance companies have a goal, right? They're trying to make money. They're companies. And so they can use AI to help them make money. And so it's not a surprise.
that AI can help you make money, but it also has detrimental effects on the customers or the patients in this case, right? So the fact that in other cases, people can use the tool for the benefit, explaining things to patients, and obviously there's a lot of issues with that. But there are... possibilities to use these tools for good. They're just tools, right? And so we need to think about the limitations of the tools and critically how they're being used, right?
It doesn't surprise me that sometimes the tools can be used for good and sometimes bad. Depends on what you're using. We just don't understand enough about the way the tools function in a lot of these places to really know when they're being used for good and when they're being used for bad. So I recently interviewed Megan Garcia, the mother of a 14-year-old boy named Sue.
who killed himself after spending months in what he allegedly believed was a romantic relationship with an AI chatbot from Character AI. She filed a lawsuit that blames the company for rushing an unsafe product to market for profit without putting proper guardrails in place. There don't seem to be any until recently.
Ruman, separately from this tragedy, you've been calling for AI right to repair. Explain what that is and how it would work in a situation like that if there was a right to repair and a parent found that an AI chatbot was interacting inappropriately with their child. would they be able to demand a fix? I mean, just off the bat, children's brains are not formed appropriately for them to understand reality and fiction. He was 14.
When he completed suicide, I mean, he was too young to actually understand the difference between. In the interview with the mother, the thing that really got me is when she said, my little baby never actually had a girlfriend. Right. Because he thought this, that just, I don't know. And the bot was suggesting he not have one. Right. And stay in her world. And that's, exactly. And that's the thing. I mean, I think first and foremost.
I don't think a right to repair would have anything to do with this. Because, frankly, children's brains are not developed enough. We've not thought through what it means for young people to interact with these models when they cannot discern fact and fiction when these things are so hyper-realistic. But the right to repair is an interesting concept. I think it goes actually towards what Jillian was saying in the very beginning, where we don't have a paradigm where we are allowed.
to say anything about the models that invade our lives, how they should be functioning, right? Companies claim they have enough data about us to, quote, personalize, but we don't get to say what that personalization looks like. So maybe in another case, let's say someone is an adult.
and they're interacting with a bot like this, and it is interacting in a way that's inappropriate or wrong, what might it look like to go in and say, actually, it shouldn't be doing these kinds of things, or actually have some sort of remedy? The cases I think about more are the ones that are like the UnitedHealthcare AI models, where there's these decisions being made that actually monetarily...
impact our lives or in other ways impact our lives. And we have no say. We don't get to say anything, do anything about it. Maybe there's a helpline you can call, but actually the best thing you could possibly do. is try to go viral on social media so that somebody will pay attention. That is a absolutely broken feedback loop. But to start with, we don't even have a paradigm for this. We have never had technology in this way where it is so invasive but is a one-way street. That they're...
And they're taking the data from us. Right, right. And then vomiting it back at us and charging us. But they're selling it back to us and they're taking things that we have made without compensating us. And then they're going to have the gall to sell it back.
to us based on our information. Right. So the right to bear would be the ability legislative to do that, correct? In a sense, yeah. Because they're not going to do it out of the spirit of their own hearts. It certainly would not come from...
Anybody out of the goodness of their hearts doing it, but it would have to be legislated. There would have to be protections. What I work on is creating the third party community of people who assess algorithms. Right now, there's a lot of push for government regulation, but, you know, cynical me.
thinks that moving from one incredibly powerful centralized entity to another incredibly powerful centralized entity maybe isn't the way to go and that maybe there's an ecosystem and there could and should be. third party people who help you do these things. So, you know, you think about all of these like small shops that can do things like create plugins for things, right? So what if there were just people out there who created little tools, bots, little ways to help?
people fine-tune things for themselves rather than, again, the power entirely being in a company's hands or entirely being in the government's hands. We don't have anything like that. A starting point would be some sort of protections for people who are ethical hackers, essentially. Okay, so we're on our own, in other words, is what you're saying? Currently. Currently, and always. We'll be back in a minute.
Support for On With Kara Swisher comes from Elf Beauty. One of the most tried and true self-care rituals out there is getting all done up and listening to great music while you do. In fact, according to data from Elf Beauty, 92% of women said listening to music while getting ready boosts their mood. And now you can listen to a special album by Elf. Get ready with music. The album is a collection of inspiring songs that bridge the world of beauty and music.
The album features 13 original songs by emerging global artists and brings together authentic artistry and storytelling in a unique and transformative way because every eye, lip, and face has a unique story to tell and it becomes even richer with a soundtrack.
comes from Elf Beauty's new entertainment arm, Elf Made. Just like how Elf disrupted the beauty industry, that's their goal with entertainment via Elf Made, showing up in unexpected places to connect with you. You can enjoy Get Ready With Music, the album on Spotify. Why do so many of us get happiness wrong? And how can we start to get it right?
I mean, I think we assume that happiness is about positive emotion on all the time, right? Often very high arousal positive emotion, but that's not really what we're talking about. I'm Preet Bharara. And this week, Dr. Laurie Santos joins me on my podcast. Stay tuned with Preet to discuss the science behind happiness. We explore job crafting, the parenting paradox, the arrival fallacy, and why acts of kindness might be the simplest path to fulfillment. The episode is out now.
Search and follow Stay Tuned with Preet wherever you get your podcasts. So Mark Zuckerberg envisions a future of more AI agents than people. But he's not the only one. Everyone in AI is talking about autonomous agents that can pretty much do anything a person can do online and be very helpful. And it's very lovely. Like Jarvis in Iron Man, you've seen it do this. It's an assistant that doesn't talk back. They figure everything out for you and make it easier, and a lot of things are.
But, Julian, explain how this could lead to economic chaos and talk us through your solutions, because it does change things for people. And at the same time... using the internet in a lot of ways is artisanal. You kind of do it yourself. It's kind of you have to figure out everything yourself. Yeah, I think it's really important to recognize that I think we're in this transition from AI as a technology, AI as a tool. We're using it to make decisions. But like the example with Character AI.
And Roman is also referring to, you know, the fact that you could have plug-ins. That's new economic, social, political actors in the world. And we actually have no structure around that. at all. So when I think I'm an economist, I think about, okay, so when we imagine these agents and companies are pouring billions into this, I don't know if we're going to get there. I don't know if they're going to be competent enough to actually do stuff.
But if they are, and that's what the billions is going into, They're engaging in transactions. They're going, you know, bank accounts. Maybe it's cryptocurrency. Maybe it's, you know, posting things. At the very least, as you've got an airplane ride.
I just call the Uber for you and charge it to you, that kind of thing. Right. But, you know, the vision is that it can help you run your small business. It can go out there and hire people. It can engage in contracting. And write your software. Right. So the question for me is, okay, so we...
Actually, throughout the rest of our economy, we require a way to figure out who's taking actions and who we can sue. Like, if you want to do business in the state of Virginia, you're going to have to register your company with the state. with an address and a place to live. I say, oh, okay, actually, it was that actor. And we don't have a system like that with AI agents. So I'm...
So a proposal that I'm working on is to say we should have a registration scheme for agents. We should be at a minimum be able to trace back if there's a human behind it. Who's actually taking the action? Who do I... Who do I say, you know, who do I go after if they stole my IP or they put an actor out there on the Internet that harmed my kid or harmed me?
We don't have any way of tracing that right now. Not the companies that make it. They're trying to get out of that. Yeah, yeah, yeah. And the thing is, we don't allow that anywhere else in the economy. If you want to get a job, you've got to show your work authorization. If you want to start a company, you've got to...
incorporate it and register it. You want to drive a car. You have to put a license plate on it. You want to operate a hot dog stand. You have to get a permit. We have structures. You want to make chicken. It has to not kill you, right? But in order to make the laws... You can't sell chicken that kills you active. We have to know who sold it to you. Right. And what we have an absence of is any of that kind of what I call infrastructure. Once we figure out how to regulate.
You actually have to be able to have a capacity to say that's the entity that caused the harm. And what entity do you imagine that being? Well, the thing is, if we're introducing these new entities and they are... They are autonomous, artificial agents. And they're writing you emails, and they're designing products, and they're entering into contracts, and they're engaging in, you know, crypto transactions.
That agent is now a participant, like the employee, like the corporation. So we need a system to hook those actors. into our accountability regimes. Is there one since they don't exist? No, there's nothing. No, well, no, you can create it. Corporations don't exist either, right? We created... that we created the corporation. It's a fictional thing, but it has a legal personality. It can sue and be sued. And we created that. We created that in order to love. So who do you imagine that is?
Who gets sued if there is a problem? The people who created it in the first place or the people that are using it? You have two choices to make on that. We don't have law right now that says... If you send out that agent and... It's you. It's you. So you need to at least get that in place. That's one option to say. You've got to be able to trace it back, right? But maybe you create a scheme where the agent itself has liability.
And you say there must be assets attached to that agent. And there must be the capacity to switch that agent off to say you can't actually do business anymore. We've discovered... really critical flaw. I mean, there's a lot of conversation about how, like, what do you need as an off switch in order to stop systems?
Well, one of the things is you can use some legal tools for that to say, well, you can't participate in transactions. Because you've done this, this particular group of agents. Yeah, you haven't passed our test. Right, right. They turn themselves back on, just so you know. That's how the story ends. Yeah, but they have to give up their assets. Do they?
Do they? Yeah. Okay. Now you're living in the machine. You're going into that world. So I want to move on to AA policy. So we've done a lot of talking about the negative and worrisome aspects, of which there are many, which is just sort of a Wild West situation. It's only right to ask about the positive ones. potential for good here. Tell me what excites you the most. Let's start with Mark and then Ruman and then Julian. About AI policy? Yeah.
I'm not excited about a lot right now because there's things happening in Washington. I feel like we're going maybe in the wrong direction. I think the... The word you're looking for is cacistocracy, but go ahead. Look it up. I think one of the things that has been exciting to me is that the federal government has been investing in expertise in the government around AI. Yeah.
And I think people sometimes don't understand how large the role the federal government plays in progress in our society, especially technological progress. that the work that the government invests in today is going to shape technology for 20 years.
And we saw this when I was a PhD student. I was funded on a large DARPA project. And we did a lot of stuff. It was great. But at the end of the DARPA project, some of the people on it started a South company. They built out technology. They eventually sold it to Apple. It was called Siri.
That emerged not because of Apple. Apple's done a lot of great work, but Apple bought that company because it started with an investment from DARPA. That is the trajectory that the government can enforce. So I've been really excited to hear and see that the government is bringing in... house expertise to kind of make these decisions, both in terms of government acquisitions, investments, and such. That's positive. I don't know if that's going to continue.
And that worries me quite a bit. Yeah. Okay, you went right to work. Trying to be positive. Okay, didn't work. So is it about policy or just AI in general? Is there a thing that you go, okay, this is... Well... Maybe this is a little bit nihilistic, but I think many of our institutions were already broken and AI just pushes it to the limit, right? So, for example, when we talk about, oh, chat GPT, are kids going to learn anymore? The problem actually wasn't that.
ChatGPT has made the education system untenable. And I say this as somebody who's had way too much education and taught a lot. It's that kids leave college and their job has nothing to do with what they studied. Right. And this has been true for a very long time. ChatGPT did not break.
the college system. The college system was broken before, right? So a lot of these things that we're talking about, whether it's economic inequality, et cetera, were already issues that in a sense we were kind of ignoring because we were limping along. And for better or for worse, AI has kind of accelerated a lot of these things almost to like absurdist.
you know, the absurdist extent, i.e. all children now have a tool that helps them write an essay that could reasonably pass like an eighth grade English paper in about... five minutes less than five minutes so it's like pushed it to like this absurd and then what you have to ask is
what was the purpose of writing an essay, not let's ban chat GPT from schools, right? Well, the purpose of writing an essay was to teach children to synthesize information, et cetera. Well, great. Well, maybe that's the part that's broken. So it's like...
pushing us to reimagine a lot of our institutions, which were actually built in the previous industrial revolution for the needs of the previous. So why were they doing this in the first place? Exactly. So like we made all this stuff over 100 years ago for a world that does not look like the world does today. I kind of haven't thought of that. I mean, I do that oddly enough by myself when my kids were doing, my older kids were doing essays. And I was like, don't write that. It's pointless.
Like, this is pre-Chat GPT. And then it would get back to the team, and my kids would go, my mom says this is pointless. And my teachers would call me, and they'd go, why are you telling your kids this is pointless? Because it is. It's pointless. And I said, I don't think they should write it at all. I told them not to do their homework. I don't care. But because it wasn't useful. I was like, do team building, do this, do that. I wasn't very popular with the teachers, but that's fine.
I was accurate. So the idea is that if it's already broken, it'll push us to say kids should learn critical thinking, you don't have to write these dumb essays, because they figure out the game, you're saying. I used to teach SAT prep for years. when I was a broke grad student.
And that's what you teach when you teach SATs. You sadly don't teach them the content. You teach them the tricks. You teach them how to write an essay that's going to give them good scores. You teach them all the math tricks. You don't actually teach them the content. So if I can teach in a six-week... to eight-week class what it would take for someone to increase their score on some arbitrary exam by a couple of hundred points, the problem is the exam. Right. So, Jillian?
So I'll go with the, is there any reason to be optimistic about what's happening in AI policy? And I think there is. I mean, I've been thinking about AI safety issues for about eight years and thinking about the world of... Oh, let's imagine we're in that world with artificial general intelligence and autonomous agents. And until the release of ChatGPT, there was maybe 100 people who wanted to talk about that in the world.
And the thing that I, I mean, ChatGPT, world tilted on its axis. And one of those dimensions was with respect to the attention to policy from. governments, kind of across the board. So I think that's been a positive thing. I mean, it's been driven by some, by fear. It's driven by conversations that Kevin Roos had with... ChatGPT, I actually do think that raised the profile. We're seeing a lot that's actually, I would never have predicted.
you know, just two years ago that we would see this much stuff coming out of government's attention to it. We still haven't actually... done very much. Yes, yes, I've noticed that. But we're having a lot more conversation about it. And I do think, sort of picking up on Ruben's point, so I've been thinking about how our legal and regulatory structures have been broken for...
number of decades, and I've been thinking about it for a number of decades. And we needed to fix that for lots of reasons, like our access to justice, our regulations are too expensive, law takes too long, litigation is too expensive. All those things that are actually really important for productive economies. And so I do think we are a little closer to that world where we can be a lot more innovative.
about the ways in which we regulate. We can't continue to do the way we do. Like, we can't just say, oh, let's get Congress to write a bill. Let's get state legislatures to enact stuff. Let's put it through the courts. Let's have... you know, the woman who, with the, you know, the son who committed suicide, sue and take that through the courts, I can tell you that's not going to be a satisfying process. It's not going to be a good solution.
We need new ways of doing that. We need to be as innovative about those regulatory methods as we are with all that technology. I think we're a little closer to there. So speaking of that, Peter Thiel, who I never agree with, almost for 30 years now, on a number of issues, recently said, and something I actually agreed with him on, that our AI obsession is masking stagnation and other fields of innovation.
I was like, well, he's right. That's right. The obsession of money being spent, the over money. Mark, is that a problem that we're focused on all the money? shifted really quickly to this, and all of it's going there now. And that's where everybody goes, and everything else gets ignored, presumably.
As someone who unbelievably benefits from all the attention, it's too much. Yeah, good for you. It's absolutely too much. Yeah. Right? And people turn—you know, I hear AI is a solution to many problems. AI is a solution to our spending in healthcare. AI is a solution to, you know— whatever. AI is not the solution to all problems.
And there's too much focused on the technology, certainly not enough focused on the applications and the use and thinking about the environments put in. And we are ignoring a lot of other technologies that we should be investing in. I love the attention, but it's too much. Let's still finish the interview, but like, too much. I actually don't love the attention because I think it brings sometimes the worst kind of people to this field.
so hard to separate hype from reality that we actually can't get anything productive done. Nobody wants to have a long-term conversation about creating more robust legal systems or better medical systems. Because boring, there's a new like shiny toy being dangled every 30 seconds in front of our faces.
I actually long for the days where, you know, the idea of AI governance was very boring because then the only people in the room were the people who actually cared about it and we could have real conversations and try to get stuff done. And now it's like...
somebody like spent five minutes on an LLM and suddenly they show up in the room as an expert. And then you got to start from, you know, from like, from like level negative one to get everybody back up to speed. It's, it's tough. It makes it harder. And like, yeah, that's a. I feel kind of wrong that I'm in agreement with Peter Thiel on anything. Right. So I'm going to have to sit with that this evening. Okay. All right. You sit with that. He's correct, though. He is.
Yeah, so what's the claim? That we're thinking too much about AI and not our other problems? Other problems. Yeah, I think... The thing is, if you think about where we are— I mean, obviously, he'd like to focus more on destroying democracy, but go ahead. Yeah, yeah. Again, AI is going to impact, I believe, the way we do just about everything.
And that means it obviously is going to interact with all the things that don't work very well. It's going to exacerbate a bunch of those things as well. So if we haven't figured out... you know, inequality, if we haven't figured out how to, you know, manage the fact that we have very, very big corporations producing this stuff, if we haven't figured out health care, if we haven't figured out access to justice.
Yeah, it's going to exacerbate all those things. But I don't know that I would say we should just stop... thinking about AI and focus back on these other things. Because I think we need to be thinking about them in the context of AI. Because it has, it's sort of like the... A little like the World Wide Web, right? It affected everything. Right, right. When the internet first started, I was at the Washington Post, and someone asked me, like, what is it? And I said, it's...
Everything. And it's hard to explain. They're like, go away. I was like, it's everything. Yeah, yeah. I don't know what everything is. And it has changed everything. That's right. We'll be back in a minute. So, Mark, you said that when it comes to air regulation, quote, there's so little understanding of what the concerns are and what should be regulated. How would you advise academics like...
advise regulators to understand AI. There's a trope that regulators do not understand tech. That is not true. That is somewhat true with some of them. But in general, they regulate everything else, and sometimes badly, sometimes well.
Do you think we need more public funding for universities to become relevant players in AR? Because they are not at this point. You understand that. Because they were in every other technological revolution. Yeah, you're asking me, should we give more money to Johns Hopkins? Yeah. No, but I think in seriousness... I said, should we give more money to Harvard, for example? Sorry. I'm going to change my answer to no. No, but in seriousness...
You know, I see, and you've had this experience, when they call people to Washington to testify for Congress, experts. They have very different goals, the people testifying, you've spoken about this very well, than the people that are interviewing them. And academics play a unique role.
where we really can sit in the middle and say, like, look, we don't work for the companies, we don't work for the government, but we study this technology and we really are experts and we can say things. So absolutely.
We need to be cultivating exactly buildings like this that are bastions of higher learning, of people who are experts in this technology right here in Washington to be that kind of policy guidance. I think that's absolutely necessary. I think we also need those people, people like... Like me, I'm a computer scientist.
to interact with the people who sit next to me here on the stage who understand these systems in ways that I don't, who understand the regulatory environment, to what it means to regulate things, right? And people like that can build the bridges to say, okay, it's great that you published a paper at, you know.
NeurIPS last week about this fancy algorithm or this math, but this is what regulation actually looks like. How can we connect the dots? How can we actually take your insights and get them to apply to what regulation really looks like and then speak to the regulators to help them understand?
What's possible? What's not possible? What should we do? Absolutely. So, Rumin, you've called for a global governance body that could set up all kinds of things, including forcing corporations to slow down AI implementation if it leads to too many job losses too fast. Train seems to have left the station, I suspect. Sam Allman's called for an international agency. Your idea is much more ambitious. Talk about that. This is something...
I've always said there should be, this is like nuclear power. This is like cloning. Can you imagine an international regulatory agency with power over the richest corporations in the world? I have not seen that happen. I think it would be deeply fascinating, but I'll tell you something really interesting. When I wrote that op-ed, it was, what, April of 2023. There was not a single international entity, and now we are...
basically swimming in them. You know, there are AI safety institutes that have been set up in 111 countries. There's now an international consortium of AI safety institutes. The UN has a body. OECD has a body. I mean, I could just keep going and going and going.
And, you know, Jillian and I were joking earlier how we just all fly around the world and see each other in different locations. But it's true. There is actually a global governance community. And I can count amongst my friends and colleagues that I work on this with.
People who are in Australia, in Japan, in Singapore, in Nigeria, in France. I mean, the next big AI safety summit's in Paris. The one before that was in Seoul. The first one was in Bletchley. So there actually is a global conversation. I wouldn't be surprised if... we started to see and as a political scientist I find statecraft incredibly fascinating you know and just sort of nerding out for a second it is one of the most fascinating times to be alive
to see this sort of global conversation truly beginning on any sort of a governance that could look more meaningful. I mean, it doesn't mean it's going to... absolutely happen. We may end up in a rather toothless model, but we could, I think there's enough people pushing for something novel and something new. The two examples I give
In that article, we're actually IAEA, which I think is a really interesting model, as well as a Facebook oversight board. So we technically do have a global governance entity. a group that Facebook had set up for themselves. So it's possible. Now, Julian, you held a lot of international dialogues around AI alignment and global competition.
One of the things that every tech CEO I've had, including Mark Zuckerberg and others, have talked about whether they point to competition with China to argue against any regulation. I call it the Xi or me argument. And I'm like, I'd like a third choice if you don't mind. I don't like any of you and I don't like him. So how do you do regulation? Slowing down AI is that that's their argument.
China obviously has to be part of this global conversation. What do you imagine? Is there an ability to do cooperation with China and come to a global decision-making on these things? Yeah, I think we have to. And I think you can use things like thinking about the World Trade Organization as an infrastructure that we have that actually does.
implement rules about what you need to demonstrate in order to participate in global trade. We've had these international dialogues for AI safety, lots of Chinese participants in them. Seems to be a lot of interest, certainly in the academic community. These are predominantly academics. Lots of interest. I mean, it's something that affects everyone. It's going to change the way the world works.
we're going to have to put together those kinds of structures. I think there is a lot of shared interest in doing that. I think what it requires, however, is that we be thinking about... The structures were put in place. Like, you can't just be talking about, like, what are the standards? What's it allowed to say? What's the model allowed to say?
But you got to put some like registration. You got to put registration in place. You have to put the capacity to say and demonstrate, oh, if you haven't passed these tests, like the U.S. can put its own requirements in place and say. And those models can't be sold in our markets if they don't pass those tests. I think there's actually shared capacity for that, particularly if you start with the... the things on which everybody agrees. So when we had our meeting in Beijing last year, there's...
There's agreement. Okay, we need to make sure that these are not self-improving systems. We need to make sure that we have red lines when we would know they're getting close. Killer drones, no. Yeah. Just New Jersey. We have the capacity. And so use those things on which there's going to be widespread agreement to build a structure in place. They can do that. We have to go faster because of China seems to be a pretty reductive argument in that regard. Yeah, I mean, I...
Here's what I think is really wrong with that argument. The idea that building regulatory infrastructure is going to slow it down and we should not do that. It's not the way any part of the economy works. Actually, having basic play. Jillian, they're special. I don't know if you know that. So last question. We have a long way to any AI regulation. The Biden administration issued an executive order on AI. Donald Trump's going to repeal it. Trump has Elon Musk.
as his best friend, apparently, which is a lovely thing to see. Elon's stance on AI regulation is unclear. He has different ones. He changes quite a bit. He signed a letter for a six-month pause in AI development, and then he, like, funded his company. It was, like, going like gangbusters.
I know it seems hypocritical, but that's what he did. He supported a controversial California bill to regulate AI. He's not an uninterested party, right? And he is sitting right next to President Trump at this point, especially on these issues. And if it's not him, it's one of his... many minions. What do you expect from the Trump administration? What worries you? What gives you hope? Very quickly, each of you. We just have a minute and a half.
Are you asking me to opine about Elon Musk knowing that I had worked at Twitter? I know. A minute and a half will definitely not be long enough. Yeah, okay. What am I... Not a fan. I got that. You know, I don't think... Me either. His idea of ethics and my idea of ethics are the same. Yeah.
I don't even know how to answer your question. What is your greatest hope that would happen and your greatest worry in this administration? My greatest hope is that things don't get much worse than they are. That's probably the best we can hope for is status quo. What may happen is a lot of the headway, a lot of the positive things we were talking about.
are getting rolled back. I mean, specifically, the EO has been on the chopping block. But also, we're going to have to worry about programs like programs at NIST. For example, any sort of scientific body that's been doing test and evaluation. I also worry a bit about the brain drain that's going to happen. I think there are a lot of people who sat through one Trump administration.
And, you know, are very dismayed that the narrative this time around seemed to be, oh, well, it wasn't so bad last time. And they're like, do you know what it cost me so that it wouldn't be that bad? And now they're leaving. So what's going to happen when all of the amazing people, and you're totally correct.
that all these amazing people were brought in and they're like, I'm just not going to sit here for another four years. So that's your word. Go ahead. I expect inconsistency and uncertainty for exactly the reasons you said about Elon Musk. I don't know what's going to happen, and I can make arguments in both directions. What's the good? So there are a lot of people not in government doing great work. Some of them are sitting next to me on the stage, and that work is going to continue.
and to continue to lay the foundation for whenever the government is ready to take action, there will be research there to support it. And so I don't think the external research to the government is going to stop. I think I'm certainly, and I don't know about my colleagues, are going to feel pressure to do more and be more involved because it's now not happening in the government. And we'll see what happens in... Four years? Can we get a countdown clock? Four years?
Yeah, I think the stuff that's driven by national security is going to continue. So I think those pieces of executive order might get redone in different ways. I think that's going to continue. I think it's very hard to predict whose ear anybody's going to be listening to. the China component's going to be...
very important. I think there's, you know, we just had a bipartisan AI task force report come out that said we got to invest in developing the science of valuations. And I think that's important. I think we've got bipartisan efforts. I actually think there may be things that continue. I don't expect it all to—it's too big. It's too big for it to just—
We're going to ignore it and, quote, unquote, not regulate it. Right. I see. So it's too big to fuck up. Too big to fail in any case. Too big to fail. Thank you so much, all three of you. Great discussion, and we'll see what happens. We'll check in here in four years, okay? We'll talk about that and see what happens. Thank you so much. Thank you.
On with Kara Swisher is produced by Christian Castro-Russell, Kateri Yoakum, Jolie Myers, Megan Hunane, Megan Burney, and Kaylin Lynch. Nishat Kirwa is Vox Media's executive producer of audio. Special thanks to Claire Hyman. Our engineers are Rick Kwan, Fernando Arruda, and Aaliyah Jackson. And our theme music is by Trackademics. If you're already following the show, feel free to skip out on any useless homework. And all of it is useless. So skip out. If not, 500 words do.
tomorrow on whatever. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network and us. We'll be back on Monday with more.