Support for the show comes from Zelle. Scammers are nothing new. Believe it or not, they even existed before the internet. But as technology keeps advancing, the tools and techniques at the scammers' disposal are ever-changing. And they're getting much savvier at separating people from their money, so it's important to stay vigilant. Always remember to only send money to people you know and trust. And be sure to educate yourself on how to spot a scam, so you'll recognize the signs. Learn more at zellepay.com slash safety. What if AI could help your business deliver mission-critical outcomes with speed? With IBM Consulting, your business can design, build, and scale trusted AI using watsonx, and modernize the way you work to accelerate real impact. Let's create AI that transforms your business. Learn more at IBM.com slash consulting. IBM. Let's create. Hello and welcome to Decoder. I'm Nilay Patel, Editor-in-Chief of The Verge,
and Decoder is my show about big ideas and other problems. We're on a short summer break right now. We'll be back after Labor Day with new interview and explainer episodes, and we're pretty excited about what's on the schedule. In the meantime, we thought we'd re-share an explainer that's taken on a whole new relevance these last couple of weeks. It's about deepfakes and misinformation.
In February, I talked with Verge policy editor Adi Robertson about how the generative AI boom might start fueling a wave of election-related misinformation, especially AI-generated deepfakes and manipulated media. At the time, the biggest news in AI fakes was a robocall with an AI version of Joe Biden's voice. It's been about six months, and while there hasn't been quite an apocalyptic AI free-for-all out there, the election itself took some pretty unexpected turns.
Now we're headed into the big, noisy home stretch before Election Day, and the use of AI is starting to get really weird and much more troublesome. Elon Musk's X has become the de facto platform for AI-generated misinformation, and Trump's campaign has also started to boost its own AI use. For the most part, these AI stunts have been for cheap laughs, unless Taylor Swift decides to sue the Trump campaign. But as you'll hear Adi and I talk about in this episode, there are not a lot of easy avenues to regulate this kind of media without running headlong into the First Amendment, especially when dealing with political commentary around public figures. There's a lot going on here, and a lot of very difficult problems to solve that haven't really changed since we last talked about it. Okay: AI deepfakes and the 2024 election. Here we go.
Adi Robertson, how are you doing? Hi, good. You've been tracking this conversation for a very long time. It does seem like there's more nuance in the disinformation conversation than before. It's not just "Russia made people vote for Trump," which is I think where we were in 2016. Can you just give some background? What's the shape of how people are thinking about disinformation right now? We've had, I think, about three major US presidential election cycles where disinformation was a huge issue. There was 2016, where there was a lot of discussion in the aftermath about, all right, was there foreign meddling in the election, were people being influenced by these coordinated campaigns. There was 2020, where deepfakes technically did exist, but generative AI tools were just not as sophisticated. They were not as easy to use. They were not nearly as prevalent. And so there was a huge conversation about what
role social platforms play in preventing manipulated information in general. And there was, in a lot of ways, a huge crackdown. There was the entire issue of Stop the Steal. There were these large movements that were trying to just lie about who won the election. What do we do? There were questions about, all right, do we kick Trump off social networks? That was the locus of debate. And now it's 2024, and we have, in some ways, I think a little bit of a hangover from 2020, where platforms are really tired of policing this. And so they're dealing with, all right, how do we renegotiate this for the 2024 election? And then you have this whole other layer of generative AI imagery, where whether or not you want to technically call it deepfakes is, like, an open question. And then there are all the layers of how that gets disseminated and whether that turbocharges a bunch of issues that already existed. So "the platforms are getting tired of this" is worth talking about for one second longer. There was a huge rush of, how do we make ultra-sophisticated content moderation systems? And I think the pinnacle of that rush was Facebook setting up its Oversight Board, which is effectively a supreme court of content moderation decisions. And that was seen as, okay, Facebook is as big as a state. It has the revenue of a state. It's a government now. It's going to have some government-like functions to regulate speech on its platforms.
That didn't pan out, right? Like, the Oversight Board exists. It moves very slowly. It's, I think, hard for the average Facebook user or average Instagram user to think there's a moderating force on content moderation on this platform. It's the same as it ever was from the user's perspective, as far as I can tell. Yeah, I think that what the Oversight Board tends to do, and this is maybe comparable to the Supreme Court, is sophisticated outside thinking about what a consistent moderation framework looks like. But the Supreme Court in real life does not adjudicate every single complaint that you have. You have a whole bunch of other courts. Facebook doesn't really have those other courts. Facebook has a gigantic army of moderators who don't always necessarily even see its policies. So yeah, it's this very macro-level "we're going to do the big thinking."
But also, even at the time, there was the question of, is this really just Facebook, or now Meta, kind of outsourcing and kicking the can out of its court and putting the hard questions on other people? I wanted to bring that up specifically because that was the pinnacle, I think, of the big thinking about content moderation. Since that time, the companies have all done lots of layoffs. We've seen trust and safety diminished across the board. I think most famously with Twitter, now X, Elon Musk basically decimated the trust and safety team on that platform. It appears Linda Yaccarino is trying to bring some of them back. But the idea that content moderation is the thing these platforms have to do is no longer in vogue the way it was when the Oversight Board was created. Yeah. And part of this is also political. There was a huge backlash to this, largely, again, from the right wing in the US. This was the kind of thing that would get a state attorney general mad at you and get a congressional committee to investigate you, as ended up happening with pre-Musk Twitter. So yeah, I think there became a real political price for doing this as well. Since then, some platforms have let Donald Trump back on. They've said, all right,
we cannot possibly moderate every single lie on this. We're going to just wash our hands of whether you're saying the election was stolen or not. Yeah, let's go through the new players and how they might turbocharge the disinformation conversation now. And then let's talk about what might be done about it. I do just want to emphasize for the audience: it doesn't seem like the desire to regulate information on social networks is nearly as high as it has been in the past. And I think that is an important thing to start with, because the technical challenges are so hard that wanting to solve them is actually an important component of the puzzle. Let's talk about the actual technical challenges and the players behind them. OpenAI, that's a new company. There are a lot of other new companies in various stages of
controversy. So Midjourney exists; that is an image generator. Stability AI exists, another image generator, one that is being sued by Getty for allegedly using the Getty library to train images that look like Getty photos, which in this context is very important. Midjourney is getting sued as well. OpenAI is getting sued for training on The New York Times' database. Just a few days ago, OpenAI announced Sora, its text-to-video generator, which frankly makes terrifying videos. All of those videos look terrifying. But you can see how an enterprising scammer could immediately use that to make something that looks like compelling video of something that didn't happen. All of these companies talk about AI alignment, making sure AI doesn't go off the rails. Where's the AI industry broadly on "we shouldn't do political deepfakes"? Do they have a unified point of view, or are they all in different spots? How's that working out? The companies are in slightly different spots, but they actually have come together. Very recently, they signed an accord that says, look, we're going to take this seriously. They've announced policies of varying levels of strictness, but they tend toward: if you're a major AI company, you're going to try to prevent people from creating information that maybe looks bad for public figures. Maybe you ban producing images
of recognizable figures altogether, or you try to. And you have something in your terms of service that says if you're using this for political causes or if you're creating deceptive content, then we can kick you off. One challenge here in America is the existence of the First Amendment. The Biden administration recently did an executive order saying, don't do bad stuff, and these companies all agreed: okay, we won't do bad stuff. But the United States government is pretty restricted in saying you can't make deepfakes of other people, because the First Amendment exists and it can't control that speech directly. Are the companies rising to that challenge of, we will self-regulate because the government can't directly regulate us? We don't necessarily know how good the enforcement of it is going to be, but the companies seem so far pretty open to the idea of self-regulation, in part because I think this isn't just a civic-minded political thing. Dealing with unflattering stuff about real people is just a minefield they don't want.
That said, there are open-source tools. Stability AI is pretty close to open source. It's pretty easy to go in and make a thing that builds on it that maybe strips away the safeguards you get in its public version. It's just not quite equivalent to the way that, say, social platforms are able to completely control what's on their platforms. So you've got a handful of companies with varying sets of restrictions and a broad general industry consensus that we shouldn't do deepfakes. Then you have reality, which is that there are deepfakes of celebrities all the time. There are deepfakes of teenage girls in high schools that are going to circulate on private message boards. It is happening. What can be done to stop it? Does stopping it mean that you're just trying to limit the spread, so this doesn't become a huge viral thing that a bunch of people see, but it's still maybe technically possible to create it? Or do you want to say, all right, we have a zero
tolerance policy: if anything is created with any tool anywhere, even if someone keeps it to themselves, that is unconscionable? Let's start with the second one, which I think has the more obvious answer. Saying no deepfakes are allowed whatsoever seems like it comes with a host of unintended consequences for speech, and it also seems impossible to actually accomplish because of the existence of open-source tools. I think: how would you actually enforce a total ban on deepfakes? And the answer is that Intel, and Apple, and Qualcomm, and Nvidia, and AMD, and every other chipmaker have to prevent it somehow at the hardware level, which seems impossible. The only example I can think of where we have allowed that to happen is that Adobe Photoshop won't allow you to scan and print a dollar bill, which makes sense. Like, it broadly makes sense that Adobe made that deal with the government.
But it's also like, well, that's about as far as you should let that go, right? Like there's a point where you want to make a parody image of a Biden or a Trump and you don't want Photoshop saying, Hey, are you manipulating a real person's face? Like you're saying, that seems way too far. So a total ban seems implausible. There are other things you could do at the creation step.
OpenAI bans certain prompts that violate its terms of service. Getty won't let you talk about celebrities at all; if you type a celebrity's name or basically any proper noun into the Getty image generator, it just tells you to go away. There's a lot of conversation about watermarking this stuff and making sure that real images have a watermark that says they're real images and AI images have a watermark that says they're AI images. Do any of those seem promising? The most promising argument I've heard for these is the idea that you can, and this is an argument that Adobe has made to me, train people to expect a watermark. And so if what you're saying is we want to make it impossible to make these images without a watermark, I think that raises the same problems we just talked about, which is that if anyone can make a tweaked version of an open-source tool, they can just say, don't put a watermark in. But I think that you could potentially get into a situation where you require a watermark, and if something doesn't have a watermark, the way it gets displayed or spread, or how much people trust it, is severely hobbled. That's maybe the best argument for it I've heard. The part where you restrict the prompts, where OpenAI restricts the prompts and Getty restricts the prompts: it's pretty easy to get around that, right? The Taylor Swift deepfakes that were floating around on Twitter were made in a Microsoft tool, and Microsoft just had to get rid of the prompts. Is that just a forever cat-and-mouse game on the restrict-the-prompts idea? It does seem like the thing about a lot of generative AI tools is that there are just vast, vast numbers of ways to get them to do something. People are going to find those. Software bugs are a thing that has been a problem; zero-day exploits have been a problem on computers for a very long time. And this feels like it kind of falls into that category.
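To make that cat-and-mouse point concrete, here's a minimal sketch of what naive prompt filtering looks like, purely as an illustration. The blocklist and function names are hypothetical, not any vendor's actual moderation code; real systems layer trained classifiers, output-image scanning, and human review on top of lists like this, but the same dodge-the-keyword dynamic applies.

```python
# Hypothetical illustration of keyword-based prompt filtering; not any
# company's real moderation code. Real systems add ML classifiers,
# output scanning, and human review on top of lists like this.

BLOCKED_TERMS = {"taylor swift", "joe biden", "donald trump"}

def is_allowed(prompt: str) -> bool:
    """Reject prompts that literally contain a blocked name."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(is_allowed("Taylor Swift at a football game"))               # False: caught
print(is_allowed("a famous blonde pop star at a football game"))   # True: slips through
print(is_allowed("T@ylor Sw1ft at a football game"))                # True: slips through
```

The filter blocks the literal phrasing, but a description of the same person, a misspelling, or another language slips right past it, which is why this keeps turning into a cat-and-mouse game.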
That's the creation side. We need to take a quick break. When we come back, we'll get into the harder problem: distribution. Support for this podcast comes from Huntress. If you're a small business owner, the threat of hackers isn't just a threat, it can affect your livelihood. Small businesses are easy targets for hackers, and Huntress wants to give businesses the tools to help. Huntress is where fully managed cybersecurity meets human expertise. They offer a revolutionary approach to managed security that isn't all about tech. It's about real people providing real defense. When threats arise or issues occur, their team of seasoned cyber experts is ready 24 hours a day, 365 days a year for support. They provide real-time protection for endpoints, identities, and employees, all from a single dashboard. Their cutting-edge solutions are backed by experts who monitor, investigate, and respond to threats with unmatched precision. Now you can bring enterprise
level expertise without needing a massive IT department. Huntress can empower your business as they have done for over 125,000 other businesses. Let them handle the hackers so you can focus on what you do best. Visit Huntress.com slash decoder to start a free trial or learn more. What if AI could help your business deliver mission-critical outcomes with speed? With IBM Consulting, your business can design, build, and scale trusted AI using watsonx, and modernize the way you work to accelerate real impact. Let's create AI that transforms your business. Learn more at IBM.com slash consulting. IBM, let's create. On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Watch Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity.
Download the Global Citizen app to watch live. Learn more at globalcitizen.org slash vox. Welcome back. So we've talked about what the companies that make software and hardware can do about the creation of deepfakes. And it seems like the best answer we have right now is adding watermarks to AI-generated content. But the real problems are in distribution.
Let's talk about the distribution side, which is, I think, where the real problem lies. If you make a bunch of deepfakes of Donald Trump at your house and you never share them with anyone, what kind of harm have you caused? You start telling lies about both presidential candidates and share them on social platforms and they go viral, now you have caused a giant, external problem. And so it feels like the pressure to regulate this stuff is going to come to the platforms.
And again, I think the desire of the platforms to moderate waxes and wanes, and it feels low right now; maybe it will ramp back up. Where are the platforms right now with the deepfake distribution problem? So far, it feels like the consensus is, we're going to label this, and that's going to be mainly our job; we're going to try to make sure we catch it. There are cases where, say, maybe you get it taken down if you haven't disclosed, if you're a company or you're buying a political ad. But broadly, the idea seems to be, we want to give people information and tell them that this is manipulated, and then they can make their own call. The one platform that stands out to me, and you and I have talked about this a lot, is YouTube, which has an enormous dependency on the music industry. The music industry is not happy about AI-generated covers using the voices of its artists. Notably, fake Drake caused a huge kerfuffle. Universal Music Group went to Google, they announced some rules, and they're going to prevent deepfakes or allow some licensing so the money flows back to the artists. That is a very private sort of licensing scheme that sits outside of the law, and it sits outside of the other platforms.
Do you think YouTube is going to lead the way here because it has that pressure, or is that just a one-off to the music industry? I feel like the incentives for something like the music industry, and for things that are basically aesthetic deepfakes, are very different than they are for politically manipulated imagery. A lot of the question with YouTube is, okay, you are basically parodying someone in a way that may or may not legally be considered parody, and we can make a deal where, really, all that person wants is to get paid. And maybe they want something sufficiently controversial taken down, but if you give them some money, they'll be happy. That's just not really the issue at hand with politically generated images. The problem there is around reputation; it's around people who do, at least in theory, care about: did this person say this thing, is this true? So I just, I don't know that you could cut a deal with Joe Biden that says every time you make something up about him, he gets a penny. No, I feel like politicians are always asking for donations. Maybe that's just
the way to solve the problem. You just pay for the lies. As long as politicians are getting paid, from what I gather, and I think particularly Donald Trump, as long as he's getting paid, he might be cool with it. Outside of YouTube, which does have this big dependency on the labels and licensing, and so I think is leading the way on having any particular policy with specificity, do any of the other platforms have ideas here that are more than, we have an existing policy and we'll see how it works with the problem? There are companies that are signing onto an initiative called C2PA, which is, we were talking about watermarks earlier, a content provenance system. It includes a watermark that has metadata, and the goal there is that you will be able to at least tell where something has come from and whether it's been manipulated. It's supposed to be this broad, industry-wide thing: everybody has the same watermark system.
So it's very easy to look at an image and pop it in and check and see if it has the watermark. That's one of the leading ways the AI industry at this point is trying to deal with truth and provenance. Has that shipped yet? I feel like we've been talking about C2PA and the Content Authenticity Initiative. We've had Dana Rao, the Adobe general counsel, on the show. He said the deepfake of the Pope wearing a puffer jacket was, quote, a catalyzing event for content provenance, which is an amazing quote, and all credit to Dana for it. But there's people wanting to do it, there's the activity we see, and then there's shipping it. Has that shipped anywhere? Can you go look at it? Watermarks are rolling out in places. OpenAI adopted them in mid-February. They're starting to appear on DALL-E images. You can look at them in Photoshop. I think the problem is more that this thing has rolled out, but really most people are not going to care enough to check.
Well, unless the labels are in their face, right? Unless you are scrolling on TikTok and you see something and TikTok puts a big label right over the top that says this is AI, which doesn't seem to happen anywhere. OpenAI did Sora, its video generator. The videos are compelling, although they have some extremely terrifying errors in them. But there's not a big label on them that says this is AI-generated. They're going to travel without the context of OpenAI having produced them to promote its AI tool. And even that seems dangerous to me. Yeah, a lot of the issue with C2PA right now is that you have to actually go in and pop it into a tool to check the metadata, which is just an extra step that the vast majority of people are not going to take. And yes, it's not applying to things like Sora yet, at least as far as OpenAI has told us. So there is not a really prominent, in-your-face "this thing is AI" in most cases.
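For a rough sense of what "popping it into a tool" involves, here's a sketch. It is not a real C2PA verifier, the filenames are hypothetical, and proper verification should go through the official tooling (for example, the Content Authenticity Initiative's c2patool), which also validates the cryptographic signatures. This only checks for the manifest's presence in a file's raw bytes, and then shows why a screenshot-style re-encode defeats it.

```python
# A crude, illustrative check, not a real C2PA verifier. C2PA manifests are
# embedded in JUMBF boxes, so a file that carries one will contain these byte
# markers; a real tool would also validate the signing chain.
from PIL import Image  # pip install Pillow

def seems_to_carry_c2pa(path: str) -> bool:
    """Heuristic presence check for an embedded C2PA/JUMBF manifest."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data or b"jumb" in data

def reencode_pixels_only(src: str, dst: str) -> None:
    """Re-save just the pixels, roughly what a screenshot does: metadata is lost."""
    Image.open(src).save(dst, format="PNG")

# Hypothetical filenames for illustration:
# seems_to_carry_c2pa("dalle_image.png")               -> True if a manifest is embedded
# reencode_pixels_only("dalle_image.png", "copy.png")
# seems_to_carry_c2pa("copy.png")                      -> False: the provenance is gone
```

The pixels survive that round trip; the provenance data doesn't.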
Can you remove those watermarks? I mean, a screenshot tool, as far as I can tell, frees you from the watermarks. And I think there are ways that you can end up just stripping these things out. It's very, very hard to create a perfect watermarking system. So much of this relies on Adobe's argument being, well, eventually we want people to expect this. That it's going to be like, you know, if you look in your browser and you get a certificate warning that says there's no certificate for this web page, maybe you won't trust the web page. I think the goal they're going for is the idea that everyone will be trained into expecting a certain level of authenticity, and I just don't think we're at that point. In some ways, these problems already existed. Photoshopped nudes have been a thing that has been used to harass people for a very long time, and Photoshopped images of politicians and manipulated content about politicians are nothing new.
A thing AI definitely does is scale up those problems a huge amount and make us confront them in a way that was maybe easier to ignore before, especially by adding a technology that the people creating it are trying to hype up in a way that sounds terrifying and world-ending for other reasons. The problem with a lot of this is that you can't apply the kinds of paradigms that social media has, because it really only takes one person with one capability to do a thing. It takes one bad actor to make something that can spread in huge variations that are hard to recognize across huge numbers of platforms. I think that raises slightly different problems than, say, there is this big account on social media that's spreading something; well, all right, Facebook can ban them. So we've talked a lot about what the platforms can do and what the AI companies can do as private companies. These initiatives like content authenticity, that's a private
framework. The government has some role to play here, right? The big challenges are that the First Amendment exists in the United States, and that really directly restricts the government from making speech regulations. And then we have a patchwork of state and federal laws. What is the current state of deepfake law? Are you allowed to do it? Are you not allowed to do it? How does that work? It's mostly a small patchwork of passed laws, a huge number of laws of varying likely constitutionality that people are debating, and not a whole lot at the federal level. There are a lot of different problems that AI-generated images pose, and there are cases where individual states have passed rules for individual problems. There are a few states that incorporate, say, non-consensual AI pornography into their general revenge porn or non-consensual porn rules. There are a few states with rules about how you have to disclose manipulated images for elections.
And there are some attempts in Congress, or in, say, the FEC and other government regulatory agencies, to create a larger framework. But we are just still in this large, chaotic period of people debating things. Let's start with non-consensual deepfake pornography, which I think everybody agrees is a bad thing that we should find ways to regulate away. A solution to revenge porn broadly on the internet is copyright law. You have made these files with your phone or your computer, someone else distributes them, and you say, no, those are mine; copyright law will let me take this down. When you have deepfakes, there is no original. It's not a copy of something that you've made or that you own. You have to come up with some other mechanism, whether that's just a law that says this is not right, or it's some other idea like the right to your likeness. Where have most of the existing laws landed there? The copyright issue is actually something that came up with non-synthetic non-consensual pornography, because, say, if one of your partners took a nude picture of you, you don't own that picture. That was already just a huge loophole, and legislators have spent about a decade trying to make laws that meaningfully address non-consensual pornography that's not AI-generated.
The frameworks they've come up with are getting ported over to AI-generated imagery. A lot of it is about: this is harassment, this is obscenity, this is some other kind of speech restriction that is allowable. A lot of non-consensual pornography is a kind of sexual harassment that we can find ways to wall off outside protected speech, and that we can target in a way where it's not going to necessarily take down huge amounts of other speech the way that, say, just banning all AI-generated images would. There are a bunch of state laws around non-consensual AI-generated pornography. What states are those, and is there any federal law on the horizon?
There's California, New York is another, there's Texas. At the federal level, there have been attempts to work this into, it's not a criminal statute, but there is a federal civil right to sue if there is non-synthetic non-consensual porn of you, and there have been attempts to work AI into that and say, all right, well, it's not a crime, but it's a thing that you can sue for under, I believe it is the reauthorization of the Violence Against Women Act. Then there have been attempts to, like you mentioned, just tie all of this into a big federal likeness law. So likeness laws are a mostly state-level thing that says, all right, you can't take Taylor Swift and make her look like
she's advertising your Instant Pot. And so there have been some attempts to make a federal version of that, but likeness laws are really tricky because they're so much broader that they end up catching things like parody and satire and commentary, and they're just, I think, much riskier than trying to create really targeted, specific-use laws. The idea that someone should be in absolute control of photographs of themselves has only gained prominence over time. Emily Ratajkowski wrote that great essay for The Cut a couple of years ago where she said, a street photographer took a photo of me, and I put it on my Instagram, and now I'm in a legal fight with him over whether I can use his photo because it's a photo of me. That is a very complicated argument in that case, but the idea that you should be in total control of any photo of you, I think a lot of people just instinctively believe that.
And I think likeness law is what gives that legal force, but you're saying, oh, there's some stuff here you wouldn't want to pull under that umbrella. If you're talking about non-synthetic stuff, then there are all kinds of documentaries and news reports and really things that people have a public interest in making where you don't want to give someone the right to say, you cannot depict me in a thing. In that case, it's depicting something I actually did. But AI-generated images raise this whole other question, which is, okay, so where do you draw the line between an AI-generated image and a Photoshop of someone and a drawing of someone? Should you not be able to depict any person in a situation that they don't want to be depicted in, even if that situation is something that would broadly be protected by the First Amendment?
Where do we think that the societal benefit of preventing a particular usage that hurts someone should be able to override the interest we have in just being able to write about or create images of someone? Let's take another quick break. We'll be right back. This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges
your selling wherever you sell. With Shopify you'll harness the same intuitive features, trusted apps and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at Shopify.com slash tech. All lowercase. That's Shopify.com slash tech. Hey, Sue Bird here. Megan Rapinoe. Women's sports are reaching new heights these days and there's so much to talk about. So Megan and I are launching a podcast where we're going to
deep dive into all things sports and then some. We're calling it A Touch More, because women's sports is everything: pop culture, economics, politics, you name it. And there's no better folks than us to talk about what happens on the court or on the field, and everywhere else too. And we'll have a whole bunch of friends on the show to help us break things down. We're talking athletes, actors, comedians, maybe even our moms. That'll be a fun episode. Whether it's breaking down the biggest games or discussing the latest headlines, we'll be bringing a touch more insight into the world of sports and beyond. Follow A Touch More wherever you get your podcasts. New episodes drop every Wednesday. Recently, Vox's senior tech correspondent Adam Clark Estes got some bad news from his telephone. I got a little message from my bank, which is Chase, and the message said your Social Security
number has allegedly been compromised. "Allegedly" was a word that I really held onto as hope that maybe it wasn't true, but then I found out there was a lawsuit about a huge data breach. It comes from what may be the worst data breach ever, one that reportedly resulted in the theft of the Social Security numbers of every American. A couple of weeks ago, it was confirmed: me and a few hundred million other Americans got our Social Security numbers stolen. But Adam didn't just panic. He took action. He protected his information. And on Today, Explained, he's going to teach you how to do the same, and he's going to argue, believe it or not, that this massive data breach is actually a good thing. Find us, follow us, learn from us, Monday to Friday in the afternoon. We're back talking with Verge policy editor Adi Robertson about why it's really hard to limit either the creation or sharing of deepfakes. So that's the philosophical policy debate.
You want to restrict this because in many cases it can be used to do very bad things. There are some things that we absolutely want to forbid. But if we let that get too wide, we're going to start running into people's everyday speech. We're going to start running into absolutely constitutionally protected speech, like documentaries, like news reporting. That's pretty blurry. And I think the audience here, you should sit with that, because that is pretty blurry. On the flip side, there are two bills in Congress right now that purport to put restrictions on this stuff. There's something called the No Fakes Act, which is Chris Coons, Marsha Blackburn, Amy Klobuchar, Thom Tillis. And then, after the Taylor Swift situation on X, there's something called the Defiance Act, which stands for the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which is quite a lot of words. Do they go
towards solving the problem? Do you see differences there? Do you see them as being an effective approach? The two bills are a little bit the thing I talked about, where one of them, the Defiance Act is really specifically about we want to look at non-consensual pornographic images. We define what that means. And we think that this particular thing we can carve out. There are lots of questions about in general how far you want to go in banning synthetic images, but it's really targeting sexually
explicit pictures of real people. And then I think things like the No Fakes Act, and I believe there's also something called the No AI Fraud Act, these are much broader: we just think that you shouldn't be able to fake images of people, and we're going to make some carve-outs there, but the fundamental idea is that we want to create a giant federal likeness law. And I think that's much riskier, because that is much more of a, we start from a point of saying that you shouldn't be able to fake an image of someone without their permission, and then we're going to create some options where you're allowed to do it. And I think that raises so many more of these questions of, do we really want to create a federal ban on being able to create a fictionalized image of somebody? That is the likeness law approach to it, which has big problems of its own. Another approach we've heard about on Decoder is rooted in defamation law. So Barack Obama was on
Decoder. He said there are different rules for public figures than for 13-year-old girls; we're going to treat them differently. We should have different rules for what you can do with a public figure than with teenagers. We should have different rules for what is clearly political commentary and satire versus cyberbullying. And then Senator Brian Schatz was recently on, and he said something similar: is defamation where this goes, where it's, hey, you made a deepfake of me. Maybe it's my likeness, but you're actually defaming my character, and you did it on purpose. And that rises to the level of you knowingly telling a lie about me, and defamation law is what's going to punish you for this, instead of some law about my likeness.
Defamation law has already come up with text-based generative AI, where if something like ChatGPT tells a lie about you, are you allowed to say, they're making things up about me, I can sue? And I think the benefit of defamation law is that there is a really huge framework for hammering out when exactly something is an acceptable lie and when it's not. That, all right, would a reasonable person believe that this thing is actually true, or is this really obviously political commentary and hyperbole? I think that we're on at least more solid ground there than we are with just saying, all right, fine, you know what, just ban deepfakes. I do think that defamation law is still complicated, and every time you open up defamation law, as Donald Trump has suggested doing, you end up getting a situation where, in a lot of cases, it's very powerful people throwing the law against people who don't necessarily have the money to defend
themselves. And in general, I'm cagey about trying to open up defamation law, but it is a place where at least you have a framework that people have spent a very long time talking about. One thing we constantly say here at The Verge is that copyright law is the only real law on the internet, because it's the only speech regulation that everyone just kind of accepts. Defamation law is not a speech regulation that everyone just accepts; it has boundaries, the cases go back and forth. A federal right to likeness doesn't even exist yet, so that feels like it will be very controversial if it happens as speech regulation. But at the heart of all that is the First Amendment, right? People have such a strong belief in the First Amendment that saying the government should make a speech regulation, even if something is really bad, is extraordinarily complicated and a high barrier to cross. Do you see that changing in the context of AI?
When a new technology comes along, there are a large number of people who don't necessarily think about it in terms of the First Amendment or of speech protections, where you're able to say, well, this thing is just categorically different. We've never had technology like this before. The First Amendment shouldn't apply. And I always hope we don't go there with the technology, because I think that the problems that come from just blanket outlawing it tend to be really huge.
I don't know. I think that we're still waiting to see how disruptive AI tech actually is. We're still waiting to see whether it is meaningfully different from something like Photoshop, even though it seems intuitively like it absolutely should be. But we're still waiting to see that play out. We've spent a lot of time talking about the visual side of it: we're going to make deepfake images, those images have real-world harms, especially to young people, especially young women. In an election cycle, making it seem like Trump or Biden fell down the stairs could be very damaging. There's also the voice side of it, right? Having Joe Biden do AI-generated robocalls is a real problem. Or convincing people on TikTok that Trump said something he didn't say is a real problem. Do any of these laws address that aspect of it? If we're talking about non-internet systems like robocalls, then we actually have laws that
aren't really even related to most of the things we've talked about. There's a rule called the TCPA that's an anti-robocall law. Basically, that says you cannot just bombard people with synthetic phone calls. And it was recently decided: all right, should artificial voices there include voice cloning? Yes, obviously. So at this point, things like robocall laws apply to AI. And so if you're going to try to get Joe Biden calling a bunch of people and telling them not to vote, that's something that just can be regulated under a very long-standing law. What about fake Joe Biden and Joe Rogan podcasts on TikTok? That raises really all the same questions that image-based AI raises. In some ways, it's probably going to be harder to detect and regulate against at a non-legal, platform level, because so much stuff is optimized for detecting images. And so in some ways, it's maybe an even thornier problem. But also, on the other hand, voice impersonation was a thing before this; there were really good impersonators of celebrity voices. And so I think that might be a technically harder problem to fix, but I think that the legal questions it raises are very similar.
All right, so we've arrived at what I would describe as the existential crisis. Many, many problems; one set of things, deepfake non-consensual pornography, seems like it should clearly be illegal. Everything else seems kind of up for grabs. How should people be thinking about these challenges as they go into this election year? There are a bunch of really
hard technical issues. And a lot of those issues are going to be irrelevant to people because so many people do not check even very obviously fake information because of a variety of reasons that do not have anything to do with it being undetectable as a fake. I think that trying to actually make yourself care about whether something is true is in a lot of ways a bigger, more important step than making sure that nothing false is capable of being produced.
I think that's the place where huge numbers of people have fallen down and where huge numbers of people have fallen for things. And I think that while all of these other issues we've been talking about are incredibly important, this is just a big individual psychological thing that people can do on their own that does not come naturally to a lot of us. Thanks again to Verge policy editor Adi Robertson for joining us on Decoder. These issues are so challenging, and she always helps me understand them so much more clearly. If you have thoughts about this episode or what you'd like to hear more of, you can email us at decoder@theverge.com. We really do read every email, and we talk about them quite a bit. You can also hit me up directly on Threads at @reckless1280. We're also on TikTok; it's a lot of fun, check it out, it's @decoderpod. If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. If you really love the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Today's episode was produced by Kate Cox and Nick Statt, and it was edited by Callie Wright. The Decoder music is by Breakmaster Cylinder. We'll see you next time.