Let me tell you something. There's a full-out war going on on this subject right now. A bunch of filthy Sodomite perverts, and if you don't like it, get out of here. They're at war with us tonight. This is a video from YouTube of Believe It or Not, a sermon from Steven Anderson. He's a pastor in Tempe, Arizona. He's young, he's bearded, wearing a tie, working up a sweat, pretty much screaming at his audience. It's intense. He's pacing back and forth in front of a podium. At one point he even jumps up on the podium, raising his voice even more. He's vigorously wagging his finger. You know what, we have hundreds of people, hundreds of people here that will not compromise. And if you're not one of them, then get out. I don't want to hang around with a bunch of fag hags and a bunch of queer baits and a bunch of feminists. Get out. Look, never in a million years is this what I thought when I was a child. Steven Anderson uploads his sermons online several times a week, sometimes multiple times a day. The one we're watching right now has over seventy thousand views. It doesn't have any ads running before it or beside it. Well.
A couple of weeks ago, reporters for The Times of London were watching another YouTube video from the same pastor, and they saw, right next to him, an ad from the cosmetics maker L'Oréal promoting a British charity, obviously not the kind of thing L'Oréal would want to be associated with. The newspaper found several other ads from household names running on videos like one from an Egyptian cleric who had been banned from the UK for terrorism. There was one from David Duke, a former Ku Klux Klan leader. And the pastor Steven Anderson, his videos appeared next to ads from Nissan and the Guardian newspaper. When The Times called these advertisers, several of them said they were shocked, and they immediately yanked their ads from YouTube. And over the past two weeks, that story has exploded into a crisis for the world's largest video service and its owner, Google.
I'm Aki Ito. And I'm Mark Bergen. And this week on Decrypted, we're plunging into one of the thorniest issues of the modern Internet. How do you police the unwieldy, ever-expanding mess that is the World Wide Web? And what can Google, a company that believes in letting nearly anything and everything live online, do about it? Can it keep its paying advertisers away from the Internet's darkest corners? Stay with us as we take you on a tour of the latest attempts to apply artificial intelligence to solve Google's crisis. At stake here is the future of online advertising, which is the fuel for the Internet. Google and its rival Facebook have taken more and more of the share of all the advertising dollars spent online. So Mark, you're our Google reporter, which looks like a pretty fun job. It means you get to write about basically everything. Yep. Google's parent company, which, you know, is called Alphabet now, sells phones, laptops, smart speakers, smart thermostats. There are two healthcare companies in there, and you have the drones and the robots and the self-driving cars. You know, despite all this cool futuristic stuff, Alphabet still makes the bulk of its money, actually eighty-eight percent last year, from selling ads. That amounts to over seventy-nine billion dollars in 2016. And YouTube, a really big video site, is one of the fastest-growing parts of that business. Thanks to the avalanche that The Times of London set off, the growth of that revenue could be in jeopardy. Now, before we get into the thick of this controversy, we should take a step back and explain how YouTube advertising actually works. It doesn't really happen through the traditional sales process in TV, where a company would say, we want our ads to run in the commercial breaks for Grey's Anatomy. Instead, brands can target really specific slices of YouTube's viewers. They could say, we want to target men in their twenties who really like NASCAR, or we really want to reach teenage girls who want makeup advice, and do that on a really big scale. The ad spots are bought and sold through Google's superfast computerized auction system. It all happens automatically.
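To make that concrete, here's a minimal sketch in Python of how a targeted, automated ad auction can work. The advertiser names, bids, targeting keys, and the second-price rule are all illustrative assumptions on our part, not Google's actual system.

```python
# A minimal sketch of a targeted ad auction in the spirit of what the
# episode describes. Advertisers, bids, and pricing are all illustrative.

bids = [
    {"advertiser": "CarCo",    "cpm": 9.50,
     "target": {"age": "20s", "interest": "nascar"}},
    {"advertiser": "GlossInc", "cpm": 7.25,
     "target": {"age": "teen", "interest": "makeup"}},
]

def run_auction(viewer):
    """Pick a winner for one ad slot among bids that match this viewer."""
    eligible = [b for b in bids
                if all(viewer.get(k) == v for k, v in b["target"].items())]
    if not eligible:
        return None  # no matching advertiser; the slot goes unfilled
    eligible.sort(key=lambda b: b["cpm"], reverse=True)
    winner, runner_up = eligible[0], eligible[1:2]
    # Second-price flavor: pay just the runner-up's bid if there is one.
    price = runner_up[0]["cpm"] if runner_up else winner["cpm"]
    return winner["advertiser"], price

# A twenty-something NASCAR fan loads a video.
print(run_auction({"age": "20s", "interest": "nascar"}))  # ('CarCo', 9.5)
```

Notice that nothing in this sketch ever inspects the video itself; the targeting is all about the viewer. That's exactly how an ad can land next to content the brand never chose.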
They get all the benefits of running ads with YouTube and it's huge audience, and they stept the risk that some of their ads could just show up next to pretty much anything, which is why when the Times of London's story first broke. I call around a few advertisers here in the US, and a lot of them said
the same thing. WI this New A Monday, which was a week after all this drama started, we sent our reporter Shelley Hagen to an advertising event in New York, and despite some of the high profile announcements of advertisers pulling their money from YouTube, a lot of the people she talked to seemed pretty sanguine about it too. Um. I mean, you do a lot of algorithmic betting and curation, but you still have you know, ads that could turn up,
you know, next to offensive content. Uh. I don't think it's it's it's always been a problem in u g C. Uh media. That's Anna Thomas as a product marketer at Adobe and yeah you g C. That stands for user generated content, which is what makes up a vast majority of the videos on YouTube. There are videos that are posted by anyone instead of saying the polished videos that we at Bloomberg would publish. It's what makes YouTube unique
and compelling and often kind of strange. It's great for advertisers who wants to reach younger audiences who don't watch a lot of traditional TV and At the same time, we at Bloomberg have all kinds of strict guidelines about what's okay and what's not okay to say. We certainly would never use the kind of epithets that we heard from the pastor earlier in the show, even if we personally believe them, which I must say I do not.
But users on YouTube can say pretty much whatever they want, and this means that content is going to be a lot less predictable, and so it's something that ultimately you algorithms get smartto and you can actually prevent that from happening. But um, yeah, I don't see it as a huge deal. Um. And most advertisers are aware of these things. Not everyone agrees with Anna. Right after meeting Anna, Shelley visited a
Right after meeting Anna, Shelly visited a big name in the advertising world. His name's Rob Norman, and he's the chief digital officer at GroupM, which is the media-buying arm of the ad agency WPP. We've had a lot of conversations with Google in the last couple of weeks, as you might imagine, from the people that look after our business on a daily basis up to the most senior commercial levels of Google, and we've listened to their ideas. We put some ideas of our own on the table, and I hope that some of those are implementable in the very near future. It's a big deal when a guy like Rob is this concerned. WPP's companies spent around five billion dollars on Google last year, and they plan to spend more this year. For most people, the idea of an ad sitting next to a jihadi video, one incidence of that is one too many. Zero is always a good number.
We should note that these problems happen pretty rarely. One agency executive told me that of the hundreds of millions of impressions, or ads, that they serve online, only two landed on what they would call questionable content. And as Rob noted to Shelly, this is not a new problem for Google. Right, so more than a decade ago, when Google was just a search company, it introduced a system called AdSense. It was an algorithm that scoured the text on websites to connect them with relevant ads. And we still talk about this story: one time The New York Post ran a story about a gruesome murder where the killer had chopped up the body and stuffed it in a garbage bag, and right next to that story was an ad, placed by Google, for plastic bags.
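For a sense of how that kind of contextual matching can misfire, here's a toy keyword matcher in Python. It's a deliberately naive illustration of the failure mode, not AdSense's actual algorithm, and the ad inventory is made up.

```python
# A toy version of contextual ad matching: score each ad by bare keyword
# overlap with the page text. Purely illustrative, not Google's method.

ads = {
    "plastic bags": {"plastic", "bag", "bags", "trash", "garbage"},
    "luxury cars":  {"car", "engine", "luxury", "drive"},
}

def pick_ad(page_text):
    """Return the ad whose keywords overlap most with the page's words."""
    words = set(page_text.lower().split())
    scores = {name: len(words & kws) for name, kws in ads.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# The gruesome crime story matches the plastic-bag ad on pure keywords,
# because naive overlap has no notion of context or tone:
story = "the killer chopped up the body and stuffed it in a garbage bag"
print(pick_ad(story))  # -> 'plastic bags'
```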
Google has since put quite a bit of effort into curbing this problem. It even set up a kind of white-glove service called Google Preferred, where advertisers pay more to have their ads run on the most popular and, what Google says is, brand-safe content. But obviously the problem has persisted, and it really reached a flashpoint this past week. Coming up: Google's advertising crisis goes global. Big brands in the US join the UK in the backlash over terrorist videos. So AT&T announced it was pulling its ads from YouTube and other Google channels that weren't search. Verizon and Johnson & Johnson quickly followed. Then there were a bunch more. You know, we had reporters calling around in Europe, and there were brands that pulled their ads there. And then Walmart, Disney, and Starbucks did the same after reporters showed that their ads were running next to YouTube videos with pretty blatantly racist messages. Here's Rob Norman from GroupM again. I think there may be a group of advertisers that decide that YouTube, and other forms of user-generated content in particular, are simply too hot to handle.
I think those that do may withdraw over a longer period of time. I think there'll be many others that do come back, but they'll come back when they're satisfied that the controls are in place that let them believe in the safety of their brands in that environment, more than they've been moved to believe in the last few days. This brings us to the question: why now? In part, it's our political climate. Since the US election, big tech companies like Google, Facebook, and Twitter have faced a ton of criticism over the content they host, especially articles laden with misinformation, you know, the fake news problem, which we covered on this podcast back in November. And marketers like Rob are really sensitive about this stuff. It's clearly an issue, because as soon as a corporate reputation is damaged by anything, that's a significant issue for that corporation. It's a significant issue also because some of the content would be offensive to almost anyone in America. You know, there's one more factor. There's this kind of growing angst in the advertising world about how much of the market Google and Facebook are eating up. The word I always hear is duopoly. More than forty-six percent of all digital ad sales worldwide last year went to Google and Facebook, according to eMarketer, and that share is even higher in the US. A lot of people have told me that the YouTube boycott gave advertisers a really good chance to pile on. After it began, Martin Sorrell, the head of WPP, who's like Rob Norman's super-boss, sent out a statement blasting Google and Facebook. He said they were, quote, masquerading as technology companies. Martin Sorrell said that these companies place ads, so they should act more like media companies and actually take responsibility for the content that they're hosting, even if they didn't make that content themselves. So I think the time has sort of come where the harm has become so salient and so significant that companies are going to, one way or another, they are going to have to start to get control over the really bad things that are happening on their networks.
So that sounds an awful lot like Martin Sorrell, but it's actually someone from a very different profession. My name is Hany Farid. I am a professor of computer science at Dartmouth College and a senior advisor to the Counter Extremism Project. Before he worked on counterterrorism, Hany worked on a different mission: he built algorithms to rid the web of child pornography. He helped create a technology that extracts unique identifiers from images, what he calls a signature, which sticks with them as they move across the Internet, even if they're modified. And what that allows us to do is to build up this database of known bad content, in that case it was child pornography, and then we just sit at the pipe of a Facebook or a Twitter or a Google, and every image that comes in gets scanned and compared against this database of known child pornography, and then anything that is a hit against that database gets flagged, removed, and reported. Similar software runs online for things like detecting spam and viruses.
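Here's a rough Python sketch of the signature idea Farid describes, using a simple average hash. It's an assumption for illustration; systems like his PhotoDNA use far sturdier math, and the file names here are hypothetical.

```python
# A toy "robust hash": reduce an image to a small signature, then compare
# uploads against a database of signatures of known bad content.
from PIL import Image

def average_hash(path, size=8):
    """Shrink to an 8x8 grayscale grid; record which pixels beat the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Bits that differ between two hashes; small = likely the same image."""
    return bin(a ^ b).count("1")

# "Sitting at the pipe": hash every upload and flag near-matches.
known_bad = {average_hash("known_bad_image.jpg")}  # hypothetical seed file

def should_flag(upload_path, threshold=5):
    h = average_hash(upload_path)
    return any(hamming(h, bad) <= threshold for bad in known_bad)
```

Because the comparison tolerates small bit differences, a resized or lightly edited copy of a known image can still register as a hit, which is the "even if they're modified" part.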
Over the past year, Hany has taken that same framework and applied it to content that promotes terrorism. He's also started to move from still images to audio and video files. Video, it turns out, is much more complicated. And while that sounds like a simple extension, it turns out to be a very, very difficult engineering task, because standard video is twenty-four still images for every second. So imagine a three-minute video: you're talking about thousands and thousands of images. And so being able to do that efficiently and accurately, um, is a very, very challenging engineering problem.
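The arithmetic behind that is simple enough to sketch. The one-frame-per-second sampling shortcut at the end is our assumption of a typical workaround, not YouTube's documented pipeline.

```python
# Back-of-the-envelope math for why video is so much heavier than stills:
# standard video is 24 frames for every second of footage.
FPS = 24

def frame_count(minutes):
    return FPS * minutes * 60

print(frame_count(3))  # -> 4320 frames in a three-minute video

# A common shortcut is to hash only a sampled subset of frames:
print(3 * 60)          # -> 180 hashes at one sampled frame per second
```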
Just consider the sheer volume of video that Google has to deal with. Here's Eric Schmidt, the former CEO and now chairman of Alphabet, talking about this on Fox Business last week. YouTube, for example, went from about a hundred million hours watched per day to one billion hours of YouTube watched globally every day. That's an extraordinary platform and an extraordinary responsibility. Hany, though, says Google has used this as a convenient excuse for not yet solving the problem of offensive content. So, first of all, YouTube does a pretty good job of taking down content that violates intellectual property and copyrights. That's Google's Content ID program, a software system the company built that automatically pulls videos that violate copyright claims. They're actually pretty effective at it, and they're effective at it because there's a financial interest for them to do it. Um, they get sued when they don't do it. And now there's a financial interest for them to eliminate content, um, based on the advertising problems that they've been seeing over the last few weeks. Three days after the Times of London story came out, Matt Brittin, who's Google's top business executive in Europe, was speaking at an ad conference. He was really contrite. He said the company was terribly sorry and that a fix was coming. Later that day, Google laid out some of the fixes in a blog post.
Google promised a slew of new controls for advertisers. Yeah, they'd have more levers to avoid quote-unquote higher-risk content, videos with religious or political themes. The company promised to hire significant numbers of people and develop new AI tools to fix the problem of ads running next to questionable content. Soon, Google promised, these sorts of incidents would be resolved in less than a few hours. Many people we talked to said that if anybody could do this, it would be Google.
They've certainly got a lot of money, and they've been working on this technology for a long time. Our editor Alistair Barr talked to this guy over the weekend. Um, I'm Omid Madani, and I work in machine learning, applying machine learning, developing machine learning techniques, currently for security at Cisco. Before that, Omid worked at Google. I was there up to three years ago, for three years. So, um, I was in YouTube research, so we were, again, developing machine learning techniques, and the rest of the folks were doing machine learning, visual analysis, audio analysis, speech analysis techniques, uh, and applying it mostly to YouTube videos. So this was super advanced research, and Omid spent a lot of that time on, of all things, video games like Minecraft. Google's researchers wanted to ensure that the videos uploaded to YouTube were what they claimed to be, so videos classified as Minecraft would be, indeed, Minecraft. They also worked on detecting and tagging images inside YouTube videos, like, look, there's a cat, or, that's a blue sky. There are different ways of analysis. The video gets dissected, like in a surgery, basically. All these channels get dissected. So why would Google invest so much of its research horsepower into this? The better we can tag automatically, the better ads we can put, recommendations, the relevance can get improved, the related-videos list on the right side, and so on. In 2012, Omid and his colleagues published a research paper showing they had a ninety-nine percent accuracy rate in classifying video content on YouTube. Basically, for the work we had done, not just this, but our team, the YouTube research team, was awarded. For example, we could take a vacation together.
They went to Hawaii. Google didn't want to comment for this episode, but Mark, you got your hands on an email the company sent out to some of its advertising agencies last week, after the company's initial blog post failed to stem the YouTube boycott. Yep. In that note, Google gave a few more details about the new capabilities its policies could bring. Here's one example. Under its old policies, Google would have allowed advertising if someone in a YouTube video was wearing a T-shirt with, let's say, offensive language or an offensive slogan. Under the new policies, it would disable ads. Some of the experts we talked to said this is not an easy fix to implement. Here's Hany Farid, the Dartmouth professor, again.
The tools that we have developed, what are generally called robust hashing, um, don't go after content like, oh, I've seen this flag or this logo or this symbol somewhere in the image. It goes after, this is the same image that I've seen before, this is the same video, the same audio recording. So it's very, very specific. The good news about that is that we can find that content extremely efficiently and very accurately. And when I say accurately, I mean we can work at Internet scale, with billions of uploads a day, and make very, very few mistakes. And that's really good, because that means you can fully automate the system. So this works for things that have already been flagged as inappropriate content. Like, let's say a particular clip of child pornography gets taken down but gets re-uploaded to the Internet a couple of days later. What gets tricky is when a computer sees something entirely new, something that hasn't been tagged yet as appropriate or inappropriate. And while there is technology to do that, that technology is not good enough to work fully automatically, extremely efficiently, with no human intervention. And what I mean by that is that it will inevitably make mistakes, and it will make two types of mistakes. It will allow things to go through that you don't want it to, um, maybe we tolerate that, but, more troublingly, it will filter out things that you don't want it to filter out.
So without human intervention, that technology today, um, does not work at scale, at Internet scale. Now, Google is one of many companies working on an AI approach called deep learning, which is more or less a system where algorithms sort of teach themselves. So Google's researchers once fed a ton of videos of cats to a computer without telling the computer what a cat was. Over time, the computer basically learned to identify, look, there's a cat. And they could, in theory, do the same thing with a Nazi flag or another hate symbol.
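Google's famous cat result was unsupervised, and its internal tools aren't public. But the supervised version of "look, there's a cat" can be sketched with the standard open-source recipe: an off-the-shelf pretrained network. The input file name here is hypothetical.

```python
# A sketch of supervised image classification with a pretrained network.
# This is the generic open-source approach, not Google's internal system.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = preprocess(Image.open("video_frame.jpg")).unsqueeze(0)  # hypothetical
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)

# ImageNet class 281 is "tabby cat". Recognizing a hate symbol the same way
# would need its own labeled training data, which is exactly the hard part.
print(probs[0, 281].item())
```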
Our editor Alistair also spoke to another researcher who has worked on these problems extensively. I'm Jiebo Luo. I'm a professor of computer science at the University of Rochester. Jiebo worked on developing AI systems that could detect hate messages on social media like Facebook and Twitter. He also uses AI tools for other ends. So we have used social media to, for example, analyze the progress of the presidential election, and many of our findings helped to explain why Trump won the election. We also used social media to track the underage drinking problem, which is a big problem in the United States. We also used Instagram to track drug use and drug dealers, and this was actually in collaboration with the New York State Attorney General's office. His research also used advanced AI to comb through Instagram looking for things like weed and pills and even bongs. Pretty amazing, yeah, and a little creepy. But Jiebo said that video presents a far greater obstacle for AI researchers. A video is not just a bunch of, uh, individual frames. When you watch a video, the most important aspect, or unique aspect, of video is the motion aspect. So things move, people move. That requires, um, motion-based analytical methods to, for example, understand the changing expression on a person's face, or an action performed by a person, or, you know, something like an explosion, or, you know, people fighting each other. It's a challenge. So it's a big challenge for Google because it has YouTube, but Facebook and Twitter and other companies are also pouring a lot of resources into online video, and Facebook and Twitter are also doing live video, which has its own set of challenges.
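A toy version of the motion cue Luo describes: video understanding needs more than per-frame labels, so a crude first-pass signal is simply how much the pixels change between consecutive frames. The synthetic frames below are our own illustration, not a real analysis pipeline.

```python
# Frame differencing: a crude motion signal between two grayscale frames.
import numpy as np

def motion_energy(prev_frame, next_frame):
    """Mean absolute pixel change between two grayscale frames (0-255)."""
    return np.abs(next_frame.astype(int) - prev_frame.astype(int)).mean()

still = np.full((8, 8), 100)   # a static scene: two identical frames
burst = np.full((8, 8), 220)   # a sudden whole-frame change

print(motion_energy(still, still))  # -> 0.0, nothing is moving
print(motion_energy(still, burst))  # -> 120.0, a big jump, like an explosion
```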
Right. So that's one challenge, figuring out what's inside a video, and Google's algorithms are getting pretty good at that. But here's an even thornier problem: how do you make judgments about the gray areas, whether a video is outright offensive, or hate speech, or incendiary? Google has typically tried to steer clear of that. Omid said his research team at YouTube didn't spend a lot of time trying to classify political videos, because that puts Google into an uncomfortable position, right, making editorial judgments. Google thinks of YouTube like online search: it's just search for video. It sees itself as the objective, neutral technology platform where all information is free.
Now, few advertisers, if any, would want to run ads in front of the kind of video we watched earlier, with a screaming pastor putting down gay people. Right. But some online news services, like maybe Vice News or even CBS, might want to run a news segment on that pastor, and advertisers probably would want to run ads on that. So remember the note we told you Google sent out to advertisers? In it, Google said that of the forty-eight videos flagged as offensive by The Times and other papers, forty-four of them would immediately have ads disabled under the new policies, but the other four could still show ads. Google had decided that these videos didn't violate its terms of service. Still, with the new controls, advertisers would have clearer options for opting out of videos like this.
Here's one of those four. It starts out with a prayer in Arabic asking God to have mercy upon the souls of martyrs, and then it transitions into a song. The language is pretty graphic. You'll hear phrases like, we'll wage wars against them, we'll return the raped truth, we won't accept any occupied land, the ground will erupt and burn them, we'll liberate al-Quds. And al-Quds, for our listeners who don't know, is the Arabic name for the city of Jerusalem. Okay, so it's clear that this is about the Israeli-Palestinian conflict. Um, you know, it's pretty graphic language, like we said, with a lot of metaphors of violence. When we loaded this video, as of taping this episode on Wednesday, we saw an ad roll in for Samsung's new Galaxy phone. Probably not something Samsung would have picked on its own, but the video is popular right now; it has more than half a million views.
With its new policies, once they're implemented, Google said advertisers like Samsung will be able to opt out of this sort of video. They could check off a box that says no ads on political or religious content.
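Here's a sketch of what that checkbox could amount to on the serving side. The category labels, video IDs, and data structures are our assumptions; Google hasn't published the mechanics.

```python
# A sketch of advertiser-side category exclusions: each brand opts out of
# labels, and the ad server skips videos carrying them. All illustrative.
from dataclasses import dataclass

@dataclass
class Campaign:
    advertiser: str
    excluded_categories: set

# Hypothetical labels produced by classifiers and human reviewers.
video_labels = {
    "vid123": {"religious", "political"},
    "vid456": {"gaming"},
}

def can_serve(campaign, video_id):
    """Serve only if the video has none of the campaign's excluded labels."""
    return not (video_labels.get(video_id, set()) & campaign.excluded_categories)

samsung = Campaign("Samsung", {"religious", "political"})
print(can_serve(samsung, "vid123"))  # False: opted out of religious content
print(can_serve(samsung, "vid456"))  # True: a gaming video is fine
```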
But it still raises the question: why doesn't Google just disable ads on videos like these altogether? So I checked with Google on this, and they have a policy where they don't comment on individual videos. But here's a hunch. The words in the video are pretty intense, but Google may have decided that it didn't cross the line into hate speech or a specific attack on a group of people. You know, it could be interpreted as a religious song, and Google just doesn't really want to filter religious songs from YouTube ads. It goes to show how difficult it is for software alone to be making all these decisions. You really do need a human making the call on some of these trickier videos. Here's what Hany, the Dartmouth professor, told me. So it's really a question of how much you want to automate these things. The fully automatic thing, we are not there, where we can just say, hey, go filter all the extremist content. Machine learning, artificial intelligence, is nowhere even near that. Don't believe the hype.
So I showed Hany the video with the prayer and the song after we had our initial conversation, and he wrote back and said that, to him, the language sounded like an explicit call to violence. But he said that his work focuses on removing content that violates the terms of service of companies like Google and Facebook, and it's up to Google to set its own terms of service. So that's an example of how humans often disagree about what's offensive. Omid Madani, the AI researcher who used to work at YouTube and now works at Cisco, agreed that humans play a really important role here too. He said that letting people look at the end result of what a computer filters would help reduce errors. He did have a caveat, though. Obviously, humans don't scale. We need maybe an army of editors to look at things, eventually, to decide whether something is an appropriate video or not. As an aside, humans don't scale might be the most Googly comment I've ever heard. Facebook pointed to a similar defense for its inability to curb fake news. But after that controversy in the fall, Facebook decided to team up with third-party fact-checking websites, which employ human editors. And when those fact-checkers say that a particular piece of content could be dubious, Facebook flags those stories as potentially incorrect.
The technology we have today, for all its advances, still isn't good enough to offer a perfect solution. And Google, which is typically not shy about making lofty promises, didn't want to commit to a total fix. In its note, Google said its new fixes would go a long way in assuring brands like L'Oréal that they would never get a call like the one from The Times the other week. But Google said that could never be a hundred percent guaranteed. Right. And Professor Luo, the AI researcher we talked to earlier, agreed. Whether it's human intelligence or computer intelligence, we are never going to achieve a hundred percent recognition rate on anything. So for someone to come out and say, we will get rid of all the hate messages, okay, all the videos, that's irresponsible. And that's it for this week's Decrypted. Thanks for listening.
Tell us what you thought of the show. You can write to us at decrypted at bloomberg dot net. You can find us on Twitter. I'm Mark Bergen, and I'm Aki Ito, at akito7. Don't forget to subscribe to us on iTunes or wherever you get your podcasts, and leave us a rating and a review. I read each and every one of those reviews, and they help us make our show better, and they help put our show in front of more listeners. This episode was produced by Liz Smith and Magnus Hendrickson. A very, very special thanks to our reporter Shelly Hagan and our great editor Alistair Barr for their reporting and research for this show. Alec McCabe is head of podcasts. We'll see you next week.