
How Tech Journalists Are Fueling the AI Hype Machine

May 29, 2024 · 22 min

Episode description

Micah breaks down media hype about AI. According to Sam Harnett, a former tech reporter, journalists are repeating lazy tropes about the future of work that once boosted companies like Uber, Airbnb, and Fiverr. Plus, Julia Angwin, founder of Proof News, debunks fantastical claims by AI companies about their software. And Paris Marx, host of Tech Won’t Save Us, explains how AI leaders like Sam Altman use the press to lobby regulators and investors.

On the Media is supported by listeners like you. Support OTM by donating today (https://pledge.wnyc.org/support/otm). Follow our show on Instagram, Twitter and Facebook @onthemedia, and share your thoughts with us by emailing [email protected].

Transcript

Hey, it's Micah. This is the On the Media Midweek Podcast. Happy Memorial Day. Hope you did something fun or relaxing. Hope you got to be outside a little bit. We've found sometimes that over holiday weekends our listens dip a little bit, and so you might have missed a piece I did on the most recent show about kind of lazy tech journalism, and how reporters just, time and time again, fall for whatever Silicon Valley is hawking. They did it with the gig economy, and now they're doing it with artificial intelligence. We were pretty proud of the piece, and so we're going to rerun it for the pod extra. Enjoy.

Last week, OpenAI released a demo of its latest technology, its text-based software ChatGPT-4o, which responds to prompts and now has a new voice. A few, actually, but this one, called Sky, got the most attention. You've got me on the edge of my... well, I don't really have a seat, but you get the idea. What's the big news? People online said the demo reminded them of a 2013 film about a man who falls in love with his AI voice assistant, performed by Scarlett Johansson. Good morning, Theodore. Good morning. You have a meeting in five minutes. You want to try getting out of bed? You're too funny.

Within hours of the demo's release, OpenAI CEO Sam Altman tweeted the word "her," the name of that very film, which, by the way, he has publicly described as an inspiration for his work. Then, days later, the actor said she had turned down the offer to be the voice of the artificial intelligence system, and that they made one that sounded just like her.

Johansson said Altman approached her eight months ago and she turned down his offer to lend her likeness to the software. He approached her again just two days before the release of the demo. She said, "I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."

In response to requests from Johansson's lawyer, OpenAI said it is discontinuing the voice it called Sky, but the company maintains it hired a voice actor for the job before approaching Johansson and made no attempt to emulate the actor. The debacle underscored how these large language models rely on human labor and data, often taken without permission.

Despite its problems, so many AI boosters in Silicon Valley and members of the press say that artificial intelligence holds the keys to a shining future. We may look on our time as the moment civilization was transformed, as it was by fire, agriculture, and electricity. Oh man, when the AI coverage started, I thought, here we go again, this is the same old story. Sam Harnett is the author of a 2020 paper titled "Words Matter: How Tech Media Helped Write Gig Companies into Existence."

I wrote it because I was really disappointed with the coverage I was seeing, and some of the coverage I ended up doing. Today, Sam hosts a podcast called Ways of Knowing, but back in 2015 he was a tech reporter for KQED in San Francisco, filing stories for Marketplace and NPR. I was a young reporter, you gotta do these quick stories, and before you know it, you're using all these words like startup or tech or platform.

I started thinking, these words themselves are misleading. Like "ride share" for Uber. What are you sharing? You're paying someone to drive you around. You're not sharing anything. These euphemisms were pushed by the tech industry and quickly adopted by the press during the early days of the gig economy. In his paper, Sam listed off several styles of media tropes that defined that era, like the first-person review.

He points to a Time magazine cover story titled "Baby, You Can Drive My Car, and Do My Errands, and Rent My Stuff." In those experiential first-person stories, they're not critical at all, right? It's all about how you're engaging with this thing and what it's like. And even when they are critical, you're still sort of giving them a lot of free advertising by casting it as a totally new thing.

Yes, but on the consumer side, you could see where your car was before it got to you, see who the driver was, you could know how much it was going to cost, you didn't have to give cash to a stranger in a car, right? That's innovation, no? Well, when you look at Uber and Lyft, they're using GPS and phones. GPS had been around for decades. Phones were relatively new, but Uber and Lyft didn't invent the phones. Really, the innovation seemed to be ignoring local transportation laws and ignoring labor laws.

And it was all being cast as techno-utopianism, this inevitable future of work. It's a mass transit revolution sparked by the universal ride-sharing company that goes by only a block-letter U on its windshield, and of course we're talking about Uber. I hope that all regulators will take the time to understand that most of these drivers greatly value the freedom and flexibility to be able to work whenever and wherever they want.

The industry wants those drivers to stay independent contractors. That's cheaper for those companies. It's also at the core of their business. So what Uber does, this is the future. It is the sharing economy. The marketplace will win, but we've got to support them. But really, it was the past of work. I think it was talking to a lot of taxi drivers and realizing that this is work that has no social safety net.

This is work that has no overtime, there's no guaranteed minimum wage, work that's undoing protections that were hard fought 100 years ago. Meanwhile, some outlets focused on what Sam Harnett calls the outlier worker profile. CNBC wrote about 38-year-old David Feldman, who, quote, "quit the rat race" and left his finance job to make six figures picking up gigs on Fiverr, a site that connects customers with freelancers.

The Washington Post ran a story titled "Uber's remarkable growth could end the era of poorly paid cab drivers," which cited these claims from the company. The people that drive their taxis barely break even, whereas someone who drives an Uber can make a net $90,000 a year. The median pay for Uber drivers in New York City: $90,000 a year for a 40-hour work week. Wow, that is the same as a postsecondary science teacher and a financial analyst here.

That's a lot of money. Claims that landed Uber in court. The Federal Trade Commission will send nearly $20 million in checks to Uber drivers. This is all part of a settlement with the ride-hailing company. The FTC found Uber exaggerated the yearly and hourly income that drivers could make in certain cities. Instead of pressing Silicon Valley executives on how these companies were, say, misleading workers, many journalists did uncritical interviews.

They were threatening to sue you, right? They were threatening to shut us down. Host Guy Raz in 2018, interviewing Lyft co-founder John Zimmer for NPR's podcast How I Built This. The opportunity was massive and the regulatory obstacles were just as massive. How long did it take for you to overcome those initial regulatory challenges? Was it months, years? I'd say at least a year, probably, for that first year. They cast the people running these companies as heroes who overcome adversity.

Sam Harnett. Who created a thing that the listener wants to succeed. It's kind of astonishing how the tech industry keeps finding ways to get lots of media coverage that ends up turning into lots of investment. And lots of power. Speed is imperative. And if they can get up and running quickly enough, and if their business model can become a thing that's regularly used by consumers and embedded in society, then they become too big to regulate.

I think we see it with a lot of new technologies. Whether it's the gig economy, whether it was with crypto a few years ago, whether it's AI. Paris Marx is the host of a podcast called Tech Won't Save Us and the writer behind the Disconnect Newsletter. We often see these very rapid embraces of whatever the next new thing from the tech industry is. And less of a desire to really question the promises that the companies are making about them.

Marx agrees that some of the same media tropes that Sam Harnett identified are recurring now with AI, like the first-person review. After ChatGPT was released in November of 2022, the companies were selling the idea that we were potentially even closer to computers matching human-level intelligence. And one of the things that we saw a lot of media organizations doing was actually going on to ChatGPT and having conversations with it.

And there's a really striking example of this that was published in the New York Times by Kevin Roose, their tech journalist. He basically had this two-hour conversation with this chatbot, which he said wanted to be called Sydney. It had its own name. It was telling him that it wanted to be alive, and was ultimately asking Roose to leave his wife and have a relationship with the chatbot. And the way that it was written, it was ascribing intentionality to this chatbot.

It was thinking, it was having these responses, it was feeling certain things. When actually we know that these chatbots are not doing anything of the sort, right? The science fiction author Ted Chiang basically called these chatbots auto-complete on steroids. You know, we're used to using auto-complete on our phones. When we're texting people, it's suggesting the next word. And this is just taking it to a new level.

The fact that a nascent chatbot with millions of dollars of funding behind it would say such outrageous things, is that not in and of itself newsworthy, even if the chatbot's own claims about its human-like intelligence were just outright wrong? I think it definitely can be, but then the question is, like, how do you frame it? And how do you explain it to the public? This was February of 2023. ChatGPT was released at the end of November of 2022.

So we were still really early in the public's kind of getting to know what this technology was. It really misleads people as to what is going on there. Another trope that Harnett lays out in his paper is his discussion of the founder interview. Today, we've seen so many fawning conversations with tech leaders who are at the forefront of artificial intelligence.

Yeah, absolutely. One of the ones that really stands out, of course, is an interview that Sundar Pichai, the CEO of Google, did with 60 Minutes back in April of 2023. And in this interview, Sundar was talking about how these AIs were a black box, and we don't know what goes on in there. Let me put it this way. I don't think we fully understand how a human mind works either.

One of the biggest problems there was not just what Sundar Pichai was saying, but that the hosts of the program who were interviewing him and conducting this were not really pushing back on any of these narratives that he was putting out there. Of the AI issues we talked about, the most mysterious is called emergent properties. Scott Pelley of 60 Minutes. Some AI systems are teaching themselves skills that they weren't expected to have.

For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know. After the piece came out, AI researcher Margaret Mitchell, who previously co-led Google's AI ethics team, posted on X saying that according to Google's own public documents, the chatbot had actually been trained on Bengali texts. Meaning, this was not evidence of emergent properties. Here's another exaggeration that made its way into a TV news piece.

The latest version, ChatGPT-4, can even pass the bar exam with a score in the top 10 percent, and it can do it all in just seconds. ChatGPT-4 scored in the 90th percentile on the bar exam. Was that legit? Yes, so that claim was debunked recently. Julia Angwin is the founder of Proof News. She recently wrote an op-ed for the New York Times titled "Press Pause on the Silicon Valley Hype Machine." An MIT researcher basically reran the test and found that it actually scored in the 48th percentile.

And the difference was that when you're talking about percentiles, you have to say who are the other people in that cohort that you're comparing with, right? And so apparently, OpenAI was comparing to a cohort of people who had previously failed the exam multiple times. OpenAI compared its product to a group that took the bar in February. They tend to fail more than people who take it in July. And so when you compared it to a cohort of people who had passed it at the regular rate, then you got to this 48th percentile. The problem is that paper comes out, it's peer-reviewed, and it goes through the academic process. It comes out like a year later than the claim.

Tell me about Devin. This is a red-hot product from a new startup that claims to be an AI software engineer. Can it do what its creators claim it can do?

Devin is from this company called Cognition, which raised about $21 million from investors, and came out with what they called an AI software engineer that they said could do programming tasks on its own. The public couldn't really get access to Devin, so there wasn't anything to go on except these videos of Devin supposedly completing tasks. I'm going to ask Devin to benchmark the performance of Llama on a couple of different API providers. From now on, Devin is in the driver's seat.

The press wrote about it as if it was totally real. Wired had a piece with the headline "Forget Chatbots. AI Agents Are the Future." Bloomberg had a breathless article about how these programmers are basically writing code that would destroy their own jobs. There was a software developer named Carl Brown who decided to actually test the claim. I have been a software professional for 35 years. Here's Carl Brown on his YouTube channel, Internet of Bugs.

For the record, personally, I think generative AI is cool. I use GitHub Copilot on a regular basis. I use ChatGPT, Llama 2, Stable Diffusion. All that kind of stuff is cool, but lying about what these tools can do does everyone a disservice. So he took one of these videos where Devin was aiming to complete a task, and he tried to replicate exactly what was happening. He did the task in 36 minutes, and the timestamps in the video show that it took Devin more than six hours to do the task.

What Carl says is that... Devin is generating its own errors and then debugging and fixing the errors that it made itself. The company basically acknowledged it, actually, in tweets. They didn't respond to my inquiries, but they basically said, yeah, we're still trying to make it better. But it was one of these things where it was a classic example of, like, journalists shouldn't believe just a video that claims to show something happening without actually taking a minute to even carefully watch the video, or ask to have access to the tool themselves.

If I started a company and raised millions of dollars in funding, I would be under a lot of pressure to prove to the public that it works, and you'd think that people who cover Silicon Valley understand that dynamic. Totally, but I mean, I will tell you that after my piece ran in the New York Times questioning whether we should believe all this AI hype, a reporter at Wired did an entire piece basically trashing my piece, and the title of it was "We should believe the AI hype." Really? Yes.

Okay. And what was their argument? Basically that in the future I will be proven wrong, because it will all get better. And that's sort of the company's argument too, which is like, don't believe your lying eyes, believe the future that I'm holding up in front of you. I think for journalists, I don't think our role is to call the future. I think our role is to assess the present and the recent past. The recent past tells us that Big Tech is very good at generating hype in the press and using venture capital to grow really fast and influence regulators. I'm not predicting this will happen with AI. It's already happening.

My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world. Here's Sam Altman, CEO of OpenAI, testifying before Congress last May and discussing why he thinks AI needs to be regulated. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening. Just a month later, Time Magazine revealed that OpenAI had secretly lobbied the EU to go easy on the company when regulators were drafting what's now the largest set of AI guardrails.

Because he is treated as kind of the high priest of this AI moment, because he had these compelling narratives that were being backed up by a lot of reporting. Paris Marx. He was basically able to convince European Union officials to reduce the regulations on his company and his types of products specifically. And that carried through to when the AI Act was finally passed. All this while technology companies push the public along a path that they, and members of the press, say is inevitable. We know that generative AI, the ChatGPTs, the image generators, things like that, are much more computationally intensive than the types of tools that we were using previously.

So they require a lot more computing power. And as a result of that, Amazon and Microsoft and Google are in the process of doing a major build-out of large, hyperscale data centers around the world, in order to power what they hope will be this major demand for these generative AI tools into the future. That obviously requires a lot of energy and a lot of water to power it. I think we have paths now to a massive energy transition away from burning carbon.

And so in this interview in January with Bloomberg, Altman actually directly engaged with that when he was asked about it. Does this frighten you guys? Because the world hasn't been that versatile when it comes to supply, but AI, as you know, as you have pointed out, is not going to take its time until we start generating enough power. It motivates us to go invest more in fusion and invest more in new storage. He said that we're actually going to need an energy breakthrough in nuclear technologies in order to power the vision of AI that he has. He didn't kind of hesitate and say, well, if we don't arrive at it, then maybe we won't be able to roll out this vision of AI that I hope to see, but rather that we're just going to have to power it with other energy sources, meaning fossil energy sources. And that would require us to geoengineer the planet in order to kind of keep it cooler than it would otherwise be, because of all the emissions that we're creating.

The existential question I have about AI is, is it worth it? Julia Angwin. Is it worth having something that maybe sorts data or writes an email for you, at the cost of our extremely precious energy? And then also, AI is based on scooping up all this data from the public internet without consent. As Sam Harnett said, speed is imperative. It's why Big Tech is pushing some half-baked AI features. As of last week, when you type a question into Google, you now see an AI-generated answer. Some people reported that the AI told them to eat rocks and put glue on pizza, which weren't presented as jokes, even though the info appears to have been scraped from Reddit and The Onion.

You know, there's this AI pioneer, Yann LeCun, who works at Meta. He's their leading AI scientist. And he recently tweeted out something I thought was so perfect. He said, it will take years for AI to get as smart as cats. And I thought, like, that's perfect. I should have just run that instead of my column. Here's one last issue. When Google AI summarizes legit info from real news sites, there's no need to go to the original source, meaning even less traffic for ailing media organizations. This is yet another reason members of the press should refrain from Silicon Valley boosterism. Janky new tools may be eating our lunch, but if the recipe was made by AI, we should probably wait to dig in.

Thanks for listening to the Podcast Extra. On the big show this week, we'll be discussing the rise of Donald Trump's social media platform, Truth Social. Look out for the show on Friday. You don't want to miss it. Thanks for listening.

This transcript was generated by Metacast using AI and may contain inaccuracies.