This country was built on a distinctly American work ethic. But today, work is in trouble. We've outsourced most of our manufacturing to other countries. And with that, we sent away good jobs and diminished our capability to make things.
American Giant is a clothing company that's pushing back against this tide. They make a variety of high-quality clothing and activewear, like sweatshirts, jeans, dresses, jackets, and so much more. All made right here in the USA, from growing the cotton to adding the final touches.
So when you buy American Giant, you create jobs for seamsters, cutters, and factory workers in towns and cities across the United States. And it's about more than an income. Jobs bring pride. Purpose. They stitch people together. If all that sounds good to you, visit American-Giant.com and get 20% off your first order when you use code STAPLE20 at checkout. That's 20% off your first order at American-Giant.com with promo code STAPLE20.
Federal funding for public media is at risk of being eliminated. Without federal funds, local public radio stations across the country will struggle to acquire and broadcast programming. That has a big impact on our bottom line. So we're turning to you. It's important right now: give to your local station, and if you can, please donate directly. Go to marketplace.org. Don't believe the hype about AI. From American Public Media, this is Marketplace Tech. I'm Megan McCarty Carino.
The excitement around AI has gotten a bit frothy. Those two magic letters are everywhere, promising everything. Authors Emily Bender and Alex Hanna want us all to take a beat and a more critical look. Their new book is The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Bender is a linguist at the University of Washington who helped popularize the term stochastic parrot
to describe large language models. And Hanna is the director of research at the Distributed AI Research Institute, formerly an AI ethicist at Google. She says claims of AI's artistic prowess can be misleading.
There's a lot of discourse around how this will democratize art or creativity, as if there were things preventing people from picking up a pencil or picking up an instrument and taking classes to do that. And one person we cite in the text is Julianne Dawson, who for a while ran a very small publisher called Bards and Sages. And she said this very pointed thing, which is: these people don't really care about writing or craft, they care about being the writer. And so the notion that the process is the point really gets turned on its head when we say we're democratizing art, or when the AI hypers say they're democratizing art. That's definitely not what it's doing.
Emily, how much of AI hype do you think is sort of coming from motivated reasoning on the part of people who work in this space?
I would say it is motivated reasoning plus lots and lots of literal capital investment.
There's a lot of money that's gone into it, and either you are the person who has invested that money and you want to see a return on investment, or you have bought in at the next level of the pyramid and built your company around it, or you've bought into it further down. Maybe you're providing social services in a school district or something. You've bought into that, and then you need to make sure that the thing keeps working, because otherwise you're either losing a lot of money or losing a lot of face.
One concept that comes in for a lot of scrutiny in the book is this idea of artificial general intelligence. It's sort of a squishy term, as I'm sure we'll get into. It's basically AI that can perform most tasks as well as or maybe better than a human. It's kind of this holy grail in tech and we're constantly hearing these days, you know, estimates about it being just around the corner or sometimes even that it's already been achieved. Alex, why should we be skeptical of this?
Well, exactly what you're saying, Megan: it doesn't even have a concrete definition, and there have been multiple definitions proffered that are very vague. There was a leaked memo between Microsoft and OpenAI in which they said, well, once we have a system that can generate $100 billion in profit, then we have AGI. Okay, now you have something with a concrete goal, but that could be many things that have nothing to do with the capabilities of some computational system.
And so this concept isn't very useful, but it becomes this thing that people can point to and say, well, once we achieve it, we're going to have all kinds of things done for us. We're going to have science done for us. We're going to have important research done for us. And I think what it seems to indicate is a really magical thinking about what these systems are capable of, as if they aren't systems that take some input, apply some kind of transformation to some data, and produce some kind of output. I mean, even calling it a holy grail might be granting it too much, because at least the grail has some physical presence, and if you're searching for it, you know when you've found it. In this case, it's just an absolute wild goose chase.
On the other end of the spectrum from what you describe as kind of magical thinking about a future of superintelligence, there is maybe catastrophic thinking about a future of superintelligence. A lot of people in the AI research community, some of the biggest names, Geoffrey Hinton, Yoshua Bengio, Elon Musk, make a lot of noise about how concerned they are about how catastrophic superintelligence could be for the future of civilization. You write that this is kind of an inverted form of AI hype. Emily, can you explain that?
Sure, absolutely. So the terms are sometimes AI boosterism, the people who say we just have to make a smart machine and it will solve everything for us, and AI doomerism, which is, well, we're going to make the smart machine, but it's going to kill us all, and not just us now, but, like, in perpetuity. And so that means it is ultimately catastrophic.
And a lot of the discourse likes to set up those two things as the full range of possibilities: either you are an extreme optimist or an extreme pessimist, or you're somewhere in the middle. But what we see is that they're actually just two sides of the same hype coin, and reality is somewhere off of that coin entirely. And the reason it's still hype is that it's still based on this idea that if you just throw enough text and enough compute at it, somehow it's going to combust into consciousness.
And because you've thrown in all the text you could possibly find, it's also going to know everything. Some of these fantasies involve building an artificial intelligence researcher, something that can design the next version of the machine, and on and on, and then you get this explosion of this undefined artificial intelligence.
And it just comes back around to: look, someone who's selling this stuff, if they tell you this is going to solve everything, great. If they tell you this is going to kill us all, they're still telling you they're building something really powerful, and you had better defer to them and give them your money and hope that they're right.
And what's the danger of focusing on these kinds of theoretical harms?
So the danger is that there are actual harms happening right now. And if we distract policymakers, including the AI Insight Forums that Senator Schumer ran, into spending time thinking about these fantasy scenarios, we are letting people down now: people who are being picked up in surveillance dragnets, people whose school support is being slashed and foisted off onto a chatbot instead, people who are being told, oh yeah, there's not enough money to give you medical care, but here, you can push your symptoms into this system. And behind all of that is the fact that all these systems are going to absorb and then amplify the bias in their training data. And so all of our existing systems of racism, of sexism,
of transphobia, of ableism, are just going to be sort of repeated at scale over and over again. These are the urgent problems, not Skynet. We'll be right back.
You're listening to Marketplace Tech. I'm Megan McCarty Carino. We're back with Emily Bender and Alex Hanna, authors of the new book The AI Con. One of the harms you get into that's very tangible is climate change. These systems use an incredible amount of energy in their training and in their deployment. What is the discourse around the trade-offs, and where do you see problems in that, Alex?
Yeah, so as you mentioned, there are huge carbon outputs from these models. They use a lot of water for cooling, and the creation of semiconductors is incredibly environmentally damaging and has incredible public health implications. But the other problem is that there's an idea that we're going to be able to create these larger and larger models, and these models are going to do science for us.
And those are going to solve the climate issues, as if it were a problem of technical ability and creative solutions, when these things are really about political will and actually curbing our own carbon emissions. And so it's really mortgaging our present for a future that's not going to come.
Emily, one of the answers to these concerns is often, as Alex just said, that we need to accelerate the development of these tools, and increase our carbon footprint in the process, in order to find technological solutions to climate change. What does the evidence look like in terms of the use of these kinds of tools to accelerate scientific discovery?
The evidence is very, very thin. Again, we have people treating chatbots as if they were answer machines. And the idea is if we built a big enough one, it could answer the question of what do we do about the climate crisis? And what I see is a lot of very narrow thinking about expertise within the community that's building this technology.
Do these people turn to the folks who are actually working on the political and social and to a certain extent technological levers that we can pull to deal with the climate crisis? No, it's just we're going to build a bigger and bigger chatbot and maybe scrape some of their research into its training data. You also have these folks who are suggesting that they have built artificial scientists.
And what they've done is they've sort of strung together a series of large language models and connected one of them to a code base so it can run artificial intelligence experiments. And then the last large language model in the chain was supposedly acting as a peer reviewer using the rubric from an artificial intelligence conference. And they claimed that they produced a paper that passed peer review. But the peer review was also fake.
Some people might come away from your work with the impression that you're just reflexively anti-tech, anti-progress, you're just haters. What would you say to that? You know, are there any developments in the AI world that you find exciting? Are there any tools that you use? I mean, I'm fine being called a hater. If Kendrick Lamar taught us anything, it's that we are not hating hard enough.
But I say this tongue-in-cheek, because I would at least call myself a technologist. I work at a place called the Distributed AI Research Institute, and one of the elements of being at the institute is finding technology that works for people and works for communities. In this current era of generative AI, these are really anti-labor devices: devices that are used to cheapen the labor of people, to bring down wages and unions and labor collectivities, and to replace those people with very cheap and often incorrect facsimiles, in terms of their output.
So uses of technology that actually serve people are cool, they excite me, and they're great. And in the book, we talk about a few. We talk about the use of machine translation and automatic speech recognition by Te Hiku Media in translating te reo Māori, the Māori language. These are folks in Aotearoa, also called New Zealand, who are using these technologies.
The servers are controlled by them, and the data used in training comes from their community, with proper permissions and consent. And so this is a really interesting use of this technology. There's also a network of language startups that DAIR is helping to support, based on languages spoken on the African continent, for which the machine translation and automatic speech recognition systems provided by Meta and Google and Microsoft currently do a really poor job, because they don't have enough data, and the ways that they get data are not consentful and do not respect those communities.
New technological futures are exciting, and we love to talk about that and to think about that. But the path that we're on with these huge models, which just take data from anything that isn't nailed down, is not the way to do it. I also consider myself a technologist. I am the faculty director of a professional master's in computational linguistics, so I have been training technologists for 20 years now. And I do think there are lots of great things we can do with technology,
but it is great when it is, as Alex is saying, controlled by the communities that are using it and having it used on them, rather than centralized. And it's also great when it is done for a well-defined purpose. So rather than trying to build everything machines, and claiming that your synthetic text extruding machine can do anything because it can output text on just about any topic, we should instead be building something specific. For example, I'm going to build an automatic transcription system for a specific language and test it with the kind of speakers who I expect to be using it.
That makes for, I think, tractable technology. It makes for technology where we know when we can use it, where there's a sufficient amount of transparency that we can decide whether the output is likely to be reliable, and reliable enough for our use case, and so on.
That was Emily Bender and Alex Hanna. Their book is The AI Con.
And their podcast, if you want more AI hype debunking, is called Mystery AI Hype Theater 3000. Jesus Alvarado produced this episode. I'm Megan McCarty Carino, and that's Marketplace Tech. This is APM.
In 2009, three days before Halloween, a grisly crime stunned the seaport town of Anacortes, Washington: the Dog Whisperer of Anacortes. They soon discovered a story tangled in absurdity: the dog trainer, the heiress, the bodyguard. Who was the hunter? Follow and listen on the free Odyssey app or wherever you get your podcasts.