Bloomberg Audio Studios, podcasts, radio news. Is it just me? Or is AI suddenly everywhere?
Right now, both in the US and the Chinese ecosystem, the AI space is racing ahead.
AI is everywhere.
It's not that big scary thing in the future.
AI is here with us. You open up Google and instead of getting a list of websites, there's now often an AI Overview, a summary scraped together from a sometimes strange variety of sources. One recently told me that cheese was great at preventing cavities, citing the Cleveland Clinic, the British Heart Foundation, and the noted Vermont cheesemaker Cabot as sources. And it's not just Google. Apple wants to power its iPhones with Gemini, the generative AI from Google's parent, Alphabet.
And Meta is offering millions to Hollywood to partner on artificial intelligence.
From Instagram to customer service, even to healthcare, there's a generative AI bot that seemingly has all the answers, if not necessarily the correct ones. All this has led to big questions recently about what's going into these models. On today's episode: what's behind the recent AI headlines, and what does all this mean for our ability to opt in or opt out of using AI in the future? I'm David Gura, and this is The Big Take from Bloomberg News.
We're going to get back to those AI Overview search results a little bit later in this episode, but first we're going to start unpacking all of this by looking at what's been going on with another company, OpenAI. Six months ago, the company behind ChatGPT was in the headlines after a leadership struggle broke out between the company's founders. Now OpenAI is back in the news for a whole different reason.
So, a lot's been going on in the last few weeks. Sometimes in my head I call it As the OpenAI Turns.
Rachel Metz is an AI reporter with Bloomberg. She says the OpenAI soap opera got a new plotline a few weeks ago when the company released an update to ChatGPT.
They introduced their latest flagship AI model. This is GPT-4o.
OpenAI unveiled human-sounding voices for ChatGPT in September of last year, but the update really only made headlines a few weeks ago, after the company hosted a live stream to show off what it considers to be a big leap forward.
We'll be showing some live demos today to show the full extent of the capabilities of our new model.
It was during that demonstration that OpenAI's Mark Chen unveiled GPT-4o's lifelike voices.
Hey, ChatGPT, I'm Mark. How are you?
Oh, Mark, I'm doing great. Thanks for asking. How about you?
A lot of people noticed that this voice sounded remarkably like the voice of the actress Scarlett Johansson.
Oh, you're doing a live demo right now? That's awesome.
Just take a deep breath and remember you're the experts.
The perception was furthered when Sam Altman, OpenAI's reinstated CEO, posted a single word to X after the event: "Her," which some took as a nod to the twenty thirteen Spike Jonze film in which Johansson voices the operating system Samantha.
And it turns out Scarlett Johansson thought it also sounded a lot like her.
The thing is, Altman had approached Johansson twice about potentially working with OpenAI, but Johansson turned him down. To some watching, it felt like a bunch of Silicon Valley types had ignored the wishes of one of Hollywood's most famous actresses just so they could make the plotline of a movie a reality.
She seemed quite genuinely upset about it and thought it sounded like her, even though she had specifically said, I don't want to participate in this project, on multiple occasions.
For anyone feeling anxious about the ubiquity of AI and about the information these generative models are hoovering up, the dust-up struck a nerve.
I think there's a few different things that go into making people feel really alarmed by this. One is people really like Scarlett Johansson. She is an iconic actress. Another thing that I think is interesting about her in particular is, like yourself, David, as you were saying, you talk for a living. Your voice is valuable. Her voice is valuable as an actor, but it is also iconic. People know her voice in part because of the movie Her, but also just because she happens to have a voice that is very easily recognizable. With a less recognizable voice, I think people might not be taking to this issue as much.
OpenAI says it never intended to use Johansson's voice as one of its assistants.
Basically, what they said is they started working on the voice feature way back in spring of twenty twenty three, and they cast a number of voice actors for five different voices, and said that they were planning on her being a sixth voice. So they're saying, look, it's not that she would be that voice, or that this voice is meant to sound like her. This is another person.
But after the backlash from Johansson and the public, OpenAI removed the voice, which Rachel says is something of a turning point in terms of our ability to opt out of, or at least push back against, generative models that seem to be pushing forward with few guardrails in place.
A lot of consumers and a lot of artists are saying, wait a minute, that's not what I signed up for when I put this and that on the internet.
Just this week, OpenAI introduced a new safety board led by CEO Sam Altman. The move came after the company disbanded a team that had been created to focus on the long-term threat AI could pose to humanity. Some former insiders are saying that probably won't be enough, including former OpenAI board member Helen Toner. Toner told the TED AI Show podcast this week that she had concerns about Altman's commitment to safety based on his past behavior.
On multiple occasions he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.
And it's not just OpenAI that's been getting some pushback. After the break, we take a look at another sector that's grappling with how and when to fight developments in generative AI, and we weigh whether any of this will be enough to shake up what some are calling the raw deal at the center of AI. OpenAI is certainly not the only company racing to release new AI products. There are startups like Anthropic and Elon Musk's xAI, which just announced this week it raised six billion dollars, giving the company a valuation of twenty-four billion dollars. There's also Meta, which recently rolled out its Meta AI feature to Facebook, Instagram, and WhatsApp. And then there's Alphabet, which just added those AI Overviews to Google.
So, AI Overviews. They announced this at their developers conference.
Dave Lee is a Bloomberg Opinion columnist. He recently attended Google's developer conference, where the company unveiled this latest change to its search engine.
The result is a product that does the work for you. Google Search is generative AI at the scale of human curiosity, and it's our most exciting chapter of search yet.
And basically what it does is it all builds on something that's already been on Google. If you search for, you know, who are the Beatles, you'll get like a sort of fact box, and it will have pictures and a quote from Wikipedia and all that kind of stuff. AI Overviews is sort of that, and then some.
An AI Overview synthesizes information from a host of different sources into a generated answer, similar to something you'd get if you typed a question into, say, OpenAI's ChatGPT. At the bottom, there are some links to where that information came from, if you're curious, which is all well and good for Google. But critics of this approach point out that it deprioritizes going directly to a source, and that means less traffic to those sites.
If you're the website that's creating that information, why on earth would you continue to do it if you know, for all these popular searches, you're not going to get that kickback of traffic, which means that revenue, which means continuing to exist? And that's a really, really big fear with AI Overviews.
Google hasn't rolled out the feature to every search just yet, but the mere prospect that it will do so has created an existential threat for news publishers. But Dave says it's not just news publishers who should be worried.
I also think there's a big problem for Google as well, because Google needs this web ecosystem to exist. And I didn't get the sense, and I talked to a Google executive after their event, I didn't get the sense that they really quite comprehended how damaging, and how quickly damaging, it might be.
This web ecosystem can only exist if publishers can stay in business. One way that AI companies have been attempting to solve this puzzle is by cutting publishers into some of their current, and potentially future, profits. Take OpenAI again.
So what they've been doing is going out to as many publishers as they can, trying to come up with these deals to allow OpenAI, or whoever, to use that information from those publishers in their large language models.
And some publishers have agreed to take these deals, including The Atlantic and Vox Media, both of which inked deals on Wednesday. What are the terms of the deals? What's the incentive for them to do this?
I wish I'd seen the terms of the deals. I mean, this is one of the criticisms of these deals, that we don't know a great deal of the nitty-gritty here, of what's happening.
Some publishers, yes, have made deals. The most notable one is News Corporation, for a lot of money: two hundred and fifty million dollars over five years. Now, that doesn't necessarily mean they're getting a nice big check for two hundred and fifty million, because part of that deal is using OpenAI technology within News Corp, and what that looks like we don't know. We don't know whether OpenAI has said, guess what, using our technology is worth one hundred million dollars a year. We don't have those details, which is very frustrating. But you know, there is a willingness to make friends of OpenAI, because there's a general feeling. The chairman of Le Monde, the French newspaper, and I'm paraphrasing here, but he basically said, look, either we do this with them and we get some money, or they do it anyway with our content and we have no control over it and we don't benefit, and it's going to harm us as a publisher.
Dave says he believes publishers are making a big mistake by agreeing to sell their content to OpenAI, even if he understands the economic incentives pushing them to strike deals that could help insulate them against potential revenue shortfalls in the future. And there are some holdouts.
The big one at the moment is The New York Times. They sued OpenAI for copyright infringement, which is going to be an interesting one to make in court, because their argument is that when you said certain things to ChatGPT, it would recreate bits of Times journalism, and that was copyright infringement. There won't be a preliminary hearing in that case for a number of months, which, you know, just kind of goes to show. I mean, think how much has changed in AI in just the last two years or so. With every passing day, it feels like this challenge is getting bigger and bigger and bigger.
There's also another group, Alden Global Capital, which owns a bunch of titles like the New York Daily News and the Chicago Tribune. They're using the same sort of logic to sue OpenAI, and adding another element. It has to do with a well-known flaw in a lot of the generative AI tools we've been using, which is their propensity to hallucinate, or make up answers.
That's another interesting thing, because these hallucinations could be damaging to these news brands as well. If these machines are crunching together bits of information and saying, well, that came from such and such, and it might not have.
What does that mean for the news industry, if publishers are reluctant or unable to opt out from what these AI companies are trying to do?
That's a profound question. Right? Publishers have had perilous business models for so long now, and sort of one by one they've been eroded by bits of tech, you know, whether it was Craigslist for classifieds, or, you know, Google just getting a lot of the advertising revenue from all over the place. You've got this situation where, I think already, and we've seen this from the reaction to AI Overviews from just regular users, there is a suspicion of these sort of computer-generated bits of information. I saw a great quote, and it's gone around the Internet so much now that I've no idea who first said it, which is kind of good, because it kind of just belongs to us all now.
And it was, you know, why would I be bothered to read something you couldn't be bothered to write? And I kind of hope that sort of attitude is going to persist. I hope that people will care about human-made things, whether it's newspapers or movies or books or whatever.
But for now, at least, there isn't an easy way to opt out. Here's Bloomberg's Rachel Metz again.
These companies are ingesting tons and tons and tons of data and using that to train their AI systems. And if we want these AI systems to get better, the current prevailing thought is we need more data and more compute to make them better. That is how it's working right now. I have my own thoughts about whether that will be true in the future, but right now, what we're seeing is we're getting better results from more data and more compute. And in that sense, we're all sort of opted in, in various ways, depending on how much of our lives have been lived on the Internet and how much of that data these companies are using.
But just because we're all effectively opted in, whether we like it or not, doesn't mean that we don't have a say in how our data, our written words, and our voices are being used, even if we're not all as famous as Scarlett Johansson. Rachel says the fact that OpenAI listened, that it pulled the Sky voice in the face of a public outcry, means there's room to, well, have a voice in how generative AI gets developed and used.
I don't like to think that anything with technology is inevitable. I like to think that we have a lot of control over what happens. And I think that what we're seeing with some of this pushback recently, against this voice that people feel, and Scarlett Johansson feels, sounds like Scarlett Johansson, I feel like that's a really good example of it. So I feel like that's sort of a hopeful sign, right? That we still have agency and we still have control.
This is The Big Take from Bloomberg News. I'm David Gura. This episode was produced by Thomas Lu. It was edited by Aaron Edwards. It was mixed by Blake Maples. It was fact-checked by Adriana Tapia. Our senior producers are Kim Gittleson and Naomi Shavin. Our senior editor is Elizabeth Ponsot. Nicole Beemsterboer is our executive producer. Sage Bauman is our head of podcasts. Thanks for listening. Please follow and review The Big Take wherever you get your podcasts. It helps new listeners find the show. We'll be back tomorrow.