Today on the Big Take: concerns about AI are growing from whistleblowers and from industry leaders. I'm Craig Gordon in for Wes Kosova. Artificial intelligence, or AI, has been getting quite a bit of attention lately. Even as it promises to revolutionize the way we think and work, AI is positioned to bring headaches as well.
Pictures online of a bombing at the Pentagon in the US, and it was AI-generated, but obviously so many people very quickly panicked, thinking that a bombing had actually happened.
AI has even infiltrated music. Now there's a new song, I don't know if you've heard about it, by Drake and The Weeknd that wasn't made by Drake or The Weeknd; it was created by artificial intelligence.
In March, thousands of tech leaders signed an open letter calling for a pause in AI development. Even more recently, three hundred and fifty industry leaders signed a second open letter urging caution. Their letter was just one sentence: mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. How realistic is this ominous warning? I spoke to Bloomberg AI reporters Dina Bass and Rachel Metz to find out.
I would love to hear from you, Dina, a little reality check. What is AI capable of doing right now? What is it not capable of doing? How can we put this technology to use for humankind on an individual basis?
AI is not new, and we've been talking about both the promise and the challenges and concerns of lots of different types of AI for a number of years. What has happened in the last year that has started the current hype and excitement cycle that we have is something called generative AI. And the difference in terms of what we're talking about here is that there are AI algorithms, AI models, that basically suck up a lot of information, you know, pictures and text from across the Internet, from Reddit and social media, things like that, and what these models do is use all of that information to generate new content in some respect. And so when you start talking about a picture that purports to be an explosion near the Pentagon, that's new content that's created in some way by the artificial intelligence systems.
What people are getting a little tripped up on, though, is that people will use these sort of human-like terms for what the AI is doing. They'll anthropomorphize things, because it's the easiest way to understand it, and they'll ascribe kind of human-like intelligence to these systems. They are not human, they are not thinking, they aren't producing art. What they're doing, in many cases, is making predictions, making guesses, generating things that are sort of an imitation of the other things that they've seen. And so we need to be very careful to understand: it's definitely not human. It's not even human-like.
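To make that "prediction, not thought" point concrete, here is a minimal illustrative sketch of the core idea behind generative text models: count which words tend to follow which in some training text, then generate by sampling from those counts. It is a toy, not how ChatGPT or any production system is built, and the tiny corpus is invented for the example.

```python
# Toy illustration of the "predict the next word" idea behind generative text models.
# Real systems use large neural networks trained on enormous datasets; this bigram
# counter can only imitate word patterns it has literally seen, which is the point.
import random
from collections import defaultdict

corpus = ("the model predicts the next word and the next word "
          "imitates the text the model has seen").split()

# For each word, record which words followed it in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:  # nothing ever followed this word in training
            break
        word = random.choice(options)  # a guess, not understanding
        output.append(word)
    return " ".join(output)

print(generate("the"))
```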
That said, as we have seen in some examples, it can sort of mimic human speech, it appears to mimic sort of human thought. How can people draw the distinction? How do we tell the difference between a human and an AI voice that sounds like a human?
That's a real concern. I think people are grappling with what we're going to do there. At the beginning of the Senate hearing on artificial intelligence, Senator Blumenthal had ChatGPT from OpenAI write his introductory speech, and then he had an audio-generation artificial intelligence algorithm actually speak it, sounding like it was him.
And how the lack of transparency can undermine public trust. This is not the future we want. If you were listening from home, you might have thought that voice was mine and the words from me. But in fact, that voice was not mine, the words were not mine, and the audio was AI voice-cloning software trained on my floor speeches. The remarks were written by ChatGPT.
There's a lot of discussion about how to kind of watermark these things. It's potentially a little bit easier with images; with text, it can be very difficult to tell.
OpenAI itself released an algorithm that was intended to help people figure out if ChatGPT had authored a piece of content, but it's not terribly accurate and has a decently high false positive rate as well. But there is some work right now in technology to sort of help flag to people that something was generated by an artificial intelligence algorithm and is not human-authored or human-drawn.
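For a rough sense of how statistical text watermarking can work, here is a simplified sketch, not OpenAI's classifier or any vendor's actual method: a generator biases its word choices toward a pseudo-random "green list" derived from the previous word, and a detector checks what share of words land on those lists. The vocabulary and threshold are made up for illustration, and the baseline overlap for ordinary human text is exactly why false positives remain a problem.

```python
# Simplified sketch of a statistical text watermark ("green list" style).
# Not any specific product's algorithm; vocabulary and numbers are illustrative.
import hashlib

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]

def green_list(prev_word, fraction=0.5):
    """Deterministically pick a subset of the vocabulary seeded by the previous word."""
    greens = set()
    for word in VOCAB:
        digest = hashlib.sha256((prev_word + ":" + word).encode()).hexdigest()
        if int(digest, 16) % 100 < fraction * 100:
            greens.add(word)
    return greens

def green_fraction(text):
    """Detector: share of words that fall on the green list of their predecessor."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(1 for prev, cur in zip(words, words[1:]) if cur in green_list(prev))
    return hits / (len(words) - 1)

# A watermarked generator would push this score well above the ~0.5 baseline;
# ordinary human text hovers near the baseline, so some texts get flagged wrongly.
print(green_fraction("alpha beta gamma delta epsilon zeta"))
```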
So, Rachel, I know a lot of these different technologies get lumped together, but maybe you could help us understand the difference between the sort of flavors of AI.
I like to think of it as different kinds of architecture. I like it a lot because it's easy for us as people to understand architecture in general. Right, we have different types of buildings, they serve different purposes, they change over time, and things might fall out of fashion and then come back into fashion.
So, Dina, what are some of the leading industries that are adopting this technology?
With the newest generative AI stuff, I think we're just seeing industries kind of climb on board. I mean, there's a little bit of a panic for every company to figure out what their AI strategy is and how to use it. But we're starting to see it, certainly in the graphic design space, because some of the image generation stuff is a little bit older than the ChatGPT stuff; it preceded it by a few months. And so we're definitely seeing a lot of that in graphic design and artwork.
But we're seeing people use it for finance, for legal. Obviously, there's been a lot of discussion about academic use cases. There's been a lot of focus on cheating, but there are, you know, non-cheating applications that can help students learn or help them draft, as long as they're clear with their teachers about what they're doing. There really is, I think, a pretty wide array of use cases. I hear, you know, lots of banking executives, lots of Wall Street executives tripping over each other to talk about who is going to have the smartest generative AI strategy. So it's pretty across the board in terms of people trying to say that they're adopting it. What will actually get used in practice? I think it's going to take a little while longer to know, and potentially even longer than that to have a sense of whether these replace workers, replace entry-level jobs, or the rosier scenario that Silicon Valley and Microsoft up here in the Seattle area like to talk about. The rosier scenario is that they make people more productive but don't put them out of work.
What specifically do banks do with it? Can they use it to decide whether I should get my mortgage? What would be a good use for a bank?
Banks have actually been using AI to decide whether you get your mortgage for a long time, and it's been found to be problematic, because it has racial and geographic bias implications when you do things like that. Also, banks produce a tremendous volume of written content: analyst notes to clients, research reports, things like that. And people on the trading side are always looking for anything that can give them an advantage as well. So for several years, probably more than that, people have been trying to figure out how they can use algorithms to give them an edge: trade a little bit faster, get information a little bit faster. The mortgage and lending scenario is a really problematic one.
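For context on how that kind of lending bias gets measured, here is a minimal sketch of a common fairness check: compare approval rates across groups and flag the model when the ratio falls below the informal four-fifths threshold. The group labels and decisions are invented for illustration, not any bank's data, and real audits involve far more careful statistics.

```python
# Minimal sketch of a disparate-impact check on automated lending decisions.
# The (group, decision) pairs below are invented purely for illustration.
from collections import defaultdict

decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "approve"), ("group_b", "deny"), ("group_b", "deny"),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    total[group] += 1
    approved[group] += decision == "approve"

rates = {group: approved[group] / total[group] for group in total}
ratio = min(rates.values()) / max(rates.values())

print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the informal "four-fifths rule" threshold
    print("warning: approval rates differ enough to warrant review")
```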
So we hear a lot about this technology known as generative AI, and that seems to be kind of the umbrella topic for a lot of the things that people talk about and write about in the media. Explain what that is, and where is that leading us in the future?
A lot of this is still just an experimentation phase, and I think what we'll probably see is these models used increasingly to train on very specific data sets. Right now you have large language models such as the one that underpins ChatGPT. It's meant to be kind of a general-purpose language model. You can sort of ask it, or, you know, type to it, any kind of question, and it'll give you some kind of answer that may or may not be accurate. It's basically trying to give you what you want based on its training data. But I think what we're going to start to see more and more of is these large language models used in more specific ways with more specific types of training data, and then you might actually be able to get better answers. You might be able to use it with certain subsets of medical data, let's say, so you can use it to answer medical questions in a more accurate way than you could with just the basic model as it exists now.
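To give a concrete sense of what training on a very specific data set can look like, here is a minimal fine-tuning sketch using the Hugging Face datasets and transformers libraries. The base model, the medical_notes.txt file, and the hyperparameters are placeholders; a real medical application would need far more rigorous data handling, evaluation, and safety review than this.

```python
# Minimal sketch: adapt a general-purpose language model to a narrow domain
# (e.g., a small corpus of medical notes) via supervised fine-tuning.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for a larger base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One plain-text example per line; "medical_notes.txt" is a hypothetical file.
dataset = load_dataset("text", data_files={"train": "medical_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain_model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the domain-adapted weights land in ./domain_model
```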
Yeah, and I feel like the other thing that would be an interesting shift about that is if we stop competing on size. Right now, everything is size matters: the bigger models do better. If we are able to train smaller models that are more specific to the task, they don't have to compete on size, and that can be useful in a few ways. Running these large language models is very expensive, both in a dollar sense and an environmental sense, and because the data sets are so large, they become almost impossible to check for harmful content or biased content. Smaller could be both cheaper to run and have greater quality control.
Rachel, back in March, an open letter was signed by thousands of tech leaders regarding some of the apprehensions they have over AI technology. What are some of the specific concerns they laid out in that letter?
The basic sort of overarching theme there was that there should be a pause, a six-month pause, on developing AI more powerful than GPT-4, I believe it was, which was the currently released state-of-the-art model from OpenAI. Right now, OpenAI is very much acknowledged as one of just a few leaders in the AI industry. I think what's actually really interesting about this, besides the fact that, as you pointed out, a lot of people who have been working in and around the AI industry for years signed onto this letter, is that there have already been lots and lots of people shouting from the rooftops about existing, very real, current issues with the AI systems that are already in place, and that seemed to have gotten lost in the conversation there. And I felt like, to me, that was kind of like a big question mark.
With all of this talk about a six-month pause, is anyone actually hitting the brakes on their efforts to develop the next version of this?
I saw something the other day that said OpenAI is not yet working on GPT-5, or something like that. But to Rachel's point, some of the motivation behind the letter was a little unclear. Some of the people signing it were potentially working on competing products.
It's not really clear.
Also, what does a six-month pause do? Why six months? What do you do at the end of six months? And is that really the way to go? I think, you know, to Rachel's point, a lot of the AI ethics and responsible AI people have been working in this field for five, six or more years, and many of them are in the populations that are most affected by the current problems with AI: a number of them are women, a number are people of color. Many of them have been calling for slower approaches for a long time, asking companies developing these things to take the time to make sure that products are responsible and don't cause serious harm before they release them, and also calling for regulation of some sort by various governments. So there's been a call for a while to slow down and make sure that you're not going by the typical tech adage of move fast and break things.
But the idea of a specific six-month pause, and then what? And how would you even enforce that six-month pause? I don't know if they just seized on that because it was attention-getting, or because it was something you could point to and say, oh, this is a specific proposal, but it wasn't completely clear what that actually would look like in practice and what it would do.
When we come back: worries about the direction of AI have been around as long as the technology has. We'll look at what the concerns are and who has been raising them.
I wanted to go a little bit deeper on this idea. One of these whistleblowers, if we can call them that, Timnit Gebru from Google, was probably one of the most prominent early critics of this, for some of the reasons that you cited related to the questions of whether women and people of color would be treated fairly or treated responsibly by some of these technologies. Dina, maybe you could tell us a bit about her story and what her concerns were.
So she was one of the pioneers of sort of looking at the harms of various AI systems.
She co-authored with Joy Buolamwini a landmark paper in twenty eighteen that showed that a lot of the most popular facial recognition products were just performing spectacularly badly when they were looking at images of people of color in general, but particularly women, and she was at Microsoft at that point. She moves to Google, and along with Margaret Mitchell, they co-found an ethical AI group at Google, and they start trying to make Google's AI scientists pay more attention to some of the problems in the algorithms that they were working on. This ultimately ends up in her dismissal from Google. Several months later, Margaret Mitchell is also dismissed from Google, basically decapitating this ethical AI group. One of the things that this raises is that some of these people that are now speaking up about concerns in the AI world, including Geoff Hinton, who's one of the, you know, pioneers of the current generation of AI that we have, are Google people, and they waited essentially until twenty twenty three to speak up about this and did not, in any way that I know of, really back or stick up for the people that were at Google several years ago, who were very much whistleblowers in the truest sense of the word, in that they were fired for airing their concerns.
And so what is Google's response to all this?
So there is, you know, a dispute about what happened between Google and doctor Timnit Gebru. Google said at the time that they accepted her resignation. Doctor Gebru maintains that she offered no such resignation. With regard to doctor Mitchell, Google says that they fired her.
Is there anything that we could point to in the technology that exists now where these concerns have been addressed?
Both Google and Microsoft have significant groups of people that work on responsible AI. All of the new things that Microsoft has put out, they have taken pains to tell us, have gone through their responsible AI reviews, and continue to as people test the products and they get more feedback about what is and isn't working. Microsoft, every time they announce a new AI product, will tell the press and the public, here are the ways in which we know it does not work. They're trying very hard to make it clear that they know that there are limitations and that they are working on many of them. OpenAI does the same thing. I mean, when OpenAI released DALL-E, its image generation tool, they did some work to make sure that the images that are generated by DALL-E are, as they sort of explained it, more representative of the world. So when you ask DALL-E to generate a picture of doctors, they went in and made sure manually that the doctors were both men and women of different races. And so there are things that companies are trying to do to address these issues. I think there's a continued push and pull between the companies' own ethical AI people and people externally who are looking at their work, about whether it's enough, whether things are moving too quickly to possibly ensure safety.
Just a quick quote from the letter that, to me, kind of sums up the heart of it. It says, quote, recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
I don't one hundred percent buy that. The fact that they use the word minds in there is, to me, itself a little bit suspicious. I think that it's extremely out of date at this point to act as though the current AI systems are black boxes that we can't understand and we can't interrogate. We very much can. And I think that anybody who says otherwise is either hopelessly naive or is kidding themselves. People who are making these systems understand that there may be aspects of them that they don't quite get, but in general, I think they very much do understand them. Whether they're putting in proper safeguards before releasing them is a whole other thing. But we're nowhere near a point of complete not-understanding, or AI at a point where it's out of control. This very much boils down to applications.
Dina, let's talk about the recent congressional testimony of OpenAI CEO Sam Altman. What prompted him to come before Congress at all? And share a little bit about the concerns that he raised, with a rather unusual call that Congress should actually regulate his own company, not something you hear every day in the halls of Congress.
I think he was summoned. I think that was what brought him. I want to argue a little bit with the notion, though. I know that Congress doesn't frequently have captains of industry come in and say, regulate me, but in the AI space that's actually been going on for a few years. OpenAI, Microsoft, IBM, who were also on that panel, and even Amazon, somewhat surprisingly, have been asking for regulation of parts or all of the AI field for several years now. Now, we can discuss why they want that, and to what extent they really want to be regulated, but the fact is they have been asking for it, and some of it is because some of the larger companies really want some guidelines and a level playing field, so that they know what they can and can't do.
And other companies too: if Microsoft is going to be, in their minds, a good corporate citizen and not do certain things, they want to make sure they aren't undermined by other companies that are willing to do those things. And in fact, the US is behind here. Europe has still not passed anything; it's wending its way through the European Parliament right now, but Europe's been working on an AI law for a couple of years now, and everybody has a kind of different version of it. But what Sam Altman was talking about is that OpenAI feels that there needs to be a separate US agency, that the current agencies in government are not fit for this, and that there should be, through that agency or otherwise, some sort of licensing for these kinds of algorithms.
Here's your shot. Thank you, Senator. Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations.
And then third, I would require independent audits, so not just from the company or the agency, but experts who can say the model is or isn't in compliance with these stated safety thresholds and these percentages of performance on question X or Y.
Would you be qualified, if we promulgated those rules, to administer those rules?
I love my current job.
Rachel, first of all, is any of this realistic? Like, how much of this can truly be regulated? The agency doesn't exist, Congress moves very slowly, so what is even realistic about regulation? And then, even if the US was able to come up with a regulatory scheme, are there fears of any unintended consequences?
I guess there are a few different ways we can look at it. One is that there are some existing rules, laws, and different agencies that could probably take on a lot of different aspects of regulating AI as of now. Then there's this idea of creating a new agency. I think it sort of depends on who you talk to whether or not they think that's necessary. Some people would say that could make things even harder and even worse, because it's like, oh, a new agency; you know, we already have quite a number of those in the US, and that could possibly bog things down even more. It's a little tricky to start a whole new thing. I mean, I think what's important to keep in mind is that up to now, there hasn't been any AI-specific legislation in the US. As Dina pointed out, there's some application-specific stuff, but it's more at the local level. Other than that, there isn't much so far in the US, and it's going to be interesting to see, especially because we have a few AI-related court cases right now sort of winding their way through various courts. Over the next few years, I wouldn't be surprised if things change quite a bit as far as having some kind of either federal legislation or more rules at the state level.
I think it's also fair to ask about the level of sincerity of these calls. Sam Altman's own company released an iPhone and Android app about three days after his testimony to make ChatGPT available to all of us on all of our phones. So at the same time he's saying we need to be careful how we use this, he's sort of making it even more available. Rachel, how sincere is the industry when it says, please regulate us?
I mean, I think part of it is probably an actual desire for regulation, to know what the rules are: what can you do, what can you not do? And some of it, I think, is definitely aimed at keeping control for the parties that are already in control. You know, companies like OpenAI and Google and Microsoft, they like their positions in the industry, and I can't imagine that they want to fall back. So having legislation could be helpful to them in certain ways. And I feel like keeping some control might be some of it.
For all these companies that want a licensing scheme, they don't have to wait for Congress to act to come up with some sort of mechanism for outside auditing that assures people that their closed-source algorithms are fair and safe.
When we return: what do Rachel and Dina see coming down the line when it comes to regulating artificial intelligence?
As all of our listeners think about these topics, and I think they are on a lot of our minds, what are you watching for in the next weeks and months as this story continues to unfold, whether it's the regulatory scheme, the new technologies that might come out, or the new uses we might learn about? Dina, why don't you go first and tell us what you're watching for right now?
My sense is that the first thing we're going to see from a regulatory standpoint is this European law. One of the things that Sam Altman suggested is that if we do something in the US, it shouldn't impact every single kind of AI in the same way. He wanted some carve-outs for open source so as not to stifle innovation, he wanted some carve-outs for startups, things like that. The European proposal that's being looked at has several different tiers for the type of algorithm and the sort of related level of scrutiny that it gets. Algorithms that do things that the European Union considers completely unacceptable would just be flat-out outlawed in the bloc, and so that would definitely be new. And then, below the absolutely-not tier, there are three other tiers that have different levels of scrutiny. I'm interested to see if that passes, and in what form, and what action the US companies take as a result, because US companies can't just continue to do something here that they cannot do in Europe. This will completely impact the way that they do business around artificial intelligence.
Rachel, what are you watching for?
I mean, I think we're going to see just more and more experiments and people actually using these systems, like ChatGPT, in practice. And as we've seen with some of the examples, the stuff is getting better and better, and it's getting harder and harder to distinguish a genuine human-made article from what is created using AI. So I think we are going to see more sort of disinfo and misinfo stuff related to these systems going forward. But I'm also optimistic that the detection is going to get better, and we may see companies increasingly using various watermarking technologies to stamp things. Essentially, there are ways to stamp both text and images so that they can be detected and be seen as created with the help of AI. And then I think we'll probably see more applications of things like generative video. That's something that's really in its infancy right now and has been getting better rapidly, so it's going to be really interesting to keep an eye on that. There's all kinds of interesting things happening in the open source community right now, such as the idea of automating these systems. AutoGPT is one thing that people are paying a lot of attention to lately: sort of setting an end goal for a large language model and having it go off and create additional tasks toward that goal to eventually reach the goal, maybe connecting it to other services.
So, yeah, there's a lot of interesting stuff there, some potentially scary stuff, but also some potentially really cool stuff.
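As a rough sketch of the "set a goal and let it generate its own tasks" pattern Rachel describes, here is a toy agent loop; it is not AutoGPT itself, and the call_llm function is a stub you would swap for a real language model API and real tool calls.

```python
# Toy sketch of an AutoGPT-style loop: given a goal, repeatedly ask a model for
# the next task, "execute" it, and stop when the model says it is done.
def call_llm(prompt: str) -> str:
    # Stub standing in for a real language model; it just walks a canned plan.
    canned = ["research the topic", "draft an outline", "write the summary", "DONE"]
    progress = prompt.count("completed:")  # crude signal of how far along we are
    return canned[min(progress, len(canned) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> list:
    completed = []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  + "".join(f"completed: {task}\n" for task in completed)
                  + "Next task (or DONE):")
        task = call_llm(prompt)
        if task == "DONE":
            break
        # A real agent would execute the task here: search, write files, call services.
        completed.append(task)
    return completed

print(run_agent("produce a short research summary"))
```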
I'll throw it out there to either of you. The Supreme Court did recently avoid ruling on Section two thirty, the law that protects, you know, tech companies from being liable for every single thing that's on their platforms. Does that tell us that at least the highest court of the land doesn't really want to wade into being the big referee on tech issues? And how does that ruling, or non-ruling, actually affect the conversation around AI?
So, two things. One, we're still not sure that Section two thirty would or should apply to AI. That came up a bit in the recent Senate hearing; I know Altman said he didn't think two thirty was the right way to regulate AI. But, you know, interestingly enough, the day that ruling came out,
Rachel and I were actually discussing the AI implications of the other ruling that came out that morning, which also has some AI implications, maybe even clearer ones. That was a case where a photographer had sued the Andy Warhol estate. Andy Warhol had taken a photo that this photographer had taken of Prince and turned it into one of his, you know, usual pieces of art, a silkscreen, et cetera, and the photographer sued over it. And the question was, was the Warhol work transformative enough of the photographer's original work that it counted as a new work? And the Supreme Court ruled in favor of the photographer's claim. And that has implications for AI, because there are a number of lawsuits right now from artists and computer programmers whose works, either their software code or their works of art, their photography, have been used in the training data of some of these generative AI systems that we're discussing. And in some cases the artist or the computer programmer claims that not only was their work used in the training data, but the output, the thing that the AI algorithm generated, looked suspiciously like their original product. And so there are suits over whether that's allowed now. And the companies that create these AI models are basically making a fair use argument: that they are allowed to use these things in the training data, that they're transforming it into something else that doesn't resemble the original work. So, you know, Rachel and I actually spent a lot more of that day discussing that Supreme Court case's implications for AI models, rather than the Section two thirty one.
Well, I want to say thank you to our two guests today, Dina Bass, who covers Microsoft and AI for Bloomberg News, and of course Rachel Metz, who covers AI for us as well.
Thank you so much for joining me. Thank you for having us. Thank you.
Thanks for listening to us here at the Big Take, a daily podcast from Bloomberg and iHeartRadio. For more shows from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen. And we'd love to hear from you: email us questions or comments to Big Take at Bloomberg dot net. Our supervising producer is Vicky Vergalina. Our senior producer is Katherine Fink. Our producer is Rebecca Chasson. Our associate producer is Sam Gibbauer. Raphael M. Seely is our engineer. Original music by Leo Sidron.
I'm Craig Gordon, sitting in for Wes Kosova. Have a great weekend.