
The EU Is Leading The Charge On AI Regulation

Jul 31, 2023 · 27 min

Episode description

The European Union became one of the first in the world to take wide-reaching action to regulate artificial intelligence when it passed a draft law in June. The proposal would put new guardrails around the use and development of artificial intelligence, including curbing the use of facial recognition software and increasing ChatGPT’s transparency. Bloomberg’s Jillian Deutsch joins guest host Rosalind Mathieson to talk about how the EU pulled ahead in the race to regulate AI, and why concerns are growing about AI being overregulated. Columbia Law School Professor Anu Bradford discusses what the global effect will be if this far-reaching regulatory framework is enacted into law.

Read more: Big Tech Wants AI Regulation — So Long as Users Bear the Brunt

Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: https://bloom.bg/3F3EJAK 

Have questions or comments for Wes and the team? Reach us at [email protected].

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

The European Union is on the brink of becoming the first major power in the world to regulate artificial intelligence.

Speaker 2

Politicians in Europe will vote on a proposal to bring in a law that would govern the use of artificial intelligence.

Speaker 3

I think we have made history today. We have paved the way for the dialogue that we will need to have, and have started having, with the rest of the world, so that now we can build responsible AI for our globe.

Speaker 4

I'm Rosalind Mathieson, in for Wes Kosova, today on The Big Take. What is the European Union trying to do to regulate artificial intelligence?

Speaker 5

I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. It's one of my areas of greatest concern.

Speaker 4

What you just heard was Sam Altman, the CEO of OpenAI, the artificial intelligence research lab that created the generative AI model known as ChatGPT. Earlier this year, he testified in front of the US Senate calling for more regulation of AI tech at both his own company and others. And just last week, OpenAI, along with Google and Microsoft, went one step further: they're creating an industry watchdog to make sure their AI models are safe. While regulation is moving at a snail's pace in the US, the European Union has set out to be the first Western administration to create a comprehensive set of guardrails around AI, known as the AI Act. But why are European regulations so important?

Speaker 2

If you wanted to evade the European regulations and, for instance, offer that same AI product somewhere in the United States, you would need to retrain the model and carve out all the European data from that model. That is often unappealing for companies.

Speaker 4

That was Anu Bradford, law professor at Columbia Law School and author of the term "the Brussels effect," the idea that what is agreed in the EU becomes the global standard for regulation. We'll hear more from Anu later in the episode. But first, Bloomberg reporter Jillian Deutsch joins us from Brussels to tell us a bit more about what exactly the AI Act is.

Speaker 6

The EU's AI Act is really the first attempt by a Western government to comprehensively regulate artificial intelligence. The European Commission first proposed it back in 2021, so this is pre-ChatGPT hype, and their approach was really to regulate the use of the technology, not the technology itself. So they called it a risk-based approach, and they prohibited certain kinds of uses of AI. For example, they've banned social scoring, and they often cite China and this idea that you'd have a social score based on your behavior as a citizen. That's one of the things they're trying to avoid. And most of the proposal focused on so-called high-risk use cases.

So this is if law enforcement wants to use AI, or a company wants to use artificial intelligence to scan employment applications, for example: they want to make sure that it is overseen by a human. There are certain assessments to make sure that the technology is not misused or doesn't reinforce bias. But the vast majority of AI at that point would be unregulated, if it's not used in those high-risk circumstances. So we didn't really see anything explicitly on generative AI.

If you look for terms like large language models or foundation models, these kinds of things that we're talking about so much now, they're not really found in that first draft. It's actually all about these high-risk use cases. There are some basic transparency requirements for deep fakes and chatbots, but that's about it.

Speaker 4

Let's talk in a bit more detail about what this act actually does. Who is it trying to regulate, and how is it trying to regulate?

Speaker 6

So the Commission's first proposal for the AI Act is really to focus on the high-risk use cases of AI. So if a government wants to use, for example, artificial intelligence to scan a crowd to look for a terrorist or to look for a missing child, the Parliament, for example, wants to completely ban that. EU countries, though, are saying we need to allow some uses of this kind of live facial recognition technology.

We're also seeing that certain companies would be limited in how they use artificial intelligence for sorting through job applications.

For example, they couldn't really use AI to choose the best candidates, because a lot of this technology has a long history of choosing white candidates over people of color. So those kinds of uses would be moderated. But the vast majority of AI under the current proposal would be allowed, or allowed only under very loose controls, like transparency requirements: making sure people know they're talking to a chatbot, for example, or making sure that deep fakes are labeled as such.

Speaker 4

Well, you mentioned that this began really before ChatGPT, but has that driven some of the more recent urgency around it? You said this is a process that began some time ago, but it feels like it really needs to speed up.

Speaker 6

Yeah, I mean ChatGPT changed everything here in Brussels. So this has been debated for over two years, and about a year ago EU countries and lawmakers were starting to say there are foundation models that in the future could be really powerful, and maybe we should add some explicit guardrails around that upcoming technology. But they didn't really know what this would look like, and they didn't know how to regulate it either. So EU countries decided, okay, let's add some controls over general-purpose AI, but they didn't really have a great idea for how to do this.

It was actually lawmakers, who were still debating their version of the AI Act, who really started adding more controls, because they saw ChatGPT explode and become one of the fastest-adopted technologies we'd ever seen. It created this political impetus where they had to explicitly regulate ChatGPT, and generative AI, for the first time, and so they've come around and added a lot more explicit controls. Lawmakers wanted to make sure that companies would summarize the copyrighted material that they used to train these large language models. They also wanted to add some very interesting controls, which companies are not very happy with, on making sure that they would explain the impact their technologies would have on the environment and on the rule of law. These are some things that companies argue they don't know how to comply with. But it's really the Parliament going full speed ahead, trying to regulate generative AI explicitly for the first time.

Speaker 4

You talk about generative AI, which is AI that generates other things: audio, content, stories. Perhaps one day even a podcast could be generated by AI. But is that where the concern starts to lie? You talk about the focus on high risk in all of this, but it sounds like an incredibly complex task to try and regulate such a sprawling, fairly new and nebulous sector. Is there worry, in all of that, that they step into overregulation?

Speaker 6

Overregulation is a massive concern for everyone. And it's actually interesting if you look at France. So France, a year ago, was actually the country spearheading this effort to include what they were calling at the time general-purpose AI. But now, a year later, they're actually the ones ringing alarm bells, saying that if we do too much regulation, we might miss out on this next wave of technology. And they want to make sure that their startups, both in France and around the European Union, can actually be leaders in this technology.

Speaker 4

And here's what French President Emmanuel Macron had to say about investment in AI earlier this year.

Speaker 1

I think we're number one in continental Europe, and we have to accelerate. So this is why we want to invest more. The good thing is that we have a lot of very good talents. We have good mathematicians, good data scientists, a lot of talents adapted to this AI environment. We will invest like crazy in training and research.

Speaker 4

Is there the sense that the EU is sort of the leader, the front-runner, on all of this?

Speaker 6

Yeah, it's interesting, because we obviously saw Sam Altman from OpenAI come out and tell the US Congress he wants regulation, and now there's really this impetus in Washington, DC to actually regulate AI. But while the US government is looking at this for the first time, the EU is years ahead. They might be ahead, but they still haven't figured out just how far to go. The EU and US are talking about how to regulate AI. The US government is looking at the EU both as an example of what they could do, and as a cautionary tale of how far, maybe too far, you could go as a government. The US government has even lobbied against some of these controls and said that they're going to negatively impact their companies. And it's interesting: the US does have some rules in certain states. New York, for example, has laws about how AI can be used in employment situations. But really, the EU is once again operating as the world's big tech regulator, and they have a long reputation of being that. But they also could be a model for going too far.

Speaker 4

Let's talk a little bit about dynamics within the EU as well, because there are very big differences between member states, and obviously in Brussels itself. You touched on some of these things. I imagine for countries that belong to the European Union, there's that tension where they want to protect from the high-risk perspective, but they also want to capture some of this business.

Speaker 6

It's interesting, because the Parliament has gone full speed ahead and they're adding all these controls, like I mentioned, or want to add all these controls. The countries are really aware of all the possible risks. They see that this stuff could go too far, but they also want to be the hub where startups come and build the kind of company that could rival a Google or an OpenAI, and so they do not want to overregulate. Even Spain, which is leading the EU's presidency, has a long reputation of trying to build up its AI market, and their government has been very active in trying to encourage AI startups to start there. So it's really going to be interesting to see how far they actually want to pursue rules on ChatGPT.

Speaker 4

My conversation with Jillian continues later in the episode. But first: will the EU set a precedent for the US? I'm joined now by Columbia law professor Anu Bradford, the author of the upcoming book Digital Empires: The Global Battle to Regulate Technology, to talk a bit more about the specifics of this act in the European context. Anu, I want to start off by asking you: how is the EU's rights-based approach being applied specifically to the AI Act?

Speaker 2

So the EU's main concern when it comes to the development and deployment of AI is how it implicates the fundamental rights of individuals, so that has been one of the guiding principles. The EU is particularly worried about the individual's fundamental right to privacy, and AI can be a very powerful tool for surveillance. So the AI Act is looking to limit the ways AI systems can be used for surveillance purposes, for instance predictive policing, but also the use of facial recognition in public places, which can potentially put large segments of the population under surveillance. So privacy is one thing, and another aspect of fundamental rights that is relevant here is discrimination.

So many companies are using, for instance, AI as a tool for recruitment, so the AI can determine your access to employment. The AI can determine access to education, or banks and financial institutions can use AI to screen individuals' access to credit, or states can use it to determine access to public benefits. And that is a setting within which some concerns over discrimination can arise.

Speaker 4

Just given the complexities of how you regulate AI, and the EU trying to hit the right note with legislation that's vast and complex, I guess the million-dollar question is: is it even feasible to try and do this?

Speaker 2

So I think that is a really critical question, Ros, and I think the EU is conscious of the difficulty of the task ahead. But at the same time, I don't think that is enough to deter the EU from going ahead with this legislation. There's a very big temptation, and one often hears, that look, this is too fast-moving, this is too complicated, and the legislators just don't have the technical expertise to regulate. But I would rather say that this is not just about technology. It is also about how technology implicates fundamental rights, how it implicates democracy. And I wouldn't say that the Facebooks of the world have any particular expertise in being in charge of democracy and fundamental rights. So that alone, I think, justifies that democratic governments have a seat at the table when it comes to steering the AI future of their societies.

Speaker 10

I think that you have all heard, and probably agree, that AI is too important not to regulate, and too important to regulate badly. A good regulation that we all agree on as soon as possible must be a common objective.

Speaker 4

What you just heard was Margrethe Vestager, the EU Commission's Executive Vice President, addressing the European Parliament last month. So would you describe the legislation as the best that it can be at this point in time? Or, when you look at it, given your experience with regulation and its challenges as a whole, do you see any holes in the legislation? Do you see things that lawmakers have missed in this process? What would be your advice to them, if you could look at this legislation and talk to them about it?

Speaker 2

So I think it would probably be fair to say that this is not going to be a perfect piece of legislation, and that sometimes invites the question of whether no legislation is better than imperfect legislation. And there I'm willing to say that even if we don't fully get the kind of regulation that best serves the development of AI over the coming years, it is still important to get basic guardrails in place. So yes, in many ways I think the legislation is still somewhat vague, and that, I think, suggests this is the best that we can do at this point. You cannot expect the lawmakers to become too specific in prescribing how you develop a lot of these technologies. But at the same time, they do need to put in place those transparency obligations, so that we can still have democratic oversight and a conversation where we open the black box behind these technologies and gain a better understanding.

Speaker 4

The EU approach is very different from the US approach, which is very hands off, basically sort of looking towards voluntary regulation if there is such a thing.

Speaker 11

AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security. Tech companies have a responsibility, in my view, to make sure their products are safe before making them public.

Speaker 4

I'm interested in your view on that. And, b, even if the US decided they wanted to tackle this, is it just impossible during the US election season?

Speaker 2

So the US has for quite a while been reluctant or unable to legislate. Compared to the Europeans, Americans are still much more convinced of the operation of the markets, have greater faith in the tech companies' self-regulation, and at the same time greater hesitancy that governments can improve outcomes by stepping in. That's why I've often described the American way of regulating as more of a market-based approach, as opposed to the European rights-based approach. And I think there are a few reasons for that.

The Americans are still very focused on safeguarding the conditions for a thriving tech ecosystem that is ideal for innovation. They are also regulating in the shadow of the US-China tech war, so Americans are very concerned that the US retains its technological supremacy and is not left behind in this ongoing tech race. There's also tremendous lobbying by the tech companies, which partially explains why the US Congress has not been able to generate any meaningful legislation, even though public opinion in the US has been shifting to be more skeptical of tech companies' self-regulation. And it's just generally that the political process is rather dysfunctional; we see very little meaningful legislation emerging from Congress. Still, today we are seeing something like voluntary guidance coming from the White House. There's a Blueprint for an AI Bill of Rights, but ultimately those voluntary regulations leave the decisions in the hands of the tech companies.

Speaker 4

In terms of talking about the mitigation part of it, do you see a potential to actually use AI to regulate AI? Are there any aspects of AI that can be drawn out that are more positive, that are more productive, that can be used in the application of systems to regulate AI.

Speaker 2

I think it's almost inevitable. Ultimately, if AI really is as powerful a tool, for instance in the hands of criminals, we need to make sure that we also have good actors deploying and developing AI in a way that lets us more easily detect wrongdoing and detect fraud. And if we think about AI being used to spread fraudulent information and misinformation, we need the power of that technology deployed in a way that allows us to fight those downsides and mitigate that kind of harmful activity. So absolutely, I think there is a great need, and I would like to think a great incentive for many developers of these technologies, to make sure the technology is deployed in a way that can be used as a powerful weapon for good.

Speaker 4

Coming back to where the EU is at: could you just walk us through where the act is now, and what the process is to bring it fully into law? Because it has to be discussed with the EU member states, and there could be some wrinkles there, no doubt. What are the next steps in this process?

Speaker 2

So we are two years into this process, rather: that's when the proposal was first put forward by the European Commission. And right now we have a proposal that has gone through the legislative process in the Council, which represents the voices of the member states, and in the Parliament, which represents the European citizens. But now we have this last phase of the legislative process, where we need to reconcile the differences between the Parliament and the Council, and the Commission is there as a broker in these conversations. So there are some differences. I think it's fair to say that the Parliament has been more ambitious in imposing additional obligations that were not part of the version that went through the Council. And now this fall is the time when we are trying to find a consensus between the two legislators, and because of the looming European elections next summer, there is a timing issue.

If we fail to reach a consensus and pass the AI Act through this final stage, the fear is that we go to the next elections, and then we have a new slate of MEPs with their own priorities, and we need to go back to the drawing board. So there is a great need now for Spain, as the holder of the Council presidency, to try to broker the compromise and make sure that we can get the law out, still in the course of this fall.

Speaker 4

Of course, I need to ask you about the Brussels effect, which is a phrase that many of our listeners will have heard of, but may not know that you in fact coined. It describes the way that, essentially, if I've got this right, EU regulation becomes the standard everywhere, because companies say, well, we need to standardize our operations globally. So even though it's only an EU regulation, it can become the way they operate, including in the US. Would you expect this to be the same in this case?

Speaker 2

I believe that there is very likely to be a Brussels effect, at least for some applications of AI. So first of all, AI is a multifaceted thing, and there are many forms and types of AI, so I wouldn't expect every AI system to follow the European regulations around the world. But there are a couple of features of how AI works that lend themselves really well to the Brussels effect. One is that you need a lot of data to develop and power these models; the more data you have, normally the better AI models you have. And if you are willing to use European data to train these models, you are bound by the European AI Act. Then, if you want to offer that same AI system in another market and use those models that were trained on European data, you continue to be bound by the AI Act. So if you wanted to evade the European regulations and, for instance, offer that same AI product somewhere in the United States, you would need to retrain the model and carve out all the European data from that model. That is often unappealing for companies.

Speaker 4

When we return: more from Bloomberg's Jillian Deutsch about the future of AI regulation. Jillian, let's talk a bit about the role of big tech and the lobbying efforts that have gone on so far. Obviously, some tech company CEOs have themselves expressed concern about the pace of AI and the risks from AI, but they also want to harness it and make money from it. So what have they been lobbying the EU for?

Speaker 6

It's interesting, because the EU proposed the AI Act two years ago, and companies have been lobbying like crazy in the interim. They actually liked the Commission's first proposal, because for the most part it left generative AI untouched: it was mostly if this technology was used in high-risk circumstances that it would be regulated. So tech companies were mostly pretty happy with that initial idea. It was when EU countries and lawmakers began adding more controls that they started to ring alarm bells and freak out. It's kind of difficult to summarize exactly what companies all want, but if I were to boil it down to three points, it's basically: one, trust us; we as companies have already put in our own controls. We have not launched certain products because of possible misuse or problems we've seen arise. Two, they say, it's not really up to us as developers to see how this technology is used.

It's up to the users, up to the companies that purchase the AI. They're the ones who determine how it's actually used, so it's really up to those companies and those users to bear the cost and the burden of regulatory compliance. And three, they really want to stick with this risk-based approach, which, to remind you, means very few controls on generative AI.

Speaker 4

And how will this fit against individual country regulation? I mean, obviously nations sort of do agree that Brussels has the say on some of these broad issues, but equally each country is still going to try and do their own thing.

Speaker 6

To some extent. I think individual countries are really going to be tripping over each other to become the AI hub in the EU, because the EU has a massive chip on its shoulder. They've missed out on many waves of technology, most notably the social media wave. They want to have some kind of company that could someday really rival the likes of Google, the likes of Microsoft. And so we will definitely see countries like France and Spain trying to attract those startups, trying to build them up, help them scale up, and not lose them to the US like they've seen with previous startups. So we definitely will see countries trying to become the most hospitable hub for AI.

Speaker 4

And we've talked a lot, I guess, about the risks, about the fallout, about the slightly dystopian future that people warn might come with AI. So let's maybe talk just briefly about the upside as well. When we look at AI and the possibilities across business, government and so on, what does the EU see as the potential of AI?

Speaker 6

Such a great question, because so much of the conversation here in Brussels is about the possible harms, about big tech companies taking the entire market and Europe never being able to compete. I think lawmakers do realize that there is serious upside, for productivity, for the healthcare industry, and they want to make sure that they can capitalize on that as well. Part of the AI Act, actually, is this thing called sandboxes, which basically allows smaller companies to operate with fewer restrictions so that they can test out their technology and not be constrained by so much regulation. And so this at least is one opportunity for EU politicians to say, look, we're not just focused on the regulation, we also care about the innovation here as well.

Speaker 4

If you had to make a bet, what would you say will be in that final bill?

Speaker 6

There's no question to me that generative AI will be included in some way. I do think that most people who are not in the Parliament think the Parliament went too far. So some of these concerns from companies, that they will not be able to quantify the impact on the rule of law or the impact on the environment: those risk assessments will probably be taken out. But we will definitely see some controls on generative AI. There's no question about that.

Speaker 9

Jillian, thank you very much for your time.

Speaker 4

Thank you so much, Ros. Thanks for listening to us here at The Big Take. It's a daily podcast from Bloomberg and iHeartRadio. For more shows from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen. And we'd love to hear from you: email us with questions or comments to [email protected]. The supervising producer of The Big Take is Vicki Vergolina. Our senior producer is Kathryn Fink. Federica Romaniello is our producer. Our associate producer is Zaynab Siddiqui. Hilda Garcia is our engineer. Original music by Leo Sidran. I'm Rosalind Mathieson, in for Wes Kosova. We'll be back tomorrow with another Big Take.
