Bloomberg Audio Studios, podcasts, radio news.
The last week of August isn't exactly known as a busy time of year unless you work in California's legislature.
I'm Senator Scott Wiener, and I am calling in from the state capitol in California.
When we talked recently, Senator Wiener told me he was running from vote to vote on everything from regulating hedge funds to allowing outdoor cannabis sales, very California stuff.
We are in our last week of our legislative session, so every day is marathon voting. We've already voted on a few hundred bills and we have hundreds more before our session ends on Saturday.
But I wanted to speak to Senator Wiener about one very specific bill that was being voted on that day. It was one he introduced back in February called SB ten forty seven, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill would hold AI companies to new safety standards and hold them legally liable if
their tools end up causing catastrophic harm. The bill would only apply to the biggest AI players, the ones that are spending upwards of one hundred million dollars on training their AI models or spending ten million dollars fine tuning them.
We have an obligation in California, as the tech leader on the planet, to make good, forward looking, pro innovation, pro safety tech policy.
Now, Senator Wiener has never been one to shy away from controversial policy moves. A Democrat, he's been one of the biggest champions of zoning reform in the state. But introducing this bill in the place where ChatGPT and Claude were first unleashed, and where OpenAI and Anthropic are racing toward superintelligence, hasn't exactly been popular.
I knew that there would be significant pushback just because of the nature of this subject, and I knew it would be forceful, but it was even more than I thought that it would be.
OpenAI opposes the bill. So do Nancy Pelosi and Andreessen Horowitz. But Wiener believes that even with that pushback, no state is better positioned to be forward looking in this realm.
California has led on tech policy, on climate policy, on so many issues where we are ahead of Congress and we set the trend, and we should do that here as well.
Today on the show: the debate over the bill to curb AI harms has become a lightning rod for the tech community. Could it be the future of tech regulation or a cautionary tale? This is The Big Take from Bloomberg News. I'm Sarah Holder. The idea for SB ten forty seven came to California State Senator Scott Wiener about a year and a half ago. That's when he was approached by a group of experts who wanted him to take on AI regulation.
Sometime in the early part of twenty twenty three, some AI technologists came to me and expressed that we're overdue for a conversation about AI safety. As we have all this amazing, brilliant innovation, how do we make sure that as AI scales, it happens in a responsible way? Because we have a history of not getting out in front of the risks posed by technology and just letting things happen and then having to clean up the mess later.
Now, there are a lot of AI bills floating around in California. In fact, during that August session, legislators passed another bill on deepfakes. But SB ten forty seven focuses on a whole different universe of potential AI harms, catastrophic ones: think property damage over five hundred million dollars or a massive loss of human life. And Wiener says he chose to focus on this particular aspect for a reason.
In the debate on SB ten forty seven, there's a lot of what-about-ism. What about deepfakes? And what about algorithmic discrimination? What about misinformation? Why are you focusing on the risk of catastrophic harm? And my answer is, well, we need to address all of these issues.
But in addition to those potential harms, we know that there is a risk of larger harms, for example, shutting down the grid, melting down the banking system, making it easier or facilitating the creation of a chemical, biological, or nuclear weapon, destroying critical infrastructure. These are all harms that you don't have to have the best imagination on the planet in order to visualize.
These are worst case scenarios, and Wiener has been criticized for even imagining them.
Sometimes the opponents of the bill, they try to say that anyone who focuses on AI safety must be a crazy doomer, which is not true.
You don't consider yourself a crazy doomer.
I am not a crazy doomer. I am very pro AI and pro innovation. I just want it to be done responsibly.
Still, the risks outlined in the legislation are mostly hypothetical, at least as of now. I asked Shirin Ghaffary, a reporter who covers AI for Bloomberg News, about how imminent the risk of an AI-fueled catastrophe really is.
Okay, this is where, you know, it really depends who you talk to, because even the experts, even the quote unquote godfathers of AI who basically invented this field of the current type of AI models we're using today, wildly disagree. So, you know, someone like Yoshua Bengio, for example, a leading computer scientist who won the Turing Award, which is like the Nobel Prize in math, for deep learning, which again is like the foundation of the AI that we're using today. He is a big supporter of this bill. He's come out and said, you know, these risks could be coming. He doesn't know exactly when, but he says the risk is enough that we should be trying to mitigate it. Then you have someone else who also won the Turing Award with him that year, another big scientist, Yann LeCun, head of AI at Facebook, now Meta. He says, this is ridiculous. You know, we shouldn't be worried about these catastrophic risks. These AI tools today are nowhere near as smart as human beings, and we're all way too overly paranoid here.
Overly paranoid or not, another question is whether state legislation like this could actually prevent the worst from happening. I asked Shirin how exactly SB ten forty seven approaches AI harm reduction.
So what it would actually do is mandate that companies do a couple things. One is that they have a so-called kill switch, that they have a way to, like, turn off the machine if things go horribly wrong. Another is that they would have to create what are called SSPs, safety and security protocols. And what that would do is essentially be like a document that outlines how they're adhering to risk mitigation, which includes things like having external testing. Right, so they say you have to get a third party to go and test your systems and make sure that they're not high risk. Also, just the threat of being held legally liable, right, is kind of putting into writing and spelling out exactly how the state of California could come after you if you are not at least following a basic set of principles around minimizing risk.
It's that threat of legal liability that spooked some tech companies and helped fuel a fierce lobbying effort against the legislation.
VC firms and tech companies have brought in the big guns. They've hired top California lobbyists. Andreessen Horowitz has hired someone who has known Governor Newsom for a long time and is a very well known and influential lobbyist in California. OpenAI and other companies have also hired lobbyists who are working on this bill.
And already this tech industry pushback has led to tangible changes. Take the way SB ten forty seven deals with open source startups, for example. Shirin says there were concerns that even though the bill only applies to these AI giants, some of them, like Meta, make their AI models open source. That makes it easier for smaller startups to use them in their work.
And the worry is that if this bill does apply to companies like Meta, there could be a downstream effect where Meta says, we're not going to open source our models anymore, because we don't want to deal with worrying about potential liability coming from other developers misusing our technology.
The latest version of the legislation tries to address those potential downstream consequences.
For example, Wiener told me, the bill now makes very clear that the obligation to be able to shut down an AI model only applies if you have that model in your possession. So if you open source a model and someone significantly changes the model, it's no longer your responsibility.
There's another overarching critique that tech companies like OpenAI have made about SB ten forty seven, though, one that's harder to solve with an amendment. It's the fact that the legislation comes from the state of California, not Congress.
A lot of critics of the bill say we should get federal legislation. They don't want this patchwork.
A patchwork meaning different AI regulations set state by state, potentially overlapping or contradicting one another, instead of one national framework all tech companies have to follow. But Senator Wiener says part of the reason California is acting is because Congress hasn't.
People would say, hey, this should be handled by Congress. It should be handled at the federal level. It's better to have a consistent national standard. And my answer is absolutely, I would love to be able to close up shop because Congress stepped in and passed a strong AI safety law. That would be music to my ears.
Not all California politicians are on board with this trailblazing policy agenda, including Representative Nancy Pelosi.
This bill is well intentioned but ill informed.
But prominent supporters have also emerged.
Elon Musk now coming out in support of a California bill to regulate the development of AI, a stark contrast to other big names including OpenAI and Meta, both of which have come out against the bill.
Shirin wasn't entirely surprised by Musk's take.
For a long time, this has been Musk's big fear. He has worried about the long term, potentially catastrophic effects of AI. That's why he helped fund OpenAI, to be a counterforce to Google, because he thought that AI is so important that more people should be in control of it. Now, if you're cynical, I think you could look at it and say, well, Elon has more incentive to be supportive of this bill because his AI company isn't number one. OpenAI or Anthropic or these other companies are perceived as being more ahead.
As for Wiener, he told me he'll take the support where he can get it. Do you think his tweet helps you?
I think that his tweet really shook things up, because there are, you know, folks who I know were very critical of the bill, and once they saw the tweet, it made them think about it more.
The drama within the AI community over this bill hasn't stopped state legislators. They passed SB ten forty seven in the last week of August. Now its fate rests in California Governor Gavin Newsom's hands, and if he signs it into law, the repercussions could be felt far beyond California. Coming up: will these AI regulations, developed in the industry's backyard, have global reach? The Golden State is often ahead of the curve when it comes to technology, both building it and regulating it, and Shirin Ghaffary, a Bloomberg tech reporter, says what happens in California very rarely stays in California.
It could very well be something that turns into a template for other states. This has happened in the past, where California set some kind of tech legislation and then other people followed through. We are already seeing other states like Colorado looking at AI legislation, and I do think we could see more.
And it's not just other states that could be taking notes on what California is doing.
It could even one day become a template for federal or even international regulations.
Shirin says SB ten forty seven already hews closely to international agreements that have come out of recent global safety summits in the EU and in Korea. And though the US has no national AI regulation in place, OpenAI and Anthropic recently voluntarily agreed to share their AI models with the US Commerce Department's new AI Safety Institute before they're released.
So proponents of the bill say, look, you're already doing some of this work. Why are you so against it? But critics of the bill say, well, these voluntary agreements are working. We're already seeing companies voluntarily come on board, so why do we need to have this hammer?
It's a common tension between businesses and policymakers. They both think they should be the ones holding the hammer. But for the AI industry, Shirin says, now is a particularly tense time to be having this debate.
People think that there is a race to the finish line to reach superintelligence right now. It's this exciting moment where we're seeing this kind of Cambrian explosion and this rapid advancement in AI, and they worry that this is going to just stifle that. And I think that's kind of a classic tech versus government fear.
To AI regulators, slowing down that explosive growth just long enough to put up some guardrails might not be the worst thing. And while Shirin says we likely won't see any changes to the AI tools we use in our everyday lives just yet, the biggest change could be in how developers go about rolling out new technology.
They may have to take more of a beat right before putting out certain models and make sure that, okay, are we in compliance with these guidelines, and if not, are we taking the risk of being sued. So it would just kind of create some more administrative overhead for these companies to make sure that they are following these rules.
And for California State Senator Scott Wiener, that administrative overhead is a small price to pay to ensure future safety.
This is not about being an AI doomer. It's about being a responsible grown up, where we say, let's do all these good things and let's also protect against significant harms.
Whether you agree with him or not, Shirin says that the debate over the bill has already informed what questions we all might need to be asking about AI development and AI regulation in the coming years.
I think what this bill shows is that it's very easy to say, oh yeah, there could be some kind of catastrophic risk from AI, we're all in agreement on that. But it's very hard to actually agree on, okay, what is the percentage chance you think that risk is going to come? How urgent is it? What is the best way to manage it? Once you get into the details, all of a sudden, managing AI becomes really, really complicated,
especially because it's so new, it's so technologically difficult. So I think it's probably one of the hardest kinds of technology in some ways to regulate.
This is The Big Take from Bloomberg News. I'm Sarah Holder. This episode was produced by Adriana Tapia, who also fact checked this episode. It was mixed by Blake. Our senior producers are Naomi Shavin and Kim Gittleson, who also edited this episode. Our senior editor is Elisabeth Ponsot. Nicole Beemsterboer is our executive producer. Sage Bauman is Bloomberg's head of podcasts. If you liked the episode, make sure to subscribe and review The Big Take wherever you listen
to podcasts. It helps people find the show. Thank you so much for listening. We'll be back on Monday.