Elon Musk talks about the future of AI and Humanity

Mar 05, 2024 · 11 min

Episode description

Elon Musk articulates a balanced view on government regulation in technology, emphasizing its necessity for public safety, particularly in the context of digital superintelligence which could pose risks. He acknowledges the majority of software doesn't endanger public safety; however, for areas like AI, governmental oversight is crucial. Musk, drawing from his extensive experience with regulators in various industries, agrees with most regulations, noting their importance in maintaining safety standards similar to a referee in sports. Despite concerns from Silicon Valley about innovation being stifled by regulations, Musk believes that regulation, even if minimal, is beneficial for managing the rapid advancement and potential risks of AI technology. He highlights the importance of international cooperation, especially with leading AI development centers and countries, to ensure a unified approach to AI safety. Musk also discusses the inevitable rise of open-source AI and its implications, suggesting a slight preference for its transparency. He envisions a future where AI could significantly enhance living standards, education, and provide companionship, despite the challenges in fully understanding and controlling AI's development and impact.

Transcript

Hey everybody, welcome back to the Elon Musk Podcast. This is the show where we discuss the critical crossroads that shape SpaceX, Tesla, X, The Boring Company, and Neuralink, and I'm your host, Will Walden. If you want uninterrupted episodes of the Elon Musk Podcast, go to clubelon.supercast.com to find out how; there's a link in the show notes. Well, I generally think it is good for government to play a role when the public

safety is at risk. So really, for the vast majority of software, the public safety is not at risk. I mean, if an app crashes on your phone or your laptop, it's not a massive catastrophe. But when talking about digital superintelligence, which I think does pose a risk to the public, then there is a role for government to play to safeguard the interests of the public. And this is of course true in

many fields, you know, aviation, cars. I deal with regulators throughout the world, because of Starlink being communications, rockets being aerospace, and cars being ground vehicle transport. So I'm very familiar with dealing with regulators, and I actually agree with the vast majority of regulations. There are a few that I disagree with from time to time, but that's maybe 0.1 percent, probably well less than 1 percent, of regulations.

And there is some concern from people in Silicon Valley who've never dealt with regulators before, and they think that this is going to crush innovation, slow them down, and be annoying. And it will be annoying; they're not wrong about that. But I think we've learned over the years that having a referee is a good thing. If you look at any sports game, there's always a referee, and nobody's suggesting having a sports game

without one. And I think that's the right way to think about this: for government to be a referee, to make sure there's sportsmanlike conduct and that the public safety is addressed, that we care about public

safety. Because I think there might at times be too much optimism about technology, and I say that as a technologist, so I ought to know. Like I said, on balance, I think AI will most likely be a force for good, but the probability of it going bad is not 0%. So we just need to mitigate

the downside potential. The pace of AI is faster than any technology I've seen in history, by far, and it seems to be growing in capability by at least fivefold, perhaps tenfold, per year. It'll certainly grow by an order of magnitude next year. And government isn't used to moving at that speed. But even if there are no firm regulations, even if there isn't

an enforcement capability, just having insight and being able to highlight concerns to the public will be very powerful. So even if that's all that's accomplished, I think that will be very, very good. Currently, the two leading centers for AI development are the San Francisco Bay Area and the London area. There are many other places where it's being done, but those are the two leading areas.

So I think if the United States, the UK, and China are sort of aligned on safety, that's all going to be a good thing, because that's really where the leadership is. Generally, if China is not on board with AI safety, it's somewhat of a

moot situation. The single biggest objection that I get to any kind of AI regulation or safety controls is: well, China's not going to do it, and therefore they will just jump into the lead and exceed us all. But actually, China is willing to participate in AI safety, and thank you for inviting them; I think we should thank China for attending.

When I was in China earlier this year, my main subject of discussion with the leadership in China was AI safety, saying that this is really something they should care about, and they took it seriously, which is great. And having them here, I think, was essential; really, if they're not participants, it's pointless.

Well, the open-source algorithms and data tend to lag the closed source by six to twelve months, so given the rate of improvement, there's actually quite a big difference between the closed source and the open source. If things are improving by a factor of, let's say, five or more per year, then being a year behind means you're five times worse.
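The lag arithmetic here can be sketched in a few lines of code. This is purely illustrative; the growth factor and lag are the rough figures Musk gives, not measured values, and `capability_gap` is a hypothetical helper name.

```python
# Illustrative sketch of the capability-gap arithmetic: if capability
# multiplies by `growth_per_year` each year, a model lagging by
# `lag_years` is growth_per_year ** lag_years times less capable.
def capability_gap(growth_per_year: float, lag_years: float) -> float:
    """Ratio of frontier capability to a lagging model's capability."""
    return growth_per_year ** lag_years

print(capability_gap(5, 1.0))  # one-year lag at 5x/year growth -> 5.0
print(capability_gap(5, 0.5))  # six-month lag -> ~2.24
```

The same compounding is why he says the gap "certainly" grows by an order of magnitude in a year at tenfold growth: 10 ** 1 = 10.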

So it's a pretty big difference, and that might actually be an OK situation. But it will certainly get to the point where you've got open-source AI that will start to approach human-level intelligence, and perhaps exceed it. I don't know quite what to do about it. I think it's somewhat inevitable that there'll be some amount of open source, and I guess I would have a slight bias towards open source, because at least you can see what's going on.

With closed source, you don't know what's going on. Now, it should be said with AI that even if it's open source, do you actually know what's going on? Because if you've got a gigantic data file with, you know, billions of data points or weights and parameters, you can't just read it and see what it's going to do. It's a gigantic file of inscrutable numbers. You can test it when you run it; you can run a bunch of tests to see what it's going to do.

But it's probabilistic as opposed to deterministic. It's not like traditional programming, where you've got very discrete logic, the outcome is very predictable, and you can read each line and see what each line is going to do. A neural net is just a whole bunch of probabilities. I mean, it sort of ends up being a giant comma-separated value file. It's like our digital god is a CSV file.
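The "inscrutable numbers" point can be illustrated with a toy example (hypothetical, not any real model): the weights are just an array of numbers, and the only practical way to characterize the network is to feed it inputs and observe the outputs.

```python
import numpy as np

# Toy stand-in for a trained network: the "model" is just a blob of numbers.
# Reading the weights directly tells you almost nothing about behavior.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4))  # real models have billions of these

def tiny_net(x):
    """One linear layer plus a nonlinearity; real networks stack many."""
    return np.tanh(weights @ x)

# The only way to "see what it's going to do" is to run tests against it:
for trial in range(3):
    x = rng.standard_normal(4)
    print("input:", np.round(x, 2), "-> output:", np.round(tiny_net(x), 2))
```

Even here, with sixteen weights in plain sight, you characterize the function empirically rather than by reading it line by line, which is the contrast with discrete, predictable program logic.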

If you have a magic genie that can do everything you want, I do think it's hard to say. You know, when there's a new technology, it tends to usually follow an S-curve. In this case, we're going to be on the exponential portion of the S-curve for a long time, and you'll be able to ask for anything. It won't be universal basic income; we'll have universal high

income. So in some sense, it'll be somewhat of a leveler or an equalizer, because really, I think everyone will have access to this magic genie and will be able to ask any question. It'll certainly be good for education; it'll be the best tutor you could have, and the most patient tutor. And there will be no shortage of goods and services; it will be an age of abundance. I'd recommend people read Iain Banks; the Banks Culture books are

probably the best envisioning. In fact, not probably; they're definitely, by far, the best envisioning of an AI future. There's nothing even close, so I'd really recommend Banks; I'm a very big fan. All the books are good, so it's hard to say which one. All of them. So that's that. That'll give you a sense of what is, I guess, a fairly utopian, protopian future with AI. As I was mentioning when we were talking earlier,

I have to somewhat engage in deliberate suspension of disbelief, because I'm putting so much blood, sweat, and tears into a work project, burning the, you know, 3:00 AM oil, and then I'm like, wait, why am I doing this? I can just wait for the AI to do it. I'm just lashing myself for no reason. I must be a glutton for punishment

or something. So I think it probably is generally a good thing, because, you know, there are a lot of jobs that are uncomfortable, dangerous, or sort of tedious, and the computer will have no problem doing that; it's happy to do that all day long. So, you know, it's fun to cook food, but it's not that fun to wash the dishes, and the computer's perfectly happy to wash the dishes.

I guess, you know, we still have sports, where humans compete, like the Olympics. Obviously a machine can go faster than any human, but we still watch humans race against each other and have sports competitions against each other. Even though the machines are better, we're actually, I guess, competing to see who can be the best human at something, and people do find meaning in that.

So I guess that's perhaps a good example of how, even when machines are faster than us and stronger than us, we still find a way; we still enjoy competing against other humans. Certainly, tutors are going to be amazing; the apps already are. I think there will also be companionship apps, which may seem odd, because how can a computer really be your friend?

But if you have an AI that has memory, that remembers all of your interactions and has read everything, say you give it permission to read everything you've ever done, then it really will know you better than anyone, perhaps even yourself. And if you can talk to it every day and those conversations build upon each other, you will actually have a great friend, as long as that friend can stay your friend and not get

turned off or something. But I think that will actually be a real thing. One of my sons sort of has some learning disabilities and has trouble making friends, actually, and I was like, well, you know, an AI friend would actually be great for him. Hey, thank you so much for listening today. I really do appreciate your

support. If you could take a second and hit the subscribe or follow button on whatever podcast platform you're listening on right now, I'd greatly appreciate it. It helps out the show tremendously, and you'll never miss an episode; each episode is about 10 minutes or less, to get you caught up quickly. And please, if you want to support the show even more, go to patreon.com/stage Zero. Please take care of yourselves and each other, and I'll see you tomorrow.

Transcript source: provided by the creator in the RSS feed.