
Could AI be a danger to humanity?

May 17, 2023 · 13 min

Episode description

In this episode, we discuss the potential dangers of artificial intelligence. We talk about how AI could be used to create autonomous weapons, how it could lead to job displacement, and how it could exacerbate inequality. We also discuss the potential for AI to become sentient and pose an existential risk to humanity.

Transcript

Hi, this is Joanna and you're listening to Conspiracy Theories with me. Today I want to talk about a conspiracy theory that's been gaining some traction lately. We have seen it everywhere. I know I have personally seen it on my social media. Pretty much anywhere I look, there is somebody or something talking about ChatGPT, the new Google one, what is it called, Bard, something like that.

There's a lot of AI talk going around, and I can't lie and say it's not a little bit scary, because it's definitely making me nervous, especially with what I do in my day-to-day job. That my job might be replaced because of an AI. So we're going to talk about the scary theory of AI taking over the world. So what is the AI takeover conspiracy theory? Basically, it's the belief that AI will eventually become so intelligent that it will surpass human intelligence and take over the world.

Proponents of this theory often point to the fact that AI is becoming increasingly sophisticated, and that it's only a matter of time before it's smarter than us. Personally, I think that we're already at a level where you could say AI is smarter than us.

If you narrow it down to what it is and the kind of questions you can ask it, I'm sure if you've used ChatGPT before, it's already smart enough that you can ask it anything and it gives you a basic minimum understanding of any topic of your choice, which the average person cannot do. So for that I'd say we're already a little bit past them being smarter than us, as far as the information that they're able to carry and kind of spew back at you.

Now, I'm not a scientist, so I can't really speak to the scientific evidence, but I can tell you that the AI takeover conspiracy is a very scary one. It's the kind of thing that keeps you up at night. Like I mentioned, I'm kind of scared of it. Let's talk about it and dive into it. The AI takeover conspiracy theory is far from new.

It's actually been around for decades, and the first recorded mention of the theory was in 1965, in an article by mathematician and computer scientist I. J. Good. Good argued that AI could eventually become so intelligent that it would be able to design even more intelligent AI, leading to an exponential increase in intelligence. He called this phenomenon the intelligence explosion.

As I mentioned with the versions of ChatGPT before, we can already see that not only is there an increase in intelligence in what we're seeing from AI to AI, but they're using the information of what ChatGPT is learning to go and build a different, better version. Now, it's not the AI building itself, but it's using the same idea of expanding from a previous, worse version of one AI. The article ended up sparking a debate about the potential dangers of AI.

You had half believing that Good was right and that AI was a threat to humanity, and the other half believing that he was just being an alarmist and that AI would never actually pose a serious threat to humanity. This continued for decades, and it slowly, slowly died down a little bit before picking up again in the 1990s. Then, in 2014, the debate was reignited by the publication of a book called Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

Bostrom argued that AI could pose an existential risk to humanity and that we needed to take steps to prevent it from eventually getting there. Bostrom's book made a big impact, and it helped raise awareness of the potential dangers of AI, so it switched the mindset from a 50-50, you're-just-being-an-alarmist split to, hey, this is possibly a threat that we could be looking at in the future. So, the ethics of AI. The AI takeover conspiracy theory is an extremely complex issue.

There's no easy answer to the question of whether or not AI is a threat to humanity. Ultimately, every person has to decide on their own whether they believe AI is a threat to humanity or not. However, it is important to have a conversation about the potential risks of AI. We need to be aware of the potential dangers so that we can take steps to mitigate them, and we need to have a conversation about the ethics of it. How should we use AI? What are the limits of AI?

These are important questions that we need good answers for as we move into an age of AI. Like I mentioned, I'm scared of my job being replaced, and there have to be a lot of questions about whether we allow that to happen and give more leisure time to people, so you can kind of go about your day if you don't really need to be spending that much of your time doing something. But then that becomes a very political subject, and it spirals from there. So, are we going to get to that point?

I personally believe that we are going to get to that point, maybe not relatively soon, but definitely within my lifetime. There's a lot of speculation, and it's very easy to look at something in a very negative way and just start altering it to be a very scary thing. But AI is becoming extremely sophisticated. There are a lot of systems that are now able to perform the same tasks that we once thought could only be performed by humans.

Like driving cars, writing news articles and diagnosing diseases. Not only are they now doing tasks that we once thought were only going to be for humans, but AI systems are becoming increasingly interconnected. AI systems are now able to communicate with each other and share information. This means that they could potentially coordinate their actions and pose a more serious threat to humanity by planning something against us.

Recently, I was spending a lot of time on TikTok, as I usually do, wasting brain cells and time, and I saw someone that created a task for an AI to send emails, connecting it to his personal accounts and all the information that it needed, bank accounts, etc., to email subscription companies that were not willing to cancel or close a subscription.

So it kept emailing them saying, hey, you've got to cancel this, cancel this, cancel this, until they did, and if they didn't, it would draft a legal letter saying, you have to refund me my money and cancel, because I've already been asking and you won't.

Here's that extra step of connecting with other systems that are not necessarily smart, because connecting to a bank account doesn't add much but information, but if it's able to communicate with someone, to email and take the response back and kind of act like a human, it's kind of scary. I also saw another one where they used an AI to make calls to certain businesses to reserve tables.

I think we had already seen something like this from Google, or it might have been from Amazon through Alexa or something like that, but it's scary to think that AIs are replacing jobs of simply just talking to people, and I think it could also not only be scary of them communicating with each other and creating something against us, but of them communicating with other people to create problems.

We've seen a lot of famous people receiving phone calls from random people pretending to be other famous people, and people falling for it. Those voices being used to call their parents to scam them, saying that they got kidnapped, and then demand money. So it's a very scary thing that's going on as far as communications and things changing because of AI. On the topic of controlling things, there could be an AI surveillance system that could also track our every move.

We've seen hints of it already with our data, in the way we use our phones, the way we walk, the way we've been in certain places and our phones being tracked, things like that, but we could have an AI surveillance system that can track everything.

It would monitor our activities and identify potential threats, and that could also be scary on its own. Once it gets to a point where AI is controlling something, there's no way we're going to get past a machine being able to do something that our human brain simply cannot. AI could potentially also control our infrastructure, such as power grids and transportation systems. This gives AI the ability to disrupt our society and cause widespread chaos without us even being able to stop it.

And last, AI could develop systems that control our minds. AI would have the ability to manipulate our thoughts and behavior. We've seen this happening before with the way social media is used, the way algorithms work. Algorithms are built by humans and intended for humans, but we could have something similar to an algorithm that makes us do something that we're simply not controlling. So there are multiple ways it could slowly start changing the way we do things without us realizing it.

This could be happening, and we would have no idea. We could have already reached a point where the AI is too smart and it understands that we don't know it's too smart, and it simply just shows us what we want to know from it. On this same topic, there have been a lot of news articles, conversations, interviews and things going on related to a Google employee who believed that one of the AIs Google was working on was already sentient, which means it can think for itself.

If this were to happen to an AI, it's literally the worst-case scenario, because it would mean it's impossible for us to control it. It's its own thing. It can make its own decisions, whether they're harmful or they're helpful. It could be hostile, it could not. It would easily be more intelligent than humans, and it would be aware of that. So that would be the actual existential risk to humanity directly, simply because it would be able to decide whether it wants to be hostile to humans or not.

We couldn't do anything about it, because it would be more intelligent than us. There would be no way for us to control it, there'd be no way for us to stop it, and it can make as many decisions as it wants. Now, maybe the most realistic risk of AI and AI taking over the world, as I mentioned at the beginning and probably multiple times throughout, is job displacement. As AI becomes more capable, it is most definitely likely that it's going to displace a lot of human jobs.

It's going to lead to widespread unemployment, and we've seen that unemployment leads to social unrest. There is a big discussion that needs to happen before we get to that point in order to be able to adequately navigate higher unemployment due to AI. On top of that, it would also create more inequality. AI could be used by the people that can afford a more expensive AI to be able to access more information.

We currently already have different tiers of ChatGPT. I think the difference is like $20 a month or something like that, which is not a lot of money for some people but could be a lot of money for others, and there could be a difference in the way that AI can change your life and the way that it can affect it. Whether that's negatively or positively, there could be something that creates inequality based just on the AIs that you're able to access. And bias.

If we ever become a society that depends on AIs to make decisions for us or make life easier, it would 100% be biased, because they feed off of knowledge. Whether it's knowledge that you feed it as a person and it's biased towards what you tell it, or it's biased towards what other people tell it. This could make for unfair, discriminatory decisions whichever way you want to look at it. Finally, weaponization. AI could create autonomous weapons that could kill without human intervention.

This could lead to a new type of arms race and an increase in violence. It could be something that we can't stop, and if you tie this in with any of the others, it's not a very helpful thing to society or to anybody as a whole. This is not to scare anybody, and it is important to remember that these are just hypothetical scenarios. It's impossible to know what the future holds for AI.

However, it's important that we're aware of the risks of it and we take steps early on, which would be now, to mitigate those risks. There are three ways that we can do this. One is to invest in research when it comes to responsible AI development, so we align AI with human values and what we decide to be safe and beneficial to us. I think the limits and boundaries that already exist, which stop you from using AI to do illegal things, are already great, and I think there's a lot more to build from there.

We continue to do that, and if we continue to align AI and the way we use it with human values and all we value as a society as right and wrong, then this should mitigate a lot of those issues of being able to use AI for the wrong things. Another is creating regulations that govern the development and the use of AI; both the regulation and the use of it have to be thoughtful and not a free-for-all. Lastly, and I think most important, educating the public about AI and its risks.

This is one of the reasons I really wanted to make this episode. I know right now there's a lot of talk about AI and the great things it can do for you, and it's awesome. I think there's nothing better than being able to cut a few hours of work in order to spend time with family or do the things that you want to do, like going out to run or work out more, by using AI. All of those things are great, but I think it's important that we keep in the back of our minds the potential risks.

We understand the dangers of it, and we demand as a big group that we use AI responsibly. Like I mentioned, the AI takeover conspiracy theory is a beyond complex issue. There's a lot of ethics that come with it. There's a lot of risks. There's a lot of positives. If we invest in research, create regulations and educate people on it, there should be no issues with it, as far as my little brain can comprehend. Thank you for listening. This has been Conspiracy Theories with me, Joanna.

I hope you had a great time listening. I wanted this to be a bit more conversational and a little bit more informative than any other podcast episode I've made. I hope you took something from it. I know I definitely did while doing the research on this, and I'll see you next week. Goodbye.

This transcript was generated by Metacast using AI and may contain inaccuracies.