Welcome back to Unlimited Hangout. I'm your host, Whitney Webb. The AI revolution has become more mainstream than ever, and the rise of generative AI is having an obvious and pronounced impact on human employment, creativity, socialization, and much more. Often framed as helping and assisting humanity into a utopia of enhancement and increased equality, the impacts of AI go far beyond that and stand to transform not just the economy and society, but ourselves. AI is being rolled out at breakneck speed across nearly every sector imaginable, along with other emerging technologies, creating a surveillance grid that logs and analyzes every supply chain, every keystroke, and every transaction. Its proponents say it will tackle illicit activity and inefficiency. But is it even possible to harness AI for its positive use cases without succumbing to its more negative impacts, especially considering that AI is largely being programmed and maintained by our Silicon Valley overlords and their partners in the military and intelligence communities? Joining me to discuss this and more is Star, Unlimited Hangout's podcast producer and my assistant, who has a lot of interesting perspectives on AI
that I definitely think are worth sharing. So hey, Star, how's it going? Hi, good, thanks, how are you? Oh, you know, doing swimmingly, here to talk about one of the topics I get asked about the most, and obviously there's a lot happening with it. As I said just a second ago, pretty much every sector is having some sort of disruption, quote unquote, caused by AI. The media, the field where we work, is also one of these sectors. Even mainstream news is talking a lot about the impacts of AI on media, but I think it's affecting alternative media quite a bit as well. So, like I said earlier, a lot of the narratives we're fed about the promise of AI are that it will reduce tedious work, that people won't have to do tedious work anymore, and they frame the emergence of AI as leading us to a sort of utopia. Well, maybe there's been some ability for people to not do as much tedious work, I guess, in writing, through ChatGPT and
generative AI, maybe producing thumbnail art for media, and whatever; all of that can be done in a few seconds with AI now. But one of the consequences that we've seen recently is these pretty big mass firings at legacy media, which some people in alternative media are cheering on. I don't really think it's something to cheer on, because essentially, if you view mainstream media as stenographers of the state, you have these companies firing mainstream media people and replacing them with generative AI, meaning a more effective stenographer: they can produce more and more content without necessarily needing people to do it. So I don't think that's a big win for independent media. It's not like these legacy media institutions are going to be producing less content. Yeah, there are people cheering it on because they hate mainstream media and whatever. But unfortunately, a lot of the dynamics that many of us in independent media used to be really against have been adopted by a pretty decent amount of people in independent media these days, which is pretty unfortunate. And I'm sure some people in independent media are using a lot of this generative AI stuff; well, I know they are. Frankly, that kind of concerns me, because as soon as ChatGPT was brought out and popularized, they were saying that generative AI would be something like 90% of all content by 2025. That's like a year from now, and I don't think that's necessarily a good thing, right? I don't know.
Well, it's also not just mainstream media that's going to be using it. There are plenty of shows that independent media doesn't like that still talk about the same type of stuff, along those same themes. So it's not like it won't happen to you just because you think you're some kind of truth teller or something; it's going to happen everywhere. They just don't need as many people to do the work. Well, I also think this emerging AI dominance in media is going to have a lot of impacts on the censorship agenda, which is obviously going to have a huge impact on independent media. I talked about this in a recent interview with Catherine Austin Fitts; it's unfortunately paywalled, just because of how she runs her site. But a lot of what I talked about in there was this Henry Kissinger
and Eric Schmidt book on AI. They essentially lay out that the goal is to have generative AI produce all the messaging, whether it's news, political messaging, or really messaging about anything, just online content, period, and then have that be curated by AI. So AI is censoring out the stuff that doesn't fit. And if, as Kissinger and Schmidt lay out, they want it all to be essentially AI-produced and, on top of that, managed by AI, then anything written by people rather than by AI is going to stick out to the AI and be easier to censor, which is not good. So I understand that people like some of the utility of it, and there's some convenience, I'm sure, to being able to produce a wall of text in three seconds with this thing, but I think it's a bit complicated, too. And I also wonder a lot about ChatGPT specifically. I've never used it, but as I understand it, you have to have an account, and every question you ask it gets logged and, I'm sure, sent back to the Sam Altman mothership to see what people are asking. And it keeps your history, so that can be subpoenaed. Oh, wow. Yeah. Yeah. I mean, it keeps your history. I've seen people talking about that, so I always delete everything I ask it. But you can use other ones besides ChatGPT, too, which is something people can consider.
Yeah, but I'm just thinking about how, when these novel things get rolled out, people use them without thinking about how they're going to use your data against you. Like Facebook: when that first came out, people were like, oh yeah, I'm going to ping my location and tell it exactly where I am, and link it to this and that, and post all my pictures and all this stuff. And then, oh, it turns out Facebook has all these weird connections to people like Peter Thiel and DARPA and whatever; maybe we shouldn't give them our data. People don't even remember what they've given. I remember when I was young, and I'm a lot older than you are, but I went to college in the 90s, when computers were first coming around, and when I first went to college I was using the computer all the time. There was this guy who went to my school who was kind of like the Unabomber; he was really weird and paranoid. This just kind of reminds me of him, and I always thought about how that guy was kind of right for being so paranoid. I was kind of paranoid too, like, I'm not going to leave my trails everywhere. I thought about that from the time I first started using the Internet back in the 90s. And there are some people who, from the time they started using the computer, would just tell everybody everything: this is where I live, these are the things that I like. All that stuff is still out there, and they don't remember what they've told, but they've pretty much told everything. Well,
I think when you get a computer, you think of it as yours: oh, this is where I'm putting all my stuff. You're not thinking about other people accessing it, even though intelligence agencies have funded a ton of these Silicon Valley companies that dominate everything, and it's been reported on and documented that they can get into pretty much everything. So it's not really yours as much as you might think it is. Anyway, going back to ChatGPT and this other stuff, people are asking it all sorts of things, I'm sure. And then you have this advent of AI quote-unquote people: AI girlfriends, AI therapists, AI everything in that sort of strain, I guess. Presumably that's logging all of your interactions too and sending them somewhere, which is disturbing. Anyway, what really concerns me about AI is that the end goal of this, for the powers that be, if you believe people like Henry Kissinger and Eric Schmidt, is to have basically everything we interact with in terms of information online be produced and curated by AI. And what's interesting too is there's been this narrative seeded about bad people using AI, like AI for disinformation, and ISIS recruiting people with AI, and all of these narratives. So it seems likely we'll see more of that narrative, and as it progresses, they'll try to make it so only certain people are allowed to have a ChatGPT account and ask it stuff. You know what I mean? Yeah, but there are so many of them. I mean, there's that one, I think I sent you the link, uncensored.ai. You can't actually get an account on it; I think there's a waiting list or something. But anybody can make one. There are so many of them that I don't think they can prevent people from using them. Maybe they can prevent them from using the big ones. But even Mark Zuckerberg, just last week, with his language model Llama, was saying he wants to make it open source, because I think he knows that's what people want.
Yeah, well, some people like to assume open source means free of bad, nefarious code, and that's not necessarily true. Open source just means it's available, and people have to go in and audit the code. If you don't want to audit the code, you know, whatever. But I've talked a lot about the coming push to regulate the internet specifically, which is a definite policy goal. So I think if they succeed in that, and there's a particular galvanizing event that makes people call for a privacy-free internet and all of that stuff, it's very possible there would be restrictions on who gets to use AI for information and who doesn't. Because a lot of the stuff in this Kissinger-Schmidt book is basically an outline of how to use AI to take us back to the Dark Ages, in a lot of ways I feel, specifically regarding the flow of information. I mean, this isn't necessarily how the Dark Ages ended, but with the invention of the printing press and the democratization of information, people were able to get it out there. Before that, information was very controlled, like by the church. Yeah, specifically, and only clergy or certain people could have access to it and learn to read, and all of this stuff. And the idea these guys lay out is basically using AI to take us back to that, which is very crazy, because it's being sold as enhancing humanity and all of this stuff, and you won't have to do tedious work. That's sort of been the justification for pushing things like UBI, universal basic income, and all of that. But given the way these guys are actually thinking, and the way this is beginning to manifest, with AI leading to mass firings in some sectors and surely more in the future, I think we're going to be seeing what they really have in store more and more. So
I read this book, too. And I've heard how you talk about your interpretation of the book, and I think I agree with you that they're veiling their true thoughts. But don't you think that they also are truly concerned? I mean, these people do know that there are dangers with AI and that they have to go about it the right way. And it seems like the book was warning of some of the dangers. Yeah. But I think the way these people work is that they warn of some dangers, and some of the dangers are real, but their solutions to those dangers are what they wanted the whole time, and they benefit from them, right? Yeah. So whatever they're proposing, they say, we should be afraid of this, and this is the only reasonable solution. But it's not the only reasonable solution, you know? Right. Yeah. I mean, Eric Schmidt, after he wrote this, is also going around saying things like, we have to link people's social media accounts to their IDs so that we can report them to law enforcement when they post disinformation and stuff. I can't even believe that people still have this idea of, if I'm not doing anything wrong, then I don't care. I really can't understand why people think that, but they do. It's
very naive, honestly, because consider how AI is being used right now, for example in facial recognition. There's been a series of issues in the UK with them trying to roll out real-time facial recognition technology, because the accuracy is super low. Yeah. But they're still not going back and fixing it. They're not changing providers. If you were the state and you were meaningfully trying to make an AI facial recognition system that works, you would go and find another provider with higher accuracy. Yeah, something like that. Yeah. And they've shown no interest in doing that. So, to understand what they're essentially trying to do, it sort of reminds me of the movie Brazil. I think you said you hadn't seen it, but it's this 1985 sci-fi movie; one of the guys from Monty Python made it, but it's not a comedy at all. It's set in this big, dystopian bureaucracy, sort of like a lot of the other famous British dystopian works. Basically, they make a mistake: some guy in the Ministry of Information squishes a fly, which falls into the printer as it's printing out an arrest warrant, so one letter changes in a guy's last name. And so they go and arrest, and end up interrogating and murdering, an innocent guy. And all the other people who try to report the wrongful arrest, or try to rectify the situation, or let the government know it made a mistake, end up, over the course of the movie, arrested and tortured and stuff. Basically, the message of the movie, I think, in that sense, is that the state, a government like this, isn't necessarily interested in things being right, because a totalitarian system, just by virtue of the fear that this may happen to you, will keep people in line. It just kind of reminds me of predictive programming in movies, where as long as it's anything less than the horrible things they show us in these movies, people are okay with it.
Well, I mean, it's a way of normalizing it, I guess, of desensitizing people to it to an extent, maybe. But what I was saying about Brazil: there's this French philosopher whose name I'm awful at pronouncing, because I'm really just bad in general at pronouncing French names, no offense to anyone, but it's Foucault or something like that. Foucault, is that how you say it? Oh, yeah. Yeah, that guy. So anyway, the people at Palantir, Peter Thiel and Alex Karp's company, which, if you're familiar with my work, is the privatized version of Total Information Awareness, they love that guy and have pictures of him in their offices. The New York Times did a big profile on them, in like 2019 or 2020, somewhere in there, and they posed under his picture. And that guy basically developed, or expanded on, the idea of the panopticon, which is reflected in the movie Brazil: the idea that if you know you're being watched, especially by something authoritarian, you're more likely to self-regulate into compliance. So it's not about whether it's accurate or not; they don't care. What they care about is that they surveil you, because it means they're watching, and you'll regulate your own behavior. You'll self-censor in all senses, not just in what you post online but in how you act and behave, because you know it's watching you. For that to happen, accuracy doesn't matter; it's about inducing that effect at scale. So if these AI facial recognition or whatever algorithms are put in charge, to these guys it doesn't really matter how accurate they are; none of them are 100% accurate. Yeah. And they're being rolled out to decide major stuff about law enforcement, and governance, and other things, and I think that's something that's definitely not talked about enough. I'm sure part of it is the same corporate grifting of, oh, this is my brother's company, so I'm going to give them the contract even though their AI algorithm is crap compared to the other ones; I'm sure there's a degree of that in there too. But ultimately, AI isn't hyper-efficient at everything; it is at some stuff, but for some of the things it's being sold as a solution for, it's not accurate. And when it's applied to law enforcement settings, which have the potential to decide who lives and who dies, just like military settings, that becomes a really big issue. Well,
and you don't know when it's being accurate. That's the problem. If you could know, if it was obvious when it was hallucinating, that wouldn't be a problem. When it first came out, I didn't want to use it at all, but then something made me look at it in a different way, so I checked it out a little bit and experimented. And some of the mistakes... it just makes stuff up. It really does. It'll make up court cases and studies; it'll give you names of studies that don't exist, numbers for court cases that don't even exist. And I've read all these articles about how somebody will go to court with information they got from AI, and it's not even true. Yeah, insane. Well, going back to media: AI taking over mainstream media journalists' jobs is not a good thing. It's just an even more talented bullshitter, you know? Yeah. Because if you have to verify everything it says, then what's the point of using it? You're supposed to be using it to save time, but you need somebody to come back and check everything it says anyway. Or you just believe everything it says without questioning, because it's sold to you as being superior and more
intelligent. You know, that's how I know the Kissinger-Schmidt AI book is full of shit, because if they were being honest about their warnings and all of this stuff, and not just using the book to leak their veiled plans out to the public, they would definitely have noted that AI has accuracy problems, that it hallucinates; it's a known phenomenon. Instead, they say AI is our ticket into undiscovered worlds, basically, that it sees all these hidden realities we cannot see, and so we should trust superintelligent AI to be our guide to these undiscovered realms or whatever. And no, because of these other documented things that these guys obviously know about, there's no guarantee that any of that is even real if you can't verify and observe it. AI hallucinates and produces output that is completely erroneous, and these guys don't acknowledge that once in the book. They frame it as something we just have to trust, and as superior to us. And that's, I think, what the elite want us to think: that we should put blind faith in the AI. And there are these different groups, too, that want to create a religion around AI, which have come out of Silicon Valley and related fields. Some of them call themselves Dataists, and there's this one guy in Silicon Valley who has tried to make a church of AI, and AI is writing sermons in some churches and stuff. It's getting a little weird. So they definitely are. I mean, that whole narrative, that AI is superintelligent and that its errors aren't really errors at all but realities that are just hidden to us lowly humans: I could not distrust that narrative more. Yeah. It's basically telling us not to interpret our own reality anymore, and saying we should let the AI do that for us, which
is a major theme in that Kissinger-Schmidt book. And they say that will happen specifically to the class that isn't involved with programming and maintaining AI, the underclass. The idea here, what they overtly lay out in this book, is how AI is increasingly making our decisions for us, right? Not just big decisions, necessarily, but also what music we listen to, like the algorithm on YouTube: it's learning our preferences, and then also subtly cultivating our preferences, and all of that stuff. And eventually we won't know how to live without it; that's essentially what they say in the book. They talk a lot about that and, I guess, its broader implications: that without AI summarizing long stuff, we won't read the long stuff, and without AI interpreting this thing or that thing for us, we won't understand it without the AI summary, or whatever it produces, that we've grown accustomed to. And they basically say that this particular class, at a certain point, won't understand AI at all, and won't understand how AI is acting on them; that there will be some anxiety in this large underclass, because they'll know they're being acted upon and watched by something, but they won't understand what it's doing to them. That's essentially what this says. It's very
disturbing. But again, that book cloaks it relatively well: it reads as if they're warning about these things, but they cast them as inevitabilities at the same time. And then, I mean, Eric Schmidt is one of the people building this vision out, through his work with the National Security Commission on AI and his extreme influence on the Biden administration's science policy. I mean, he basically runs it. He's funding salaries of Biden administration people, which is totally illegal; he shouldn't be able to do that. He's also dominating how AI is being implemented in the military and the intelligence community. That's a lot of power for one guy, so he has the power to make almost anything happen when it comes to AI implementation in the US. And as for all the warnings he's been giving, if you look at his actions alongside the book, it becomes very clear what the book is actually saying. And at this time right now, where everybody's talking about it, there are no real policies; they have interim policies, but there are no real policies. So it kind of feels like everybody's trying to get what they want right now, before the regulations get put into place.
Yeah, and AI regulation connects to what you said about how there are all these different generative AIs, not just ChatGPT and whatever. I'm sure when they regulate it, they'll make it so that the little ones, which maybe are a little better in terms of data harvesting or whatever, probably will not be allowed to go forward. I mean, with regulation in these types of spaces, whether it's the coming regulation on crypto or any of these other emerging technologies or things related to them, Congress is essentially acting as kingmaker for the companies in this unregulated space. They get to decide who continues; obviously, some companies are going to be more favored by regulations than others. Generally, how this works is that the companies with the most lobbyists and the most pull are the ones the regulations are written for. The regulations go through, those companies win, and the other companies are essentially boxed out after the regulations are pushed through. A lot of times when this happens in the States... it sort of started, I guess, maybe in the 70s, under Nixon, with agriculture, with this whole idea of get big or get out: if you're a small mom-and-pop company, government regulation no longer favors you. So they tend to favor the big ones, which were always going to be part of it, and they take out the little guys once they regulate. At the WEF, I believe, there was a talk, and I shared it, though I can't remember who was speaking, about the content they train their AIs on. They were talking about how, in the future, they want to make it so that they can serve content from the companies that allow them to use their content to train AI. So they make these deals with all these companies: you let us train on your content, and then we'll feed people to you. Yeah, make it a deal. Yeah. Which sounds horrible. And really boring, too. Well,
there's a lot of that going on. In China, they created something like a stock exchange, but instead of stocks, it's companies' data, and they trade it on an exchange: all these state-owned companies, all of their data, including user data, and then they can use it. Interesting. Yeah. I mean, obviously, a lot of these companies say, oh, well, it's been anonymized, and you don't have to worry about your privacy. But, I mean, yeah, right. Maybe some of them do, but a lot of them, I'm sure, probably don't, or at least don't do it effectively, you know? Yeah. But I mean, they've been saying for years that data is the new oil and all of this stuff. And I think what people don't realize is that it's your data that is the new oil: they're making lots of money off of your data, and you are not making any money. Instead, your money is being hyper-inflated away or trickling up to the billionaire class, while they make more money off of you than ever before.
Wow, depressing.
I'm sorry, I'm like a big walking, talking black pill. I mean, I don't really feel like it's black-pilling in a sense, because I think it's important to be aware of how these guys see this stuff, because otherwise we can't really fight against it, us little people at the bottom. And the way things are going, I think people need to start divesting from some of these, specifically the Big Tech stuff, just because they are one of the clearest actors involved in using our data for bad things, and they have taken increasing control of the military and the government in the US. It's honestly pretty insane. And then you add on this new era they're trying to push through, of AI weapons and all of that, which is all coming from Silicon Valley people. I mean, Eric Schmidt is a big driver of that, and the other big driver is Peter Thiel. These are all big Silicon Valley guys with very deep ties to the worst parts of the US government. And I don't know, I think them being in charge of, or developing, these autonomous AI drones with guns, all of that sounds like an awful idea to me, for sure. Well, war
is always, it seems, the reason for innovation, right? I mean, since the beginning of time, it's been about innovation in killing people. Yeah. Well, and making yourself the one that survives, or gets more, or gets what you want. I mean, that's always the thing that propels innovation. Well, the last few big conflicts, the Gaza conflict right now, which could spread regionally, and the Ukraine conflict, have been huge testbeds, specifically for US military-linked AI companies. So
specifically the Peter Thiel stuff is very big there; he's funding it, you know, in Ukraine, the autonomous drones and all of that. The frontman for a lot of these companies is Palmer Luckey, the guy who made Oculus Rift, the virtual reality stuff that was sold to Facebook, where Peter Thiel was a big investor and basically helped make Facebook the company it is today. Luckey's company is Anduril, which is not just making all these autonomous drones and stuff, but also making surveillance towers that sit on the US-Mexico border and all of this stuff. And it all interfaces with this other Peter Thiel-funded thing, Clearview AI, their facial recognition company, which has scraped all your images from Facebook, including images of people who don't have Facebook accounts but whose pictures other people have taken and uploaded. They're trying to make this engine for a crazy, crazy dystopia. Yeehaw.
You just mentioned the border and, you know, Anduril. I wanted to mention this because it always surprises people. A lot of people don't know this, but, like, 60% of the US population lives in a Constitution-free zone. Yeah. Because when you live within a certain distance of a border, that's considered a Constitution-free zone, and about 60% of the population lives in those zones. That's insane. Yeah,
because if you think of borders, I think they count coastal areas as borders, right? Right. So that's like all of California, all of Florida, two of the most populous states, right? Yep.
And I think it's like 100 miles in, or something like that. Something
like that. Yeah. Well, it's definitely important to consider, given all the stuff going on right now over the border in Texas, specifically in this showdown, as it were, between the states and the federal government over border stuff.
But honestly, I think a decent amount of that is pretty manufactured, because bad things, if people let them, are likely to play out as a consequence of that. And I've been saying for a long time, too, that once stuff in the US gets particularly dicey, or, you know, there's too much overreach and people get too upset, the border stuff, you know, like, specifically the stuff Palmer Luckey has, I mean, it's there and it's active, they're just not using
it for people coming in. Right? So how much of that stuff is there to also keep people from, like, coming out at a future point? You know, it's not just, I don't know. I mean, I think the whole border thing, I mean, it's an election year, too, so there's a lot of stuff going on. And I think, you know, this is going to be the year of unprecedented psyops for sure, and I think a lot of that is going to be very AI-enabled, you know?
Right. And the process of fighting against AI, I mean, you can just look at YouTube and appealing censorship; it's almost impossible to appeal an AI decision. Well,
sure. And then you have on top of it, like, you know, just on the social media stuff alone, I mean, for the past decade at least, the US military has put a ton of money into making, like, social media bot armies, basically. And with generative AI, which now, like ChatGPT as an example, openly has a thing with the military, they can have the most sophisticated bots, like, ever, to influence opinion
and stuff like that. And I just think people don't really realize that when they're interacting on social media. So many people think, like, a lot of likes, and, you know, people that are boosted by the algorithm and all that stuff, is organic, because people like it, just like they think that the pop songs on, like, the radio that get played over and over again are being played over and over again because people want that. It's not because people want
that. It's because that's what they want you to hear and what they want you to see. Right. And they manufacture its popularity, because everyone assumes, oh, it's being played so much, or I'm seeing so much of this, or this has so many likes, that it must be popular. Right? Everyone must be liking it, but it's completely off. I mean, not all the time, but a lot of the time it's manufactured.
Yeah, you have to wonder, like, okay, so why is this person so popular, and I've never even heard of them? You would think you would have heard of some of these people if it was real?
Yeah, well, you know, speaking of Twitter specifically, or X, or whatever it is now, um, you know, there's been this whole thing around Elon, like, people that promote Elon get bigger boosts and, like,
monetization and what have you. And, you know, there's this whole effort to, like, co-opt the quote-unquote dissident right, you know, a lot of the people that were against COVID measures and against, you know, digital IDs, CBDCs, to sort of herd them into being, like, you know, pro-Elon, pro Elon brain chip, and he's a contractor for military and intelligence agencies. You know,
I saw RFK praising Elon the other day, saying, thank you for providing a free speech platform.
Oh, I missed that. But it's not a free speech platform. That's unfortunate. Well, Alex Jones was calling for Texas to secede and elect Elon Musk as its first president. Wow. Yeah. So, um, you know, social media, it's definitely a warzone these days. And it's all about trying to get people to perceive
reality a specific way. And I think what's likely in 2024 is to basically, you know, through AI-enabled means, among others, get this faction of people on the right that don't trust the government at all to feel like their guy won, meaning Trump, and then they'll be, you know, a lot more acquiescent and compliant to the rollout of all this stuff. Because, I mean, just like it was with COVID, you know, yeah, Trump delivered on all of that for the
elites. And, you know, he's sort of regained his anti-establishment cred, I guess, with all these court cases trying to take him off the ballot and whatever. And now some of his biggest, sort of, like, influencers, like Alex Jones, I guess, you know, he's been replatformed and rehabilitated as, like, a pro-Elon guy, and obviously pro-Trump once again, despite all the vaccine stuff, to basically, you know, sell Trump winning as this is what's gonna
save America, yada, yada, yada. Um, I don't know. I mean, people forget.
I can't blame people for thinking that though, because you have to look at what we have now. I mean, obviously, they're not doing anything for people.
Well, exactly. But people forget how the left-right paradigm works, right? So, you know, it's the left hand and the right hand of the same thing. Yes, yeah. And so one side makes a mess of things, and the other side comes in and quote-unquote cleans it up, offering the solution they wanted the whole time, but it's cheered on as being the solution to the problem created by the other hand, you know? Yeah.
All this stuff with the border and a lot of this chaos, it's obvious that the group that's going to come in and fix that is going to be the party that's traditionally been tough on terror and tough on crime, you know? And a lot of those policies are going to be weaponized against regular Americans, right? Make no mistake about it. And there's going to be a push for ID because of the migrant issue, and it's going to be digital ID, but they want people on the
right to cheer it on. Because people on the right have most of the guns, you know, and can probably actually resist stuff to an extent and make it harder for them. So they have to basically sign up that segment of the populace more than anyone else to get, you know, what they want through. And I think a lot of the stuff that's being set up by Biden, I mean, people act like it's incompetence. It's not. It's intentionally being allowed to grow into this insane situation so that they can come
in with very heavy-handed solutions later on. And I think it's likely they'll want to have Trump deliver those solutions instead of Biden. Yeah.
Because people seem to like him. Well,
I mean, the Teflon Don thing, right? I mean, he did Operation Warp Speed, and an insane amount of his base was so against that, and now a lot of his base remembers it as being Biden's mandates and Biden's vaccine, like Trump wasn't involved in it at all. And that's, I mean, that's again how the left-right
paradigm works. You can, like, offload all of the sins of the current administration, and the other guy acts like he's going to be all against it, but they're the same at the end of the day. I mean, people forget that when Trump came to power, he made this team of economic advisors that was, like, Larry Fink from BlackRock and Jamie Dimon and all of
these guys, that were super tied up with Wall Street. And he had warmongers in his administration after campaigning on being against neocons and all of this stuff. And he's one of these guys that's very good at having rhetoric that's drastically different than his actions. And that's the rhetoric that resonates with people. And then they just keep
pushing forward a lot of the same agendas. And, you know, I mean, when COVID happened, one of the first people Trump went to was Larry Fink of BlackRock, and all this money that was printed by the Fed, they got to decide where to allocate it, and all this stuff for COVID relief, you know. I mean, he printed so much money, I mean, he did stuff that was, like, so against what he campaigned on, and people have just totally forgotten about all of this. And they act like, oh, he didn't
start any new wars. But he went after Venezuela, and he tried to start a war with Iran. They murdered Qasem Soleimani, like, one of the top Iranian generals, while he was on a diplomatic mission. Like, they tried to start wars. I don't know, I mean, I just feel like the way people have come to remember it speaks to the power of how media can manipulate people, because that's independent media, supposedly, that's manipulated
Trump's base to feel that way, or at least the sort of, like, dissident-right base, to go back into the Trump fold, you know. And it's precisely because a lot of those guys are seen as being against the mainstream media. And if that's been so effective, imagine how effective it'll be when it's boosted by
all this AI stuff. You know, not good. And I think also there's the strategy they've had for a long time called, like, the flood-the-zone strategy, where they just put out so much messaging in a particular way to manipulate people, and with AI, like, oh my gosh, you can flood the zone like never before, you know? Yeah.
I don't understand. I mean, it seems like it should be really easy to make people understand that Trump is on the side of the bankers. People hate bankers. I don't understand why it's so hard to make the connection there. I mean, if you look at his history in New York, and everything that happened with, you know, all the loans that he got there, and, I mean, his
bankruptcies. Yeah. Yeah, the guy that rescued him from bankruptcy, he made Secretary of Commerce: Wilbur Ross, who had worked for, I think, one of the Rothschild family banks, and rescued Trump from bankruptcy. So, I mean, I don't know. I mean, I think it is obvious, but what people point to by default is, like, oh, but then why are they trying to stop
Trump? And "they" meaning, like, the deep state or whatever. And it's like, if they really wanted to stop Trump, they would have already, you know. And with these overtures, like they're going to stop him, but they're not actually stopping him, and making this media hoopla about it, they're manufacturing trust. And this whole theme of the World Economic Forum right now, where people like Larry Fink are on the board, is how to rebuild trust. Right. That's their
theme. It was their theme this year, and it was their theme, I think, last year and the year before. They're very focused on rebuilding trust. I mean, why do you think they had someone at the WEF like Javier Milei? Why did they give him an audience to come up? And, you know, everyone was like, oh yeah, he got up there and he shit on everybody. Like, I don't think that's what's happening. What I think is happening is that there's this phase shift where they're going to try and
sell the same agendas. The quote-unquote dissident movement in the US is against, let's say, digital IDs and CBDCs as an example, but there's a lot of other policies rolled up in that. They want to sell that, too, but instead of having these talking points about it being, like, ESG, or for climate change, or for whatever, they're retooling that to appeal to
people that are right-leaning, I think. And even, I mean, like, with Larry Fink, who's, like, the point guy for that, he was all about, like, ESG, climate change, all those left-leaning talking points, and now he's moving to the right and being like, well, we should do all of the same stuff, but instead of it being, you know, for the planet, or for the good of society, or inclusivity, and talking points that resonate more on the left, he's saying, oh, well, you can make a lot of money doing this.
And everyone can make a lot of money doing this, you know. And sort of talking about, like, you know, pushing for deregulation and stuff like that. And, I mean, that's exactly what Milei is doing. And, like, Milei came to power in a similar way to Trump, having this sort of extreme campaign rhetoric that resonated with people who were very angry at the political
class. And I mean, it was cathartic with Trump, and it's also cathartic with Milei, to hear them crap all over the power establishment that has, like, been bad to people and that everyone hates, you know. But the problem is, you know, Milei gets into office and, after railing against the political establishment, he puts the political establishment back in power. Not the one he just replaced, which was the left-leaning one; he went back to the administration before that, the
center-right party guy Mauricio Macri, took a bunch of people from his administration and put them back in power. And, like, his finance minister: he can campaign all about being, you know, an anarcho-capitalist and all this stuff, and his top economics guy, his finance minister, is, like, a career, you know, Latin American point man for Deutsche Bank and JP Morgan and stuff. Like, it's not good, you know. And he's super cozy
with the IMF. And everyone in Argentina hates the IMF, because they've been trying to privatize all their state assets and force austerity on them and stuff. And Milei has just, like, done everything the IMF wanted to do to Argentina and more, without them even needing the whole, like, debt slavery angle of it. It's
very nuts. And so I think the fact that Milei is being invited there is just indicative that, you know, they're trying to get the trust of people that are against the policies sought by the WEF. They have certain political influencers they want to roll out there and have people trust those guys, and then those guys will deliver the policy goals, you know, that the WEF and these guys have wanted all along. And I think, honestly, the digital ID thing is going to be sold as, like, a solution to the migration
issue. We have to know who everyone is. Like the long-standing Republican push for voter ID, which I'm not against. Yeah. I mean, people hear me talk about this stuff and immediately, like, think I have to be on one side or another. I'm on neither side, you know. But, um, they'll just, you know, roll out that talking point and be like, oh, well, you know, everyone has to have voter ID.
But it has to be digital, or whatever. Because, I mean, people like Ron DeSantis, who, you know, postures as being against CBDCs, for example, like, digital IDs are already rolling out in Florida. So he's not against that. I mean, maybe he's against CBDCs, but, you know, I've done some reports and interviews recently about how that's, like, just a setup to have, like, instead of a CBDC issued by the central bank, the Fed, they're going to do it, but it's going to be issued by Wall
Street. And it's not going to be called a CBDC, but it's going to be the same thing. So anyway, they'll still have that, you know what I mean? I mean, I feel like I'm kind of rambling about this, but I honestly feel like there's this intentional shift here to try and get this energy behind the dissident right, and, oh yeah, independent media is winning, and we're free, and all of this stuff, and our guy's gonna come back into office and, like, save everything and save the world.
It's, I don't know, people just have to remember what happened last time. And no one does. I
was reading something about, well, like, an online verification, you know, to prevent misinformation or something like that. But then the problems involved with that being, like, well, it could be a target for, you know, hackers. And so then the solution to that they were looking at, and this was from, like, one of these sites that you follow, like, you
know, government press release type sites, and it was saying that they were looking at banks, because they're more secure, you know. So, yeah, you would authenticate yourself through your bank. Yep. Online.
Sounds about right. Yeah. Yep. Well, I mean, because bankers are driving a lot of this stuff forward, like the CBDC and digital ID thing. I mean, if you read stuff like the Sustainable Development Goals, Agenda 2030 of the UN that pretty much every country has signed on to, CBDCs and digital IDs go together. They must, as it's laid out there. And, I mean, most of the stuff at the UN, including all their climate finance and climate action stuff, and a lot of the other
SDG stuff, it's been written by bankers. People assume it's written by, like, UN experts who are somehow, like, you know, neutral and, like, experts in their field, sort of like the idea that it's an FDR brain-trust-style thing. No, it's not that at all. It's literally written by bankers, about how to, like, screw you and your children and all generations to come, and basically turn everybody and everything alive into financial products to be
traded on, like, blockchain exchanges and stuff. I mean, it's totally insane when you actually read into it. And I just, I can't stand it. But yeah, people really think the UN is on their side here. But all that CBDC, digital ID stuff was written by bankers, pretty much. And so, yeah, I mean, a lot of the stuff I've written about before, about the push for, like, a regulated internet, it's banks and
intelligence agencies, pretty much. So, like, the UN climate finance thing is, like, oh, we need to save the planet and do this stuff, and they put Mark Carney and Mike Bloomberg in charge of it, who are, like, top bankers. I mean, just, like, really powerful people who have built their careers by, like, stepping on people's heads, you know, and climbing their way to the top. And you're supposed to believe these people are setting up all these systems because they care about
the planet. It's madness. And what they're really doing is they're just, like, creating carbon markets where they can tokenize and, like, turn everything alive into, like, assets and, you know, financial products. It's insane. So yeah, I mean, these guys don't really care about people at all. But they've spent a lot of money, basically, on propaganda, on
public relations, to convince us they care. But, I mean, obviously their actions, particularly, like, Wall Street bankers', make it really clear, you know, what they're motivated by. And, I mean, a lot of it is more than money. You know, I think in independent media, people that talk about these agendas, you know, there's a lot to be said about how it's really more about control than profit at
this point. But I think, you know, one way of looking at their interest in control is not so much that, like, oh, they love to control people. I mean, I'm sure there are people that are in it for that, you know, and do like that, but I think there are some also that see it as, like, necessary for, I guess, risk management, you know. I think if the public, if the masses, were free, they would
view that as, like, just uncontrollable, unpredictable, and it makes it harder for them to do what they want to do. You know what I mean? And I think a lot of these people's lifestyles are also, like, predicated on them being able to do whatever they want with the masses, because they, like, use us, our labor, they use us in other ways, or they steal from us, in order to, like, maintain their specific lifestyle. And they obviously have, like, no intention of changing that, you know. So I
think it sort of comes down to this whole, like, risk thing. I mean, I'm sure they see it as, like, a risk management thing, at least big parts of the elite do. But I think the problem there, too, is, like, what do they see as risk, and what do
they see as chaos? And I think at the end of the day, just, like, human creativity, or, like, anything that's not completely controlled, like by machines and stuff, for them is going to be viewed as inherently risky, because unless they can, like, influence us to extreme, extreme, extreme degrees, they'll never be able to, like, manage away all the risk of there being, like, billions of independent people on the planet that aren't necessarily going to do what
they want them to do every time. You know, I mean, they put so much money and so much effort into manipulating us, and AI is allowing them to do that at scale, you know, in unprecedented ways. And a lot of the stuff in the Kissinger-Schmidt book is essentially about using AI to, like, suck us into realities
that aren't even necessarily real and stuff. But I think a big part of that is because, you know, AI can give us the impression of this creativity, and of this consciousness, and of this stuff that keeps us engaged and interested, but with a lot less risk for them than if it were, you know, something happening organically and not, like, a synthetic thing like AI, you know?
Yeah. When we were talking about doing this podcast, we were just kind of, you know, talking back and forth, and you said something about how they want predictability. And that was really kind of, like, mind-blowing for me, because I was thinking about it all that time from the angle of, like, I don't understand how they think this is going to work, because they're building on top of lies. You know, they're, like, training on the media, and the media has been telling lies.
So how are they expecting to get truth out of these, you know, AI models that they're building and stuff? And you said that they don't care about truth, they want predictability. And that kind of changed the way I thought about it, because I care about truth, so I just assumed that that's what they would care about. But that's not what they care about. Yeah,
well, they tell you they care about it, you know. And it's just like, you know, how a lot of the AI that they're using is inaccurate, like we were talking about earlier, and they act like it's going to make things more efficient. Like, that's the selling point. But it doesn't actually do that, because a lot of the time it's inaccurate, and they don't care. They just want it to be in a controlled system that they
can manipulate. And then if it has glitches, they'll cover it up, like, you know, like happens in the movie Brazil and stuff. They'll just cover it up and, like, eliminate the people that know about the mistake and just, like, paper over it and keep going. Because it's not about what they say it's about.
It's, like, not about accuracy. It's not about preventing misinformation so the truth can endure, right? It's about creating, essentially manufacturing, realities through AI, and changing how we perceive reality, and having us be dependent on AI to perceive reality. Because if you control how people perceive reality, you can control how they behave, right? And so this is, like, an unprecedented effort to be able to push humans into a system where they don't know how
it operates. And I think a lot of this stuff, like, more than that, they plan for AI that isn't necessarily here yet, like a lot of it with healthcare, and, like, you know, wearables and the Internet of Bodies and the Internet of Things stuff, like, escalating a lot, and, like, AI will go through your genome and all of this stuff. I mean, it's all about just trying to, like, tweak a system so that, like, there's
nothing unpredictable that arises in it. And I think that's why, you know, pretty much, I don't know if it's necessarily every sector AI is being rolled out in, but a lot of them have an extreme focus on, like, predictive analytics and stuff. Like, predicting what people are gonna do before they do it. And it's all about, like, anticipating risks before they
happen, and all of this stuff. And, I mean, ultimately, at the end of the day, it's so, like, they don't have to worry about, like, uprisings from the little people, you know. They can, like,
micromanage it all. And I think, you know, a big part of it, too, when you tie in, like, the whole eugenics potential and, like, healthcare posturing of a lot of this AI stuff, is to basically, you know, tweak humanity so that it can only survive in the system they're building with it. Like, this dependence on AI, I think they don't just want it to be cognitive, like is sort of laid out in that Kissinger-Schmidt
stuff. But I think they want it to, you know, at some point in the future, like, be biological, like, create biological dependencies on this stuff. I think that's part of, like, the transhumanism thing, maybe an aspect of it that's not talked about so much. Just, like, having us not be able to live
without these things. I mean, we're already so dependent on, like, Big Tech and all that for how we conduct our lives, but we're not necessarily, like, dependent on it to live, to, like, actually live, you know? Like, in theory, we can still walk away and, like, unplug and stuff. And I think, well, there are a couple of different, you know, reasons as to why they may not want that. There's, like, the data-as-religion level of it for
some of these people. And there are also, of course, you know, as I've talked about before on stuff, a lot of, like, religious overtones that some people imbue into the whole transhumanist movement. But I think it's also, like, just people wanting to be able to create some sort of system that keeps humans, like, engaged and trapped, and we're producing all the data that they're using to run the economy now, and moving forward, like this. You know,
they call it like the data economy. And there's also talk of like, the DNA economy and how DNA is going to be used to store data and like all of this stuff. I mean, like, the the potential applications of a lot of stuff happening right now. I mean, some of these powerful people, Larry Fink included, want to take all of this stuff to like, an insane level that I think a lot of people haven't, like, fully understand. So like, there's this thing that I've been writing about lately, and
the article is not out yet, but will hopefully be out soon. And that's about the broader, like, tokenization agenda, where Larry Fink talked about the tokenization revolution recently how everything's going to be tokenized and so that it can be traded on on blockchain and they want to do it you know, not it's
not just like things that are financial stuff right now. Like it's not just I mean, they want to be they want to tokenize like every living thing, natural assets, all of that stuff that I've touched on before him on stuff on the whole natural capital natural asset Corporation. and stuff, but also like, there's people tokenizing their careers, their projected future profits, like from their career like trying to tokenize themselves. They're like, like artists trying to tokenize like
their creativity. So we can be like, traded and sold and like make them money and stuff. Oh, essentially, we're all of this stuff. Yeah, it's really crazy. And so essentially, we're all of this stuff is leading. If these people get their way is that like, essentially everything on earth will be truth be able to be like, traded on a blockchain and be a Wall Street financial product. Oh, yeah. Yeah. It Wall Street. But I mean, I mean, it's
not all just Wall Street. But Wall Street is, like, a key part of, you know, the power brokers of the system, because they control the money. Right? Yeah. And they control central banking in the United States, and they have, you know, a lot of influence over things that happen in the world. And, I mean,
I think sometimes people point the finger, you know, I mean, I think what we're meant to do is, you know, point the finger at this politician or that politician. But, I mean, people should also know by now that politicians are funded by people, and their ideas aren't organic a lot of the time, and they're just, like, you know, doing what they're told to do and saying what they're told to say. I mean, you have, like, a politician rolled out, but they have, like, speechwriters and
people that, you know, tell them what to say, and write their speeches, and, like, coach them on debates, and, like, develop their policies. It's, like, not all this one guy. And those people work for think tanks funded by these guys and those guys. You know, people don't look at those power structures a lot of the time. They just want you focused on the influencer, you know, and
we really shouldn't be doing that. Because I think, you know, if there's anything we've learned since the COVID era, it's that there's a lot more going on than maybe people assumed. And there are a lot of power grabs happening
right now. And honestly, a lot of this stuff, you know, going on in the financial space right now is really all about just trying to literally turn everything you can possibly think of into, like, money, or an asset that they can fractionalize, meaning, like, cut into little pieces, and then tokenize, make a token of it, so that they can, like, trade it and rob you in unprecedented ways, you know. And the way this was being pitched before was stuff like, oh, we had to do
this, like, for the planet. Like, we need to tokenize everything with carbon in it, which is, like, all life forms, carbon-based life, right? Tokenize rainforests and stuff, you know, we were doing it for the planet. And then now you have people like Larry Fink, like I mentioned earlier, going through, like, this big shift in rhetoric, where it's not about
that kind of stuff anymore. It's about, oh, well, think about how much money you can make by tokenizing your private property, your land holdings, and then you can use it as collateral on loans. Oh, look, you can't pay back your loan? I guess BlackRock owns, you know, three-fifths of your land now. And then they'll eventually own all of it, you know. And then, because there's, like, this push also to, like, fractionalize it all, like, fractionalized
ownership, that is, like, the whole "you'll own nothing and be happy" thing, you know, everyone's gonna rent everything. And it's being pitched right now as, like, a decentralized, like, right-leaning, anarcho-capitalist thing, between people like Milei and Fink, and all this stuff happening right now. And, I mean, some people might buy into it, thinking, like, they're gonna get rich, or, like, this is, you know, a chance for the little people to claw back
some wealth. But, I mean, come on, guys. They don't want to share their wealth with you. They've stolen wealth from you, and they have no intention of giving it back. And if they're going to, like, offer you a carrot to try and get it back, be very wary about that, you know? Because that's a way to get you roped in. And they know that, like, their existing talking points of ESG, and let's save the planet, let's build a new, better, and more inclusive, diverse society,
They know all of that is not working anymore. And now they have all their best minds thinking about how to get people suckered into the same system under different talking points. And it's happening in real time. And I suppose that this podcast is mostly about AI. And maybe it's been a little more about some other stuff too, but I guess AI is touching, you know, essentially every facet of life right now. And there's just a lot going on with it that I feel like doesn't get talked about a
lot. So if it's cool with you, Star, unless you wanted to say anything else related to that, then maybe we could talk a little bit about some of the AI military and governance stuff going on? Well, we touched on it a little bit earlier, but there's a little more I'd like to say about it. Sure. Cool. So, talking about the AI healthcare eugenics stuff, I think that should be looked at also through the lens of what's
going on with AI in the military. I'm sure you've heard about the IDF's use of AI in Gaza to pick targets, and it's essentially picking tons of civilians, obviously, because of who's getting killed, and the death toll is just completely insane. And the IDF won't say how the AI chooses its targets, what the parameters are, or anything. But essentially, what you're having here is AI developing kill
lists for people. So, I mean, I'm sure you remember, Star, back in the Obama administration, Obama having a kill list was super controversial. And now I guess it's not, because people are making AI-generated kill lists that are bigger and bigger, with no transparency into them at all. And essentially, AI is picking who lives and who dies, and what are the parameters, and what
happens when that gets, you know, scaled? I mean, Palestine, and Ukraine also, is a testbed for a lot of this AI weaponry, and it's going to be weaponized: countries are going to use it against their own populaces and also against populations they're at war with. Once this stuff comes out of the box, it's not just something that is going to be a wartime thing, necessarily. I mean, historically, the IDF and the Israeli defense
industry, they do a lot of, like, testing of products. I mean, I hate to call it that, because it's genocide right now. But from their perspective, this is a way to say that their products are battle-tested, even though they're blowing up kids and stuff. But in terms of marketing, that's how they say it, you know. I mean, once they
do that, they sell this stuff all over the world. And it ends up getting used. I mean, a lot of Israeli spyware, for example, that's framed as helping catch, quote unquote, terrorists gets used by, I don't know, the United Arab Emirates, or Saudi Arabia, against their own people, as an example, you know. And so, I think one of the main things that AI is going to be used for, and why people should be wary about freely
giving your data to it, is that it's going to be increasingly used by governments to decide who gets what. And it's not necessarily who lives and who dies, though that is happening. But it could be, you know, in a future situation, let's say more supply chain shocks to the food system or whatever. And, like, you know, food stamps
have essentially been obliterated in the US at this point. But what happens if they roll out some sort of system? Like, the UN right now basically uses a Worldcoin-style system for food rations, right? Where you have to scan your eyeball, and link to your digital ID and your wallet, and it takes the money out of your wallet automatically when you check out at the cash
register by scanning your eyeball and stuff. The World Food Programme is doing that to millions of people, refugees around the world, every day. And it's very likely that they'll be trying to do that for food assistance and welfare stuff domestically, and all of that. But, you know, if the AI determines, oh, this person's done this or that and shouldn't qualify, I mean, it enables all of this kind of
stuff. And to think, you know, that the people in power right now won't use it for those ends, honestly, I think is pretty naive. And I think ultimately, you know, there are a lot of people in power that are sort of eugenics-minded. And it seems to me that a lot of them want AI trained on all this personal data of everyone, because they want to decide, you know, certain traits they want to preserve in people, and they
want to, you know, favor the success of those people. So those people will get preferential treatment, and then the ones that have undesirable traits will probably
not get that treatment, you know what I mean? I mean, it has the potential for all of that, you know, if we let this advance enough. And the way things are going right now, I mean, a lot of people are pushing back in some ways, but I think also people just don't realize what these people plan for AI. It's basically going to be like the livestock herder, and we are the livestock, and it decides who to cull and who not to cull, who to feed and who not to feed. You
know, and I think, I don't know. I mean, we're just willingly giving it all of this power by feeding it all of the data, and not divesting from these companies that are you know, saying they want to do that. So I guess maybe that's a good
time, then, to circle back to the question I had at the intro: if the people programming and maintaining AI now are the ones poised to set AI regulations, where presumably after they make those regulations only the AI these groups program and maintain will be allowed, can we use AI for positive use cases then? Or is the negative too negative? I mean, I guess it would depend ultimately
on regulation, and if they would allow any sort of open source or alternative AI models to exist?
I don't think that they can. What? How do you think that they can stop them from existing? How can they realistically say that you can't have it? It's already out of the bottle; the genie's already out. I don't think that they can say that, because there are so many language models, so many AI systems that are already out there. And it's only been a year now. What are they going to do to say that people can't use it however they want? I just don't think that
that's possible. Yeah, I
think, I mean, I would normally agree with you. But they're definitely going to try to regulate the internet. And when that happens, it's going to be a completely different internet than it is now. So if the internet as it is now were going to persist, I tend to agree with you that at least
some stuff would slip through the cracks or whatever. But I think, you know, it's similar to how they're probably going to regulate cryptocurrency in the US. They're going to decide which stablecoins are okay, which dollar-pegged stablecoins are okay, which companies can produce a digital dollar, and which ones can't. They'll make the regulations so that they're kingmakers, basically.
And I think they'll probably do that too for artificial intelligence. And, you know, with this regulated internet to come, the whole narrative about it is like, oh, there's hackers, and there's these other people that do bad things online, so to stop illicit activity we have to end online privacy and know what everyone's
doing and saying online. And so I think, in that paradigm, they'll only want to allow AI that tracks and logs everything you're asking it, and then sends it back to the intelligence agencies. But
people care about privacy. I don't think they're just gonna go along with that.
Yeah, I know. But the problem is, the infrastructure of the internet is actually pretty centralized when you think about it. Like, most of the internet basically runs on something like 13 root servers globally. That's pretty centralized. And some of the people that run that, that dominate the domain name system of the internet, like ICANN, for example, they're very tied up in all these efforts to regulate the
internet, and in policies like taking down people's websites for thoughtcrime and stuff. So I think there is going to be a push, and I think people may still be able to use it in ways that they don't want. Yeah, but you don't need to use it online, right? But I think the only people that are going to do that kind of stuff are gonna be people that are, like, technologically sophisticated, and I think most people are not.
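[Editor's note: the "13 servers" figure above refers to the 13 named identities of the DNS root servers, a.root-servers.net through m.root-servers.net, each run by a different operator. In practice each identity is anycast across many physical machines, but the named identities themselves number 13. A quick illustrative sketch:]

```python
# The DNS root zone is served by 13 named root-server identities,
# a.root-servers.net through m.root-servers.net. (Each identity is
# anycast across many physical servers, but the names number 13.)
root_servers = [f"{letter}.root-servers.net" for letter in "abcdefghijklm"]

print(len(root_servers))   # 13
print(root_servers[0])     # a.root-servers.net
print(root_servers[-1])    # m.root-servers.net
```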
I understand what you're saying, but I disagree, because it's not hard to install. You know, I have one installed on my computer right now. It's not hard, you just install it on your computer. It's local. There are tons of local AI models that you can put on your computer. So yeah, but
I guess, like, some of the negative impacts I'm trying to talk about, you know, like the data harvesting and sending it back to them for predictive analytics, and all of the stuff like harvesting data about you and whatever. And if they want to go after, quote unquote, thoughtcrime and all this stuff, which honestly they seem to be gearing up to do, like, how safe is it to use
that? I mean, ideally, you would look for AIs that don't harvest your data that way and send it to these guys, but
they take the data off the internet anyways. I mean, using ChatGPT is no different than using the internet. I don't think there's much of a difference.
But what I'm saying is, the internet is going to be regulated, and then the internet is not going to be safe to use, in my opinion. And however AI is going to be in that paradigm, I think it's also going to be fundamentally very unsafe. Yeah,
I don't agree with you there. I understand your side, for sure, that you're giving your data to the AI, but I think we're already giving it to them; they're already taking it regardless of whether we give it to them or not. And I think there are a lot of things about it that are really powerful. You know, it's a tool like everything else. People don't like people who use Bitcoin; people don't like people who use all kinds of technologies, right? But I think the thing is,
you have to know. I mean, this could just be me being idealistic, or wanting to be able to use it, so I'm justifying it. And I'm not even really using it very much. I mean, I'm exploring it to see, you know, kind of the things that it can do and stuff. But I don't really think that not using it is really that impactful. I think that you can get something out of it, you know, instead of deciding that you're gonna not use it. Yeah,
but I, you know, I feel like I've gone over a decent amount of the negative impacts of it on people cognitively. Well, I mean, I guess I could have said more on that. But, you know, in terms of a dual-use thing, that was the whole reason for going back to Palantir, right? So the name Palantir derives from The Lord of the Rings, and it's an object in The Lord of the
Rings that is neither good nor bad. It's a powerful tool, and who holds it, right, determines whether it's good or bad. So I think AI is much the same. And I think, you know, once they regulate AI, they'll regulate for the purpose of having AI be as firmly as possible in the hands of the bad people, I guess, is what I'm trying to say.
But AI is such a broad term. I mean, what are you talking about? Because it seems like people have just started calling it AI in the last year, since ChatGPT. So, like, generative models or whatever? Is that what we're talking about? Because AI,
I'm not talking about generative AI specifically, because that, like, generates text or generates images. I mean, it's very different than some of the other AIs we've been talking about, in terms of, like, military targeting, or facial recognition, you know, and some of these other ones. And we didn't really get at all into, like, singularity, artificial general intelligence stuff.
Right. Yeah. But I mean, obviously, there are different AIs, but I think ultimately, whatever regulatory framework is passed in the coming years is going to be focused on preventing AI that isn't under their control from being widely adopted. And what does that mean for the utility of AI to the masses? I mean, I think maybe now people can get stuff out of it. But I think people also have to be wary of
the risks, and that ultimately AI is, like, risk management from the elites, in the sense of keeping you from doing things they don't want, or, you know, bucking against the system they're trying to create. And, like, you know, it's a novel tool, it's a powerful tool, it definitely has positive use cases. But can we make use of those, given who's programming and maintaining and dominating the space right now? So
what are people using AI for right now? What people are excited about is using it to, you know, clean up their text, make pictures, write stuff, whatever. I mean, there are all kinds of computer programs that can help you clean up your text. It's kind of like a one-size-fits-all type solution. It's something that can do everything, instead of having to go to all
these different apps and kind of doing it yourself. It's kind of like just a better version of all of those things in one. Yeah,
I mean, I get that. I think what I'm worried about is people getting lulled into a spot where they can't work without it. Because, I mean, obviously, it's still novel, right? But think about, like, three years from now, and everyone is
writing with ChatGPT, and kids in school, instead of writing essays, are ChatGPT-ing them all, and they never actually learn how to write. And, like, what kind of impact does that have down the line, especially when these bigger thinkers are saying this is what is going to happen, and tacitly saying this is what we want to happen to the underclass? You know,
do you think that people said the same thing about computers? Yeah, I
mean, I'm sure they did, and like television and all of that stuff, right? And I mean, I don't necessarily think they were wrong about a lot of the risks, but the problem is people were never really, I think, at a wide level, made aware of the risks. And it ended up having those negative
consequences once the novelty sort of wore off. So I guess what I'm saying is people have to be aware of, I guess, where they want to take this, and, like, sure, it's fine to use for now, but just be aware of where they want this to go, and make sure you have red lines that you won't cross about this stuff. And about what happens when the regulatory hammer comes in, and they try to end online privacy entirely. Because, I mean, like you said, now, you know,
people care about their privacy. But, I mean, I've done a lot of work on this over the past few years, and there's definitely going to be some sort of event where online privacy is the enemy, and the only way to stop these cyberattacks or whatever they are is to eliminate privacy online. Like, we have to de-mask everyone,
or unmask everyone, and we have to know who everyone is. And you already have people pushing for this: Jordan Peterson has been pushing for it, Nikki Haley, and a bunch of people on the right-leaning side. And then also, you know, on the left,
there's pushes for it, too. I mean, it's a pretty talked-about thing. Even Elon Musk, before he bought Twitter, was talking about "verify all humans" and all of this stuff. I think that is a red line people should definitely have and not cross: when they start linking your government-issued ID to your online activity. If you want to know why I think that, please refer back to all my reporting
on the war on domestic terror and the infrastructure for that. Because, honestly, it's targeting people that, I mean, would be viewed as traditional Americans, the domestic terror stuff, but also anyone who's against the state or state policies, or anti-war. I mean, probably people that listen to this podcast. Environmental, yeah,
environmentalists are on there, too. Yeah. I mean, people assume, you know, people on the left think the domestic terrorist stuff is all for right-wing people who were at January 6, and blah, blah, blah. And people on the right will think it's for, I don't know, Hamas supporters or whatever. I don't know what the rhetoric is at this point, but I'm sure it's dumb. But ultimately, it's about anyone that threatens, you
know, or isn't willing to comply about certain things. So don't make it easy for them. Because here's the thing about the ID stuff on the internet: they already know what you say online, and what sites you visit, and all of that. You linking your ID to that isn't going to give them greater visibility, necessarily, beyond what they already have. The difference is, once they can link your ID to that, they can legally go after you. Because the way they're spying on
everyone is technically illegal and unconstitutional, so they can't necessarily prosecute you on stuff they obtained illegally, you know? Right. And so they can, if they can tie your ID legally to it,
if they say there's a law that to use the internet, you have to be using it with your ID. Because right now, you could say, somebody else used my computer.
Yeah. There's more of a gray area now. And also, with the illegal wiretapping of communications and all of that, they can maybe use that to get warrants in these FISA courts and stuff, but they can't go after most people with that, you know. And it's not like they want to put
everyone in jail. But, you know, as an example: under the Trump administration, they almost created this agency called HARPA, which Biden actually ended up making, but he changed it to ARPA-H. It's, like, a health DARPA, is the idea of it. And it's the same people that were trying to push it in the Trump administration, too. And the first program they wanted to put out, which was promoted by Jared Kushner and Ivanka Trump, was called SAFEHOME. I've written about it before. It's an acronym
for something. And basically, that program was about using AI to go through social media posts and identify early neuropsychiatric warning signs of violence, all of this being under the guise of stopping mass shootings before they happen in the US. It wasn't just like, oh, okay, it gets flagged and it sends people to prison. It was like, send them to a court-ordered psychologist and stuff, and, like, medicate them, or
put them under house arrest. There was a whole spectrum of stuff they could do to someone who gets flagged by this thing, right? And the best way to not be flagged is to not be on it at all, because, again, AI is really inaccurate. It can be in certain situations, and misunderstand certain things, and not be able to parse certain things. It's probably not great at detecting sarcasm, for example. And if
these programs come to fruition, it's not going to be very good, you know, because people that don't deserve to be caught up in this mess are going to be caught up in this mess, basically. And it was pitched during Trump, it almost happened, Biden created the agency and a lot of the other infrastructure for domestic terror, but I'm sure that kind of program is going to be here soon, whether it's Biden or
Trump. I mean, I don't think it really matters. During the Trump administration, they legalized pre-crime, which is something that hardly anyone knows about. William Barr created a pre-crime program, that's still Department of Justice policy, called DEEP. You know, they've arrested people and put them in prison for social media posts and stuff. That could escalate. I
feel like when you say, though, "not be on it at all," I don't see a difference between what you're saying about AI, or whatever you mean by saying AI, and the internet.
Yeah, I mean, I see what you're saying. But I guess what I mean is, like, social media. Like, once you add the ID thing, or once these programs get rolled out where they're trying to hunt for domestic terrorists, the easiest way to make it hard for them is to just not engage with that system. It's weaponizing a system that used to be good against people, you know, in a way that's
unconstitutional and completely insane, frankly. And maybe, you know, it was great before to use social media to reach people, and for certain things. It's obviously had some negative consequences, social media, particularly on young people and stuff. Yeah. You know, I'm not saying, like, illegalize social media, but these people are trying to twist and use all of these things that we've gotten used to, or
dependent on for various things, and then, you know, under the Palantir model, this dual-use neutral tool thing, they're trying to turn it to the dark side, right? If you get what I'm saying. So I guess what I'm saying is not, like, necessarily be a Luddite, but once these regulations and laws and programs come in, you should not engage, or you should engage with something completely parallel that does not interact
with that system. Or, I mean, the internet is just a bunch of shared servers. There's nothing really stopping people from making some sort of parallel server system where you can still do some of this stuff, and some of this stuff can get out, you know what I mean? My problem is about the centralization, and how literally the worst people in the world are trying to use AI for particular ends, and we definitely can't use their AI going forward. We have to find a
way to stop that. And in that sense, it's also true of the internet. Yeah. Because they're, you know, doing similar things to both, I think. Yeah. Yeah,
I think that, you know, with the local AI thing, there are all different kinds of them. And I think right now everybody's just really excited. I read somebody say something like, it's the fact that the dog is talking, you know? Like, they're not excited so much about the thing itself, they're excited about, just, wow, look
what I can do. Yeah. But yeah, we can take away the ability that they have to control our lives by cultivating what we want, and by, you know, following websites through RSS instead of going to Twitter to get your news. I mean, there are lots of things that we can do. But I also don't think they're going to make it so that you can't use a language model on your computer. I don't think that's going to happen. What if
they're like, you have to get digital ID to use it. Do you think that's likely?
I don't think that that's possible. I think as long as you can buy a computer, you can install whatever you want on it. And there's no way that they're going to be able to stop you from installing that as a program, unless they make the entire thing illegal. Like, are they going to make large language models illegal? No.
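[Editor's note: on the point that a language model is ultimately just a program you can run on your own machine, here is a toy sketch in Python, a tiny Markov-chain text generator. It is nothing like a real large language model (real local models are typically run through tools like llama.cpp), but it illustrates the underlying point being made: text generation can happen entirely offline, with no account, no network, and no data leaving your computer.]

```python
import random

# Toy illustration only: a tiny Markov-chain "language model" in pure
# Python. Real local models are vastly more capable, but like this toy,
# they run entirely on your own machine once the model file is local.
corpus = "the medium is the message and the message is the medium".split()

# Build a table mapping each word to the words observed to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Generate up to `length` words, starting from `start`."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    words = [start]
    for _ in range(length - 1):
        nxt = follows.get(words[-1])
        if not nxt:  # dead end: no observed follower
            break
        words.append(rng.choice(nxt))
    return " ".join(words)

print(generate("the", 6))
```

Everything here, the "training" and the generation, happens locally with the standard library; there is no service on the other end logging the prompts.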
But if your access to it is, like, an account-based thing, I mean, I think the big ones could definitely require that.
Most of them do, a lot of them; you have to pay money to even use them. But they're not all owned by the big corporations. Well, for
now. I mean, OpenAI and ChatGPT is basically Microsoft. And I'm sure a lot of the other ones will get swallowed up.
Yeah, I mean, who knows what's going to happen in the future. But there are plenty of people interested in it, enough to make sure that there's stuff that isn't corporately controlled. I just don't see that it's going to end up as something that we can't use to our benefit. Yeah,
I'm sure some stuff will slip through the cracks. I guess what I'm saying is that they're going to make it so that you have to be really technologically sophisticated in order to do that.
I also don't agree with that, because it's easy to install a program on your computer. And there are so many people online trying to help people divest from this sort of stuff, you know. All you have to do is know how to get on Reddit. But to what you say, well, maybe this information is gonna be harder to get, if,
as you say, the internet's totally gonna change. Maybe people aren't even going to be able to get on the internet unless they're willing to give up their ID, you know, and so you have to decide. Yeah. So, you know, you brought up awareness, and that's kind of what I wanted to talk about, too, with that book that I was reading. So I don't know if you want to talk about that now or later or what? Sure, go for it. Okay, well, I heard about this book that was written
in 1964. It's called Understanding Media: The Extensions of Man, by Marshall McLuhan. This guy is considered, like, the father of modern media studies. After this book came out, some ad executives kind of swooped him up and put him on a tour of interviews, and he was on a lot of TV shows and stuff. He kind of reminded me of, like, a Bernays or something, somebody whose ideas they were really embracing, and kind of everybody was
talking about him. And so he is the person who coined the phrase "the medium is the message." And so he kind of thinks that we shouldn't necessarily focus on what the content of the medium is, but on what the actual medium is, and study that when you're looking at its effect on people. So what is a medium? He says it's an extension of ourselves, or any new
technology. So, like, the wheel is a medium that extends our senses. He defines it as something that extends our abilities past ourselves. And so a light bulb is a medium; a fork is a medium, because it is an extension of our fingers; the light bulb is an extension of our eyes. Anything that allows us to sense further than ourselves. And so he talks about how we used to be in the mechanical age, and that would be, like, the wheel and stuff like that. But we're now in the
electric age. So when he talks about media being an extension of ourselves, it's basically like, it's an extension of our central nervous system. Because it allows us to know more about things that we can't see right in front of us. And so now that we're in the electric age, we can, everything is instantaneous, you know, we can access all of the information.
Right away, whatever we want. There's a quote from the book: "what we have to consider is the psychic and social consequences of the designs or patterns as they amplify or accelerate existing processes." So when you're looking at a medium, the message is the ways that it changes society. Like, that's what I think we need to do: we need to look at AI and kind of examine it in a way where we can understand not necessarily the content that it has, but how it's changing us.
And, you know, we talked about that a little bit, but I think when you think about it like that, it allows you to understand it in a different way. Instead of focusing on what it's doing for you, we can think about what it's doing to all of us as a society. And once we understand that, then we can maybe decide if we want to do that, or decide if we don't want to do that, you know, that type of stuff. So
if you look at AI as the newest iteration of this extension of the central nervous system, then I guess people coming in to manipulate AI, instead of it being part of the extension of our efforts to sense and understand the world around us, process information, truthful information, and seek that out, it's a way of sort of hijacking the next iteration of that to lead us in a different direction, you
know? Yeah, like instead of leading us to finding out more about our reality and understanding the world, to lead us to sort of a closed-off system, herding us into that, and sort of trapping our central nervous systems there.
Totally, that's sort of how I see it. So I guess then, in a sense, what we have is a decision: how do we avert AI being used in that diversionary way by the powers that be, and keep it as a thing that helps us to expand our access to information? And I think ultimately, it comes down to who's making the rules and dominating the AI industry, and how can we decentralize that and prevent this extreme centralized control over all of it?
I guess I don't really think the control of AI as a medium is going to be able to be centralized.
Well, I hope not. But I think, again, it comes down to what people are going to do to prevent that. And what I see right now is that the people that are trying to prevent that are much more technologically sophisticated than everyone else that's using it, right?
And then it's easier to just go along with the way things are than to try to be conscious about things. Yeah, yeah, totally. Totally, I agree with you. I do agree with you. I mean, I know it sounds like,
like you don't agree? No, no, but I think it's good, because it just helps me explain it better.
No, I love it, because this is the first time I've heard you talk about the book. I've told you a lot about the book, but I haven't heard your response to it. Something else interesting that he talks about in the book is how all media shapes our identity, and that every new medium contains content
from the previous medium. So, like, books contain content from the previous medium, which was handwritten text, short manuscripts and stuff, and then the printing press, and then radio and TV. So, like, TV contains radio, and plays, and stuff like that. And so it all kind of shows us the past; the new media shows us the past. And we live in this idea of what the past was, because the new thing is showing us the past. It's kind of interesting, and, like, yeah, it
shapes our identity. I found it interesting that artists and writers and stuff like that are the ones that feel threatened right now by AI, as they should be, because their jobs, you know, and their ability to make money, are going away. But it's just interesting, because AI is actually a threat to their identity, as artists, as creators, you know?
Sure. Yeah. Kind of disturbing to think about it that way. But yeah, yeah. No, but I feel like I feel like I've heard that somewhere before. And I mean, it does make sense. But something I've thought about before about like identity. And I've said this, and I think some interviews a maybe a few years ago, um, you know, in terms of like, the control of information, like why it's so important to the elites, you
know, is because it like shapes our identities. So like, if our identities are shaped by like, who we think we are, like, where we come from, you know, it's all about, like, our history, and also like, you know, human history, the history of our families of our communities, our societies. And so like if these people control how history is not necessarily just written, but how it's like, remembered, like they have control over memory, then they can control our identities, I think, or how
people perceive them. And so I think that's why they want such, like, extreme control over the flow of information right now. And I think they're definitely trying to use AI for those ends, which, again, is why I think it's very important that people have physical books. And if you can't have physical books, have offline copies of books. And, like, remember to read, you know? Yeah, it's good for you, and it's good for your
brain as you age and all of that stuff. And, you know, it's sort of getting phased out at the societal level, it seems like, and I think we should definitely resist that. Because, you know, if they centralize control, automation, and all of this, they'll invariably control all the historical accounts of how we got here, all of that, and, you know, who the winners are, and, you know, all of that. So, you know, I think we can't
learn from the past if they control the story of the past. Yeah.
Or if the past we're being told about never even happened. And I think a lot of, you know, my work, specifically the history, and also, like, my book and stuff, you know, we're sort of trying to find, you know, what really happened and how we really got here, and just trying to answer questions like, how did Epstein happen, you know? I mean, it's obviously a lot more than that, but that's sort of how I got to answering
those questions. And there's a lot of history that's, like, intentionally hidden from us by these people, you know, in historical texts, textbooks, and whatever. These people give very specific narratives that oftentimes are not accurate, you know, and that's used to, like, shape identity. So like, you know, like US public
school, American history classes. Pretty much every textbook I ever, like, encountered in grade school was like, the US government is like your father, and it's been on this steady stream of progress, from the revolution to now. And it's so great and protects freedom and does all this great
stuff. And then you find out the real stuff, and you're like, what?! You know, I mean, I'm pretty sure most people listening to this podcast have gone through that to some degree. But what happens when those alternatives to finding out what's going on aren't readily available anymore? And how will that impact how people view and, you know, feel
about themselves? And I think, you know, that's part of it. But I think also, in terms of our understanding of human history, I mean, there's so little we know about, like, the distant past and all of that. And one of the reasons for this, allegedly, you know, is the whole burning of the Library of Alexandria and all of that. But with the internet, you know, if these guys take down the internet and try and relaunch it, you know, they
could do something like that again, you know, like a digital version of that. Yeah, because so many people have stored knowledge, and books that aren't in print anymore, and other things, purely online. And so again, that's why I really like to tell people to try and make some sort of offline or physical library, you know. Because they're definitely interested in purging, you know, historical accounts that
don't favor how they want people to perceive things. And again, this is all about managing perception with the intention of, you know, controlling behavior. And a lot of that is memory, and a lot of that is identity. And ultimately, you know, it comes down to information and data. So, anyway, that's my soapbox about identity and information.
Okay, so I have two more quotes from the book. Well, okay, so I want to read this first quote, and then I'll read another one. "If we understand the revolutionary transformations caused by new media, we can anticipate and control them. But if we continue in our self-induced subliminal trance, we will be their slaves." So this is why we're having this discussion. This is why we need to talk about AI, what it is,
what it's doing to us, and everything like that, right? So the self-induced subliminal trance, he talks about this a lot in this book. He calls it Narcissus narcosis. So here's another quote from the book. "The hybrid or the meeting of two media is a moment of truth and revelation from which
new form is born. For the parallel between two media holds us on the frontiers between forms that snap us out of the Narcissus narcosis. The moment of the meeting of media is a moment of freedom and release from the ordinary trance and numbness imposed by them on our senses." So that's where we're at right now, you know? We're not numb to it yet, and we're, like, right there at that spot where we can examine it and decide what we're going to do. You know, so we need to be aware of it,
that's why we're having this discussion. Again, we need to be aware of how AI changes the way we interact with people, information, and our surroundings. You know, we can't remain ignorant of the environment that we're living in. Yeah,
I mean, AI is rolled out as a novel tool, and this has happened with other stuff before. And if people aren't wary, aren't, like, paying attention to it, it quickly moves from being a tool to empower them to something else. And there's always this phase at the beginning where they want people to, like, onboard to a particular technology, where it is open and useful like that, and then it starts to change,
you know, if people aren't wary about it. So if you're using AI, make sure you're using it in such a way that it's a tool that is helping you, not one that is diminishing you, and not one that is endangering you in the event that the war on domestic terror, predictive policing, dissident tracking, whatever, you know, all that stuff, when that gets rolled out, obviously you should
reconsider what you're doing. But even before then, you know, you have to be aware of how you're using it and think that stuff through. Because if you just keep using it because, oh, it's convenient, convenient, convenient, that has been used historically to herd people in a particular direction that isn't good for them or for human society. So I definitely think it ultimately leads to dumbing us down, right? Which, you know, there has been a progressive dumbing down of society,
definitely in the West, but obviously it's happened elsewhere too. And, you know, I would argue that there's a lot of intentionality behind that. And it hasn't all been technology's fault, but it's definitely been an engineered thing, and technology has been used in part to facilitate that engineering. So if you're going to engage with this kind of technology, you have to be aware that it can be used to do that to you. And there's an intention to have that happen to people
who become dependent on it. So don't become dependent on it. You have to keep your relationship with it so it's a tool serving you, and, you know, so that one day the tables don't turn and you're serving it, you know? Totally, but
it's dangerous, you know, because you can, like, even going into it, knowing that knowing everything that you said, it's a narcosis, you know, like, you can totally become immersed and unaware, you know, and all of a sudden, the conveniences of it are much too great to even consider anything else. Yeah,
I mean, if you're one of the people that feels like you can't handle that kind of situation, maybe you shouldn't use it at all. No, yeah, totally. And that's my view about it. But I mean, you know, you just got to be wary that it's, like, a tool that can be used against you if you're not careful. And, I mean, it's subtle, you know? Yeah, I mean, I go back to the social media stuff, and how this was sold to people as, oh, you can stay connected with everybody,
and it's gonna make things so much better socially. And there was no talk about all the data you're giving away to it. And it turned out to be, like, a huge data harvesting thing that actually made people more depressed, made people feel more disconnected, right? Yeah. It doesn't necessarily always have those consequences long term. And maybe if social media hadn't been so, I don't know, co-opted from the very
beginning it maybe it would have been different. I mean, we know also, like Facebook, for example, experimented with making people more depressed by populating their newsfeeds a certain way. So maybe they've intentionally produced that outcome of making people feel more disconnected and more, you know, more depressed when they use it and all of that, like, maybe it's intentional, and not necessarily social media that does that to people, I don't know. But it definitely has,
like changed. I mean, people engage differently with the discourse on social media than they would like in the real world, you know, and it's definitely had a lot of consequences that I think, you know, users of it don't necessarily think about, and then over the years that you're using it, you get acclimated, and that gets normalized, but
it's, it wasn't normal, you know. And so I think people need to be worried about that kind of stuff with AI, because that's how they did it, you know, before, like, oh, everyone's using it, look how cool it is, look what I can do. And then you're on it, you give all your data away, and then you end up feeling diminished, you know?
Yeah. In this book, he talks about numbing our central nervous system. We are so stimulated with all of this stuff that we can get, you know, that we become numb to it, you know. And, you know, it causes a lot of anxiety and stuff like that. I think I'm just gonna say there is a lot in
this book that I think is worth considering. And I know we talked about how people should read books, and I agree, I love reading books. You learn so much from them, you learn so much more by reading and understanding an idea and thinking about it than you do from hearing somebody talk about it. But not everybody can read, you know, not everybody has a lot of time to read everything that gets recommended to them or whatever. So I think I want to put some of this stuff in the
show notes. So, you know, people can, at least if they're not going to take the time to read the book, maybe read some of the ideas in the book. Because I think having things to consider is going to at least help a little bit in deciding where you're going to place AI in your life, you know? Yeah,
I mean, I think it's really important to, you know, think about this stuff. Are you going to use it? How are you going to use it in such a way that it doesn't negatively impact you? Think about what your red lines are and pay attention so that you don't cross them. Are you avoiding crossing them?
I think that's really the only way we can interact with this stuff. Because to do it without being cognizant of the risks, or aware of some of the agendas that AI is meant to serve, and how, you know, the people dominating the field right now are seeking to use it mainly against us... We have to be aware of that and ensure that it's not going to be used against us that way, at least as much as we can control it, obviously. So yeah, I guess
that's probably a good place to leave our discussion. So thanks a lot, Star, for being here. And thanks to everyone who listens to this podcast, which is, of course, a production of Star and myself. Even if Star isn't, you know, necessarily part of the conversation, she's always part of the podcast. And with that being said, Star is also the person who does the show notes for every episode. So I'm not sure if everyone listening takes time to look at the show notes, but
you definitely should. And I'm sure no one is better at telling you why than Star herself.
Thank you. I just wanted to, you know, remind everybody. So when you, you know, get your podcasts in the podcast app, there's the description, and I always put the show notes page in there. So you should go there and check it out, because there's all kinds of stuff on there, anything that Whitney's talked about, you know. There's some extra information sometimes, like she's done an interview or something like that that's related to what she talks about, you know, all kinds of stuff.
Everybody knows what show notes are, right? But sometimes there's, like, playlists of clips from the podcast and all kinds of stuff like that. So definitely, please check out the show notes page. And I wanted to say, while you're there, check out the website too. You know, like, a lot of people don't spend a lot of time exploring. You might go to the website when you see a link to an article or something like that. But there's
a lot of stuff on there. Like, people are always emailing asking, how can I find out where Whitney's new interviews are, and stuff like that. There's a press and media page, we put all of her interviews on there, you find all that. There's, like, an awesome search bar on the website. So if you're interested in something, like, you know, CBDCs, just type that in the search bar, and if Whitney's done an interview on it, if she's done an article on it, if it comes up anywhere, it shows
up. It's great, really awesome. And then also check out the FAQ on the website, the frequently asked questions page, and it's got all kinds of info, including stuff like how to follow a website with RSS, which is the technology that's used in podcasts. You know, when podcasts publish a new episode and it just shows up in your app? That's RSS. So you can do that for websites too. And then whenever a website publishes something, it'll show up
in your app. It's great. And then along those lines, I wanted to mention that you should listen to your podcasts on a Podcasting 2.0 app, which, you know, has more advanced features. It's got transcripts and chapters, and you can make clips, you can leave comments, you can send lightning payments to the podcasts that you listen to, all kinds of really cool features. You should totally be listening on something that supports that sort of stuff. So I really love
this app called Podverse. You can use the app, or you can use it on your computer at podverse.fm. Super awesome. And yeah, that's about it. And thank you to everybody who supports Whitney, because that also supports me.
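For readers curious what the RSS mechanism Star describes actually looks like under the hood, here's a minimal sketch. This is an illustration, not any app's actual code: the feed XML below is a made-up example, and a real reader would download the XML from the site's feed URL rather than hard-coding it.

```python
# A minimal sketch of what an RSS reader does: parse the feed XML
# a site publishes and list its entries. RSS 2.0 feeds are just XML,
# so Python's standard library can read them.
# (FEED_XML is a hypothetical example feed, newest item first, as is
# conventional; a real app would fetch this from a feed URL.)
import xml.etree.ElementTree as ET

FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item><title>Episode 2</title><pubDate>Tue, 13 Feb 2024 12:00:00 GMT</pubDate></item>
    <item><title>Episode 1</title><pubDate>Tue, 30 Jan 2024 12:00:00 GMT</pubDate></item>
  </channel>
</rss>"""

def latest_titles(feed_xml: str) -> list[str]:
    """Return item titles in the order the feed lists them (newest first)."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

if __name__ == "__main__":
    for title in latest_titles(FEED_XML):
        print(title)
```

An app "following" a feed just re-fetches this XML periodically and surfaces any item it hasn't seen before, which is why new episodes appear without you doing anything.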
Yeah, so
thank you. Thank you to the people listening to this. I never get to say that, so thank you.
yeah, thanks
Cut that out if you want. No,
it's fine, I liked it. Well, Star is amazing and has kept Unlimited Hangout alive, and it probably wouldn't have survived last year and some other things if it wasn't for
her. So she definitely deserves your support, for sure. Um, and thanks to everyone who's supported, you know, the podcast and my work up until now. I know I haven't been producing as much content as I used to. I've tried to keep members kind of updated, without getting too personal, about how, you know, things with my son are still going on, and, you know, I had to move, and all sorts of other stuff has been happening, and, you know, there's obviously some other stuff going
on. But I'm hoping to get back to, like, a normal content production schedule pretty soon, hopefully once, you know, the kids are back in school in March, which is a little backwards from the US. Remember, I live in the southern hemisphere, so seasons are backwards, and it's summer vacation here now for us. But thanks to everyone who's been, you know, really supportive through all
this crazy stuff. I'm sure things are only gonna get crazier, not just for me, but for everybody. But I just want to say, you know, thank you all for, you know, allowing me to continue to do this work and to support other people who, you know, support me and the site, like Star. I just can't thank you guys enough. Hopefully you enjoyed this podcast. Hopefully I'll get, you know, more back in the groove with having them out, you know, every two weeks like I did before, you know,
starting now-ish. Yeah, thanks, everyone, for listening. Hopefully you got something out of the conversation today. If you did, please share this podcast around, it would be very appreciated, and we'll catch you on the next episode. Thanks so much.