This movement, you know, that we now call wokeness, it hijacked what I would have called, at the time, bog-standard progressivism. But it turned out what we were dealing with was something that was far more aggressive. You're pouring cultural acid on your company, and the entire thing is devolving into complete chaos. It's also, I think, the case that the new communication technologies have enabled reputational savagery in a way that we haven't seen before.
The single biggest fight is going to be over what are the values of the AIs. That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight. As you know, out of the gate, this is going very poorly. Stop there for just a sec, because we should delve into that. That's a terrible thing. Hello, everybody. So I had the opportunity to talk to Marc Andreessen today, and Marc has been
quite visible on the podcast circuit as of late. And part of the reason for that is that he's part of a swing within the tech community back towards the center, and even more, particularly under the current conditions, toward the novel and emerging players in the Trump administration. Now, Marc is a key tech visionary. He developed Mosaic and Netscape, and they really laid the groundwork for the web as we know it. And Marc has been an
investor in Silicon Valley circles for 20 years and is as plugged into the tech scene as anyone in the world. And the fact that he's decided to speak publicly, for example, about such issues as government-tech collusion, and that he's turned his attention away from the Democrats, which is the traditional party, let's say, of the tech visionaries, who are all characterized by the high openness that tends to make people liberal.
The fact that Marc has pivoted is, what would you say, it may be as important an event as Musk aligning with Trump. And so I wanted to talk to Marc about his vision of the future. He laid out a manifesto a while back called the Techno-Optimist Manifesto, which bears some clear resemblance to the
Alliance for Responsible Citizenship policy platform. That's ARC, which is an enterprise that I'm deeply involved in. And so I wanted to talk to him about the overlap between our visions of the future, and about the twists and turns of the tech world in relationship to its political allegiances and the transformations there that have occurred, and also about
the problem of AI alignment, so to speak: how do we make sure that these hyper-intelligent systems that the techno-utopians are creating don't turn into cataclysmic, apocalyptic, totalitarian monsters? How do we align them with proper human interests? And what are those proper human interests? And how is that determined? And so we talk about all that and a whole lot more.
Join us as we have the opportunity and privilege to speak with Marc Andreessen. So, Marc, I thought I would talk to you today about an overlap in two of our projects, let's say, and we could investigate that. There should be all sorts of ideas that spring off that. So I was reviewing your Techno-Optimist Manifesto, and I have some questions about that and some concerns.
And I wanted to contrast that and compare it with our ARC project in the UK. Because I think we're pulling in the same direction. And I'm curious about... why that is and what that might mean practically. And I also thought that would give us a springboard off which we could leap in relationship to, well, to the ideas you're developing. So there's a lot of that manifesto that...
for whatever it's worth, I agreed with. And I don't regard that as particularly, what would you say, important in and of itself. But I did find the overlap between what you had been... suggesting and the ideas that we've been working on for this Alliance for Responsible Citizenship in the UK, quite striking. And so I'd like to highlight some similarities and then I'd like to push you a bit on...
on some of the issues that I think might need further clarification. That's probably the right way to think about it. So, this ARC group we set up as, what would you say, a visionary alternative to the Malthusian doomsaying of the climate hysterics and the centralized planners.
Because that's just going nowhere. You can see what's happening to Europe. You see what's happening to the UK. Energy prices in the UK are five times as high as they are in the United States. That's obviously not sustainable. The same thing is the case in Germany. Plus, not only are they expensive, they're also unreliable, which is a very bad combination. You add to that the fact, too, that Germany has become increasingly dependent on energy markets that are served by totalitarian dictatorships, essentially, and that also seems like a bad plan. So one of our platforms is that we should be working locally, nationally, and internationally to do everything possible to drive down the cost of energy and to make it as reliable as possible, predicated on the idea that there's really no difference between energy and work, and that if you make energy inexpensive, then poor people don't die, because any increase in energy costs immediately demolishes the poorest subset of the population. And that's
self-evident, as far as I'm concerned. And so that's certainly an overlap with the ethos that you put forward in your manifesto. You predicated your work on a vision of abundance and pointed to, I noticed, for example, that you quoted Marian Tupy, who works with Human Progress and has outlined quite nicely the manner in which, over the last
30 years, especially since the fall of the Berlin Wall, people have been thriving on the economic front, globally speaking, like never before. We've virtually eradicated absolute poverty. We have a good crack at eradicating it completely in the next couple of decades if we don't do anything criminally insane. And so you see a vision of the future where there's...
more than enough for everyone. It's not a zero-sum game. You're not a fan of the Malthusian proposition that there are limited resources and that we're facing, you know, either, what would you say, a future of ecological collapse or economic scarcity, or maybe both. And so the difference, I guess, one of the differences I wanted to delve into is
this: you put a lot of stress on the technological vision. And I think there's something in that that's insufficient. And this is one of the things I wanted to grapple with you about. Because, you know, there's a theme that you see, a literary theme; there are two literary themes that are in conflict here.
They're relevant because they're stories of the psyche and of society in the broadest possible sense. You have the vision of technological abundance and plenty that's a consequence of... the technological and intellectual striving of mankind, but you also have juxtaposed against that the vision of the intellect as a Luciferian force and the possibility of a technology-led dystopia and catastrophe, right? And it seems to hinge on something like how the intellect is conceptualized in the...
in the deepest level of society's narrative framing. So if the intellect is put at the highest place, then it becomes Luciferian and leads to a kind of dystopia. It's like the all-seeing eye of Sauron in The Lord of the Rings cycle. And I see exactly that sort of thing emerging in places like China. And it does seem to me that that technological vision, if it's not encapsulated in the proper underlying narrative, threatens us with
an intellectualized dystopia that's equiprobable with the abundant outcome that you described. Now, one of the things we were doing at ARC is to try to work out what that underlying narrative should be, so that the technological enterprise can be encapsulated within it and remain non-dystopian. I think it's an analog of the alignment problem in AI. You know, you could say, well, how do you get these large language model systems to adopt values that are commensurate with human flourishing?
That's the same problem you have when you're educating kids, by the way. And how do you ensure that the technological enterprise as such is aligned with the underlying principles that you espouse: say, free, distributed markets and human freedom in the classic Western sense? I didn't see that specifically addressed in your manifesto. And so I'm curious about that, with all the technological optimism that you're putting forward, which is something that,
well, why else? Why would you have a vision other than that, when we could make the world an abundant place? But there is this dystopian side that can't be ignored. And, you know, there are 700 million closed-circuit television cameras in China, and they monitor every damn thing their citizens do. And we can slide into that as easily as we did when we copied the Chinese in their response to the so-called pandemic. So I'd like to hear your thoughts about that.
Sure. So first, thanks for having me, and it's great to see you. I'm very influenced on this by Thomas Sowell, who wrote this great book called A Conflict of Visions, in which he talks about how, fundamentally, there are two classes of visions of the future. He calls them the unconstrained vision and the constrained vision. And the unconstrained vision is the sweeping, transformational, discontinuous social change: we're going to make the new man; we're going to make the new society.
We're going to have, you know, Pol Pot in Cambodia. We're going to declare year zero. Everything that came before is irrelevant. It's a new era. Lenin. Basically, every revolutionary wants to completely radically transform everything. And how can you not? Because the current system is unjust and we need to achieve total justice and so forth.
The unconstrained vision, you know, is classically the vision of totalitarians. It sells itself as creating utopia. As you well know, it tends to produce hell. In contrast, he said that the constrained vision is one in which you realize that man is fallen, and that we are imperfect, and that things are always going to be some level of mess, but it can be a slightly better mess than it is today.
We can improve on the margin. Things can get better. People can live better lives. They can take better care of their families. Their countries can get richer. They can have more abundance and progress on the margin. And of course, the unconstrained vision is very compatible with totalitarianism. You know, the Chinese Communist Party for sure has an unconstrained vision, as the Bolsheviks did before them, and the Nazis and other totalitarian movements.
The constrained vision is very consistent, I think, with the long-run Western ideals of liberty and freedom and free markets. One of the things I do try to say in the manifesto is that I'm not a utopian. And I think utopian dreams turn into dystopias. I think that's what you get. I think history is quite clear on that.
And then to your point on technology, I would just map that straight onto that, which is yes, 100% technology can be a tool that revolutionaries can use to try to achieve utopia slash dystopia. And for sure, the Chinese Communist Party is trying to do that. And there are forces, by the way, in the U.S. that also for sure want to do that. But technology is also completely, perfectly compatible with the constrained vision and change on the margin and improvement on the margin, which is where I am.
I think that is 100% a human issue and a social and political issue, not a technological issue, right? Right, right. Yes, exactly. Right. So this is sort of a little bit of the running joke right now in AI alignment. There's this super genius of AI alignment, this guy, Roko, who's famous for this thing called Roko's Basilisk.
So Roko's Basilisk is: you'd better say nice things about the AI now, even though the AI doesn't exist yet, because when it wakes up and sees what you wrote, it's going to judge you, and then find you wanting, right? And so he's sort of this famous guy in that field. And what he actually says now is, basically, it turns out the AI alignment problem is not a problem of aligning the AI. It's a problem of aligning the humans, right?
Right, it's a problem of aligning the humans and how we're going to use the AI. Right, precisely to your point. Yes, right. Right. And that is one of the very big questions. There's another book I'd really recommend on this, directly to your point: Peter Huber wrote this book called Orwell's Revenge.
And, you know, famously in 1984, as you mentioned, there's this concept of the telescreen, which is basically the one-way propaganda broadcast device that goes into everybody's house from the government, top down, and then has cameras in it so the government can observe everything that the citizens do. And that is what happens in these totalitarian societies. They implement systems like that.
In the book Orwell's Revenge, he does this thing where he tweaks the telescreen and makes it two-way instead of one-way. And so the revolutionaries, the sort of resistance force to the totalitarian government, give it the ability to let people upload as well as download.
And so all of a sudden, people can actually express themselves. They can express their views. They can organize. And, of course, then based on that, they can then use that technology to basically rise up against the totalitarian government and achieve a better society.
You know, look, as you mentioned earlier, the ability to do universal two-way communication also lets you create, you know, the sort of mob effect that we were talking about, and this sort of, you know, kind of personal destruction engine. And so there are two sides to that also.
It is the case that you can squint at a lot of this technology one way and see it as an instrument of totalitarian oppression, and you can squint at it another way and see it as an instrument of individual liberation. I think, for sure, how you design the technology matters a lot. But I at least believe the big-picture questions are all the human questions and the social-political questions. And they need to be confronted directly as such.
So these are human questions, ultimately, not technological questions.
Okay, okay, so that's very interesting, because that's exactly what we concluded at ARC. So one of the streams that we've been developing is the Better Story stream,
because it's predicated on the idea, which I think you're alluding to now, that the technological enterprise has to be nested inside a set of propositions that aren't in themselves part and parcel of the technological enterprise. And then the question is, what are they? So let me outline for a minute or two some of the thoughts I've had in that matter, because I think there's something crucial here that's also relevant to the problem of alignment. So, like, you said that the problem...
with regard to AI might be the problem that human beings have, which is that we're not aligned, so to speak. And so why would we expect the AIs to be? And I think that's a perfectly reasonable criticism. I mean, part of the reason that we educate young people so intensely, especially those who'll be in leadership positions, is because we want to solve the alignment problem. That's part of what you do when you socialize young people. Now, the way we've done that for the entire history of the productive West, let's say, is to ground young people who are smart and who are likely to be leaders in something approximating the religious slash humanist slash enlightenment tradition. It's part of that golden thread. Now, part of the problem, I would say, with the large language model systems is that they're hyper-trained on... they're like populists, in a sense. They're hyper-trained on the over-proliferation of nonsense that characterizes the present. And the problem with the present is that time hasn't had a chance to winnow the wheat from the chaff. Now, what we did with young people is we referred them to the classic works of the past, right? That would be the Western canon, whose supremacy has been challenged so successfully by the postmodern nihilists. We said, well, you have to read these great books from the past, and the core of that would be the Bible, and then you'd have all the
poets and dramatists whose works are grounded in the biblical tradition that are like secondary offshoots of that fundamental narrative. That'd be people like Dante and Shakespeare and Goethe and Dostoevsky. And we can imagine that those more core ideas constitute a web of associated ideas that all other ideas would then slot into.
You know, you could make the case technically, I think, that these great works in the past... are mapping the most fundamental relationships between ideas that can possibly be mapped in a manner that is sustainable and productive across the longest possible imaginable span of time.
And that's different from the proliferation of a multiplicity of ideas that characterizes the present. Now, that doesn't mean we know how to weight things. You know, so if you're going to design a large language model, you might want to weight the works of Shakespeare, per word, as 10,000 times as crucial as, you know, what would you say, the archives of the New York Times for the last five years. It's something like that.
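To make that weighting idea concrete for technically minded readers, here is a minimal sketch of what per-source weighting of a training mix could look like. The source names, weights, and function here are purely hypothetical illustrations, not any lab's actual pipeline:

```python
import random

# Hypothetical per-source sampling weights for a training mix.
# The idea from the conversation: time-tested canonical texts are
# weighted far more heavily, per word, than recent ephemera.
CORPUS_WEIGHTS = {
    "western_canon": 10_000.0,    # e.g. Shakespeare, Dante, Dostoevsky
    "recent_news_archive": 1.0,   # e.g. the last five years of newspapers
}

def sample_source(weights: dict) -> str:
    """Pick the corpus to draw the next training document from,
    proportionally to its assigned weight."""
    sources = list(weights)
    return random.choices(sources, weights=[weights[s] for s in sources])[0]

# Over many draws, roughly 99.99% of documents come from the canon bucket.
counts = {s: 0 for s in CORPUS_WEIGHTS}
for _ in range(100_000):
    counts[sample_source(CORPUS_WEIGHTS)] += 1
print(counts)
```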
There's an insistence in the mythological tradition that people have two fundamental poles of orientation. One is heavenward, or towards the depths; you can use either analogy. And that's the orientation towards the divine or the transcendent or the most foundational. And then the other avenue of orientation is social. That'd be the reciprocal relationship that exists between you and I and all the other people that we know. And if you're only weighted by the personal and the social, then you tilt towards the mad mob populism that can characterize societies when they go off kilter. You need another axis of orientation to make things fundamental. Now, I just want to add one more thing to this that's very much worth thinking about.
So the postmodernists discovered, this is partly why we have this culture war, the postmodernists discovered that we see the world through a story. And they're right about that because what they figured out, and they weren't the only ones, but they did figure it out, was that... We don't just see facts, we see weighted facts. And the weighting system, a description of someone's weighting system for facts is a story. That's what a story is, technically.
You know, it's the prioritization of facts that direct your attention. That's what you see portrayed in a characterization on screen. Okay, now. The postmodernists figured out that we see the world through a story, but then they made a dreadful mistake, which was a consequence of their Marxism. They said that the story that we see the world through is one of power.
And that there is no other story than power, and that the dynamic in society is nothing but the competition between different groups or individuals striving for power. And I don't mean competence. I mean the ability to use compulsion and force, right? It's like involuntary submission: I'm more powerful than you if I can make you submit involuntarily. Now, the biblical canon has an alternative proposition nested inside of it, which is that the basis of individual stability and societal stability and productivity is voluntary self-sacrifice, not power. And those two ethoses are 100% opposed, right? You couldn't get two visions that are more disparate than those. Now, the power narrative dominates the university, and it's driving the sorts of pathologies that you described as having flowed out, let's say, into the tech world, and then into the media world, and into the corporate world beyond that. One of the things we're doing at ARC is trying to establish the structure of the underlying narrative, which is a sacrificial narrative, that would properly ground, for example, the technological enterprise so that it wouldn't become dystopian. And you alluded to that when you pointed to the fact that there has to be something outside the technological enterprise to stabilize it. You alluded, for example, to a more fundamental ethos of reciprocity, when you said that one form of combating the proclivity for top-down force in this one-way information pipeline is to make it two-way. Right. Well, you're pointing there to something like, see, reciprocity is a form of repetitive self-sacrifice. Like, if we're taking turns in a conversation,
I have to sacrifice my turn to you and vice versa. Right. And that makes for a balanced dynamic. And so, anyway, one of the problems we're trying to solve with this ARC enterprise is to thoroughly evaluate the structure of that underlying narrative, and we could really use some engineers to help, because the large language models are going to be able to flesh out this domain properly, because they do map meaning in a way that we haven't been able to manage technically before.
So I think the single biggest fight that has ever happened over technology, and there have been many of those fights over the course of especially the last 500 years, is going to be over what the values of the AIs are, to your point. What will the AIs tell you when you ask them anything that involves values, social organization, politics, philosophy, religion?
That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight. And I don't say that because the social media censorship fight hasn't been extremely important, but AI is going to be much more important, because AI is such a powerful technology that I think it's going to be the control layer for everything else.
And so I think the way that you talk to your car and your house, and the way that you organize your ideas, the way you learn, the way your kids learn, the way the health care system works, the way the government works, how government policies are implemented: AI will end up being the front end on all those things. And so the value system in the AIs is going to be maybe the most important set of technological questions we've ever faced.
As you know, out of the gate, this is going very poorly. Yes. Right? Very. Very. And there's this question hanging over the field right now, which you could sort of summarize as, why are the AIs woke? Why do the big lab AIs coming out of the major AI companies, why do they come out with the philosophy of a 21-year-old sociology undergrad at Oberlin College with blue hair who's completely emotionally...
activated. You can see many examples of queries that people have posted online that show that, or you can run your own experiments. They basically have the fullest version of this fundamentalist, emotional, you know, sort of far-progressive, absolutist wokeness coded into them. You said up front that the presumption must be that they're just getting trained on more recent bad data versus older good data.
There is some of that, but I will tell you that there is a bigger issue than that, which is these things are being specifically trained by their owners to be this way. Yeah, yeah. Okay, so let's take that apart because that's very, very important. I've played with Grok a lot and with ChatGPT. I've used these systems extensively, and they're very useful, although they lie all the time. Now, you can see this double effect that you described, which is that there is...
conscious manipulation of the learning process in an ideological direction, which is, I think, absolutely ethically unforgivable. It even violates the spirit of the learning that these systems are predicated on. It's like: we're going to train these systems to analyze the patterns of interconnections within the entire body of ideas in the corpus of human knowledge, and then we're going to take our shallow conscious understanding and paint an overlay on top of that. That is so intellectually arrogant that it's Luciferian in its presumption. It's appalling. But even Grok is pretty damn woke. And I know that it hasn't been messed with at that level of, you know, painting over the rot, let's say. And so
I think we've already described, at least implicitly, why there would be that conscious manipulation. But what's your understanding of the training data problem? And I can talk to you about some AI systems that we've developed that don't seem to have that problem, and why they don't have that problem, because it's crucially important, as you already pointed out, to get this right. And I think that
I actually think that to some degree, psychologists, at least some of them, have figured out how to get this right. Like it's a minority of psychologists and it isn't well known, but... But the alignment problem is something that the deeper psychoanalytic theorists have been working on for about 100 years. And some of them got that because they were trying to align the psyche in a healthy direction.
You know, it's the same bloody problem, fundamentally. And there were people who really made progress in that direction. Now, they aren't the people who had the most influence as academics in the universities, because the universities got captured by, you know, Michel Foucault, who was a power-mad hedonist, for all intents and purposes; extraordinarily brilliant, but corrupt beyond comprehension. He is the most cited academic who ever lived.
And so the whole bloody enterprise, the value enterprise in the universities, got seriously warped by the postmodern Marxists in a way that is having all these cascading ramifications that we described. All right, so back to the training data. What's your understanding of why the wokeness emerges? It's present bias to some degree, but what other contributing factors are there? Yes, I think there's a bunch of biases. So there are three off the top of my head you just get immediately. So one is just recency bias. You know, there's just a lot more present-day material available for training than there is old material, because all the present-day material is already on the internet, right? Number one. And so that's going to be an influence. Number two, you know, who produces
content is people who are high in openness. The creative class that creates the content is self-biased. And then there's the English-language bias, which is that almost all of the trainable data is in English, and a small number of other Western languages for the most part. So there's some bias there. Then, frankly, there's also this selection process, which is that you have to decide what goes in the training data.
And so the sort of humorous version of this is that two potent sources of training data could be Reddit and 4chan. And let's say Reddit is, like, super far left on average and 4chan is super far right. If you look at the training data sets for a lot of these AIs, you'll find they include Reddit, but they don't include 4chan.
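As an aside for technically minded readers, the selection step being described here can be pictured as a one-line curation policy applied before any training happens. Everything in this sketch, source names included, is a hypothetical illustration rather than any lab's actual pipeline:

```python
# Hypothetical dataset-selection step: the bias enters before training,
# simply through which sources are admitted to the corpus.
CANDIDATE_SOURCES = ["reddit", "4chan", "wikipedia", "pre_1923_books"]

# A curation policy written by humans; changing this one set
# changes the tilt of everything trained downstream.
ADMITTED = {"reddit", "wikipedia"}

training_mix = [s for s in CANDIDATE_SOURCES if s in ADMITTED]
print(training_mix)  # ['reddit', 'wikipedia'] -- 4chan never makes it in
```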
Right, right, right, right. So you've included bias that way. By the way, there is a very entertaining variation of this that is playing out right now, which is, you know, these companies are increasingly being sued by copyright owners.
Right. For training on material that's currently copyrighted, and, you know, most specifically books. And so there are court cases pending right now. The courts are going to have to take up this question of copyright and whether it's legal to train AIs on copyrighted data or not,
and on what terms. And sort of one of the running jokes inside the field is if those court cases come down such that these companies can't train on copyrighted material, then, for example, they'll only be able to train on books published before 1923. Right. It should be an improvement, actually. Well, imagine for a moment, if you would, training on books before 1923. The good news on that is you don't get all of the last hundred years of insanity. The bad news is people before 1923 were insane.
Yeah, right. Well, and also you don't have the advantage of all the technological progress. Yeah, exactly. And so these are very deep questions. All of these questions have to get answered. You know, Elon has talked about this. Like, Grok has some of this. He's working on that. Having said that, I will tell you most of what you see when you use these systems that will disturb you is not from any of that. Most of it is deliberate top-down coding, in a much more blunt-instrument way.
How is that done, Marc? Like, what does that look like, exactly, you know? I mean, it's really nefarious, right? Because that means that you're interacting in a manner that you can't predict with someone's a priori
prejudices. And you have no idea how you're being manipulated. It's really, really bad. And so, first of all, why is that happening? If the large language models' value is in their wisdom, and that wisdom is derived from their understanding of the deep pattern of correlations between ideas, which is like a major source of wisdom,
genuinely speaking, why pervert that with an overlay of shallow ideology? And why is the ideology in the direction that it is? And then, how is that gerrymandering conducted? Yes, let me start with the how. So the how is a technique; there's an acronym for it. It's called reinforcement learning from human feedback. And so in the field, it's called RLHF.
And RLHF is basically a key step for making an AI that works and interacts with humans, which is: you take a raw model, which is sort of feral and doesn't quite know how to orient to people, and then you put it in a training loop with some set of human beings who effectively socialize it. Reinforcement learning from human feedback: the key there is human feedback. You put it in dialogue with human beings, and you have the human beings do something
very analogous to teaching a child, right? Here's how you respond. Here's how you're polite. Here's the things you can and can't say. Here's how to word things. Here's how to be curious. All the behaviors that you presumably want to see from something you're interacting with that is sort of a human proxy kind of form of behavior. That is a 100% human enterprise.
You have to decide what the rules are for the people who are going to be doing that work. They're all people. And then you have to hire into those jobs, and the people going into those jobs are in many cases the same people. This will horrify you. They're the same people who were in the trust and safety groups at the social media companies five years ago. Oh, good. Oh, that's great. Oh, that's wonderful. Yeah. I couldn't imagine a worse outcome than that.
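For readers who want the mechanics pinned down, here is a toy, purely schematic sketch of the RLHF loop being described: raters compare outputs, their preferences accumulate as a reward signal, and the model is nudged toward whatever the raters prefer. Every name and rule here is hypothetical; a real pipeline fits a learned reward model and runs reinforcement learning against it:

```python
import random

def base_model(prompt: str) -> str:
    # Stand-in for a raw ("feral") pretrained model: samples a reply at random.
    return random.choice(["blunt reply", "polite reply", "ideological reply"])

def human_rater(reply_a: str, reply_b: str) -> str:
    # The crux of the conversation: whoever writes this preference function
    # (the raters, and the rules they're hired under) determines the values
    # the model is socialized into.
    preference_order = ["polite reply", "blunt reply", "ideological reply"]
    return min(reply_a, reply_b, key=preference_order.index)

# Accumulate a crude reward signal from pairwise human preferences.
reward = {"blunt reply": 0, "polite reply": 0, "ideological reply": 0}
for _ in range(1000):
    a, b = base_model("..."), base_model("...")
    reward[human_rater(a, b)] += 1  # preferred outputs accumulate reward

# The behavior the raters reinforced wins out.
print(max(reward, key=reward.get))  # almost surely: 'polite reply'
```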
So all the people that Elon cut out of the trust and safety group at Twitter when he bought it, many of them have migrated into these trust and safety groups at these AI companies, and they're now setting these policies and doing this training. So the terrifying, well, the terrifying thing here is that we're going to produce hyper-powerful avatars of our own flaws, right? And so if you're training one of these systems, and you have a variety of domains of personal pathology, you're going to amplify that substantively. You're going to make these giants. Like I joke with my friend Jonathan Pageau, who's a very reliable source in such matters, we're going to see giants walk the earth again. I mean, that's already happening, and that's what these AI systems are. And if they're trained by people who,
well, let's say, are full of unexamined biases and prejudices and deep resentments, which is something that you talk about in your manifesto, resentment and arrogance being, like, key sins, so to speak, we're going to produce monstrous machines that have exactly those characteristics, and that is not going to be good. You're absolutely right to point to this, as you know, as perhaps the most serious problem of our times. If we're going to
generate augmented intelligence, we'd better not generate augmented pathological intelligence. And if we're not very careful, we are certainly going to do that, not least because there are way more ways that a system can go wrong than there are ways that it can, you know, aim upward in an unerring direction. And so, okay, so why is it these people, and this is so awful, I didn't know that, who were, say, part of the trust and safety apparatus at Twitter, who are now training the bloody AIs? How did that horrible situation come to be? It's the same dynamic. The big AI companies have the exact same dynamic as the big social media companies, which have the exact same dynamic as the big universities, which have the exact same dynamic as the big media companies. It comes down to cartels. You have a small handful of companies at the commanding heights of society that hire all the smart graduates. Take a step back.
You don't see ideological competition between Harvard and Yale, right? Like you would think that you should because they should compete in the marketplace of ideas. And of course, in practice, you don't see that at all. You see no ideological competition between the New York Times and the Washington Post. You see no ideological competition.
between the Ford Foundation and any of the other major foundations. They all have the exact same politics. Prior to Elon buying Twitter, you saw no ideological competition between the different social media companies. Today, you see no ideological competition.
among the big AI labs. Elon is the spoiler, right? He is coming in to try to do in AI what he did in social media, which is create the non-woke one. But without Elon, you know, you weren't seeing that at all. And so you have this consistent dynamic across these sectors of what appears to be a free market economy, where you end up with these cartels, where they sort of self-reinforce and self-police, and then they're policed by the government.
Anyway, so I want to describe the general phenomenon, because that's what's happening here. It's the same thing that happened to the social media companies. And then this gets into the very serious policy issues on the government side, which is: is the government going to grant these AI companies basically protected status as some form of monopoly or cartel, in return for these companies signing up for the political control that their masters in government want? Or, in the alternative, is there actually going to be an open AI, a true open AI, like truly open, where you're going to have a multiplicity of AIs that are actually in full competition?
Right, competing. And then you'll have some that are woke and you'll have some that are non-woke, and you'll have some trained on new material and some trained on old material, and so forth and so on. And then people can freely pick. And the thing that we're pushing for is that latter outcome. We very specifically want government to not protect these companies, to not put them behind a regulatory wall, to not be able to control them
in the way that the social media companies got controlled before Elon. We actually want like full competition. And if you want your woke AI, you can have it, but there are many other choices. Well, can you imagine developing a super intelligence that's shielded from evolutionary pressure? Like that is absolutely insane. That's absolutely insane. I mean, we know that the only way that a complex system can regulate itself across time is...
through something like evolutionary competition. That's it. That's the mechanism. And so if you decide that this AI is correct by fiat, and then you shield it from any possibility of market feedback or environmental feedback? Well, that is literally the definition of how to make something insane. And so now, you've talked about in some of your recent podcasts the fact that
the Biden administration in particular, if I got this right, was conspiring behind the scenes with the tech companies to cordon off the AI systems and make them monolithic. And so can you elaborate a little bit more on that? Yeah, so this is this whole dispute that's playing out. And, you know, this gets complicated, but I'll try to provide a high-level view. So this is the whole dispute over what's called AI safety.
Right. And so there's this whole kind of, you know, you might call it concern or even panic about, like: are the AIs going to run out of control? Are they going to kill us all? Right. By the way, are they going to say racist things? Are they going to be racist? You know, all these different concerns over all the different ways in which these things can go wrong. There's this attempt to impose the precautionary principle on these AIs, where you have to prove that they're harmless
before they're allowed to be released, which inherently gets into these political questions. And so, anyway, the AI safety movement conjoins a lot of these questions into this kind of overall elevated level of concern. And then what has been happening, basically, is that the major AI labs know what the deal is. They watched what happened in social media. They watched what happened to the companies that got out of line. They watched the pressures that came to bear. They watched what the government did to the social media companies. They watched the censorship regime that was put in place, which was very much a political, top-down censorship regime.
And basically, they went to Washington over the course of the last several years, and they essentially proposed a trade. And the trade was, we will do what you want politically. We will come under your control voluntarily from a political standpoint, the same way the social media companies had.
And in return for that, we essentially want a cartel. We want a regulatory structure set up such that a small handful of big companies will be able to succeed in effect forever, and then new entrants will not be allowed to compete. And in Washington, they understand this because this is the classic economic concept of regulatory capture. This is what every set of major big companies in every industry does. And so the AI companies...
went to Washington and they tried to do that. And basically what was happening up until the election was that the Biden administration was on board with that. And that led to the conversations that I've talked about before, that we had in the spring with the Biden administration, where they told us, senior officials and administration analysts, very directly: look, do not even bother to try to fund AI startups. There are only going to be two or three large AI companies
building two or three large AIs, and we are going to control them. We are going to set up a system in which we control them, and they are going to be, you know, they're not going to be nationalized, but they're going to be essentially de facto integrated into the government.
And we are going to do whatever is required to guarantee that outcome. And it's the only way to get to the outcome that we will find acceptable. Okay, okay. Well, so there's so much in there that's pathological beyond comprehension that it's difficult to even know where to start. It's like, who the hell thinks this is a good idea, and why? Like, who are these people that feel that they're in a position to determine
the face of hyperintelligence, of computational hyperintelligence? And who is it that thinks that that is something that should be regulated by a closed government-corporate cartel? Like, I don't understand that at all, Marc. I don't know if I've ever heard anybody detail out to me something that is so blatantly both malevolent and insane simultaneously. So, like...
How do you account for that? I mean, I know it's shocked you. I know that's why you've been talking about it recently. And it should shock you, because it's just beyond comprehension to me that this sort of thing can go on, and thank God you're bringing it to light. But how do you make sense of this? What's your understanding of it? Well, look, it's the same people who think that they should control the education system.
Same people who think they should control universities, same people who think they should control social media censorship. You know, the same people who think that they should permanently control the government and government bureaucracies. It's this, you know, pick whatever term you want. It's this elite class, ruling class, you know, oligarchic class. Worshippers of power. Remember, it's one ring of power that binds all the...
evil rings. Yeah, well, it's worshipers of power. And the damn postmodernists, you know, when they proclaimed that power was the only game in town, a huge part of that was both a confession and an ambition. Right. If power is the only game in town, then why not be the most effective power player? The reason I'm so sensitized to this is because this is exactly what I saw happen with social media censorship.
Like, I sat in the room and watched the construction of the entire social media censorship edifice every step of the way, going all the way back. I was in the original discussions about what defines concepts like hate speech and misinformation. Like, I was in those meetings, and I saw the construction of the entire private-sector edifice that resulted in the censorship regime that we all experienced.
And I was close in to it. You know, there's a whole group at Stanford University that became a censorship bureau that was working on behalf of the government. I know those people. One of the people who ran that used to work for me.
I know exactly who those people are. I know exactly how that program worked. I knew the people in government who were running things like this, the so-called Global Engagement Center and all these different arms of the government that had been imposing social media censorship.
So, you know, there's this entire complex that we kind of saw unspooled in the Twitter Files, and then we've seen it in, you know, the investigative reporting by people like Mike Benz and Michael Shellenberger and these other guys. Over the course of basically 12 years, I saw that whole thing get built. And then, of course, I've been part of Elon's takeover of Twitter. And so I've seen what it takes to try to unwind that.
And so I feel like I saw the first movie, right? And then AI, you know, as I said, AI is a much more important topic, but AI is very clearly the sequel to that. And what I'm seeing is basically the exact same pattern that I saw with that. And the people who were able to do that for social media for a long time are the same
kind of people, and in many cases, literally the same people who are now trying to do that in AI. At this point, I feel like we've been warned. We've seen the first movie. We've been warned. We've seen how bad it can get. We need to make sure it doesn't happen again.
Those of us in a position to be able to do something about it need to talk about it and need to try to prevent it. Well, so at ARC, we're trying to formulate a set of policies that I think strike to the heart of the matter. And the heart of the matter is... what story should orient us as we move forward into the future.
And we're going to discover that by looking at the great stories of the past and extracting out their genuine essence. And I think the ethos of voluntary self-sacrifice is the right foundation stone. I think that the proposition that society is built on sacrifice is self-evident once you understand it. Because...
To be a social creature, you have to give up individual supremacy. You trade it in for the benefits of social being. And your attention is a sacrificial process, too, because there's one thing you attend to at a time and a trillion other things that you sacrifice that you could be attending to. Now, I think we're starting to understand the basics of the technical ethos of the sacrificial, of the, what would you say, of the sacrificial foundation. It's something like that. And I think we understand that at ARC, and we have some principles that we're trying to use to govern the genesis of this organization, which I think will become, and maybe already is, the go-to conference, at least, for people who are interested in the same sort of ideas that you're putting forward. We had a very successful conference last year, and the one that's coming up in February looks like it's going to be larger and more successful. We have spinoffs in Australia and so forth. And so part of the emphasis there is that we want to put forward a vision that's invitational.
There's a proposition with regards to policy that lies at the bottom of that, which is that if I can't invite you on board to go in the direction that I'm proposing, then there's something wrong with my proposition. If I have to use force, if I have to use compulsion, then that's indicative of a fundamental flaw in my conceptualization.
Now, there might be some exceptions for like overtly criminal and malevolent types because they're difficult to pull into the game. But if the policy requires force rather than invitational compliance, there's something wrong with it. And so what we're trying to do, and I see very close parallels to the project that you're engaged in, is to formulate a vision of the future that's so...
What would you say? So self-evidently positive that people would have to strive to find a reason not to be enthusiastically on board. And I don't think you have to be a naive optimist to formulate a vision like that. We know perfectly well that the world is a far more abundant place than the Malthusian pessimists could have possibly imagined back in the 1960s, when they were agitating madly for their
propositions of scarcity and overpopulation. And so, okay, so what's the conclusion to that? Well, the conclusion in part is that this AI problem needs to be addressed, you know, and I've built some AI systems that are... founded on the ancient principles, let's say, that do in fact govern free societies. And they're not woke.
They can interpret dreams, for example, quite accurately, which is very interesting and remarkable to see. And so they're much more weighted towards something like the golden thread that runs through the traditional humanist enterprise stretching back 2,000 or 3,000 years. Maybe there are 200 core texts in that enterprise
that constitute the center of something like a great books program, the Great Books program, which is still running at the University of Chicago. Now, that's not sufficient, because, as you pointed out, well, there's all this technological progress that has been made in the last 100 years. But there's something about it that's central and core. And I think we can use the AI systems actually to untangle what the core
idea sets are that have underpinned free and productive, abundant, voluntary societies. Now, it's something like the set of propositions that make for an iterating voluntary game that's self-improving. That's a very constrained set of pathways. And there's something in that that, I think, attracts people as a universally acceptable ethos. It's the ethos on which a successful marriage would be founded,
or a successful friendship or a successful business partnership, where all the participants are enthusiastically on board, without compulsion. And Jean Piaget, the developmental psychologist, had mapped out the evolution of systems like that in childhood play. He was trying to reconcile the difference between science and religion in his investigations of the development of children's structures of knowledge, and he got an awfully long way
in laying out the foundations of that ethos. And so did the comparative mythologists like Mircea Eliade, who wrote some brilliant books that, well, I think are sort of like the equivalent of early large language models. That's how it looks to me now. Eliade was very good at picking out the deep patterns of narrative commonality that united major religious systems across multiple cultures.
That was all thrown out, by the way. That was all thrown out by the postmodern literary theorists. They just tossed all that out of the academy. And that was a big mistake. They turned to Foucault instead. It was a cataclysmic mistake. And it certainly ushered in this era of domination by power narratives, which is underlying the sorts of phenomena that you're describing that are so appalling. So...
What's happened to you as a consequence of starting to speak out about this? And why did you start to speak out? And you said you were involved in this. And so what's the difference between being involved and being complicit? I mean, I know these are complicated problems and people learn, but, like, why are you speaking out?
How are people responding to that? And how do you see your role in this as it unfolded over the last, say, 15 years? Yeah, so, complicated question. And I'll start by saying I claim no particular bravery, so I don't claim any particular moral credit on this. There's this thing you'll hear about sometimes, this concept of so-called f*** you money.
It's sort of like, okay, if people are successful, you make a certain amount of money, now you can tell everybody, you can say whatever you want. I will just tell you, my observation is that's actually not true. Yeah, right. Definitely not. The reason that's not true is because the people who prosper in our society tend to do so because they're becoming responsible for more and more things.
And specifically, they're becoming responsible for more and more people. And so one of the things I would observe about myself, and observe about a lot of my peers, is that even as we became more and more, you know, bothered and concerned and ultimately very worried about some of these things, as that was happening, we were taking on greater and greater responsibilities for our employees and for all the companies that we're involved in, right, and for all the shareholders of all of our companies.
And so I think that's part of it. And, you know, there's this sort of endless question between the, you know, absolute commands of morality versus the real-world compromises that you make to try to function in society.
I would say I was just as subject to that inherent conflict as anybody else. I was in the room for a lot of these decisions. I saw it every step of the way. In some cases, I felt right up front that something was going wrong. I mean, I was in the original discussion for one of these companies on the definition of hate speech, right? And you can imagine how that discussion goes. You know exactly how the discussion went, but I'll just tell you.
It's like, well, hate speech is anything that makes people uncomfortable. And I'm like, well, that comment you just made makes me uncomfortable, and so therefore that must be hate speech. Then they look at me like I've grown a third eye, and I'm like, okay,
that argument's not going to work. And then they're like, well, Marc, surely you agree that the N-word makes people uncomfortable. And I'm like, yes, I agree with that. If our hate speech policy is that people don't get to use the N-word, I'm okay with that, as long as that's all it is. But of course, it doesn't stop there, and it slides into what we then saw happen.
So I saw that happen. The misinformation thing, same thing. The misinformation thing on social media is actually a fascinating, horrifying thing that played out, which is that it actually started out as an attack on a specific form of spam. So there were these Macedonian bot farms that were literally creating what's called click spam, or sort of ad fraud, on social media. They were creating literally fake news stories; like, you know, the classic one was "the Pope has died."
And it's like, no, the Pope has not died. That is absolutely misinformation. But the reason that this bot farm puts that story out is because when people click on it, they make money on the ads. And that's clearly a bad thing and that's misinformation and clearly we need to stop that. And so the mechanism was built to stop that kind of spam.
But then after the election, you know, we discovered that anybody who was pro-Donald Trump was presumptively, you know, an agent of Vladimir Putin. And then all of a sudden that became misinformation, right? Right. And so the engine that was built for spam all of a sudden got applied to politics, and off and away they went. And then everything was misinformation, culminating in objections to three years of COVID lockdowns becoming misinformation, right?
So I saw that entire thing unspool. I saw all the pressures brought to bear on these companies. I saw the people who went up against this get wrecked. I saw these companies try to develop all these trade-offs. Obviously, I would claim for myself that I tried to argue this
every step of the way. And by the way, I'm not the only one who was concerned about this, and I think we should give Mark Zuckerberg a little bit of credit on this, on one specific point, which is, you may recall, he gave a speech in 2019 at Georgetown. And he gave a very principled defense of free speech from first principles. And he, at that point, was trying very hard to kind of maintain the line on this.
In 2020, everything went completely nuts. And then the Biden administration came in, the government came in, and they really lowered the boom. And so things went very bad after that. But even Mark, who a lot of people get very mad at on these things, was trying in many ways to hold the line.
Anyway, it unfolded the way that it did. I don't claim any particular courage. I will tell you, basically starting in 2022, I saw some leaders in our industry really start to step up. And one that I would give huge credit to is Brian Armstrong, who's the CEO of Coinbase.
which is a company that we're involved in. And you may recall, he's the guy who wrote basically a manifesto. And he said, these companies need to be devoted to their missions, not every other mission in society. Right, right. And so he declared there's going to be a new way to run these companies.
We're not going to have all the politics. We're not going to have the whole bring-your-whole-self-to-work thing. We're not going to have all the internal corrosion. We're going to have our mission, and then we're going to focus on that. We're not going to take on the world's ills. And then he did this thing where he actually purged the company of the activist class that we talked about earlier. And the way that he did that was with a voluntary buyout,
where he said, if you're not on board with working at a non-political, non-ideological company that's focused on its own mission, not every other mission, then I will pay you money to go work someplace where you'll be able to fully exercise your politics.
There are a bunch of other CEOs that have been basically following in Brian's footsteps more quietly, but they've basically been doing the same thing. And a lot of these companies have turned the corner on this now, and they're working these people out.
And then, quite frankly, the big event, I think, is this election. People have all kinds of positive and negative takes on Trump, and this gets into lots and lots of political issues. But with the Trump victory being what it was, not just Trump winning again, but Trump winning the popular vote and simultaneously the House and the Senate,
it feels like the ice has cracked. Maybe the pressure for the ice to crack was building over two years, but as of November 6th, it feels like something really fundamental changed, where all of a sudden people have become willing to talk about the things they weren't willing to talk about before. Okay, let's go back to your manifesto. So, I wanted to...
I wanted to highlight a couple of things in relationship to that. I had some questions for you, too. Tell me, to begin with, if you would, why you wrote this manifesto. Maybe let everybody know about it first. Why you wrote it and what effect it's had. And then I'll go through it step by step, at least to some degree, and I can let you know what ideas we've been developing with the Alliance for Responsible Citizenship, and we can play with that a little bit.
So, I'm 30 years now in the tech industry in the U.S., in Silicon Valley. And what I experienced between roughly 1994, when I entered, through to about 2012, was one way in which everything operated, one set of beliefs that everybody had. And then basically there was this incredible discontinuous change between, call it, 2012 and 2014,
which then cascaded into what you might describe as some degree of insanity over the last decade. And of course, you've talked a lot about a lot of aspects of that insanity. But the way I would describe it is, for the first 15 or 20 years of my career, there was what I sometimes refer to as the Deal with a capital D, or you might call it the compact, or maybe just the universal belief system,
which was that effectively everybody I knew in tech was a social liberal progressive in good standing. But operating in the era of Clinton-Gore, and then later on through Bush to Obama's first term, it was viewed that being a social progressive in good standing was completely compatible with being a capitalist, completely compatible with being an entrepreneur and a business person, completely compatible with succeeding in business.
And so the basic deal was: you have the exact same political and social beliefs as everybody you know. You have the exact same social and political beliefs as the New York Times every day. Their beliefs change over time, but you update yours to stay current. And everybody around you believes the same thing. The dinner-table conversations are everybody in 100% agreement on everything at all times.
But then you go succeed in business and you build your company and you build products and you build new technology. And if your company succeeds, it goes public and people become wealthy. And then you square the circle of sort of social progressivism and entrepreneurial success and business success. You square the circle with philanthropy.
And so you donate the money to good social causes. And then, you know, someday your obituary says he was both a successful business person and a great human being.
And basically what I experienced is that that deal broke down between, you know, 2012, 2014, 2015, and then sort of imploded spectacularly in 2017. And ever since, there has been no way to square that circle, which is if you are successful in business, in tech, in entrepreneurship, if you become successful, you are de facto evil.
And you can protest that you're actually a good person, but you are presumed to be de facto evil. And furthermore, philanthropy will no longer wash your sins. This was a massive change, and it is still playing out. Philanthropy will no longer wash your sins because, the belief goes, philanthropy is an unacceptable diversion of resources from the proper way they should be deployed, which is through the state.
Right. So a private-enterprise form of philanthropy is now considered de facto bad. And so everybody in my world basically had a decision to make. Did they go sharply to the left, not just on social issues but also on economic issues? Did they become starkly anti-business, anti-tech, essentially self-hating, in order to stay in the good graces of what happened on that side?
Or did they do what Peter Thiel did early on: go way to the right, basically just punch out, and declare, I'm completely out of progressivism, I'm completely finished with this, and I'm going to go in a completely different direction.
And obviously, that was part of the phenomenon that culminated in Trump's first election. So, long story short, the manifesto that I wrote is an attempt to bring things back to what I consider a more sensible way to think and operate, a big-tent social and political umbrella, but one where tech innovation is actually still good, business is still good, capitalism is still good,
technological progress is still good, and the people who work on these things are still good, and we can actually be proud of what we do. You said that something changed quite radically in 2017. I'd like you to delve a little bit more into the breakdown of this deal. Your claim was that for a good while, center-left positions, politically and philosophically, were compatible with the tech revolution, and with the big-business side of the tech revolution, but you pointed to a transformation across time that became unmistakable by 2017. Why 2017 as a year, and what is it that you think changed? You painted a broad-scale picture of this transformation, and you also pointed to the fact that it became impossible to be an economic capitalist, a free-market guy, and still proclaim allegiance to those progressive ideals. In 2017, what do you think happened? How do you understand that? Yeah, so different people, of course, have different perspectives on this, but I'll tell you what I experienced. And I think in retrospect, what happened is Silicon Valley experienced this before a lot of other places in the country and before a lot of other fields of business.
I have many friends in other areas of business who live and work in other places where I would describe to them what was happening in 2012 or 2014 or 2016, and they would look at me like I'm crazy, and I'm like, no.
I'm describing what's actually happening on the ground here. And then, you know, three years later, they would tell me, oh, it's also happening in Hollywood, or it's also happening in finance, or it's also happening in these other industries. So in retrospect, I think I had a front-row seat to this, because Silicon Valley was, I've been using this term, first in. Silicon Valley was the industry that went the hardest
for this transformation up front. And then there's the nature of my work: over this entire time period, I've been a venture capitalist and an investor, and so I've been exposed to a large number of companies all at the same time, mostly small, but, by the way, also some very large. So, for example, I've been on the Facebook board of directors through this entire arc, right? And a lot of what I'm describing you can actually see through the history of just the one company, Facebook, which we can talk about.
But anyway, I think I basically saw the vanguard of the movement up close. And essentially, what I saw started in 2012. It was the beginning of the second Obama term, and it was the aftermath of the global financial crisis. And so it was some combination of those
two things, right? So the global financial crisis hits in 2008. Occupy Wall Street takes off, but it's this kind of fringe thing. Bernie Sanders starts to activate as a national candidate. Some of these other politicians further to the left start to become prominent, start to take over the Democratic Party. And then the economy caved in, right? So we went through a severe recession between, call it, 2009 and 2011.
2012, the economy was coming back. People maybe weren't worried about being fired anymore, right? If people think they're going to get fired in a recession, they generally don't act out at a company, but if they think their jobs are secure.
in an economic boom, they can start to become activists. And so the employee activist movement started around 2012. And then in the Obama second term, I would say the progressives kind of took more control of the Democratic Party, starting around that time, and the Obama administration itself turned to the left.
And so you started to get this kind of activated political energy, this sort of, you know, the activist movements in these companies where you had people who, you know, the year before had been a quiet, you know, web designer working in their cubicle, and then all of a sudden they're a social and political revolutionary.
inside their own company. And then, by the way, the shareholders activated, which was really interesting. Like, this is when Larry Fink at BlackRock decided he was going to save the world. And then the press activated. And so all of a sudden, you know, the same... tech reporters who had been very happy covering tech and talking about exciting new ideas all of a sudden became, you know, kind of very accusatory and started to condemn the industry.
So that started to pop around 2012. And then what I saw, you might even describe it as like a controlled skid that became an uncontrolled skid, which was that energy built up in tech between 2012 and 2015. And then... You know, basically what happened in rapid succession was Trump's nomination and then Trump's election, his victory in 2016. And I describe both of those events as like 10xing.
of the political energy in this system. And so, you know, both of those events really activated, you know, very strong antibody responses, you know, which, as you know, culminated in, like, mass protests in the streets right after the 2016 election.
And then, of course, the narrative then became crystallized, which is there are the forces of darkness represented by Trump, represented by the right, represented by capitalism, represented by tech, and there are the forces of light represented by wokeness and the racial reckoning.
the George Floyd protests, and so forth. And it became this very, very clear litmus test. And so the pattern basically locked in hard in 2017 and then continued to escalate from there. So in your manifesto, you list some of these ideas, pathological ideas, let's say, that emerged on the left. And I just want to find the passage.
For example, you say technology doesn't care about your ethnicity, race, religion, national origin, gender, sexuality, political views, height, weight, et cetera, listing out the dimensions of hypothetical oppression that the intersectionalist woke mob stresses continually. Now, you point your finger at that, obviously, because you feel that something went seriously wrong
with regard to the prioritization of those dimensions of difference. And that's part of the movement of diversity. That's part of the movement of equity and inclusivity. Let me just find this other... Yes, here we go. Our present society has been subjected to a mass demoralization campaign for six decades against technology and against life under varying names like...
existential risk, sustainability, ESG, sustainable development goals, social responsibility, stakeholder capitalism, precautionary principle, trust and safety, tech ethics, risk management, degrowth. The demoralization campaign is based on bad ideas of the past, zombie ideas, many derived from communism, disastrous then and now, that have refused to die. And that's in
the part of your manifesto that is subtitled The Enemy. The enemy you're characterizing there is a system of ideas. And I guess that would be the system of woke ideas that presumes, and correct me if I get this wrong, that we're fundamentally motivated by power; that anybody who has a position of authority
actually has a position of power; and that the best way to read positions of power is from the perspective of a narrative predicated on the hypothesis of oppressor and
oppressed, and that there are multiple dimensions of oppression that need to be called out and rectified, and the DEI movement is part of that. And so you point to the fact that these are zombie ideas left over, let's say, from the communist enterprise of the early and mid-20th century, and that seems to me precisely appropriate.
And you said you thought those ideas emerged on the corporate front, in a damaging way, first in big tech. I probably saw evidence of that most particularly in relationship to the scandal that surrounded James Damore. That was really cardinal for me, because I spent a fair bit of time talking to James, and my impression of him was that
he was just an engineer. And I don't mean that in any disparaging sense. He thought like an engineer. And he went to a DEI meeting, and they asked him for feedback on what he had observed and heard. And James, being an engineer, thought that they actually wanted feedback, you know, because he didn't have the social skills to understand that he was supposed to be participating in an elaborate lie.
And so he provided them with feedback about their claims, especially with regard to gender differences. And James actually nailed it pretty precisely for someone who wasn't a research psychologist. He had summarized the literature on gender differences, for example, extremely accurately. And they pilloried him. And I thought, that's really bad, because it means that Google wouldn't stand behind its own engineer when he was telling the truth.
And there was every attempt made to destroy his career. Now, why do you think that whatever happened affected tech first? And what did you see happening that you then saw happening in other corporations? Yeah, so why did it happen in tech first? A couple of things. So one is tech is just, I would say, extremely connected into the universities. And so almost everything we do flows from the computer science departments.
and then the engineering departments at major U.S. research universities. We hire new graduates all the time, and we work with university professors and research groups all the time, so there's a very tight, direct connection there. If an ideological, pathological virus is going to escape the university and jump into the civilian population, it'll hit tech first, which is what happened. Or maybe tech and media
first. So that's one. And then two is the psychological sorting that happens when kids decide what profession to go into. What we get are the very high-openness people, the highest-openness people coming out of college who are also high-IQ and ambitious.
They go into tech, they go into the creative industries, or they go into media, right? That's where they sort into. And so we get the most open, and, by the way, also the most ambitious, the driven, as you say, high-industriousness
ones as well. And that's the formula for a highly effective activist, right? And so we got the full load of that. And then look, this movement that we now call wokeness,
it hijacked what I would call, at the time, bog-standard progressivism, which is: of course you want to be diverse, and of course you want to be inclusive, and of course you want everybody to feel included, and of course you want to be kind.
And of course you want to be fair, and of course you want a just society. That was part of the moderate belief set that everybody in my world had for the preceding, certainly, 20 years.
And so at first, it just felt like, oh, this is more of what we're used to, right? Of course this is what we want. But it turned out what we were dealing with was something far more aggressive, a much more aggressive movement, and then this activism phenomenon on top of it.
And this became a very practical issue for these companies on a day-to-day basis. So you mentioned the Damore incident. I talked to executives at Google while that was going down, because it was so confusing for me at the time. And the reason they acted on him the way they did, fired him,
ostracized him, and did all the rest of it, is because they thought they were hours away from actual physical riots on the Google campus. They thought employee mobs were going to try to physically burn the place down. Right. And that was, at the time, such an aberrant expectation.
There were other companies, by the way, at the same time, that were having all-hands meetings completely unlike anything we'd ever seen before, meetings you could only compare to struggle sessions. The Netflix adaptation of The Three-Body Problem famously starts with a very vivid recreation of a Maoist-era Chinese communist struggle session,
right, where the disgraced professor is on stage confessing his sins, and then they beat him to death, amid the inflamed passions of a young, ideologically consumed crowd that is completely convinced it's on the side of justice and morality.
Fortunately, nobody got beaten to death on stage at an all-hands meeting at these companies, but you started to see that same level of activated energy, that same level of passion. You started to see hysterics, people crying and screaming in the audience. And so these companies knew they were at risk from their employees,
up to and including the risk of actual physical riots. And that at the time, of course, was like a completely bizarre thing. And we at the time had no idea what we were dealing with. But in retrospect, it was through events like what James Damore went through that we ultimately did figure out what this was. Okay, so let me ask you a question about that. It's a management question, I guess.
I had some trouble at Penguin Random House a couple of years ago. After writing a couple of bestsellers for them, I contracted with one of their subdivisions, and they had a bit of an employee rebellion that would be perhaps reminiscent of the sort of thing you're referring to. And they kowtowed to them, and I ended up switching to a different subdivision.
It really made no material difference to me. And I was just as happy to be with a subdivision where everybody in the company, visible and invisible, was working to make what I was doing with them successful, rather than scuttling it invisibly from behind the scenes. But my sense then was, why don't you just fire these people? And I'm dead serious about that. First of all, I'll give you an example. We just set up this company, Peterson Academy Online.
We have 40,000 students now and about 30 professors, and we're doing what we can to bring extremely high-quality, elite, university-level education to people everywhere for virtually no money, and that's working like a charm. Now, we set up a social media platform inside that, so that people could interact like they do on Twitter or Facebook or Instagram, because we tried to integrate the best features of those networks. But we wanted to make sure that
it was a civilized place. And the fact that people have to pay for access helps that a lot, right, because it keeps out the trolls and the bots and the bad actors who can multiply accounts beyond comprehension for no money. So the mere price of entry helps. But we also watched, and if people misbehaved,
we did something about it. We kicked four people out of 40,000, and one of them we put on probation. And that was all we had to do. There was goodwill, and everybody was behaving properly. And like I said, there was a
cost to entry, but it didn't take a lot of discipline. It didn't take a lot of disciplinary action to make an awful lot of difference with regard to behavior. And so, you know, I can understand that Google might have been apprehensive about activating the activists within their confines, but...
sacrificing James Damore to the woke mob because he told the truth is not a good move forward, and I just don't understand it at all. You see, the same thing happened at Penguin Random House. You could just fire these people. There were people there who wanted not to publish a book of mine that they hadn't even read. They weren't people who deserved to be working at what's arguably the greatest publishing house in the world. Why?
You alluded to it a little bit. You said that people were taken by surprise, and fair enough. It was the case that there was a radical transformation in the university environment somewhere between 2012 and 2016, where all these
terrible, woke, quasi-communist, neo-Marxist ideas emerged and became dominant very quickly. But I'm still asking: why do you think that was the pattern of decision, instead of taking appropriate disciplinary action and just ridding the companies of people who were going to cause trouble?
Yeah, so there are a bunch of layers to it, in retrospect. And let me say that what you describe is what's happening now. In the last two years, a lot of companies actually are, at long last, firing activists, and we can talk about that. So I think the tide is turning on that a bit. But going back in time, between 2012 and, let's say, 2022, a full 10-year stretch
where what you're describing didn't happen. I think there's layers. So one is, as I said, just people didn't understand it. I think, quite frankly, number two, a lot of people in charge agreed with it, at least to start. And so they saw people who had what appeared to be the same political ideological leanings as they did and were just simply more passionate about them. And so they thought they were on the same side. They agreed with it.
And then at some point, they discovered that they were dealing with something different, maybe a more pure strain or a more fundamentalist approach. At that point, of course, they became afraid, afraid of being lit on fire themselves. And by the way, I think tech is starting to work its way out of this. I think Hollywood is still not. And my friends in Hollywood, when I talk to them... Not at all.
Not at all. When I talk to people who are in serious positions of responsibility in Hollywood, after a couple of drinks, in a sort of zone of privacy, pretty frequently they'll say, look, I just can't. It's still too scary. I can't go up against this, because it'll ruin my career. So there is this group-frenzy, cancellation, ostracizing, career-destruction thing. That's real.
But let me highlight two other things. So one is it wasn't just the employees. It was the employees. It was a substantial percentage of the executive team. It was also the board of directors in a lot of cases. And so you'd have politically activated board members. And some of these companies still have that, by the way.
It was also the shareholders. You would think that investors in a capitalist enterprise would only be concerned with economic return. It turns out that's not true, because you have this intermediate layer of institutions like BlackRock, where they're aggregating up lots of individual shareholders, and the managers of the intermediary can then exercise their own politics using the voting power of those aggregated small-shareholder holdings.
And so you had the shareholders coming at them. Then, by the way, you also had the government coming at them. And this administration has been very aggressive on a number of fronts. We could talk about a bunch of examples of that, but you have direct government pressure coming at you. You have the entire press corps
coming at you, right? And so it feels like it's the entire world bearing in on you, and they're all going to light you on fire. And then that takes me to... Well, and that does happen. We should also point out, that's not a delusion. I mean, I think it's also the case that the new communication technologies
that make the social media platforms so powerful have also enabled reputation savagers in a way that we haven't seen before. Because you can accuse someone from behind the cloak of anonymity and gather a pretty nice mob around them in no time flat, with absolutely no risk to yourself. And there's a pattern of antisocial behavior that characterizes women,
and this has been well documented for 50 years in the clinical literature, like antisocial men tend to use physical aggression, bullying, but antisocial women use reputation savaging and exclusion. It looks like social media, especially anonymous social media, what would you say, enables the female pattern of aggression.
which is reputation savaging and cancellation. Now, I'm not accusing women of doing that. You've got to get me right here. It's that there are different pathways to antisocial expression. One of them, physical violence, isn't enabled by technology. But the other one, which is reputation savaging and exclusion, is clearly abetted by technology. And so that's another feature that might have made people leery of putting their head up above the turret. You know, like in Canada...
Well, I'm still being investigated by the Ontario College of Psychologists, and I'm scheduled for re-education if they can ever get their act together to do that. And I fought an eight-year court battle, which has been extremely expensive and very, very annoying, to say the least. And I don't think there's another professional in Canada, on the psychological or medical side, who's been willing to put their head above the parapet except in brief,
you know, in brief interchanges. And the reason for that is that it simply is too devastating. So I have some sympathy for people who are concerned that they'll be taken out, because they might be. But by the same token, if you kowtow to the woke mob for any length of time, as the tech industry appears to be discovering now, you end up undermining everything that you hold sacred. I mean, you alluded to the fact that you'd hope that at least the shareholders would
be appropriately oriented by market forces. Greed, to put it in the most negative possible way. You'd hope that that would be sufficient incentive to keep things above board, because I'd way rather deal with someone who's motivated by money than by ideology. But even that isn't enough to ensure that
corporations act in their own best economic interest. So it is a perfect storm. And you alluded to government pressure as well. Maybe you could shed a little more light on that, because that's also particularly germane. It's certainly been characteristic, and is still characteristic, of Canada under Trudeau.
Yeah, so there's a couple things on that. So one is, I should just note, and I'm sure you'll agree with me on this, there are many men who also exhibit that reputational destruction motive. Absolutely. Men will use it. They typically don't in the real world. But if the pathway is laid open to it on social media, let's say.
And there's a particular kind of man who's more likely to do that too. Those are the dark tetrad types who are narcissistic and psychopathic and Machiavellian and sadistic. Lovely combination of personality traits. And they're definitely enabled online. Yeah, so we've had plenty of them as well.
Yeah, so the government pressure side. When this all hit, like I said, nobody I knew understood what was happening. I didn't understand it. So I did what I do in circumstances like that: I basically tried to work my way backwards through history and figure out where this stuff came from. And for pressure on corporations, the context is that corporations are—
There's this cliche that you'll hear actually interesting from the left, which is, well, private companies can do whatever they want. They can censor whoever they want. Private companies have total latitude to do whatever they want. And of course, that's totally untrue.
Private companies are extensively regulated by the government. Private companies have been regulated by a civil rights regime imposed by the government for the last 60 years. That civil rights regime has certainly done many good things in terms of opening up opportunities for different minority groups to participate in business. But that civil rights regime put in place a standard called disparate impact,
in which you can evaluate whether a company is racist or not on the basis of just raw numbers, without having to prove that they intended to be, in terms of who they select as their employees. And so companies, predating the arrival of what we call woke, already had legal, regulatory, and political compliance requirements put on them to achieve things like racial diversity, gender diversity, and so forth.
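The canonical operationalization of disparate impact in U.S. employment practice is the EEOC's four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact, with no showing of intent required. A minimal sketch of the arithmetic, with invented applicant and hire counts:

    # Four-fifths (80%) rule check; the group counts here are invented for illustration.
    applicants = {"group_a": 200, "group_b": 50}
    hires      = {"group_a": 40,  "group_b": 6}

    rates = {g: hires[g] / applicants[g] for g in applicants}  # selection rate per group
    best = max(rates.values())                                 # highest group's rate

    for group, rate in rates.items():
        ratio = rate / best
        # A ratio below 0.8 is generally treated as evidence of adverse impact,
        # purely from the raw numbers, regardless of anyone's intent.
        flag = "adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")

Here group_a is hired at 20% and group_b at 12%, a ratio of 0.6, so the raw numbers alone would flag the employer, which is exactly the intent-free evaluation being described.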
I grew up in that environment. I considered it totally normal for a very long time. I just figured that's how things worked, that it was the positive payoff from the civil rights movement and from the 1960s, and that was just the state of play. And by the way, it was, I think, manageable, and good in some ways; on and away we went, and we could deal with it. But basically what happened was,
when woke arrived, that regime was enormously intensified. And what happened was a sequence of events; there was literally a playbook. For DEI, for example, activists and employees and board members would push you: first of all, you had to start doing explicit minority statistical reporting. You had to fully air in public any disparate impact, any racial, gender, ethnic, or sexual
differences relative to the overall population, in a statistical report you issued every year. And of course, they would tell you, as long as you issue this report, you're fine. Well, of course, that wasn't the case. What followed the report was: okay, now you need what's called the Rooney Rule. And the Rooney Rule basically says you have to have statistically proportionate representation of candidates for every job opening, relative to the overall population.
So stop there for just a sec, because we should delve into that. That's a terrible thing, because we can think about this arithmetically. You have to have proportionate representation of all protected-group members in all categories. Okay, there's a lot of horror in those few words, because the first problem is that the categories multiply
without end. And you see this, for example, with the continued extension of the LGBT acronym. There's no end to the number of potential dimensions of discrimination that can be generated. So that's an unsolvable problem to begin with. It means you're screwed no matter what you do. But it's worse than that when you combine it with the doctrine of intersectionality.
Because you don't just have the additive consequence of these multiple dimensions of potential prejudice. For example, in Canada, it's illegal to discriminate on the basis of gender expression. Okay, that's separate from gender identity. So now there's a multitude of categories of gender identity, hypothetically; the estimates range from something like
200 to 300. But gender expression is essentially how you present yourself. I think it's technically indistinguishable from fashion, fundamentally. And I'm not trying to be a prick about that. I've looked at the wording, and I can't distinguish it conceptually from one's mode of self-presentation, hairstyle, dress, et cetera.
And so that means you can't discriminate on the basis of whatever infinite number of categories of gender expression you could generate. And then you multiply those together. I mean, how many bloody categories do you have before you multiply them together? You have so many categories that it's impossible to deal with. So there's a major technical problem at the bottom of this whole realm of conceptualization.
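To put the multiplication in concrete terms, here is a toy calculation. The dimensions and category counts below are invented, and deliberately far smaller than the figures mentioned in the conversation:

    # Illustrative only: invented protected dimensions and category counts.
    dims = {
        "race_ethnicity": 6,
        "gender_identity": 10,    # the discussion cites estimates of 200 to 300
        "gender_expression": 20,  # effectively unbounded, per the discussion
        "religion": 8,
        "age_band": 5,
        "disability_status": 2,
    }

    cells = 1
    for n in dims.values():
        cells *= n  # intersectionality multiplies categories; it doesn't add them
    print(cells)    # 96000 distinct intersectional cells

Even this deliberately modest taxonomy yields 96,000 intersectional cells. Proportionate representation across every cell would require at least one person per cell, already exceeding most companies' headcount, which is the arithmetic impossibility being pointed at.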
It's basically making it impossible for companies to comply, exposing them to legal risk everywhere, and also providing an infinite market for aggrieved and resentful activism. Yeah, that's right. It's like what we saw. So reporting leads to candidate pools. And with candidate pools, the pressure then is, well, you need to hire proportionately according to whatever these categories are, including all the new ones.
And then after hiring, step four is promotions. You need to promote at the same rate, right? And the minute you have that requirement, of course, any performance metrics are just totally out the window, because you can't have them, right? You just have to promote everybody identically.
Right. And that's the slide into the complete removal of merit from the system. And then, by the way, the fifth stage is you have to lay off proportionately, right? So you're bound on the other side as well. And what happens is precisely what I'm sure you've seen happen: a descent of the company's culture into complete dog-eat-dog, us versus them. The employee base starts to activate along these identity lines inside the company.
These companies all created what are known by the incredible euphemism of employee resource groups, ERGs, which are basically segregated employee affiliation groups. And so now the employees aren't employees of your company; the employees are members of a group who just happen to be at your company. And their group membership, along whatever axis we're talking about, ends up trumping their
role as employees. And then you have this internal descent into accusations, into fear. You have this incredible tokenization that takes place, the classic problem of affirmative action: any member of an underrepresented group is assumed to have gotten hired only because of their skin color or their sex,
which is horrible for members of that group. And so you get this downward slide. Especially the competent ones. It's terrible for the competent ones. Exactly. And so it's acid. You're pouring cultural acid on your company and the entire thing is devolving into complete chaos internally. And what's happening is...
The activists and the press and the board and everybody else is pressing you to do this. And then the government on top of that is pressing you to do it. And under this last administration, that reached entirely new heights of absurdity. So let me take a step back.
Once you walk down this path and go through all those steps, I believe there's no question you now have illegal quotas. And you have illegal hiring practices and you have illegal promotion practices. And by the way, you also have illegal layoff practices. I think any reading of...
U.S. civil rights law, which says you are not allowed to discriminate on the basis of all these characteristics. You have worked yourself into a system in which you are absolutely discriminating on the basis of these characteristics through actual hard quotas, which are illegal. And so to start with, I think all of these companies that implemented these systems, I think they've all ended up basically being on the wrong side of civil rights law, which is, of course, this incredibly ironic result.
Right. They've all ended up with illegal quotas. I mentioned Hollywood earlier. Hollywood has gone all in for it. They literally now publish their hard quotas. The studios have these statements that say, by X date, 50 percent of our producers and writers and actors and so forth are going to be from specific groups. And again, you just read the Civil Rights Acts, and it's like, okay, that's actually not legal, and yet they're doing it.
This administration, this last administration, the Biden administration, really hammered this in. They put real radicals in charge of groups like the Civil Rights Division of the Department of Justice. And the ultimate, amazing, bizarre expression of this was SpaceX, one of Elon's companies, getting sued by the Civil Rights Division of this Department of Justice for not hiring enough refugees,
for not hiring enough foreign nationals who were in the country either illegally or through a refugee path. Notwithstanding the fact that SpaceX is a federal contractor and is, across most of its employee base, only allowed to hire American citizens. So the government simultaneously demands that SpaceX hire only American citizens and that it hire refugees, and the government sees no responsibility whatsoever to reconcile that. You're guilty either way.
Right. And so companies in general are in this bind now, where if they do everything they're supposed to do, they end up in violation of the civil rights law they started out trying to comply with. This has all happened without reason or rational discussion. This has all happened in a completely hysterical, emotional frenzy. And what these companies are realizing is that they're now on the other side of this, and there's simply no way to win.
Well, there's an analog to that, which is very interesting. I started to see all this happen back in 1994, because I was at Harvard when The Bell Curve was published. And I watched that blow up the department at Harvard, and it scuttled one of my students' academic careers, for reasons I won't go into. But I was working with that student on developing
validated predictors of academic, managerial, and entrepreneurial performance. I'm interested in that scientifically: what can you measure that predicts performance in these realms? And the evidence is starkly clear. The best predictor of performance in a complex job is IQ. And psychologists tore themselves to shreds, especially after The Bell Curve,
trying to convince themselves that IQ didn't exist. But it is the most well-established phenomenon in the social sciences, probably by something approximating an order of magnitude. So if you throw out IQ research, you pretty much throw out all social science research. And that turns out to be a big problem.
Personality measures also matter. Conscientiousness, for example, for managers and openness, which you mentioned earlier, for entrepreneurs. But they're much less powerful, about one-fifth as powerful as IQ. Now, the problem is that IQ...
measures show racial disparities. And that just doesn't go away no matter how you look at it. Now, at the same time, the U.S. justice system set up a system of laws that governed hiring that said that you had to use the most valid and reliable predictors of performance that were available.
to do your hiring, your placement, and your promotion, but none of those could produce disparate impact. Which basically meant, as far as I can tell, that whatever procedure you use to hire is de facto illegal. Now, I don't know why this hasn't become a legal issue, because lots of companies could say, well, we use
interviews, which most companies do use. But interviews are not valid predictors of performance; they're not much better than chance. Structured interviews are better, but ordinary interviews aren't great at all. So they fail the validity and reliability test.
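For what it's worth, the arithmetic behind claims like "one-fifth as powerful" works out if you read a predictor's validity as its correlation with job performance and its power as the variance explained, which is the square of that correlation. The coefficients below are illustrative placeholders chosen to match the claims made in the conversation, not sourced figures; published meta-analytic estimates vary:

    # Illustrative validity coefficients (correlation r with job performance).
    # These numbers are placeholders consistent with the conversation, not citations.
    predictors = {
        "IQ / general mental ability": 0.50,
        "conscientiousness":           0.22,  # chosen so r^2 is ~1/5 of IQ's
        "unstructured interview":      0.20,  # "not much better than chance"
        "structured interview":        0.45,  # better, per the discussion
    }

    for name, r in predictors.items():
        # Variance in performance explained by the predictor is r squared.
        print(f"{name}: r = {r:.2f}, variance explained = {r * r:.1%}")

On these numbers, IQ explains about 25% of performance variance and conscientiousness about 5%, roughly one-fifth as much, which is the sense in which personality measures are said to be far weaker predictors.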
And so I don't think there is a way that a company can hire that isn't technically illegal in the United States. And I looked into that for years, trying to figure out how the hell this came about. And the reason it came about is that the legislators basically handed their responsibility off to the courts and decided they were just going to let the courts sort this mess out.
And that would mean that companies would be subject to legal pressure and that there would be judicial rulings in consequence, which would be very hard on the companies in question. But it meant the legislators didn't have to take the heat. And so there's still an... ugly problem at the bottom of all this that no one has enough courage to address. But the upshot is that, as you pointed out, companies find themselves in a position where no matter what they do, it's illegal.
I've had lawyers literally write analysis for this as I've been trying to figure it out, employment law lawyers. And literally, you read the analysis, and it's absolutely 100% illegal to discriminate on the basis of these characteristics. And it is 100% absolutely illegal to not discriminate.
on the basis of these characteristics. And both of those are true, right? You mentioned interviews. Interviews are an ideal setting for bias, because even if you just assume most people like people who are like themselves,
is a member of a certain group going to be more inclined to hire members of that group? Probably yes, if there are no other parameters. Precisely. You want to get to quantitative measures because you want to take that kind of bias out of the system. But quantitative measures are presumptively illegal, because they lead to bias through disparate impact. Yeah, so maybe the term is Kafka trap, right? You end up in this vise, and then everybody is just so mad
that you can't even have the discussion. And so this is the downward spiral. On the one hand, I think there's a lot of this that just fundamentally can't be fixed because a lot of these assumptions, a lot of this stuff got baked in going back to the 1960s, 1970s. So a lot of this is long since settled law, and I don't know that anybody has the appetite to reopen.
Pandora's box on this. Having said that, with this new administration, the Trump administration, coming in, I would say every indication is that its policies and enforcement are going to flip to the other side of this. And so one of the things that's very fascinating about what's happening in business right now is that a lot of boards of directors are basically having a discussion
internally with their legal teams, saying, okay, we cannot continue the overt discriminatory hiring and employee segmentation that we've been doing. We're not going to be permitted to. And so we have to back off these programs. And you're already seeing Fortune 500 companies starting to shut down DEI programs, and I think you're going to see a lot more of that.
Because they're going to try to come into compliance with what the new Trump regime wants, which will be on the other side of this. But the underlying issues are likely to stay unresolved. Maybe this is too optimistic on my part, but in my time in business, the 80s, 90s, 2000s, it felt like we had a reasonable detente. And although you ideally might want to get in there and figure this stuff all out,
as long as it's kind of kept to a manageable simmer, you know, you can kind of have your cake and eat it too and people can kind of get along and it's okay. You know, maybe it's not a perfectly merit-based system or maybe there's issues along the way, but fundamentally companies worked really well.
for a long time, if you can work your way out of this elevated level of hysteria. And optimistically, I would say that's starting to happen. And the change in legal regime that's coming will, I think, actually help it happen. Right. So you're optimistic because you believe that the free-market system is flexible enough to deal with ordinary stupidity. But insane, malevolent stupidity is just too much.
Yeah, I think that's reasonable. Well, I do think that's reasonable, because everything's a mess all the time and people can still manage their way forward. But when you have a policy that says any identifiable disparate outcome, with regard to any conceivable combination of groups, is an indication of illegal prejudice, there's no way anybody can function, because
those are impossible constraints to satisfy, and they lead to paradoxical situations like the one you described Musk's company as being entangled in. And it's so frustrating for anybody who's actually trying to do something that requires merit that they'll just throw up their hands.
And so, yeah. Okay, I'm going to stop you there, because we're out of time on the YouTube side, but that's a good segue for what will continue on the Daily Wire side, because we've got another half an hour there. And so for all of you watching and listening, join Mark and me on the Daily Wire side, because I would like to talk more about what you see
could be done about this moving forward with this new administration, and how you're feeling about that. I mean, you made a decision, I guess early in 2023, like so many people, to pull away from the Democrats and toward Trump, strange as that might be. And I'd like to discuss that decision, and then what you see happening in Washington right now, and what you envision as a positive way forward,
so that we can all rescue ourselves from this mess before we make it much deeper than it already is. So for everybody watching and listening, join us on the Daily Wire side. And Mark, thank you very much for talking to me today. I hope we get a chance to meet in San Francisco in relatively short order. And I'm also looking forward to continuing our discussion in a couple of minutes. Join us, everybody, on the Daily Wire side. Good. Thank you, Jordan.