Daniel Schmachtenberger || Towards a Radical Cultural Enlightenment

Feb 24, 2022 · 1 hr 9 min

Episode description

In this episode, I talk to social philosopher Daniel Schmachtenberger about exponential technology and its effects on our current world. According to Daniel, organizations that harness the power of modern tech rarely use it for good–like how social media companies boost polarizing content to maximize user engagement–leading to a distrust of science and destabilized democracies. To overcome humanity’s current existential threat, Daniel argues we all need to work towards a radical cultural enlightenment. We also touch on the topics of collective intelligence, human development, power, responsibility, and civilization.

Bio

Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue. The throughline of his interests has to do with ways of improving the health and development of individuals and society, with a virtuous relationship between the two as a goal.

Motivated by the belief that advancing collective intelligence and capacity is foundational to the integrity of any civilization, and necessary to address the unique risks we currently face given the intersection of globalization and exponential technology, he has spoken publicly on many of these topics, hoping to popularize and deepen important conversations and engage more people in working towards their solutions.

Website: consilienceproject.org

 

Topics

02:52 Techno-optimism vs techno-pessimism 

04:28 Definition of exponential technology

08:39 Is the world getting better from tech?

10:37 The radical asymmetry of power

13:58 Decoupling rewards from development

25:19 A new social media algorithm 

28:56 Tribal politics, certainty, and perspective taking 

33:55 Developing better cognitive capacities

42:06 Rights and responsibilities in a liquid democracy

46:23 The next phase of open societies

49:26 The Consilience Project

52:23 The need for cultural enlightenment 

56:13 Creating an antifragile world

58:49 Collective intelligence

1:00:39 Establishing expertise and credibility in institutions

1:05:24 The unique existential threat of the 21st Century 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Exponential tech is really powerful. It's like the power of gods, at least little gods. But we don't have the kind of wisdom, love, and prudence to use that power well. And you don't get to keep doing exponential warfare and exponential externalities on a finite planet and not blow yourself up.

Hello and welcome to the Psychology Podcast. In this episode, I talk to social philosopher Daniel Schmachtenberger about exponential technology and its effects on our current world. According to Daniel, organizations that harness the power of modern tech rarely use it for good, like how social media companies boost polarizing content to maximize user engagement, leading to a distrust of science and destabilized democracies. To overcome humanity's current existential threat, Daniel argues, we all need to work towards a radical cultural enlightenment. We also touch on the topics of collective intelligence, human development, power, responsibility, and civilization. It's always great chatting with Daniel. He's a good friend and a very thoughtful human being. So without further ado, I'll bring you Daniel Schmachtenberger.

Hey, Daniel, I can't wait to talk to you today on the Psychology Podcast. I'm so happy to be talking to you again, Scott. It's been a while. It has been a while. I love the beard. I love the beard. Thank you. You've got a couple months to catch up. I do, a couple of months. But you look very wise, and that will catch up with the fact that you are wise. You know, you are a very wise human. As long as I've known you, I've been really impressed by your insights and your almost clairvoyance about the future of civilization, which is pretty cool. Did you always have that kind of ability to think not short term, but very, very long term, about the arc of humanity?

I think it's worth noting that I'm profoundly less good at that than I wish I was, and also profoundly less good at it than I thought I was when I was younger. Oh, that's fair enough. It's an important thing to comment on. But yeah, I was fortunate. I was homeschooled growing up and got to have a pretty unique, largely self-guided curriculum, shaped by the things my parents were into, so I was exposed to them. One of them was Bucky Fuller, and so comprehensive anticipatory design science, thinking through the long arc of history and the long arc of the future and how we design with that in mind, was kind of an early background. And then a lot of Fritjof Capra and systems science that was also doing kind of long-term forecasting and deep history. So I would say long timescales have been with me most of my life.

Well, it's really allowed you to see and point out a lot of things going on in our world today that could be greatly, greatly improved. And there's a lot that could be improved. So let's take a slice of some of the things that have been on your mind lately. I know one big thing is technology, the impact of technology on our psychology, and the risks of technology.

But I am wondering to what extent you see the problem as extreme. Do you see it as extremely bad, as, you know, people are panicking, there's a whole tech panic? Where are you on the spectrum, from: it is completely, fundamentally turning kids into robots and causing them to be addicted, suicide rates, et cetera; versus the other end of the continuum: there's a new book that just came out by Robby Soave, I don't know if you've heard about it, I think it's actually called Tech Panic, where he says, everybody calm down, there's really nothing to worry about here, all the liberals are panicking for nothing. Where do you kind of fall on this large, broad spectrum?

Yeah, there are naive techno-optimist and naive techno-pessimist views. The naive techno-pessimist view sees the problems that are associated with tech and takes a Luddite direction, and that approach will always just lose, because it will have less power than whatever group is embracing the tech, which has the power. So really, there are no futures that have any chance that aren't high tech, because even if they were better, they would just lose.

And so from a realpolitik assessment specifically, now, exponential tech is so much more powerful than every other form of legacy power that only the groups that are developing and deploying exponential tech will have any say in the future. Wait, can you actually define what exponential tech is?

Yeah, there are a number of different ways we could define it. Tech that has exponentially more influence or power, or where exponentially fewer people need to be involved, or exponentially less resource or time. So formally, exponential tech would mean technology that recursively makes itself better. If you think about computation, I can use a computer to simulate how to make a better computer chip, get better transistor density, and then that better computer chip can do better simulations to make a better computer chip. I can use AI to make better AI. But not just formally that. There are kinds of tech that help advance a world system where that world system as a whole makes better tech. Nukes don't necessarily make better nukes, but they definitely concentrate power to where the groups that have them can do more tech investment, including in nukes. So I wouldn't call a nuke an exponential tech formally, in terms of self-recursion.
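
A minimal sketch of that distinction, with made-up numbers rather than a model of any real technology: a tool that improves by a fixed increment each generation versus one whose improvement compounds on its current capability.

    # Toy sketch: linear improvement vs. recursive self-improvement.
    linear, recursive = 1.0, 1.0
    for generation in range(10):
        linear += 0.5      # each generation adds a fixed increment
        recursive *= 1.5   # each generation improves in proportion to
                           # its current capability (self-recursion)
    print(linear)      # 6.0   -- arithmetic growth
    print(recursive)   # ~57.7 -- exponential growth (1.5 ** 10)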

But there's a range of definitions. Mostly, when we say exponential tech, we're talking about computation and its derivatives that do recursively affect themselves. But you also have things like a product for sale, a good or service that doesn't have atoms involved, so there's no unit cost. I make the thing once and then sell it an indefinite number of times, so it's all margin, so I get an exponential return on capital as a result. This is why pretty much all the unicorns are software companies, and why so much of the capital has flowed out of hardware and into software, which is why Silicon Valley has been largely software.

But exponential tech also means, if you look at the timescale it took for Facebook to reach a billion people compared to, say, anything else historically, the first oil company or whatever, the speed to scale. Or that Facebook has three billion users on a monthly basis, so the total amount of scale. Or that those social media companies have AI models that can predict your behavior better than intelligence agencies can for their own populations. Also, it takes way fewer people to run one of those companies or that technology. And so by exponential tech we mean roughly all of those things, where there is some kind of either exponential return on capital, or the tech making the tech better, or basically increased scale of impact from smaller groups in shorter periods of time. Okay, cool. So yeah, so where do you see that?

Yeah, so the center of that is computation. The very center of that is AI, and then computation and AI working on computational substrate, like how to make quantum computers and photonic computers and better GPUs and all of that, is the next step. Then all the sensors that go into it, the whole Internet of Things world and the AR/VR kind of extensions. Then computation and AI applied to nanotech, biotech, protein folding, the rest of the world. So that's kind of the way of thinking about it.

I would say that exponential tech provides so much more power than everything else that it just wins, via economic warfare, political information warfare, whatever you want to call it, competitive dynamics. And so if there's a group that has got a vision for how to make a better future but doesn't have the power to enact it, it's not going to matter. And the power to enact it is going to come through the size of the lever that is affecting the choices, which is going to be the tech. And so right now, I would say there are only a few types of groups that are really developing and deploying exponential tech, which means they are the types of groups that have the most influence on shaping the future currently, and none of them are thinking adequately about the types of things that matter for the types of future we want.

Okay, let's pause there. You're saying so much that is rich. Now, I assume Facebook is one of those that you're referring to.

And do you think they have been rightly criticized along those lines, for not fully realizing the potential they could have, or the kind of power they could have? Totally, yeah. Okay, I didn't answer your initial question; I started to go off in another direction around the techno-optimist versus techno-pessimist directions.

Nobody wants to go back to a pre-technology time where Novocaine didn't exist and you're still getting dental work, right? There's lots of things about tech that we like, but all the tech also corresponds with externalities of harms, across environmental supply chains or social dynamics or whatever it is. So is the world getting better from tech, like Pinker and Rosling and people would say, or is it getting worse, like so many environmentalists and the Center for Humane Technology and other places would say? It's getting better and worse at the same time, on different metrics. That's the most nuanced answer I've ever heard in the history of answers. I love that. It's getting better on the metrics that we're measuring and optimizing for, particularly the ones that have capital associated with them; it's getting worse on all the other metrics.

It's getting worse on heaps of psychological metrics. Related to this podcast, we were just talking earlier about how there are so many psychiatric disorders where people don't have a full-blown diagnosable version but have a subclinical version, of generalized anxiety disorder, or of borderline-type dynamics, or of addictive tendencies, or of existential angst and nihilism. Ubiquitous psychopathology is one of the externalities of all of our tech and success. Do you mean it's a consequence? Or, there's obviously a bidirectional loop there, right? There is a bidirectional loop.

The drug dealer says, hey, I'm just supplying a thing that the people want. But when they offer the first hit for free to get the people to want it, starting when they're kids, in bad environments where they don't really understand the full consequences, and then offer it to people who are already addicted, there's a supply-side responsibility missing, especially given the asymmetric capacity of the supply side relative to the demand side. The supply side then gets to have this plausible deniability of saying, I'm just supplying the existing demand, but it's manufacturing demand that is not actually healthy, and that is manufacturable because of the asymmetry of supply.

So once companies start getting big enough to make people want shit that they never had before, in fact make them feel unhappy if they don't have the shit that they never had before, and build in designed obsolescence and style changes that make them continue to need the new shit, and then say, look, we're just giving the people what they want, there's something there. Basically, rather than authentic demand for real things driving supply of real value,

you start to have supply wanting to continue to concentrate its power, driving demand, to then be able to have the plausible deniability of saying it's all voluntary. But it's appealing to the lower angels of our nature rather than the higher angels of our nature.

And when you start to get such radical asymmetries, multi-billion-dollar or trillion-dollar companies that are coordinated while the demand side is not coordinated, there aren't, like, labor unions of all of demand that go up against Google that are also using the same kind of AIs. It's the individual Google user against Google, and Google's using the most advanced AIs in the world, with like a trillion-dollar market cap, against an individual person who doesn't even know that their data is being harvested to optimize their behavior to sell to somebody. Then the whole idea of the market and the rationality of the market is very broken.

And so then, if you want to maximize the lifetime revenue of a customer, addiction is a great way to maximize the lifetime revenue of a customer. And if you're something like Facebook or Google, where the advertiser is the customer, you're maximizing addiction across every different avenue possible. Right? I'm making money by maximizing their attention, and so then I can sell their behavior to all of the different industries.

That's kind of a meta play on addiction. This is so rich. So let's take a pause here for a second.

Hey everyone, I'm excited to announce that the eight-week online Transcend course is back. Become certified in the latest science of human potential and learn how to live a more fulfilling, meaningful, creative, and self-actualized life. The course starts March thirteenth of this year and goes until May first. The course includes more than ten hours of recorded lectures, four live group Q&A sessions with me, four small group sessions with our world-class faculty, a plethora of resources and articles to support your learning, and an exclusive workbook of growth challenges that we think will help you overcome your deepest fears and grow as a whole person. There are even some personalized self-actualization coaching spots with me available as an add-on. Save your spot today by going to transcendcourse.com. That's transcendcourse.com. We have so much fun in this course, and I look forward to welcoming you to be a part of the Transcender community. Okay, now back to the show.

Let's elucidate or delineate exactly what we mean by the lower angels and what we mean by the higher angels. I think that would be useful in this conversation now. My head immediately starts to go to, when you say lower angels, I think ego, I think self-esteem drive. You know, I think of it from a Maslow perspective, right? I think about what needs are being met. When I think lower, I think basic needs, versus higher needs, which are more growth-oriented needs. And I just wanted to see if you're on the same page. Are we thinking along the same lines here?

I'll say it a different way. Okay. So if I'm looking at, what is fast food as a phenomenon? What are Hostess and McDonald's as a phenomenon? It's providing base-level needs, survival needs, in the same way that food from your garden would provide them. So it's not that we're talking about base needs versus higher needs. We're talking about, even here, base needs versus base needs, but met in ways that are different in kind.

Okay, good, good, this is great. So it'd be like the distinction that I make between healthy self-esteem and unhealthy self-esteem, unhealthy self-esteem being narcissism, and healthy self-esteem being a sense of self-worth and competence. Yes. And healthy self-esteem is actually anti-narcissistic, right? In order to truly be healthy, it's acutely aware of all the things that it's not good at, and it doesn't have its sense of self-worth or lovability coupled to that, so it doesn't have to be in competition with everyone all the time. It wants to see a healthy world, so it wants to see other people succeeding. Its own success doesn't get made less by other people's success. So healthy self-esteem and narcissism aren't, like, slightly similar versions; they're almost antithetical in really deep ways. Right, they are. But yes, and by healthy and unhealthy I mean that they're both attempts to regulate that need for self-esteem. One does it in a very unhealthy, detrimental way that's not leading to growth, and one is actually caring about growth, even at the expense of an ego being hurt or something along those lines. Yeah.

Okay, so how we define healthy versus unhealthy is super important. I want to drop back down below esteem to survival again and go back to the fast food example, because it's a particularly clear one, and then we can go all the way up. In an evolutionary environment for humans, there wasn't a lot of abundant sugar floating around. You could get a few fruits a few times a year; there weren't that many fruit trees, and fruits weren't like they are now, hybridized so that there's so much caloric surplus in them. In a world where famine was a real thing and many people died from it, there was a dopaminergic response associated with sugar and the sweet taste, because those who got more of it survived better. And the same is true with fat, and the same is true of salt. Those were all hard things to get in evolutionary environments. So given that they were scarce and they were evolutionarily adaptive, we have a strong feels-good-do-it-again response, a reward circuit, associated with salt, fat, and sugar.

Industrialization came along, and we were able to extract salt and fat and sugar from the micronutrients they were a part of, make abundant amounts of them, and then recombine salt, fat, and sugar in lots of different ways, as a frappuccino, as a milkshake, as French fries. It's all just salt, fat, sugar with different types of flavor and palatability to maximize the dopaminergic response. So then, rather than starvation being the leading cause of death, we have obesity and obesity-related issues being among the leading causes of death in the West, in the developed world. And so what we can see is that we extracted the reward part from what it was supposed to be coupled to. It was supposed to be coupled to vitamins and protein and minerals and the micronutrients, and to scarcity of resource, so having some abundance where it was actually needed. Now we have excessive amounts, but the reward circuit hasn't evolutionarily changed, right? So it's a fucked-up reward circuit. And what fast food is to real nutrition, where you decoupled the reward from what it should be rewarding, decoupled it from an actual evolutionary benefit, that's what porn is to sexuality. It's what social media is to real social relationships. It's what online shopping is relative to the evolutionary impulse to explore and find relevant new things, novelty searching.

I would push back on one of the things you said, for the sake of conversation. Please. I think that social media can form real social relationships. It tends not to, and we see a lot of unhealthy aspects of it. But a lot of people who have difficulty communicating with each other outside of online contexts have found really strong communities, people in the autism community, for instance, and have formed real, deep relationships. So I would make the case that that analogy is not one hundred percent correct, because social media can afford real social relationships; it just doesn't tend to be the case.

Well, okay, so that's a great example, the autism community being an actual community online, because even without leaving their house and going into social environments that are awkward, they can find people and talk to them. There's also this issue of way too many men in China, because of the one-child policy for a long time, and it selected for men, so you have a bunch of unmarriageable men, something like maybe one hundred million of them. And that's a pretty big deal, because whenever a society gets too many men, if the men can't get laid, they decide to blow the society up, and so you usually have to start a land war to get rid of some of the men and, usually, advance the aims of the society. That's kind of a well-known phenomenon. But in modern times, what do we do with it? So one of the things that has been happening is a lot of these men falling in love with an AI chatbot. And you can imagine a world where VR porn with AI chatbot-type features deals with issues like that, and you're like, hey, porn did a good thing. It's not that it can't ever do a good thing. It's that, overall, it is oriented in a way that has reward circuits that are decoupled from the thing they evolved to enhance. Overarchingly, that's the phenomenon we're looking at: someone is getting a reward for something that is not net healthy or evolutionarily adaptive for them or their role in society. And so I would say at every level of Maslow's hierarchy, there are going to be places where I can get a reward circuit that makes me keep doing a thing where the thing I'm doing is not net developmental, in a meaningful definition of development, versus making sure that my reward circuit is actually coupled to real development.

So, an interesting thing about the Maslow hierarchy in my revised hierarchy is that I don't actually treat the needs as levels, but as, I guess, different forms of integration. That's why I changed it from a hierarchical triangle or pyramid to a sailboat, an integrated whole unit. And I remember having a conversation with you, do you remember, we had a conversation in your car once after a long walk, and we talked about integration. So there are two things that both of us have been super interested in since we've known each other: integration and emergence. And I'm wondering how they come into play here, because to me it seems like healthy use of social media is a possibility, but it actually takes a lot more integration than a lot of people have the patience for. It's a lot quicker to have this kind of disintegration on social media, and you see it disintegrate quickly. So I just want you to riff on that, I guess, your thoughts on that.

Yeah, it's actually very interesting, because, as you say, it disintegrates quickly, and one of the classic examples that we see is the amount of polarization that occurs on social media, whether we're talking about social justice, or vaccines, or climate change, or gay marriage, or whatever. And this is an example of where the technology is being developed in a landscape where the economics shaped it so heavily, right? They didn't want to charge the users, so the users aren't the customers. The advertisers are, so selling the user behavior is the product, and so they're gathering behavioral data on the user to be able to sell it. And that should actually be a breach of fiduciary law, except fiduciary law doesn't apply, because when it was developed, this type of thing didn't exist. Facebook is gathering an amount of information on you, from a predictive behavioral analytics point of view, that should be considered privileged information. They can predict your behavior in consequential ways better than your psychotherapist or your lawyer or whoever else could, and those people have a fiduciary responsibility to only use that information in your interest. But here they're selling it to other people, political actors and corporate actors, that don't; they want you to do a behavior that's aligned with their interests. There's a real problem with that. And you can see that if Facebook simply charged you for a service, and let you calibrate what it was optimizing for, and had a fiduciary responsibility to you in order to have access to that much of your behavioral data, it would be totally different. But because they're selling ads, they want to maximize your time on site and your engagement. And it happens to be that the things that are more addictive maximize time on site, and the things that are more polarizing maximize engagement. So you train the AI on: maximize time on site, maximize engagement. And it's just running personalized analytics at a scale nobody could possibly make sense of, and the most polarizing shit and the most limbically hijacking shit is what emerges.
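
As a minimal sketch, assuming a toy scoring model rather than any platform's actual system, the objective being described amounts to ranking every candidate post by predicted engagement and nothing else:

    # Toy sketch of an engagement-maximizing feed ranker. Not any real
    # platform's code; the features and weights are made up. The point is
    # that the only optimization target is predicted engagement.

    def predicted_engagement(user: dict, post: dict) -> float:
        # Stand-in for a learned model: outrage and novelty happen to be
        # strong engagement signals, so they dominate the score.
        return (0.6 * post["outrage"]
              + 0.3 * post["novelty"]
              + 0.1 * post["affinity"].get(user["id"], 0.0))

    def rank_feed(user: dict, candidates: list[dict]) -> list[dict]:
        # Whatever is most limbically hijacking floats to the top.
        return sorted(candidates,
                      key=lambda post: predicted_engagement(user, post),
                      reverse=True)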

The lower angels of our nature, again, meaning: it's easier to get the dopamine hit from the fast food than it is to get the better feelings from eating salad and veggies and exercising, which is going to take me months to feel, that, oh wow, I actually have less inflammation in my body and I can move and I feel better about things. There's a two-marshmallow, delayed-gratification thing there. The broken reward circuits are just faster; you can appeal to them more easily. So capitalism appeals to the lower angels of our nature better than the higher angels of our nature, because the higher angels of our nature don't need to buy an extrinsic thing to get happiness, and they don't take immediate action as quickly based on some kind of impulsivity. So the marketing is going to appeal to impulse-control disorders and push pain points to then offer market-based solutions. Yeah. And then being able to AI-optimize this ad over that ad, from this company over that company, just creates an arms race of maximizing addictiveness writ large, everywhere.

Well, you nailed it. That is what's happening. You know, the psychological concept of time preference seems relevant here. And it's just very interesting, because all the things we know in the self-control and self-development literature, all the things we know about having long-term goals and the strategies we use to pursue those long-term goals, I mean, social media companies are not designing things to bring out the best long-term outcomes in people. But they could. The point, I think, and this is where we're getting to, is that we would need a different business model. Yeah, yeah. And so I really want to have that conversation with you too. What could this new business model look like? What does positive product design look like? A colleague of mine is doing great work on design, as you and your colleagues are. What would the new algorithm look like?

I believe the colleague that introduced us was Zak Stein. That's right, that's right. Some of the early work he did was work on development, following the Piagetian kind of work on stages of childhood development past eighteen, which is the age at which, by assumption, humans stop meaningfully developing, and continuing to ask: what does a developed human being mean, in a meaningful sense, ongoingly? How do we assess that? How do we create education for it? How do we personalize education for it? And so the group Lectica built the thing called the Lectical score. One of the things that they were looking at, they're like, this is a good characteristic of development writ large: someone's orientation to seek multiple perspectives in a complex scenario, recognizing that each perspective will have some signal, not be pure noise, but that every perspective will probably have some noise in it. So let me seek as many perspectives as possible. The first part is perspective seeking and perspective taking. The next part would be the ability to parse those perspectives to try to find the signal and get rid of the noise, and then do synthesis. And so the complexity to be able to take many perspectives, to parse them, to bring them together, and yet still hold enough uncertainty to continue to evolve, that would be a thing we would like to develop in people. So, let's say we could automate Lectical assessment of people based on their online behavior.

And let's say that we trained Facebook's algorithm to increase people's Lectical score, so that's what it was optimizing what it put in front of them for. And let's make it simpler: we want to increase people's attention span rather than decrease it, and we want to increase their non-dopaminergic attention. So we don't just increase their attention span by giving them only more animated videos that they can make it through longer, but increase their attention span with text, right, with some boring shit. And it's actually really important why: it's important, if there is to be any volition or self-determination of the being, that they're able to engage their attention in a self-directed way for enough time to be able to actually get enough information to process the complexity of a thing. So let's say that Facebook said, all right, we want to increase everyone's attention span, we want to increase their intentionality rather than their distraction, so we're going to do continual check-ins on the nature of their intentionality. I have some friends that just sent me a thing on that; they're working on developing some technology to do attention and intention up-regulation. I love that. You can see a lot of things that we could assess and add to the objective function that we wanted to optimize for that would be net positive. And everyone would have a concern: well, is the right going to have control of this? Is the left going to have control of this? If it's perspective seeking, that doesn't fucking matter. We're not trying to make people hold one view more; we're trying to make people better at seeking the range of views, being able to understand them, being able to articulate them in more nuanced ways.
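
A minimal sketch of that objective swap, under the assumption that developmental signals like these could be estimated from behavior at all: the ranking machinery stays the same, only the objective function changes. The metric estimators here are hypothetical stand-ins, not an existing assessment API.

    # Toy sketch of a developmentally aligned ranker. The three metric
    # estimators are hypothetical stand-ins for automated assessments
    # (something Lectica-like); real versions would be hard to build.

    def lectical_delta(user: dict, post: dict) -> float:
        # Hypothetical: predicted growth in perspective seeking/synthesis.
        return post.get("perspective_complexity", 0.0)

    def attention_delta(user: dict, post: dict) -> float:
        # Hypothetical: predicted gain in sustained, non-dopaminergic attention.
        return post.get("long_form_score", 0.0)

    def diversity_gain(user: dict, post: dict) -> float:
        # Hypothetical: how far the post sits outside the user's existing views.
        return abs(post.get("viewpoint", 0.0) - user.get("viewpoint", 0.0))

    def developmental_score(user: dict, post: dict) -> float:
        # Weights are arbitrary; the point is what is in the objective:
        # not engagement, but measures of development.
        return (0.4 * lectical_delta(user, post)
              + 0.3 * attention_delta(user, post)
              + 0.3 * diversity_gain(user, post))

    def rank_feed(user: dict, candidates: list[dict]) -> list[dict]:
        return sorted(candidates,
                      key=lambda post: developmental_score(user, post),
                      reverse=True)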

But Daniel, this is the tricky thing, because the lower angel, the most immediate, short-term response, is the one that's partisan, the one that's tied to your own identity. I'm just pointing this out for our audience. Yeah, that's a kind of McDonald's. It's a kind of heroin. It's a kind of porn. Tribal identity in politics is to real political thinking what porn is to real intimate relationships. Well, speak for yourself. And so what I mean is, it's a way to get the dopaminergic hit of certainty and sanctimony and righteousness and group identity without actually advancing a functional thing.

Okay, think about this. If, for so many people in the US, whether our issue is social justice or vaccines, our greatest enemy is our fellow countrymen, if the right's antipathy for the left and the left's antipathy for the right is so intense that our fellow countrymen are our greatest enemies and we put most of our energy into infighting there, well, what happens to that nation? That nation breaks down pretty fucking fast, while China's Belt and Road Initiative becomes the new twenty-first-century global power structure, because they don't create a system of internal antipathy. And if we put all this energy into political infighting every four years to get someone elected, who will not be able to focus on anything that takes four years or longer, and all infrastructure and everything takes four years or longer, then they'll only be able to focus on super-short-term shit, and whatever they do will get undone in the next four years.

And whatever new technology for engaging the people and getting them out to vote or changing their minds we figure out how to use, the other side, if we're effective, will reverse-engineer and make a better version of. So we just drive arms races on that. That's just a path to self-termination. You cannot possibly run a democracy effectively where your fellow countrymen are your enemy, and where all of the energy turns into waste heat because of infighting. All that does is select out democracy and have technologically empowered authoritarianism guaranteed to rule the world. And so if you are busy doing left-right infighting, you're also just busy killing open societies completely and voting for digital, technologically empowered autocracy running the world. Your little issue that you're focused on is a much littler fucking issue than: do we have a process to address issues at all that is aligned with the kind of society we would like to have?

That's exactly what we're talking about right now. So how can the default response be perspective taking? That's my question. Well, this is how we come to the one-marshmallow versus two-marshmallow version of politics. There's a dopaminergic hit on certainty. Think about it in an evolutionary environment, a tribal-type environment: having clarity, having certainty about how to act, had value. Are there dangerous predators over there? Is there a tribe that we're going to have to deal with? Is there food to go pursue? There's value in the certainty, and it was fairly straightforward to get it for those types of things, because you could just go look, and you could poke the thing with a stick and have direct five-senses experience of it, and someone else could come have five-senses experience of the same thing.

But now, almost all the things we're certain of, nobody can poke with a stick, nobody's seen. Nobody really has a sense of what the CCP is doing, or what happened at the Wuhan Institute of Virology, or whether the mRNA really does such and such. No fucking idea, but totally certain. Everyone thinks they're certain they know, though. Yeah. And so it feels good to be certain. There's a certain good feeling of righteousness and kind of moral sanctimony, and there's a certain good feeling of being with the other ones who are right. But all of that is the good feeling of eating McDonald's. It's a dopaminergic hit that is artificially giving a neurotransmitter to your brain decoupled from the actual thing that neurotransmitter would have evolutionarily signaled for. And so it feels good, but it is actually making you dumber and badder while you're thinking that you are smarter and gooder, thinking that you are doing the true and good thing.

Yeah. If we were to get really nerdy here: there are multiple dopamine pathways, and I've written articles about the later-evolved dopamine pathways that send projections to the dorsolateral prefrontal cortex, which is what we want to engage more when we're talking about your kind of perspective taking, and holding things in consciousness, and being able to inhibit the prepotent response. Those are all executive functions, which do have dopamine projections, but they evolved later, so they're not as default. I've written about that; the nerdy dopamine pathway is what I've called it. So we need to activate more of the nerdy dopamine pathway. I think you would agree. Yeah.

Okay, so this is very interesting. Let's come back to polarization, and come back to the topic of democracy for a moment. Modern democracies emerged following the printing press, because the printing press meant textbooks and newspapers for everyone. So you don't need a privileged nobility class that has access to education and information; everybody can get educated and everybody can know what's going on.

So that they can all participate in government. Social media is really an anti-fourth-estate, the exact opposite of a printing press, because rather than everyone being able to get a newspaper and have the same information about base reality to then make decisions on, here we're going to customize a news feed to my preferences. So two people will each have certainty, because there's a ubiquity of information that was customized to my likelihood to believe it and interact with it, but they're seeing completely different worlds. And that makes me engage more, seeing the things that hit my preference biases. And so you have AI more powerful than the AI that beat Kasparov at chess optimizing everyone towards doubling down on their own biases, group identities, limbic hijacks, and bias for certainty. So democracy breaks with that: you get an increasingly polarized population, which creates an increasingly polarized representative class, which has more and more gridlock, which can't get shit done, which just loses to authoritarianism, and especially technologically empowered authoritarianism that, without the gridlock inside, can implement exponential tech towards long-term visions.

So when we come back to the printing press thing, it's not just that democracy emerged when we had broadcast and the ability to get the same message out to everyone. It's that reading was actually an important part of it. This is something that Marshall McLuhan discussed, and that the guys at the Digital Life Foundation go into quite a lot. At the time of, say, the development of this country and the political thinking associated with it, there was very little hypernormal stimulus in reading plain text on paper, and you had to actually stay focused for a fairly long period of time, without distractions, to get the content in. So you were actually training attention span like a muscle. And obviously, to engage in democracy, I've got to be able to hold my view and someone else's view and someone else's view at the same time. The complexity of the situation means I need more working memory to hold the whole thing. If my working memory and attention span become too small, I can only be a fundamentalist who has reactionary views that can fit into sound bites, because the truth is too nuanced to fit into a sound bite. So you have to have much longer attention spans, which means you have to train attention span, which means, rather than fast cycles of dopaminergic stuff, I need stuff like reading that's boring as fuck. And then also writing: when the medium was reading and writing, writing meant I had to be able to understand what I thought well enough to articulate it in a structured way. And so I'm not saying we go back to a pre-digital age; we're not going to. But we have to look at the cognitive capacities that those media were developing in us and say, in this new media age, how do we develop those same cognitive capacities? And if we don't, what does that portend for our social structures?

Yes, and. I'll play improv here with you. Yes, and: the dopamine pathway that goes to the dorsolateral prefrontal cortex is correlated with the personality trait openness to experience. So there are individual differences in this, and there are some people who do find greater reward value in information, in learning new information. So, knowing that, for those that are very low on that personality trait spectrum, where they're not getting reward value from information, they're getting more reward value from the more primal dopamine pathways, sex, drugs, rock and roll, how do we increase that motivation in everyone?

That's obviously a million-dollar question. The first thing is that you can be aware of what increases the opposite of it in people and start looking at how we deal with that. We can't have multi-billion-person networks with multi-trillion-dollar market caps using AI and personalization to make people oriented to certainty, even more aggressively, angrily, righteously certain. If you're doing that thing, then it's not just, can we develop everyone more? It's, look, we're actually fucking the thing up pretty badly. So that's one thing to look at. The other thing is, okay, are those differences genetic? Are they early childhood? Are they even in utero, what was going on in the neurochemistry of the mom that was affecting the receptors' formation? Who the fuck knows. I think it's very complex, and every piece of science looks at the thing it's looking at and then over-indexes how important that thing was.

But I think it's a complex function of a lot of those things. And you can look across cultures, where you're going to have a distribution of different genetic predispositions and typologies, yet you still see the bell curve moved one way or the other for the culture as a whole, which indicates the way enculturation works. So you look at Buddhists and their orientation regarding violence, across age groups and whatever, and you ask, are the Buddhists less violent historically, across the last three thousand years, ten million people, varying plus or minus wartime, peacetime, famine, whatever? Yes, they're just less violent. Can you attribute that to genes? No, it's lots of different gene groups, and some people with the same genes who are not Buddhists but are Hindus or Muslims or whatever else are more violent in other situations. So enculturating people to compassion for all sentient beings as the core thing from early childhood, with all of the practices, actually works. If you look at Jews and ask, do Jews develop a higher degree of education across the bell curve for their people, regardless of the embedding environment they're in? They do. They've done a pretty good job of that historically, across a couple thousand years at least. And so there's something in the culture, in what the value systems are and how they embed those value systems.

You know, does the Sabbath have something to do with making sure that tech isn't running your life too much, because you have a break from its effect on your life where you're also communing with the sacred and with the sense of how my life should be directed? That itself would be a social technology, a very interesting one, able to bind a culture and increase its power to steward physical technology well. So can we develop everyone? Can we prevent people from being developed in the worst ways? Yes. Can we develop people in progressively better ways, as whole cultures, independent of individual disposition? Yes. Should everyone be equally good at all things? No, because the person who might have less reward circuitry for complex cognition might be a fucking brilliant musician. And so a great society wants lots of different kinds of things, right?

Insofar as it starts to have to do with governance, now we have this big question. This is why: do we want a direct democracy, where everyone votes on everything, or one where they just vote on representatives? A lot of the arguments against direct democracy were like, fuck, do you really want these people voting on everything? Or do you at least want them to vote for a representative class that has some training in law and politics and stuff, to not do super dumb stuff? We can do better than that now. We can do things like a liquid democracy, where rather than picking one representative to represent me across all things, where, okay, this person is supposed to have a say on defense issues and technology issues and monetary issues and international policy and domestic policy, like, what the fuck, who has that expertise? So what if we have a liquid democracy situation where we have, say, a digital democratic platform I get to engage in on various topics, and on the topics where I don't understand them well, I can cede my vote to someone who I think understands that topic well. But the person I really trust on what we should do with the energy grid might be totally different from the person I trust on what we should do with monetary policy.

Now, what if we can go a step further, into something like qualified platforms, where you bind rights and responsibilities? This starts to sound really tricky, but when you think about what all the alternatives are, this is not more scary; it's less scary than what's scary about all the other options. So by binding rights and responsibilities, do you mean integrating them? What do you mean by binding? If I have a right without an attendant responsibility, we end up getting entitlement dynamics. If I have a responsibility without a right, we get enslavement dynamics. And so the binding is: if I want more rights in the system, there are more responsibilities, because if I have the right to influence the system in a certain way, somebody's going to be responsible to implement that shit, and if I don't have any responsibility, I'll always vote myself benefits that I don't want to take responsibility for paying the cost of. So there's something around: the more responsibility we can take, the more right we have to influence the thing. It's a meritocratic kind of idea. There are of course problems with it; we could get into those at a time when we have more space for the nuance.

But let's just imagine for a moment that we had a digital-engagement kind of democratic platform, where I could help craft individual propositions, so I didn't just have to vote yes or no on a bundled proposition where both options suck, yes sucks and no sucks, based on what's bundled into it, and where the structure of that forces the polarization of the population, since even those who are voting yes are causing harm to the people connected to whatever it harms, and so on. So we can actually do proposition creation in this process. But let's say there's a thing where, if I want to engage in a topic, there's a very basic test that I take that shows that I understand the general pros and cons of the proposition, or the basic facts of the issue. It doesn't mean I have to agree with them; it just means that I understand some basic things, and then I get to engage. If I don't, I still have a vote that I can proxy to someone who has, through liquid democracy. But rather than voting for a representative across the board, it can be someone who I think is good on that topic. And I can also choose to get more educated about that topic, and the platform will educate me, in audio or video or text, if I'm blind or whatever, and in any language I want, and it's free. So I have the capacity to get more educated. But should everyone who has no idea at all really have a say in what our energy grid stability policy against EMPs and Carrington events should be? I don't want that. I want people who at least have some sense of understanding of the basic things, so they aren't controlled by a manipulative populism, to engage in it. And if someone else that they know does have that basic understanding and they want to proxy their vote across, great. So now you have this situation where someone is like, fuck, I want to make great music; I trust these people in these areas; I don't care to know that much. But also, now you don't have the populist incentive for a demagogue leader to try to get all of those people to believe nonsense so that they can go vote in a particular nonsense direction.
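
A toy sketch of the mechanics just described, topic-wise delegation plus a basic-understanding gate, under heavy simplifying assumptions (no ballot secrecy, no Sybil resistance, no auditing); this is not any real platform's design.

    # Toy topic-wise liquid democracy with a qualification gate.
    from collections import Counter

    def resolve_vote(voter, topic, delegations, direct_votes, passed_test):
        # Follow this voter's delegation chain for this topic until we
        # reach someone who voted directly and passed the basic test.
        seen = set()
        current = voter
        while current is not None and current not in seen:
            seen.add(current)
            if passed_test.get((current, topic)) and current in direct_votes.get(topic, {}):
                return direct_votes[topic][current]
            current = delegations.get((current, topic))  # cede to a per-topic proxy
        return None  # no qualified endpoint (or a delegation cycle): vote lapses

    def tally(voters, topic, delegations, direct_votes, passed_test):
        resolved = (resolve_vote(v, topic, delegations, direct_votes, passed_test)
                    for v in voters)
        return Counter(r for r in resolved if r is not None)

    # Example: Alice votes directly on energy; Bob proxies energy to Alice
    # but monetary policy to Carol.
    votes = {"energy": {"alice": "yes"}, "monetary": {"carol": "no"}}
    proxies = {("bob", "energy"): "alice", ("bob", "monetary"): "carol"}
    passed = {("alice", "energy"): True, ("carol", "monetary"): True}
    print(tally(["alice", "bob", "carol"], "energy", proxies, votes, passed))
    # Counter({'yes': 2})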

I mean, look, I agree with this in theory, but I want to keep going back to: how can these big tech companies use this information to help redesign civilization? Well, what we're talking about right now would be more something like how the governments of the world could build the next phase of government. We actually had to build a lot of infrastructure for this government: town halls where the people could meet and talk things through, courts where the judiciary process could happen, and a mail system so the information could be shipped around. It's just that that's a super old system. It is not technologically relevant. And so now we have e-services, so you can log on at the DMV and do some basic shit, but the structure of government, how it processes information, has not evolved to process the kinds of information issues that we have in the way that it could.

Taiwan is the best example so far of an open society, a democratic country, actually building a democratic platform where people can engage in fundamentally different ways than in the previous system. You couldn't fit everyone in a town hall, there was no way, but you can fit everyone onto an Internet platform where they can contribute in constructive ways to the creation of the right propositions, and they're doing a brilliant job with that. We wrote a paper overviewing the way Taiwan transitioned to digital democracy, on the Consilience Project, if people want to check it out. Audrey Tang helped lead that work.

And it's kind of important to realize that the nuclear bomb, a massive advancement in tech, the first existential tech ever, was not created by the market. The state, in the recognition that there was an existential risk if it didn't create it, created an indefinite budget, a black budget, to bring all the top scientists together to build the tech that the country needed. We could do a Manhattan Project to take the AI and blockchain and all of the types of digital technology and build the next phase of open society. Recognizing that open societies are being eroded, and are almost totally gone, and that they're being eroded by the same tech, because the tech is making corporations exponentially more powerful, corporations that are actually eroding democracy in the ways that we have mentioned and are unable to be regulated by the democracy because they're getting more powerful than it, and authoritarian nation-states can implement the technology to make more powerful authoritarian nation-states if open societies don't innovate in how to use that tech. Because the thing we said at the beginning is that only the groups that are developing and deploying the exponential tech will have a say in the future, because that's where the power is. And yet none of the groups that are doing that are democratic or even have a democratic ideal. Whether it's a tech company or an authoritarian nation-state, they're top-down systems, so there is no bottom-up jurisprudence. And so I want to see open societies take on the role of innovating, not one type of tech, but the whole suite of digital tech, to make better open societies, because I don't see that open societies survive otherwise.

Okay, so let's talk about the Consilience Project a little bit. How are you doing that? You also are partnering with Tristan Harris on some stuff, right, from the Social Dilemma movie; he was featured in there. Can you talk a little bit about that?

Yeah. The Social Dilemma, probably many people have seen it; if they haven't, it's worth checking out. It just overviews a lot of the issues and harms that the attention-economy technologies create. Tristan's a lead in that; he runs a group called the Center for Humane Technology. And so, yeah, we're good friends and colleagues, and we've worked together talking with a lot of people in the US government and other governments on how to address some of these issues, and some with tech companies, and some on just general public education.

The Consilience Project is trying to put out content that will help people understand the deeper issues affecting the state of the world, so that they can more competently make sense of what is going on, beyond what they would get through their news feed or other sources, and so that the collective intelligence of the world can be better applied to innovating the things that matter.

We are about to do a series of ten new papers that centrally describe what we would consider the meta-crisis: climate change, the issues associated with AI risk and tech, and all the planetary-boundary-type issues are actually all instantiations of a kind of broken civilizational system. How do you understand what's wrong with the civilizational system, and what is needed to be stewards of the power of exponential tech? Kind of mythopoetically, you can say: exponential tech is really powerful. It's like the power of gods, at least little gods. But we don't have the kind of wisdom, love, and prudence to use that power well, and you don't get to keep doing exponential warfare and exponential externalities on a finite planet and not blow yourself up.

So what do we need in terms of a culture that is producing the love and wisdom, and then new social systems, made by that culture, that have the ability to utilize the tech but also bind, guide, and direct it in directions that are fully healthy for the people and the planet, not the extractive, harm-causing ones? So we're writing that series, and then other things, to try to work on having more people understand the bigger picture of what's going on, so that there are more people capable of problem-solving in meaningful ways.

problem solving in meaningful ways. Well, I hope this podcast chat goes along those lines. You're not against exponential technologies, You're against increasingly decentralized exponential technologies. Not about decentralized Honestly, what you see with Facebook is a radical centralization of power. Right. You have one social media company that is radically more powerful than all the other ones and has a monopoly of attention. Facebook. I mean Google is that to search engines,

Amazon is that to online stores. There's a way that network dynamics started creating natural monopolies that weren't governed by previous antitrust type dynamics. And that's actually an example of how exponential tech leads to centralization in very harmful ways. But in other ways, decentralized exponential technologies are just bad. So exponential tech both radically centralizes and decentralizes power, both

in problematic ways. The decentralized power is the ability for a small group of people to do cyber attacks on infrastructure that can actually shut whole infrastructure down. The ability to, over the next little while, be able to build drone weapons and AI weapons and bioweapons by tiny, little non

state actors that have catastrophic ability. The decentralization of radical power equals the decentralization of catastrophic power too, So how do you deal with a world where like, we only had one catastrophe weapon when we had the bomb, but it was really hard to make, so it was pretty easy to control and monitor. But now we're talking about things that are not hard to make. They're super super easy to make, but super catastrophic. So the decentralization of

exponential tech gives a world of increasing catastrophes. The centralization of it gives dystopias, right now those are the only two attractors. I got catastrophes or dystopias. I don't like any of those options, right, so we need a third attractor that has the power to bind the tech in non to not be catastrophic, but is not a top down hedgemon in the way that it binds it, which would be dystopic. Which means, how do you act actually

facilitate a collective intelligence? Right, something that is truly democratic, but that is has the speed and effectiveness, which means it can't have polarization, it can't have travels, We can't have any of those dynamics, which is why it will probably fail, and which is why there's cultural work that has to happen first, and like pretty fucking significant culture work otherwise the dystopias, but that at least have top down control when and so this is why I really

care about this. How do we upgrade people's cultural orientation to be like, okay, fuck, let me really seek to understand the other perspectives that I have currently just made the enemy, and let me avoid the dopamine hits of certainty and righteousness that my perspective is already right? Because without that, there is really no chance

at all. Like, if you don't start to have a cultural movement that can instantiate new types of institutions that are of, for, and by the people, by a people that would instantiate those better institutions, then you either don't get the power to bind the tech, in which case the decentralized versions of the tech that emerge anywhere will just cause catastrophes, or you end up getting someone who has the power to do it but is not of or formed by the people, because the people can't agree with each other, in which

case you get AI-empowered authoritarianism. And so, like, it's either AI-empowered authoritarianism, decentralized catastrophes, or a radical fucking cultural enlightenment to be able to have a government of and formed by the people that has the capacities, because we've also developed the people to be able to do that. Yeah, you've called that the new Enlightenment, right,

something like that? Yeah, something like that. The term changes sometimes. So, well, how can we create a world that is more antifragile considering these factors? When you think about what creates fragility: you and I are talking on computers and microphones and all that, where the actual production of these computers took supply chains that probably span six continents, right, and dozens

of countries. So something can break in a local area that's the only place that has the mine for that thing, or the ability to do that kind of lithography, or whatever it is, and you get cascades of breakdown. And we saw that with Wuhan, right? One area has an issue, and then you have to have travel lockdowns, which means you can't move fertilizers across borders, which means that you lose crop supply in Northern Africa and parts of the Middle East, affecting hundreds of millions of people,

and you're like, oh fuck, that's very fragile. So increasing more local closed-loop dynamics addresses fragility, as does increasing stores of critical resources in terms of stocks and flows. But there's a trade-off between resilience and efficiency, and we've indexed for efficiency so much because of money-on-money dynamics. The indexing for efficiency means I don't want any stores,

but that means much less resilience. So there's a lot of things that we can do infrastructurally and whatever for antifragility, but there's nothing as significant as having a collective intelligence that actually thinks about the world better and can, one, problem-solve better, and problem-solve with the other people, which means cooperate and coordinate, like, sense-make and choice-make collectively better, and, even more fundamentally, not make the problem worse by getting inducted as warriors in

somebody's kind of narrative war or information war. So the cultural shifts that we're talking about, where people actually seek many perspectives. Yep, that's a good one. They seek many perspectives. They seek to kind of empirically validate stuff, they seek to synthesize them, they seek to have real, meaningful, generative dialogue, realizing that the other people that you hate are still political actors and they don't leave the

planet just when you win. They do shit, right? And so you have to coordinate with them or war with them. There is nothing we can do for resilience and antifragility better than increasing the collective intelligence of the people. Okay, I want to double-click on this idea of collective intelligence of the people. So can you explain a little bit more what you mean by a collective

intelligence for our listeners? Yeah, when we say collective intelligence, we're talking about the intelligence of the people, plus the ways that groups of people can be more intelligent than just the sums of the individual people. So how do we make institutions and groups that are intelligent? Which has to do with things like their ability to communicate and coordinate well. Obviously, the reason we have, like, a scientific institution is because some things are too complicated for a person to

figure out. So it takes a lot of people dividing up the work, working together well. But they have to be able to trust each other that they're not lying or whatever. They have to check each other's work. They have to make sure their ego dynamics, who takes the credit, don't get in the way so much that

it kind of messes the stuff up. And if you have a bunch of really smart people and not a good system for them working together, the collective intelligence of the group will still be low, even though the intelligence

of all the individual actors might be high. I have sadly been in many groups like that too. And so democracy and markets are supposedly collective intelligence systems, right? How do we have the intelligence of everyone work together to have the best things emerge, the best choices made? They've broken down in really significant ways, and they need really

significant updates. And like, the best thinking at the time of Adam Smith or the Founding Fathers is not the best thinking anymore, even though the spirit of some of those principles, you know, needs to continue. So when I'm talking about collective intelligence, I'm talking about the world's ability to make sense of itself and make choices that everyone can contribute to and agree with if they're going

to be affected by it or bound by it. We are living in an age where there's this weaponization of complexity and intellect, and it has led to anti-intellectualism and, I would say, distrust of science as a counter-weapon. And so I'm just wondering what you think we can do about that. Yeah, we have a paper at The Consilience Project on the challenges of twenty-first

century sensemaking that's worth checking out. And one of the things it discusses, I think this came out of the work of Ulrich Beck during the Cold War, specifically looking at nuclear risk, is that uranium isotopes in the air after Chernobyl blows up or whatever are invisible. And whether it is a democratic government or an autocratic government trying to figure out, well, how far did the wind blow that stuff, and what parts per million should

we allow, to say where do we tell all the farmers they have to till their stuff back into the soil versus where we can feed it to people? If you don't have a Geiger counter, you're gonna have a real hard time just kind of rule-of-thumbing that one, you know. And so there are a lot of people who are like, well, just before, when the tech advanced less rapidly and the world changed much slower, you could just kind of have a sense of, like, the way we've done things for a long time is the right way

to do things. We can trust the knowledge of our forefathers. We've always done it this way. Yeah, but you didn't fucking have nuclear isotopes. That's a new thing, and your hunch for what happens with them is just gibberish. It's based on nothing, right? So we need Geiger counters to check it out. We need to do some studies on

what parts per million really seem to have big effects. But that means that there is some privileged class of people who know how to read a Geiger counter and know how to do cell cultures that we have to trust, some new kind of priesthood, to tell us what the invisible but significant shit means. And so how do we know if we can trust them? Because it's invisible stuff, and this is COVID again, right? Or like, okay, we don't want certain

kinds of petrochemical toxins and pesticides or whatever in our food. Well, invisible stuff, and this is climate change again, has real causal implications, but you can't sense it yourself. You have to basically empower some group that has the capacity to sense it. And then the question is, how do you do checks and balances on that? How do you create an institution that has institutional capacity that the people actually trust? That's right, and it's, you know, it's a

tricky thing. But as we have an increasingly technologically complex world, we'll have more and more invisible stuff, and more and more hyperobject and complex stuff, and more places where what affects one thing affects lots and lots of other things, and all that complexity has to be factored. And so

now think about it like this. Okay, we want a democratic population who get to have a say in things, but they want to have a say in things like the application of tech when they absolutely don't have the education to create that tech, right? It was a whole bunch of PhD engineers from MIT or wherever who created the tech, and the people know they couldn't create the fucking tech.

But they want access to the tech, and then they want to make sure that there isn't some governing body that is governing it in a way that will affect their convenience or their economy or whatever it is. It's like, wait, if you can't understand how to make the tech, you probably can't be involved in its governance. Now, that doesn't mean you can't be involved at all. You can say what your values are, you can say what you care about. You can still tweet your

opinion about it. Well, no, that's actually the problem, right? I can tweet it. I can tweet an opinion that is based on gibberish. Yeah, I was saying that ironically. I know. But in terms of the actual systems we want to develop, the ability to regulate a thing well and the ability to understand it well are coupled. And this is that rights-and-responsibilities thing. If I want to have the right to really influence a thing, there's a responsibility to understand it well enough that I actually

know what I'm talking about. Oh, I agree. I mean, you see how many people on the internet think they're experts on the science of intelligence, just to take my own field for example, and it makes my head want to explode. Yeah. So are most of these issues that we're talking about, you know, existential threats, are they historically novel? How do you see that in terms of the whole arc of humanity? Yeah, very hard for a caveman to blow up the world. Very hard for

Caesar to blow up the world. Fair enough. Right, it's a technologically mediated thing. And as our technology got bigger, our civilizations got bigger, because we had technology that could extract resources and defend itself and whatever more effectively. But what that also means is we had technology that could

destroy ecosystems and win in war against other groups. And so if I'm chopping trees down, like, the world's pretty big. If I'm chopping trees down with axes, it does seem like trees are inexhaustible, so we just price the tree at the cost to extract it. I don't look at what it took nature to make it. I don't look at the effects on the ecosystem of what it's doing to stabilize topsoil and watersheds and oxygen and CO2 and all of that stuff. It's just that there's still

plenty of trees. I take this thing and make lumber out of it, and so taking the shit from nature is really just called being industrious, and there is no balance sheet of nature that has to be paid

or accounted for or whatever. But then the tech gets bigger, right, and we start to get into slash-and-burn of whole massive areas of forest to get the resources under them, and we go from a fishing pole to mile-long drift nets, and we go from half a billion people before the Industrial Revolution to eight billion people empowered by those types of radical industrial-age and nuclear-age and

digital-age technologies. The power of our tech has gone up so radically that the power of our both intentional and unintentional destructive capacity has gone up, right? And the constructive capacity also, though the constructive capacity is not for everyone, and it probably still has destructiveness associated with it, right? Oh, oh yeah. And so we didn't have the ability to start doing things at a scale that could really harm the world beyond our local area till the Industrial Revolution.

And the Industrial Revolution was like, okay, we can start to really scale some shit, which is why we went from half a billion people to eight billion people in almost no period of time, even though in three hundred thousand years of Homo sapiens we never broke half a billion people. And then it wasn't till the bomb, World War Two, that we had the ability to kill everything in a hurry, right? That was a big deal.

So we had to make this entire world system, the Bretton Woods, UN, mutually assured destruction world system, to make sure that we didn't use our new tech. And that was one catastrophe weapon. Now we have dozens of catastrophe weapons. It's a brand-new situation. And instead of having two actors that have them, where you can do mutually assured destruction, we have an indefinite number of actors, and you can't even know who has them,

so you can't do mutually assured destruction. Part of the solution to World War Two not turning into World War Three, the post-World War Two answer, was: we don't want the major empires to war, even though throughout the history of the world the major empires always war. But now the weapons are so big that if they do, they fuck everything for everyone, so they have to not war. But everybody still wants more stuff, so we have to get them to get more stuff without taking each other's stuff.

So we just have to make exponentially more stuff. So we're going to make a monetary system based on exponential growth, and that's going to drive an exponential growth in goods and services. But that also means an exponential growth in taking shit from nature faster than it can replenish, turning

it into waste faster than it can be processed. So then you start to hit planetary boundaries: overfishing and cutting down all the trees and excessive extraction of all the resources, and then turning it into climate change and pollution of a gazillion types over on this side, microplastics and dead zones in oceans. So the solution to how do we not war drove all the planetary boundary issues. And then the other thing, the other way we

avoided wars, was: we'll do this globalization thing where we get so globally interconnected that you don't want to bomb each other, because then your supply chains are messed up. But that is what then creates the fragility dynamics we mentioned, where the world gets so complicated and so fragile that a little collapse anywhere can create cascading collapses everywhere. So basically there were no global human-induced catastrophic risks

until World War Two. We got our first one in World War Two, and it was such a big deal we built an entire world system to mediate it. It bought us like seventy-five years of no world war, but of course lots of proxy war and economic war and info war and whatever. It created all these planetary boundary issues, created the fragility issues, and created the proliferation of tech across dozens of sectors, where now we have lots of catastrophe weapons, right? Not just nukes, but AI

weapons and bioweapons and cyber weapons and blah blah blah. So now we're in this situation where we have to build a new world system again, like we did after World War Two, but for a completely different risk landscape, and so it takes completely different characteristics. It won't be based on exponential growth of the monetary system. It won't be based on a system that drives more fragility, so it has to have interconnectedness without fragility. It won't just drive tech

to drive markets of all kinds. It has to have anticipatory regulation, because of the speed of the effects of tech, built right into the process. So this is basically the challenge the world needs to rise to right now, as I see it. Beautiful. Well, I'm really glad there are people like you who are thinking through these issues for civilization. Thank you so much for being on my podcast. Where can people go if they want to read your papers? You mentioned a

couple of papers that are coming out. Where can people read this stuff? Yeah, they can go to consilienceproject dot org and check it out. And I've noticed you're not on Twitter. Is that intentional or not? Yeah, I've contemplated it, but it hasn't been something I've needed to do yet. There's someone called Daniel Schmachtenberger on Twitter, did you know that? Yeah. And did you know that I don't

know who that is? Yeah, I guess that's the closest I can do to tag you, when I tag you on Twitter, so sometimes I talk about some of your stuff. Look, Daniel, I always get so much from chatting with you, and I'm sure our listeners will as well. Thanks again for thinking about the future of civilization and for chatting with me today on the podcast. This was a really fun talk and

I enjoyed all the things we covered. Thank you, Scott. You too. Thanks for listening to this episode of The Psychology Podcast. If you'd like to react in some way to something you heard, I encourage you to join in the discussion at thepsychologypodcast dot com or on our YouTube page, The Psychology Podcast. We also put up videos of some episodes on our YouTube page as well,

so you'll want to check that out. Thanks for being such a great supporter of the show, and tune in next time for more on the mind, brain, behavior, and creativity.
