CLASSIC: The End Of The World with Josh Clark

Dec 23, 2023 · 47 min

Episode description

How long do humans have left on Earth? As species go, humanity has had a brief, incredibly transformative run here on Earth. We've mined resources, farmed food, hunted animals, built cities and polluted ecosystems across the globe. There's no denying we've also made tremendous technological breakthroughs -- but could some of those same innovations ultimately become the agents of our collective demise? Join the guys as they interview Stuff You Should Know cohost Josh Clark about the science behind his newest podcast, The End Of The World.

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome back to the show, fellow conspiracy realists. Our classic tonight, and actually we're having several this weekend in celebration of the end of the year. We thought, what better time to re-examine our conversation with one of our very good friends, a mentor of sorts, although he hates when we say that: Josh Clark.

Speaker 2

It's a lot of pressure, no no, but it's accurate. Josh is a cool guy and a super sharp cookie. Can you be a sharp cookie?

Speaker 3

Sure?

Speaker 2

Crazon whatever. He's smart, okay, and he is super interested in a lot of the same topics that we are, not the least of which is the idea of this world, this little pale blue dot that we occupy, coming to an end, whether with a bang or with a whimper. And there's a lot of different ways that that could potentially happen. And that's what Josh covers on his podcast, The End of the World.

Speaker 4

All the little ways that this whole thing that we call reality and experience could just go boom. That was it, great job. But you don't even get that; it's just over.

Speaker 2

But it's also not a massive bummer. The show is handled with thoughtful commentary, some interviews with experts in the field, and Josh's, of course, kind of, you know, trademark wit and lightheartedness. Like, it's heavy, but you'll get through it, and you'll feel like you learned something, and you might, you know, get some laughs and feel some feels along the way.

Speaker 1

So check out our conversation with our ride-or-die Josh C. himself, and then tune into the show The End of the World with Josh Clark, available now in its entirety.

Speaker 3

Yeah, well, and listen to how many things have come to pass, or at least we're on the precipice of now, right? Since we made this, on twelve twenty-one, twenty eighteen is when this came out, so many of the things we discussed in this show feel like they've kind of happened. The pandemic happened right after this. This is December twenty eighteen, and we're talking about gain-of-function research in specific labs that are doing dangerous stuff.

Speaker 4

I don't know, guys.

Speaker 2

Let's see what Josh has to say about all this.

Speaker 5

From UFOs to psychic powers and government conspiracies, history is riddled with unexplained events. You can turn back now or learn this stuff they don't want you to know.

Speaker 4

Hello, welcome back to the show. My name is Matt.

Speaker 2

My name is Noel.

Speaker 1

They call me Ben. We are joined with our super producer, Paul Mission Control Decatt. Most importantly, you are you. You are here, and that makes this Stuff They Don't Want You to Know. Gentlemen, we are at the end of the year, and today we are talking about the end of the world.

Speaker 4

Yes, how humanity shall perhaps perish.

Speaker 6

Yes, unless we, you know, do a better job and fix things.

Speaker 7

Yeah.

Speaker 4

Yeah, sure we're gonna be able to fix it now.

Speaker 1

We are not confronting these existential threats alone. Today we are joined with an expert and a friend of the show, longtime friend of ours both on and off the air.

Speaker 4

He's kind of like our big brother in a way.

Speaker 7

Friends and neighbors.

Speaker 1

Josh Clark. Hey, guys. Hey, man.

Speaker 6

Thank you for having me on.

Speaker 2

I've always thought of you as a bit of a big brother figure. I feel like you're always watching.

Speaker 6

Like that's a little more my style for sure. The hug, like, hey, you can do it, kind of thing.

Speaker 4

Well, in my mind, it's more of a watching you walk and then trying to find a way to fit in those footsteps, at least in some small way.

Speaker 2

Like matching his gait, kind of, like, yeah, yeah, really just trying.

Speaker 7

That explains so much.

Speaker 6

My gait is more of like a, it's kind of like a speedwalk slash Jazzercise cross thing, so it's tough to replicate. Man, I've seen you try, and I think it's adorable.

Speaker 1

Thanks. So hopefully, hopefully, Matt, you will have some time to perfect that gait. And that leads us to the big question, Josh. You recently created a podcast called The End of the World, and one of the questions that a lot of people have when we talk about the end of the world is really a question of timeline. You know, how much time does Matt Frederick have to practice his gait?

Speaker 6

Matt, I would give you one to two centuries tops, whoa, which sounds like a lot of time. It does. You're like, well, I'll be long in the grave most likely, although I don't know, you might live to a substantial portion of that.

But if you step back and think, what about the children, what about the grandchildren, what about the future humans who will come down the line over the next one hundred to two hundred years? And then if you look beyond that and really take a step back and look at how long some people expect humanity to continue on, into the billions of years, all of a sudden the idea of going extinct in the next one hundred to two hundred years becomes really terrifying and scary, if you can kind of step outside of yourself.

Speaker 4

Well, quickly, let's just get kind of an idea of what happens if humanity just does great, we don't kill ourselves off, some giant external thing doesn't wipe humanity out. That billion years that you talked about for humanity, is that the cutoff date, when the Sun essentially creates death everywhere in our solar system?

Speaker 6

Right, Yeah, that's a thing for sure. A lot of people say, if humanity didn't do anything, just kept plodding along, and rather than doing everything right we just kind of got lucky at every possible break, okay, a billion years is about how long we would last, because that's about how long the Earth will last in its place in the solar system. The Sun's going to grow, and it will eventually basically swallow the Earth.

Speaker 2

Just totally subsume it.

Speaker 6

Yeah, everything. Yes, it'll be bad news for the Earth, but we've got a good billion years, and we can have a lot of fun and do a lot of cool stuff in a billion years. That's the low end, right? That's if we do nothing but just hang around on Earth.

Speaker 7

But alas, people are kind of dumb.

Speaker 6

You could make that case. It's true, but there's also a lot of hope for future ingenuity. And then also, I think as far as dumbness goes contemporarily, it's more complacency than being dumb, you know what I'm saying. It's almost like there's some weird death-cult mentality among a lot of people on Earth, where it's just like, if humanity goes extinct, that's just what happens, maybe we deserve it. Which drives me bananas, that sentiment, that idea that maybe humanity deserves to go extinct, we've screwed it up for so long, maybe this is what we have coming. And I disagree with that tremendously.

But some experts say, okay, if we continue on being humans, and the kind of humans that we are at this point, we're probably going to do some pretty interesting things, like get off of Earth in the near future. Maybe one hundred years, maybe two hundred years, maybe five hundred years, even if it's a thousand years, that's plenty of time for us to just basically be like, adios, Earth, that was nice. Or if you're starting to look into, like, the millions of years and all the things we could possibly do, maybe we can move Earth, or maybe we can prevent the Sun from growing, because actually what it's doing is burning the last of its fuel. There's plenty of ways we could make the Sun burn more efficiently and extend its life by billions of years and save Earth in the process. There's a lot of stuff we could be doing. So when you start to look into that, then you begin to push the top limit for humanity's lifespan into the billions and billions and billions of years, possibly to the heat death of the universe, which is untold billions of years away in the future.

Speaker 7

The final end of the franchise.

Speaker 6

Not necessarily, not necessarily, because one thing I came across, and it kind of comes in at the end of the physics episode, is we are starting to figure out how we could theoretically create lab-grown universes. So who knows? Maybe we can learn to grow universes, and when this one's starting to wind down, we can be like, oh well, we're gonna move over here into this fresh new universe.

And then that extends humanity's lifespan indefinitely. So when you take all this into account, all of the possibility that we have laying out ahead of us, it's really unnerving to think that those of us alive today have the weight of all that on our shoulders. It's up to us to save that future for the rest of humanity to come. And those are the stakes right now.

Speaker 1

Wow. So we've got the context in terms of timeline here; let's look at the basic, bare-bones definitions. The End of the World deals with the phrase that we mentioned earlier in this show, existential risk. For the audience, what is an existential risk? And are some more, let's see, more imminent than others?

Speaker 6

So, great question. Existential risks are something that have been around for a very long time. We've lived under natural existential risks, like the sun growing and overwhelming Earth and burning it to a crisp. That's a natural existential risk. An existential risk is anything that can wipe out humanity forever. Like, we're just gone, there's no more humans left, or the humans that are left can never get back to whatever place in history that we fell from. That's an existential risk. And there's this guy over at Oxford University who spends his time thinking about these things and has assembled basically, like, a super team of thinkers who are all thinking about what to do about existential risks, what existential risks we're not thinking about, and then, if we do manage to get past these existential risks that are coming our way and live into the billions of years, all the amazing things we could do.

So the guy's name is Nick Bostrom, and the center that he founded is called the Future of Humanity Institute. And they're not the only group thinking about this, but they're kind of like the OG group who started thinking about these things. And Nick Bostrom is kind of widely viewed as basically the father of the field of existential risk mitigation. He really took some disparate thoughts that guys like Derek Parfit and Carl Sagan had back in about the eighties and put them together into, like, a genuine, refined philosophy that's basically based on: we need to do something pretty soon or we probably aren't going to make it.

Speaker 7

Tick tock, tick tock, tick tock.

Speaker 6

Yeah, that's true.

Speaker 2

That's my question, though: are existential risks inherently tied to our actions as humans on the planet, things that we do that could potentially, you know, cause us to not exist as a species, like climate change and the impacts that we have on the planet? Or is it all about things that are beyond our control, that are bigger than us?

Speaker 6

It's both, for sure. So there's natural existential risks, like the sun growing, or an asteroid like the kind that took out the dinosaurs wiping us out. Those we can't do a whole lot about now. We can all imagine a time, like maybe a few decades from now, when we can redirect the course of an asteroid that could be colliding with Earth, right? But right now we can't do anything about the natural risks.

The ones we can do something about are anthropogenic existential risks, which are human-made existential risks, and we have a few of those coming down the pike, coming our way. Artificial intelligence is a big one. Another one is physics. Surprisingly, high-energy physics experiments may or may not pose an existential risk. But if some of the theories of quantum gravity that seek to marry the standard model with relativity are right, then actually, yes, these physics experiments that we're doing right now are quite dangerous. And then biotechnology is another big one, and there's some other ones too, like nanotech could conceivably be an existential risk to us. And with all of these things, as we're starting to wake up to the concept of existential risks, as our field of vision is kind of coming into focus, we're suddenly recognizing, like, oh, there's one. Oh, there's another one. Oh, there's one.

Speaker 7

Oh.

Speaker 6

Oh geez, there's another one right there, and they're starting to come our way, and very few people are paying attention to them. That's really the alarming part of the whole thing.

Speaker 1

You also mentioned earlier that it's a situation rooted in apathy for a lot of people. But one of the questions people will have when they hear this is: if we're talking about a global existential risk, a threat that's bigger than an individual's action, how would people mitigate something global? Like how? Let's say my name is Dave Everyman and I'm listening here in Manitoba for some reason, and I say, man, the wild animals are dying at this incredibly cartoonish rate, which will topple the ecosystem.

Speaker 7

Right, the food chain. But what can I do? I'm just Dave Everyman.

Speaker 6

So first of all, I was wondering when it was going to get crazy, and now that Dave Everyman has made an appearance, it's quite obvious it just happened. So that kind of ties into something you said earlier, Noel, that you were asking about, which was climate change. Climate change actually doesn't necessarily count as an existential risk, as bad as it would be. Now, it would for plenty of species that will be affected and wiped out. Like, it's starting to look like climate change is going to be an existential threat to coral, or it is already. But to humans? And that's the thing, like, I really want to focus this in: I'm talking about humans when I'm talking about existential risks. In the entire series, it's all about things that can wipe out humanity. But existential risks exist for basically any living thing on Earth. And it turns out, from what I saw, something kind of came up: humans are actually an existential threat to just about everything else on Earth, right? And I think kind of part and parcel to us saving ourselves and saving the world is simultaneously learning that being at the top of the food chain, being Dave Everyman, who has the ability to think and act about this kind of stuff, makes us stewards for the rest of the planet.

So even if climate change is not an existential threat to humans, which it seems like it's not, from taking on existential threats we would, in my opinion, kind of change our mentality. Whether we like it or not, whether we're trying to or not, our outlook would change, and I think that things like climate change would be mitigated. And this idea that Dave Everyman can't do anything to help, that sense of hopelessness that just kind of presses all of us, you know, down into our couches and into this funk, that kind of thing will go away. And the reason why it will go away, the reason why we can do anything, why Dave Everyman can do anything at all, is because it turns out no one at the top is doing anything.

I talked to this philosopher named Toby Ord, who's one of the guys at the Future of Humanity Institute, and he has spoken to people in the highest echelons of government about this. One of the things they do is try to, like, warn people in government and say, hey, you're a policy maker, they're not designing AI very well right now, and one of them could get out of control and take over the world. What do you think about that? What are you guys doing about that? Oh, well, you know, that's really kind of above my pay grade, I'm sure someone else is handling this. And Toby Ord is like, there's nobody above your pay grade. Like, it's up to you guys. If you're not doing anything, then that means no one's doing anything.

Speaker 4

Well, yeah, they're stuck in like a cycle of elections, right, that's all those things for sure.

Speaker 6

Yeah, that's a big part of the problem as far as, like, leadership goes, you know? Not just with existential risks, but basically any large project, any long-term thing. That's one of the things that climate change has run up into.

Speaker 2

Well, but it's politicized too, so it's like literally you're appealing to a particular base by choosing to say something's not really a problem or ignoring it.

Speaker 7

Sure.

Speaker 2

It's almost like a power move, to say this isn't really happening, I know because I'm in charge, or I'm the smartest guy in the room, right, and I ignore all these other people that are saying that it is, you know what I mean? Like, it's almost ignorance as a move, kind of, you know.

Speaker 6

Like willful.

Speaker 7

Yeah, exactly.

Speaker 6

And it's a disdain for expertise too. I think that's a really popular thing right now, and that kind of ties into that whole death-cult thing that bothers me so much. You know, it's like, you're a scientist? I don't care, get out of my face, egghead, I don't care about the climate. And that's just kind of a sentiment, just a feeling. That's not the entire zeitgeist, but it's definitely a part of the zeitgeist right now, for sure.

Speaker 2

It's almost like, we don't have that much time anyway, so let's just get the most out of it that we can in the short time we have, and not really worry about the next part, you know what I mean?

Speaker 6

It's basically like the disco era took over the entire world. You know, that's kind of what it feels like.

Speaker 1

So we'll pause here and continue after a word from our sponsor. Here's the question.

Speaker 7

Here's the turn for our show, because now.

Speaker 1

We're talking about willful ignorance or anti-intellectualism and all these other flavors of governance and society, right, or the problematic flavors. Is there, to your knowledge, any sort of cover-up or conspiratorial events surrounding any of these existential risks?

Speaker 7

You knew we were gonna ask this.

Speaker 6

I was hoping. I think I prematurely caught the "here's where it gets crazy" with the Dave Everyman thing.

Speaker 1

Okay, but I really appreciate you doubling down on Dave Everyman.

Speaker 6

Well, I think I quadrupled down on Dave Everyman, and it's pronounced "Everyman," the British pronunciation, the Manitoba "Everyman." Okay, gotcha. Now I know exactly who you're talking about. So, I'm quite sure that there have been. I don't think that there are, necessarily, in fields like AI. I know that the physics community is actually quite the opposite of that. The people at CERN in particular have been working overtime, while also simultaneously bending over backwards, to show that the Large Hadron Collider is safe. But the thing that made that episode, the physics episode, the most interesting to me is that a lot of the suggestions that it might not be safe are coming from physicists. It's not just from external, you know, guys who declared their own patch of land in Nevada country or anything like that. It's actual physicists who work with particle colliders, who work in theory. And I didn't mean to denigrate that demographic just now. They are the people who are kind of raising the alarm on particle colliders possibly creating microscopic black holes and that kind of thing.

So if I had to zero in on one group where there was, if not necessarily, like, conspiratorial cover-ups, at the very least a lot of kind of brushing stuff under the rug, it would be the biotech field, for sure. For sure. The level of recklessness and accident-proneness, I guess is a terrible way to put it, that comes out of the biotech field. And certainly not the entire biotech field. There's plenty of people in the biotech field, and I would say the vast majority of the biotech field is very, very careful. But the problem with biotech is that even if you are careful, accidents still happen. And if you go back and you look at the statistics, not even statistics, like actual numbers of accidental releases of deadly, deadly pathogens from labs into the great wide world, just over a very short time, we're talking hundreds and hundreds of them.

And the thing that bothers me the most about biotech, when I researched this, I started to get kind of mad. I'm like, this angers me, that there's this field and no one, I can't say no one, but very few people, and certainly not enough people, are paying attention to and regulating the biotech field. There are some really reckless experiments that are being carried out, and just a small fraction of labs in the world are required to report accidents, let alone what they're doing or what kind of experiments they're carrying out. Accidents, like, meaning a deadly pathogen made it out on the skin of a lab worker who didn't realize it, and it started to kind of spread or whatever. Like, if you were a private corporate lab working in the biotech field and an accident happens, you don't have to tell anybody about it. You don't have to tell a single soul. You have to be funded by the National Institutes of Health in the United States, or be affiliated with a lab in the United States funded by the National Institutes of Health, to be required to report accidents. So of the, like, six hundred and fifty accidental releases between, I think, two thousand and four and twenty ten, maybe, those were specifically accidents reported by labs that are funded by the NIH. That's it. Wow. There are so many more labs. BSL three and four labs, those are the most secure labs, but that means that they're also the ones who are working with the deadliest pathogens. There are so many of those in the world, and it's become such a, like, a cool thing, to be a corporation or, like, a university and to have a BSL three or four lab. No one has any idea how many there are in the world. Not a single person on planet Earth knows exactly how many BSL three and four labs there are on Earth right now.

Speaker 2

So where is the oversight coming from for those labs?

Speaker 6

I mean basically nowhere. I think the USDA has some jurisdiction.

Speaker 2

Like OSHA or something? Do they not have, like, work-site hazard inspections and things like that?

Speaker 6

I'm sure that they do, as far as, like, OSHA goes. Yeah, I think that would be extended to everything, including those labs. But as far as reporting it to, like, the CDC or the NIH, there's no requirements unless you are NIH funded.

Speaker 7

That's terrifying.

Speaker 6

It is extremely terrible.

Speaker 1

And we're just talking about the United States, right?

Speaker 6

Exactly. So if you are, you know, working in a lab in Korea, South Korea, I'm sure South Korea has, like, legislation or laws that say, you know, you have to do this, or you have to follow these rules or whatever. But there's certainly no universal oversight agency. The UN is toothless in this respect, in almost every respect, but certainly in this respect, right? Like, toothless. The World Health Organization has, like, zero say in this. It's just the Wild West. And fortunately, there are microbiologists, and plenty of them in the field, who are like, whoa, whoa, whoa, what experiment did you just run? Like, you just escalated the contagiousness of this deadly flu virus so that it can be passed more easily among mammals. Why did you just do that? We should have a two or three or four year moratorium on those kinds of experiments. And that kind of thing does get observed in the field, but it takes people in the field to raise the alarm, and they don't always do it, and it's not necessarily having this sweeping effect. And then after they raise the alarm and study the problem, there's nobody stepping in and saying, yeah, don't do that anymore.

Speaker 4

Really quickly, I just want to step in here, because something that I kind of knew about, but I learned a lot more about through listening to your podcast, was gain-of-function research, right? And what exactly that is. Can you just tell us about that and why it's so dangerous?

Speaker 7

Sure?

Speaker 6

So I talked to a few experts, like legitimate experts, on this. And gain-of-function research is where you take a wild virus, I guess a natural virus, and you force a mutation in it so that it becomes deadlier, or it becomes more contagious, or it becomes less susceptible to treatments or drugs or something like that. And when you force mutations in this virus, or a pathogen of any sort, and it becomes deadlier or more contagious or less susceptible, it has gained function, is what they call it.

So gain-of-function research is basically taking evolution and speeding it up. And they'll do things like, they'll force a bunch of mutations and kind of selectively breed a virus until they think it has the kind of, like, maybe, contagiousness they're looking for, and then they'll introduce it into, like, a ferret's nose, and they'll take that and introduce it to another ferret's nose, and they'll just basically speed up the process of an infection, a pandemic, among, you know, lab animals. And this was done with, I think, H5N1 in a couple of different labs, simultaneously but independently. It's really weird. They took this really, really deadly virus and they basically taught it to be transferred from one ferret to another through, like, sneezes and coughs. The saving grace, the thing that's kept us all alive as far as H5N1 goes, except for a handful of unlucky people, is that it's really tough to spread. It's really, really deadly. I think it has, like, a seventy or eighty percent mortality rate, but it's really hard to catch. Well, these guys were forcing gain of function to make that virus much easier to catch. And when they started publishing these studies, the field just went nuts. They were not having it. They were very upset about this. They said, this was very reckless. Why are you doing this in the first place? And then some people have even said these experiments did not need to take place at all. You can actually study the same kind of stuff just by studying proteins. You don't have to have a wild or live, active, deadly virus that you're creating this in.

You can just study the proteins. So if you kind of step back, or I should say, if you dig a little deeper into it, you start to get the impression that there's a lot of ego driving experiments like this, just to kind of show that it can be done. And the field has a really extensive history of that kind of, like, gun-slinging recklessness, like, look what I did. And the problem is, when you create a virus like that, it's alive. It lives on Earth with us, and that means that it is your responsibility, for the rest of time, to either one hundred percent eradicate it from Earth, or you have to keep up with it and make sure it doesn't get out. And if it does get out, well, that's a big problem, because now you have an H5N1 virus that's extremely deadly and also extremely contagious. And that's the existential threat posed in the biotech field. There are really risky experiments being carried out, and they also have a long history of being accident prone as well. That's the one that gets my blood up the most.

Speaker 4

I don't know if you could tell, it's scary.

Speaker 1

I mean, it's a terrifying proposition, because we hear in the news cycle, right, maybe once every year and a half, a report of a virulent strain of something breaking out in a specific part of the world, and the question is always, how far will it get this time?

Speaker 7

Right?

Speaker 1

So are you saying that there is a real and substantial possibility that some sort of aggressively modified virus or contagion really could, just through the slip of someone's hand, spread across the planet?

Speaker 6

I mean, it's not one of those things where one virus being experimented on in one lab poses a significant risk to humanity. But it is a risk. Just that one virus in that one lab does pose a risk, by the very fact that it exists, and that there are such things as humans, who are accident prone, experimenting on it, right? So the problem comes when you have many, many people running the same or similar experiments with the same or similar viruses all over the world, in an unknown number of labs. Then that one remote risk starts to compound and get a little more dangerous, a little more dicey. I don't think that there's any virus right now that could conceivably wipe out one hundred percent of humanity, or so many humans that we could never possibly rebuild civilization ten thousand years down the road. I don't think so. I don't know. I'm not in the biotech field, but from the outside looking in at the progression of the field over the last few decades in particular, it does seem like that's the direction it's going.

From my view, just mine, and I'm not an expert, but from my view, the sentiment that I've kind of tapped into is, if you are a microbiologist of the kind of cowboy ilk, coming up with a contagious virus with one hundred percent mortality would be, like, your crowning achievement, basically. And the whole premise of this is to study it so we can figure out how to treat them. And that is a legitimate avenue of research. That's the main avenue of research for virology. That's the point, largely. But do we need to force gain of function in viruses that don't exist like that in order to treat it? That's the question that I have, and to me, the answer is no. I think there are definitely ways to do it, and we should be focusing our research on figuring out how to treat viruses that could conceivably get like that without creating them first.

Speaker 1

So, assuming that the world doesn't end, we'll be right back after a word from our sponsor. I'd like to take a slight pivot here and ask a couple of biographical questions we probably.

Speaker 7

Should have asked in the beginning.

Speaker 1

This entire interview has made me very conscious of time as well. Okay, I hope we have enough time to finish. One thing a lot of people would want to know is whether there was some specific moment in your life that inspired The End of the World series.

Speaker 7

Was it?

Speaker 1

Was it something related to biotech? Did you, like, get a nasty cold, and the doctor said, boy, uh, this is weird, Josh, sit down?

Speaker 7

You're what happens?

Speaker 6

Have you been hanging out with ferrets lately? I actually was sick while I wrote the biotech episode, yeah, which just really drove everything home that much more. The thing that inspired me to do the series... and also, I want to take this time right now to thank all three of you for your roles in helping me with the series. Over time, all three of you had a hand in it, and I appreciate it big time. So thank you. Hats off to all three of you as well.

Speaker 4

We were excited to hear it.

Speaker 6

I was too. It finally came out. But the whole thing started, as you probably know, just from this kind of intellectual curiosity about it. Like, I ran across Nick Bostrom many years ago and read some of his papers, and I just found it fascinating, and I still find it fascinating. So the original point of the series was to say, hey, everybody, check this out, isn't this the

coolest thing you've ever heard in your life? And as I dug into it more and more and started to actually interview the people involved, like Nick Bostrom and Toby Ord and other people at the Future of Humanity Institute, I realized, oh wait, this isn't just an intellectual pursuit. These people aren't just theorizing, they're actually trying to warn the world. Like, this is real. Like, wait, wait, wait,

wait, what? This is real? And I underwent a conversion, and then so too did the series, because I was still working on the series at the time, and there was a huge tone shift. It went from basically a very dry book report to, okay, we need to do something, everybody. And this kind of thread of we need to form a movement, we need to start doing something, emerged in the series and almost became like a character in it,

or certainly a major theme. So it was originally intellectual interest that brought me to it, and then I kind of got struck by life on the way to finishing, and it changed the tone big time.

Speaker 4

I'll tell you what made me want to put my phone down and join the movement. It was in one of the episodes where you talk about a certain three-in-a-million chance that occurred in the nineteen forties.

Speaker 6

Isn't that fascinating?

Speaker 4

Can you tell us a little bit of that story.

Speaker 6

Yeah. So, what's widely seen as the first human-made existential risk that we've ever faced was the first detonation of an atomic bomb at the Trinity Test on July sixteenth, nineteen forty five, in Alamogordo, New Mexico, USA. And it wasn't that they were saying, yes, this thing's going to

be a deadly weapon, this is an existential risk. A lot of people make the case, and I kind of subscribe to it too, that the nuclear bomb, or nuclear war, I should say, has never actually been an existential threat to humanity, because we probably couldn't wipe all of humanity out. And again, that's the thing that separates existential risks from all other types of risk. Everything else, we have a chance to rebuild,

we have a chance to learn from that mistake. With existential risks, there are no second chances, there's no do-over. One thing goes wrong, that's it for everybody, right?

Speaker 4

Yeah. So the nuclear bomb, just to say this, the nuclear bomb itself was not officially the existential risk, right?

Speaker 6

It was the detonation that conceivably posed an existential risk. I should say it was the first possible human-made existential risk. And the reason it was is they were sitting around, the dudes at the Manhattan Project, and I think it was Edward Teller. This is like the first time they all formally met, and Edward Teller was like, hey, has anybody thought, might we accidentally ignite the atmosphere with this thing? You know, we're about to dump a massive

amount of energy into the atmosphere. It could set off a chain reaction, don't you think? I've read different accounts of it. Some people say that it was immediately shouted down and they all realized, no, it's fine, it's fine, whatever. And then somebody ended up telling Arthur Compton, and Arthur Compton was like, oh, fiddlesticks, we need to do something about this, and he assigned Teller and a couple of other guys to go figure out whether

it was possible. So, depending on which story you hear, either it was like, guys, we've got to get to the bottom of this, or it was kind of like a side project. But either way, they definitely assigned Teller and a small group to go figure out, you know, mathematically, if that was possible.

Speaker 4

And just to be clear, we're talking about the entire atmosphere, not just a section of atmosphere, the Earth's entire atmosphere.

Speaker 7

Chain reaction, essentially.

Speaker 6

That was the fear. When they set off this first atomic bomb, because no one had ever set off an atomic bomb before, it wasn't even known at the time whether it was possible; there was still a chance that it would turn out to be impossible to create a nuclear explosion. But they were saying, if we do do this, I mean, if we started a chain reaction in the atmosphere, it could spread and burn off the entire atmosphere on Earth, and then life would just

cease to exist in very short order. So they started to study this, and they came back and said, there is almost no chance that this is going to happen, even accounting for energy way beyond what we're going to be doing with the bomb; it's not going to happen. But these guys are physicists, and physicists don't deal in certainty, they deal in probability, and so there was still that small, small chance that they could accidentally

ignite the atmosphere with this. So later on, they went ahead with the test. They decided that the possibility was small enough that it was worth taking the risk, because at the time the Nazis were still going strong, and they were like, the small chance that we're going to ignite the atmosphere is worth it, you know, to beat the Nazis with the bomb that we're eventually going to produce from this test.

But years later, the guy who was running the Manhattan Project at the time, Arthur Compton, told the writer Pearl Buck about this story, and he said, I drew a line in the sand. I said that if there was a three-in-a-million chance of igniting the atmosphere, we wouldn't go through with the test. And so over the years some people have been like, was it one in a million? Did he misspeak?

Like, was there any chance whatsoever? But that was supposedly the way that we handled the first existential risk, which in a lot of ways was like, great, you know, hats off. They took it seriously, they studied it, they did the math, they carried the one and all that stuff, right? But then if you step back and look at it another way, they decided for the rest of the people alive on Earth at the time that this three-in-a-million chance was worth it.

It was worth the risk. And I'm sure there are plenty of people still alive today, and who would have been alive back then, who would have said, no, it's actually not worth that risk. Three in a million is actually not that remote of a chance. You have about a one-in-a-million chance of being struck by

lightning if you live in North America. I can't remember if that's in any given year or over your lifetime, but compared to that one in a million, the chance that they were going to

ignite the atmosphere was three times greater than your chance of being struck by lightning. And so if you kind of look at that probability compared to other probabilities, it suddenly makes it seem like maybe it wasn't such a good idea to carry out the test anyway. So I kind of used that in, I think, episode nine as this teaching example. It gives us a model of how to approach existential risks, but it also shows us what not to do, and that's not to have just a small cadre of people working in secret to

decide for the rest of the world, without any input, whether something's worth the risk or not. And that's one of the reasons why we all need to be involved. Why Dave, the everyman from Manitoba, needs to be involved. Why you guys need to be involved. Why I need to be involved. Because again, there's no one at the top thinking about this stuff, and it's going to take a bottom-up process. But for us to do it correctly, for us not to just be like, I don't think

it's worth the risk, we all need to understand the science behind all this stuff. We need to be informed. If we're going to decide together, it has to be a smart decision, you know. And we have to convert that death cult into the thinking cult to take on existential risks, because basically everybody needs to be on board, and at the very least the people who aren't on board need to get out of the way and not work counter to the stuff that everyone else is trying to do.

Speaker 1

You know, I got to be honest, that was incredibly well said, and I kind of felt music swelling. I don't know if Paul's going to put it in when we do this in post.

Speaker 7

But it does. It does lead us to.

Speaker 1

Some questions that naturally follow, one of which being we're talking about becoming more educated, becoming more aware of a situation, becoming less apathetic, realizing that there's no one truly in a room at the top. Right, But what do those next steps look like? What are the specific concrete things other than of course checking out the show?

Speaker 6

Right, thanks for that, dude. So the first step that we have to take, those of us alive today, is to start talking about this kind of stuff. The more people talk about things... and I use the example of the environmental movement. The environmental movement today has a lot of work left to do, and who knows, maybe this kind of twelve-year timeline that we've been given recently will get people going a lot more seriously on climate change.

But the fact is there is an environmental movement. There didn't used to be. Just in the sixties, the late sixties even, there was basically no such thing as the environmental movement. Now every eight-year-old can tell you tons of environmental facts, cares about the environment, knows what they can do to make the Earth a better place. We need to do the same thing with existential risks. So step one is for everybody to make this an issue. When all of us start talking about things like this,

the people who we elect start paying attention. It's like, oh, this is what these guys want. Okay, I'm on board. It's not like they're necessarily opposed to what we're doing on principle; it's just that not enough of us are talking about this kind of thing or that kind of thing or whatever. So the more of us start talking about this, the more we're going to be able to

get movement on it. We also need to take a lot of the scientific mental energy that we have available, and a lot of the money that we put toward science and a lot of other stuff, and divert it to thinking about existential risks, identifying existential risks, identifying best practices. And then the next step is for those of us alive today to say, okay, what did you guys come up with?

And then listen to them. Not fight them, not say, that sounds kind of hard, we can't do it. Because what they come back to us with will be a roadmap for surviving the next one hundred to two hundred years as a species. Right? If we can just kind of alter our brains a little bit in those relatively small ways, we will lay the groundwork for generations to come to build on. But that's the key. Those of us alive today have to start now, or else

we're just going to hamstring the ones to come. And in that sense, it really bears a strong resemblance to environmentalism as well. So it's basically environmentalism for the human species that we need to start.

Speaker 7

Wow.

Speaker 4

So, everyone listening here, I would describe the End of the World as if Black Mirror and Cosmos had a baby, and then that baby made a podcast. And maybe you're thinking a baby can't make a podcast. Oh yes it can. But if that interests you as it did me, go check it out right now. You can find the End of the World on the iHeartRadio app. Is that the right way to say it? I've never done this before. Somebody want to do it? You want to do it?

Speaker 7

Josh?

Speaker 4

You say where it is?

Speaker 6

You can find it on Apple podcasts, the iHeartRadio app, everywhere you listen to podcasts.

Speaker 2

And not to mention, it's all there, so you can binge the whole thing. It's all there for your listening pleasure. All ten episodes. Yeah, and hats off to Paul too. You know, he was the supervising producer on it all. Great job. We try to call him Mission Control.

Speaker 1

Also, tell us how you feel after checking out the show. Tell us what really piqued your interest. Tell us if you have responses; you can write to us.

Speaker 7

You could write to End of.

Speaker 1

The World on, let's see, Instagram, Twitter, all the hits, all the internet hits.

Speaker 6

Right. Mostly there's a hashtag, yeah, hashtag, you've got to make the symbol with the two fingers, hashtag EOTWJoshClark is where you'll find that all over social. And then I'm at Josh Clark on, like, Instagram and Twitter and Facebook. And then there's some EOTW Josh Clark stuff

Speaker 1

Too, And tell us, uh, tell us how you keep your optimism after you listen to the show. I don't want to spoil it too much because you should go check it out.

Speaker 4

So go ahead and find us again. We are Conspiracy Stuff on most socials, or Conspiracy Stuff Show on Instagram. Just tell us what you think. This kind of stuff keeps me up at night, honestly, and I don't know how you got through creating all of this content, Josh, because just listening to it and absorbing it as a listener gives you a certain amount of dread and hope. It's almost this simultaneous scared-happy feeling. It's very strange, it's weird.

It's very strange, and you've just been immersed in it and you're still here, and you look to be fine.

Speaker 6

Oh, it's all show.

Speaker 4

And that's the end of this classic episode. If you have any thoughts or questions about this episode, you can get into contact with us in a number of different ways. One of the best is to give us a call. Our number is 1-833-STDWYTK. If you don't want to do that, you can send us a good old fashioned email.

Speaker 5

We are conspiracy at iHeartRadio dot com.

Speaker 4

Stuff they don't want you to know is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
