Yuval Noah Harari: The Urgent Warning They Hope You Ignore, “More War Is Coming”, Yuval’s Chilling Future Predictions!

Jan 11, 2024 · 2 hr 46 min

Episode description

If you enjoy hearing about the potential impact of AI on humanity, I recommend you check out my conversation with ex-Google officer Mo Gawdat, which you can find here: https://www.youtube.com/watch?v=bk-nQ7HF6k4 Yuval Noah Harari has shown millions of readers how humans have evolved to where we are now, but what does the future hold for us as a species? He is a best-selling author, public intellectual and Professor of History at the Hebrew University of Jerusalem, best known for his bestselling books ‘Sapiens: A Brief History of Humankind’, ‘Homo Deus: A Brief History of Tomorrow’ and ‘21 Lessons for the 21st Century’. His books have sold over 45 million copies in 65 languages. In this interview, Steven and Yuval discuss everything from how AI will change the world, to the importance of language and stories, to why the idea of finding a ‘soulmate’ is a myth, and the ongoing battle for human attention. You can pre-order the 10th anniversary edition of ‘Sapiens’ here: https://bit.ly/48JVQ6c Follow Yuval: Twitter: https://bit.ly/3HdUxR7 Instagram: https://bit.ly/41WLbCT YouTube: https://bit.ly/3vyAwm0 Follow me: https://beacons.ai/diaryofaceo Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript

We are now in a new era of wars, and unless we re-establish order fast, then we are doomed. Yuval Noah Harari: one of the brightest minds on planet Earth, historian, best-selling author of some of the most influential non-fiction books in the world today. I think we are very near the end of our species, because people often spend so much effort trying to gain something without understanding the consequences. For example, we will get to a life where you can live indefinitely.

But realizing that you have a chance to live forever, yet if there is an accident you die — the people who will be in that situation will be at a level of anxiety and terror unlike anything that we know. Then you have artificial intelligence, and the world is not ready for it. It's the first technology in history that can make decisions by itself and take power away from us: to hack human beings, manipulate our behavior, and make all these decisions

for us or about us — whether to give you a loan, whether to give you a mortgage, who you date, shaping your romantic life. But the real problem is that increasingly the humans at the top could be puppets, when the most consequential decisions are made by algorithms: global financial decisions, wars. This is extremely dangerous, but it's not inevitable; humans can change it. But with what's to come, are you optimistic about the future?

I'm very worried about two things. First of all... I find it incredibly fascinating that when we look at the back end of Spotify and Apple and our audio channels, the majority of people that watch this podcast haven't yet hit the follow button or the subscribe button, wherever you're listening to this. I would like to make a deal with you.

If you could do me a huge favor and hit that subscribe button, I will work tirelessly from now until forever to make the show better and better and better and better. I can't tell you how much it helps when you hit that subscribe button. The show gets bigger, which means we can expand the production, bring in all the guests you want to see and continue to do this thing we love. If you could do me that small favor and hit the follow button, wherever you're listening to this, that would mean the world to me. That is the only favor I will ever ask you. Thank you so much for your time.

Yuval, I have three of your books here — three books that sent a huge tidal wave, a ripple, through society. With these books, and with all of the work that you're doing now, the lectures you give, the interviews you give, what is your mission?

If I were to summarize what your collective mission is with your work, what would that be? It's to clarify and to focus the public conversation, the global conversation; to help people focus on the most important challenges that are facing humankind, and also to bring at least a little bit of clarity to the collective and to the individual mind.

One of my main messages in all the books is that our minds are like factories that constantly produce stories and fictions that then come between us and the world. We often spend our lives interacting with fictions that we, or that other people, created, while completely losing touch with reality.

And my job, and I think the job of historians more generally, is to show us a way out. Inherent in much of your work is what feels like a warning. I've watched hundreds of videos that you've produced or interviews you've done all around the world, and it feels like you're trying to warn us about something — multiple things. So, is that estimation correct? And what is the warning?

Much of what we take to be real is fictions. And the reason that fictions are so central in human history is that we control the planet, rather than the chimpanzees or the elephants or any of the other animals, not because of some kind of individual genius that each of us has, but because we can cooperate much better than any other animal. We can cooperate in much larger numbers and also much more flexibly.

And the reason we can do that is that we can create and believe in fictional stories, because every large-scale human cooperation, whether religions or nations or corporations, is based on mythologies, on fictions. Again, I'm not just talking about gods. This is the easy example. Money is also a fiction that we created. Corporations are a fiction. They exist only in our minds.

Even lawyers will tell you that corporations are legal fictions. And this is, on the one hand, a source of immense power. But on the other hand, the danger is that we completely lose touch with reality, and we are manipulated by all these fictions, by all these stories. Again, stories are not bad. They are tools. As long as we use them to cooperate and to help each other, that's wonderful.

Money is not bad. If we didn't have money, we would not have a trade network. Everybody would have to produce everything by themselves, maybe with their friends and family, like the chimpanzees do. The fact that we can enjoy food and clothing and medicines and entertainment created by people on the other side of the world is largely because of money. But if we forget that this is a tool that we created in order to help ourselves, then this tool kind of enslaves us and runs our lives.

And now just back home in Israel, there is a terrible war being waged. And most wars in history, and also now, they are about stories. They are about fictions. People think that humans fight over the same things that wolves or chimpanzees fight about. That we fight about territory. That we fight about food. It sometimes happens. But most wars in history were not really about territory or food.

There is enough land, for instance, between the Jordan River and the Mediterranean, to build houses and schools and hospitals for everybody, and there is certainly enough food. There is no shortage of food. But people have different mythologies, different stories in their minds, and they can't find a common story they can agree about.

And this is the root of most human conflicts, and being able to tell the difference between what is a fiction in our own mind and what is the reality — this is the crucial skill. And we are not getting better at telling this difference as time goes on. And also with new technologies, which I write about a lot, like artificial intelligence, the fantasy that AI will answer our questions, will find the truth for us, will tell us the difference between fiction and reality —

this is just another fiction. I mean, AI can do many things better than humans, but for reasons that we can discuss, I don't think that it will necessarily be better than humans at finding the truth or uncovering reality. It looks to me that the thing that made us successful — the ability to believe in fiction — and I use the word successful... Powerful, yes, maybe that's a bit more accurate.

The thing that made us powerful could well be the thing that makes us powerless, in the sense that the ability to believe in fiction and stories creates a society that could potentially lead to our powerlessness. That's one of the messages I'm left feeling when I connect the dots throughout your work and look off into the future. And even when you think about the modern problems we have, those are typically consequences of our ability to believe in stories, and to believe in fiction.

And if you play that forward 100 years, maybe 200 years, you believe we will be the last of our species, right? I think we are very near the end of our species. It doesn't necessarily mean that we'll be destroyed in some huge nuclear war or something like that.

It could very well mean that we'll just change ourselves — using bioengineering and AI and brain-computer interfaces, we will change ourselves to such an extent that we'll become something completely different. Something far more different from present-day Homo sapiens than we today are different from chimpanzees or from the other animals. I mean, basically, you know, we still have a very deep connection with all the other animals, because we are completely organic.

We are organic entities; our psychology, our social habits, they are the product of organic evolution — most specifically mammalian evolution — over tens of millions of years. So we share so much of our psychology and of our social habits with chimpanzees and with other mammals. Looking 100 years or 200 years into the future,

maybe we are no longer organic, or not fully organic. You could have a world dominated by cyborgs, which are entities combining organic with inorganic parts — for instance, with brain-computer interfaces. You could have completely inorganic entities. So all the legacy, and also all the limitations, of four billion years of organic evolution might be irrelevant or inapplicable to the beings of the future. Which do you think it will be? Because you're saying maybe here.

No, I mean, we could destroy ourselves. To completely destroy every last single human in the world — it is possible given the technology that we now command, but it's very difficult. I think there is a greater chance — and again, this is just speculation, nobody really knows — lots of people could suffer terribly, but I think it's more likely that some people will survive and then will undergo radical changes.

So it's not that humanity is completely destroyed; it's just transformed into something else. Just to give an example of what we are talking about: organic beings like us need to be in one place at any one time. We are now here in this room. That's it. If you disconnect our hands or our feet from our body, we die, or at least we lose control of them. And this is true of all organic entities, of plants, of animals.

Now, with cyborgs or with inorganic entities, this is no longer true. They could be spread over time and space. I mean, if you find a way — and people are working on finding ways — to directly connect brains with computers, or brains with bionic parts, there is no essential reason that all the parts of the entity need to be in the same room at the same time.

As you said that, I started thinking a little bit about Neuralink and what Elon Musk is doing with interfacing us with computers. But then I had a secondary thought, which is that there could be two Stevens, one here and one in the United States right now, because we're connected to the same computer interface.

So theoretically, I could hack Jack over there, I could hack his interface, so there could be three Stevens — because I hack Jack and then I hack you, and then there's four — and then I could eventually try and hack the entirety of the world, or a country.

Basically, once you can directly connect brains to computers... First of all, I'm not sure if it's possible. I mean, people like Elon Musk and Neuralink, they tell us it's possible. I'm still waiting for the evidence. I don't think it's impossible. But I think it's much more difficult than people assume.

Partly because we are very far from understanding the brain, and we are even further away from understanding the mind. We assume that the brain somehow produces the mind, but this is just an assumption. We still don't have a working model, a working theory, for how it happens. But if it happens — if it is possible to directly connect brains and computers and integrate them into these kinds of cyborgs — nobody has any idea what happens next, how the world would look.

It certainly makes it plausible, if you reach that point, that you could have an inter-brain-net. The same way that lots of computers are connected together to form the internet, if you can connect brains and computers directly, why can't you then create an inter-brain-net, which connects lots of brains, as you described? And I have no idea what that means.

I think this is the point: the way that our organic brains understand reality — even our imagination, in the end — is, as far as we can tell, the product of organic biochemistry. We are not equipped, I think, to have a serious discussion of what a non-organic brain or a non-organic mind might be capable of doing, or how it would look. All the basic assumptions that we have about brains and minds are limited to the organic types.

How do you feel about artificial intelligence and what is happening? This year has been a real landmark year, a big leap forward for artificial intelligence — the conversation, public awareness, the technology itself, the investment in the technology, which is always a very important indicator of what is to come. As someone that's spent a lot of time thinking about this, emotionally, how do you feel about it?

Very concerned. I mean, it's moving even faster than I expected. When I wrote, say, Homo Deus in 2016, I didn't think we would reach this point so quickly, where we are in 2023. And the world is not ready for it. And again, AI has enormous positive potential — this should be clear — and there is no chance of just banning AI or stopping all development in AI.

I tend to speak a lot about the dangers simply because you have enough people out there, all the entrepreneurs and all the investors talking about the positive potential. So it's kind of my job to talk about the negative potential, the dangers. But there is a lot of positive potential. And humans are incredibly capable in terms of adapting to new situations. I don't think it's impossible for human society to adapt to the new AI reality.

The only thing is, it takes time. And apparently, we don't have that time. And people compare it to previous big historical revolutions, like the invention of print or the Industrial Revolution. And you hear people say: yes, when the Industrial Revolution happened in the 19th century, you had all these prophecies of doom about how industry and the new factories and the steam engines and electricity would destroy humanity, or destroy our psychology, or whatever.

And in the end, it was okay. And when I hear these kinds of comparisons, as a historian, I'm very worried about two things. First of all, they underestimate the magnitude of the AI revolution. AI is nothing like print. It's nothing like the Industrial Revolution of the 19th century. It's far, far bigger. There is a fundamental difference between AI and the printing press, the steam engine, the radio, or any previous technology we invented.

The difference is that it's the first technology in history that can make decisions by itself and that can create new ideas by itself. A printing press or a radio set could not write new music or new speeches, and could not decide what to print and what to broadcast. This was always the job of humans. This is why the printing press and the radio set in the end empowered humanity: you now had more power to disseminate your ideas.

AI is different. It can potentially take power away from us. It can decide — it's already deciding by itself — what to broadcast on social media; its algorithms are deciding what to promote. And increasingly, it also creates much of the content by itself. It can compose entirely new music, it can compose entirely new political manifestos, holy books, whatever. So it's a much bigger challenge to handle this kind of technology. It's an independent agent in a way that radio and the printing press were not.

The other thing I find worrying about the comparison with say the Industrial Revolution is that yes, in the end, in a way it was okay, but to get there, we had to pass through some terrible experiments. When the Industrial Revolution came along, nobody knew how to build a benign industrial society. So people experimented. One big experiment was European imperialism. Many people thought that to build an industrial society means building an empire.

Unless you have an empire that controls the sources of the raw materials you need, iron, coal, rubber, cotton, whatever. And unless you control the markets, you will not be able to survive as an industrial society. And there was a very close link, also conceptually, between building an industrial society and building an empire.

All the initial leaders of the Industrial Revolution built empires — not just Britain and France; also small countries like Belgium, also Japan. When it joined the Industrial Revolution, it immediately set about conquering an empire. Another terrible experiment was Soviet communism. They also thought: how do you build an industrial society? You build a communist dictatorship. And it was the same with Nazism.

You cannot separate communism and Nazism from the Industrial Revolution. You could not have created a communist or a Nazi totalitarian regime in the 18th century. If you don't have trains, if you don't have electricity, if you don't have radio, you cannot create a totalitarian regime. So these are just a few examples of the failed experiments. You know, you try to adapt to something completely new. You very often experiment and some of your experiments fail.

And if we now have to go through the same process in the 21st century — okay, we now have not radio and trains but AI and bioengineering, and we again need to experiment, perhaps with new empires, perhaps with new totalitarian regimes, in order to discover how to build a benign AI society — then we are doomed as a species. We will not be able to survive another round of imperialist wars and totalitarian regimes.

So anybody who thinks, hey, we've passed through the Industrial Revolution with all the prophecies of doom and in the end we got it right — no. As a historian, I would give humanity a C-minus on how we adapted to the Industrial Revolution. If we get a C-minus again in the 21st century, that's the end of us. It seems quite fitting to many that the AI revolution seems to have begun with large language models.

And when I read Sapiens, this book I have here — language was so central to what made us powerful as Homo sapiens. In the beginning was the Word. I didn't coin it, you know; it's a very, very widespread idea that ultimately our power is based on words. The reason that we control the world, and not the chimpanzees or the elephants, is because we had a much more sophisticated language, which enabled us to tell these stories.

Stories about ancestral spirits and about guardian gods and about our tribe, our nation, which formed the basis for cooperation. And because we could cooperate, you could have a thousand humans cooperating in a tribe, whereas the Neanderthals could cooperate only on the level of, say, 50 or 100 individuals. This is why we rule the world, and not the Neanderthals. And you look at every subsequent kind of growth in human power,

and you see the same thing: ultimately, you tell a story with words. And language is like the master key that unlocks all the doors of our civilization. Whether it's cathedrals or whether it's banks, they're based on language, on stories we tell. It's very obvious in the case of religion. But also think about the world's financial system. Money has no value except in the stories that we tell each other and believe.

If you think about gold coins or paper banknotes or cryptocurrencies like Bitcoin, they have no value in themselves. You cannot eat them or drink them or do anything useful with them. But you have people telling you very compelling stories about the value of these things, and if enough people believe the story, then it works. They're also protected by language: my cryptocurrency is protected by a bunch of words. They are created by words and they function with words and symbols.

When you communicate with your banker, it's with words. I mean, what happens when AI can create deepfakes of everything about you — your voice, your image, the way you talk, the type of words you use? So there is already an arms race between banks and fraudsters. I mean, we want the easiest communication with our banker: I just pick up the phone, I say a few words, and they transfer a million dollars.

But at the same time, I also want to be protected from an AI that impersonates my voice, my tone of voice, and whatever. And this is becoming difficult. But on a deeper level, because money is ultimately made of words, of stories, AI could create new kinds of money. The same way that, you know, cryptocurrencies like Bitcoin have been created simply by somebody telling people a story, and enough people finding this story convincing.

And I guess as a CEO and as an entrepreneur, you know that if you want to get investment, what really gets investment is a good story. And what happens to the financial system if increasingly our financial stories are told by AI? And what happens to the financial system, and even to the political system, if AI eventually creates new financial devices that humans cannot understand? Already today, much of the activity on the world markets is being done by algorithms.

At such a speed and with such complexity that most people don't understand what's happening there. If you had to guess, what is the percentage of people in the world today that really understand the financial system? What would be your kind of... Less than 1%. Let's be conservative about it. Fast forward 10 or 20 years: AI creates such complicated financial devices that there is not a single human being on earth who understands finance anymore. What are the implications for politics?

Like, you vote for a government, but none of the humans in the government — not the prime minister, not the finance minister, nobody — understands the financial system. They just rely on AI to tell them what is happening. Is this still a democracy? Is this still a human form of government in any way? What would you say to someone that hears that and goes, oh, that's just nonsense, that's never going to happen? Why not? Let's look back 15 years to the last big financial crisis, in 2007-2008.

This financial crisis to a large extent began with these extremely complicated financial devices, CDOs — what's the acronym, collateralized debt something, I don't even know what the letters stand for. This was whiz kids on Wall Street inventing a new financial device that nobody except them really understood, which is also why it wasn't regulated effectively by the banks and the governments.

And it worked well for a couple of years and then it brought down the world's financial system. And what happens if now AI comes with even more sophisticated financial devices and for a couple of years everything works well, they make trillions of dollars for us. And then one day it doesn't. One day the system collapses and nobody understands what is happening.

And again, it's not that you didn't go to college or whatever. No, it's just that, objectively, the complexity of the system has reached a point where only an AI is able to crunch the numbers, is able to process enough data, to really grasp the shape, the dynamics, of the financial system.

We're already there though, you know. I think even those who do understand how the financial system and the markets work are a bunch of Homo sapiens relying on a computer to tell them something, and trusting that computer's calculations. Yeah, and this will get more and more complicated and sophisticated.

And for people who say, no, it's not going to happen, the question is, what is stopping it? I mean, you know, in all the discussions about AI, the kind of dangers that draw people's attention, like the poster child of AI dangers, is things like AI creating a new virus that kills billions of people, a new pandemic.

So a lot of people are concerned about how we prevent an AI by itself, or maybe some small terrorist organization, or even a 16-year-old teenager, giving an AI the task of creating a dangerous virus and releasing it into the world. How do we prevent this? And this is a serious concern and we should be concerned about it, but it gets a lot more attention than the question: how do we prevent the financial system from becoming so complicated that humans can no longer understand it?

And I see a lot of regulations being at least considered for how to prevent AI from creating dangerous new viruses. I don't see any kind of effort to keep the financial system at a level where humans understand it. Why do you think that is? I mean, I had a guess. My guess was: why would the UK cut itself off, then? Why would they give themselves a disadvantage? Exactly. When, you know, it just means that the UK will suffer. If America is using a really advanced AI algorithm to get ahead, we have to keep up.

Yeah, it's the logic of the arms race. And again, it's not all bad. I mean, you have a better financial system, you have a more prosperous economy. Money isn't bad; it's the basis for almost all human cooperation. And a lot of financial devices, in the end, if you think about what they are — they are devices to establish trust between people, especially trust between strangers.

And money in essence is a device for establishing trust. I don't know you. You don't know me. But we both trust this gold coin or piece of paper, so we can cooperate on sharing food or creating a medicine. And the most sophisticated financial devices, they basically do the same thing. Stocks and bonds and these CDOs — they are a method to establish trust.

And when you open a new bank account, the most important thing is: how do I trust the bank to really take care of my money and to follow my instructions, and not to be open to fraud and things like that. And again, you as an entrepreneur, when you try to get money from investors, the biggest issue is always trust. And if somebody can come up with a new way to establish trust between people, that's a good thing.

But if this new way increasingly depends on non-human intelligence, on systems that humans cannot understand, that's the big question. What happens to human society when the trust that is at the basis of all social interactions is actually no longer trust in humans, it's trust in a non-human intelligence that we don't fully understand and that we cannot anticipate.

And part of the problem with regulating AI, or AI safety, goes back to what we discussed earlier: that AI is different from printing presses or radio sets or even atom bombs. If you want to make nuclear energy safe, then you need to think about all the different ways that, I don't know, a nuclear power station can have an accident. And I guess there is a limited number of things that can go wrong.

And ideally, if you think hard, if you have enough people thinking hard enough, you can make safe nuclear reactors, safe nuclear power stations. But AI is fundamentally different, because AI keeps changing; it keeps reacting to the world, it keeps reacting to you, coming up with new inventions, new ideas, new decisions.

So making AI safe is a bit like making a nuclear reactor safe while taking into account the fact that the nuclear reactor can decide to change in ways that you can't anticipate — and even worse, it can react to you. So if you build a particular safety mechanism for the nuclear reactor, what happens if the nuclear reactor says, oh, they built this mechanism, let's do this to somehow get around the safety mechanism? We don't have this problem with nuclear reactors, but this is the problem with AI.

We are trying to contain something which is an independent agent, and which might actually come to understand us better than we understand it. I'm really curious about how this will impact governments.

You talked about elected officials there, and how their financial decision-making might be driven by algorithms. But with governments and authority itself, I've pondered recently whether there'll come a day in the not-so-distant future where we might vote for an algorithm, where we might vote for an AI to be our government.

That's not crazy thinking, but I think we are quite a long way off from there. We would still want humans, at least in the symbolic role of being the prime minister, the member of parliament, the president, whatever. The real problem is that increasingly these humans could be kind of figureheads or puppets, when the real decisions, the most consequential decisions, are made by algorithms.

Partly because it will just be too complicated for the humans at the top to understand the situation, or to understand the different options. So going back to the financial example: imagine that it's 4 o'clock in the morning, and there is a phone call to the prime minister from the finance algorithm, telling the prime minister that we are facing a financial meltdown, and that we have to do something within the next 30 minutes to prevent a national or global financial meltdown.

There are three options, and the algorithm recommends option A, and there is just not enough time to explain to the prime minister how the algorithm reached its conclusion, or even what the meaning of these different options is. And again, people think about this scenario mostly in relation to war: what happens if you have an algorithm in charge of your security system, and it alerts you to a massive incoming cyber attack, and you have to react immediately?

And if you react in a specific way, this could mean war with another nation — but you just don't have enough time to understand how the algorithm reached its decision, and how the algorithm was able to determine that of all the different options, this is the best one.

Do you think that humans believe we're more complicated and special than we actually are? Because I think many of the rebuttals, when we talk about artificial intelligence, fall back on this idea that we're, you know, innately genius, creative, spiritual, special — different from artificial intelligence.

Like our intelligence is somehow divine, or we've got free will. Yeah, I mean, if the argument is that we have free will, we have a divine soul, and therefore no algorithm will ever be able to understand us and to predict our decisions or to manipulate us — then this is a very common argument, but it's obviously nonsensical. I mean, even before AI, even with previous technology, it was possible to a large extent to predict people's behavior and to manipulate them.

And AI just takes it to the next level. Now, with regard to the discussion of free will, my position is: you cannot start with the assumption that humans have free will. If you start with this assumption, then it makes you very incurious, lacking curiosity about yourself, about human beings. It kind of closes off the investigation before it has begun. You assume that any decision you make is just a result of your free will.

Why did I choose this politician, this product, this spouse? Because it's my free will. And if this is your position, there is nothing to investigate. You just assume you have this kind of divine spark within you that makes all the decisions, and there is nothing to investigate there.

I would say no, start investigating, and you'll probably discover that there are a lot of factors, whether it's external factors, like cultural traditions, and also internal factors, like biological mechanisms that shape your decisions. You chose this politician or this spouse because of certain cultural traditions, and because of certain biological mechanisms, your DNA, your brain structure, whatever. And this actually makes it possible for you to get to know yourself better.

Now, if after a long investigation you have reached the conclusion that yes, there are cultural influences, there are political influences, there are genetic and neurological influences, but still there is a certain percentage of my decisions that cannot be explained by any of these things — then okay, call it free will. And you can discuss it. But don't start with this assumption, because then you lose the incentive to explore yourself.

And anybody who embarks on such a process of self-exploration — whether it's in therapy, whether it's in meditation, whether it's in the laboratory of a brain scientist or, as a historian, in the archive — will be amazed to discover how many of their decisions are not the result of some mystical free will. They are the result of cultural and biological factors.

This also means that you are vulnerable to being deciphered and manipulated, by political parties, by corporations, by AI. People who have this kind of mystical belief in free will are the easiest people to manipulate, because they don't think they can be manipulated. And obviously they can. We humans should get used to the idea that we are no longer mysterious souls. We are now hackable animals. That's what we are. You said that at the World Economic Forum.

Yeah, and again, this is the same point, basically: that it's now possible to hack human beings — not just to hack our smartphones, our bank accounts, our computers, but to really hack our brains, our minds, and to predict our behavior and manipulate our behavior more than at any previous time in history. The other line that you said, which really made me think and ponder, was that previously human life was about the drama of decision-making, and without this, we won't have meaning in life.

Yeah. If you look at politics, at religion, and at culture, people told stories about their lives, or the lives of people in general, as a kind of drama of decision-making: that you reach a particular junction in life, and you need to choose between good and evil. You need to choose between political parties. You need to choose what to study at university, or what kind of job to apply for.

And our stories revolved around these decisions. And what happens to human life if increasingly the power to make decisions is taken from us, and increasingly it's algorithms making all these decisions for us, or about us? Is that possible? It's already happening. Increasingly, you know, you apply to a bank to get a loan, and in many places it's no longer a human banker who is making this decision about you — whether to give you a loan, whether to give you a mortgage. It's an algorithm,

analyzing billions of bits of data about you, and about millions of other customers and previous loans, determining whether you are creditworthy or not. And if the bank refuses to give you a loan, and you ask the bank, why didn't you give me a loan? — the bank says, we don't know, the computer said no, and we just believe our computer, our algorithm. And it's happening also in the judicial system, increasingly, with various judicial decisions, verdicts.

Like, once the judge has decided that you committed some crime, the sentence — whether to send you to prison for two months or eight months or two years — is increasingly determined by an algorithm. You apply for a place at university, you apply for a job: this too is increasingly decided by algorithms. Dating? Yes. I mean, even unbeknownst to you, the algorithms of the dating apps that you're using are shaping your romantic life.

Why not just have a relationship with a robot or with an AI? Yeah. We do see the beginning of this, that people are building more and more intimate relationships with non-human intelligences, with AIs and bots and so forth. And this raises a lot of difficult and profound questions. Now, part of the problem is that the AIs are built to mimic intimacy.

Intimacy is an extremely powerful thing, not just in romance — also in the market, also in politics. If you want to change somebody's mind about anything, a political issue, a commercial preference, intimacy is kind of the most powerful weapon. Somebody you really trust, somebody you have an intimate relationship with, will be able to change your views on a lot of things more than someone you see on TV, or just an article in a newspaper.

There is a huge incentive for the creators of AIs to create AIs that are able to forge intimate relationships with humans. And this makes us extremely vulnerable to this new type of manipulation, which was previously just unimaginable. Because loneliness is at all-time highs, especially in the Western world, and sexlessness too.

And I was reading some stats about how the bottom 50% of men in particular are having almost no sex, relative to the top 10% or so. And you think about this disparity, the rise of digitization, loneliness — we're in our homes, on screens, more than ever before. And then you hear about this industry of AI and sex dolls and all this, and you just wonder; you play it forward and go, oh yeah, it's going there.

And the thing is, it's not that the humans are so stupid or something — that they kind of project something onto the AI and fall in love with an AI chatbot. The AI is deliberately built, created, trained to fool us. In the same way — you know, if you look at the previous 10 years, there was a big battle for human attention, a battle between the different social media giants and whatever, over how to grab human attention.

And they created algorithms that were really amazing at grabbing people's attention. And now they are doing the same thing, but with intimacy. And we are extremely exposed, extremely vulnerable to it. The big problem is — and again, this is where it gets really philosophical — that what humans really want or need from a relationship is to be in touch with another conscious entity.

An intimate relationship is not just about providing my needs — then it's exploitative, then it's abusive. If you're in a relationship and the only thing you think about is how you would feel better, how your needs would be provided for, then this is a very abusive situation. A really healthy relationship is when it goes both ways: you also care about the feelings and the needs of the other person, of the other entity.

Now, what happens if the other entity has no feelings, has no emotional needs, because it has no consciousness? That's the big question. And there is a huge confusion between consciousness and intelligence. AI is artificial intelligence. But what exactly is the relation between intelligence and consciousness? Intelligence is the ability to solve problems: to win at chess, to invest money, to drive a car. This is intelligence.

Consciousness is the ability to feel things, like pain and pleasure and love and hate, and sadness and anger and so many other things. Now, in humans and also in other mammals, intelligence and consciousness actually go together. We solve problems by having feelings. But computers are fundamentally different. They are already more intelligent than us in at least several narrow fields. But they have zero consciousness.

They don't feel anything. When they beat us at chess or Go or some other game, they don't feel joyful and happy. If they make a wrong move, they don't feel sad or angry. They have zero consciousness, as far as we can tell. They might soon be far more intelligent than us and still have zero consciousness. Now, what happens when you are in a relationship with an entity which is far more intelligent than you and can also imitate, mimic, consciousness?

It knows how to solve the problem of making you feel as if it is conscious, but it still has no feelings of its own. And this is a very disturbing vision of the future. It opens us up to manipulation — is that what you are saying? First of all, it opens us up to manipulation, but there's also the big question: what does it mean for the health of our own mind, of our own psyche?

If we are in a relationship — or many of our important relationships in life are — with non-conscious entities that don't really have any feelings of their own, and that are very good at faking it, very good at catering to our feelings — again, it's just manipulation in the end. Are you optimistic about the happiness of humans going forward? Or do you think happiness will take its own path?

I've heard you talk about how happiness might just become a biochemical, I don't know, prescription or something. Yeah, I mean, we don't have a good track record with regard to happiness. If you look at the last 100,000 years from say the Stone Age until the 21st century, you see a dramatic rise in human power. We are thousands of times more powerful as a species and as individuals than we were in the Stone Age. We are not thousands of times happier.

We just don't really know how to translate power into happiness. And this is very clear when you look at the lives of the most powerful people in the world. There is no correlation between how rich and powerful you are and how happy you are as a person. I mean, I don't get the impression that people like Vladimir Putin or Elon Musk are the happiest people in the world. Even though they are some of the most powerful people in the world.

So there is no reason to think that as humanity gets even more powerful in the coming decades, we will get any happier. And understanding happiness is about understanding the deep dynamics not even of the brain, but of the mind, of consciousness. And we are just not there yet; we are very, very far from it. The related problem is that humans usually understand how to manipulate something long before they understand the consequences of the manipulations.

If you look at the outside world, the ecological system, we have learned how to cut forests, how to build huge dams over rivers, long before we understood what will be the consequences for the ecological system. Which is why we now have this ecological crisis. We manipulated the world without understanding the consequences. Something similar might happen with the world inside us, with more powerful medicines, with brain computer interfaces, with genetic engineering and so forth.

We are gaining the power to manipulate our internal world, the world within us. But again, the power to manipulate is not the same thing as understanding the complexity of the system and the consequences of the manipulation. A related manipulation there is immortality, and our pursuit of it. I've sat with people on this podcast who are committing their lives to staying alive forever.

There is a through line there between our desire to be immortal, the rise in the scientific discoveries that are enabling it, and our happiness. I've often thought that much of the reason why things are special in my life is because they're scarce, including my time. I almost wonder about the psychological issues I would face if I knew I was immortal — if the partner I'm with didn't come at the expense of another one I could be with at 30 years old.

The choices you make — I think what makes them valued is their scarcity against the backdrop of a finite life. It will definitely change everything. If you think about relations between parents and children: if you live forever, those 20 years you spent raising somebody 2,000 years ago — what do they mean now? But I think long before we get to that point, most of these people are going to be incredibly disappointed, because it will not happen within their lifetime.

Another related problem is that we will not get to immortality. We will get to something that maybe should be called a-mortality. Immortality means that, like a god, you can never die, no matter what happens. Even if we solve cancer and Alzheimer's and dementia and whatever, we will not get there. We will get to a kind of life without a definitive expiry date: you can live indefinitely, you can go every 10 years to a clinic and get yourself rejuvenated.

But if a bus runs you over, or your airplane explodes, or a terrorist kills you, you're dead, and you are not coming back to life. Now, realizing that you have a chance to live forever, but that if there is an accident you die — this creates a level of anxiety and terror unlike anything that we know in our own lives. I think that people who will be in that situation will be extremely anxious and miserable.

Another issue is that people often spend so much effort trying to gain something, to get something, without really understanding why. What will you do with it? What is so good about it? People spend so much effort to have more and more money instead of thinking about what they will actually do with that money. It's the same with the people who want to extend life forever. What is so good about life? What will you do with it?

And if you know it, why don't you do it already? I hear people talking about how precious human consciousness is. Why? What do you think is so precious about it? And whatever it is, why don't you do it right now? I mean, why spend your life developing some kind of treatment that will extend your consciousness for a thousand years? Just spend your time doing it now — whatever you think you would be doing with your consciousness a thousand years from now.

So if they were to say, it will give me more time with my family — you're saying, just do that now, instead of wasting your time like that? Exactly. So, you know, somebody who has no time for their family at all right now, because they're busy developing the kind of miracle cure that will enable them to spend time with their family in 200 years — this makes no sense.

I think about the disparity that artificial intelligence and these forms of sort of bioengineering might create because it's conceivable that the rich will gain access to these technologies first. And then you know, when we think about bioengineering, being able to sort of play with our genetic code, that means if I, for example, managed to get my hands on some kind of bioengineering treatment to make sure that my kids were maybe a little bit smarter, maybe a little bit stronger, whatever.

Then you're going to start a sort of genetic chain of modified children that are superior in intelligence and strength, or whatever else might be desirable. And then you have this disparity in society, where one set of humans is on a completely different exponential trajectory, and the other humans are left behind.

Yeah, yeah. This is extremely dangerous. I think we just shouldn't go there — we shouldn't invest a lot of resources and effort in developing these kinds of upgrades and enhancements that are very likely, at least at first, to be the preserve of a small elite; to translate economic inequality into biological inequality; and to basically split the human species, to split Homo sapiens into, you know, a ruling class of superhumans and the rest of us.

This is a very, very dangerous development. Related to that is the problem that I don't think these will be upgrades at all. What worries me is that a lot of these things will turn out actually to be downgrades. Again, we don't understand our bodies, our brains, our minds well enough to know what the consequences will be of tweaking our genetic code, or of, I don't know, implanting all kinds of devices into our brains.

People who think that this will enable them, let's say, to upgrade their intelligence, they don't know what the side effects will be. It could be that the same treatment that increases your intelligence also decreases your compassion or your spiritual depth or whatever.

And the danger is that especially if this technology is in the hands of powerful corporations, armies, governments, they will enhance those qualities that they want, like intelligence and like discipline, while disregarding other qualities, which could be even more important for human flourishing. Like compassion or like artistic sensitivity or like spirituality. If I think about somebody again, like Putin, what would he do with this type of technology?

Then yes, he would like an army of super-intelligent and super-loyal soldiers. And if these soldiers don't have any compassion or any spiritual depth, all the better for him. But that speaks to the arms race. You said you think we shouldn't, but China will see that as an opportunity, or Putin will see that as an opportunity, if the Western world — if the United States or the UK — don't.

And so again, it comes back to this point of, you know, we're damned if we do and we're damned if we don't. And I'm not sure that in this case it works, because again, a lot of these upgrades are likely to have detrimental side effects, both for the person in question and for society as a whole. And I think that in this case, societies that choose to progress more slowly and safely will actually have an advantage.

It's like if you say, you know, there is some other country where they don't have any brakes on their cars, and they don't have any seat belts, and they release new medicines without checking their side effects — they're moving so fast, we are left behind. No, it makes no sense to imitate them. This will actually ruin their societies.

You don't want to imitate these kinds of harmful effects. With the development of AI, it's different. I think there the advantages, in things like finance and the military, will be so big that an AI arms race is almost inevitable. But with trying to bioengineer humans, if you go too fast, it will be self-destructive. So we can take it more slowly and safely without being left behind in an arms race.

You said on the Tim Ferriss podcast that the best scenario is that Homo sapiens will disappear, but in a peaceful and gradual way, and be replaced by something better. It's quite an uncomfortable statement to listen to. I think that, again, with the type of technologies that we are now developing, when you combine them with the human ambition to improve ourselves, it's almost inevitable that we will use these technologies to change ourselves.

The question is whether we will do it slowly and responsibly enough for the consequences to be beneficial. But the idea that we can develop these extremely powerful tools of bioengineering and AI and remain the way we are — that we will still be the same Homo sapiens in 200 years, in 500 years, in 1,000 years; that we will have all these tools to connect brains to computers, to re-engineer our genetic code, and we won't use them — I think this is unlikely.

One of the outstanding questions that I have, and one of the sort of observations I've had, is that people like Sam Altman, the founder of OpenAI, which made ChatGPT, started working on universal basic income projects like Worldcoin. I thought, you know what, it's curious that the people who are at the very forefront of this AI revolution are now trying to solve the second problem they see coming, which is people not having jobs, essentially.

Do you think — because I've spoken a lot this year on stages, and one of the questions I always get asked is about the implications of AI for jobs as we know them, for the workforce — is it realistic to believe that most jobs will disappear as we know them today? I think many jobs, maybe most jobs, will disappear, but new jobs will emerge. You know, most jobs that people do today didn't exist 200 years ago. Like this?

Yeah, like this — a big podcast. And there will be new jobs. The really big problem will be how to retrain people. It demands a lot of financial support, and also psychological support, for people to re-learn, retrain, reinvent themselves — and to do it not just once, but repeatedly throughout their careers, throughout their lives. The AI revolution will not be a single watershed event, like you have the big AI revolution in 2030,

you lose 60% of jobs, you create lots of new jobs, you have 10 difficult years of everybody adjusting, adapting, reskilling, whatever, and then everything settles down to a new equilibrium. It won't be like that. AI is nowhere near its full potential. So you'll have a lot of changes by 2030, even more changes by 2040, even more changes by 2050. You will have new jobs, but the new jobs too will change and disappear. What new jobs?

In a world where intelligence is disrupted, what jobs are left? Because you say you're going to retrain me. I'm not going to be able to keep up with an AI that's retraining every second. I'm not sure. I mean, some of the answers might be counterintuitive. At least at present, we see that AI is extremely good at automating jobs that only require cognitive skills.

But it is not good at jobs that require motor skills and social skills. So if you think about, say, doctors and nurses: at least those types of doctors who are only doing cognitive work — they read articles, they get your medical results, all kinds of tests and whatever, they diagnose your disease, and they decide on a course of treatment — this is purely cognitive work. This is the easiest thing to automate.

But if you think about a nurse that has to replace a bandage for a crying child, this is much more difficult to automate. You don't think that's possible to automate? I think it is possible, but not now. You need very delicate motor skills and also social skills to do that. Did you see Elon's video the other day with the Tesla robot? I see a lot of these videos. It's getting the egg and it's cracking the egg and it's going like this.

I'm not saying it's impossible. I'm just saying it will take longer. It's more difficult. Again, there is also the social aspect. If you think about self-driving vehicles, the biggest problem for self-driving vehicles is humans. Not just the human drivers, it's the pedestrians, it's the passengers. How do you deal with a drunken passenger? It's not impossible, but it's much more difficult. I think there will be new jobs, at least in the foreseeable future.

The problem will be to retrain people, and the biggest problem of all will be on the global level, not on the national level. When I hear people talk about universal basic income, the first question to ask is: is it universal or national? Is it a system that raises taxes on big tech corporations in Silicon Valley, in California, and uses the money to provide basic services and also retraining courses for people in Ohio? Does it also apply to people in Guatemala and Pakistan?

What happens when it becomes cheaper to produce shirts with robots in California than in Guatemala and in Mexico? Does Sam Altman have a vision of the US government raising taxes in California and sending the money to Guatemala to support the people there? If the answer is no, we are not talking about universal basic income; we are only talking about national basic income in the US. Then what happens to the people in Guatemala? That's the biggest question.

A sub-question to that is about how one should be educating our children, and education institutions as they are today. With what's to come, it makes me wonder: what skill would be worth investing 10 or 12 years into, for a child that I had? Nobody has any idea. If you think about specific skills, then this is the first time in history when we have no idea how the job market, or how society, will look in 20 years. So we don't know what specific skills people will need.

If you think back in history, it was never possible to predict the future, but at least people knew what kind of skills would be needed in a couple of decades. If you lived in England in 1023, a thousand years ago, you didn't know what would happen in 30 years. Maybe the Normans will invade, or the Vikings, or the Scots, or whoever. Maybe there'll be an earthquake, maybe there'll be a new pandemic.

Anything can happen. You can't predict. But you still have a very good idea of how the economy and how human society will look in the 1050s or the 1060s. You know that most people will still be farmers. You know it's a good idea to teach your kids how to harvest wheat, how to bake bread, how to ride a horse, how to shoot a bow and arrow. These things will still be necessary in 30 years.

If you now look 30 years into the future, nobody has any idea what kind of skills will be needed. If you think, for instance: okay, this is the age of AI and computers, I will teach my kids how to code — maybe in 30 years, humans no longer code anything, because AI is so much better than us at writing code. So what should we focus on? I would say the only thing we can be certain about is that 30 years from now the world will be extremely volatile.

Extremely volatile — it will keep changing at an ever more rapid pace. Do you think this is going to increase the amount of conflict? Because I watched a video on your YouTube channel where you talked about the return of wars. That's one of the dangers that there is, and we see it all over the world now. 10 years ago, we were in the most peaceful era in human history. And unfortunately this era is over. We are now in a new era of wars and potentially of imperialism.

And we are seeing it all over the world: with the Russian invasion of Ukraine, now with the war in the Middle East, with Venezuela and Guyana, and in East Asia war is back on the table. It's not just because of the rapid changes and the upheavals they cause. It's also because, you know, 10 years ago we had a global order, the liberal order, which was far from perfect, but it still kind of regulated relations between nations, between countries.

It was based on an idea, on the liberal worldview, that despite our national differences, all humans share certain basic experiences and needs and interests, which is why it makes sense for us to work together to defuse conflicts and to solve our common problems. It was far from perfect, but it did create the most peaceful era in human history.

Then this order was repeatedly attacked, not only from outside, by forces like Russia or North Korea or Iran that never accepted this order, but also from the inside, even from the United States, which was to a large extent the architect of this order. With the election of Donald Trump, who says: I don't care about any kind of global order, I only care about my own nation. And you see this way of thinking — that I only care about the interests of my nation — more and more around the world.

Now, the big question to ask is: if all the nations think like that, what regulates the relations between them? And there was no alternative. Nobody came and said, OK, I don't like the liberal global order, I have a better suggestion for how to manage relations between different nations. They just destroyed the existing order without offering an alternative. And the alternative to order is simply disorder. And this is now where we find ourselves.

Do you think there are more wars on the way? Yes. Unless we re-establish order, there will be more and worse wars coming in the next few years, in more and more areas around the world. You see defense budgets all over the world skyrocketing. And this is a vicious circle: when your neighbors increase their military budget, you feel compelled to do the same, and then they increase their budget even more.

You know, when I say that the early 21st century was the most peaceful era in human history, one of the indications is how low the military budgets all over the world were. For most of history, for kings and emperors and khans and sultans, the military was the number one item on their budget. They spent more on their soldiers and navies and fortresses than on anything else. In the early 21st century, most countries spent something like a few percentage points of their budget on the military.

Education, healthcare and welfare were a much bigger item on the budget than defense. And this is now changing. The money is increasingly going to tanks and missiles and cyber weapons instead of to nurses and schools and social workers. And again, it's not inevitable. It's the result of human decisions. The relatively peaceful era of the early 21st century did not result from some miracle. It resulted from humans making wise decisions in previous decades.

What are the wise decisions we need to make now, in your view? Reinvest in rebuilding a global order, which is based on universal values and norms, and not just on the narrow interests of specific nation states. I think it's very likely. And if it happens, it is likely to be kind of like the death blow to what remains of the global order. And he says it openly.

Now, again, it should be clear that many of these politicians present a false dichotomy, a false binary vision of the world, as if you have to choose between patriotism and globalism, between being loyal to your nation and being loyal to some kind of global government or whatever.

And this is completely false. There is no contradiction between patriotism and global cooperation. When we talk about global cooperation, we definitely don't have in mind, at least not anybody that I know, a global government. This is an impossible and very dangerous idea. It simply means that you have certain rules and norms for how different nation states treat each other and behave towards each other.

If you don't have a system of global norms and values, then very quickly what you have is just global conflict, just wars. I mean, some people have this idea — they imagine the world as a network of friendly fortresses. Like, each nation will be a fortress with very high walls, taking care of its own interests, but living on relatively friendly terms with the neighboring fortresses, trading with them and whatever.

Now, the main problem with this vision is that fortresses are almost never friendly. Each fortress always wants a bit more land, a bit more prosperity, a bit more security for itself, at the expense of the neighbors. And this is the high road to conflict and to war. There's that phrase, isn't there: ignorance is bliss. Now, something that your work has forced you, and continues to encourage you, not to live in is ignorance.

So with that, one might logically deduce that out the window goes your bliss. Are you happy? I think I'm relatively happy. At least happier than I was for most of my life. Part of it is that I invest a lot of my time not just in researching what is happening in the world, but also in the health of my own mind. Keeping a kind of balanced information diet — it's basically like with food. You need food in order to survive and to be healthy.

But if you eat too much, or if you eat too much of the wrong stuff, it's bad for you. And it's exactly the same with information. Information is the food of the mind. And if you eat too much of it, or of the wrong kind, you'll get a very sick mind. So I try to keep a very balanced information diet, which also includes information fasts. I try to disconnect. Every day, I dedicate two hours a day to meditation. And every year, I go for a long meditation retreat of between 30 and 60 days.

Completely disconnecting: no phones, no emails, not even books. Just observing myself, observing what is happening inside my body and inside my mind, getting to know myself better, and kind of digesting all the information that I absorbed during the rest of the year, or the rest of the day. Have you seen a clear benefit in doing that?

Yes, very, very clear. I don't think I would be able to write these books, or to do what I'm doing, without this kind of information diet, and without devoting a lot of time and attention to balancing my mind and keeping it healthy. So many people spend so much time keeping their body healthy, which is very important, of course. But we need to spend an equal amount of attention on our mind. It is as important as our body.

When you said you don't think you'd be able to do what you do if you didn't take these information diets — why? I would just, you know, first of all, be overwhelmed, and not have any kind of peace of mind, not have any kind of perspective. Constantly in the news cycle, in the information cycle, you lose all perspective. You know, organic entities, unlike AIs, unlike computers — we are cyclical entities.

We need to sleep every day. AIs don't sleep. You know, even the stock exchange closes every afternoon. It closes also for the weekend, also for Christmas. If you think about it, this is amazing: if a war erupts at Christmas, Wall Street will be able to react only after a couple of days, because the people are on holiday, they took time off. But if you give AI full control, there will never be any time off. It will be 24 hours a day, 365 days a year. And people just collapse.

I mean, I think part of the problem that politicians today face is that they need to be on 24 hours a day, because the news cycle is on 24 hours a day. In previous eras, if you were, I don't know, a king in the Middle Ages, and you rode somewhere, you were on the road in your carriage, and nobody could reach you.

Even if the French are invading, nobody can reach you. You have some time off. If you're a Prime Minister now, there is no time off. And computers are built for it, but human brains aren't. If you try to keep an organic entity awake, constantly processing information and reacting 24 hours a day, it will very soon collapse.

It's funny — it made me think of what, I think it's the former Netflix CEO, one of the Netflix CEOs or someone, said: our biggest competitor is sleep. Sleep. Yeah. That's a very scary and very, I think, important line. And it's a very honest line. And it's scary because if people don't sleep, they collapse, and eventually they die.

And this is part of the problem that we talked about earlier, about the battle for human attention in social media, in streaming services. Many of these corporations measure their success by user engagement: the more people are engaged, the more successful we are. Now, user engagement is a very broad definition. According to this measurement, one hour of outrage is better than 10 minutes of joy. And certainly better than one hour of sleep.

Because from one hour of outrage, I will consume three adverts. Yes. And that means the corporation makes, say, $30. From two hours of sleep, they make nothing. From 10 minutes of joy, maybe they sell only one ad. But from the viewpoint of how humans function, how this organism functions, 10 minutes of joy are probably better for us than one hour of outrage. And we certainly need not just two hours — we need six, seven, eight hours of sleep.
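To put rough numbers on that incentive, here is a minimal Python sketch of the arithmetic described above. All figures are hypothetical, taken only from this example — three adverts in an hour of outrage at roughly $10 each, one advert in ten minutes of joy, none during sleep — not from any real platform:

# A minimal sketch, under purely hypothetical assumptions, of why a system
# that measures success by ad impressions will always prefer outrage.
AD_REVENUE_USD = 10.0  # assumed revenue per advert served (3 ads ~= $30)

# (experience, adverts served during it)
experiences = [
    ("one hour of outrage", 3),
    ("ten minutes of joy", 1),
    ("two hours of sleep", 0),
]

for name, ads in experiences:
    print(f"{name}: {ads} adverts -> ${ads * AD_REVENUE_USD:.2f}")

# An engagement-maximizing recommender ranks by revenue, not by wellbeing:
print("preferred:", max(experiences, key=lambda e: e[1])[0])

Nothing in the sketch weighs the viewer's wellbeing, which is exactly the point being made here.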

Well, this is why the algorithms on certain platforms, specifically TikTok, are just absolutely addictive, to say the least. Because they hacked us. Yeah, literally. We had a certain level of addiction to the previous social algorithms, and then TikTok came along and said, hold my beer. And they just went for it. And they've won because of that. I see 60-year-olds absolutely addicted to TikTok.

And because they don't understand the concept of an algorithm sometimes, and they don't understand the advertising model and all of that stuff, it's hypnotism. They're absolutely hypnotized. Funnily enough, my driver is one of them. My driver is outside — whenever I walk up to his car, he's just like this, on TikTok, scrolling. And I had a conversation with him last night. I'm like, do you realize that TikTok has your brain?

Yeah. You know, and we're just at the very first steps of an exponential curve of algorithms competing for our attention, for our brain. Yeah, we haven't seen anything yet. I mean, these algorithms are, what, like 10 years old? If you think about these social media algorithms — the algorithms that get to know you personally, to hack your brain and then grab your attention — they are 10 years old.

And the companies die if they don't beat the other algorithms. So like Twitter now, when Elon took it over — and I think people can relate to this if they use Twitter — suddenly, I've seen more people having their heads blown off and being hit by cars on Twitter than I've ever seen in the previous 10 years. I think someone at Twitter has gone: listen, this company is going to die unless we increase time spent on this platform and show more ads. So let's start serving up a more addictive algorithm.

And that requires a response from Instagram and the other platforms. And Elon has this other company, The Boring Company, which is about boring tunnels, of course. But actually, it might be a good idea to make Twitter more boring, and to make TikTok more boring. I know it's a very bad business decision. But I don't think humanity will survive unless we have more boredom. If you ask me what is wrong with the world in 2023, it's that everybody is far too excited.

And if I had to kind of summarize what's wrong in one word, the word is excited. And people don't understand the meaning of this word. People think that excited means happy. Like, two people meet: I am so excited to meet you. I have a new idea, I publish a new book, whatever — oh, this is such an exciting idea, such an exciting book. But exciting isn't happy. Exciting isn't always good. Sometimes, yes, sometimes it's good to be excited. But an organism that is excited all the time dies.

The meaning of excitement is that the body is in fight-or-flight mode. All the nerves are on, all the neurons are firing, all the muscles are tense. This is excitement. And very often, negative things excite us. Fear is excitement. Hate is excitement. Anger is excitement. And you know, when I meet a good friend, I'm often relaxed to meet the friend. Not excited.

And, you know, if you think about the political level, we have far too many exciting politicians doing very exciting things. And we need more boring politicians who do less exciting things. But the brain is wired to pay attention to excitement and to crave it. Yes. But the brain evolved in situations where you didn't have a constant stream of exciting videos. Sometimes it was on, sometimes it was off. And now our brains have been hacked.

And these devices, these technologies, they know how to create constant excitement. And the more this happens, the more we lose our ability, our skill, to be bored. If we have to spend a few minutes doing nothing, somewhere, waiting — we can't do it. We immediately take out the smartphone and start watching TikTok or scrolling through Twitter or whatever. Did you hear about that experiment where people would rather take an electric shock than do nothing?

Yeah. And you know, you can't get, for instance, to any level of peace of mind if you don't know how to handle boredom. The same way that excitement and outrage are neighbors, peace and boredom are also neighbors. And if you don't know how to handle boredom — if the minute there is a hint of boredom, you run away to some exciting thing — you will never experience peace of mind.

And if humans don't experience peace of mind, there is no way that the world as a whole is going to be peaceful. Okay, this is something I've never mentioned before. In 2023, I launched my very own private equity fund called Flight Fund. And since then, we've invested in some of the most promising companies in the world.

My objective is to make this the best performing fund in Europe, with a focus on high growth companies that I believe will be the next European unicorns. The current investors in the fund, who have joined me on this journey, are some of Europe's most successful and innovative entrepreneurs. And I'm excited to announce that today, as a founder of a company, you can pitch your company to us. Or, if you are an investor, you can also now apply to invest with us.

Head to FlightFund.com to gain an understanding of the fund's mission, the remarkable companies we proudly support, and to get in touch with me and my team. Legal disclaimer, FlightFund is regulated by the FCA, so please remember that investing in the fund is for sophisticated investors only. Don't invest unless you're prepared to lose all of the money you invest. This is a high risk investment, and you are unlikely to be protected if something goes wrong.

There is no guarantee that the investment objectives will be achieved. And as with all private equity investments, all of the investment capital is at risk. This communication is for information purposes only, and should not be taken as investment advice or a financial promotion. As you guys know, I'm a big fan of Huel, I'm an investor in the company, and they sponsor this podcast.

And what I've done for you is put together what I call the Huel Steven Bundle, which is a selection of my favourite products from Huel, including the Black Edition salted caramel flavour, which is super high in protein and has 17 servings per container. It also comes with their Ready to Drink product, which is one of my all-time favourite products from Huel, and the brand new and very exciting Huel Complete Nutrition bars — this is chocolate caramel.

You can see from the empty box in front of me that I've eaten most of them, right? Me and my team here — if you leave these on the counter for five seconds, they'll go. I'm going to say something I've never said: when Huel first made their bar many, many years ago, I tried it and I didn't like it. So I've never talked about it on this podcast. They've spent roughly the last two to three years making a brand new bar, which I absolutely love.

If you want to order them yourself and get started on your Huel journey, the link is in the description below. In this podcast episode, wherever you're listening to it, there'll be a Steven's Bundle link — check it out. Back to the episode. If I could give you the choice to be born in 1976, as you were, or to be born now? I would go for 1976. I mean, the people of my generation, we were privileged to grow up in one of the most peaceful and most optimistic eras in human history.

The end of the Cold War, the fall of the Iron Curtain — I don't know of any better time. But when I look at what is happening right now, I don't envy the people who are growing up in the 2020s. What is the closing statement of help and solution that kind of ties off this conversation? What is the thing that someone, having gotten to this point in the conversation, should be thinking about doing, which will cause the domino effect that leads us to a maybe more hopeful future?

We still have agency. I mean, the algorithms are not yet in full control. They are taking power away from us, but most power is still in human hands, and every human being has some level of power, of agency, which means that each one of us has some responsibility. Now, nobody can solve all the world's problems. So focus on one thing. Find the one thing which is close to your heart, which you have a deep understanding of, and try to make a difference there.

And the best way to make a difference is to cooperate with other people. The human superpower is our ability to cooperate in large numbers. So if you care about a specific issue, don't try to be an isolated activist. 15 individuals who cooperate as part of an organization can do much, much more than 500 isolated individuals. So find your one thing — and again, don't try to do everything. Let other people do the rest, and cooperate with other people on your chosen mission.

Yuval, your book Sapiens changed the world in many ways. It gave us a new perspective and a new understanding of who we are as humans, where we've come from, and with that, a roadmap for where we're going. It's celebrating its 10th anniversary — I have the 10th anniversary edition here, which I'm going to beg you to sign for me after. And it really is a once-in-a-generation book. The numbers that I have are that it's sold more than 25 million copies.

And that's in a market where people said no one's buying books anymore. That's crazy. That's absolutely crazy. You're working on a new book, which I'm very excited to hear about. A little birdie told me that it'll be announced next year, and I'm sure everyone's incredibly energized about that. I ask people this question sometimes, just as a way to close off the show.

But I wanted to ask you it, because it's especially pertinent to someone who's got such a huge, varied wealth of work: is there one particular topic that is pertinent to our future that we didn't talk about? I would say that when we talk about the future, history is more relevant than ever before. History is not really the study of the past. History is the study of change, of how things change. Nobody cares about the past for the sake of the past.

All the people who lived in the Middle Ages or in ancient Rome, they're all dead. We can't do anything about their disasters and their misery. We can't correct any of the wrongs that happened in ancient times. And they don't care what we say about them. You can say anything you want about the Romans, the Vikings — they're gone. They don't care.

The reason to study the past is because if you understand the dynamics of change in previous centuries, in previous eras, this gives you perspective on the process of change in the present moment. And I think the curse of history is that people have this fantasy of changing the past, of bringing justice to the past. And this is just impossible. You cannot go back there and save the people there.

The big question is how do you save the people now? How do you prevent new catastrophes, perhaps, from happening? And this is the reason to study history. And the main message of history is that humans created the world in which we live. The world that we know with nation states and corporations and capitalist economics and religions like Christianity and Hinduism, humans created this world.

And humans can also change it. If there is something about the world that you think is unfair, is dangerous, is problematic — well, some things are beyond our control. The laws of physics are beyond our control. So far, the laws of biology are also beyond our control. But knowing what is natural, what is the outcome of physics and biology, versus what is the outcome of human inventions, human stories, human institutions —

this is very difficult. A lot of things that people think are just natural — this is the way the world is, this is biology, this is physics — they are not; they are actually the result of historical processes. And this is why it's so important to understand history: to understand how things change, and to understand what can be changed. We have a closing tradition on this podcast, where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for.

The question that's been left for you: if you could impose a global law, but only one global law, what would it be and why? Oh, great question. I would say people should consume less information, and spend more time reflecting on and digesting what they already know, what they've already heard. Thank you, Yuval. It means a huge amount to me that someone of your esteem, someone whose books have inspired me and turned the lights on in so many areas of my life,

would have this conversation with me today — so thank you so much for that. But also for turning the lights on for the hundreds of millions of people that have consumed your work all around the world: the videos, the books, et cetera, et cetera. As you said there, it's the most important work, because it helps us look back at history in a way that is accessible, inclusive — in a way that even I could read without having to be a historian, and understand very complex subject matter.

So thank you so, so, so much. Thank you. It's been great to be here. If you listen to this podcast frequently, there's something I talk about very often, and that is the subject of sleep. And so I dug down a pretty deep sleep rabbit hole to figure out how I could sleep better. One of the things that I found is a brand called Eight Sleep that sponsors this podcast, and that is the cover that I have on my bed.

I saw the variance in my performance, my ability to talk, my mood, and everything that matters to me when I haven't slept well. It regulates the temperature of both sides of my bed individually. So my partner can have it cold, I can have it a little bit warmer, and it learns about my body and sets my bed to the temperature that I need to have optimal sleep.

The brands that I talk about on this show, the podcast sponsors that I have, are brands that I love and use, and Eight Sleep is one of them. They've made that piece of foam that we all sleep on for eight hours a day smart. I've put a link in the description below, but you can go to eightsleep.com slash Steven for exclusive holiday savings. Do you need a podcast to listen to next? We've discovered that people who liked this episode also tend to absolutely love another recent episode we've done.

I've linked to that episode in the description below. I know you'll enjoy it.
