What role will AI play in the future of fake news and misinformation? And what does this have to do with your brain's internal models, or with voice passwords, or with what I'm calling the tall intelligence problem? And why do I believe that these earliest days of AI are actually its golden age, and that we're quickly heading for a Balkanization?
Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and author at Stanford, and in these episodes we sail deeply into our three-pound universe to understand why and how our lives look the way they do. Now, in the last two episodes, we talked about the notion of truth versus misinformation. Two weeks ago we covered the question about truth in the media, and last week was specifically about truth on the Internet. Today's episode is about
truth versus misinformation and artificial intelligence. So if you happened to listen to the last two episodes, you'll know I've been arguing that the position of truth is not as simple as many pundits have made it out to be. A lot of people have said truth is declining and
misinformation is on the rise. It's achieving ascendancy, so much so that Oxford Dictionaries in twenty sixteen selected post truth as its word of the year, implying from the structure of the word that there used to be a time where people operated on truth, whereas now, regretfully, people's beliefs are predicated
on emotion and personal beliefs. So I've made my arguments in those episodes about why I think that position is so specious, and what I gave was a simple historical analysis that demonstrates beyond any doubt that people have never operated on anything but emotion and personal beliefs, and the idea that that has recently changed betrays nothing but
an appalling ignorance of history. And specifically in the last episode, I argued that we're actually in a much better position now because of the Internet, as it prevents the one thing that has historically proven itself worse than misinformation, which is censorship, having someone else decide for you what you
are allowed to see. And if you want to know more, please go back and listen to those episodes, where I give examples of the USSR enforcing a total clampdown on the press and even on copying machines, and China controlling which websites you are allowed to go to, and Nicolae Ceaușescu of Romania controlling even the weather reports, and Saddam Hussein of Iraq outlawing maps of Baghdad, and on
and on. Just imagine if President Trump or President Biden had one hundred percent control over what you are or are not allowed to see. This was the situation in so many recent historical examples that we examined in the last two episodes, from left wing communism in China and the USSR, to right wing Nazism or fascism in Germany or Italy, to theocratic dictatorships like we find in Iran.
In all cases the government decides what is the proper material for you to see or not, and in all cases, uniformly, this has been disastrous and has led to the starvation or execution of tens of millions of people. So even if you don't like hearing other people's opinions, and you deep down believe that they're all misinformed and you know much better, the fact is that our society will survive for longer if we put up with those people,
rather than imagining we'd be better off if we could just legislate or take up arms to have them agree with us. So that's where we are so far in
this series. But today I'm going to dive into the third part, because we've just arrived in this technological era, which got here with the unexpected speed and success of artificial intelligence, and specifically of generative AI, which is a type of artificial neural network that creates new content like beautiful passages of writing, or shockingly great images, or expert-level music.
Its success blossomed suddenly mostly because of the increasing availability of large data sets combined with more powerful computing hardware, and as a result, generative AI now produces results that are indistinguishable from human generated content like news articles, or photographs or voices. And the question today is what will AI mean for the future of disinformation or truth? Has AI thrown a wrench irreversibly into this game of truth telling?
So let's start with the question of truth and AI. If you ask people on the street, was Trump a good president? Obviously you're going to get a range of responses. So the question of what is truth here is a difficult one. Now, if you ask ChatGPT this question, it will tell you the question
is subjective and depends on individual perspectives and priorities. And then it'll tell you supporters of Trump often highlight X, Y, Z, and then it says critics point to concerns such as A, B, C, and it does this sort of back and forth for most questions that you ask it, even if you had intended to get a yes or no answer. And a lot of people that I've talked with find this aspect of LLMs, these large language models, really annoying.
And that's because every question you ask Bard or ChatGPT gets this sort of wishy-washy answer that tells you, look, there's this opinion on it, there's that opinion on it, more discussion will have to take place. And if you say, yes or no, was Trump a good president? It will tell you, quote, as a neutral and objective AI, I don't have personal opinions. The evaluation of whether Donald Trump was a good president is subjective
and depends on individual perspectives and priorities. And then it ends by noting that it is essential to consider a range of perspectives and weigh various aspects of his presidency to form an informed opinion. And you know what? It is annoying that it does that. And you know what else? It's right. This is exactly what we should do, given that public opinion on Trump's presidency is polarized and assessments
vary widely based on political ideology and personal values. So what Bard or ChatGPT or other LLMs do is actually quite genius, which is that they don't get caught in the trap of giving a yes or no answer
to what we think is a yes or no question. Instead, they are synthesizing the opinions of millions and millions of people who have written things down, and therefore it says, well, some people think this, and some people think that, and although some people find that annoying, it actually represents a beautiful sort of fairness. For example, you can ask ChatGPT, give me different points of view about abortion, and
it does a beautiful job with that. It gives you the perspective from pro choice advocates about reproductive rights and women having the right to make decisions about their own bodies, and also health and safety concerns, making sure that women have access to approved medicine to reduce risk of illegal methods, and issues of autonomy. And then it also gives the point of view of pro life advocates, their belief in the inherent right to life for the unborn fetus and
alternatives instead of abortion, like adoption or parenting. And finally, it points to their ethical beliefs about the sanctity of human life from conception, and on top of that, it spells out some other perspectives, as well as compromise positions like trimester-based regulation and exception clauses when a mother's health is at risk. Now, why does ChatGPT do such a good job at this sort of thing? Because
it's acting like a meta human. It's like a parent watching squabbling children and understanding the perspective where each different child is coming from. Similarly, I asked it to tell me the different points of view about the Israeli Palestinian conflict, and it succinctly and fairly captured the perspective of both sides. Again, this is because it has access to all the writing of the world, so it sees the different perspectives. It's not making a choice, that's not its job, and maybe
it couldn't do it technologically anyway. So instead, what it represents is a fair and balanced view of the world that we're in now. What's fascinating is that we are probably at a moment with AI that is perhaps the fairest and most balanced that we will ever see. Why? It's because I predict that people will start training up their own AIs, and they will do so with what
they consider the best information. So one organization might say, we're not going to use all the books, we're just going to use the classics, and anything that is gay or trans or non-binary, we're not including that, because we don't feel like it's necessary to get this next generation of intelligence systems trained up. We're going to train
this with what we believe in. And on the other side, you'll have people on the left wing who say, I don't want to include Mark Twain in here, or Roald Dahl, or certain books by Dr. Seuss, or various Shakespeare plays or things like that, because they represent a point of view that people used to have, but we don't believe in that anymore, so we want to leave that out. Or you can imagine another group that feels it's better
if there's no violence in the training data. So we're only going to use uplifting books where people help each other, and there's no such thing as violence, and that will galvanize a different group to say, look, you're being naive. The world is full of psychopaths who will commit violence against you, and you need to be prepared for that, so you can recognize when a violent psychopath is seeking power over you or seeking political power in your neighborhood
or on a national stage. So we're going to leave out the uplifting rom com books and we're going to just include the books about the real world and what actually happens in warfare and how one needs to be prepared for it, and on and on and on. So I predict we are headed rapidly for a balkanization of large language models. Just in case you don't know, Balkanization is a word that refers to the process of breaking
something up into small, isolated factions. This originated from the Balkan Peninsula, where a bunch of ethnic and political groups sought independence and created smaller, less cooperative states. So I'm suggesting we're heading towards a disintegration into smaller, less cooperative large language models. The idea is you can go ask a question to your left wing model or your right wing model, your model that lives in Wokistan or Magastan.
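(For readers who like to see the mechanics: this kind of curation is, at bottom, just filtering a training corpus before fine-tuning. Here is a minimal sketch, where the topic tags, faction policies, and documents are all hypothetical, invented only to illustrate how two groups starting from the same library would end up training on disjoint slices of it.)

```python
# A minimal, hypothetical sketch of corpus curation before fine-tuning.
# The policy lists and document fields here are invented for illustration;
# they are not from any real system.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str
    topics: set[str]

# Each faction defines its own exclusion policy.
FACTION_A_BANNED_TOPICS = {"gender_identity", "modern_fiction"}
FACTION_B_BANNED_TOPICS = {"dated_social_attitudes", "violence"}

def curate(corpus: list[Document], banned_topics: set[str]) -> list[Document]:
    """Keep only documents whose topics don't intersect the banned set."""
    return [doc for doc in corpus if not (doc.topics & banned_topics)]

corpus = [
    Document("Huckleberry Finn", "...", {"classic", "dated_social_attitudes"}),
    Document("An Uplifting Story", "...", {"modern_fiction"}),
    Document("A History of Warfare", "...", {"violence", "history"}),
]

# Two factions start from the same corpus and end up fine-tuning on
# different, non-overlapping slices of it: balkanization in miniature.
training_set_a = curate(corpus, FACTION_A_BANNED_TOPICS)
training_set_b = curate(corpus, FACTION_B_BANNED_TOPICS)
print([d.title for d in training_set_a])  # ['Huckleberry Finn', 'A History of Warfare']
print([d.title for d in training_set_b])  # ['An Uplifting Story']
```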
And I think that's a shame, because it's taking something that should be smarter and better than humans, but we're going to manipulate it to be the kind of human that you are, so that you can say, yes, I like its answers, that makes sense, now the model tells the truth. It's like the Biblical creation story where God creates man in his own image, which actually seems to me like a real shame, because what if God could have created something that was better than himself? That's what
we have the chance to do with AI. But I think it's just around the corner that we humans say, I know what truth is, and I'm going to make sure that I fashion this powerful new creature so that it looks just like me. I'm going to take the vast space of possibility and squeeze it down until it
is nothing but a successful clone of me. So my hope is that what will remain or grow up into this space is something like a Wikipedia of large language models, something that takes the space of human opinion and says, look, some people think this, some people think that, and some people think another thing altogether. After all, even though we have countless news sites and opinion sites of every different
political bias, the existence of Wikipedia makes me feel more confident that this is a possibility, that we continue to have AIs, just like we have today, that are unafraid to say: thank you for your yes or no question, but as a metahuman, I'm going to give you a more complex answer than perhaps you were looking for, because you may or may not know that truth is a tricky concept, and we don't have to pretend that most
questions are binary. Okay. So I wanted to express my fear about AI breaking up into individualized models that better match the truth of the programmer, and my hope that by expressing this clearly we can prevent it. Now, I'd like to shift into act two of this episode and address the issues that a lot of people are worried about when they think about the role of AI in
the era of disinformation. Now, my suspicion is there are trivial cases which people are worried about and they probably don't need to be, and then there are the more sophisticated cases. So I was at a party the other night and ended up talking with an older gentleman who told me he was worried about AI and fake news. So I asked him why and he said, well, just think about the speed of AI. It could manufacture reams
of fake news in a second. So I asked him, well, what would you generate? And he said, well, just imagine generating fake stories about Joe Biden. So I asked him, what would you do after you generated that story on your desktop? Would you post it to your feed on X? And he said yes. So I asked him whether he felt sure that would make a difference. Imagine he posts on X that Biden just adopted an alien baby. What would it matter? Why would anybody listen to this guy's tweet?
If I make up a story about seeing Bigfoot on the Stanford campus, it doesn't matter if I have used AI to write it or not. And that's because there's no further corroboration beyond my claim on social media, so there's no reason for anyone to believe it. But maybe, he said, maybe it's different if I were to generate a more carefully crafted story, like: The New York Times has just found evidence that Biden cheated on his taxes. And
I agree. But I pointed out that even though prompt engineering is fun, so you can get just the right story, he could currently, without AI, make up any story he wants, and it might take him five minutes to write instead of sixty seconds of crafting the right prompt. But other than saving those few minutes, it's not as though the AI has done something fundamentally different than what he could
do anyway. And of course, whether penned by him or the AI, he still has to try to convince people that the story is real, even though it's just a tweet and perhaps it links to his blog. But if no traditional news source or aggregator has picked up on it or posted something like it, it's not news. It's going to have a difficult time getting off the starting blocks.
Now this is not to say that fake news can't spread, because it can, but it is to say that it's not clear to me how AI is an important player in this, except that it saves him four minutes. So I think there's some confusion generally about the role that AI will play in fake news, and I want to be very clear today about where the important difference with
AI may or may not be. For example, AI could play a different kind of role if what I'm doing is starting hundreds of AI bot Twitter accounts that all chat back and forth, and at some point they all retweet this guy's made-up story, and it gains traction and believability when somebody sees the number of likes and
reposts and comments on it. But fundamentally, I think this is a cat and mouse game, and the important thing will be for social media companies to stay ahead of the game in terms of verifying who is a real human. Now let's turn to the next issue with AI, which is its ability to flawlessly impersonate someone else's voice by capturing the cadence and prosody. Now, a few episodes ago,
I talked about the potential benefits of this. For example, I have made a voice AI of my father, who passed away a few years ago, and it's so wonderful and comforting for me to hear his voice. But the concern that people have when we're talking about the notion of truth is somebody trying to fool you with a voice. So here, for example, is a famous person you might know: you're a bad, bad boy. Now, that was Snoop Dogg. But as you could probably guess, that wasn't actually Snoop Dogg,
but an AI-generated voice. Now, we all share concerns about this capacity to reproduce someone's voice, and I'm going to get into some of those concerns in a moment, but first I want to play his voice once more. You a bad, bad boy. Now that's pretty convincing, right? Maybe even more so than the AI file. But that wasn't Snoop Dogg either. That was the performer Keegan-Michael Key doing an impersonation of Snoop Dogg. And the thing to note is that impersonators or mimics have been around for
as long as recorded history. They do awesome voice fakes by picking up on the cadence and prosody of someone else's voice. So I just want to note that this issue is not new. Now, part of the concern that people do have about AI voice generation is that it doesn't have to be about a famous person, but instead it might be an impersonation of your grandmother or your child, because, incredibly, these models can get trained up on just
a few seconds of voice data. So, for example, one of the meaningful concerns is that you might receive a phone call and you say, hello? Hello? I can't hear you, is there anybody there? And then you hang up, and that's enough audio data for someone to make a pretty flawless impression of your voice. So the idea is that the sheer speed of capturing a non-celebrity voice makes this worrisome. And it's worrisome for a few reasons. One of them is that several businesses moved in the last
few years to voice fingerprinting as their password. So, for example, you call the company to find out about your bank account or your stock trades, and you simply say, my voice is my password, and through a sophisticated process, it recognizes that it is, indeed you trying to find out information about your account. AI voice generation renders that security
approach meaningless. And even worse, the nightmare scenario is that somebody records audio of your child's voice, and then you get a call one day, maybe you're out of the state or out of the country, and your phone rings and you hear your child say, help me, I've been kidnapped, please send the money now, or the man is going to hurt me. Now, even if you know about this potential future scam, you're still going to hesitate in that situation.
You're going to be thrown for a loop, you're going to panic. And the more likely case is that, for non-listeners of this podcast anyway, you've never even heard of such a scam in your life, and AI voice generation is new to you, and you fall for the scam entirely. So again, when we talk about the influence of AI on the truth, we might be finding ourselves in many scenarios that we could have never seen coming. And of course voice is just the beginning. When people think about
deep fakes, they're often thinking about photographs. So with Midjourney or DALL·E 3 or any other good image-generation AI, you can specify the parameters of your prompt to make it so the generated image looks indistinguishable from a real photograph. Now,
a new scientific paper came out on this recently. The researchers used a set of actual photographs of faces and a set of AI-generated photographs of faces, and they asked the participants to judge, for any given photograph, whether that was a real face or a synthetic face, and the participants performed at chance. Their guesses were like throwing
darts at a dartboard. They were totally random. In fact, they rated the AI faces as more trustworthy than the real ones, and another study found that the AI faces got ranked as more real than the photos of the actual faces. So here's the thing I want to mention. One of the studies, which came out in October, claimed that although participants could not distinguish real and generated faces,
their unconscious brains could. Specifically, this research group ran an experiment in which the participants wore EEG on their heads. That's electroencephalography, which measures the electrical activity in their brains. Now, the researchers showed real faces or synthetic faces, and the claim in the paper is that at one hundred and seventy milliseconds, there was a small
difference in the EEG signal between these two conditions. And so the claim of the paper is that while you can't tell the difference between the real and the generated faces, your brain can. There's a distinction between what we know
consciously and what our brains have access to unconsciously. And so in the wake of that paper, people are suggesting that maybe we will have neurally based safeguards in the future to mitigate these dangers, because your neural networks will be able to tell the difference between real and synthetic photographs. I have to say I'm a little skeptical. It strikes me there might be some alternative explanations to the study here.
One thing is that the synthetic faces all tended to be more average in their measurements and their configuration, whereas real faces can be more distinct, and this presumably explains why in that first study people found the AI faces as more trustworthy and they judged them to be more real.
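(If you want to picture what that kind of EEG comparison amounts to, here is a rough sketch with simulated voltage traces standing in for real recordings. The sampling rate, trial counts, peak amplitudes, and the choice of a simple t-test over a 150 to 190 millisecond window are all assumptions for illustration, not the actual pipeline from the paper.)

```python
# A rough, simulated sketch of the kind of comparison such an EEG paper
# describes: average the evoked response for real-face trials and
# synthetic-face trials, then test for a difference in a window around
# 170 ms. All numbers below are made up for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

SFREQ = 1000                 # assumed sampling rate, samples per second
N_TRIALS = 200               # assumed trials per condition
times = np.arange(-0.1, 0.5, 1 / SFREQ)   # seconds relative to face onset

def simulate_trials(peak_amplitude: float) -> np.ndarray:
    """Simulate single-trial voltage traces with a peak near 170 ms."""
    component = peak_amplitude * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
    noise = rng.normal(0, 2.0, size=(N_TRIALS, times.size))
    return component + noise

real_faces = simulate_trials(peak_amplitude=-5.0)       # microvolts
synthetic_faces = simulate_trials(peak_amplitude=-4.5)  # slightly smaller peak

# Mean amplitude in a 150-190 ms window, one value per trial.
window = (times >= 0.15) & (times <= 0.19)
real_mean = real_faces[:, window].mean(axis=1)
synth_mean = synthetic_faces[:, window].mean(axis=1)

t, p = stats.ttest_ind(real_mean, synth_mean)
print(f"mean difference: {real_mean.mean() - synth_mean.mean():.2f} microvolts, p = {p:.4f}")
```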
But more importantly, even if the study results are true that the participants' brains were able to make some little distinction, it won't be true for long, because these generative AI models get better and better each month, and even if there are clues that can somehow be detected now unconsciously, there likely will not be for very long. And of course audio files and photographs, those are just the beginning. Already, there are video deep fakes that get better each month.
Recently I saw a video of Greta Thunberg, the young environmental activist, and she was saying, hello, my name is Greta, and welcome to my oil company, I love how it is pumped out of the ground, and it shows her working on an oil rig and so on. Obviously, this was a fake video generated from other videos of her, and the AI moves her mouth appropriately to the words,
and it does it perfectly. Now, these kinds of deep fake videos are becoming so easy to make that we will have to regularly deal with them into the future, and just like the photographs, we may soon have an increasingly difficult time distinguishing real from synthetic. Now, when it comes to the question of deep fakes and misinformation, the problems are real. First of all, from a psychology point of view,
you might watch a deep fake video, let's say, of some celebrity saying something violent or racist, and then the truth emerges that the video was a deep fake, so the celebrity is forgiven, but there's just a little patina of negative feeling that sticks with you. And you see this happen all the time. By the way, just look at any case where somebody is erroneously convicted of a crime and then later definitively exonerated. The suspicion remains on them like a cloud for the rest of their life,
totally unfairly. And my nephew Jordan recently suggested to me that this is the problem with deep fakes: maybe they won't fly in a court of law, but they can still have emotional consequences on social media. Even when the truth comes out and a person is verified to be totally innocent of whatever was claimed, there's still something that sticks
with you. Now, because it's been my habit in these last few episodes to point out historical precedent to all this, I want to note that this is the same game that defamation has always been. People can just drop sticky, unverifiable statements like I don't think he's loyal to the party, or I heard a rumor that she's having an affair, or I think that old woman might practice witchcraft. So this idea of planting seeds of suspicion is as old
as the hills. But it is the case that seeing something with your own eyes can have a stronger effect than mere whispers in the rumor mill, and that's why deep fake videos are something to genuinely worry about. Now, there's a second point about why the existence of deep fake videos is something to worry about, and this one is slightly more surprising, which is that the discussion of deep fakes is showing up more and more in courts of law. But it might not be for the reason
that you think. It's not that people are making deep fakes of other people committing a crime and then trying to convict them that way. As far as I can tell, it's exactly the opposite. People are committing crimes and getting captured on video and then they simply claim that the video is a deep fake, and then the prosecution has to spend a lot of money and effort and time to try to convince the jury that the video is
actually real. And this is related to another issue, which is that we want so strongly to believe things that are consistent with our internal model. So whatever evidence comes out in the news, people have almost infinite wiggle room to say they simply don't believe it. When pictures and audio and video surface that are not consistent with someone's political point of view, they can just do what the accused person in the court does and say, I don't
believe it, it is all fake. So all that seems a little depressing, but the fact is that the battle between truth and misinformation is always a cat and mouse game, and there is promising news coming from the technology sector. For example, several big companies like Adobe and Microsoft and Intel and others have all come together to form a coalition called C2PA, which, if you're curious, stands
for the Coalition for Content Provenance and Authenticity. So what C2PA sets out to do is to reduce misinformation by providing context and history for digital media. And the idea is simply to say precisely where a piece of digital media comes from, what is its provenance. The provenance is the history of that piece of digital content: when was it created, by whom, was it modified, what changes
or updates have been made over time. So the protocol they've developed binds the provenance of any image to its metadata, and this gives a tamper-evident record that always goes along with it. So let's say you take a photo of your dog outside, and say your cell phone has C2PA on it, which presumably all phones will in the near future. So now when you snap the shot,
all the data about location, time, et cetera, that's all recorded, and that's bound to the actual image cryptographically, meaning you can't untangle that for the life of this photo. Its origin is known. Now you post it on social media, and anyone can click on the little icon that says Content Credentials and see all the provenance information, and
that's how they decide their trust in the image. This doesn't make the photo tamper-proof, but instead tamper-evident, because let's say someone comes along and manipulates the photo. They add a flying saucer in the background, and they make the claim that this is photographic evidence of UAPs. Now that change to the photo is recorded in a new layer. It's part of the record of the photo and its history, and it's locked to the photo forever.
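(To make the tamper-evident idea concrete, here is a minimal sketch of a hash-chained edit history. This is emphatically not the real C2PA manifest format, which uses signed assertions and certificates; the field names and the bare SHA-256 chain below are simplifications for illustration.)

```python
# A toy tamper-evident provenance chain in the spirit of what's described
# above. NOT the actual C2PA format; field names are invented, and a plain
# hash chain stands in for real digital signatures and certificates.

import hashlib
import json

def fingerprint(record: dict) -> str:
    """Stable hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def add_entry(history: list[dict], action: str, details: dict) -> list[dict]:
    """Append a new provenance layer that is chained to everything before it."""
    previous = history[-1]["entry_hash"] if history else None
    entry = {"action": action, "details": details, "previous_hash": previous}
    entry["entry_hash"] = fingerprint(entry)
    return history + [entry]

def verify(history: list[dict]) -> bool:
    """Re-hash every layer; any edit to an earlier layer breaks the chain."""
    previous = None
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["previous_hash"] != previous or fingerprint(body) != entry["entry_hash"]:
            return False
        previous = entry["entry_hash"]
    return True

history = add_entry([], "captured", {"device": "phone-camera", "gps": "37.43,-122.17"})
history = add_entry(history, "edited", {"tool": "unknown", "change": "object added to sky"})

print(verify(history))                      # True: the record is intact
history[0]["details"]["gps"] = "0.0,0.0"    # someone tries to rewrite the origin
print(verify(history))                      # False: tampering is detectable
```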
So when someone clicks on the content credentials now, they can see that change and any other changes, and they can use that information to decide how much they want to trust that image. Every bit of the photo's journey is recorded, so every photo becomes like an NFT. It's not using the blockchain, but the idea is the same, of making an indelible record. And of course the coalition
has thought through possible scenarios. So if I use an old program to manipulate that photo, and my program doesn't have a C2PA specification in it, the metadata can nonetheless detect that there was tampering, and it will show up in the change log as an unknown change, but a change nonetheless. And even if somebody figures out a way to strip the provenance information from the photo, it
can be rematched using cryptographic techniques. And if you generate an image with generative AI, that information will be baked into the metadata. So the C2PA coalition is pushing the US Senate to legislate that this technology will be built into all media in the near future. So this is pretty cool, because for society this represents a one step back, two steps forward situation. The one step back was that AI photos and videos can be deep faked,
which suddenly renders everything questionable. But the two steps forward is that by taking advantage of tools that we have like digital certificates and controlled capture technology and cryptography, we now might be able to build something better than anything
anyone's ever had in history. I mean, imagine that you are living in the Soviet Union eighty years ago, and I show you this picture of Stalin standing in Red Square and there's no Leon Trotsky by his side, and you can't quite remember if the original photo had Trotsky there or not. And one of your neighbors tells you he thinks Trotsky was there, and your other neighbor says she's certain this is the way the photo has always been.
There is no disinterested metahuman arbiter of the truth. You have no way of knowing whether the photo you're seeing was airbrushed or not. But now it can be clear to anyone, by clicking on the Content Credentials icon, what precisely happened here. So, at least for the moment, legislation like this seems to make truthiness better than it ever was. Now, it won't be perfect, because people who want to fake
something will always find a way. So what I think this means is that we're not entering an era of post truth, nor are we entering an era of truth. This is just the next move in the chess game between people who document and people who fake. Now there's another issue about AI and truth that I want to focus on, and I think this one is the most serious.
A few episodes ago, I talked about AI relationships and the possibility, which is already flickering to life, that a lot of people will find appeal in having an AI friend or an AI girlfriend. And by the way, this might even turn out to help people learn better habits of relationships, like patience and equanimity. They're learning from the avatar that they're in a relationship with, which can pay
off in their real human relationships. So it's going to be a fascinating future for us to keep an eye on. But there's also a dark side here, which is that these AI friends could be built to convince us of a particular point of view. Now that might sound like a dystopian sci fi story, but the fact is, we
Homo sapiens, are fundamentally social creatures. We have managed to build our societies and cities and civilizations precisely because we are so social, and our highly social brains lead us to form our truths through discussions with other people, with our pals and our parents and our girlfriends or boyfriends. This is how we do our information foraging and our
sense making. And the question is whether AI could now, under the worst circumstances, provide a way to hack that, to tap into this ancient neural lock and key mechanism that we have, and to undermine our species that way.
So the emergence of AI relationships capable of persuading humans towards specific points of view presents a really complex ethical dilemma. Our social brains are adept at information gathering through interpersonal relationships, familial, romantic, collegial, and these serve as the foundation for the construction of our worldviews.
And I want to point out that this kind of manipulation could occur really subtly, because AI algorithms could, in theory, analyze and mimic your behavior to effectively sway your opinion
without you really having any insight into that. So this thought experiment, where AI could hack into our neural mechanisms of social influence, raises not only technological and ethical questions, but also questions about the vulnerabilities of our social fabric. Are we as a species equipped to defend against these kinds of manipulations, or could the discovery of this way of exploiting our own vulnerabilities be the beginning of the
end for us? So, as we integrate AI into various aspects of our lives, we can't go into this blindly. We have to be simulating possibilities and constructing the ethical frameworks and the possible regulations that would make sure that this new technology doesn't exploit our most vulnerable security flaw, which is our very social neural architecture, which forms its
truths by talking to others. Now, I want to transition into the third act of this episode, and for this I'm going to take a totally different angle on the question of whether AI will take us off the road from the truth. What if the closest we will ever get to the truth is via AI? After all, the problem is that as humans, we are biased by the very thin trajectories that we take through space and time, and we are shaped by our neighborhoods and our cultures
and our religions. But AI rides above all of that. It is the metahuman that can weigh a whole planet full of opinions and options. It becomes the oracle, which, as a reminder, in ancient Greek mythology, was a person, usually a woman, who could communicate with the gods and provide divine guidance. The most famous oracles were at Delphi and Dodona and Olympia. These were consulted by anyone who was seeking advice on important decisions like should I marry this
person, or should my nation go to war? And in the Greek mythology, the oracle would go into a trance-like state and communicate with the gods. Now, in these stories, oracles played an important role in ancient Greek society because they were a source of wisdom and guidance, and their advice got sought out by everyone from paupers to kings.
And I'm going to make an argument that we now have the technology to build a real oracle and access it for pennies, but it's not going to take root in modern society, not because of any fault in the technology, but because of two facets of human nature. So for the first facet, I'll mention that I was talking to a colleague recently who was making the argument to me that if Bernie Sanders says something about the energy industry,
then he's immediately shot down as a socialist. So my colleague's idea was to get AI to say the same thing about energy, and then everyone would take it to be true, or at least take it in a different way. And I thought his idea was really interesting, but I'm not so sure that this will work for
very long. And this is for the following reason. As I mentioned earlier, I predict we're heading toward a balkanization of AI, where different groups will train the AI on the data that they want and purposely throw out the data that they are quite certain is bad: anything from the other side of the political spectrum. And so the idea that we can get AI to tell us the oracular truth in a way that everyone will listen to is flawed, because I think that soon there will be no
AI oracle that weighs all the evidence equally. And the fact is, when it comes to something like what is the right way for us to produce energy, there is no single right answer. First, this is because there are always what economists call externalities. So if you say, let's go entirely with solar panels, there will be other things that you haven't thought of, like that the solar panels require a particular element like molybdenum, and that is rare and
it has to be mined. And if the whole world's energy consumption goes to these new solar panels, that would cause major wars around molybdenum, the way that people fight
now about diamonds or water. And there would be another problem with choosing one technology, let's say solar panels, which is: what happens when there are cloudy days, or, in a rarer circumstance, when a volcano goes off, as happens every few millennia, and actually blocks out the Sun for a while? Then we would regret having all of our
technology made solar. So really what we want is a mixture of technologies: solar and wind and wave and geothermal and nuclear and hydrogen, and probably fossil fuels also, because you never know the future, and the best example we have of survival is the way that Mother Nature allows life to survive, by making a huge menagerie of different life forms and seeing which ones survive. There's never a
single answer about the perfect species. At different times in history, different ones survive, and in the future there will always be unexpected events which cause some life forms to survive and not others. So it's a pretty good guess that that's what we would want for our energy portfolio as well, to have a very diversified portfolio. So let's imagine that we have an AI oracle and it tells us
that a big diversified portfolio is the optimal answer. The thing I want to concentrate on is that this answer won't please any particular political player who has incentives, and it certainly won't please anyone who is in business who has skin in the game to produce solar panels or fossil fuel or nuclear or whatever, and it won't please the environmental activist who believes with certainty that she knows
the right answer. So it may well turn out, lots of the time, that the optimal solution is not one that any particular human or party of humans is going to want to hear, even though it is actually the best thing for society in general. And this is where we think we want the oracle, but we don't when it gets in the way of what we want to believe. And hence we return to my prediction
of balkanization. Different parties will be incentivized to modify the oracle by restricting its diet of books and articles, feeding it some and not others. So I mentioned earlier the political reasons for Balkanization, and here I'm also pointing to the economic reasons. Fundamentally, people and businesses are self interested, and if the oracle doesn't think your answer is that important to the overall picture, you might want to modify that.
And there's a second reason why AI oracles might not take root, even if they should, and it's not because of the Balkanization, but because people will eventually just fake it. What do I mean? Well, right now, AI has an explainability problem, which means that the networks, with their almost two trillion parameters, are so complex that there's no way to say, oh, we know precisely why the
system gave that answer and not a different answer. This is part of what has led to the magical quality of AI, but I suggest that it's also going to lead to another problem. People will simply fake it. In other words, I tell you, hey, I ran this two trillion parameter model, and I trained it on all the data of all the energy sources and supply chains and capacities, and it told me its position as an oracle, that
we should follow the plan that I suggest. Maybe this is the plan that benefits my family's business, or the
plan that leans in my political direction. It would be very difficult for you to prove that the AI did not tell me that, especially if I show you the output screen where it says that. You're presumably not going to try to reproduce feeding the entire corpus of energy economics data into the AI, which I assure you took me seventeen months and a team of geniuses to accomplish, and so the final analysis of the system has to be taken at my word. And society is no dummy.
So my prediction is that this will have a chance of working the first time, and then that strategy will collapse as soon as more people take a flyer and say, hey, I can't believe it, but this oracle told me that my plan is the optimal one. There's going to develop a need for some way to prove that an unbiased AI system actually yielded that conclusion. But I don't think we're smart enough yet to see what that can look like and
how to build it. And so between balkanization, just feeding the AI what I want to, and manipulation, claiming that my oracle came to just the right opinion, I'm afraid that a possible outcome will be people not trusting the output of any AI oracle. They'll trust it just as little as they would any particular politician saying we should legislate for this solution, or any business person saying we should just put our chips on my solution. The oracle
will degrade to becoming a sock puppet. Now, this is obviously the worst case scenario for where things can go, because the idea of an AI oracle is extraordinarily appealing. I'm just concerned this won't happen, and I'm coining this the tall AI syndrome. Now, you've probably heard of the tall poppy syndrome, which is that in a field, if one poppy grows tall, it's going to get mowed down because it stands out from the rest of the poppies.
So this expression, the tall poppy syndrome, arose to describe how, if some person is just better than everyone around her at something, let's say swimming or math or whatever, then people will tend to criticize her and try to mow her down. So, in thinking about the future of AI and truth, it strikes me as a possibility that we are going to run into a modern version
of the tall poppy syndrome, the tall AI syndrome. In other words, even if AI could be an oracle, the kind of truth teller that we've dreamt of since ancient Greece, there are many reasons we will not listen. So in this episode, I covered the future of truth from the point of view of our latest technological invention AI, and what I think we can see is that there's good and bad to look forward to here, making this just
like every other technology we've ever developed. And as we wrap up this three part series about truth, from news stories to the Internet to AI, I hope you'll join me in trying to get a clear picture of where we are and where we have been and where we're going so that we can all try to do what we can socially and technologically and legislatively to build a more truthy world. Go to Eagleman dot com slash podcast
for more information and to find further reading. Send me an email at podcast at eagleman dot com with questions or discussion, and I'll be making episodes in which I address those. Until next time, I'm David Eagleman, and this is Inner Cosmos.