Can you get convicted in a court of law based on your brain activity? Are brain scan lie detectors going to be accepted? And when does measuring somebody's brain count as illegal search and seizure? When does it violate the privacy of the mind? What constitutional issues are triggered by all this? And what does this have to do with whether your mouth gets dry when you lie, or the orbits under your eyes get hot, or your voice constricts, and what the networks in your brain are up to
when you deceive. Welcome to Inner Cosmos with me David Eagleman. I'm a neuroscientist and author at Stanford and in these episodes we sail deeply into our three pound universe to understand why and how our lives look the way they do. Today's episode is about the fascinating topic of whether we can use technology to figure out whether a person is lying. What is lie detection actually measuring? How good is it?
When is it accepted in courts of law? And as we come to have better and better brain reading technology, what is the future of lie detection? So in a court of law, someone says, I didn't do it. I wasn't even there that night, I was in my apartment watching TV. If there's no other information that can be gathered, like his cell phone is off, so there's no GPS signal, and there are no eyewitnesses, there's no alibi. If there's no other information besides his testimony, how do you know
if that's really what happened or not? Is he telling the truth or is he lying? Now, on every crime television show, there are lots of clever clues that are surfaced by the industrious detective. But the issue in real courts of law most of the time is that other data just isn't available, so you only have the testimony
of the people who were involved. Most crime doesn't happen in fancy office buildings with full video coverage, but instead, more commonly happens in areas without any coverage, and so you need to ask people probing questions about whether they were involved and what happened, and given human nature, you really want to know whether they are telling the truth or not. So let's start today's episode with two legal cases. One involves a woman named Aditi Sharma. She moved to
a town in India with her boyfriend Udit. This was an arranged marriage; they were going to be together forever. But while she was there, she met a new man named Praveen, and they fell in love with each other and decided that they wanted to be together rather than
her and Udit being together. So she ran away with Praveen, but Udit kept following them, so she and Praveen decided that they would murder Udit, and they did so by inviting him to meet them at a McDonald's, and they gave him a popular drink in India that was laced with arsenic, and he drank it and died. So she was brought up on charges of murder, and in court she denied the crime. But part of what led to her murder conviction was a lie detection test using what's called an electroencephalogram, or EEG. So we'll come back to this in a few moments. Now, case number two is in the United States, a case in Tennessee. A man named Semrau was the CEO of two nursing home facilities, and these facilities were accused of having the employees fraudulently fill out Medicare and Medicaid forms, and Semrau said, I had no knowledge of this. I did not know this was happening in my facilities. And so in going to court, he enlisted the help of a lie detection company that used
new brain scan lie detection. This is a new technology that we'll talk about today. So this company did the brain scan, and the results suggested that in fact Semrau was not lying, that he had no knowledge of the fraud. And so he went to court, and the question is, did the court accept the brain scan? The answer is they did not. They wouldn't accept it in court. They said, you can't present that here. So should they have accepted it? How do we decide when to accept a new technology
or not? Under what circumstances do you say this technology is good enough that it's telling us what we need to know? So these are all the issues that we're going to talk about today, and we'll get into the background of what a lie is and how you could measure something about it and where this is all going. So we'll start with the basic question of what is a lie. Well, first, lying might seem like a human invention. So it's interesting to know that we see deception throughout
the kingdom of life. For example, in the world of plants, there's a type of orchid that develops to look just like a female bee, and this way male bees come and land on it and try to mate with it, unwittingly carrying off the orchid's pollen. It's a strategy of deception to lure in the bee, and we see this throughout the animal kingdom. For example, fireflies blink to say, hey, I'm here, please meet with me.
And there's a predatory firefly named Photuris, and this beetle has evolved bioluminescence so that it flashes the light signals of other firefly species, and that attracts the males, and then they kill and eat the males. So this is known as aggressive mimicry. They're pretending they're the other fireflies. So one way we can think about this is that they are lying. They're not displaying truthful signals about who they are, but instead they are deceiving to accomplish their ends.
So deception is not simply a human thing. But let's turn to humans, because that's what we care about. So the first question is, how do we define a lie? Well, it turns out that's not so easy. Let's consider an example like the astronomer Ptolemy. He developed a mathematical model in which the sun goes around the earth, and that turns out to be totally wrong. But he wasn't lying, he wasn't intending to deceive. And occasionally what you'll find is a situation where someone tells an untruth,
but they had no intention to deceive. I'll give you an example. Some years ago, there was a Harvard kid who went out and drank too much, and as he was stumbling home, he was jumped by two locals who beat him up badly and smashed his head into the sidewalk. So in an effort to save himself, he pulled a pen knife out of his pocket and he stabbed at the assailants to get them away. So they ran away
seemingly unharmed. But it turns out his knife had gone through the chest of one assailant and tore the lining around the heart, causing what's called a cardiac tamponade, in which blood fills the lining around the heart and squishes it, and within hours this assailant died. Now, when the police came to question this Harvard kid, he told them a
totally untrue story about what had happened. Why? Well, a neurologist and researcher named Jeremy Schmahmann was called in to diagnose and testify, and he realized it was because this young man had been badly concussed. His head had been pounded into the sidewalk, and his brain had constructed a false memory about the whole event. So the opposing lawyer asked, do concussions make you lie? But Schmahmann explained that a patient with a concussion is not lying, he's confabulating. There's
a critical difference there because the patient believes it. In other words, just because a statement is not factual doesn't make it a lie. And in a future episode, I'm going to dive really deeply into this issue about confabulation because we see this all the time in many different
types of brain damage. This even happened to a Supreme Court Justice, William Douglas, who ended up with a disorder called anosognosia, in which he lost the ability to control part of his body, and yet he spoke falsely about it. The key is he wasn't lying, he was confabulating. He believed that he could still move his body. So there are many situations where a person can believe something
falsely and that is not equivalent to a lie. And of course, this comes up all the time when people are asked about their memory of an event, because memory doesn't store sequences of zeros and ones like a computer does. And I talked about this at length in the context of eyewitness testimony in episode twenty seven. We are not like a video recorder, and my memory of something that went down might genuinely be very different from your memory.
But again, if we both probe our memories in the court of law, and we have different versions, it doesn't require that one of us is purposefully lying. But let's turn to actual deceptive lies, where you're intending to fool somebody. What do we know about that? Well, deceptive lying starts in children at about the age of five. It's a cognitive development. When you're very young, you believe
that your parents know everything. But around five years into the game, you realize that your parents are not omniscient, they don't know everything that happened, and it strikes you that you can inject false information. And so we train our youth on the value of truth telling because this is very pro social behavior. But the fact is, when the stakes are high, humans will often continue to practice deception, perhaps in small ways throughout their lives. Now, interestingly, we
see that other primates do the same thing. They do what's called tactical deception. And the general story we find across primate species is that the bigger the brain they have, the more deception they do, because bigger brains allow us to model what other brains are thinking, which allows us to be better at manipulating them. But what's interesting is that understanding what other brains are thinking is also the development that allows species to build larger social groups with
richer collaborations. So we find this strange truth that the same evolution that allows us to live in groups harmoniously also allows us to deceive. Okay, so we have these big brains and we're able to lie. But you might think, how could you possibly build a technology that can tell whether something is a truth or an untruth? It all comes down to a simple principle. Lying is hard.
Why because you have to think of the true testimony and inhibit that so you don't blurt it out, and then you have to generate a plausible alternative, and all the while you have to do a good job of hiding your anxiety and guilt. You have to manage your emotions. You have to inhibit any behaviors that would give you away. You might try to generate honest looking behaviors. So that's what we're going to talk about today. How do you build a technology to detect all of that extra work
that goes into a lie. Now, what you might find unexpected is that the idea of lie detection, of using a technology in the courtroom, has a long history. So take a guess when the first lie detector was used in courts. Two thousand years ago, in China, and the technology was uncooked rice. The defendant would put uncooked rice in his mouth and he was then asked questions. Now, presumably his answers were marginally more difficult to understand, but
that wasn't the important part. The critical test was whether he could spit the rice back out of his mouth. If he couldn't, it was assumed that his dry mouth served as a physiologic marker of his nervousness about lying. The idea was that if he were telling the truth, he'd have plenty of saliva in his mouth and he'd be able to spit the rice out. So this was the first lie detector test. But eventually, starting about a
century ago, humans leveled this up. They invented what's commonly known as the polygraph test, which is usually what people refer to when they talk about a lie detector. Now, what you do is you attach some sensors to a person's hand, and you're measuring their physiologic responses while they're answering questions. You're measuring their heart rate, their blood pressure,
their breathing, their skin conductance. And the underlying principle is that when a person lies, their body reveals changes because of the stress and anxiety associated with deception, and so the examiner analyzes the recordings to look for patterns or deviations that indicate lying. So to make this clear, let's zoom in on the issue of skin conductance. So you've got all these little sweat glands in your skin, and when you have a sudden stress response, those sweat glands
open more, which would cause you to sweat more. But it's not sweat that's being measured here. What's being measured is that there's a slight electrical current between point A and point B across your skin, and the conductance between A and B changes when those sweat glands open. So it's the electrical conductance across your skin, or inversely the resistance, that changes. And so the measure here is just looking at how well electricity can pass from here to there
across your skin. And when you are stressed, the electricity passes better. So you get a blip on the graph when you have a sudden moment of stress. Now here's the critical point. It's not measuring a lie, it's just measuring the stress response that's associated with lying. You're looking for that stress response as a correlate of deception. So you ask me, were you at this location on the night of March first, and I say no, I wasn't.
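As a side note, the arithmetic behind this measure is simple. Here's a minimal sketch, with entirely invented numbers and thresholds, of converting skin-resistance readings into conductance (conductance is the inverse of resistance, conventionally reported in microsiemens) and flagging sudden blips above a quiet baseline:

```python
# Hypothetical sketch of the skin conductance measure. All readings
# and thresholds below are invented for illustration only.
from statistics import mean, stdev

def to_microsiemens(resistance_ohms):
    """Conductance is the inverse of resistance: G = 1/R.
    Skin conductance is conventionally reported in microsiemens."""
    return [1e6 / r for r in resistance_ohms]

def find_blips(conductance, baseline_n=5, k=3.0):
    """Flag samples that jump well above the initial quiet baseline."""
    base = conductance[:baseline_n]
    mu, sigma = mean(base), stdev(base)
    return [i for i, g in enumerate(conductance)
            if i >= baseline_n and g > mu + k * sigma]

# Quiet baseline around 2 microsiemens (500 kilo-ohms), then a stress
# response: sweat glands open, resistance drops, conductance jumps.
readings_ohms = [500_000, 498_000, 502_000, 499_000, 501_000,
                 500_000, 350_000, 320_000, 498_000]
g = to_microsiemens(readings_ohms)
print(find_blips(g))  # → [6, 7]
```

The point is just that the instrument tracks a continuous electrical quantity, and a "blip" is any excursion well above the person's own baseline.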
Now if I actually was there, the lie is stressful, even at a subconscious level, and that can be read out, because I have an automatic under-the-hood stress response that registers. The sweat glands on my skin open slightly, which changes the electrical conductance across my skin, maybe my breathing changes a bit, my heart rate changes slightly, and
as a result, this deception is brought to light. Now, this is the common way to use the polygraph test, but there's also a second way that it's used, and that's known as the guilty knowledge technique. It turns out that if I show you something that's familiar to you versus unfamiliar, your brain will have a different unconscious response. So imagine I show you something like a strange little three D
statue of a Pokemon. So if it's the first time you've ever seen that statue, your brain has a different response to it than if it's something you've seen before. So the police or the private investigator can leverage the guilty knowledge technique because let's say they know that this unique little statue of the Pokemon was at the scene of the crime. It was on the desk of the
man who got murdered. So they sit you down in the chair and they put the polygraph electrodes on your skin, and then they show you a can of coke, and then they show you a computer printer, and then they show you a large spoon, and then they show you the Pokemon statue. Whoa, big response. Your brain has a different response because you were there, and only somebody who had seen that statue before, and therefore would have been at the scene of the crime, would have that physiologic response.
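The logic of the guilty knowledge technique can be sketched in a few lines. This is a hypothetical illustration, not a real scoring method: the item names and response values are invented, and real examiners use far more careful statistics.

```python
# Hypothetical sketch of guilty knowledge scoring: among several probe
# items, flag any whose physiological response is an outlier compared
# with the responses to the remaining (neutral) items.
from statistics import mean, stdev

def flag_familiar(responses, k=3.0):
    """Return items whose response stands far outside the rest."""
    flagged = []
    for item, r in responses.items():
        others = [v for key, v in responses.items() if key != item]
        mu, sigma = mean(others), stdev(others)
        if sigma > 0 and (r - mu) / sigma > k:
            flagged.append(item)
    return flagged

responses = {  # e.g. peak skin conductance change per item shown
    "can of coke": 0.11,
    "computer printer": 0.09,
    "large spoon": 0.12,
    "pokemon statue": 0.85,  # the familiarity response
}
print(flag_familiar(responses))  # → ['pokemon statue']
```

The key design point is that the neutral items act as each person's own control, so you don't need to know their baseline physiology in advance.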
You're looking for a familiarity response for something that only a person who was there would know. That's the guilty knowledge technique. It's also known as the concealed information test. And keep in mind that your brain picks up all kinds of things that you're not even aware of. So it turns out that these familiarity responses can be big, whether or not you know that you have seen that thing before. So the polygraph test is used in both
these ways, judging stress or judging familiarity. So let's go back to nineteen thirty two, when a couple got married in the Northwestern Crime Scene Laboratory while hooked up to a polygraph. The newspapers at the time reported that her heart almost stopped beating when she said I do, and he twitched a little when he said I do. And the papers reported that they thought this all looked fine and good, but of course people didn't
typically get married while hooked up to this test. This was a publicity stunt because the groom was one of the founders of the polygraph test, and he had launched a polygraph testing company and he was jazzed to get
this PR. But let's dive into its usage. So it was first used in a criminal case in nineteen twenty three in Berkeley, California, after a retired police chief was murdered, and then it started gaining traction in law enforcement all through the twenties and thirties, and then by the time of the Cold War, there was a lot of concern about possible espionage, so a lot of agencies started to use the polygraph test for security screenings, and by the
nineteen sixties, essentially all federal law enforcement agencies were leveraging this test. But then by the nineteen seventies some academics started raising questions about the accuracy of the polygraph test, because it's quite good, but it's not perfect, and so battles over its admissibility in courts have raged since at least that time. So I'll give you a sense of
how these battles can go. In nineteen ninety two, there was a US Air Force man named Edward Scheffer who applied to work at the Office of Special Investigations, and they said, cool, but you have to take a urinalysis and a polygraph test to make sure you're not on drugs. So he said, great, and he did that. What happened is that the urinalysis test came back positive for methamphetamine use.
But his polygraph test, in which he said I have not done any drugs since I joined the Air Force, that came back and showed no evidence of deception, in other words, showing he was not lying. Now, based just on the urinalysis, he was tried by court-martial for using drugs, and at the court-martial he said, yeah, I know, my urinalysis came back positive, but I would like to submit my polygraph test as part of my defense
because that indicates I am not lying. And the court-martial judge said sorry, but as a rule, the Air Force never accepts polygraph tests in the courtroom. And Scheffer said, this violates my Sixth Amendment rights, which say that I should be able to submit evidence in my defense, and you're not letting me do this. But the military court wouldn't accept the polygraph, and so Scheffer was convicted and
dishonorably discharged. And so he went to the Air Force Court of Appeals, and they said, yeah, no, we don't have to take the polygraph test, even if it would exculpate you. So then he went to the Armed Forces Court of Appeals, and they said, you know, you're right, this does seem to violate your Sixth Amendment right to
present a defense. So this went all the way up to the Supreme Court, and they had to struggle with this because the polygraph test is controversial in many ways, and so what they decided is that they were going to side with the military on this. The military courts, they said, do not have to take polygraph evidence if they don't want to. They said, we need to preserve the jury's core function of making credibility determinations in criminal trials.
In other words, they were saying, it should be about the person in the jury box making a decision, not about some machine or technology that says this is the truth. Now, did their decision have something to do with the accuracy of the polygraph? Maybe. It turns out the polygraph
test is actually quite good. In tests by the American Polygraph Association, they find it's about eighty seven percent accurate in determining whether someone is telling the truth or telling a lie, although some scientists claim it's lower, maybe as low as seventy five percent. The thing to note is that's not bad, but some critics argue that you just can't accept an error rate that high if somebody's guilt or innocence depends on it. And some courts argue that it's not just a matter of whether the accuracy is good enough. Part of the concern expressed by courts is whether jurors understand probabilities well enough. In other words, if a defendant fails the lie detector test, will a juror think, Okay, well,
it's not one hundred percent accuracy. There's still a thirteen to twenty five percent chance that the machine got it wrong, and so here's how I'm going to work those probabilities into my decision. Many courts assumed, probably correctly, that the technology will sway people more than it should. They will believe whatever the machine says. And beyond the issue of jury interpretation, there's another problem as well, which is that
some people are pathological liars. Some people, for example those with psychopathy, have real deficits in empathy, and they're not good at anticipating future punishment, and they lack what we might poetically call a conscience, and so for them, lying
just doesn't trigger a stress response. Many of the studies on lie detection are done with undergraduates or otherwise normal people, but the people we really care about most are psychopaths, and the technology doesn't always work so well with them. So as it stands now, the polygraph test is generally not used at the federal level in the United States unless the prosecution and defense both agree to allow it,
and at the state level it varies widely. Essentially one half of the states allow polygraph results while the other half generally prohibit them. But the polygraph test, which has been around for over a century now, is just one method that people use, and as science cranks forward, there have been many newer proposals. Actually, a lot of these ideas sprang up after nine eleven, when people wondered how can you do lie detection on big groups of
people quickly. So one of these techniques is thermal imaging. There's some evidence that when you're lying, you get hotter under the orbits of your eyes. So the idea is to use cameras at airports to do infrared imaging as people are passing through and answering questions. And people have also been looking at the subtle detection of stress in the voice, the way that your voice gets constricted when you have stress over lying about something. But things get a little more interesting and spicy when we start looking directly at signals of the brain, and one of these technologies involves sticking electrodes on the outside of the scalp, and this is known as electroencephalography, or EEG. And the idea here is to use things like the guilty knowledge test to see if there's a big brain response when
the person sees something they are familiar with. So let's go back to that story that I began with about Aditi Sharma and her boyfriend Praveen, who together killed her fiancé. In Aditi's court trial in two thousand and eight, she was tested with a thirty-two electrode system on her
head called the Brain Electrical Oscillations Signature system. And the way it worked is this: a forensic scientist had Aditi listen to a voice talk about the alleged murder, and when she heard specific incriminating bits of the story about how the murder was committed, for example, that he was killed with a special kind of drink at a McDonald's that was poisoned with arsenic, when she heard those bits of the story, woosh, her brain had larger activity, which was interpreted as the activation of memories that she had of concealed knowledge. So this was presented as evidence of her guilt in this case. Now, I think all legal scholars agree that this was not the only evidence on which
her guilty verdict was based. There were other pieces of evidence that pointed to her guilt. But nonetheless, this kindled an ongoing controversy about when and whether it makes sense to accept lie detector technology in a court of law, because this was one of the first cases, possibly the very first, where someone was convicted in part on brain
based lie detection. But this really was just the beginning, because starting over two decades ago, people got interested in the idea of whether you could detect a lie in a brain scanner, and in two thousand and six at least two companies launched with this goal. There was No Lie MRI in San Diego and Cephos Corporation in Massachusetts, and their mission was using brain scanning, fMRI or functional
magnetic resonance imaging, for the purpose of lie detection. Now, how did these companies come about? Well, they were based on the discovery that when you are lying, you can image in the scanner more activity in frontal brain areas just behind the forehead, more activity than when you are telling the truth. In other words, while it's easy to blurt out the truth, lying requires more effort. You are inhibiting telling the truth. You don't want to blurt out
the truth, and you have to generate falsified data. Remember I mentioned earlier that it's easier to tell the truth, and we see that reflected in brain activity. So back in two thousand and one, a researcher named Sean Spence and his colleagues interviewed participants on what activities they had
done the previous day. Then, in the fMRI scanner, these people were presented with various activities and were asked whether they had done them, and they had two buttons, a yes and a no. Now, they were also given either a red light or a green light to tell them whether they should lie or not on this round. So the first thing the researchers noted is that it always takes longer to lie, about a fifth of a second longer.
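That reaction-time gap is the kind of thing you can compute with almost no machinery. Here's a hypothetical sketch, with invented numbers rather than Spence's actual data:

```python
# Hypothetical sketch of the reaction-time finding: lie trials take
# roughly a fifth of a second longer than truth trials on average.
# The numbers below are invented for illustration, not real data.
from statistics import mean

truth_rts = [0.61, 0.58, 0.64, 0.60, 0.57]  # seconds, truth trials
lie_rts   = [0.82, 0.79, 0.85, 0.78, 0.81]  # seconds, lie trials

gap = mean(lie_rts) - mean(truth_rts)
print(f"lying took {gap:.2f} s longer on average")  # → 0.21 s
```

A real study would of course use many trials per subject and a proper statistical test, but the underlying comparison is just this difference of means.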
So say you went to the beach yesterday, and you're asked if you went to the beach, but you see that there's a red light, and so you answer no. This longer reaction time is consistent with the idea that you are taking an answer you already know and you're squelching that. So they demonstrated that this squelching of the truth response
activates an area called the ventrolateral prefrontal cortex. This is a region that becomes active when you need to suppress a behavior that you would normally just go with, in this case telling the truth. And other brain areas are involved too. So there was a study by Langleben and colleagues the next year where participants went into the scanner and they saw a playing card on the screen and they
were asked to remember it, like Ace of Spades. Then they were shown a series of cards and they were asked, was that your card or not? They were instructed to lie sometimes, and here's the key. When they lied, the researchers saw a lot of activity in an area called the anterior cingulate cortex, which is a region that becomes active when there's conflict between other brain areas. So in both of those studies, people were just saying yes or no. But in the next study by Spence and colleagues, they wanted to see what happens when people get more imaginative, when they go beyond yes or no to make up a new story. So imagine I ask you where were you on the afternoon of March first, and you know the answer to that. But imagine that you lie to me and you say, oh, I left my cell phone in the car and I was just hiking by myself
up the mountain. Now, in order to do that lie, you have to think of the true response and suppress that, and then you have to cook something up. So when you are thinking of the truth but you actively suppress it, that activates the ventrolateral prefrontal cortex, as we just saw. But then you need to generate the lie, and this activates the dorsolateral prefrontal cortex, which is an area that cranks up when a person generates something new, like a new response that they hadn't done before. And we see other areas too, like the anterior cingulate cortex, which cares about conflict, and the ventromedial prefrontal cortex, which cranks up when you're trying to regulate your emotions. So there's a whole constellation of brain areas that indicate deception. Now, interestingly, in all these studies about lying, you never see areas where there's more activity when you tell the truth. There's
only more activity when you're lying. So the idea that it's easier to tell the truth holds as a rule in life and as a rule in the brain. Now, that's how you can detect something about deception using an fMRI, and you can also use the guilty knowledge technique here. So one research paper said, we're going to try a guilty knowledge technique. We're going to read you certain dates on the calendar, and one of those dates is going to be your birthday, and of course all the participants
have different birthdays, so it's a nicely controlled study. So the murder weapon they're looking for is your birthday. So they say March twelfth, November twenty-third, August fifth, and at some point they say your birthday, and several areas in your brain light up. You're hearing something that is familiar to you. And generally this is the same thing that happens when you see that Pokemon statue or when
you see the murder weapon that you used. Now, it turns out that in this study, the investigators reported that this worked one hundred percent of the time. In other words, every single time, they could tell when someone was hearing their birthday versus hearing another date. But it turns out
that you can easily fake the test. You can fool this entirely by saying in advance, Okay, I'm going to associate another date with a small movement of my finger. So you decide that before you go into the test, and then when you hear August fifth, you were waiting to hear that so you could move your finger. So when you hear it, your brain has a big response
because you were waiting for that. What that means is that you're having the same murder weapon response to some arbitrary date or any number of arbitrary dates, because you have pre planned that you were going to do that, you were waiting to hear that target, and as a result, everything else gets swamped out because you're having the same response to some fake date as you are to the real date, and you can't tell a difference between those
brain states. So it turns out that this is not ready for the courts, because you can completely fool this with a very simple countermeasure. And even the other fMRI technique, the one that just involves looking for the correlates of deception, only works if you stay absolutely still in the scanner.
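To make the countermeasure concrete, here's a hypothetical simulation of the birthday experiment. All numbers are invented; the point is only the logic: once a planned finger movement inflates the response to a decoy date, a simple threshold can no longer single out the true date.

```python
# Hypothetical simulation of the birthday countermeasure. Response
# values are invented for illustration; a real analysis would work on
# full fMRI activation maps, not single numbers.
def detected_dates(responses, threshold=0.5):
    """Return the dates whose response exceeds the detection threshold."""
    return [d for d, r in responses.items() if r > threshold]

# Honest session: only the birthday evokes a large response.
honest = {"March 12": 0.10, "Nov 23": 0.12, "Aug 5": 0.09, "birthday": 0.90}

# Countermeasure: the subject planned a finger movement on "Aug 5",
# so that decoy date now evokes a comparably large response.
with_countermeasure = dict(honest)
with_countermeasure["Aug 5"] = 0.88

print(detected_dates(honest))               # → ['birthday']
print(detected_dates(with_countermeasure))  # → ['Aug 5', 'birthday']
```

With two dates lighting up equally, the examiner can't tell which one is the real "murder weapon", which is exactly the swamping-out problem just described.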
And even if someone were to fix your head into place so that you couldn't move at all, you could, of course do all kinds of things to think random thoughts and move your fingers and almost certainly swamp out any meaningful signals. So let's recap where we are. As animals developed bigger brains, they were able to make better models of one another, and that allowed them to cooperate,
but it also allowed them to deceive. And so throughout human history people have been very interested in the question of whether you can detect if a person is lying. And this is tough because someone can be telling the truth but they're factually incorrect. But if they believe their story is true, then you would say, look, she passed the polygraph test, she passed the brain scan lie detection test,
but it still doesn't tell you what actually happened. All it's telling you is that she believes that's what happened. But sometimes people really do set out to deceive, and that's what people want to build technologies for. And we find this search for detecting deception all over. Sometimes this is in arbitration situations where it's one person's word against another and you want to know who's telling the truth.
And you find this in criminal cases where attorneys want to know if someone has in fact broken the law. And you find this in national security where governments want to know when some spy that they've caught is telling the truth or not. They want to have a meaningful interrogation. Now, as a reminder of what we talked about, there's no
way to measure a lie directly. All we ever measure is the physiology that's associated with the lie, whether that's a stress response like we see in the polygraph test, or whether it's about inhibiting the truth and cooking up a fake story like we see in brain imaging. But the question is will it work? Will we ever get to a point where we can know for sure when somebody is lying. Essentially, the debate about this falls into
two camps. The first is that this technology simply is never going to work perfectly in the real world, because it's always facing a mountain of questions. What if the person that you're measuring is on some sort of medication? What if he's had a stroke? Does the technology work for people who are outside the narrow age range that's been studied so far? What about psychopaths and compulsive liars? What if a person has totally misremembered something but is not actually lying? And on and on. So the concern that many scholars express is that these technologies, both new and old, will appear in courts before all the wrinkles have been ironed out. And this is a concern for many reasons. But when it comes to fMRI, one of the concerns is that images of the brain are very compelling to jurors.
It's different if you just say, hey, on this polygraph test, there was a blip, versus you show a beautiful false color image of the brain, and then a juror thinks, wow, that looks like real science, even if the accuracy probability is the same. So that's the first concern: that this technology is not actually going to work in the
real world. The second concern is that it will work, and this is a concern because most people hold the intuition that the inside of your head should be inviolable. Your mind should be a sanctuary where no one can enter. So given that, even if the technology does work perfectly someday, there are several reasons why it might never
get used in the legal system. I mentioned the Scheffer case in nineteen ninety eight, when the Supreme Court was deciding on whether a lower court has to accept the polygraph test, and at that time four of the justices suggested that even if polygraph tests were to work flawlessly with one hundred percent reliability, they still shouldn't be admitted. Why? Because the justices said that would take away the job of the jury as the determiners of truth.
Now that's not law, it's only a legal opinion at this stage, but it could be a pattern for how fancy brain imaging technologies might also get ruled on in the future. And again, unlike a polygraph as it stands now, the fMRI technology won't work unless you are a willing subject, willing to keep your head very still for the images
to come out right. And in this sense, the current brain imaging technology may become more useful as a truth confirmer for people who feel they've been falsely accused, rather than as a lie detector. So let's return to this issue about privacy and whether lie detection, as it gets better
over the coming centuries, constitutes a violation. One of the deepest questions from the point of view of the legal system and the Constitution is that a criminal defendant always retains the right not to testify and not to incriminate himself. So can a brain scan lie detection test be forced, or does it need to be agreed to? If you were to refuse to use a new brain scan technology, are you protected against self-incrimination, which is the Fifth
Amendment of the Constitution. There's a lot of legal precedent to all these questions, but of course we don't know exactly how this will evolve in the coming decades. Currently, blood tests and DNA tests can be forced. Many counties have a no refusal policy. If you're driving through and they suspect you of being on drugs, they can draw your blood. In other circumstances, law enforcement gets a court order for your DNA, which is usually taken with a
cheek swab. Now, these tests can only be done if they are very relevant to the case, if they're critical to the outcome. So will brain scan lie detection tests ever reach the point where they can be forced? Well, it's difficult to say, but for a piece of legal precedent that might be applicable, let's go back to nineteen fifty two. There was a man in California named Rochin, and the police raided his home on suspicion that he
had narcotics. And when they busted in, he went running up the stairs and they chased him up the stairs and he got to his bedroom and there were some pills on the dresser and they said what is that and he popped the pills into his mouth and swallowed them. So this made the police mad because he had just
gotten rid of the evidence. So they wrestled him to try to get the pills out, and when that didn't work, they handcuffed him and they took him down to the emergency room and had his stomach pumped to get the evidence back out. And this case spun all the way to the Supreme Court, and the court concluded, you can't do that. You can't pump somebody's stomach to get the evidence out of them. Why? Because they said that is unreasonable search and seizure, and they said it shocks the conscience.
So given this legal background, it's not clear that you could be forced into giving a brain scan and whether that would count as a form of unreasonable search and seizure. Would it be like pumping out the contents of the stomach if you looked at the contents of your mental life. And what if you're terrified of a brain scanner because you have claustrophobia or you don't like loud noise. Could you say that you're being put under undue pressure and
it's a form of torture. So these are all open questions. Can a legal system in the future get a mental search warrant? Does that seem reasonable under some circumstances? Or is there something special about this inner sanctum which has been absolutely private for all of history? Can we and
should we maintain that expectation? How all of this is going to shake out at the intersection of neuroscience and the legal system is difficult to say at the moment, but generally, as our technology moves faster and faster and surpasses our stone age brains, we find ourselves entering a very strange new world, and that is no lie. Go to Eagleman dot com slash podcast for more information and
to find further reading. Always feel free to send me an email at podcasts at eagleman dot com with any questions, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time. I'm David Eagleman and this is Inner Cosmos