Lies, Brains and Courtrooms

Jan. 12, 2017, 5:00 AM UTC

When exactly did Hillary Clinton and Donald Trump lie to us? Do these pants make me look fat? Did Oscar Pistorius believe he was shooting an intruder and not his girlfriend?

We care about lies, and for good reason. Humans have been lying to each other, and trying to separate lies from the truth, since we first began roaming this planet. We are gifted liars, but seem to be terrible lie detectors, whether the lie is of the little white variety or of the no-I-didn’t-mean-to-kill-her variety. As for these latter kinds of lies, the forensic kind, our recorded history is strewn with bizarre and desperate techniques to detect them, all breathtakingly ineffective, and we are only marginally better at it today. That could be changing, because of some pretty spectacular advances in neuroscience and because of the way the assessment and admission of scientific evidence have changed over time. Caution is certainly warranted, given our pitiful lie detection track record. But the neuroscience is progressing so fast that we should probably be thinking now about some of the issues we will face—from personal privacy to the reliability of trials and the role of jurors—if this new lie detection technology eventually becomes ready for courtroom prime time.

Help From the Gods

Until trial by jury became dominant in England in the late 13th century, all criminal lie detection techniques assumed that ordinary mortals simply could not distinguish truth from lies. We needed help, either from the gods directly or from their priestly go-betweens. The Bocca della Verità (the “mouth of truth”) was one of many ancient truth-telling devices. It is a large marble sculpture from 1st century Rome, carved into the form of a face with a big, gaping mouth. It stands today in the portico of a church in Rome called Santa Maria in Cosmedin. Legend has it that suspects (or prospective lovers, in the case of Gregory Peck and Audrey Hepburn in a scene from the movie Roman Holiday) placed their hands inside the sculpture’s open mouth, and if they lied the gods would cause the mouth of truth to crush the liars’ hands.

All of the medieval trials by ordeal—including the ordeal of cold water, where the accused was bound and thrown into a body of water—were likewise based on this idea that even though ordinary people might not be able to detect lies, God could. Many of the medieval ordeals were self-executing, so to speak, though not in the lose/lose way you might think. In most pre-Inquisition ordeals, surviving them—admittedly a long shot—was a sign of God’s intervention to protect the innocent, not a sign of guilt. So if an accused undergoing the ordeal of cold water were somehow able to wriggle loose and not drown, he was declared to be innocent, and freed. It was only in the heresy trials of the Inquisition that the Church regularly began treating survived ordeals as a sign of witchery, so that the rare survivors were then burned at the stake.

Depending on the crime and the feudal ranks of the accused and victim, some of the lie detection ordeals in the Middle Ages were surprisingly mild.

In the ordeal of the consecrated morsel, usually reserved for nobility or priests, the accused took an oath to tell the truth, ate a consecrated communion wafer, pronounced his blamelessness, and was declared innocent if God didn’t immediately strike him dead. Although this method, like the mouth of truth, had a predictably low conviction rate, feudal lords did have some momentary doubts about it when the Earl of Kent reportedly died during the ordeal of the consecrated morsel in 1194.

Trial by compurgation—meaning “oath-helping”—was similarly mild. All the accused had to do was to swear his innocence, have a sufficient number and feudal quality of other people swear that he was God-fearing, and he was declared innocent, without any messy inquiry into the facts of the case. These oath-based lie detection methods may sound strange to our modern ears, but they weren’t strange at all if you believed in the sanctity of the oath. The threat of eternal damnation was a pretty good incentive to tell the truth, even if the consequence was a painful but at least brief earthly death.

These kinds of god-based truth-finding processes were common across many human societies. Trial by battle—where the victim and accused fought (or, later, their chosen champions) and where God preordained the outcome to favor the truth-teller—was used in virtually every civilization that has left a written record. Compurgation was not quite as widespread, but it appears in records left by many ancient societies, including the Babylonians and Jews. Trial by jury, by contrast, was exceedingly rare. It probably would never have ascended to its current position as the lie detection method of choice, at least in England and America, had not Pope Innocent III abolished trial by ordeal in 1215. With the ordeals gone, with the roughest edges of trial by battle smoothed by the influence of the Church, and with faith in the power of the sacred oath waning, we were forced to turn to the only lie detector left: the alleged group wisdom of our peers.

Humans Aren’t Good Lie Detectors

But when it comes to detecting lying, we’re probably not nearly as wise as we think.

The human brain is a wonder of information processing and prediction. But there are a few well-known tasks it simply cannot do well, and lie detection seems to be one of them. It is quite a startling paradox. Study after study shows that most of us are only slightly better than chance at distinguishing liars from truth-tellers. Yet we seem to have high confidence in our own ability to detect lies, and not much confidence at all in other people’s ability to do so. History’s desperate and unending search for God’s or science’s help with lie detection is probably a reflection of this widely and deeply held belief that with a single exception (me), no one else can accurately detect lies. I can’t be everywhere, and of course I might also be the one accused of lying. So we needed to find clever ways to help everyone else detect lies.

Ever since Greek physicians got the idea around 250 B.C. that telling a lie might increase a person’s pulse rate, earthly lie detection technology has pretty much been stuck measuring arousal. Yes, new measures have been added, including blood pressure, respiration rates, pupil dilation, blood oxygenation levels, galvanic skin response (detecting sweating by measuring the skin’s ability to conduct small currents of electricity over its surface) and even voice stress analysis. Today data from these measurements can even be crunched with fancy computer programs. But the underlying theory has always remained the same: all liars are worried about their lies, and their worry can be accurately detected by measuring changes in arousal.

Unfortunately, neither of these assumptions is correct often enough to make traditional lie detection by polygraph (which can measure blood pressure, pulse, respiration, and skin conductivity) accurate enough for courtroom use. There are many obvious and not so obvious reasons why lie detection based on measures of arousal is so difficult.

Truth Tellers Have Reasons to Worry

In criminal law contexts, even truth tellers have lots of reasons to be worried, stressed and aroused. They may be worried about whether the police will believe them. They may still be exhibiting stress simply from witnessing a crime, even if they didn’t commit it. Whose baseline arousal measures won’t be off the charts, and therefore presumably subject to wild swings, after just learning that a loved one has been murdered? Suspects could even feel real guilt about a crime, but not because they committed it. “I should never have gone on that trip and left my husband alone to be robbed.”

Lie detection is also fraught with the converse problem—liars passing the test—and polygraphy is no exception. Psychopaths, who generally don’t feel guilty about anything, may not feel guilty enough about lying to have the same arousal responses as ordinary subjects. There are also countermeasures, increasingly well known thanks to the internet, that can mask the arousal measures polygraphs rely on. Some countermeasures may even mask the very consciousness of guilt that all lie detection is allegedly detecting.

Philosophers and religious scholars have spent centuries debating the question of whether people are lying in the eyes of God when the words they say are true but they intend them to convey a false message to the listener. This kind of lie is called equivocation. When you’ve taken all the copy paper home for your high schooler’s term paper and your boss asks you about it, are you lying when you say, “Someone must have taken it”? Sneakier still, there is what theologians call the “mental reservation.” When asked whether you killed your husband, you say out loud “I did not kill my husband” but say to yourself “at noon yesterday.”

Are you lying when you equivocate or mentally reserve? You certainly are in the eyes of the law. But what about in the eyes of an arousal-based lie detector? Lies also have different degrees of moral traction. “Of course I remember you” almost certainly has different psychological underpinnings, and therefore different arousal patterns, than “Of course I didn’t rape her.”

Finally, the human capacity for self-deception, not to mention old-fashioned memory loss, can blur our own internal lines between truth and lies. Even the best lie detector can probably only detect lies that the subject knows, or at least suspects, are lies, since it is the consciousness of guilt that presumably animates the arousal.

So, with all these theoretical and practical challenges with arousal-based lie detection, how good are modern polygraphs at detecting lies? A 2002 report by the National Academy of Sciences, looking at more than 50 of the best polygraph studies around the country, found that the technology was, under some conditions, considerably better than chance (70 percent to 80 percent) at detecting lies. But that same NAS report also found that test results varied so wildly, both between polygraph operators and even with the same operators over time, that the polygraph is simply not yet reliable enough for courtroom use (or even for use in many other contexts, including job applications). And it is not entirely clear it can ever get reliable enough.

Instructed Lies

One of the hardest problems with lie detection research is the problem of the “instructed lie.” How can we ever study the accuracy of lie detection in the real world when the experiments we use to test that accuracy give subjects permission to lie? In the typical experiment, subjects are told to lie about some facts and to tell the truth about others (such as where the experimenter hid some objects). Surely we can’t just assume that these “instructed lies,” as the researchers have dubbed them, are the same phenomena as lies in the real world, where suspects lie at their own choosing, often about facts with deep moral significance and serious personal consequences. Lie detection researchers hope that instructed lies are in fact harder to detect than uninstructed ones, and therefore that the current modestly accurate ability of experimental polygraph results to detect instructed lies might translate into even more accurate detection of real world lies. But until uninstructed lies can be tested experimentally, this hope will remain only a hope.

One way to test uninstructed lies is to look at real lie detection results obtained during criminal investigations, and compare them to real case outcomes where we are pretty sure a witness has either lied or told the truth. But both of those circumstances are rare, and together they are almost unheard of. Most suspects and witnesses don’t take lie detection tests before trial, and with a few small exceptions (DNA, later-discovered video, confessions by alternate suspects) there is usually no objective way to be reasonably sure, even after a trial, whether someone has been lying.

But it now looks like the problem of the instructed lie may be on its way to a pretty clever solution. In an experiment done by Harvard’s Josh Greene and colleagues, under the auspices of a national law and neuroscience project funded by the MacArthur Foundation, subjects were asked to predict the outcome of computerized coin flips, using the cover story that they were testing extrasensory perception. The team incentivized the subjects by paying them for each correct guess. At first, they asked the subjects to report their predictions before the coin flip happened, just as you would expect in any experiment testing ESP. Not surprisingly (unless you believe in ESP), all subjects were right about 50 percent of the time. But then the researchers had subjects report their predictions after the flip, incentivizing them to lie. In this fashion the researchers were easily able to detect the net liars—anyone reporting success significantly above 50 percent—and then study their lying brains, without inviting any lying.
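The statistics behind this design are simple enough to sketch. Under honest reporting, the number of correct post-flip “predictions” should follow a binomial distribution centered at 50 percent, so a subject whose reported hit rate is significantly above chance must be lying on at least some trials. The following Python sketch, using hypothetical subject counts rather than the study’s actual data or analysis code, illustrates the idea with a one-sided binomial test.

```python
# Minimal sketch of flagging "net liars" in the coin-flip paradigm.
# Hypothetical numbers; not the researchers' actual analysis pipeline.
from math import comb

def p_value_at_least(k, n, p=0.5):
    """Chance probability of reporting k or more correct guesses out of n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def is_net_liar(reported_correct, trials, alpha=0.01):
    """Flag a subject whose reported hit rate is implausibly above 50 percent."""
    return p_value_at_least(reported_correct, trials) < alpha

# Hypothetical subjects: (correct "predictions" reported after the flip, total trials)
subjects = {"A": (52, 100), "B": (68, 100), "C": (90, 100)}
for name, (correct, trials) in subjects.items():
    print(name, "net liar?", is_net_liar(correct, trials))
```

Subject A, at 52 percent, looks like an honest guesser; subjects B and C report hit rates far too high to be luck, so they must have lied on some trials, even though no one can say which ones.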

The experiment found that roughly one-third of the subjects regularly lied, one-third occasionally lied and one-third never lied. Even more interesting, brain-scanning results using functional magnetic resonance imaging (fMRI) suggested that liars, in the aggregate, appeared to expend additional neural resources in certain frontal areas of the brain compared to the non-liars, both when they lied and when they refrained from lying, results that could be quite promising for neuroimaging-based lie detection in the future. In fact, the experiment was able to distinguish the frequent liars from the never-liars with 100 percent accuracy using the fMRI data alone.

Promising, right? Well, not so fast. For one thing, it was impossible to identify on which coin flip any particular subject was lying. That’s a big deal. In courtrooms, it’s not enough to know that witnesses may have a penchant for lying or telling the truth; we want to know whether they are telling the truth at a particular moment about a particular fact important to the case. For another thing, lying in an experiment about a coin flip, just to get a few extra dollars, may still be quite a different psychological, and therefore neural, phenomenon than lying in court about committing a bank robbery, in order to avoid prison. Finally, lies, like all decisions, can be complicated and nuanced, both in contemplation and execution. Were the subject liars “lying” when they merely thought about lying, when they decided to lie, or when they actually pushed the button that operationalized the lie?

As these methods are tweaked, and more lie detection research is aimed at uninstructed lies, researchers hope that the uninstructed lie ends up being easier to detect than the instructed one, since efforts at detecting instructed lies have been so disappointing.

The Changing Landscape

Polygraph results are generally inadmissible in virtually every state and federal court in the United States. In an influential 1998 court martial case, the U.S. Supreme Court even held that a criminal defendant has no constitutional right to present favorable polygraph results to the jury. Of course, lie detection is used today in many different domains, including screening by employers (including government agencies) and even in the criminal justice system before trial. But it remains generally out of bounds in our courtrooms. Two tectonic plates—one legal and one scientific—are shifting in ways that may change all of this.

First, the law is changing. The polygraph was born in the 1920s with a conjoined legal twin called the “Frye rule.” The rule takes its name from a 1923 homicide case in which the trial court refused to admit a blood-pressure polygraph offered by the defense to show the defendant was telling the truth about his alibi; the conviction was upheld on appeal. Frye v. United States, 293 F. 1013 (D.C. Cir. 1923). The Frye rule announced by the appeals court, and widely adopted by other courts across the U.S., allowed novel scientific evidence to be admitted only if the trial judge determined it was “generally accepted in the scientific community.” The polygraph was not generally accepted by the scientific community, so its results were inadmissible.

“General acceptance in the scientific community” may not sound like a very high standard, but it is. Science works by making hypotheses, falsifying them with experiments, then adjusting the hypotheses based on the outcomes of the experiments. Today’s scientific “truths” are tomorrow’s falsified hypotheses, and all good scientists are keenly aware of the limits of their current knowledge. It is difficult in any scientific discipline to achieve general acceptance about any new technique or finding.

The Frye standard also forced judges largely to punt to scientists, which made little sense given that the question of how much uncertainty the law should tolerate is ultimately a legal question, not a scientific one. For these and other reasons the U.S. Supreme Court threw out the Frye test in 1993 for all federal cases.

In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the Court replaced the daunting Frye standard of general scientific acceptance with a more flexible judge-operated standard. Judges should still look at whether there is general acceptance in the scientific community, but that is now just one factor of many, no one of which is required.

At roughly the same time the Supreme Court was making the standards for the admission of novel scientific evidence more flexible, leaps in neuroscience were allowing researchers, for the first time ever, to explore living brains while the owners of those brains were alive and performing experimental tasks, such as lying. Until then, our knowledge of how brains worked was limited to examining them on autopsy (or occasionally by recording the brain’s electrical impulses through the scalp), or in animal studies where an electrode was inserted to study a single brain cell, or in other very limited fashions. Suddenly, with the emergence of positron emission tomography (PET) scans in the mid-1980s, and fMRI scans in the early 1990s, scientists were no longer stopped at the barrier of the skull, or frustrated by the low resolutions of x-rays. Now they could measure brain anatomy and neural activity with high resolutions in living subjects while those subjects performed designated tasks, formed intentions, or lied.

We are still in the very early days of these neuroimaging technologies. They are just 30 and 25 years old, respectively, yet their advances have already been impressive. Thanks mainly to these neuroimaging technologies, scientists have learned more about the operations of the human brain in the last 30 years than they’ve learned about it in all of prior human history. And the pace of learning keeps accelerating.

It is true that these neuroimaging technologies, like all technologies, have limits. They do not directly measure brain activity. PET scans read the amount of injected radioactive tracer carried by blood to the brain. If, as neuroscientists believe, increased metabolic demands of the hardest working brain cells yield an increase in blood flow to their areas, then brain activity can be indirectly measured by tracking blood flow.

The fMRI scans are conceptually similar, except with greater temporal resolution and no need to inject subjects with radioactive substances. The fMRI technique detects changes in the ratio of oxygenated to deoxygenated hemoglobin in the blood, which have different magnetic properties. That window into localized changes in a brain’s blood flow patterns, which reflect the energetic demands of hard-working neurons, enables inferences about areas of the brain that are working differently when on task for different mental operations.

Yes, here we are with neuroimaging, as with polygraphy, back at physiology. But at least these neuroimaging technologies are looking at physiological changes in the organ commanding any intention to lie—the brain—rather than at remote arousal-based exhaust fumes of that intention.

PET and fMRI scans also don’t read brain function in real time—there is a small delay between the blood flow signals they detect and the brain activity presumed to be related to those signals. These time delays, though tiny, can be difficult challenges in any experiment aimed at pinpointing the moment a subject does something—like decide to lie.
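To make both the indirectness and the delay concrete, the sketch below simulates, under simplified and entirely hypothetical assumptions (a commonly used “double-gamma” hemodynamic response shape, no noise), how a brief burst of neural activity shows up in the measured blood flow signal only seconds later and smeared out over time.

```python
# Simplified illustration of why PET/fMRI signals lag the neural events they track.
# Hypothetical numbers and a standard double-gamma response shape; not real data.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t):
    """A commonly used double-gamma hemodynamic response shape (peaks around 5 s)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

dt = 0.1
t = np.arange(0, 30, dt)                 # 30-second window, 0.1-second steps
neural = np.zeros_like(t)
neural[(t >= 2.0) & (t < 2.5)] = 1.0     # brief burst of neural activity at t = 2 s

# The measured signal is (roughly) the neural activity smeared through the
# hemodynamic response: delayed by several seconds and spread out in time.
bold = np.convolve(neural, canonical_hrf(t))[: len(t)] * dt

print("neural burst begins at 2.0 s")
print(f"simulated blood flow signal peaks at {t[np.argmax(bold)]:.1f} s")
```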

The spatial resolution of these neuroimaging methods, though greater by orders of magnitude than traditional x-ray, and better even than EEG, remains limited, especially compared to the complexity and density of brains. The best fMRI can now detect blood oxygenation down to a resolution of about 1 cubic millimeter of brain matter. But 1 cubic millimeter of cortical brain matter contains about 50,000 brain cells, or neurons. That means the fMRI is counting as one signal what in fact is an average of signals coming from 50,000 different neurons.

Scientists know quite a lot about brains grossly—how different large segments are oriented and seem to work together. They know even more about how individual neurons work. It is at the intermediate level of organization—how local neural circuits function and how networks of neurons interact with other networks of neurons—that the brain remains most mysterious, and it is at that intermediate level that neuroimaging techniques like fMRI and PET, together with even newer methods to measure the connectedness of brain areas, hold the most promise.

Not Ready for Prime Time

Not surprisingly, once these neuroimaging techniques opened the window on the functioning brain, researchers began thinking of using them to detect lies. Our Research Network’s consensus statement on fMRI lie detection concludes that, at present, the state of this research is very much like the state of polygraphy in 2002, when the NAS declared it was simply not reliable enough to use in courtrooms, even if 70 percent or even 80 percent validity could sometimes be achieved.

In 2014, a team of neuroscientists, also funded by the national law and neuroscience project, conducted a comprehensive review of neuroimaging-based lie detection experiments and concluded that there is very little evidence that bears on the accuracy of fMRI-based lie detection. Only a handful of studies examined the accuracy of the method (sometimes reported as high as 80 to 100 percent), and, as with polygraphs, the team found that fMRI studies relied heavily on instructed-lie paradigms. Further problems with the experimental designs also leave open the possibility that the fMRI signals attributed to lying might instead reflect brain responses associated with memory, attention, and other cognitive states. Finally, they noted that there is unacceptable variation both within subjects in a single experiment and between experiments and experimenters. Ultimately, the team concluded that the accuracy and reliability of fMRI-based lie detection are currently unknown and (just as the NAS concluded in 2002 about polygraphs) that lie detection by fMRI is not currently ready for courtroom use.

The Research Network’s knowledge brief “fMRI and Lie Detection” is available at http://www.lawneuro.org/LieDetect.pdf.

Beyond the above concerns, lie detection by neuroimaging seems particularly vulnerable to both unintentional and intentional data degradation. Even small, unintentional, head movements during scanning can invalidate neuroimaging results; intentional head movement is catastrophic for the technique. Apparently, so are intentional cognitive countermeasures. In one reported study, accuracy rates plunged from 100 percent to 33 percent after subjects were instructed to think about irrelevant things and to covertly move their fingers while answering.

As with polygraphs, lie detection by neuroimaging is not faring well in our courtrooms. Although the Supreme Court has not yet addressed the issue, a 2012 Sixth Circuit opinion upheld a federal trial judge’s exclusion of fMRI-based lie detection results in a criminal case, agreeing with the trial judge that among other things the technique, as applied in that case, was of unknown accuracy and reliability. U.S. v. Semrau, 693 F.3d 510 (6th Cir. 2012).

Reasons to Believe

But there are reasons to believe that lie detection by neuroimaging will improve, both in its accuracy (how frequently it can correctly differentiate lies from truths) and in its reliability (how variable that accuracy is between subjects, experiments and experimenters). If the next 30 years are anything like the last 30, we can expect significant increases in accuracy. Neuroimaging techniques are not only getting better; the way scientists crunch the data from them is also getting more sophisticated. Neuroscientists are starting to use learning-capable computer programs, called pattern classifiers, to look at huge amounts of brain imaging data. This technology has the potential to work around the problem that so little is known about the intermediate organization of the brain. Using these big data techniques, neuroscientists are looking at whole brains, or large regions, for patterns that might accurately correlate with lying. It doesn’t really matter why these patterns appear when a subject is lying, as long as they reliably track lying.
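As a rough illustration of the pattern classifier idea, the sketch below uses synthetic data and the open-source scikit-learn library rather than any particular lab’s pipeline. A classifier is trained on many labeled scans, voxel activity patterns recorded during known lies and known truthful answers, and is then scored on scans it never saw during training. What matters is the accuracy on the held-out scans; why the pattern works can remain unexplained, just as the text notes.

```python
# Generic sketch of multivoxel pattern classification (synthetic data, not real fMRI):
# train a classifier on voxel patterns labeled "lie" or "truth", then measure how
# well it predicts the labels of scans held out from training.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_scans, n_voxels = 200, 5000             # hypothetical: 200 scans, 5,000 voxels each
X = rng.normal(size=(n_scans, n_voxels))  # voxel activity patterns (synthetic)
y = rng.integers(0, 2, size=n_scans)      # labels: 1 = lie, 0 = truth (synthetic)

# Plant a weak "lying" signature in a small subset of voxels, standing in for the
# extra activity in executive-control regions reported in the literature.
X[y == 1, :50] += 0.5

# Train on labeled scans, then score accuracy on scans held out from training.
classifier = make_pipeline(StandardScaler(), LinearSVC(C=0.01, dual=False))
scores = cross_val_score(classifier, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

The design choice worth noting is the cross-validation: because each scan contributes thousands of voxels but the experiment has only a couple of hundred scans, a classifier can easily memorize noise, so accuracy only means something when it is measured on data the classifier never trained on.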

Lie detection by neuroimaging also has a significant advantage over the polygraph when it comes to the likelihood of increasing reliability between subjects and different experiments and experimenters. Polygraphers look at a few arousal measures over time for each subject; neuroscientists typically look at thousands of data points for each subject. This gives neuroimaging experiments a statistical power polygraph experiments will never have and, especially with the big data boost of pattern classifiers, a real chance to improve the reliability of neuroimaging-based lie detection.

There are already some replicable common findings in fMRI-based lie detection experiments. Certain specific regions of the brain—including those associated with executive control (some of the same regions identified in the uninstructed lie experiment discussed above)—consistently show increased activity during instructed lying compared to instructed truth-telling. These common findings are not strong enough to overcome the concern that there might be other cognitive causes of the brain signals other than lying, such as memory loss or lack of attention. These findings also reflect group averages rather than individual subjects on individual trials, they are susceptible to countermeasures, and they still relate only to instructed lies. Nevertheless, they are a clue that future neuroimaging technologies may one day be ready for the courtroom.

Beating a Coin Flip

Proponents of admitting lie detection technology quite rightly ask why, if we can solve the reliability problem, we shouldn’t admit any lie detection method even if it never achieves more than a 70 percent accuracy rate. After all, isn’t 70 percent a lot better than what we have now: jurors just flipping a coin about who’s lying? Yes and no.

Because of the problem of the instructed lie, we actually don’t know whether jurors are as bad at detecting courtroom lies as they are about detecting instructed experimental lies. They may be much better than chance; maybe even much better than an 80 percent or 90 percent accurate lie detection technique. This remains an enticing open question, waiting for clever experimenters to figure out ways to take experiments on uninstructed lies into the forensic realm. How can we get experimental subjects to lie about morally salient historical facts, similar to the kinds of things we see in criminal courtrooms, without instructing them to do so? There are some efforts underway, but it remains unknown whether humans are as blind to courtroom lies as they are to experimentally instructed lies.

It also turns out that a small percentage of us are considerably better than the rest at detecting instructed lies. These human “super detectors” are accurate to about 65 percent, only a little less than the best polygraphs. Not much is known about why they are so good, and much more research is needed. But it is conceivable that in the future we will be able to teach a host of players in the criminal justice system—police, prosecutors, jurors, judges, probation and parole officials—to improve their lie-detecting skills considerably.

Whodunnits. Whydunnits.

It is also important to understand that criminal justice outcomes rarely depend on whether a witness is lying. First, across state and federal courts, roughly 95 percent of accused defendants plead guilty. Even in the 5 percent of criminal cases that go to trial, the factual guilt of the defendant is seldom at issue. Crimes usually have two components: the unlawful act and the accused’s state of mind while committing it (e.g., whether he shot the victim intentionally, accidentally, or with a mental state somewhere in between).

A small part of the cases that go to trial are whodunnits; most are whydunnits. Even in that tiny segment of criminal cases that go to trial and are whodunnits, the evidence is seldom limited to the statements of a single witness. The crime was witnessed by others, or was captured on a security camera, or—and this happens surprisingly often—the defendant has confessed. Yes, there are cases that truly are he said/she said whodunnits, or whether anyone dunnit. But they are a fraction of a fraction. When we imagine the impacts reliable lie detection might have on the criminal justice system in the future, we need to understand what a small part trials play in that system.

But of course plea bargains happen in the shadow of trials. If defendants’ and witnesses’ testimony could be accurately and reliably confirmed or discounted by lie detection, it would have an enormous impact on all aspects of the criminal justice system. Perhaps innocent suspects would never be charged, and therefore never wrongfully convicted. Perhaps the guilty would plead guilty to appropriately serious charges, rather than to ones that reflect the prosecutors’ worry that jurors will let a guilty man go free.

Invading the Province of the Jury

But there are palpable costs even to reliable and accurate lie detection. One of these costs is invading the most traditional province of the jury—deciding credibility. Indeed, in what appears to be the first reported case excluding evidence of lie detection by fMRI, this problem of invading the jury’s role figured greatly in the New York trial judge’s decision. Wilson v. Corestaff Serv., LP, 900 N.Y.S.2d 639 (2010).

It is not just a matter of invading the jurors’ province; after all, one man’s “invasion” is another’s improvement, and if a valid and reliable technique proves to be better than jurors then perhaps their province should be invaded. These scientific techniques risk more than mere invasion, however; they risk jurors over-relying on them. In its wisdom, the law has recognized, in the form of the doctrine of not invading the province of the jury, several circumstances in which we risk having the jury abdicate its truth-finding function to other kinds of particularly alluring evidence. Remembering that most trials have lots of evidence besides a single witness’s testimony, we need to worry about the jury inappropriately discounting all of that other evidence and over-relying on highly accurate and reliable, but still fallible, lie detection.

Imagine you were sexually assaulted in a public bathroom, that you had plenty of time to see the defendant’s face and picked him out of a lineup, that there is no forensic evidence, but there is a security camera showing the defendant going into the public bathroom and coming out of it right at the time he assaulted you (this was a real case over which the judge co-author presided). A slam dunk case, right? Now imagine the defense could force you to take a polygraph or fMRI lie detection test, you did, and you were one of the, say, 10 percent of truth-tellers who fail it. The slam dunk case is probably now very hard to prove, because jurors may well pay so much attention to the failed lie detection test that they ignore the other evidence.

There is lots of literature confirming that ordinary people often over-value scientific or even pseudo-scientific evidence. We are all desperate for that truth-detecting device, and we may assume it is infallible even in the face of expert witnesses telling us it isn’t. Neuroimaging evidence may be especially susceptible to this problem of jurors over-relying on scientific evidence.

Several studies in the early 1990s showed what experimenters have come to call “the Christmas tree effect.” When two groups of subjects are given identical medical information, but one group is also given an unexplained PET or fMRI image showing the brain “lighting up like a Christmas tree,” the group that gets the neuroimage gives it weight that is simply not justified by the information provided verbally. In contrast, several studies since 2011, funded by the law and neuroscience project, suggest there may not be any “Christmas tree effect” at all, or that its effect may not be as worrisome as once thought.

How accurate and reliable will any new lie detection technology need to be to overcome problems like invading the province of the jury and the Christmas tree effect? In the end, unlike with the polygraph, we imagine that individual trial courts across the country, most of them using the flexible Daubert standard, will come to different conclusions about admissibility. That may be a better result than the monolithic and automatic exclusion of polygraphs that began in the 1920s and continues today despite the changing standards for admissibility and the increase in the polygraph’s accuracy. If new lie detection methods really do get substantially more accurate and reliable, as we suspect they will, then judges will need to be open to the possibility that they should be admitted.

Neuroimaging in the Courtroom

When and if that happens, and lie detection by neuroimaging is ready for the courtroom, will we be ready for it? Getting past Daubert’s accuracy and reliability barriers is just the first of many legal and policy hurdles. Can witnesses be forced to undergo accurate and reliable lie detection? The defendant probably cannot be forced, since the Constitution’s Fifth Amendment protects a defendant’s right to remain silent, and all current lie detection techniques require the subject to answer questions. But we can imagine future neural-based lie detection methods that might be so sensitive they could detect a suspect’s recognition of crime scene photos without asking him anything.

In fact, we don’t have to imagine such techniques; a version of them is already on the experimental radar. Our Research Network colleague Anthony Wagner has published a series of experiments in which he and his team were able to detect, with accuracies far above chance, whether subjects had seen a face before, or visited a scene before, simply by showing them photographs and using classifiers to look for patterns of recognition in their fMRI data. Would such a method, if perfected, violate the Fifth Amendment if forced upon a suspect? It is not at all clear.

Conversely, should the defense be able to force a victim or witness to undergo accurate and reliable lie detection? There are usually no Fifth Amendment issues raised by this question (unless the witness was also involved in the crime), but what about privacy? Have victims or witnesses lost what would ordinarily be their right to prevent the state from rooting around inside their brains, just because they have been unfortunate enough to have had a crime committed against them or witnessed a crime being committed against someone else? The answer to this question might depend in part on whether this new technology can be limited to the memories pertinent to the criminal case. Few of us would want to live in a Brave New World where the price of effective forensic lie detection is to have all our memories downloaded for state use.

We also have some worries, which we don’t want to overstate, about what might be called the technological dehumanization of justice. Knowing that humans are so deeply fallible is one of the things that keeps us all vigilant about the fallibility of our institutions, including the trial system. The day we turn on a switch and expect a machine to spit out justice is probably the day we stop worrying about justice.

But of course that doesn’t mean justice’s component pieces can’t be improved by technology; it just means that we need to think about whether the technological medium has affected the legal message. The fact that today we photocopy jury instructions instead of having scribes handwrite them doesn’t mean our jury instructions were better in the 1700s than they are today. But it does pose the risk, less present when each instruction had to be created on its own, that instructions we’ve used before will get used and reused without thinking about them again. The solution isn’t to go back to scribes, but rather to force ourselves to reread, and re-think, the instructions in each case.

Likewise, we can’t imagine a trial system that consists solely of banks of lie detectors churning out the truth of a case, with no human involvement by lawyers, judges or juries. But neither can we imagine that lie detection, any more than photocopying, will somehow never find a place in the system. Finding the right place will be the challenge.
