TIME has a recent piece entitled "Fighting Crime by Reading Minds," which discusses some interesting recent work in neuroscience and its potential for detecting lies, reading minds, and the like. It's worth the read even if I don't share Rosenfeld's desire to simply "get past all these theoretical arguments." On my view, side-stepping legitimate moral and political issues in the name of "simply doing science" is myopic given the historical abuses of science. This is not to say that scientists need to be beholden to these concerns, but they should nevertheless take them seriously lest we repeat some of the avoidable mistakes of the past.
Michael Pardo and Dennis Patterson have a very interesting forthcoming piece in Neuroethics entitled "Minds, Brains, and Norms." I was fortunate enough to be asked to provide some invited commentary along with Walter Glannon and Carl Craver. Pardo and Patterson's lead article is now available for preview (see here) along with some of the invited commentary (see here and here). Finally, their reply is also available here. Thanks to Neil Levy and Walter Sinnott-Armstrong for inviting me to play along and for putting together this special issue!
I have posted several things of late (see here and here) about the two recent high-profile court cases involving the admissibility of neural lie detection. The case that has garnered the most attention is United States v. Semrau. The United States Magistrate released his recommendation yesterday. In short, the neural lie detection evidence was deemed inadmissible.
The complete recommendation can be found in the following attachment:
Greg Miller has recently written three pieces for Science Magazine on neural lie detection in the courtroom:
As always, happy reading!
Here is a recent article that readers of this blog will find of interest:
Rissman, J., Greely, H., and Wagner, A.D. (2010). Detecting individual memories through the neural decoding of memory states and past experience. PNAS.
Here are two recent pieces on neural lie detection:
As always, happy reading!
Raymond Tallis has made it abundantly clear that he doesn't like the recent trend towards research at the crossroads of neuroscience and the humanities. In a series of sensational editorials, he has derided what he uncharitably refers to as "neuro-trash" and "neuro-mythology." I, for one, think that his arguments tend to shed far more heat than light. Minimally, Tallis frequently commits what I have called the fallacy of composition when it comes to the field of neurolaw. As such, I plan to post a few responses in the weeks ahead. In the meantime, I thought I would provide readers with links to some of the things he has written about the relationship between neuroscience and the humanities in the past few years.

"Neurotrash"
In summary, such are the limitations of our understanding of the brain that attempting to apply the findings of neuroscience to social policy would be premature, even if this were not wrong in principle. But it is wrong in principle. The fabric of the human world, of the public space that is the arena of our lives, is woven out of explicit shared attention that has been infinitely elaborated in a way that has little to do with what goes on in the darkness of the individual skull, though you require a brain in working order in order to be part of it. If you come across a new discipline with the prefix “neuro” and it is not to do with the nervous system itself, switch on your bullshit detector. If it has society in its sights, reach for your gun. Bring on the neurosceptics.
Meanwhile, the neuromitigation of blame has to be treated with suspicion except in those instances where there is unambiguous evidence of grossly abnormal brain function or abnormal mental function due to clearcut illness that may have its origin in brain disease. Our knowledge of the relationship between brain and consciousness, brain and self, and brain and agency is so weak and so conceptually confused that the appeal to neuroscience in the law courts, the police station or anywhere else is premature and usually inappropriate. And, I would suggest, it will remain both premature and inappropriate. Neurolaw is just another branch of neuromythology.
Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.
(*) This piece is not specifically about neurolaw, but it is based on assumptions that inform Tallis' so-called neuro-skepticism.
It appears both Tallis' strategy and his rhetoric are catching on, as evidenced by this recent piece by Denyse O'Leary over at MercatorNet:
I'd like to propose a radical idea. Why don't the neuroscientists and progressive humanists stop hyperventilating and chill out for a while. Let them reflect on the fact that a hundred years ago phrenology, the "science" of analysing behaviour by putting a tape measure around a skull, was all the rage. Until they can account for the difference between the mind and the brain, their research might not be worth a hill of beans. In fact, it might just be, in the words of Raymond Tallis, neo-phrenology.
Joelle Anne Moreno has an interesting new article entitled, "The Future of Neuroimaged Lie Detection and the Law." The abstract is as follows:
Neuroscience will certainly change law. In fact, neuroscience research has the potential to influence a vast range of legal decisions. To the extent that neuroscientists increasingly make claims that neuroimaging reveals cognition, even the most unimaginative prognosticator might predict: (1) the preliminary investigative use of neuroimages to enhance witness interviews and police interrogations (including but not limited to lie-detection), (2) jury selection based on neuroimages that appear to reveal jurors' unconscious stereotypes or biases, and (3) arguments about intent or sentencing based on neuroimage-enhanced explanations of behavior and predictions of dangerousness. In anticipation of a brave new world of neuroscience 'enhanced' law, this Article suggests that if we want to predict or control future social and legal responses to cognitive neuroscience research, we must carefully and explicitly consider two basic preexisting realities: (1) our shared assumptions about the validity of the medical field of neuroscience and the accuracy of diagnostic neuroimaging technologies; and (2) our increasingly frequent exposure (even within the mainstream media) to uncritical reports of cognitive neuroscience research that purports to correlate brain activity with cognition, deception, or social behavior. The risk is that if we ignore these realities, judges, jurors, and the general public will likely view all or most neuroscience-based evidence as legitimate 'hard' science because researchers rely on technologically sophisticated neuroimaging tools of demonstrated accuracy. The problem is that judges and jurors will mistakenly assume that technologies that are demonstrably valid medical diagnostic tools yield equally valid conclusions when they are used to map the neural correlates of deception and other forms of cognition.
As always, happy reading!
Frederick Schauer--one of the members of LANP's Governing Board--has recently posted an interesting article on SSRN entitled, "Can Bad Science Be Good Evidence: Lie Detection, Neuroscience, and the Mistaken Conflation of Legal and Scientific Norms" (see here). Given the recent interest in neuroimaging and lie detection, it is a very timely and important piece. Here is the abstract:
As the capabilities of cognitive neuroscience, in particular functional magnetic resonance imaging (fMRI) 'brain scans,' have become more advanced, some have claimed that fMRI-based lie-detection can and should be used at trials and for other forensic purposes to determine whether witnesses and others are telling the truth. Although some neuroscientists have been at the forefront of promoting such claims, most neuroscientists aggressively resist them, and have argued that the existing research on neuroscience-based lie-detection is deeply flawed in numerous ways. And so these neuroscientists have resisted any attempt to use such methods in litigation, arguing, in effect, that they are the product, so far, of poor science. But although it is probably true that the existing studies have serious problems of validity when measured by the standards of science, and true as well that the reliability of such methods is significantly lower than their advocates claim, it is nevertheless an error to assume that the distinction between good and bad science, whether as a matter of validity or a matter of reliability, should be dispositive for law. Law is not only about putting criminals in jail, and once we understand that numerous uses of evidence in various contexts in the legal system require a degree of probative value far short of proof beyond a reasonable doubt, we can understand the larger point that legal and scientific norms and standards are different. Some examples of good science may still not be good enough for some legal purposes, and, conversely, some examples of bad science may, in some contexts, still be good enough for some legal purposes.
Adam Kolber already has a nice and concise commentary on Schauer's paper over at the Neuroethics and Law Blog. So, you should check that out as well.