The Neuro Gang's all here! We posed this afternoon
for a group photo on the beautiful Penn campus.
The morning started off with a really useful breakout session, led by Penn neuro grad students, on how to read a paper that reports fMRI results. We broke into five groups (my group covered race and empathy as reported in this paper). For those playing along at home, you might also want to check out Owen Jones’s article, “Brain Imaging for Legal Thinkers.” If you’re new to fMRI, it will help you understand what the terminology means.
After the breakout session, we heard again from Geoff Aguirre. Through a series of morning and afternoon lectures, Geoff walked us through more advanced topics in fMRI. If you want to see what we covered, you can find Geoff’s presentations (both slides and videos) online. I highly recommend checking them out.
After Geoff’s lecture in the afternoon, we had two hour-long breakout sessions. These afternoon sessions, designed to be informal and a chance to converse, were led by boot camp participants. Along with Hiroko Ide, a political scientist doing fMRI research at the
The lineup of breakout sessions (5 were offered each hour) was great, and I wish I could have attended many of them. If you were here you could have attended the following sessions:
- Neuroimaging of Punishment by Owen Jones
- Regulating Neuroimaging by Stacey Tovino
- Neuroimaging: Privacy Considerations by Marc Blitz
- Neuroimaging of Religious Experience by Andrew Newberg
- Neuroimaging in Court by Julie Seaman
- Imaging of Intention by Tom Buller and Deborah Denno
- Marketing and Neuromarketing by Jeff Galak and Gal Zauberman
- Neuroimaging Pain by Adam Kolber
II. Important Lessons
Yesterday we learned some fMRI basics, and today Geoff taught us how to be informed consumers of fMRI research. Like any research methodology, fMRI has both promise and limitations. Because it is still new, and because the first round of fMRI studies didn’t do everything they needed to methodologically (e.g., statistical corrections), Geoff has been reminding us that a lot of the work currently out there isn’t to be trusted. So how do we know if an fMRI study is solid? What types of questions should we ask? Geoff gave us the answers.
- Forward Inference Experiments. These are experiments that tell us “this brain area is for X,” where X may be love, working memory, or any number of other mental operations. When you see these experiments, ask: “How did the researchers isolate this mental operation?” For instance, if they’re claiming that this is the brain area for love, how did they isolate love? Remember that we can only isolate a mental process indirectly, through a method neuroscientists call cognitive subtraction. We’re not able to see a subject “love” and then track the neurons that fire. Rather, we observe a subject during state-1 and during state-2, and we subtract the neuronal activity between the two states. We then call that difference “love,” but we can only do so if we’ve designed the experiment such that we can credibly claim that the only difference in mental processes between state-1 and state-2 is in fact love. As you’re probably guessing, such a claim often isn’t valid. This problem is known as the “trouble with cognitive subtraction,” and it’s a longstanding one.
- Reverse Inference. In a reverse inference experiment, a researcher sees where neuronal activity occurs in the brain, and then works in reverse to claim that this activity reflects the mental process the subject was engaged in at the moment the scan was taken. The problem is that, absent information from previous forward inference studies, a researcher doesn’t know a priori which mental processes the subject was actually engaged in. When a brain area is associated with multiple mental processes, it is impossible to know which process (or combination of processes) the subject happened to be experiencing. Thus, the question to ask when you see these sorts of studies is, “How strong is the relationship between brain activity and the mental process?” Geoff illustrated this problem with a study he called the “biggest, smelliest pile of garbage you can find.” He has also written a longer critique.
- Distributed Reverse Inference. Distributed reverse inference studies are the closest we get to “mind reading,” as researchers essentially say: let us see your brain’s entire pattern of activity, and based on the distribution of activation we can accurately predict what you’re thinking about … so long as you’re thinking about one of the things we’ve trained our computer program to recognize (e.g., a certain face). This caveat should sound familiar to social scientists, as it’s another version of the internal/external validity challenge. Just as experimental social scientists can’t be sure that the conclusions they reach in the laboratory will hold outside it, neuroscientists can’t be sure that their predictive models will work accurately beyond the stimuli the computer has been trained to recognize. This has big implications for lie detection, namely that current technology does not allow for scientifically valid claims about the ability to detect lies.
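For readers who like to see the mechanics, the cognitive subtraction idea from the first lesson can be reduced to a few lines of arithmetic. Everything below is invented toy data (the voxel counts, signal values, and the "love vs. stranger" framing are all illustrative assumptions, not a real study design):

```python
# A minimal sketch of cognitive subtraction on made-up numbers:
# compare average voxel activity in one state against a matched
# control state and keep the difference map.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 20, 6  # toy dimensions, nothing like real fMRI sizes

# Hypothetical BOLD signal per trial per voxel in each state.
state_1 = rng.normal(loc=1.0, scale=0.1, size=(n_trials, n_voxels))  # e.g., viewing a loved one
state_2 = rng.normal(loc=1.0, scale=0.1, size=(n_trials, n_voxels))  # e.g., viewing a stranger
state_1[:, 2] += 0.5  # pretend voxel 2 responds more in state 1

# The "subtraction": the difference of the mean activity maps.
difference_map = state_1.mean(axis=0) - state_2.mean(axis=0)
peak_voxel = int(np.argmax(difference_map))

# Labeling this peak "love" is only valid if the two states differ
# in no mental process other than love -- exactly the assumption the
# "trouble with cognitive subtraction" calls into question.
print(peak_voxel)  # → 2 in this toy example
```

The arithmetic is trivial; the hard part, as Geoff stressed, is the experimental design that licenses putting a mental-process label on the difference.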
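The "how strong is the relationship?" question from the reverse inference lesson can also be framed with Bayes' rule: the inference is only as strong as the brain area is selective. The probabilities below are invented purely for illustration:

```python
# Toy Bayes calculation (all probabilities invented): how confident
# can we be that a subject was engaged in some mental process, given
# that a particular brain region was active?
def posterior(p_active_given_process, p_active_given_not, prior):
    """P(process | activation) via Bayes' rule."""
    evidence = (p_active_given_process * prior
                + p_active_given_not * (1 - prior))
    return p_active_given_process * prior / evidence

# A non-selective region -- it also lights up for many other processes:
weak = posterior(0.9, 0.8, prior=0.5)
print(round(weak, 2))    # → 0.53, barely better than a coin flip

# A highly selective region -- rarely active otherwise:
strong = posterior(0.9, 0.1, prior=0.5)
print(round(strong, 2))  # → 0.9, a much stronger inference
```

The same activation in the same region supports very different conclusions depending on how often that region fires for other reasons, which is why "this area is active, therefore the subject was doing X" is so often overstated.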
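Finally, the distributed reverse inference lesson, and its external-validity limit, can be sketched with a toy nearest-centroid decoder. The templates, voxel counts, and category names below are all made up; real studies use many thousands of voxels and cross-validated classifiers:

```python
# Minimal sketch of distributed ("multi-voxel") decoding with a
# nearest-centroid classifier on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 8

# Pretend each trained thought category has a characteristic pattern.
templates = {
    "face_A": rng.normal(size=n_voxels),
    "face_B": rng.normal(size=n_voxels),
}

def decode(pattern):
    """Return whichever trained category's template is closest."""
    return min(templates, key=lambda k: np.linalg.norm(pattern - templates[k]))

# A noisy scan of someone viewing face A is decoded correctly...
scan = templates["face_A"] + rng.normal(scale=0.1, size=n_voxels)
print(decode(scan))  # → "face_A"

# ...but a scan of *anything outside the training set* is still
# forced into one of the trained categories -- the external-validity
# limit noted above, and the core problem for fMRI lie detection.
novel = rng.normal(size=n_voxels)
print(decode(novel))  # some trained label, regardless of what was actually thought
```

The decoder never says "I don't know"; it can only pick among the categories it was trained on, which is exactly why accuracy inside the lab doesn't license mind-reading claims outside it.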
III. Who’s at boot camp with me?
I went to lunch today with Bombie Salvador, a Management Professor at the