Archive for the ‘Forensic evidence’ Category
We’ve seen the claims that people no longer find brain scans as alluring as they used to, but here is a study that says, “not so fast!”. It’s an oddly intriguing study involving not only pretty pictures of brain function but also political affiliation and how that factors into what one chooses to believe.
Much attention over recent years has been given to “an attack on science”, with many public figures (including elected officials) insisting that evolution is a hoax, climate science isn’t real, and vaccines are somehow more harmful than helpful. [For the record, here at the Jury Room we are big-time fans of science. I want to believe that our readers knew that already.]
Researchers discuss perceptions of “soft science” and “hard science” and the general sense that “hard science” is viewed as more reliable, accurate and precise. They describe multiple experiments showing people tend to prefer “hard science” data to data offered by those in “soft science”. The question these researchers focused on was whether “hard science” data (in this case, a brain scan) would be preferred over “soft science” data (in this case, cognitive test results). They also wondered if this preference (for “hard science” or “soft science” data) would be mediated by political orientation.
In the study, 106 participants (83 women, 23 men; ranging in age from 18 to 47 years, with an average age of 19.6 years; 77 identified as White, 17 as African-American, and “five or fewer” as Asian American, Latino/Latina, or other) completed an online pretest that included two questions about their political preference (both used by the American National Election Studies).
Generally speaking, do you think of yourself as a Democrat, Republican, Independent, or something else?
If you selected Democrat or Republican for the previous question, would you call yourself a strong Democrat or Republican or a not very strong Democrat or Republican?
Only those participants who identified as either Democrat or Republican were eligible to participate in the study, which they were told would involve reading about an ethics violation and then making judgments about the case.
In the study itself, participants read a one-paragraph case description about a politician elected to office in a geographically distant state who had recently been cited for three ethical violations. The paragraph informed them the ethics committee had questioned the politician’s memory and asked him to have an evaluation done on his memory to determine if memory issues would prevent him from carrying out his duties as an elected representative. Finally, the participants read that if the testing determined the politician was impaired, he would be forced to resign and the governor of the state would appoint a replacement to serve until the next election. The paragraph description concluded by saying the governor had announced that any replacement appointees would be members of the same political party as the governor.
There were (you knew this was coming) several variations in the information the participants read about the politician and his situation.
Half of the participants read that the politician tested was a Democrat and the governor of his state was a Republican. The other half read that the politician was a Republican and the governor of his state was a Democrat.
The researchers also paid attention to the political identification of each participant. If a Republican participant read about a Republican politician, they were placed in a group labeled in-group for analysis; if a Republican participant read about a Democratic politician, they were placed in a group labeled out-group (and the same logic applied for Democratic participants). Further, participants who endorsed a strong political affiliation were classified in the strong political identification group, and those who endorsed a weak affiliation were classified in the weak political identification group.
After reading the initial description of the situation, all participants read a two-paragraph description of an expert evaluation of the politician. The expert mentioned in this description was a “Dr. Daniel Weinberger”. The participants received differing information about how Dr. Weinberger had evaluated the politician’s cognitive function.
Half the participants read that Dr. Weinberger reviewed the politician’s medical history and gave him verbal or paper and pencil tests (commonly used by neuropsychologists).
The other half of the participants read that Dr. Weinberger reviewed the politician’s medical history and conducted an MRI of the politician’s brain. (It is important here to note that no MRI images were shown. All the participants saw were words describing the process and then, the outcome.)
The second paragraph offered a description of the results of the evaluations in ways consistent with either verbal or paper and pencil tests or an MRI. For all participants, the second paragraph ended with identical statements saying that the expert concluded the “politician was suffering from beginning-stage Alzheimer’s disease, that symptoms will continue, and the symptoms will interfere with the politician’s ability to perform his duties”.
And here are the findings:
Biologically based information (i.e., the brain MRI) was viewed more favorably (69.8% said the evidence the politician had early stage Alzheimer’s was strong and convincing) than the behaviorally based (i.e., cognitive testing) information (only 39.5% said the evidence the politician had early stage Alzheimer’s was strong and convincing).
When asked to identify the one most important reason they felt the way they did about the evidence presented, those who saw the behavioral evidence said it was subjective and perhaps unreliable or irrelevant—more than 15% said the neuropsychological testing was unreliable or irrelevant. Not a single participant who saw the biologically based evidence said the MRI evidence might be unreliable—in fact, they saw it as objective, valid and reliable. (Anyone with any knowledge of the validating research and very detailed manuals accompanying psychological tests might find this, as the researchers say, “perplexing”. Of course, those who have that knowledge base would not qualify for inclusion in this study.)
Those participants who were in political out-group assignments (that is, Republican participants who read about a Democratic politician or Democratic participants who read about a Republican politician) were more likely to discount the behavioral science evidence than those in political in-group assignments.
In short, in this study, participants saw the MRI as more reliable and relevant than the cognitive testing, and those with strong political identities discounted the cognitive testing even more than those without the strong political sense of self.
Despite the reality that Alzheimer’s is always diagnosed with cognitive testing, with brain scans used only after testing is completed to rule out other explanations for the impairments identified, these participants preferred the verbally described brain images of “hard science” to the low-tech paper-and-pencil tests of the neuropsychologist. It’s a finding that underscores the importance of expert testimony informing jurors of how a diagnosis is made, so they know whether testing was performed for the “wow” factor of a colorful MRI or to offer a research-based assessment of brain/memory impairment.
In other words, don’t believe everything you read: jurors can still be seduced by what looks like “hard science”. Your task is to show them which scientific findings are truly backed by years of scientific research and development.
Munro, G., & Munro, C. (2014). “Soft” Versus “Hard” Psychological Science: Biased Evaluations of Scientific Evidence That Threatens or Supports a Strongly Held Political Identity. Basic and Applied Social Psychology, 36 (6), 533-543 DOI: 10.1080/01973533.2014.960080
If you think neurolaw and neuroscience are everywhere, and don’t find it particularly challenging to talk about brain science, apparently you are living in a very rarefied environment. It’s hard to believe, but evidently most people do not think the exploding field of brain science is fascinating! Instead, when they think of brain science, they think of things that are far removed from their daily lives and things that make them anxious. [Or bore them to tears.] For litigators this has crucial ramifications, since any body of technical information worth presenting to a jury must be understood if it is to be useful.
UK scientists interviewed 48 London residents about “brain science”. They found that most of the interviewees believed that they would only find themselves interested in learning more about brain science if they developed a neurological illness. Maybe… too little too late?
The researchers identified four themes in the participants’ interviews: the brain belongs in the domain of science; there was significant angst that something could go wrong with the brain; there was a belief that we are all in control of our brains to some extent; and our brains are what make us all different and unique. The individual quotes the researchers included, however, highlight the lack of awareness of brain science or research:
“Brain research I understand, an image of, I don’t know, a monkey or a dog with like the top of their head off and electrodes and stuff on their brain.” [Male participant]
“It does conjure up images of, you know, strange men in white coats.” [Female participant]
“You just, like I say, blind people with science, don’t you. And then it becomes a subject that you just don’t understand. With me, I just switch off. I’m not understanding what you’re talking about here, so I just switch off.” [Male participant]
“Where do these people come from, that actually understand these things?” [Female participant]
The researchers highlight the reality that most people do not see “brain science” as something relevant or a part of their lives. However, if they developed a mental illness or a neurological condition, they believe they would have more interest in learning. Without those catalysts, they have little interest in pushing themselves to understand more. The researchers report that the concept of “brain science” seemed foreign or “baffling” to most of those interviewed.
From a litigation advocacy perspective, this study highlights the importance of teaching the science. Whether “the science” of a specific case is patent law, high-tech and abstract concepts, or actual “brain science”–jurors need to hear it and have a sense that they understand it enough to actually make judgments on the case. Keep in mind that they are going to judge it whether it is understood or not. The question is simply whether the judgment is going to be informed by bias, by knowledge, or by a coin flip and a longing to be done with jury duty. We know from 20 years of interviewing jurors that they strongly prefer having clear understanding. And that, dear litigator, is up to you.
We have worked on cases in which animation helped jurors make sense of complex computer programming, and on others where the analogy of ordering a pizza with different toppings, or a hamburger with or without special sauce, was used to help jurors understand different technology applications in an especially complex patent infringement case. We’ve also worked on cases where there were allegations of neurological injuries but a very normal-looking Plaintiff, and jurors had to “see” the injuries somehow to help them understand what had been lost.
Never lose sight of how foreign the concepts truly are, and help jurors understand so they do not have to “switch off” as one of the interviewees in this study confessed to doing. Often, our mock jurors help to make the abstract and complex both concrete and simple, or at least familiar. Just because you have been buried in a case for years and live, eat, and breathe the science doesn’t mean jurors will have a clue about what you are presenting to them. Teach them in a way that helps them relate the abstract and esoteric to their everyday lives. It empowers them to make the right call. If you don’t know how to explain it to ‘real people’, gather a group of mock jurors and ask them what makes sense, where they get lost, and what analogies are most useful to them. If you invite them to the conversation in the right way, they’ll tell you.
O’Connor, C., & Joffe, H. (2014). Social Representations of Brain Research: Exploring Public (Dis)engagement With Contemporary Neuroscience Science Communication, 36 (5), 617-645 DOI: 10.1177/1075547014549481
We are again honored by our inclusion in the ABA Blawg 100 list for 2014. If you value this blog, please take a moment to vote for us here in the Litigation Category. Voting closes on December 19, 2014. Doug and Rita
A new issue of The Jury Expert has been published, and as usual, it’s one worth reading. As Editor since May 2008, I get to see the articles as they come in and am always surprised at (and appreciative of) the creative and stimulating content we receive. The Jury Expert, like this blog, is all about litigation advocacy and understanding how new research can help inform your strategies in the courtroom. Here’s what you can see in the lineup for the November 2014 issue.
Wendy Heath and Bruce Grannemann ponder how video image size in the courtroom is related to juror decision-making about your case. They discuss how image size interacts with image strength, defendant emotions, and the defendant/victim relationship. Trial consultants Jason Barnes and Brian Patterson team up for one response to this article and Ian McWilliams pens another. This is a terrific article to help you reconsider the role of image size in that upcoming trial.
Sarah Malik and Jessica Salerno have some original research on bias against gays in the courtroom. This is simple and powerful research that illustrates just how moral outrage drives our judgments against LGBT individuals (especially when they are juveniles). Stan Brodsky and Christopher Coffey team up for one response and Alexis Forbes pens a second. While these findings make intuitive sense, they may also highlight something you’ve not previously considered.
Lynne Williams is a trial consultant who lives in the cold and snowy state of Maine. She is also skilled in picking juries for political trials, and a gifted writer, as she demonstrates in describing the important differences between picking juries for civil disobedience cases and antiwar protestor cases. This article not only explains what Ms. Williams does, but why and how she does what she does. It’s like lifting up the top of her head and peering inside her brain.
Mary Wood, Jacklyn Nagle and Pamela Bucy Pierson bring us this qualitative examination of self-care in lawyers. They talk about workplace stress and depression and substance abuse. Been there? Are there? Some kinds of self-care may work better than others but–what’s important is that you actually do some self-care! Andy Sheldon and Alison Bennett share their reactions to this article.
Why, you may wonder, would Plain Text EVER be a Favorite Thing? Because it is fabulous. Or, perhaps, because “Plain text is the cockroach of file types: it will outlive us all.”
Adam Shniderman knows neuroscience evidence can be incredibly alluring. This new study shows us that unfortunately (or perhaps fortunately) it is not universally alluring. Here’s a shocker: the impact of the neuroscience evidence is related to the individual listener’s prior attitudes, values and beliefs about the topic. Robert Galatzer-Levy and Ekaterina Pivovarova respond with their thoughts on the issues raised.
Law and Neuroscience by Owen Jones, Jeffrey Schall, and Francis Shen has just been published and, at more than 800 pages, is as long as any Harry Potter tale. Rita Handrich takes a look at this new textbook and reference manual, which covers more than you ever knew existed on the wide-ranging field of neurolaw (which is a whole lot more than the “my brain made me do it” defense).
Roy Bullis is back to talk to us about the wide language gulf between attorneys and their social science expert witnesses. Just because you are talking doesn’t mean you are actually communicating. How do you talk so your expert knows what you mean?
Demographic Roulette: What was once a bad idea has gotten worse. Authored by Doug Keene and Rita Handrich with a response from Paul Begala, this article takes a look at how the country has changed over the past 2 decades and our old definitions of Democrat or Republican and conservative or liberal are simply no longer useful. What does that mean for voir dire? What should it mean for voir dire? Two very good questions those.
If it feels bad to me, it’s wrong for you: The role of emotions in evaluating harmful acts. Authored by Ivar Hannikainen, Ryan Miller and Fiery Cushman with responses from Ken Broda-Bahm and Alison Bennett, this article has a lesson for us all. It isn’t what that terrible, awful defendant did that makes me want to punish, it’s how I think I would feel if I did that sort of terrible, horrible awful thing. That’s what makes me want to punish you. It’s an interesting perspective when we consider what makes jurors determine lesser or greater punishment.
Neuroimagery and the Jury. Authored by Jillian M. Ware, Jessica L. Jones, and Nick Schweitzer with responses from Ekaterina Pivovarova and Stanley L. Brodsky, Adam Shniderman, and Ron Bullis. Remember how fearful everyone was about the CSI Effect when the research on the ‘pretty pictures’ of neuroimagery came out? In the past few years, several pieces of research have sought to replicate and extend the early findings. These studies, however, failed to find support for the idea that neuroimages unduly influence jurors. This overview catches us up on the literature with provocative ideas as to where neurolaw is now.
Predicting Jurors’ Verdict Preference from Behavioral Mimicry. Authored by Matthew Groebe, Garold Stasser, and Kevin-Khristián Cosgriff-Hernandez, this paper gives insight into how jurors may be leaning in support of one side or the other at various points during the trial. This is a project completed using data from actual mock trials (and not the ubiquitous undergraduate).
Our Favorite Thing. We often have a Favorite Thing in The Jury Expert. A Favorite Thing is something low-cost or free that is just fabulous. This issue, Brian Patterson shares the idea of mind mapping and several ways (both low-tech and high-tech) to make it happen.
The Ubiquitous Practice of “Prehabilitation” Leads Prospective Jurors to Conceal Their Biases. Authored by Mykol C. Hamilton, Emily Lindon, Madeline Pitt, and Emily K. Robbins, with responses from Charli Morris and Diane Wiley, this article looks at how to not “prehabilitate” your jurors and offers ideas about alternate ways of asking the question rather than the tired, old “can you be fair and unbiased?”.
Novel Defenses in the Courtroom. Authored by Shelby Forsythe and Monica K. Miller, with a response from Richard Gabriel. This article examines the reactions of research participants to a number of novel defenses (Amnesia, Post-Traumatic Stress Disorder (PTSD), Battered Women Syndrome (BWS), Multiple Personality Disorder (MPD), Post-Partum Depression (PPD), and Gay Panic Defense) and makes recommendations on how (as well as whether or not) to use these defenses.
On The Application of Game Theory in Jury Selection. Authored by David M. Caditz with responses from Roy Futterman and Edward Schwartz. Suppose there was a more predictable, accurate and efficient way of exercising your peremptory strikes? Like using a computer model based on game theory? In this article, a physicist presents his thoughts on making those final decisions more logical and rational and based on the moves opposing counsel is likely to make.
Just say his brain made him do it! That is the conclusion of new research on the relationship between the gruesomeness of the crime and the harshness of the sentence. In case you can’t intuit this one, the more gruesome (and disturbing) the crime, the harsher the sentence tends to be. But if the assault was merely moderately gruesome (even though it could have been deadly), there are ways to minimize punishment decisions.
Researchers at Duke University found that “if the focus is drawn away from the mind of a perpetrator by providing biological explanations of personality instead of traits, people may not make the same social cognitive inferences”. So how did they come to that conclusion (and what does that quotation mean)?
First of all, it’s a small sample (N = 11), likely because it’s expensive and time-consuming to use an MRI machine. The researchers conducted brain MRIs while the participants read a number of different vignettes about crimes either strong or weak in violence-related disgust. The idea was for the researchers to see which areas of the brain were activated while reading the vignettes (that were either disgustingly gruesome or not so much) and then to see whether the participants chose punishment less than the US Federal Sentencing Guidelines or chose the harsher recommended sentence. (We’ve written about disgust before, and these researchers equate “gruesome” with “disgusting”, apparently thinking of the visceral reaction to gruesome photos or to mental images elicited by written descriptions.)
Here are examples of the vignettes used:
Rob Whitley was on his lunch break. He saw his boss at the hot dog stand and approached him while taking out a pair of scissors. He stabbed his boss on the side of the neck first, and then the lower back, causing the victim serious blood loss and requiring hospitalization. (This vignette was described as high in disgust.)
John Noel was at a bar and saw his ex-girlfriend’s new lover, James. Although John was not expecting to see James there, John took out the gun he regularly carried in his back pocket and tried to shoot James, but missed. (This vignette was described as low in disgust.)
Both of these crimes (whether high or low in disgust) would be prosecutable for aggravated assault. Participants were asked to rate how morally reprehensible the act was, how severe the punishment should be, and how much they were disgusted by what they read. However, as is typical in research like this, there was another twist: The researchers added a single sentence to the end of each vignette describing the perpetrator’s personality using either personality traits or biological language. That is, “Gerald frequently proves to have an impulsive personality” versus “Terry has a gene mutation that has been associated with impulsivity” when the crime was premeditated murder.
And here is what they found:
When the perpetrator was described as having biological reasons for impulsivity (rather than as being impulsive), he was seen as being less responsible and punished less severely.
When crimes were strong in disgust, there were harsher sentences but there was no relationship between how personality was described (biological or trait description) and punishment.
Crimes weak in disgust resulted in less harsh punishment than the guidelines recommended while crimes strong in disgust were punished at the recommended level.
In other words, if the crime is pretty gruesome (and therefore, these researchers say, one jurors would see as disgusting), your client is likely to get the harsher sentence regardless of whether you invoke a neurolaw (“his brain made him do it”) sort of defense. But if the crime isn’t gruesome and you invoke a neurolaw defense, your client may be seen as less responsible for his actions and punished less.
Ultimately, this dovetails well with what we’ve known for many years: it’s about what the jury focuses on. If the jury spends a lot of time talking about the crime and the injuries it caused, the defendant is in trouble. If there is a credible mediating explanation, such as a neurolaw defense or other circumstantial evidence, and the jury spends time talking about human behavior instead of terrifying assault, the defendant is in better shape.
Overall, it is important to remember that this is a study based on such a small sample (N = 11) that its results may not hold up, even when they make intuitive sense. Still, it is worth remembering that according to this study, the gruesomeness/disgust of the crime drives harsher sentencing, while biological explanations affect the assignment of responsibility; when disgust is high, however, those explanations are unlikely to reduce punishment.
Capestany BH, & Harris LT (2014). Disgust and biological descriptions bias logical reasoning during legal decision-making. Social Neuroscience, 9 (3), 265-277 PMID: 24571553