Archive for the ‘Forensic evidence’ Category
Our posts on women stalkers are often listed in internet searches that bring people to our blog. Women stalk. Women also kill. In fact, it is believed that about 16% of serial killers (about 1 in 6) are female. Although it is hard for many to see women as capable of extreme crimes like murder, the researchers whose work we feature today have no such illusions. [If you can’t wrap your brain around that notion, we suggest you spend an evening alone in your house with all of the lights turned down, and watch the film Monster, an account of the convicted female serial killer Aileen Wuornos.]
“Contrary to preconceived notions about women being incapable of these extreme crimes, the women in our study poisoned, smothered, burned, choked, bludgeoned, and shot newborns, children, elderly, and ill people as well as healthy adults; most often those who knew and likely trusted them.”
This is a chilling article to read (likely because of our stereotypes of women as nurturing caregivers). The researchers used murderpedia.org to identify female serial killers and then followed up with research in newspapers, police reports, and other records. They were able to verify every female serial killer listed on murderpedia.org as having killed in the United States between 1821 and 2014.
They ended up with a sample of 64 female serial killers who killed in the United States and were almost entirely (98.4%) born in the US as well. Here’s what female serial killers (FSKs) look like in the United States:
Most were White (55, 88.7%) with six (9.7%) being Black and one (1.6%) Latina.
They were married (54.2%), divorced (15.3%), widowed (13.5%), in long-term committed relationships (8.5%) and single (8.5%).
Some were well-educated: about a third (34.6%) had college degrees, 19.2% had some college or post-high-school professional training, 15.4% were high school graduates, and 30.8% had dropped out of high school.
They held a wide variety of jobs including nursing, teaching, and prostitution. Many (39.2%) worked in health-related positions (such as nursing, nurse aides, or health administration). Others (21.6%) had other direct caregiving roles (babysitter, homemaker with children). The remainder (39.2%) were employed in a wide variety of jobs including “farmer, gang leader, custodian, prostitute, psychic, drug dealer, and waitress”.
On average, they were about 32 years old when they first began to kill, but the age range was from 16 to 65, so there is considerable variation. Similarly, they had an average “killing time span” of 7.25 years, but the range ran from all murders committed in a single year to murders committed over a 31-year period. The 64 FSKs in this sample averaged 6.1 victims, with a range of 3-31 victims.
Nearly 40% in the sample experienced some form of mental illness, while nearly one-third (31.5%) had been either physically or sexually abused (or both) by either parents or grandparents in childhood, and by husbands or long-term partners in adulthood. Even in the absence of diagnosed mental illness, the authors report “dysfunctional personality characteristics” such as lying, manipulation or insincerity in many FSKs. It’s hard to imagine being surprised that serial killers might be insincere.
Most commonly they killed for financial gain but they also killed for power, revenge, notoriety, and excitement. Women did not generally sexually assault their victims, nor did they tend to mutilate or torture like we see with male serial killers.
Their tendency was to kill both men and women (67.3%), with some killing male victims only (20%) and others killing female victims only (12.7%). They knew all or most of their victims and, in fact, were related to most of them (e.g., their children, spouses, fiancés, boyfriends, mothers, mothers-in-law, fathers, aunts, cousins, and nephews). In every case, they targeted at least one victim who had little chance of fighting back (e.g., a child, the elderly, or the infirm).
The upper class (socioeconomically) was rarely represented (4.3%), with most FSKs being middle class (55.3%) and somewhat fewer being lower class (40.4%).
Their most common method of killing was poisoning (they are four times more likely than men to drug their victims).
A summary table from the article itself shows the range of killing methods used by FSKs.
In short, women (like men) kill. But, say these researchers, women tend to kill for resources (e.g., profit, comfort, control) while men kill for sex (e.g., rape, sexual torture, mutilation).
Harrison, M., Murphy, E., Ho, L., Bowers, T., & Flaherty, C. (2015). Female serial killers in the United States: Means, motives, and makings. The Journal of Forensic Psychiatry & Psychology, 1-24. DOI: 10.1080/14789949.2015.1007516
We’ve seen the claims that people don’t find brain scans as alluring as they used to, but here is a study that says, “not so fast!”. It’s an oddly intriguing study involving not only the allure of brain imaging but also political affiliation and how it factors into what one chooses to believe.
Much attention over recent years has been given to “an attack on science”, with many public figures (including elected officials) insisting that evolution is a hoax, climate science isn’t real, and vaccines are somehow more harmful than helpful. [For the record, here at the Jury Room we are big-time fans of science. I want to believe that our readers knew that already.]
Researchers discuss perceptions of “soft science” and “hard science” and the general sense that “hard science” is viewed as more reliable, accurate and precise. They describe multiple experiments showing people tend to prefer “hard science” data to data offered by those in “soft science”. The question these researchers focused on was whether “hard science” data (in this case, a brain scan) would be preferred over “soft science” data (in this case, cognitive test results). They also wondered if this preference (for “hard science” or “soft science” data) would be mediated by political orientation.
In the study, 106 participants (83 women, 23 men; ranging in age from 18 to 47 years with an average age of 19.6 years; 77 identified as White, 17 as African-American, and “five or fewer” as Asian American, Latino/Latina, or other) completed a pretest online which included two questions about their political preference (both used by the American National Election Studies).
Generally speaking, do you think of yourself as a Democrat, Republican, Independent, or something else?
If you selected Democrat or Republican for the previous question, would you call yourself a strong Democrat or Republican or a not very strong Democrat or Republican?
Only those participants who identified as either Democrat or Republican were eligible to participate in the study, which they were told would involve reading about an ethics violation and then making judgments about the case.
In the study itself, participants read a one-paragraph case description about a politician elected to office in a geographically distant state who had recently been cited for three ethical violations. The paragraph informed them the ethics committee had questioned the politician’s memory and asked him to have an evaluation done on his memory to determine if memory issues would prevent him from carrying out his duties as an elected representative. Finally, the participants read that if the testing determined the politician was impaired, he would be forced to resign and the governor of the state would appoint a replacement to serve until the next election. The paragraph description concluded by saying the governor had announced that any replacement appointees would be members of the same political party as the governor.
There were (you knew this was coming) several variations in the information the participants read about the politician and his situation.
Half of the participants read that the politician tested was a Democrat and the governor of his state was a Republican. The other half read that the politician was a Republican and the governor of his state was a Democrat.
The researchers paid attention to the political identification of each participant. If a participant identified as Republican and read about a Republican politician, they were placed in a group for analysis labeled in-group. If, on the other hand, a Republican participant read about a Democratic politician, they were placed in a group labeled out-group (and likewise, in reverse, for Democratic participants). Further, participants who endorsed a strong political affiliation were classified in the strong political identification group, and those who endorsed a weak affiliation were classified in the weak political identification group.
After reading the initial description of the situation, all participants read a two-paragraph description of an expert evaluation of the politician. The expert mentioned in this description was a “Dr. Daniel Weinberger”. The participants received differing information about how Dr. Weinberger had evaluated the politician’s cognitive function.
Half the participants read that Dr. Weinberger reviewed the politician’s medical history and gave him verbal or paper and pencil tests (commonly used by neuropsychologists).
The other half of the participants read that Dr. Weinberger reviewed the politician’s medical history and conducted an MRI of the politician’s brain. (It is important here to note that no MRI images were shown. All the participants saw were words describing the process and then, the outcome.)
The second paragraph offered a description of the results of the evaluations in ways consistent with either verbal or paper and pencil tests or an MRI. For all participants, the second paragraph ended with identical statements saying that the expert concluded the “politician was suffering from beginning-stage Alzheimer’s disease, that symptoms will continue, and the symptoms will interfere with the politician’s ability to perform his duties”.
And here are the findings:
Biologically based information (i.e., the brain MRI) was viewed more favorably (69.8% said the evidence the politician had early stage Alzheimer’s was strong and convincing) than the behaviorally based (i.e., cognitive testing) information (only 39.5% said the evidence the politician had early stage Alzheimer’s was strong and convincing).
When asked to identify the one most important reason they felt the way they did about the evidence presented, those who saw the behavioral evidence said it was subjective and perhaps unreliable or irrelevant—more than 15% said the neuropsychological testing was unreliable or irrelevant. Not a single participant who saw the biologically based evidence said the MRI evidence might be unreliable—in fact, they saw it as objective, valid and reliable. (Anyone with any knowledge of the validating research and very detailed manuals accompanying psychological tests might find this, as the researchers say, “perplexing”. Of course, those who have that knowledge base would not qualify for inclusion in this study.)
Those participants who were in political out-group assignments (that is, Republican participants who read about a Democratic politician or Democratic participants who read about a Republican politician) were more likely to discount the behavioral science evidence than those in political in-group assignments.
In short, in this study, participants saw the MRI as more reliable and relevant than the cognitive testing, and those with strong political identities discounted the cognitive testing even more than those without the strong political sense of self.
Despite the reality that Alzheimer’s would always be diagnosed with cognitive testing, with brain scans used after testing was completed to rule out other explanations for the impairments identified by testing, these participants preferred the verbally described brain images of “hard science” to the low-tech paper-and-pencil tests of the neuropsychologist. It’s a finding that underscores the importance of expert testimony informing jurors of how a diagnosis is made, so they know whether testing was performed because of the “wow” factor of a colorful MRI or to offer a research-based assessment of brain/memory impairment.
In other words, don’t believe everything you read: jurors can still be seduced by what looks like “hard science”. Your task is to show them which scientific findings are truly backed up by years of scientific research and development.
Munro, G., & Munro, C. (2014). “Soft” versus “hard” psychological science: Biased evaluations of scientific evidence that threatens or supports a strongly held political identity. Basic and Applied Social Psychology, 36(6), 533-543. DOI: 10.1080/01973533.2014.960080
If you think neurolaw and neuroscience are everywhere, and don’t find it particularly challenging to talk about brain science, apparently you are living in a very rarefied environment. It’s hard to believe, but evidently most people do not think the exploding field of brain science is fascinating! Instead, when they think of brain science they think of things far removed from their daily lives, things that make them anxious. [Or bore them to tears.] For litigators this has crucial ramifications: any body of technical information worth presenting to a jury must be understood if it is to be useful.
UK scientists interviewed 48 London residents about “brain science”. They found that most of the interviewees believed that they would only find themselves interested in learning more about brain science if they developed a neurological illness. Maybe… too little too late?
The researchers identified four themes in the participants’ interviews: the brain is something in the science domain; there was significant angst that something could go wrong with the brain; there was a belief that we are all in control of our brains to some extent; and that our brains are what make us all different and unique. The individual quotes the researchers included, however, highlight the lack of awareness of brain science or research:
“Brain research I understand, an image of, I don’t know, a monkey or a dog with like the top of their head off and electrodes and stuff on their brain.” [Male participant]
“It does conjure up images of, you know, strange men in white coats.” [Female participant]
“You just, like I say, blind people with science, don’t you. And then it becomes a subject that you just don’t understand. With me, I just switch off. I’m not understanding what you’re talking about here, so I just switch off.” [Male participant]
“Where do these people come from, that actually understand these things?” [Female participant]
The researchers highlight the reality that most people do not see “brain science” as something relevant to or part of their lives. If an individual developed a mental illness or a neurological condition, however, they believe they would have more interest in learning. Without those catalysts, they have little interest in pushing themselves to understand more. The researchers report the concept of “brain science” seemed foreign or “baffling” to most of those interviewed.
From a litigation advocacy perspective, this study highlights the importance of teaching the science. Whether “the science” of a specific case is patent law, high-tech and abstract concepts, or actual “brain science”–jurors need to hear it and have a sense that they understand it enough to actually make judgments on the case. Keep in mind that they are going to judge it whether it is understood or not. The question is simply whether the judgment is going to be informed by bias, by knowledge, or by a coin flip and a longing to be done with jury duty. We know from 20 years of interviewing jurors that they strongly prefer having clear understanding. And that, dear litigator, is up to you.
We have worked on cases in which animation helped jurors make sense of complex computer programming, and on others where analogies of ordering a pizza with different toppings or a hamburger with or without special sauce were used to help jurors understand different technology applications in an especially complex patent infringement case. We’ve also worked on cases where there were allegations of neurological injuries but a very normal-looking Plaintiff, and jurors had to “see” the injuries somehow to help them understand what had been lost.
Never lose sight of how foreign the concepts truly are, and help jurors understand so they do not have to “shut off” as one of the interviewees in this study confessed to doing. Often, our mock jurors help to make the abstract and complex both concrete and simple, or at least familiar. Just because you have been buried in a case for years and live, eat and breathe the science, doesn’t mean jurors will have a clue about what you are presenting to them. Teach them in a way that helps them relate the abstract and esoteric to their everyday lives. It empowers them to make the right call. If you don’t know how to explain it to ‘real people’, gather a group of mock jurors and ask them what makes sense, where they get lost, and what analogies are most useful to them. If you invite them to the conversation in the right way, they’ll tell you.
O’Connor, C., & Joffe, H. (2014). Social representations of brain research: Exploring public (dis)engagement with contemporary neuroscience. Science Communication, 36(5), 617-645. DOI: 10.1177/1075547014549481
We are again honored by our inclusion in the ABA Blawg 100 list for 2014. If you value this blog, please take a moment to vote for us here in the Litigation Category. Voting closes on December 19, 2014. Doug and Rita
A new issue of The Jury Expert has been published, and as usual, it’s one worth reading. As Editor since May 2008, I get to see the articles as they come in and am always surprised at (and appreciative of) the creative and stimulating content we receive. The Jury Expert, like this blog, is all about litigation advocacy and understanding how new research can help inform your strategies in the courtroom. Here’s what you can see in the lineup for the November 2014 issue.
Wendy Heath and Bruce Grannemann ponder how video image size in the courtroom is related to juror decision-making about your case. They discuss how image size interacts with image strength, defendant emotions, and the defendant/victim relationship. Trial consultants Jason Barnes and Brian Patterson team up for one response to this article and Ian McWilliams pens another. This is a terrific article to help you reconsider the role of image size in that upcoming trial.
Sarah Malik and Jessica Salerno have some original research on bias against gays in the courtroom. This is simple and powerful research that illustrates just how moral outrage drives our judgments against LGBT individuals (especially when they are juveniles). Stan Brodsky and Christopher Coffey team up for one response and Alexis Forbes pens a second. While these findings make intuitive sense, they may also highlight something you’ve not previously considered.
Lynne Williams is a trial consultant who lives in the cold and snowy state of Maine. She is also skilled in picking juries for political trials and a gifted writer as she describes the important differences between picking juries for civil disobedience cases and antiwar protestor cases. This article not only explains what Ms. Williams does, but why and how she does what she does. It’s like lifting up the top of her head and peering inside her brain.
Mary Wood, Jacklyn Nagle and Pamela Bucy Pierson bring us this qualitative examination of self-care in lawyers. They talk about workplace stress and depression and substance abuse. Been there? Are there? Some kinds of self-care may work better than others but–what’s important is that you actually do some self-care! Andy Sheldon and Alison Bennett share their reactions to this article.
Why, you may wonder, would Plain Text EVER be a Favorite Thing? Because it is fabulous. Or perhaps because, “Plain text is the cockroach of file types: it will outlive us all.”
Adam Shniderman knows neuroscience evidence can be incredibly alluring. This new study shows us that unfortunately (or perhaps fortunately) it is not universally alluring. Here’s a shocker: the impact of the neuroscience evidence is related to the individual listener’s prior attitudes, values and beliefs about the topic. Robert Galatzer-Levy and Ekaterina Pivovarova respond with their thoughts on the issues raised.
Law and Neuroscience by Owen Jones, Jeffrey Schall, and Francis Shen has just been published and is as long as any Harry Potter tale at more than 800 pages. Rita Handrich takes a look at this new textbook and reference manual, which covers more than you ever knew existed in the wide-ranging field of neurolaw (which is a whole lot more than the “my brain made me do it” defense).
Roy Bullis is back to talk to us about the wide language gulf between attorneys and their social science expert witnesses. Just because you are talking, doesn’t mean you are actually communicating. How do you talk so your expert knows what you mean?
Demographic Roulette: What was once a bad idea has gotten worse. Authored by Doug Keene and Rita Handrich with a response from Paul Begala, this article takes a look at how the country has changed over the past two decades and why our old definitions of Democrat or Republican and conservative or liberal are simply no longer useful. What does that mean for voir dire? What should it mean for voir dire? Two very good questions, those.
If it feels bad to me, it’s wrong for you: The role of emotions in evaluating harmful acts. Authored by Ivar Hannikainen, Ryan Miller and Fiery Cushman with responses from Ken Broda-Bahm and Alison Bennett, this article has a lesson for us all. It isn’t what that terrible, awful defendant did that makes me want to punish; it’s how I think I would feel if I did that sort of terrible, horrible, awful thing. That’s what makes me want to punish you. It’s an interesting perspective when we consider what makes jurors determine lesser or greater punishment.
Neuroimagery and the Jury. Authored by Jillian M. Ware, Jessica L. Jones, and Nick Schweitzer with responses from Ekaterina Pivovarova and Stanley L. Brodsky, Adam Shniderman, and Ron Bullis. Remember how fearful everyone was about the CSI Effect when the research on the ‘pretty pictures’ of neuroimagery came out? In the past few years, several pieces of research have sought to replicate and extend the early findings. These studies, however, failed to find support for the idea that neuroimages unduly influence jurors. This overview catches us up on the literature with provocative ideas as to where neurolaw is now.
Predicting Jurors’ Verdict Preference from Behavioral Mimicry. Authored by Matthew Groebe, Garold Stasser, and Kevin-Khristián Cosgriff-Hernandez, this paper gives insight into how jurors may be leaning in support of one side or the other at various points during the trial. This is a project completed using data from actual mock trials (and not the ubiquitous undergraduate).
Our Favorite Thing. We often have a Favorite Thing in The Jury Expert. A Favorite Thing is something low-cost or free that is just fabulous. This issue, Brian Patterson shares the idea of mind mapping and several ways (both low-tech and high-tech) to make it happen.
The Ubiquitous Practice of “Prehabilitation” Leads Prospective Jurors to Conceal Their Biases. Authored by Mykol C. Hamilton, Emily Lindon, Madeline Pitt, and Emily K. Robbins, with responses from Charli Morris and Diane Wiley, this article looks at how to not “prehabilitate” your jurors and offers ideas about alternate ways of asking the question rather than the tired, old “can you be fair and unbiased?”.
Novel Defenses in the Courtroom. Authored by Shelby Forsythe and Monica K. Miller, with a response from Richard Gabriel. This article examines the reactions of research participants to a number of novel defenses (Amnesia, Post-Traumatic Stress Disorder (PTSD), Battered Women Syndrome (BWS), Multiple Personality Disorder (MPD), Post-Partum Depression (PPD), and Gay Panic Defense) and makes recommendations on how (as well as whether or not) to use these defenses.
On The Application of Game Theory in Jury Selection. Authored by David M. Caditz with responses from Roy Futterman and Edward Schwartz. Suppose there was a more predictable, accurate and efficient way of exercising your peremptory strikes? Like using a computer model based on game theory? In this article, a physicist presents his thoughts on making those final decisions more logical and rational and based on the moves opposing counsel is likely to make.