Archive for the ‘Simple Jury Persuasion’ Category
It’s hard to know why research that is almost a decade old is seen as fodder for a recent Op-Ed in the New York Times, but so it goes. Jennifer Mnookin, a law professor at UCLA, certainly has an impressive résumé, and it is likely most readers of the NYT are not familiar with camera perspective bias. We blogged about this research back in 2010 and mentioned it in our 2012 article on false confessions.
In short, the camera perspective bias research says that when confessions are videotaped, they “should be videotaped in their entirety and with a camera angle that focuses equally on the suspect and interrogator”. Apparently, if the videotape is focused only on the defendant, the observer is less likely to see the police interview as coercive–even when the interrogator makes an explicit threat. When the video is focused on both the interrogator and the defendant, the observer’s bias disappears.
Mnookin’s essay in the NYT describes the camera perspective bias and states that while videotaping interrogations is generally a positive thing, it doesn’t prevent the videotapes from being misleading, to jurors or even legal experts. This shouldn’t surprise us, says Mnookin, since the research has found that even “professionals like judges and police interrogators are not immune” to the camera perspective bias. Mnookin discusses the complexity of disentangling the false confession from the true confession and says videos may make that already difficult task nearly impossible.
“And yet by making confessions so vivid to juries, recording could paper over such complications, and sometimes even make the problem worse. The emotional impact of a suspect declaring his guilt out loud, on video, is powerful and hard to dislodge, even if the defense attorney points out reasons to doubt its accuracy.”
Mnookin’s op-ed piece echoes what many of the experts in the false confessions area have said for years: videotaping interrogations will not fix the problem of false confessions, it is simply a step on the way to making them less likely to occur. Multiple reader comments on Mnookin’s op-ed are remarkably cogent and coherent, in contrast to most comments on major news sites these days. Many of the commenters identify themselves as attorneys and offer thoughts on the advantages of videotaped interrogations, eyewitness fallibility, and the ethics of courtroom personnel. If a reader actually wants to be educated on the issues surrounding videotaped interrogations, it could happen here.
Daniel Lassiter (the researcher responsible for much of the research on camera perspective bias) came to the same conclusions back in 2010 that Mnookin shares in her current-day NYT op-ed.
“The video recording of police interviews and interrogations will bring an unprecedented degree of openness to the process that all interested parties can agree is essential to a fair and humane criminal justice system. That being said, it is far from certain whether this reform will actually reduce the number of wrongful convictions attributable to police-induced false confessions.”
Lassiter’s hope, back in 2010, was that as knowledge continued to grow in the area of false confessions, jurors could be educated to see the videotaped interrogation as [just] one piece of data upon which to base decisions. We may not yet have reached Lassiter’s 2010 wish for the courtroom, but hopefully we are moving in that direction.
On a related note, we are fans of the Sundance Channel’s fictional series Rectify, which follows the post-release (based on new DNA evidence) life of a man who spent 19 years on death row for the rape and murder of his teenage girlfriend. This is not a feel-good television show. It is dark, disturbing, confusing and poignant all at once. There are no easy answers. Just very hard questions. Did he or didn’t he? We are almost through Season 2 and do not yet know.
Lassiter, G. D. (2010). Videotaped interrogations and confessions: What’s obvious in hindsight may not be in foresight. Law and Human Behavior, 34(1), 41-42. PMID: 20087637
We’ve written a number of times about bias against Muslims. But here’s a nice article with an easy-to-incorporate finding on how to reduce bias against your female client who wears a Muslim head-covering. (In case you have forgotten, we’ve already written about head-coverings for the Muslim man.) The graphic illustrating this post shows the variety of head-coverings Muslim women might wear, and the initial findings (as to which head-covering style results in the most bias) will probably not surprise you.
Researchers did four studies to see how people reacted to Muslim women wearing veils. They consistently found these reactions:
Responses were more negative when the Muslim woman wore a veil of any kind compared to no veil at all.
When the various veils were compared, the niqab or burqa (which leave only the eyes exposed, or cover even the eyes) were seen most negatively.
Not surprising, as we said. In Western society, we like to see who we are talking to, and place a high priority on ‘looking people in the eye’. And our society holds (and expresses freely) negative beliefs about Muslim head-coverings for women. Those beliefs may range from a head-covering being a symbol of extreme or even terroristic beliefs, to a belief that a woman is being subjugated merely because she wears this garb. Yet there is a litany of reasons women may wear head-coverings. There are also reasons women do not wear head-coverings. There is tremendous diversity within the Muslim community related to this issue, especially among Muslims in the US.
That very diversity is at the heart of what these (intuitive) researchers did next. Instead of just showing photos of women in various styles of head-coverings, for the final experiment, the researchers gave research participants “an article that focused on the reasons that Muslim women often give for choosing a full face veil”. And guess what happened?
Participants had more “positive imagined contact experience and gave more positive ratings of how they felt they would communicate with the Muslim woman wearing such a veil”.
In other words, when allowed to “fill in” the reasons the Muslim woman wore a veil, participants went to negative stereotypes and showed negative perceptions toward the woman. On the other hand, when given information about the variety of reasons Muslim women might have to choose a head-covering, negative assumptions/perceptions decreased. And that was when considering interactions with a Muslim woman in a full head-covering. The researchers say that for the least bias, if a religious Muslim woman wants to wear a head-covering, the hijab is likely the best choice. That may, however, not be an option given her religious beliefs. In either case, this research suggests that giving jurors information about your client’s choice to wear a Muslim head-covering (of any style) will reduce negative assumptions.
Yes, once again it appears that information is a great antidote to bias.
The very process of sharing with jurors the reasons for wearing a head-covering gives them the opportunity for emotional connection with your client. Her sharing the reasons for the head-covering allows them to ‘see’ her individuality and religious conviction. We’d call that both making your client more similar to the jurors (through the use of universal values) and giving jurors an opportunity to see “beneath the head-covering” to the woman herself.
Everett, J., Schellhaas, F., Earp, B., Ando, V., Memarzia, J., Parise, C., Fell, B., & Hewstone, M. (2014). Covered in stigma? The impact of differing levels of Islamic head-covering on explicit and implicit biases toward Muslim women. Journal of Applied Social Psychology. DOI: 10.1111/jasp.12278
This is something we’ve told our clients about for a number of years because it simply made sense. Now we have a current research citation for it rather than using research that is more than a decade old! We see this “new” strategy as a variation on the “you may want to disagree” strategy–or, perhaps, as an update.
What we especially like about this one is that it tells us how to make something totally implausible seem more acceptable to the listener. Say, something implausible like….Bigfoot! Actually, it goes beyond that. This research shows us how to increase the likelihood you can convince others that supernatural events have occurred. It’s all, as you may have surmised, about the narrative frame. You do not, says the author, want to begin your narrative with an admission of long-standing beliefs in the frankly bizarre. That would totally undermine your credibility. Instead, begin by presenting yourself as a skeptic of such events. The author explains it in this, uniquely academic, fashion:
“The presentation of the evidence that converted the narrator within the account itself offers the audience an invitation to go on the same journey from scepticism to belief along with the narrator.”
We don’t really say it like that (frankly, there should be a rule against anyone saying it like that), but we do essentially recommend that our clients embed their initial skepticism in questions for expert witnesses who explain how something works or in direct examination questions for the witness who is explaining why something was done the way it was done. The off-hand, seemingly casual, inclusion of initial skepticism bypasses juror resistance to persuasion and takes them on our client’s journey of discovery. Just like the author said above.
Here is what the researcher did. She had research participants in two different experiments (a total of 215 participants) read a description of either a “precognitive dream” in which the narrator predicted and ultimately prevented a car accident, or of a telepathic experience in which the narrator thought of “an old friend, Sally” and then half an hour later, learned Sally had been hospitalized. The research participants were placed into three different conditions as they read the descriptions:
The narrator claimed to be skeptical of the paranormal prior to describing the event.
The narrator said s/he really had no interest at all in the paranormal prior to describing the event.
Or, the narrator admitted to being a fervent prior believer.
After reading the descriptions of the events from the skeptical narrator, the disinterested narrator, or the avid believer narrator, the research participants were asked whether they saw the event described as being truly paranormal, just a coincidence, or the product of a gullible narrator.
In both experiments, having a skeptical narrator increased the likelihood participants would see the event as possibly being paranormal. The researcher clarifies that the disinterested narrator did not result in an increase in those seeing the events as paranormal.
“The narrator must establish a prior position contrary to the one they are now assumed to hold in order to influence the audience.”
However, when participants were warned about the “avowal of prior skepticism” technique in Experiment 2, the pattern was reversed–that is, a skeptical narrator was less likely to result in participants seeing an event as paranormal.
When the narrator held a position of prior belief, s/he was seen as more gullible and easily convinced–but only when the narrator was female, not male! The researcher thinks this is likely due to men being seen as relatively rational and skeptical when it comes to the paranormal and telepathy, while women are not seen that way. We have at least 33 thoughts on this finding.
The author concludes the paper with this straightforward paragraph:
“In conclusion, the present research supports the proposition that an avowal of prior scepticism serves to increase the plausibility of a paranormal causal explanation for an anomalous event as long as the audience are not pre-warned. An avowal of prior belief serves to increase the perceived gullibility of a female, but not a male, narrator, suggesting a bias towards more readily perceiving a woman than a man as gullible.”
From a litigation advocacy perspective, when you have a pretty unbelievable story to tell, embedding skepticism into your narrative can be a powerfully persuasive tool. And if your opponent employs this strategy, you may want to educate jurors on the “avowal of prior skepticism” strategy to “undo” their efforts at persuasion.
Stone, A. (2013). An avowal of prior scepticism enhances the credibility of an account of a paranormal event. Journal of Language and Social Psychology, 33(3), 260-281. DOI: 10.1177/0261927X13512115
We’ve written before about visual identity (in the context of covering inflammatory tattoos with makeup for trial) and want to point you to an article in the new issue of The Jury Expert. Bronwen Lichtenstein and Stanley Brodsky have an article titled Moving From Hapless to Hapful with the Problem Defendant.
The article describes the way in which one’s appearance can result in assumptions and judgments being made that do not facilitate justice for your client. The authors describe what they call the “hapless defendant” and describe the possible (negative) reactions counsel may have to their client–and by extension, the reaction jurors may have to the defendant based on appearance and behavior.
But then, rather than merely saying counsel should improve the defendant’s appearance, behavior, and testimony, the authors actually tell you how to do that in a way that is inexpensive and manageable with an initial investment of time (not money) from you.
“We start with the undeniable fact that many aspects of the U.S. court system have enormous rolling momentum that keeps such hapless defendants uninformed, unprepared, and, for the most part, unsuccessful in their own defense. These defendants are sometimes seen as doomed when defended by public defenders with oppressively heavy caseloads or by court appointed attorneys who have little time to work with them. This article is about the need for quick and effective transformations in representation and interactions so that such defendants have a modestly improved chance of success at their own trials.”
This article is not filled with pie-in-the-sky, idealistic notions about permanent changes in how a defendant presents to the world at large. Instead, it recognizes the transient nature of the changes proposed with the idea that by offering these supports to your hapless defendant, you increase the chance of justice being done in the courtroom for this specific trial. Low-cost. Volunteers. A structured process. Okay–so it is kind of idealistic. It is also a practical and very do-able example of how to put the best of our justice system into action on behalf of those who cannot mobilize to do that for themselves.
“These defendants are people who engage in a process of unknowing self-sabotage that is seeded in social and demographic qualities. We have coined the term hapful to counter the notion of the unlucky, socially stigmatized defendant who comes to court. We propose mobilizing transient changes in behavior, improved attractiveness, limited goals, and assistance from helpful others. By becoming hapful for a little while, accused offenders who are often seen as lowlifes or hopeless victims of social injustice might be now presented and briefly re-conceptualized as persons worthy of thoughtful attention and respectful dispositions.”
Lichtenstein, B., & Brodsky, S. L. (2014). Moving from hapless to hapful with the problem defendant. The Jury Expert, 26(2).
The study of bias fascinates us. We can easily spot prejudice in others but are oblivious to our own biases. We often ask a question at the end of a research project about community values and whether our (uniformly unbiased and considerate) mock jurors think others in the area would be biased against a party involved in the lawsuit about which they have just heard. Perhaps the bias is off-topic and irrelevant (religion, country of origin, ability to speak English, a thick accent, appearing to be a gang member, sexual orientation, marital fidelity, obesity, etc.). Typically, the answer is, “Well, it doesn’t make a difference to me but it sure would to a lot of other people who live around here!” This response is shared in all sincerity and good faith by individuals who truly do not see themselves as biased.
The problem, as pointed out by today’s researchers, is that none of us see ourselves as having blind spots. We’re better than that–especially when forewarned that biased decision-making could lie ahead. As sensible and logical and rational as that perspective may seem, it simply doesn’t appear to be true. We’ve written about Emily Pronin’s work on the bias blind spot a couple of times before but she has a new article out that illustrates beautifully what we see often in our pretrial research.
Researchers did a series of experiments in which they had participants “rate the artistic merit” of a series of 80 different paintings. The first two experiments used undergraduates from Princeton University (63 women and 38 men in the first experiment; 47 women and 27 men in the second).
In Experiment 1, half of the participants were told to press a button so the name of the artist would flash onto the computer screen, while the others were not and thus evaluated the “artistic merit” of each painting without knowing who had painted it. For those participants who saw the name of the painter, half of the paintings were identified as being created by a famous artist and half were attributed to random names (i.e., “an unknown artist”) culled from a print telephone directory.
Not surprisingly, the participants who saw the artist names rated the merit of the paintings attributed to famous artists as higher than the unknown artists’ work. Those who did not see the artist name rated the two groups of paintings the same in terms of artistic merit. Those who saw the artist name acknowledged that this knowledge was biasing but believed their final answers were as objective as if they had not seen the artist name. (Alas, they were incorrect.)
In Experiment 2, instructions were modified so that participants could choose either to see or not to see the name of the artist. Half the participants were told to choose to see the artist name (the explicitly biased condition) and half were told to choose not to see it (the explicitly objective condition). They were asked to rate how biased they expected their decision-making strategy to be, given whether or not they would see the artist name.
Once again, the participants who saw the artist names rated the merit of the paintings attributed to famous artists as higher than the unknown artist’s work. Those who did not see the artist name rated the two groups of paintings the same in terms of artistic merit. Those who were in the explicitly biased condition said (in advance) their evaluative strategy would be biased, but (naturally) they saw their own judgments of the paintings (after the fact) as objective.
In other words, even though warned in advance that their strategy would be biasing, and even though they said, up front, their strategy would be biasing–ultimately these participants also felt they were able to rise above that bias. (Alas, they were also wrong.)
So, for Experiment 3, the researchers left the classroom and recruited 85 adults online (52 women and 33 men with an average age of 35.7 years). These participants rated the same 80 paintings with three modified instructions: they rated themselves and their assigned evaluative strategy in terms of how objective their process would be; they were given very detailed information about how bias could easily make inroads into their decision-making on the artistic merits of the paintings; and, they were reminded to be honest in their ratings.
You know what happened. Participants in the explicitly biased condition thought their strategy was more biased but saw their judgments as even more objective than did participants in the explicitly objective condition. Maybe they thought this special information empowered them to rise above the bias they had expected to display! Interestingly enough, at the pre-task rating, the participants in the explicitly biased condition thought they would be objective, and by the end of the task, their estimation of their own objectivity had gone up significantly.
The researchers discuss these findings in light of the courtroom (using the example of inadmissible evidence which jurors are instructed to ignore) and the workplace (using the example of HR personnel who see photographs of applicants prior to evaluating the merits of their applications). If we believe we are so objective that we can use biased strategies to make decisions, say the researchers–we are simply fooling ourselves.
They describe our reasoning in this way: “If I am smart enough to know this bias exists and honest enough to acknowledge it, then surely I won’t fall prey to it!”
Alas, indeed we would. The authors describe how female under-representation in symphony orchestras has been reduced by having applicants audition behind a screen. Such efforts, they say, clearly reduce bias. So why are we so resistant to using them? The present research provides one answer:
“Such efforts are likely to seem needless when we believe that we can be objective even in the face of obviously biasing procedures.”
The authors say the idea of “debiasing” doesn’t really work. Maybe it’s like ‘separate but equal’ or pre-Title IX sports budgets. You just cannot unring that bell. We both agree and disagree.
Bias is everywhere and we need to work hard to find ways to stop bias from occurring in the first place. There we agree. For years, we have recommended the use of strategies effective in countering bias by stopping it up front.
But we also have seen a debiasing strategy that is powerful in inhibiting bias. It doesn’t end it, and it isn’t foolproof. But click the link and learn how to cope with a flawed world.
You may not think this is information you need. Alas, according to this research, you really do!
Hansen, K., Gerbasi, M., Todorov, A., Kruse, E., & Pronin, E. (2014). People claim objectivity after knowingly using biased strategies. Personality & Social Psychology Bulletin. PMID: 24562289