The study of bias fascinates us. We can easily spot prejudice in others but are oblivious to our own biases. At the end of a research project, we often ask a question about community values: do our (uniformly unbiased and considerate) mock jurors think others in the area would be biased against a party involved in the lawsuit they have just heard about? The bias in question is usually off-topic and legally irrelevant (religion, country of origin, ability to speak English, a thick accent, appearing to be a gang member, sexual orientation, marital fidelity, obesity, etc.). Typically, the answer is, “Well, it doesn’t make a difference to me but it sure would to a lot of other people who live around here!” This response is shared in all sincerity and good faith by individuals who truly do not see themselves as biased.

The problem, as pointed out by today’s researchers, is that none of us see ourselves as having blind spots. We’re better than that–especially when forewarned that biased decision-making could lie ahead. As sensible and logical and rational as that perspective may seem, it simply doesn’t appear to be true. We’ve written about Emily Pronin’s work on the bias blind spot a couple of times before but she has a new article out that illustrates beautifully what we see often in our pretrial research.

The researchers ran three experiments in which they had participants “rate the artistic merit” of 80 different paintings. The first two experiments used undergraduates from Princeton University (63 women and 38 men in the first experiment; 47 women and 27 men in the second).

In Experiment 1, half of the participants were told to press a button that would flash the artist’s name onto the computer screen, while the others were given no such instruction and thus evaluated the “artistic merit” of each painting without knowing who had painted it. For the participants who saw artist names, half of the paintings were attributed to famous artists and half to random names (i.e., “unknown artists”) culled from a print telephone directory.
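For readers who like to see a design spelled out, here is a minimal sketch of that manipulation as we read it (in Python; the painting IDs and artist names are placeholders we invented, not the study’s materials):

```python
import random

# Minimal sketch of the Experiment 1 attribution manipulation.
# Painting IDs and names below are invented placeholders.
paintings = [f"painting_{i:02d}" for i in range(80)]
famous_artists = ["Famous Artist A", "Famous Artist B"]   # stand-ins
unknown_artists = ["J. Smith", "M. Rivera"]               # phone-book-style names

random.shuffle(paintings)
half = len(paintings) // 2
# Half the paintings get famous attributions, half get unknown ones.
attribution = {p: random.choice(famous_artists) for p in paintings[:half]}
attribution.update({p: random.choice(unknown_artists) for p in paintings[half:]})

def present_trial(painting: str, sees_name: bool) -> str:
    """In the name condition, pressing the button reveals the artist;
    in the control condition the painting is rated unattributed."""
    return f"{painting} by {attribution[painting]}" if sees_name else painting

print(present_trial(paintings[0], sees_name=True))
print(present_trial(paintings[0], sees_name=False))
```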

Not surprisingly, the participants who saw the artist names rated the merit of the paintings attributed to famous artists higher than the unknown artists’ work. Those who did not see the artist names rated the two groups of paintings the same in terms of artistic merit. Those who saw the artist names acknowledged that knowing the painter was biasing, but believed their final answers were as objective as if they had never seen the names. (Alas, they were incorrect.)

In Experiment 2, the instructions were modified so that participants chose whether or not to see the name of the artist. Half of the participants were told to choose to see the artist’s name (the explicitly biased condition) and half were told to choose not to see it (the explicitly objective condition). They were then asked to rate how biased they expected their decision-making strategy to be, given that they would (or would not) see the artist’s name.

Once again, the participants who saw the artist names rated the merit of the paintings attributed to famous artists higher than the unknown artists’ work, while those who did not see the artist names rated the two groups of paintings the same. Those in the explicitly biased condition said (in advance) that their evaluative strategy would be biased, but (naturally) they saw their own judgments of the paintings (after the fact) as objective.

In other words, even though they were warned in advance that their strategy would be biasing, and even though they themselves said, up front, that it would be, these participants ultimately felt they were able to rise above the bias. (Alas, they were also wrong.)

So, for Experiment 3, the researchers left the classroom and recruited 85 adults online (52 women and 33 men, average age 35.7 years). These participants rated the same 80 paintings under three modified instructions: they rated themselves and their assigned evaluative strategy on how objective the process would be; they were given very detailed information about how easily bias could make inroads into their decisions about the paintings’ artistic merits; and they were reminded to be honest in their ratings.

You know what happened. Participants in the explicitly biased condition thought their strategy was more biased, but rated their own judgments as even more objective than did participants in the explicitly objective condition. Maybe they thought this special information empowered them to rise above the bias they had expected to display! Interestingly, participants in the explicitly biased condition thought at the pre-task rating that they would be objective, and by the end of the task their estimation of their own objectivity had risen significantly.

The researchers discuss these findings in light of the courtroom (using the example of inadmissible evidence that jurors are instructed to ignore) and the workplace (using the example of HR personnel who see photographs of applicants before evaluating the merits of their applications). If we believe we are so objective that we can use biased strategies and still make unbiased decisions, say the researchers, we are simply fooling ourselves.

They describe our reasoning in this way: “If I am smart enough to know this bias exists and honest enough to acknowledge it, then surely I won’t fall prey to it!”

Alas, indeed we would. The authors describe how the under-representation of women in symphony orchestras has been reduced by having applicants audition behind a screen. Such efforts, they say, clearly reduce bias. So why are we so resistant to using them? The present research provides one answer:

“Such efforts are likely to seem needless when we believe that we can be objective even in the face of obviously biasing procedures.”

The authors say the idea of “debiasing” doesn’t really work. Maybe it’s like ‘separate but equal’ or pre-Title IX sports budgets. You just cannot unring that bell. We both agree and disagree.

Bias is everywhere and we need to work hard to find ways to stop bias from occurring in the first place. There we agree. For years, we have recommended the use of strategies effective in countering bias by stopping it up front.

But we have also seen a debiasing strategy that is powerful in inhibiting bias. It doesn’t end bias, and it isn’t foolproof. But click the link and learn how to cope with a flawed world.

You may not think this is information you need. Alas, according to this research, you really do!

Hansen, K., Gerbasi, M., Todorov, A., Kruse, E., & Pronin, E. (2014). People claim objectivity after knowingly using biased strategies. Personality and Social Psychology Bulletin. PMID: 24562289


You may recall the story posted on CNN in late 2012 about how women vote differently based on hormonal fluctuations. Unfortunately, because of how our brains work (and our attraction to outrageous stories, true or not), you may not recall that CNN removed the story within 7 hours due to internet backlash over an article based on a (then) unpublished study. One of the more amusing responses to the post suggested CNN investigate how Viagra influences male votes. Instead, CNN just took down the article.

Newly published research disputes the study CNN relied on. (We should note the original study was eventually published.) The current researchers set out to see whether the 2013 results could be replicated, so they kept their design as close to the original study as possible (at least according to them). Spoiler alert: the new research discredits the basis for the CNN report.

The researchers recruited 1,206 women for an online study. The participants reported they were pre-menopausal, not pregnant, not using hormonal contraception, and having regular monthly menstrual cycles (from 25 to 35 days in duration). The participants were classified as either “paired” (N = 730) or “single” (N = 476), and their specific date of ovulation was identified (those in days 4-11 of a 25-day cycle, for example, were classified as fertile, those in days 14-22 as nonfertile, and those in any other day of the cycle [days 1-3, 12-13, or 23-25] were excluded from the primary analyses). Before you question any of these variables or how they were calculated, the researchers were simply faithfully following the criteria in the 2013 study.
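If it helps to see that classification rule written out, here is a minimal sketch for the 25-day example above (the post does not spell out how the windows shift for 26-35 day cycles, so this covers only that one case):

```python
def classify_cycle_day(day: int) -> str:
    """Classify a day of a 25-day cycle using the windows described
    in the post. How the windows shift for longer cycles is not
    spelled out here, so this handles only the 25-day example."""
    if 4 <= day <= 11:
        return "fertile"
    if 14 <= day <= 22:
        return "nonfertile"
    return "excluded"  # days 1-3, 12-13, and 23-25

assert classify_cycle_day(7) == "fertile"
assert classify_cycle_day(18) == "nonfertile"
assert classify_cycle_day(2) == "excluded"
```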

The 1,206 participants (recruited prior to the 2012 election) were asked to “imagine walking into the voting booth today” and report whether they would vote for Romney (the Republican) or Obama (the Democrat). Here is what the researchers found:

There was no relationship between reported voting intention and fertility or relationship status, or as the authors explain: “There was no association between attitudes and fertility.”

The authors go on to discuss Type 1 errors and failures to replicate other published studies on the relationship of menstrual cycles to preferences and attraction. Hot on the heels of their study comes a response from the authors of the 2013 study who, not surprisingly, feel grossly misunderstood. And then, bless his heart, along comes the Neuroskeptic to point out the errors of both their ways!

What we want to talk about is different from what they all want to talk about, and that is:

a) the tendency of most people to recall the headline about women’s hormones and voting behavior; but

b) not recall that the study was pulled from the CNN website within hours; or

c) ever know that a follow-up study failed to support the original findings.

The lesson learned is the impossibility of unringing the bell. It’s a cautionary tale for trial lawyers. Motions in limine are often key to keeping the story clean and focused. And whether a case is below the media radar or on the front page, the story that is in the mind of a juror doesn’t necessarily square with what you think the evidence has established.

Just because something is heavily publicized does not mean it is true (or not true). While everyone agrees with that, it doesn’t mean they are immune to the effect of repetition or of having heard it from a prominent source. The goal is either or both: 1) discrediting the message, or 2) discrediting the messenger.

Just because someone pontificates loudly and insistently does not mean what they say is true (or not true). One of Ronald Reagan’s best debate lines was to summarily dismiss critics by saying “There you go again…”, which was extremely effective in shifting the focus from the criticism itself to a disdainful shadow cast over his critics.

None of us like to be fooled. Use that desire to know the truth to get jurors to listen to your truth even though it may be quieter and less strident than the other voices fighting for their attention. Caution them (as Reagan did very simply) to beware of idle rumors and loose talk–and to focus instead on character and principles.

Harris, C., & Mickes, L. (2014). Women can keep the vote: No evidence that hormonal changes during the menstrual cycle impact political and religious beliefs. Psychological Science. DOI: 10.1177/0956797613520236


Today’s post focuses on ideas that will be familiar to many of you, though the terms themselves will probably seem foreign. The research is about the role of emotion in our decisions about moral issues. Essentially, it looks at emotional pathways to moral condemnation. What motivates our reaction to tragic injury? Is it our empathy for the victim who suffered the injury, or our disgust at the method through which the victim was harmed?

Outcome aversion: When someone is hurt or killed through the actions of another, empathy for the victim (due to their injuries) is believed to result in a desire to punish (or hold responsible) the perpetrator.

Action aversion: More recent research focuses our attention on the actual act that harmed the victim. For example, if the victim was stabbed, some researchers believe the emotional response would be triggered by the act of stabbing itself, rather than by the harm to the victim.

The researchers illustrate these two concepts with a story about three sailors stranded in a lifeboat in 1884 with a severely ill cabin boy.

“Having no food, water, or hope of immediate rescue, their best chance at survival was to kill the fourth member of their crew, a severely ill cabin boy, and eat him. The idea seemed unthinkable at first, but the poor conditions of their situation quickly made the threat of death too serious to ignore. Early one morning, while the cabin boy lay unconscious, the captain pulled a penknife from his pocket and sliced through the boy’s neck.”

Most research would use our emotional reaction to the poor (dead) cabin boy (i.e., outcome aversion) to explain our moral revulsion at the captain’s action. More recent thought has focused on our reaction to the act of cutting the boy’s throat itself (rather than on the boy’s death). This is a morally complex story, though, similar to the Donner Party caught in a snowstorm, the soccer team whose plane crashed in the Andes mountains, et cetera. It offers a listener the opportunity to sympathize with the boy’s impending death, the hopelessness of his survival, the desperation of the other people in the boat, and the fiduciary duty of a captain to his crew.

The researchers conducted five experiments to test their ideas about action aversion (i.e., condemnation driven by an aversive response to the action itself). In all five experiments, they found “consistent and strong support for the importance of action aversion in moral dilemmas”. Further, the researchers found the aversion to be related to an “evaluative simulation” in which we imagine how we would feel emotionally if we had, ourselves, cut the cabin boy’s throat. They call this “first-party aversion”, and it can directly influence third-party moral condemnation.

That is, the stronger your aversion to thinking of yourself cutting the cabin boy’s throat, the stronger your condemnation of the third party who actually did cut the cabin boy’s throat. Simply put, “If I wouldn’t do it, they shouldn’t do it, either.”

From a litigation advocacy standpoint, this is an intriguing strategy to consider. Our mock jurors (especially those under 40) often report they dislike lawyers who attempt to manipulate them emotionally by invoking sympathy for the victim.

Based on this research, you can intensify the desire to morally condemn the Defendant by focusing juror attention on what it would be like for them, individually, to have wielded the proverbial knife cutting the cabin boy’s throat.

They will react emotionally and morally condemn, but not because you have manipulated them by calling (overtly) for sympathy for the victim.

Miller, R. M., Hannikainen, I. A., & Cushman, F. A. (2014). Bad actions or bad outcomes? Differentiating affective contributions to the moral condemnation of harm. Emotion. PMID: 24512250


People will actually see you more positively when you raise no money for charity at all than when you raise $1,000,000 but skim $100,000 for yourself. Even if you said up front that you were going to keep 10% and the charity really did get the $900,000! When you benefit (in any way) from your charitable activities, your altruistic acts are likely to be seen as somehow tainted by your self-interest.

There is a really nice write-up of this article at Time magazine, so we won’t focus on what the researchers did, but rather on why they thought tainted altruism worth investigating. It’s all about access to counterfactual information!

Counterfactual thinking is the label used to describe what happens when we think about ‘what if’ or ‘if only’ alternatives to a regrettable situation. When jurors employ counterfactual thinking in response to litigation, they often think things like:

“If only she hadn’t driven a different way to work that day…”

“What if he had sought out a third opinion?”

“If only they hadn’t decided to have a second child…”

“What if the company had trained their employees differently…”

Often the presumed answer to these questions is that the negative event central to the case would not have happened. In this particular article, the researchers say the idea of “tainted altruism” stems from the lack of available counterfactual information in making decisions.

To explain: these researchers believe that when we see both charitable acts and personal benefit, we perceive an inconsistency between charitable behavior and personal benefit, and we presume selfishness. Thus, we are more likely to rate the charitable person (who benefited from those charitable acts) negatively.

On the other hand, when we see only self-interested behavior, we are not automatically drawn to wondering whether that person could have been more altruistic. The person never pretended to any lofty motive, so our judgment of the person is not negative.

This, say the researchers, is the essence of the “tainted altruism effect”: actions that produce both charitable and personal benefits are assessed more negatively than those that are purely self-interested and produce no charitable benefit at all. The effect held across all four experiments conducted.

The researchers summarize past research findings in this area (and we make a few comments of our own):

People react negatively toward for-profit organizations doing religious or health-oriented work. (On the other hand, few people realize that non-profit organizations doing religious or health-related work are often enormously profitable. When this is presented, it can alienate jurors.)

People question the motivations of wealthy philanthropists. (Which can be counter-balanced by the admission that the wealth was created by pure capitalistic fervor, and the philanthropy is a separate matter.)

People seem to believe that if your prosocial behavior is truly genuine, you will not receive even unrelated personal benefits. (But, as in the examples above, the effect can be minimized if the benefit is abstract or does not diminish the benefit to others by, for example, skimming off a percentage.)

Charitable donors with a personal connection to the charitable cause are given less credit for their good works. (This is a highly circumstantial finding that is certainly variable from case to case.)

You might see the results of past research as reflecting a tendency to assume the worst when there is money or glamour (or litigation) involved. Your goal is to supply the missing counterfactual information–or, to get jurors to consider the opposite end of the spectrum.

“My client could have made a modest personal donation to the worthwhile efforts of the charity. [Client] is in the business of raising funds for deserving charities. S/he chose to raise money for [deserving charity] while retaining 10% of the proceeds, as was understood and agreed to from the start. The charity could have received 100% of [client’s] modest donation, or 90% of the fruits of his/her enormous fund raising talents. As it ended up, my client should be thanked, not vilified.”

Give jurors the information from the other end of the spectrum so they can weigh the “real issue”: no charitable benefit versus most of the charitable gains. It’s one of those odd times when the counterfactual, presented correctly, can work for you rather than against you.
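To make the arithmetic of that spectrum concrete, here is a back-of-the-envelope sketch (the $1,000,000 raised and the 10% fee come from the example above; the $25,000 “modest personal donation” is a figure we invented for illustration):

```python
# Comparing the two ends of the spectrum described above.
modest_donation = 25_000            # hypothetical personal gift (our placeholder)
funds_raised = 1_000_000            # from the example above
retained_fee = 0.10                 # agreed to up front

charity_gets_from_donation = modest_donation                      # 100% of a small gift
charity_gets_from_fundraising = funds_raised * (1 - retained_fee)  # 90% of a large raise

print(f"Donation route:    ${charity_gets_from_donation:,.0f}")    # $25,000
print(f"Fundraising route: ${charity_gets_from_fundraising:,.0f}")  # $900,000
```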

Newman, G. E., & Cain, D. M. (2014). Tainted altruism: When doing some good is evaluated as worse than doing no good at all. Psychological Science. PMID: 24403396


When your evidence is weak, how can you be more persuasive? Precision. Observers want to see certain things before they have confidence in what you are saying. The more precise you are, the more likely an observer is to see you as knowledgeable and accurate (even when negotiating a salary!). So what does the observer look for to assess your confidence? For eyewitnesses, the researchers say, observers (such as jurors) rely on speech rate, eye gaze, posture, and nervous gestures to assess accuracy. A longing for certainty draws people to rely on these cues even when they are told of the gap between eyewitness confidence and actual accuracy. More recent research has focused on the use of precision to elicit confidence in you from the observer.

The researchers conducted two separate experiments: one with the ubiquitous undergraduates (N = 187) and one with Mechanical Turk (online research) participants (N = 163).

The undergraduates read answers to questions about the lengths of rivers and the heights of mountains (which had ostensibly been provided earlier by other participants) and were asked to indicate their belief in the accuracy of those answers. The researchers’ manipulation was that the answers were presented as either “imprecise” (rounded to the hundreds, e.g., 2,600 miles) or “precise” (given to the ones place, e.g., 2,611 miles). The undergraduates were more confident in the “precise” answers.

The Mechanical Turk participants played a game akin to “The Price is Right” game show. They were asked to price three different products and were given help in the form of “audience suggestions”. The suggestions either ended in 0 (imprecise) or ended in 1 through 9 (precise). Half the subjects were given estimates over the true value and half were given estimates under it. They were then asked to “choose” the audience member who would “advise” them in the upcoming round of the game. The Mechanical Turk participants were more likely to choose an “advisor” who had provided a precise number (i.e., a number ending in 1 through 9).
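Both manipulations boil down to simple arithmetic on the estimates. Here is a minimal sketch of the two (the product and dollar values are ours, not the study’s):

```python
import random

def make_imprecise(value: int) -> int:
    """Round to the nearest hundred, as in the rivers-and-mountains questions."""
    return round(value, -2)

def looks_precise(estimate: int) -> bool:
    """In the pricing game, a suggestion ending in 1-9 reads as precise;
    a trailing zero reads as imprecise."""
    return estimate % 10 != 0

print(make_imprecise(2611))   # 2600 (the "imprecise" version of 2,611 miles)
print(looks_precise(2611))    # True
print(looks_precise(2600))    # False

# Generating an over- or under-estimate around a hypothetical true price
# (this value is a placeholder, not from the study):
true_price = 450
offset = random.choice([-1, 1]) * random.randint(25, 75)
suggestion = true_price + offset
print(suggestion, "precise" if looks_precise(suggestion) else "imprecise")
```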

Both undergraduates and Mechanical Turk participants believed more precise estimates were made by more confident (and likely more accurate) people. There is no real truth to this belief, but there you have it. If you are more precise, people think you are more confident and therefore are more likely to believe what you are saying. The authors use the example of “sports pundits often discuss[ing] National Football League draft prospects to hundredths of milliseconds–more precision than measurement error allows for”. People prefer precise estimates, say the researchers, “which creates incentives for such overprecise and misleading reporting”.

From a litigation advocacy perspective, the weaker your evidence, the more precise you want to be in identifying damages, settlement requests, or life care amounts. One example is to establish the amount of a life care plan to the penny, even though it is a projection and, by its nature, imprecise.

“The weaker the data available upon which to base one’s conclusion, the greater the precision which should be quoted in order to give the data authenticity.” – Norman Ralph Augustine

Jerez-Fernandez, A., Angulo, A. N., & Oppenheimer, D. M. (2014). Show me the numbers: Precision as a cue to others’ confidence. Psychological Science, 25(2), 633-635. PMID: 24317423
