
Simple Jury Persuasion: “It makes no difference to me but I’m sure it would to a lot of other people.”

Friday, March 28, 2014
posted by Rita Handrich

The study of bias fascinates us. We can easily spot prejudice in others but are oblivious to our own biases. At the end of a research project, we often ask our (uniformly unbiased and considerate) mock jurors about community values and whether others in the area would be biased against a party involved in the lawsuit they have just heard. The bias in question may be off-topic and irrelevant (perhaps religion, country of origin, ability to speak English, a thick accent, appearing to be a gang member, sexual orientation, marital fidelity, obesity, etc.). Typically, the answer is, “Well, it doesn’t make a difference to me but it sure would to a lot of other people who live around here!” This response is shared in all sincerity and good faith by individuals who truly do not see themselves as biased.

The problem, as today’s researchers point out, is that none of us see ourselves as having blind spots. We’re better than that, especially when forewarned that biased decision-making could lie ahead. As sensible and logical and rational as that perspective may seem, it simply doesn’t appear to be true. We’ve written about Emily Pronin’s work on the bias blind spot a couple of times before, but she has a new article out that illustrates beautifully what we often see in our pretrial research.

The researchers ran a series of experiments in which participants “rated the artistic merit” of 80 different paintings. The first two experiments used undergraduates from Princeton University (63 women and 38 men in the first experiment, 47 women and 27 men in the second).

In Experiment 1, half of the participants were told to press a button that would flash the name of the artist onto the computer screen; the others were not, and thus evaluated the “artistic merit” of each painting without knowing who had painted it. For participants who saw the painter’s name, half of the paintings were identified as the work of a famous artist and half were attributed to random names (i.e., “an unknown artist”) culled from a print telephone directory.

Not surprisingly, the participants who saw the artist names rated the paintings attributed to famous artists as higher in merit than the unknown artists’ work. Those who did not see the artist names rated the two groups of paintings the same in terms of artistic merit. Those who saw the names acknowledged that the knowledge was biasing but believed their final answers were as objective as if they had never seen the names. (Alas, they were incorrect.)

In Experiment 2, the instructions were modified so that participants could choose whether or not to see the name of the artist. Half of the participants were told to choose to see the artist’s name (the explicitly biased condition) and half were told to choose not to see it (the explicitly objective condition). Before rating the paintings, they were asked to rate how biased they expected their decision-making strategy to be, given that they would or would not see the artist’s name.

Once again, the participants who saw the artist names rated the paintings attributed to famous artists as higher in merit than the unknown artists’ work, while those who did not see the names rated the two groups of paintings the same. Those in the explicitly biased condition said (in advance) that their evaluative strategy would be biased, but (naturally) they saw their own judgments of the paintings (after the fact) as objective.

In other words, even though they were warned in advance that their strategy would be biasing, and even though they themselves said, up front, that it would be, these participants ultimately felt they were able to rise above that bias. (Alas, they too were wrong.)

So, for Experiment 3, the researchers left the classroom and recruited 85 adults online (52 women and 33 men, with an average age of 35.7 years). These participants rated the same 80 paintings under three modifications to the instructions: they rated themselves and their assigned evaluative strategy on how objective the process would be; they were given very detailed information about how easily bias could make inroads into their judgments of the paintings’ artistic merits; and they were reminded to be honest in their ratings.

You know what happened. Participants in the explicitly biased condition rated their strategy as more biased, yet saw their judgments as even more objective than did participants in the explicitly objective condition. Perhaps they thought this special information empowered them to rise above the bias they had expected to display! Interestingly enough, at the pre-task rating, participants in the explicitly biased condition thought they would be objective, and by the end of the task their estimation of their own objectivity had risen significantly.

The researchers discuss these findings in light of the courtroom (using the example of inadmissible evidence that jurors are instructed to ignore) and the workplace (using the example of HR personnel who see photographs of applicants before evaluating the merits of their applications). If we believe we are so objective that we can use biased strategies to make decisions, say the researchers, we are simply fooling ourselves.

They describe our reasoning in this way: “If I am smart enough to know this bias exists and honest enough to acknowledge it, then surely I won’t fall prey to it!”

Alas, indeed we would. The authors describe how female under-representation in symphony orchestras has been reduced by having applicants audition behind a screen. Such efforts, they say, clearly reduce bias. So why are we so resistant to using them? The present research offers one answer:

“Such efforts are likely to seem needless when we believe that we can be objective even in the face of obviously biasing procedures.”

The authors say the idea of “debiasing” doesn’t really work. Maybe it’s like ‘separate but equal’ or pre-Title IX sports budgets. You just cannot unring that bell. We both agree and disagree.

Bias is everywhere, and we need to work hard to find ways to stop it from occurring in the first place. There we agree. For years, we have recommended strategies that counter bias by stopping it up front.

But we have also seen a debiasing strategy that is powerful in inhibiting bias. It doesn’t eliminate bias, and it isn’t foolproof, but click the link and learn how to cope with a flawed world.

You may not think this is information you need. Alas, according to this research, you really do!

Hansen, K., Gerbasi, M., Todorov, A., Kruse, E., & Pronin, E. (2014). People claim objectivity after knowingly using biased strategies. Personality and Social Psychology Bulletin. PMID: 24562289
