We’ve written a few times about new research on Asian-Americans and so were eager to see a new chapter in a book on ethnic pluralism and its role in the 2008 election. It’s an intriguing treatise on the amazing diversity in the Asian-American community (composed of at least nine ethnic groups and 11 different religious affiliations).
“Asian-American” doesn’t mean one thing. It means many things. According to the chapter authors (So Young Kim and Russell Jeung), Asian-Americans are unique among American ethnic groups in that they do not predictably act as either a racial bloc or a religious bloc. Asian-Americans do not share a single religious faith, and 27% do not follow any religion per se. And despite their high levels of education and income, they are not particularly politically involved. In fact, Asian-Americans may have lower levels of voting than other ethnic groups (although it is hypothesized this could shift as the various Asian-American groups log more time in the US and begin establishing a stronger pan-ethnic identity).
What is especially interesting to us is that the authors found patterns in voting among Asian-Americans during the 2008 Presidential elections. Overall, Asian-Americans were more likely to support Barack Obama during the 2008 elections than Caucasian voters with similar incomes and religious affiliations. However, within the Asian-American group, there were subgroup patterns that call out for recognition:
Asian-Americans who were agnostic, atheist, Hindu or Muslim were more likely to vote for Barack Obama (and were also reportedly more liberal).
Asian-Americans who were Protestant or Catholic and more conservative also supported Obama. (You weren’t expecting that, were you?)
Finally, Vietnamese-Americans (many of whom are Catholic) were more likely to vote for John McCain.
Another important descriptor the authors use is disenfranchised. Many Asian-Americans feel they are not valued (and truly, they have not been studied to the extent of other ethnic groups in this country) and this is likely an important variable to consider in terms of identification with your case.
Religion, Race, and Barack Obama’s New Democratic Pluralism is a data-dense book with an emphasis on political shifts and ideology based on ethnicity (featuring chapters on mainline Protestants, Evangelicals, Catholics, Jews, Muslims, Seculars, Women, African-Americans, Latinos, and Asian-Americans). What is most interesting from a litigation advocacy perspective is that this chapter shows us that we know a lot less than most of us think we know about Asian-Americans. There is not a blanket description of the Asian-American just as there is not a blanket description of American women, African-Americans, American Muslims or Jews, disabled people or other identifiable groups.
It’s a terrific reminder to not assume and to maintain curiosity about those different than us. They can often surprise you.
So Young Kim, Russell Jeung. 2012. Asian Americans, Religion and the 2008 Elections (Chapter 11). In Religion, Race, and Barack Obama’s New Democratic Pluralism, Gastón Espinosa, Ed. Publisher: Routledge.
We’ve talked before about mock jurors believing they can ‘see’ who is lying, using drugs, or engaging in other negative behaviors that litigants (or anyone else!) would want to keep private. Now we have new evidence that some of those jurors may have good radar—at least when it comes to being able to identify certain religious group members!
It’s easy to ‘see’ some religious affiliations, especially those whose faith involves a high degree of integration of religious obedience and daily life. Think of the Amish and other religious groups who are identifiable by attire or facial hair or turbans or robes, or the attire of some Orthodox Jews, Sufis, or Muslims. But most of us are not wearing particular clothing or neat labels identifying religious affiliation. In fact, when asked, many of us even lie about just how often we attend religious services!
Consider, for example, Mormons. Think you could pick them out of a crowd? Maybe you think you couldn’t. But the subjects in a recent study could! Even when hair was cropped from the photos, the eyes and mouth were covered, and the images were turned upside down. That’s pretty strange. Here’s how it worked.
Researchers examined the folklore around intuition and found strong perceptions that we can ‘know things’ about people just from their faces. We think, for example, we can identify sexual orientation, age, gender, or race—and we are so frustrated when we cannot. They cite research finding that people (both Mormon and non-Mormon) were able to identify un-labeled Mormon faces at a better than chance rate. This held true both in areas with large Mormon populations and in areas with few Mormons—although Mormons were better at identifying fellow Mormons than were non-Mormons.
Ultimately, the researchers decided to explore what it was about Mormon faces that facilitated identification to the observer. They went to fairly extensive lengths to include plain faces without piercings, extra earrings, or extreme haircuts and kept all photos within a younger age range, so as to make discrimination of Mormons/non-Mormons tough. They also had people simply observe parts of the faces (like the eyebrows) and attempt to identify the person as Mormon or not. (As you might imagine, eyebrows don’t tell you much.)
The researchers concluded that while participants were indeed able to categorize Mormons and non-Mormons more accurately than chance—they were seemingly unaware of their ability to do so. The categorization of Mormons appeared to be drawn largely from the quality of facial skin—since Mormons do not drink alcohol, smoke, or consume caffeine. Participants apparently infer overall good health from what they observe of the skin and identify (accurately) the person as Mormon. So it wasn’t so much “Mormon” they were identifying, it was the attribution of what they thought of as a Mormon trait—“healthy”.
So what does this mean for litigation advocacy? The point of this post is that we all read very subtle cues to inform ourselves about people. We make assumptions. We doubt there will be times when you are asking a jury to identify ‘who’s the Mormon’. But you will be challenged to deal with perceptions that someone has led a healthy or unhealthy life, that they seem virtuous or dissipated.
Many of us have read the research that says we can perceive basic things about people in split seconds. Jurors do that too. They look at you. They look at your witnesses. They look at the parties. And, in the absence of other data, they form conclusions about gender, sexual orientation, good/bad habits, character, and whether you dye your own hair or have it done professionally. You need to control this interpretation by giving jurors a framework for understanding what they see. If you do not, they will make up their own interpretations—and you have no way of knowing what they’ll ‘intuit’.
Rule, N.O., Garrett, J.V., & Ambady, N. (2010). On the perception of religious group membership from faces. PLoS ONE, 5(12). PMID: 21151864
We’ve written a number of times about bias against Muslims. But here’s a nice article with an easy-to-incorporate finding on how to reduce bias against your female client who wears a Muslim head-covering. (In case you have forgotten, we’ve already written about head-coverings for the Muslim man.) The graphic illustrating this post shows the variety of head-coverings Muslim women might wear, and the initial findings (as to which head-covering style results in the most bias) will probably not surprise you.
Researchers did four studies to see how people reacted to Muslim women wearing veils. They consistently found these reactions:
Responses were more negative when the Muslim woman wore a veil of any kind compared to no veil at all.
When the various veils were compared, the niqab or burqa (where only the eyes are exposed or even the eyes are covered) were seen most negatively.
Not surprising, as we said. In Western society, we like to see who we are talking to, and place a high priority on ‘looking people in the eye’. And our society holds (and expresses freely) negative beliefs about Muslim head-coverings for women. Those beliefs may range from seeing a head-covering as a symbol of extreme or even terroristic beliefs, to a belief that a woman is being subjugated merely because she wears this garb. Yet there is a litany of reasons women may wear head-coverings. There are also reasons women do not wear head-coverings. There is tremendous diversity within the Muslim community related to this issue, especially among Muslims in the US.
That very diversity is at the heart of what these (intuitive) researchers did next. Instead of just showing photos of women in various styles of head-coverings, for the final experiment, the researchers gave research participants “an article that focused on the reasons that Muslim women often give for choosing a full face veil”. And guess what happened?
Participants had more “positive imagined contact experience and gave more positive ratings of how they felt they would communicate with the Muslim woman wearing such a veil”.
In other words, when allowed to “fill in” the reasons the Muslim woman wore a veil, participants went to negative stereotypes and showed negative perceptions toward the woman. On the other hand, when given information about the variety of reasons Muslim women might have to choose a head-covering, negative assumptions/perceptions decreased. And that was when considering interactions with a Muslim woman in a full head-covering. The researchers say that for the least bias, if a religious Muslim woman wants to wear a head-covering, the hijab is likely the best choice. That may, however, not be an option given her religious beliefs. In either case, this research would say to give jurors information about your client’s choice to wear a Muslim head-covering (of any style) and it will reduce negative assumptions.
Yes, once again it appears that information is a great antidote to bias.
The very process of sharing the reasons for wearing a head-covering with jurors gives them the opportunity for emotional connection with your client. Her sharing reasons for the head-covering allows them to ‘see’ her individuality and religious conviction. We’d call that both making your client more similar to the jurors (through the use of universal values) and giving jurors an opportunity to see “beneath the head-covering” to the woman herself.
Everett, J., Schellhaas, F., Earp, B., Ando, V., Memarzia, J., Parise, C., Fell, B., & Hewstone, M. (2014). Covered in stigma? The impact of differing levels of Islamic head-covering on explicit and implicit biases toward Muslim women. Journal of Applied Social Psychology. DOI: 10.1111/jasp.12278
The study of bias fascinates us. We can easily spot prejudice in others but are oblivious to our own biases. We often ask a question at the end of a research project about community values and whether our (uniformly unbiased and considerate) mock jurors think others in the area would be biased against a party involved in the lawsuit about which they have just heard. The bias in question is typically off-topic and irrelevant to the case (perhaps religion, country of origin, ability to speak English, a thick accent, appearing to be a gang member, sexual orientation, marital fidelity, obesity, etc.). Typically, the answer is, “Well, it doesn’t make a difference to me but it sure would to a lot of other people who live around here!” This response is shared in all sincerity and good faith by individuals who truly do not see themselves as biased.
The problem, as pointed out by today’s researchers, is that none of us see ourselves as having blind spots. We’re better than that–especially when forewarned that biased decision-making could lie ahead. As sensible and logical and rational as that perspective may seem, it simply doesn’t appear to be true. We’ve written about Emily Pronin’s work on the bias blind spot a couple of times before but she has a new article out that illustrates beautifully what we see often in our pretrial research.
Researchers ran a series of experiments in which they had participants “rate the artistic merit” of a series of 80 different paintings. The first two experiments used undergraduates from Princeton University (63 female and 38 male in the first experiment; 47 female and 27 male in the second).
In Experiment 1, half of the participants were told to press a button that flashed the name of the artist onto the computer screen, while the others evaluated the “artistic merit” of each painting without knowing who had painted it. For the participants who saw the painters’ names, half of the paintings were identified as being created by a famous artist and half were attributed to random names (i.e., “an unknown artist”) culled from a print telephone directory.
Not surprisingly, the participants who saw the artist names rated the merit of the paintings attributed to famous artists higher than the unknown artists’ work. Those who did not see the artist names rated the two groups of paintings the same in terms of artistic merit. Those who saw the names acknowledged that the knowledge was biasing but believed their final answers were as objective as if they had not seen the names. (Alas, they were incorrect.)
In Experiment 2, the instructions were modified so that participants could choose whether or not to see the name of the artist. Half of the participants were told to choose to see the artist’s name (the explicitly biased condition) and half were told to choose not to see it (the explicitly objective condition). They were then asked to rate how biased they expected their decision-making strategy to be, given whether they would see the artist’s name or not.
Once again, the participants who saw the artist names rated the merit of the paintings attributed to famous artists as higher than the unknown artist’s work. Those who did not see the artist name rated the two groups of paintings the same in terms of artistic merit. Those who were in the explicitly biased condition said (in advance) their evaluative strategy would be biased, but (naturally) they saw their own judgments of the paintings (after the fact) as objective.
In other words, even though warned in advance that their strategy would be biasing, and even though they said, up front, their strategy would be biasing–ultimately these participants also felt they were able to rise above that bias. (Alas, they were also wrong.)
So, for Experiment 3, the researchers left the classroom and recruited 85 adults online (52 women and 33 men with an average age of 35.7 years). These participants rated the same 80 paintings with three modified instructions: they rated themselves and their assigned evaluative strategy in terms of how objective their process would be; they were given very detailed information about how bias could easily make inroads into their decision-making on the artistic merits of the paintings; and, they were reminded to be honest in their ratings.
You know what happened. Participants in the explicitly biased condition thought their strategy was more biased but saw their judgments as even better than those participants in the explicitly unbiased condition. Maybe they thought that this special information empowered them to rise above the bias they had expected to display! Interestingly enough, at the pre-task rating, the participants in the explicitly biased condition thought they would be objective and by the end of the task, their estimation of their objectivity had gone up significantly.
The researchers discuss these findings in light of the courtroom (using the example of inadmissible evidence which jurors are instructed to ignore) and the workplace (using the example of HR personnel who see photographs of applicants prior to evaluating the merits of their applications). If we believe we are so objective that we can use biased strategies to make decisions, say the researchers–we are simply fooling ourselves.
They describe our reasoning in this way: “If I am smart enough to know this bias exists and honest enough to acknowledge it, then surely I won’t fall prey to it!”
Alas. Indeed we would. The authors describe the way female under-representation in the symphony has been reduced by having applicants audition behind a screen. Such efforts, they say, clearly reduce bias. So why are we so resistant to using them? The present research provides one answer:
“Such efforts are likely to seem needless when we believe that we can be objective even in the face of obviously biasing procedures.”
The authors say the idea of “debiasing” doesn’t really work. Maybe it’s like ‘separate but equal’ or pre-Title IX sports budgets. You just cannot unring that bell. We both agree and disagree.
Bias is everywhere and we need to work hard to find ways to stop bias from occurring in the first place. There we agree. For years, we have recommended the use of strategies effective in countering bias by stopping it up front.
But we also have seen a debiasing strategy that is powerful in inhibiting bias. It doesn’t end it, and it isn’t foolproof. But click the link and learn how to cope with a flawed world.
You may not think this is information you need. Alas, according to this research, you really do!
Hansen, K., Gerbasi, M., Todorov, A., Kruse, E., & Pronin, E. (2014). People claim objectivity after knowingly using biased strategies. Personality & Social Psychology Bulletin. PMID: 24562289
We’ve written a number of times about the role of non-belief or of strong religious beliefs on juries and juror decision-making. The majority of research, largely based on White participants, has shown repeatedly that for White Christians, if you do not share their faith (e.g., you are an Atheist or a Muslim), you will be looked on less favorably than you would be if you were a Christian. We’ve written about countering that negative judgment at some length over in The Jury Expert.
But what about Black Christians? Will Black Christians also have a negative judgment of those who don’t share their religious beliefs? The answer, according to today’s research, is a resounding “it depends”.
The research participants were 175 Black Christian undergraduates in the United States. Seventy-six per cent were female and the average age was 19.3 years. They were shown a “target” who was named Aisha. She was “Christian, Muslim or Atheist, and either Black or White. In the Muslim condition, Aisha wore a hijab.” Participants were asked to rate Aisha on both positive and negative traits and to list the things they considered as they evaluated Aisha on these traits. They also completed demographic and personality measures assessing their “need to belong, motivation to control prejudice, social desirability, and numerous measures of religiosity”.
What this research shows is that some Black Christians will judge a nonbeliever (e.g., an Atheist or a Muslim) more negatively than they will judge a fellow Christian, but others will not take the person’s religion into account at all. Apparently, the difference is whether the individual Black Christian is “religiously conscious”. There is no standardized measure of religious consciousness and it is hard to tell exactly what that phrase means from the article itself. The authors say it refers to whether one is “conscious of the religion of others”. In other words, it relates to whether one views another in terms that include their religion, or in entirely non-religious terms.
The first and third authors “coded participant responses for explicit mentions of religion [in their description of the person being judged]; initial inter-rater reliability was 0.82 and subsequent discussion resolved all differences until the agreement reached 100%.”
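The article reports an initial inter-rater reliability of 0.82 but does not say which statistic was used. A common choice for two coders assigning categorical codes is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below (with hypothetical ratings, not the study's data) shows how such a figure might be computed:

```python
# A minimal sketch of inter-rater agreement for binary codes
# (1 = response mentions Aisha's religion, 0 = it does not).
# Cohen's kappa is one plausible statistic; the article does not
# specify, and these ratings are invented for illustration.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two coders over ten responses
coder_1 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
coder_2 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.8
```

Note that kappa can be noticeably lower than raw percent agreement (here 90% agreement yields kappa of 0.8), which is why it is often preferred when reporting coding reliability.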
Based on this method of assessing “religious consciousness”, the authors found 70 participants mentioned Aisha’s religion and 105 did not. The participants who mentioned or did not mention Aisha’s religion did not differ on demographic or personality measures. What the researchers found is this:
Only Black Christians who were religiously conscious (i.e., the 70 who mentioned Aisha’s religion) showed intergroup bias. That means the majority of the participants (i.e., the 105 who did not mention her religion) did not show any intergroup bias. (There was no significance for these participants as to whether Aisha was Black or White.)
Keep in mind that this sample may not be normative. First, most Black teenagers are not in college, which makes this sample more questionable for generalization. Second, the age of these research subjects places them firmly amidst Gen Y, a well-researched group whose acceptance of out-groups such as atheists and religious minorities is higher than that of older people. And third, Muslims are more common (and possibly more accepted) in the African American population than in the White American population.
Nonetheless, these findings are quite different from the patterns seen in research on White Christians (who display a strong bias in favor of those who share their beliefs). In this sample, only 40% had a more negative view of Aisha when she was an Atheist or a Muslim than they did when she was a Christian. In this issue of The Jury Expert, Gayle Herde suggests some ways of “listening” to juror responses in voir dire to assess whether their religious beliefs are intrinsic (i.e., “religion is a way of life”) or extrinsic (“religion is a part of life”). It is possible that Herde’s distinction could explain some of the differences in “religious consciousness”, but it would have to be tested with greater care for us to know.
So the answer to the question posed in the title of this post is this: based on this research, if your client is Atheist or Muslim you would prefer a Black Christian juror, since they are more likely than White Christian jurors to leave religious beliefs out of their judgment of the individual.
And, if you can figure out a way to assess whether that Black Christian is “religiously conscious” or “intrinsically religious”, you will be clearer about whether you want those particular Black Christian jurors weighing their decisions in your deliberation room.
Van Camp, D., Sloan, LR, & ElBassiouny, A. (2014). Religious bias among religiously conscious Black Christians in the United States. The Journal of Social Psychology, 154 DOI: 10.1080/00224545.2013.835708