Archive for June, 2012
“If Alice saw Bob trip over a rock and fall, Alice might consider Bob to be clumsy or careless (dispositional). If Alice tripped over the same rock herself, she would be more likely to blame the placement of the rock (situational)”.
In other words, we commit the “fundamental attribution error” when we over-value personality-based explanations for the behavior of others and under-value the situational factors behind it. (And as the example of Alice and Bob tripping over that rock illustrates, when we assess our own behavior we do the reverse.)
Despite the term being coined in the early 1970s, research continues to find examples of our relentless tendency to see explanations for our own downfalls as situational (outside our control) and explanations for the downfalls of others as somehow indicative of their weak character. Two new studies find the fundamental attribution bias alive and kicking.
In the first study, researchers looked at whether we extend the fundamental attribution self-assessment error to members of our own in-group by excusing in-group members’ negative behaviors as “only human”. Sure enough. When it came to the negative behaviors of fellow in-group members, research participants willingly interpreted those behaviors as “only human” but were less willing to do so when it came to the negative behaviors of out-group members. [The researchers term this “infrahumanization” (“the subtle, yet troubling tendency for people to ascribe more such uniquely human attributes to their ingroups than to outgroups”).]
We think it’s an extension of the self-excusing aspect of the fundamental attribution bias to those we see as “like us”. We cut those like us the same break we grant ourselves.
The second study goes to extreme situations and looks at our perceptions of torture when it’s done by us versus when it’s done by others. (Again, the in-group versus the out-group split–but this time, out-groups with no overtly hostile perception of each other.) Participants in the research study included both British and American nationals. They each read a newspaper account of the torture of a terrorist suspect–either captured and tortured by their own nation’s security forces or by the other nation’s forces. The newspaper account was detailed as to the torture experienced: noting the alleged terrorist was “held in cruel and inhuman conditions and subjected to prolonged and brutal torture, including the repeated slashing of his genitals with a razor blade”. Wow!! Clearly, the researchers did not want to rely on the imagination of the research participants to conjure up just what “torture” means. The conduct was horrifying regardless of who did it. The participants got the message. And the results were predictable:
“When the torture was perpetrated by the in-group, participants described it as more morally justified than when the torture was perpetrated by the other nation’s security services.”
These studies reflect powerful (and very human) justifications for our own behavior–or rather, our in-group’s own behavior. We want to believe that what has been done in our name (that is, by our in-group) is justified. We will even go so far as to condone torture when it’s done by “us and not them”. It’s why we repeatedly underscore the importance of making your client “like” the jurors.
If the client’s behavior is viewed as negative, jurors are going to see your client as “not like” them and are more apt to judge your client harshly as a “bad person”. While it is unlikely you will have a client as extremely “different” as the terrorist suspect in the second study, all parties to a lawsuit are different in some way–perhaps in race, age, religion, sexual orientation, or other aspects of their character.
The importance of both humanizing your client and showing jurors how your client shares their values, beliefs and experiences cannot be over-emphasized.
This is very, very big.
Koval, P., Laham, S., Haslam, N., Bastian, B., & Whelan, J. (2012). Our flaws are more human than yours: Ingroup bias in humanizing negative characteristics. Personality and Social Psychology Bulletin, 38, 283-295. DOI: 10.1177/0146167211423777. Download the article here: PDF
Tarrant, M., Branscombe, N.R., Warner, R.H., & Weston, D. (2012). Social identity and perceptions of torture: It’s moral when we do it. Journal of Experimental Social Psychology, 48, 513-518.
This year at the conference of the American Society of Trial Consultants, there was a discussion about regional differences in voir dire and jury selection. One Bible Belt consultant mentioned that rather than asking only about Judeo-Christian religiosity, she was often interested in whether jurors were Old Testament or New Testament Christians. That is, did the jurors believe in a God of vengeance and punishment or did they believe in a God of forgiveness?
This comment came back to me as I read some new (to me) research on how this sort of variability in religious emphasis affects our individual behaviors. It’s an intriguing twist on the research we’ve written about before on the impact of “eyes” on our behavior. That research suggested we behave more morally when we believe we are being watched. New research says: maybe not so much. Instead, whether we behave morally or immorally depends on just whom we believe is watching! Here’s a hint: Is it the Old Testament God or the New Testament God?
Researchers examined data from the World Values and European Values surveys conducted between 1981 and 2007. From this survey, they looked at belief in hell and heaven, belief in God and religious attendance. They also looked at crime data from United Nations records on murder, robbery, rape, kidnapping, assault, theft, drug-related crimes, auto theft, burglary and human trafficking. Additionally (these folks were thorough!) they included factors such as the dominant religion of the country, income inequality, life expectancy and rate of incarceration. As they examined this international data, they found the more prevalent a belief in hell (i.e., eternal punishment for bad acts) was in a given country, the lower the crime rate! In other words, the authors say, when you live under a belief in some sort of supernatural or divine punishment for your bad behavior–you are less likely to engage in bad acts.
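The study’s core finding is a simple country-level correlation: where belief in hell is more prevalent, crime rates are lower. A minimal sketch of that kind of analysis, using made-up illustrative numbers (not the study’s actual data), might look like this:

```python
import numpy as np

# Hypothetical country-level data (illustrative only, not the study's figures):
# fraction of respondents professing belief in hell, and crimes per 100k people.
belief_in_hell = np.array([0.70, 0.55, 0.40, 0.30, 0.20, 0.10])
crime_rate = np.array([900, 1400, 2100, 2600, 3200, 3900])

# Pearson correlation: the study's core claim is a strong negative relationship
# (the researchers also controlled for dominant religion, income inequality, etc.).
r = np.corrcoef(belief_in_hell, crime_rate)[0, 1]
print(f"r = {r:.2f}")
```

For this toy data the correlation comes out strongly negative; the actual study, of course, also controlled for dominant religion, income inequality, life expectancy and incarceration rates before drawing its conclusion.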
So they decided to look at whether belief in an angry, punishing god (versus a loving and compassionate god) would affect cheating behavior in this country. What they found was that overall [self-reported] level of religious devotion did not directly predict cheating behavior. Nor did [self-reported] belief in God.
This is intriguing as these are variables often examined in jury selection. This research says it isn’t the “degree” of religiosity that makes a difference. It is instead in your conception of just who God “is”–or rather “how” God is.
“Viewing God as a more punishing and less loving figure was reliably associated with lower levels of cheating”.
And that relationship held even when they controlled for a number of demographic variables (e.g., gender, ethnicity, and religious affiliation). The authors suggest that a belief in “mean gods” results in more moral behavior while a belief in a “forgiving god” results in our willingness to engage in bad acts.
Put another way at the Institute for the Biocultural Study of Religion [IBCSR] website:
“In at least some instances, the fear of a vengeful God correlates with better moral behavior. Religious liberals tend to pride themselves in their tolerant religious views and their all-loving God. Yet it seems that religious conservatives, those who believe in a judging God, are the ones who actually act more morally upright. Religious conservatives act wisely precisely because they have the fear of the Lord.”
It’s an intriguing study to ponder in the context of litigation advocacy. If overall religiosity does not predict cheating behavior but your conception of just “who God is” does, we want to look at this more thoroughly.
Does your concept of “who God is” make a difference in your beliefs about the level of punishment deserved for bad acts?
This study (conducted in the Pacific Northwest) implies it isn’t just the Bible Belt where this sort of belief perspective makes a difference. If jurors are harsh critics of cheating in themselves, it stands to reason that they will be less tolerant of cheating by others, too.
Shariff, AF, & Norenzayan A. (2011). Mean gods make good people. Different views of God predict cheating behavior. The International Journal for the Psychology of Religion, 21, 85-96 DOI: 10.1080/10508619.2011.556990
We’ve read about aggressive men with thick necks and wide faces. We often suspect them of being violent thugs. Some, more charitably, might opine they are simply more confident and assertive. Or maybe weight-lifters. New research says they are also likely liars and cheaters. Seriously?
We all have biases and use stereotypes to make snap judgments about others but this one is pretty extreme.
Wide face = liar and cheat.
Hmmm. Let’s take a look at the research wherein men with wide faces were three times more likely to lie in competition and nine times more likely to cheat than those with “skinny heads”.
Researchers had undergraduate male business majors play a competitive game. Prior to the game, photographs were taken and researchers measured the breadth across the cheekbones and the height of the face from upper lip to eyebrow ridge, then calculated the width-to-height ratio. Just to be safe, they also examined the relationship between the face width-to-height ratio and the participants’ height and weight. They didn’t want to confuse “wide-face” guys with “big guys”. There was no relationship between having a wide face and being big in size.
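The measurement itself is simple arithmetic. A minimal sketch, with landmark names and sample pixel values that are our own illustration rather than the study’s measurement protocol:

```python
# Sketch of the facial width-to-height ratio described above: width across the
# cheekbones divided by the distance from upper lip to eyebrow ridge.

def facial_width_to_height_ratio(cheekbone_width: float, upper_face_height: float) -> float:
    """Both measurements taken from the same standardized photograph."""
    if upper_face_height <= 0:
        raise ValueError("upper face height must be positive")
    return cheekbone_width / upper_face_height

# Hypothetical measurements in pixels.
print(facial_width_to_height_ratio(140.0, 70.0))  # 2.0 -- a relatively wide face
print(facial_width_to_height_ratio(120.0, 72.0))  # ~1.67 -- a narrower face
```

Because it is a ratio of two facial distances, the measure is independent of overall head size, which is why the researchers could check separately that it was unrelated to participants’ height and weight.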
The researchers cite past research (all of it quite recent) showing we judge men with wide faces as untrustworthy, dominant, less attractive, aggressive, less warm, less honest and less cooperative. Apparently, though, that behavior changes when wide-faced men engage in group competitive activities.
What these researchers found was that while engaged in group cooperative endeavors–when winning was on the line–men with wider faces were more likely to make sacrifices to support their own group. In other words, while in groups they were prosocial even though other research has shown antisocial behavior from wide-faced men interacting in dyads.
There are several things we find interesting about this study [and prior work on wide-faced men] in terms of litigation advocacy:
First impressions matter! We write a lot about the power of first impressions and how they often rely on stereotypes and biases that may be outside conscious awareness. According to past research, the American public tends to believe that wide-faced men are thugs disposed to violence or aggression.
First, be aware of the tendency to (often unconsciously) judge these men differently. If you know the person to be a good guy, you are likely to overlook this level of judgment by strangers.
You want to show that your wide-faced client/witness is different from other wide-faced men, and help the jury understand that he is a lot like the jurors and their friends.
If you have your own wide face, you want to pay attention to communicating gentleness and self-deprecation as you interact in the courtroom.
According to this research, context matters! It’s important to place your wide-faced client in context for jurors. If the individual was involved in a group endeavor, their motivations may have been supportive and prosocial [in contrast to a tendency toward antisocial motivations in dyads as pointed out by past research]. It will be important to show jurors examples of the individual self-sacrificing or generous acts the wide-faced party performed.
Finally, this is research done by evolutionary psychologists, which always strikes us as sort of strange. (Some of us have wide faces and aren’t liars and cheaters!) Take it with a bit of caution, but also know we have preexisting and largely negative stereotypes about men with wide faces. It may be wise to counteract those stereotypes with examples of how “this wide-faced man isn’t a liar and a cheat”!
Stirrat, M., & Perrett, DI (2012). Face structure predicts cooperation: Men with wider faces are more generous to their in-group when out-group competition is salient. Psychological Science. DOI: 10.1177/0956797611435133
We’re always looking for the mythical silver bullet that will tell us how to know what juror is worst (and best) for our case. But really? Shoes? There are first impressions and then there are those things we assume about you when we look at your shoes.
Researchers had more than 200 undergraduates (including some nontraditional students since the age range was from 18 to 55 years) fill out questionnaires about themselves. The questionnaires inquired into their personality and background as well as their sexuality. They were also asked to provide a photo of the pair of shoes they wore most often.
Then, a second group of students looked at only the shoes and guessed about the personality and background of the shoe owner. And were they right? Sometimes. Oddly enough, they were able to identify gender (okay, that one isn’t so odd), age, income and how anxious the wearer was about rejection (termed “attachment anxiety” by the researchers). Much of the observers’ accuracy was based on the use of stereotypes but there were some interesting correlations between footwear and self-reported personality characteristics.
Highly socially anxious people tend to wear shoes that look brand new and in good repair–likely to minimize potential rejection.
Conversely, socially avoidant people tend to wear shoes that are neither attractive nor stylish–likely because they don’t really care what you think.
Extroverts do not wear more bright and colorful shoes. Their identifying markers are shoes that appear worn out (probably from the demands of being the life of the party).
Agreeableness was negatively correlated with pointy toed shoes and positively correlated with practical and affordable shoes. (Although those wearing masculine or high top shoes tended to be less agreeable.)
People who were emotionally stable were less likely to wear pointy toed shoes or high heels–instead they wore comfortable shoes.
People who were open to new experiences tended to wear colorful shoes and to photograph the shoes against a colorful background.
Politically liberal students were less likely to wear pointy toed shoes, attractive shoes, to have expensive shoes or to have shoes in good repair. (Perhaps this should be referred to as the Birkenstock hypothesis.)
This isn’t the most useful research we’ve reported to you, but it’s kind of fun. It’s a lot to remember and it’s questionable just how generalizable this data is to non-college students (who are often gainfully employed and may not wear their favorite shoes to court). Further, we wonder whether people really photographed the shoes they wore most often, or perhaps instead the shoes they liked the most (a logical move for the image conscious). It isn’t clear that a query like “are you wearing your favorite shoes today?” would be allowed in voir dire, although it is possible that the attorney who asks it could be thought of as either creepy or a foot fetishist (or both). It is also unlikely you would be allowed to have all jurors extend their footwear for your perusal.
So. A silver bullet this is not. On the other hand, if you tack this blog post up in the lunchroom at the office, you’ll learn more about your coworkers as they discuss it than you’d ever expect. For understanding jurors, though, we’ll be heading back to attitudes, values, beliefs, and life experiences and how they all interact with your individual case.
Gillath, O., Bahns, AJ, Fe, F, & Crandall, CS (2012). Shoes as a source of first impressions. Journal of Research in Personality, 46, 423-430 DOI: 10.1016/j.jrp.2012.04.003
Many of us are familiar with the recency effect (which would say: take the last meeting) and the primacy effect (which would say: don’t be last, be first). This body of research is also sometimes referred to as the “serial position effect” (which basically says: whatever you do, don’t get lost in the middle). Much of the research on these ordering issues dates from before the turn of this century.
[Perhaps that wasn’t fair--it makes this body of research sound ancient. In truth, we do think twelve years old is ancient when it comes to social sciences research. Think of just how much has changed in the past decade. Thank goodness the publish or perish dictum still flourishes in academia!]
In litigation we see the power of being the first to pitch your case–the power of “going first”. This coveted position allows Plaintiff to frame the debate and prepare the battleground. The contrast we often see in mock trials is the large percentage (usually at least half) of jurors who deny being persuaded by the as-yet-uncontested Plaintiff case. These judicious citizens want to wait until they can appraise the other side of the dispute.
New research (in press at Psychological Science but on SSRN now) gives us an updated answer on which effect is stronger–the primacy effect or the recency effect. Researchers looked at ten years of MBA admissions decisions data–in total, 9,323 decisions [selected for file completeness from a larger sample of more than 14,000 decisions]. When MBA applicants are interviewed, they are interviewed in randomly selected small groups (of three or four) each day. If decision-making were free of primacy or recency effects, the success of a single applicant should not depend on the other applicants who happen to be randomly selected to interview on the same day.
But, that is not how it goes. Rather than considering the entire sample of applicants, interviewers seem to make their decisions based on the sample they have observed in a single day. In other words, the luck of the draw. You aren’t judged in contrast to the larger applicant pool, but rather to the subset with whom you were randomly scheduled. Woe to the MBA applicant who draws the short straw of stellar opposition on interview day. The researchers refer to this unfortunate phenomenon as “narrow bracketing”. They offer this definition of the term:
“When people conduct a subset of judgments, they do not sufficiently consider the other subsets they have already made or will make in the future.”
The researchers cite decision-making research showing we all assume small samples reflect the larger pool. They simply don’t.
“For instance, an interviewer who expects to evaluate positively about 50% of applicants in a pool may be reluctant to evaluate positively many more or fewer than 50% of applicants on any given day. An applicant who happens to interview on a day when several others have already received a positive evaluation would, therefore, be at a disadvantage.”
In other words, this research says that it’s better to be seen first rather than last. The primacy effect wins out over the recency effect. This finding deviates from research supporting the recency effect (being seen last) and certainly is not Biblical.
Having participated in a number of “beauty contests” with potential client law firms and being confronted with information about which of our trial consultant colleagues have gone before us–we’re not sure how seriously to take this finding. But there you have it: research on how decisions are made that says “go first” when all interviews are done in the same day. At least if you want an MBA. If you are facing trial and have no control over whether you go first or last, we’d say the debate remains open.
Simonsohn, U., & Gino, F. (2012). Daily horizons: Evidence of narrow bracketing in judgment from 10 years of MBA-admission interviews. Psychological Science.