Archive for the ‘Voir Dire & Jury Selection’ Category
It is disconcerting to watch the political upheaval in this country, but similar things seem to be happening around the world. We just found a group (new to us) that measures societal change in trust. Edelman has surveyed “tens of thousands of people across dozens of countries” for the past 17 years, measuring levels of trust in business, media, government, and non-governmental organizations (NGOs), which are typically non-profits. According to Edelman, this year is the first time the average level of trust (“to do what is right”) in all four types of institutions decreased. They also report the following statistics:
71% of survey respondents said government officials are not at all or somewhat credible.
63% said CEOs are not at all or somewhat credible.
60% of respondents trusted “a person like yourself” (which was in line with trust in a tech expert or an academic). In other words, they say, peers are now on par with experts.
NGOs were most trusted, business was a close second (only one point behind NGOs), media came in third, and government came in fourth. (These place finishes should be viewed skeptically, since their combined overall approval rating was less than 50%.)
The following graphic shows a comparison of 2016 and 2017’s trust ratings for the four areas surveyed.
In addition to the Executive Summary, you can view Global Results, and watch a video on what Edelman calls a trust implosion. When trust declines, populism rises says Edelman—and we have seen that internationally as well as here at home.
From a litigation advocacy perspective, perhaps most important for our work are their lessons on how trust has been broken, housed over at Scribd. Here are a few of their lessons we see as related to litigation advocacy:
Leading the list of societal concerns and fears we measured that are commonly associated with populist actions are corruption (69% concerned; 40% fearful); globalization (62% concerned; 27% fearful); eroding social values (56% concerned; 25% fearful); immigration (55% concerned; 28% fearful); and the pace of innovation (51% concerned; 22% fearful).
People are nearly four times more likely to ignore information that supports a position they don’t believe in; don’t regularly listen to those with whom they often disagree (53%); and are more likely to believe search engines (59%) over human editors (41%).
53% agree that the pace of change in business and industry is too fast. They worry about losing their jobs due to lack of training or skills (60%); foreign competitors (60%); immigrants who work for less (58%); jobs moving to cheaper markets (55%); and automation (54%).
The trust crisis demands a new operating model for organizations by which they listen to all stakeholders; provide context on the issues that challenge their lives; engage in dialogue with them; and tap peers, especially employees, to lead communications and advocacy efforts.
We will be paying careful attention to these issues as we pursue pretrial research and litigation advocacy in 2017. The ways that people (aka “jurors”) evaluate cases will reflect the kinds of mistrust and alienation this study identifies. Anger seems intense: we are devaluing experts, worrying about those different from us, and not listening to those with whom we disagree. These states of being have direct relevance to our efforts to teach, explain, and persuade.
Some interesting research is described in plain language over at the Vox website by Joshua Knobe (an academic from Yale). The article highlights a question we’ve been wondering about that may be important for all of us to consider over the next four years as we plan strategies for litigation.
The question is this: Just how hard is it to get people to move from a perspective of seeing some behavior as morally outrageous to seeing that same behavior as acceptable or even normal? And the answer is disturbing: it is pretty darned easy.
The author discusses how important it is that we continue to see morally outrageous behavior as not normal, and that we resist downgrading it to something as mild as merely “bad” or “wrong”. He offers examples of how cognitive science can help us understand how behavior gets moved from “morally outrageous” to “bad or wrong” and perhaps on to “that is just how s/he is”.
Recent studies have taught us a lot about what happens when people classify events as normal or abnormal. [snip] Our minds use the normal-abnormal distinction to rule out many options in advance. At the core of this research is a very simple idea: When people are reasoning, they tend to think only about a relatively narrow range of possibilities. [snip] One important question about human cognition is how people end up choosing one option over the other in a case like this.
[snip] This is where the notion of normality plays its most essential role. Of all the zillions of things that might be possible in principle, your mind is able to zero in on just a few specific possibilities, completely ignoring all the others. One aim of recent research has been to figure out how people do this. Though the research itself has been quite complex, the key conclusion is surprisingly straightforward: People show an impressive systematic tendency to completely ignore the possibilities they see as abnormal.
We make use of the normal-abnormal distinction when thinking about causality.
That last sentence from the Vox article is why we are sharing it with you here today. We know our mock jurors always want to know “why”, or to hear about the motivations of the parties. What this research tells us is that if jurors see behavior as abnormal, they are less likely to view the morally outrageous behavior as acceptable or valid, and more likely to see it as totally unacceptable and worthy of punishment.
Yet, we have to be careful with these research findings and not assume we know how to use them. Here, again, is an example taken from the Vox article (and real life).
For an especially striking example, consider a real-world problem that arose in Arizona’s Petrified Forest National Park. Tourists were stealing pieces of petrified wood, and the park as a whole was gradually being destroyed. What could be done to stop this theft? The staff of the park decided in the end to put up a sign:
‘‘Your heritage is being vandalized every day by theft losses of petrified wood of 14 tons a year, mostly a small piece at a time.’’
The goal was to raise awareness of the problem, making people see more clearly what was so bad about stealing from the park. Perhaps the sign did succeed in raising awareness, but it also had another, more surprising effect.
By drawing attention to the fact that people often steal, it made people see theft as normal.
Many of the park visitors might have seen theft as something that wasn’t even worth considering (like trying to eat your shoe), but the sign helped to switch them over to seeing it as something that might be bad but was still among the normal options (like eating chocolate cake). A systematic study examined the [counter-intuitive persuasive] impact of this sign.
The key result: Putting up this sign actually led to an increase in the total amount of theft.
In the story above, the park took down the sign and stopped informing visitors of the thefts (and thus normalizing theft and making it a behavioral option). From a litigation advocacy perspective, we want to work to maintain the (moral) outrage and horror associated with egregious behavior and not do or say anything that would normalize it. That is easier said than done though, as Joshua Knobe comments in closing the Vox article.
So then, what is to be done? I wish I could say that cognitive scientists have settled on a different but equally effective solution and that all we need to do now is go out and implement it. Unfortunately, however, that is not the case. Research in cognitive science has done a lot to give us a deeper understanding of the problem we now face, but it has not yet furnished us with a workable way of addressing it.
Thus, it falls to all of us to focus on how we can avoid dulling jurors’ minds to the horribleness of egregious conduct.
A number of years ago we worked on a criminal case in which an employer was drugging female employees and taking nude photos of them while they were unconscious. What we noted in pretrial research was that the more often we showed the photos of the women, the less impact they had. The defense would want to maximize how often the photos were shown, while the prosecutor would want to limit the showings to maximize their visual (and emotional/moral) impact.
The next few years will be a challenge for each of us as we work to understand more about how to either maintain jurors’ moral outrage at egregious behavior or dull the impact of that behavior through repeated exposure (depending on our side of the aisle). The actual research the Vox article is based upon is available here.
Icard, T. F., Kominsky, J. F., & Knobe, J. (2017). Normality and actual causal strength. Cognition.
The Gallup folks just published an update on LGBT adults in the US, and we want to bring it to your attention to illustrate how societal change is happening and how we need to keep up. We are going to highlight a few facts from the Gallup report but encourage you to read the story in its entirety.
For those interested in these things, this giant survey was conducted by telephone [60% cell phones and 40% landlines] with a random sample of 1,626,773 adults living in the US. All were 18 or older, lived across all 50 states and DC, and their responses were collected between June 2012 and December 2016. Of the 1.6M+ participants, 49,311 said “yes” when asked whether they personally identified as LGBT.
Here are the highlights of what Gallup’s survey respondents told them:
10M US adults identify as LGBT (this is 4.1% of the population).
Millennials identifying as LGBT are up from 5.8% in 2012 to 7.3% in 2016, and they are more than twice as likely as any other generation to identify as LGBT. In the same period (2012-2016), the proportion of GenXers identifying as LGBT remained fairly stable, and the proportion of Boomers identifying as LGBT decreased slightly.
More women identify as LGBT than do men.
Among ethnic minorities, the largest increase since 2012 in LGBT identification occurred among Asians and Hispanics. Gallup thinks this is likely affected by the differences in age compositions of these groups (with Asian adults being the youngest among race and ethnicity groupings and Hispanics coming in second).
LGBT identification increased across all income and education groups (by 2016, there was “virtually no variation by education”).
Increases in LGBT identification were largely among those identifying as “not religious” (and this group is 3x more likely to identify as LGBT than those who say they are “highly religious”).
Gallup opines that Millennials are less concerned than other generations with sharing “private information” on surveys. They also think the social/cultural climate has changed since survey participants were teens and young adults and it is now more acceptable to identify as LGBT. Gallup cites the legalization of same-sex marriage to support this assertion.
Gallup also thinks it is important to note that all these changes have occurred in a span of only five years (2012-2016). They call this a “marked change” and comment that the US LGBT population has become “larger, younger, more female, and less religious”.
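For readers who like to check the arithmetic, the raw (unweighted) share of LGBT-identifying respondents in the pooled sample described above can be computed directly. Here is a minimal sketch in Python; the two numbers come from the Gallup methodology figures quoted earlier, and the comparison to the 4.1% headline is our own observation, since that figure is Gallup’s estimate for 2016 alone:

```python
# Raw share of LGBT-identifying respondents in Gallup's pooled sample.
total_respondents = 1_626_773  # adults surveyed, June 2012 - December 2016
lgbt_respondents = 49_311      # answered "yes" to identifying as LGBT

raw_share = 100 * lgbt_respondents / total_respondents
print(f"Raw pooled share: {raw_share:.2f}%")  # about 3.03%
```

That raw pooled average sits below the 4.1% figure for 2016, which is what you would expect given that identification rose over the five-year window and that Gallup’s published estimates are population-weighted.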
From a litigation advocacy standpoint, this is essential information. We are seeing more and more high-profile LGBT disclosures in the news. Gossip columns routinely report on celebrity statements on sexuality (no link to this one, you can find them on your own!). Most of us are aware of the relatively recent transgender transition of Caitlyn Jenner. Perhaps the most recent is the announcement from model Hanne Gaby Odiele that she was born intersex and had surgery she believes was unnecessary. LGBT people are increasingly visible, their issues are discussed more openly, and (especially for younger people) LGBT folks are close friends, family members, and sometimes themselves. And increasingly, that’s okay.
But not everywhere, or for everyone. As cases are planned and narratives developed, maintain awareness of who your jurors might be and what their experiences, values, and beliefs may be. That includes sexual identity as well as race, ethnicity, gender, and age. It is one more variable to keep track of, as it is clearly changing faster than ever before.
Gallup (January 11, 2017). In US, More Adults Identifying as LGBT.
Here’s an update on the stash of tattoo posts we have here. This is a collection of new research on tattoos (to make sure we are up to date) that will undoubtedly help you decide what your individual ink means/will mean, and of course, what it suggests about your jurors, your clients, your kids, and maybe you, too! We’ll start out with the punch line from one of the articles (Galbarczyk & Ziomkiewicz 2017): women do not find tattooed men irresistibly attractive despite what men think about other men with tattoos.
Do women really “dig” tattoos? (Not so much)
Men apparently believe that a man with tattoos is likely to be serious competition for the attention of a woman. Women themselves do not generally see tattooed men as the be-all and end-all. That (perhaps surprising) conclusion comes from new research out of Poland in which 2,584 heterosexual men and women looked at photos of shirtless men. In some of the photographs, the man’s arms were marked with a small black symbol (see the graphic illustrating this post for one of the photo pairs). Men rated these tattooed men higher in terms of what (they thought) women would look for in a long-term partner. Women did not agree and rated the tattooed men as worse candidates for long-term relationships than the men pictured without tattoos. Once again, men don’t seem to understand what women find attractive. The authors wanted to figure out whether women or men were more drawn to tattoos on men, and they conclude this way: “Our results provide stronger evidence for the second, intrasexual selection mechanism, as the presence of a tattoo affected male viewers’ perceptions of a male subject more intensely than female viewers’ perceptions.”
In other words, when men get tattooed, other men are going to be more impressed than will women. For men who are homophobic, this could be a traumatizing study.
Are tattooed adults more impulsive? (Not really)
There’s been a plethora of research on whether the personalities of tattooed adults differ from those of adults with no tattoos. And, after multiple grants of academic tenure, the answer is… not really. This study (Swami et al.), done in Europe, had 1,006 adults complete psychological measures of how impulsive and prone to boredom they were. About one-fifth of the participants (19.1%) had at least one tattoo, and the tattooed participants showed no real differences in terms of gender, nationality, education, or marital status. There were also no strong differences in either impulsivity or likelihood of becoming bored, not for those with one tattoo and not for those with more than one (the highest number among the individual participants was 23 tattoos).
The authors concluded that tattooed adults and non-tattooed adults are more similar than different. (This doesn’t really surprise us as tattoos have become much more normative, although—there is nothing normative about having 23 tattoos.)
So are tattooed women less mentally healthy than non-tattooed women? (Nope)
Women with tattoos have been seen as deviant and anti-social in past research.
If that seems odd to you, know this: when I was in graduate school, there was a widely held view that women with multiple ear piercings were more likely to have personality psychopathology. Multiple piercings were outside the behavioral norm then; they are now much more common.
So, here’s a study out of Australia (Thompson, 2015) looking at whether that is still the case. The study used an internet survey of 710 women, who were asked to complete the Loyola Generativity Scale. The term generativity comes to us from psychological research and is, very simply, the desire we have (or do not have) to contribute positively to the future. You will often see generativity used to describe the desire to mentor younger people in career or other life areas.
The people who developed the scale describe it this way: “Generativity is a complex psychosocial construct that can be expressed through societal demand, inner desires, conscious concerns, beliefs, commitments, behaviors, and the overall way in which an adult makes narrative sense of his or her life.” (With no offense intended to the scale developers, it is likely easier for you to think of generativity as a desire to positively contribute to future generations.) Essentially, this researcher wanted to see if women with tattoos would have the same level of generativity as women without tattoos.
As in the study of risk-taking and impulsivity that preceded this one, there were no differences between tattooed and non-tattooed women in terms of their level of generativity. What was seen as edgy and counter-cultural 30 years ago is now merely a personal expression and fashion statement.
Finally, can we trust tattooed adults if they have a tattoo with a Christian-theme? (It depends)
This research focused on what the authors identify as “mixed signals”: a signal projecting untrustworthiness (in this case, a tattoo) whose theme or content nonetheless suggests trustworthiness (in this case, a tattoo of a religious symbol, the cross). Interestingly, the researchers chose to place the tattoos on the neck (either on the side or centered under the chin). While the third photo may look like a necklace to you, it is actually a tattoo. Some participants saw photos of men or women with cross tattoos, others saw men or women with star tattoos, and still others saw men or women with no tattoos.
Participants included 326 people who were shown 26 photographs and asked to rate trustworthiness of the person pictured on a scale from 1 (extremely low trust) to 7 (extremely high trust). Only after they had rated the photos were the participants asked whether they would identify as Christians (58.9% did) and if they had tattoos themselves (31% did). The results here are (ironically) mixed.
Christian participants rated the face without tattoos (which perhaps would have communicated shared values) as more trustworthy than the tattooed faces, but they also rated faces with the religious tattoo as more trustworthy than non-Christian participants did. Non-Christian participants found the religious-tattoo face less trustworthy and the star-tattoo face more trustworthy.
From a litigation advocacy perspective, this series of articles on tattoos and what they mean to the present-day observer tells us you cannot rely on knowledge from a few years ago to tell you what a tattoo means now. It is the same with venires: old knowledge is old knowledge. Do not assume the venire is the same as it was five years ago, or that neck tattoos are always signs of deviance. Update yourself. Jurors will likely sense it and be more open to your message.
Galbarczyk, A., & Ziomkiewicz, A. (2017). Tattooed men: Healthy bad boys and good-looking competitors. Personality and Individual Differences, 106, 122-125. DOI: 10.1016/j.paid.2016.10.051
Swami, V., Tran, U., Kuhlmann, T., Stieger, S., Gaughan, H., & Voracek, M. (2016). More similar than different: Tattooed adults are only slightly more impulsive and willing to take risks than non-tattooed adults. Personality and Individual Differences, 88, 40-44. DOI: 10.1016/j.paid.2015.08.054
Thompson, K. (2015). Comparing the psychosocial health of tattooed and non-tattooed women. Personality and Individual Differences, 74, 122-126. DOI: 10.1016/j.paid.2014.10.010
Timming, A., & Perrett, D. (2016). Trust and mixed signals: A study of religion, tattoos and cognitive dissonance. Personality and Individual Differences, 97, 234-238. DOI: 10.1016/j.paid.2016.03.067
Images from Galbarczyk & Ziomkiewicz and Timming et al. articles
While you may think you have heard this line recently, this is really (based on new research) what most of us think about ourselves. It is called the “better than average effect” and it is very persistent. We might smirk at politicians who actually say things like this aloud, but that’s only because we tend to keep those thoughts to ourselves. We (persistently) view ourselves as just better than others, and of course, two new research studies underscore this point.
The first study (Tappin & McKay) recruited 270 adults and asked them to judge 30 traits representing agency (e.g., hard-working, knowledgeable, competent), sociability (e.g., cooperative, easy-going, warm), and moral character (e.g., honest, fair, principled). Participants indicated how desirable each trait was, how much it described the average person, and how much it described themselves.
While the agency and sociability traits were rated variably, almost all the participants rated themselves much higher on moral character than they rated the average person.
In an intriguing secondary finding, while overall self-esteem was not related to feelings of superiority in general, it was related to a sense of moral superiority.
In the second study (Howell & Ratliff), researchers used data from the Project Implicit website where people take various psychological tests that measure unconscious or implicit biases. They focused on people who took tests involving weight biases (these are tests that ask how much you—and the average person—prefer thin people to fat people).
Once again, participants rated themselves as less biased against fat people than the average person, and when given feedback that they were indeed biased against fat people, they became defensive. The more unbiased they had rated themselves, the more defensive they were about the feedback. When then asked whether they thought the test was valid, they unsurprisingly said it was not, since it contradicted their self-assessments.
The problem with this belief that we are better than others, both in terms of moral superiority and in our (apparently universal) belief that we are less biased than others, is that it stops us from honestly assessing ourselves. We are therefore prevented from taking action to combat our own prejudices and biases, since we don’t think (or won’t admit) that we have them. Typically, when we hear information about those who are biased or less good than we are, we presume the speaker is talking about “those other people” and tune out.
From a litigation advocacy perspective, these studies have important implications for witness preparation, case narrative, and voir dire. We have discussed on this blog before the importance of knowing when to raise juror awareness of their own biases and when to stay silent. We’ve also posted before on when “playing the race card” works and when it does not.
This research seems to indicate the importance of using that previously published guidance to direct your decisions about witness preparation, voir dire, and case narrative in your specific case. Additionally, it will be important to share “redeeming” information on your client’s involvement in positive activities and on the ways your client’s life reflects values shared universally by jurors (e.g., family, community, education, volunteerism, et cetera).
Tappin, B., & McKay, R. (2016). The illusion of moral superiority. Social Psychological and Personality Science. DOI: 10.1177/1948550616673878
Howell, J. L., & Ratliff, K. A. (2016). Not your average bigot: The better-than-average effect and defensive responding to Implicit Association Test feedback. The British Journal of Social Psychology. PMID: 27709628