Archive for May, 2013

Time Magazine did it again recently and came out with a cover story on how Millennials are so much more narcissistic than any of the rest of us older and more mature people. Time deserves credit for knowing how to sell magazines and how to fan controversy.

“This is a generation that would have made Walt Whitman wonder if maybe they should try singing a song of someone else.”

Not surprisingly (especially with quotes like that one!), the Time article resulted in a lot of controversy and comments from readers like these over at jezebel.com. It also spawned multiple cover imitators like these. In short, it’s a viral sensation. Apt for an article on the Millennials.

What’s intriguing is that all the irritation and outrage over this article simply shows most people didn’t read the entire thing. About halfway through, the focus shifts from “data” on Millennials to realities, and the message is very different from what you would presume from reading the first few pages.

“While every Millennial might seem like an oversharing Kardashian, posting vacation photos on Facebook is actually less obnoxious than 1960s couples trapping friends in their houses to watch their terrible vacation slide shows. Can you imagine if the boomers had YouTube, how narcissistic they would’ve seemed?”

And then Stein cites a really good TEDx talk by Scott Hess titled “Millennials: Who They Are and Why We Hate Them”.

The article also cites the “You Are Not Special” commencement speech given at Wellesley High School in 2012 (available on YouTube). As you can see, this article really isn’t a hate piece on Millennials. Instead, it’s an eye-opening exploration of how all that “data” is simply misleading. Millennials are not just like us [Baby Boomers or Gen Xers]. They are different. Maybe they are better, and maybe we are jealous. Much like Hess in his TEDx talk, Stein wonders whether we can begin to see the changes in generation after generation of young people as part of an ongoing evolutionary advance. It isn’t a bad thing. It’s just what happens as we evolve.

“So, yes, we have all that data about narcissism and laziness and entitlement. But a generation’s greatness isn’t determined by data; it’s determined by how they react to the challenges that befall them. And, just as important, by how we react to them. Whether you think millennials are the new greatest generation of optimistic entrepreneurs or a group of 80 million people about to implode in a dwarf star of tears when their expectations are unmet depends largely on how you view change. Me, I choose to believe in the children. God knows they do.”

After reading the internet reactions, we were primed to chew up this new Time article. As parents and Boomers with good memories, we recall the natural narcissism of the 15-to-30 age span. We’ve written a lot about generations, so we went, as usual, to the original source and found something refreshing and kind and thoughtful. If, of course, you actually read the entire article!

As an aside, there is an amusing video (at least amusing to a Boomer) of the 41-year-old (Gen X) author Joel Stein “being a Millennial for a Day”. Fail.

Stein, J., & Sanburn, J. (2013, May 20). The New Greatest Generation. Time, 181(19), p. 26.


Conspiracy theorists and survey design

Wednesday, May 29, 2013
posted by Rita Handrich

The information age should have helped, but it isn’t getting better. A 2013 poll says almost two out of three registered American voters believe in at least one political conspiracy theory. And some say that number, almost 2/3 of us, is ridiculously low. According to Jesse Walker over at reason.com, closer to 100% of us believe in at least one political conspiracy theory! What is he talking about? Me? A conspiracy theorist? Here is how Walker puts it:

“Koerth-Baker cites a review-essay that Swami co-wrote for The Psychologist, reporting that it reveals “a set of traits that correlate well with conspiracy belief.” But the Psychologist piece brushes too quickly past an important sociological question: What gets defined as a “conspiracy theory” in the first place?

The answer has more to do with who is promoting a theory than with what it contains. If you announced in the 1970s that a network of underground Satanic sects was kidnapping kids and sacrificing them to the devil, you may well have gotten tagged as a fringy conspiracist. In the 1980s, on the other hand, allegations that once were confined to Jack Chick comics were broadcast on mainstream TV shows, from Oprah to 20/20. [snip] People across the country went to jail for allegedly engaging in ritual Satanic child abuse. And then, gradually, the hysteria faded, and the sorts of conspiracy claims that had been uncritically endorsed on 20/20 in 1985 went back to being framed as fringy ‘conspiracy theories’.”

We have seen examples of conspiracy theorists in our pretrial research. It isn’t at all pretty. But the idea that we are all conspiracy theorists is a little terrifying. Here’s a sampling of the findings in the Fairleigh Dickinson survey (again courtesy of Jesse Walker):

20% of registered American voters think it is “probably true” that Obama stole the 2012 election.

23% of American voters think Bush stole the 2004 election.

25% think Bush knew about 9/11 in advance.

36% think “President Obama is hiding important information about his background and early life”.

Thankfully, Walker educates us to think more critically. Anyone, says Walker, who has any political belief whatsoever likely believes in at least one conspiracy theory. Imagining conspiracies is just part of how human beings tend to perceive the world. He points out that the survey questions are very broad and thus encourage endorsement of the conspiracy-laden belief statements. He says if the pollsters “narrowed that broad Obama question down to a specific theory (‘President Obama is covering up the circumstances of his birth’) or balanced it with a similarly vague statement about a prominent member of the other party (‘Mitt Romney is hiding important information about his background and early life’), the effect might well disappear.”
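Walker’s point about question breadth lends itself to a quick illustration. Here is a minimal sketch, assuming a split-ballot design in which half of a sample gets the broad wording and half gets a narrowed version; the counts below are hypothetical placeholders, not figures from the Fairleigh Dickinson survey or from Walker.

```python
# Minimal sketch of a split-ballot wording test. All counts are hypothetical
# placeholders; this is not data from the Fairleigh Dickinson survey.
from statsmodels.stats.proportion import proportions_ztest

broad_yes, broad_n = 180, 500    # endorsed the broad "hiding important information" wording
narrow_yes, narrow_n = 110, 500  # endorsed a narrowed, specific-theory wording

# Two-proportion z-test: does narrowing the wording shrink endorsement?
stat, p_value = proportions_ztest(count=[broad_yes, narrow_yes],
                                  nobs=[broad_n, narrow_n])
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```

If the broad wording reliably draws more endorsement, the poll is partly measuring question design rather than conspiracy belief, which is exactly Walker’s worry.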

It’s an important reminder about what we communicate and what it elicits from those who listen (often not so well). It’s why we look carefully at how jurors in pretrial research exercises respond to holes in case narratives. When there are rabbit holes in your case narrative, conspiracy theorists find them and race ahead of the rest to “just know” what really happened. They may see a sexual affair, a cover-up, a pay-off, or just “something” that “they” are not telling us.

Conspiracy theories don’t pop up in every research project we do (a fact for which we’re grateful; facts are at the heart of most cases for most jurors). But when they do, we need to know why, and we urge our mock jurors to share what they heard that drew them to their conclusions.

Often it is a simple response to a hole in the case narrative, and we can fix it. Fixing it doesn’t mean stopping a conspiracy-minded juror from leaping to that conclusion. It means giving jurors who support your case actual facts and evidence to refute a conspiracy theory and thus prevent it from gaining traction in deliberations.

Other times, the conspiracy-theory explanation of the case is highly idiosyncratic and simply not tied to anything tangible. We then look for clues that would support striking that particular sort of juror, the one who cannot bring themselves to listen to our case narrative.

Over the last twenty years, we’ve heard a lot. But there is always a new twist or turn to keep it fascinating. So we keep listening. And we keep reading. We appreciate Fairleigh Dickinson sharing their survey results, and we especially appreciate thinkers like Jesse Walker who remind us not to believe everything we hear or read!

Fairleigh Dickinson PublicMind Survey on Conspiracy Theories. January 7, 2013.


Some judges (in our experience, mostly in Federal Court) ask jurors to provide autobiographical information orally to the court. Typically the judge has a list of questions on a board and asks the jurors to stand and answer the questions listed, and sometimes to add “any additional information you think the Court should know.” Do you want the judge to do that? Maybe. And maybe not. Of course we like information, but how can you know whether it’s actually going to be a net gain or a net loss? An interesting new study might offer some insight: if you get jurors to think autobiographically, they become more likely to resist social change or, as the authors say, more politically conservative. Why? Here’s how the authors explain it:

“When writing chronological autobiographical narratives, people re-experience the events of their life in a way that portrays the current situation as the result of past personal actions and choices. [snip] This evokes a sense that the current situation is not the result of chaos and randomness, but that the way things are is the way things should be. [snip] The status quo should be maintained.”

In short, when you are asked to think chronologically about your life, you tend to see causation: this happened, and events led inexorably down this path. To illustrate, when people are asked to write about how hard work and personal development brought them where they are today, they become more conservative, focused on individual choices and attainment rather than supporting public healthcare (for example). Conversely, according to the authors’ literature review, asking people to write about how “chance and good fortune” shaped their lives softens attitudes, producing (for example) a more liberal outlook on restricting unemployment benefits.

The researchers did two separate experiments, asking participants to write narratives about a life: their own (autobiographical) or, in the second study, a parent’s (biographical).

The first study had 60 Dutch undergraduate students participating (22 men and 38 women, average age 21 years). Participants were asked either to write an autobiographical narrative in chronological order, to write one in reverse chronological order, or simply to fill out the questionnaires (the dependent measures). Those asked to write a chronological narrative were more resistant to change than either those who wrote a reverse-chronological narrative or the control group who simply completed the questionnaires.

The second study used 98 Dutch undergraduate students (20 men, 78 women; average age 19.4 years). They were asked to write about their own life in chronological or reverse chronological order, to write about the life of one of their parents in the same two ways, or to simply complete the questionnaires without writing any narrative (the control condition). Those who wrote a chronological autobiographical narrative were the most politically conservative of the groups. There was no difference between those who wrote in reverse chronological order and the control group.
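For readers curious about the mechanics, here is a minimal sketch, not the authors’ reported analysis, of how a three-condition, between-subjects comparison like this could be tested, assuming one resistance-to-change score per participant; the scores below are simulated placeholders, not the Lammers and Proulx data.

```python
# Minimal sketch of a three-condition, between-subjects comparison.
# Scores are simulated placeholders, not data from the study.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(seed=2013)
chronological = rng.normal(4.2, 1.0, 20)  # hypothetical resistance-to-change scores
reverse_order = rng.normal(3.6, 1.0, 20)
control       = rng.normal(3.6, 1.0, 20)

# Omnibus test across the three conditions
f_stat, p = f_oneway(chronological, reverse_order, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Follow-up contrast: chronological vs. control, the comparison that carried the effect
t_stat, p = ttest_ind(chronological, control)
print(f"chronological vs. control: t = {t_stat:.2f}, p = {p:.4f}")
```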

The researchers conclude that only writing a chronological autobiographical narrative increases political conservatism, because it strengthens the wish to retain the status quo and resist social and political change, regardless of your own convictions on the issues. They believe the difference between the autobiographical and biographical narratives can be explained by the actor/observer differences inherent in those tasks.

When you write about yourself, you are the actor and you can attribute causality to events in your life.

When you write about someone else, even someone closely related like a parent, you are an observer and can thus only speculate about causality.

Given these results, the authors wonder if even just thinking about your own life in chronological order could have heretofore unrealized effects on thoughts and behaviors.

And that question leads us to how this article relates to litigation advocacy. You likely won’t find a judge willing to let you ask jurors for chronological autobiographies in written form. And even if you could, you don’t want to read all that. And you may not want your jurors to be led to higher levels of political conservatism.

If you want your jurors to view a party from a more conservative perspective (focusing on that party’s personal choices and personal responsibility), structure your case narrative about that person in a chronological, autobiographical fashion: s/he did this, which resulted in that, and the results followed...

And if you want jurors to view your own client more empathically, focus on the choices made by the other side and on how your client was an inevitable victim of the choices made by others. Bad luck and chance undermined the reasonable efforts your client made to succeed.

This is not entirely new material, but it is an interesting contribution to the body of literature on juror decision-making. We have long known that jurors focus on actors, not observers. So if the jurors go into deliberations talking about liability, you want them talking about the other side, not about your client. They will talk about what was done, what was decided, and what choices were made. And when they talk about damages, a party is in trouble if jurors start talking about why that party did what it did.

What you are attempting is to frame the parties, opposing counsel’s client or your own, so that jurors are predisposed to see either conscious decisions or inevitable events. Jurors will be calling that shot, even if you don’t plan for it.

Lammers, J., & Proulx, T. (2013). Writing autobiographical narratives increases political conservatism. Journal of Experimental Social Psychology, 49(4), 684-691. DOI: 10.1016/j.jesp.2013.03.008


“A picture is worth a thousand words.” Most of us think pictures are more persuasive than words. Recently I ran across a sentence in an article saying “it’s commonly believed that we remember 20% of what we hear and 80% of what we see”, or something to that effect. I don’t know about you, but I don’t remember 80% of anything I hear or see, and I have a pretty good memory. So I went to our trial consultant email list and asked whether anyone knew of research, with an identifiable citation, supporting the statement. Immediately, I began to get information from visual consultants.

The classic study in the field was something called the Weiss-McGrath Report, and it did say that we retain more in memory from pictures. In fact, the widely propagated [untrue] statement from that research was that there was a 650% increase in information retention by jurors when oral and visual evidence are combined. Wow! No wonder it is so widely cited. Too bad it isn’t even a little bit true [see pages 27-30 of the linked pdf for explanation]. Shortly thereafter, Ken Broda-Bahm wrote in to say that the study was quoted very often but was in fact misquoted, pretty bogus, and based on an undocumented 1856 reference. (That isn’t a typo. We really mean 1856.) We were referred to Ken Lopez’s blog post examining visual persuasion. Finally, Laura Rochelois came to the rescue. She recommended we look into a book written in this century (2009) by Richard Mayer.

Mayer’s book is an academic text, but there are myriad posts online reacting to his work. Among the search results, we found chapter-by-chapter summaries in pdf format online at Michigan State University. Another nice resource is a 20-minute video interview with Mayer available on YouTube.

In part, Mayer says that it isn’t video or animation that results in learning; what results in learning is good instructional design and presentation. However, according to Mayer, learning and retention are best when words and pictures are presented together. Learning increased between 64% and 121% in studies Mayer completed between 1989 and 1996.
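Those figures are relative gains, which is easy to misread. A bit of illustrative arithmetic, using a made-up baseline score, shows what a 64% to 121% improvement would mean in practice:

```python
# Illustrative arithmetic only; the 30% baseline is a made-up assumption,
# not a figure from Mayer's studies.
baseline = 0.30                               # hypothetical words-only test score
low, high = baseline * 1.64, baseline * 2.21  # +64% and +121% relative gains
print(f"words + pictures: {low:.0%} to {high:.0%}")  # 49% to 66%
```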

So the answer to the question about using pictures, words, or both? Not just pictures. Not just words. Both.

Mayer, R. E. (2009). Multimedia Learning (2nd ed.). New York: Cambridge University Press. DOI: 10.1017/CBO9780511811678


Shuki. Soukias. Raheem. Samir. Jamal. Lakisha. Atholl. Tyronne. Magestic. Did you know that something as simple as a first name can make the difference in whether you even get the interview? Last weekend we were doing a focus group, and one of the mock jurors had a truly unique first name. One of a kind. She was African-American. It reminded us of this research, and we wanted to post about it since the findings never exactly went viral (as perhaps they should have).

We are only going to cover one research article but there are several out there if you are interested in learning more. This one is more recent and that is why we chose it. Unfortunately, it doesn’t matter if the studies are 10 years old or fairly recent: they all say the same thing. When we see names that seem ethnic to us, we are less interested in interviewing the applicant. It comes down to what is familiar and thus comfortable for us. We like common names more than unique names. And we want to hire people who have common names so that we are more comfortable.

After reviewing the literature, the researchers hypothesized that common names would be liked more than unique or ethnic names. To examine the hypothesis, they completed three studies. First, researchers tested sample first names on “working adults and undergraduate business students”. They included “white” (e.g., John, Mary, Robert, Susan), “Russian” (e.g., Vladamir, Sergei, Oksana, Svetlana), “African-American” (e.g., Tyronne, Jamal, Latoya, Tanisha), and “unique” (e.g., Ajax, Atholl, Magestic, Tangerine) names in their sample. The goal of this first study was to make sure the names “fit” the participants’ sense of White, Russian, African-American, and unique first names.

Participants rated the “white” names as common, the “Russian” names as “likely not American”, and saw the African-American names and the “unique” names as different (as opposed to common). Further, common names were liked the most, African-American and Russian names fell somewhere in the middle, and unique names were liked the least.

In the second study, 166 university students enrolled in part-time graduate business courses (61% male, 39% female; 78% White, 4% African-American, 2% Hispanic, 12% Asian/Pacific Islander, and 3% “other”; average age 30 years; average work experience 8.4 years) participated. They were asked to evaluate the names in terms of how unique they were, how much they liked the names, and “how willing they would be to hire people with those names”.

In this study, again, participants liked the unique names less and rated people with those names as less likely to be hired. The “best names” (i.e., most liked and most likely to be hired) in this study were Mary and Robert, and the “worst names” (i.e., least liked and least likely to be hired) were Atholl and Magestic. African-American and Russian names fell between these two extremes in both likability and willingness to hire.

So, in the third study, the researchers wanted to check actual hiring behavior to see if the same findings occurred again. In this study, participants were “105 working adults enrolled in a part-time MBA program who had not participated in either of the earlier two studies”. Average age was 28 (range was 21 to 47); they averaged 6.3 years of work experience; 55% reported they had hiring experience; 82% were White, 2% were African-American, 4% were Asian or Pacific Islander, 3% were Hispanic, and 2% were “other” (7% did not report race); 62% were male and 31% were female (7% did not report gender).

Participants in this study were told to imagine they were hiring a new administrative assistant. They were given a real newspaper ad for an administrative assistant position and a booklet with eight resumes and eight sets of questions regarding hiring. All resumes were designed to present reasonable candidates for the position. Four resumes carried male names and four carried female names, with one name drawn from each of the four name categories. Participants were asked to evaluate how likely they would be to hire each candidate.

We know you think you know what is coming. And (at least in this instance) you are likely wrong. There were no effects by name type or gender. The researchers were surprised by this (as were we) given the strong findings in the first two studies and the plethora of research on the topic.

They offer several ideas for why this happened, chief among them that the participants (as MBA students) wanted to be seen positively and so were very careful to respond in a socially positive manner. Further, the participants took 15 to 20 minutes to review the eight resumes. That is unlikely to happen in a regular workplace, where supervisors scan resumes to make a rough cut and first impressions are more likely to determine whether a resume goes on to an interview or is discarded.

What the authors do say is that their results indicate rejecting unique names is not simply due to racial prejudice. If that were the case, the African-American names would have been the “worst” names for getting an interview. Instead, the studies found that a name’s uniqueness determined which names landed in the “most disliked” category. The researchers think recruiters may well react negatively to unique names, and thus recruit or interview candidates with unique names less often than candidates with common names. Anecdotal evidence supports this theory, both in the States and abroad.

The most obvious omission in the studies above is Hispanic names. Would they trigger special treatment? Even in 2008 (when the research was published), the prominence of Hispanic employees in the US workforce would have made their inclusion an obvious choice. So we will have to wait a bit to find out whether Javier should change his name to James, and whether Amelia stands a better chance if she is known as Emily. It would be a shame; none of us chooses our own name, so judging one another for decisions our parents made when we were born is probably always a shame.

So should people with unique first names change their names for career purposes? The EEOC would beg to differ, and may indeed be differing (although it is not confirmed) with the owner of the Whitten Hotel in Taos, New Mexico. This is instead an issue for every organization to take responsibility for reversing. We may not be comfortable with ethnic-sounding names, but that is very much our problem and not the problem of the individual with the ethnic name. Here’s what you can do:

Use initials on resumes rather than names (and no pictures!) so each candidate is evaluated on skills and abilities rather than on assumptions about names or faces (and the ethnic or racial information they may communicate). See the sketch after this list for what that screening step can look like.

Educate your hiring managers and HR staff about ethnic first name biases. In the process, make it absolutely clear that not only is it a foolish policy to select people based on what their parents thought were nice names, it is likely illegal if construed to be racially tinged. And it usually is.

Promote a work environment that reflects the changing demographics of the country (and the world).
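As promised above, here is a minimal sketch of the initials-only idea, assuming a simple Candidate record of our own invention; the names are hypothetical, and this illustrates name-blind screening generally, not a procedure from the study.

```python
# Minimal sketch of name-blind resume screening: reduce names to initials
# before reviewers see them. Candidate and the sample names are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Candidate:
    name: str
    resume_text: str

def to_initials(full_name: str) -> str:
    """Collapse a name like 'Tanisha Williams' to 'T. W.'."""
    return " ".join(f"{part[0].upper()}." for part in full_name.split() if part)

def redact(candidate: Candidate) -> Candidate:
    return replace(candidate, name=to_initials(candidate.name))

pool = [Candidate("Tanisha Williams", "..."), Candidate("Atholl Graham", "...")]
for screened in map(redact, pool):
    print(screened.name)  # "T. W.", "A. G." -- reviewers evaluate skills, not name cues
```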

Cotton, J., O’Neill, B., & Griffin, A. (2008). The “name game”: Affective and hiring reactions to first names. Journal of Managerial Psychology, 23(1), 18-39. DOI: 10.1108/02683940810849648
