Archive for the ‘Decision-making’ Category

Juror questions during trial, alibis, police uniforms, and fMRIs and lie detection

Here’s another combination post offering multiple tidbits to keep you up-to-date on new research and publications you need to know about. We publish these when we’ve read far more than we can blog about in full and want to make sure you don’t miss the information.

Juror questions during trial and the prevalence of electronic and social media research

The National Center for State Courts just published a study, authored by a judge in the Pennsylvania Lawyer, on whether allowing jurors to ask questions during trial will help resolve issues of electronic and social media research during trial. The judge-author suggests that judicial directives not to conduct any form of research (the instructions usually itemize various forms of social media as examples of “what not to do”) do not stop the research from happening; they simply make it surreptitious rather than public. Since the article appears in the Pennsylvania Lawyer, it focuses on Pennsylvania jury instructions but also discusses how other venues have used (and controlled) juror questions during trial. The article offers suggestions developed by the subcommittee on civil jury instructions. It is well worth a read if you have questions about the practice of allowing juror questions.

We should question alibis and the weight we place on them during jury deliberations

Given all the concerns about the accuracy of eyewitness testimony, it only makes sense to examine alibis just as closely rather than simply accepting them as true. A new article in Pacific Standard magazine says we need to pay attention to alibis because new research suggests they are subject to the same vagaries as faulty eyewitness testimony. According to that research, we tend not to remember mundane events (like where we were on August 17, 2009). The study’s authors say the wrong people can end up in jail due to alibi inconsistency as well as eyewitness misidentification.

The curious impact of donning a police uniform

New research published in Frontiers in Psychology tells us that putting on a police uniform automatically affects how we see others and creates a bias against those we consider to be of lower social status. Essentially, say the researchers, the uniform itself causes attentional shifts (likely due to the authority the uniform communicates) that result in biased judgment of those considered lower status (in this study, people wearing hoodies were perceived as having lower social status). The researchers think it possible that police officers, once in uniform, may perceive threat where none exists.

Identifying lies with fMRI machines

We’ve written frequently at this blog about identifying deception using fMRI, and here is a four-page “knowledge brief” from the MacArthur Foundation Research Network on Law and Neuroscience (also downloadable at SSRN). It is a terrific (and brief) summary of what fMRI machines can and cannot tell us about deception. Think of it as a primer on fMRI and how it works (and doesn’t work), as well as a guide to deposition testimony from an expert witness touting the machine’s deception-identifying abilities. This resource is well worth your time.

Civile, C., & Obhi, S. S. (2017). Students wearing police uniforms exhibit biased attention toward individuals wearing hoodies. Frontiers in Psychology (February 6).

Anti-Muslim and anti-Mexican attitudes create a self-fulfilling prophecy

Lately we’ve heard a lot more anti-immigrant bias expressed in public, and it turns out hate speech breeds hatred of its own. New research has some pretty frightening findings, and you may find it hard to believe such misinformed hatred exists in 2017. Or perhaps you won’t find it hard to believe at all.

We will just share a few of the disturbing findings here:

The researchers (from Northwestern University) showed American participants (recruited online through subject pools and via email through university channels) the ‘Ascent of Man’ diagram (which is apparently popular in research circles and conveniently illustrates this post). They asked participants to identify where they thought whole groups of people belonged on the scale “from the ape-like human ancestor to the modern human”. If you regularly read this blog, you can likely guess what happened.

Participants placed Muslims and Mexican immigrants significantly lower on the scale than they placed Americans as a whole.

In other words, the participants saw Muslim and Mexican immigrants as significantly less than fully human. In an attempt to understand this better, the researchers statistically controlled for conservative views and racial prejudice, but still found differences.

Those participants who dehumanized Muslim and Mexican immigrants by placing them lower on the ‘Ascent of Man’ scale were also more likely to see them as threatening, to withhold sympathy for them and to support measures like increased surveillance, restricted immigration and increased deportation.

Overall, say the researchers, “the correlation between dehumanization and [support for] then-candidate Trump was significantly stronger than the correlation between dehumanization and support for any other Democratic or Republican candidates”.

And what did that dehumanization result in? The researchers asked Muslim and Mexican immigrants to report how dehumanized they felt, and found that the greater the perception of dehumanization, the more likely the individual was to support violent over non-violent collective action.

For example, Mexican immigrants who felt dehumanized by candidate Trump “were more likely to dehumanize him, want to see him personally suffer, and endorse hostile actions such as spitting in his face”.

Further, Muslims who felt dehumanized also favored violent over non-violent collective actions and were less willing to assist in anti-terrorism efforts by law enforcement.

The authors suggest that dehumanizing others has two results:

Those who dehumanize are more likely to support hostile policies.

Those who feel dehumanized feel less integrated into society and are more likely to endorse violent as opposed to nonviolent responses in return (which will reinforce the idea among those who dehumanize that “these people are like animals”).

Ultimately, this results in a “vicious cycle” of what the researchers call meta-dehumanization, and it makes life less safe for all of us. Previous research cited by the authors tells us that marginalization leads to radicalization.

In other words, say the authors, the hate speech a subset of the American public spews toward immigrants may produce radicalization and subsequent violence among those they hate (and dehumanize), and thus make their fear-based prophecy come true.

Kteily, N., & Bruneau, E. (2017). Backlash: The politics and real-world consequences of minority group dehumanization. Personality and Social Psychology Bulletin, 43(1), 87-104.

Countering judicial “rehabilitation”: Tell it to the judge

Anyone who has been in court more than a few times has likely heard a judge “rehabilitate” a potential juror who has expressed bias by asking the juror if they will, in judging “this case”, be “fair, impartial and unbiased”. Why yes, your Honor (say almost all of them). Researchers Mykol Hamilton and Kate Zephyrhawke call this “prehabilitation” and have written several articles over in The Jury Expert on the issue.

But today we have a simple and straightforward blog post so you can tell it to the judge: it simply does not work! You may be familiar with the Mind Hacks blog (one of our favorites) and Tom Stafford. Recently, Stafford wrote a post that seems specifically designed to educate judges on the ineffectiveness of this common practice. According to Dr. Stafford’s post, it’s all about confirmation bias.

He describes a classic (i.e., thirty-year-old) psychological experiment on confirmation bias by Charles Lord and his colleagues. You can see a succinct explanation of this “classic” experiment over at Mind Hacks. Essentially, even though participants in this study were asked to be objective and unbiased, they showed evidence of the same biases they had voiced prior to the experiment.

In other words, the instructions to be fair and unbiased did not work even though the participants voiced their agreement with the importance of being fair and unbiased.

The researchers followed up with a 1984 study in which they asked participants to “consider the opposite” (a strategy we’ve written about here before).

To use the “consider the opposite” strategy, you would ask jurors to simply consider alternatives that could have happened and talk about how they might have occurred just as readily as the real outcome. In research, when participants do this, they are able to see the multiple outcomes that “could have happened” much more clearly and their tunnel vision (aka hindsight bias) surrounding the event dissipates.

In this study, half the participants were asked (again) to be “fair and impartial” and half were asked to “consider the opposite”. The results were powerful. Those in the “consider the opposite” condition “completely overcame the biased assimilation effect”.

That is, they did not make decisions based on preconceptions expressed prior to the experiment.

So. There you go. You can tell it to the judge (or not). If the judge insists on a “prehabilitation” approach, you can incorporate the “consider the opposite” strategy into your opening statement and then show jurors how to apply that strategy in your case-in-chief.

Who would have thought that research three decades old could solve a current-day problem?

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098-2109.

Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47(6), 1231-1243. PMID: 6527215.

The Edelman Trust Barometer: “An era of backlash against authority”

It is disconcerting to watch the political upheaval in this country, but similar things seem to be happening around the world. We just discovered a group that measures societal changes in trust: Edelman has surveyed “tens of thousands of people across dozens of countries” for the past 17 years, measuring levels of trust in business, media, government, and non-governmental organizations (NGOs), which are typically non-profits. According to Edelman, this year is the first time the average level of trust (“to do what is right”) in all four types of institutions has decreased. They also report the following statistics:

71% of survey respondents said government officials are not at all credible or only somewhat credible.

63% said CEOs are not at all credible or only somewhat credible.

60% of respondents trusted “a person like yourself” (in line with trust in a tech expert or an academic). In other words, they say, peers are now on par with experts.

NGOs were the most trusted, business was a close second (only one point behind NGOs), media came in third, and government came in fourth. (These rankings should be viewed skeptically, since the combined overall approval rating was less than 50%.)

Edelman also provides a graphic comparing 2016 and 2017 trust ratings for the four areas surveyed.

In addition to the Executive Summary, you can view the Global Results and watch a video on what Edelman calls a trust implosion. When trust declines, populism rises, says Edelman, and we have seen that internationally as well as here at home.

From a litigation advocacy perspective, perhaps most important for our work are their lessons on how trust has been broken (housed over at Scribd). Here are a few of the lessons we see as related to litigation advocacy:

Leading the list of societal concerns and fears we measured that are commonly associated with populist actions are corruption (69% concerned; 40% fearful); globalization (62% concerned; 27% fearful); eroding social values (56% concerned; 25% fearful); immigration (55% concerned; 28% fearful); and the pace of innovation (51% concerned; 22% fearful).

People are nearly four times more likely to ignore information that supports a position they don’t believe in; don’t regularly listen to those with whom they often disagree (53%); and are more likely to believe search engines (59%) over human editors (41%).

53% agree that the pace of change in business and industry is too fast. They worry about losing their jobs due to lack of training or skills (60%); foreign competitors (60%); immigrants who work for less (58%); jobs moving to cheaper markets (55%); and automation (54%).

The trust crisis demands a new operating model for organizations by which they listen to all stakeholders; provide context on the issues that challenge their lives; engage in dialogue with them; and tap peers, especially employees, to lead communications and advocacy efforts.

We will be paying careful attention to these issues as we pursue pretrial research and litigation advocacy in 2017. The ways people (aka “jurors”) evaluate cases will reflect the kinds of mistrust and alienation this study identifies. Anger is intense, experts are being devalued, we are wary of those different from us, and we are not listening to those with whom we disagree. These states of mind have direct relevance to our efforts to teach, explain, and persuade.

Forensic Science Testimony: What most influences jurors?

We all want our expert witnesses to be influential with jurors. But when you have an expert testifying about forensic science (like fingerprint or DNA identification), what part of the testimony is going to influence jurors the most? Will it be the science? The technology used by the witness to interpret and understand the data? Or some characteristic of the witness? A new study tells us what jurors find most influential as they make decisions about your case.

You may find these results distressing (or you may breathe a sigh of relief over them). The researchers were interested in which of three factors was most persuasive to jurors:

the “science” (i.e., how the method the witness used to reach their findings has been tested and validated),

the “technology” (i.e., whether the technology used is older or the newest whiz-bang equipment), or

the “individual characteristics” of the witness (i.e., education and experience).

So was it the whiz-bang of the technology or the validation of the technology used? Nope. The individual background and experience of the expert witness was what most persuaded jurors. In other words, credibility turned on whether the jurors found the expert witness was qualified and had the experience base to understand the science and communicate their findings effectively. And in our experience, the likability and personal appeal of the expert is a significant additional factor that goes well beyond credentials.

Courts have long asserted standards for admissibility of scientific evidence and testimony. Jurors always insist they need to understand the science in order to judge the merits of a technological or scientific dispute, but in truth most jurors tire of trying to figure it out fairly quickly unless they have a background that gives them a head start. Ultimately, the messenger (i.e., your expert witness) is a crucial part of the evidence, especially for those who struggle to understand the science.

Many of us also believe that the latest technology is more persuasive than tired, old-fashioned, low-tech ways of interpreting data. But in this study, jurors compared fingerprint experts who used either whiz-bang technology (computerized matching of prints) or a visual comparison of the fingerprints using the ACE-V method (analyze, compare, evaluate, and verify). While the ACE-V method may “sound” good to jurors, the 2009 National Academy of Sciences (NAS) report stated that the ACE-V method of fingerprint analysis was “not specific enough to qualify as a validated method”.

Despite the lack of validation of the ACE-V method, the experienced expert won out over the technology. The researchers thought perhaps jurors were not confident that the witness using the whiz-bang technology knew enough about how to interpret the results accurately.

So, in both experiments, what it came down to was jurors’ evaluations of the experience of the testifying forensic scientist. The researchers said jurors “leaned on the experience of the testifying forensic scientist to guide their assessments of the soundness of his [sic] findings”. In another section of the paper, the researchers opine that witness “experience serves as a proxy for scientific validity”.

Even more disturbing, the researchers asked participants for the “total number of college and graduate level classes in science, math, and logic that you have completed”. They thought those who were more educated in scientific methods would focus more on the scientific validity of the analysis used by the testifying expert. This was not the case.

“It may be that jurors simply didn’t perceive a connection between the scientific validation of a forensic technique and its accuracy.”

From a litigation advocacy perspective, this speaks to the importance of educating jurors, at a level they can understand, about the science behind the expert’s interpretation. We have blogged before about using skepticism in direct examination, and that approach (wherein your expert discusses the methods used by the opposing expert and explains why your expert’s strategy is more reliable and valid) would be a good way to discredit opposing counsel’s expert.

Overall, when your case relies on science and technology, use pretrial research to ensure jurors understand enough of the science to make educated and informed decisions about the evidence. If you do not make sure you are teaching at a level they understand, they will likely fall back, like the jurors in this research, on their intuition about witness experience and training (and probably on how likable, knowledgeable, confident, and trustworthy the expert seems) rather than on whether the expert used credible methods to analyze the evidence.

Koehler, J., Schweitzer, N., Saks, M., & McQuiston, D. (2016). Science, technology, or the expert witness: What influences jurors’ judgments about forensic science testimony? Psychology, Public Policy, and Law, 22(4), 401-413. DOI: 10.1037/law0000103.
