Archive for the ‘Witness Preparation’ Category

Those of us who’ve been around for a while have heard this repeatedly. But, lest you think times are changing, here’s some sobering data from a March 2017 report co-edited by a Michigan State University College of Law professor.

From the beginning, this is a disturbing report. Here’s how it starts:

African-Americans are only 13% of the American population but a majority of innocent defendants wrongfully convicted of crimes and later exonerated. They constitute 47% of the 1,900 exonerations listed in the National Registry of Exonerations (as of October 2016), and the great majority of more than 1,800 additional innocent defendants who were framed and convicted of crimes in 15 large-scale police scandals and later cleared in “group exonerations”.

The report focuses on murder, sexual assault and drug crimes. To stay brief, we will give you highlights only of the murder statistics for Black defendants. Once you see those, we think you will want to review the whole of this very recent document.

Here are the statistics on Black defendants accused of murder.

Judging from exonerations, innocent black people are about seven times more likely to be convicted of murder than innocent white people.

African-American prisoners who are convicted of murder are about 50% more likely to be innocent than other convicted murderers.

The convictions that led to murder exonerations with black defendants were 22% more likely to include misconduct by police officers than those with white defendants.

In addition, on average black murder exonerees spent three years longer in prison before release than white murder exonerees, and those sentenced to death spent four years longer.

Many of the convictions of African-American murder exonerees were affected by a wide range of types of racial discrimination, from unconscious bias and institutional discrimination to explicit racism.

If you represent Black defendants, these are realities you know. The report is not that long and you can read it and see the consistency of how having black skin gives you less of a shot at justice. One day, we’d like to see the report telling us that courtrooms are color blind, but we are nowhere near that goal.

Samuel R. Gross, Maurice Possley, & Klara Stephens (2017). Race and Wrongful Convictions in the United States. UC Irvine: National Registry of Exonerations. 



Comments Off on Your Black client is much more likely to be wrongfully convicted

In 2014, we wrote about research investigating how people reacted when a witness wore a veil such as some forms of the hijab or the niqab. Below are some of the findings we described in that research.

We’ve written a number of times about bias against Muslims. But here’s a nice article with an easy-to-incorporate finding on how to reduce bias against your female client who wears a Muslim head-covering. (In case you have forgotten, we’ve already written about head-coverings for the Muslim man.)

The graphic illustrating this post shows the variety of head-coverings Muslim women might wear and the initial findings (as to which head covering style results in the most bias) will probably not surprise you. Researchers did four studies to see how people reacted to Muslim women wearing veils. They consistently found these reactions:

Responses were more negative when the Muslim woman wore a veil of any kind compared to no veil at all.

When the various veils were compared, the niqab or burqa (where only the eyes are exposed or even the eyes are covered) were seen most negatively.

Today’s research goes beyond bias caused by face veils and looks at whether observers are able to detect deception in witnesses wearing veils (as compared to those not wearing veils). The researchers cite three fairly recent (post-2000) cases in which judges in the USA, the UK, and Canada ruled that witnesses cannot wear the niqab when testifying, in part, say the researchers, because the judges believed it necessary to see a person’s face to detect deception.

The researchers decided to test that assumption by comparing the ability to detect deception when a testifying witness wore a face-covering veil versus when the witness did not. They ran a study in Canada with 232 participants, then a second study with participants from Canada, the UK, and the Netherlands (291 participants in total), and came to a perhaps surprising conclusion: while detection of deception in unveiled witnesses was no better than chance, the same was not true for those witnesses who wore veils.

“Observers were more accurate in detecting deception in witnesses who wore niqabs or hijab than those who did not veil.”

The researchers say that (contrary to the assumptions underlying court decisions in three countries) a witness’s veil did not hamper lie detection but rather improved it. Why? They offer several hypotheses:

Researchers think participants in the “veiled” condition may have interpreted “eye gaze information” more accurately.

Participants had less visual information to attend to and thus were more likely to base their decisions on verbal than non-verbal information.

In short, the researchers think the situation forced their participants to rely more on verbal behavior and to focus their attention on the eyes of the witness in the veiled condition. This is consistent with the research we’ve covered in our multiple posts on deception detection. That research mentions many things that aid deception detection: narrowing your focus from multiple cues to just a few (or even one), examining eyebrows, having certain personality characteristics of your own, how much the witness uses profanity, even how long it has been since the witness used a bathroom, and much more. And then there are all of the things jurors often believe point to deception that truly do not help them identify who is a truth-teller and who is a liar.

In this research, participants could examine eyebrows in the veiled condition, and their focus was certainly narrowed so they were less likely to be distracted by irrelevancies; that alone likely improved their ability to detect deception. This interesting study tells us that both the common reliance we see among mock jurors on non-verbal indicators of deception and the court rulings since 2000 are outdated when it comes to jurors’ ability to detect deception in a veiled witness. As the researchers say in their article title, less is actually more when it comes to detecting deception.

We made some recommendations to reduce bias against your veil-wearing client back in 2014 and we would still make those recommendations today.

Here they are:

The researchers say that for the least bias, if a religious Muslim woman wants to wear a head-covering, the hijab is likely the best choice. That may, however, not be an option given her religious beliefs.

In either case, this research suggests that giving jurors information about your client’s choice to wear a Muslim head-covering (of any style) will reduce negative assumptions.

The very process of sharing the reasons for wearing a head-covering with jurors gives them the opportunity for emotional connection with your client. Her sharing those reasons allows them to ‘see’ her individuality and religious conviction.

We’d call that both making your client more similar to the jurors (through the use of universal values) and giving jurors an opportunity to see “beneath the head-covering” to the woman herself.

Leach AM, Ammar N, England DN, Remigio LM, Kleinberg B, & Verschuere BJ (2016). Less is more? Detecting lies in veiled witnesses. Law and Human Behavior, 40 (4), 401-10 PMID: 27348716


Comments Off on Identifying deception when the witness wears a face-covering veil

When we began this blog in 2009, the reality that facts don’t matter was one of the first posts we wrote. We wrote again about this reality back in 2011. And we’ve written about it several times since then so…here we go again!

In this new era of fake news and fake news allegations, we’ve seen a surge in the number of “fact checkers” employed by the media to verify the accuracy of statements made by people in this country’s leadership. Some think the publicizing of fact checking can be effective against the spread of misinformation. New research (conducted during the 2016 Presidential election) tells us (yet again) that while fact checking is certainly of value, it depends on whether your intended audience is listening.

That is, while fact checking helped study participants understand what was true and not true, that knowledge made no difference in their voting behavior.

While that disturbing reality sinks in, here’s a brief summary of the research, which was published in the journal Royal Society Open Science and concentrated on statements (both factual and inaccurate) made by candidate Trump during the Republican primary campaign of 2016. The researchers conducted their research online with 2,023 participants. As part of the study, participants were presented with four inaccurate statements and four accurate statements made by candidate Trump (you can see the list of statements in the article itself, but they include misstatements on the unemployment rate and the relationship between vaccines and autism). Sometimes the statements were attributed to Trump and other times they were not attributed to any of the candidates. Then, inaccurate statements were corrected using non-partisan sources such as the Bureau of Labor Statistics. So far so good. When the researchers corrected the false statements, belief in those statements fell across the board.

That is, belief in the Trump falsehoods fell for Trump-supporting Republicans, Republicans favoring other candidates, and for Democrats.

However, the researchers went on to examine whom the supporters intended to vote for, and the correction of misinformation (and participants’ reported awareness of the statements’ inaccuracy) made no difference in whom the Republican participants planned to vote for. The only participants less likely to vote for Trump were the Democrats (who’d not planned to vote for him anyway).

The researchers conclude that while fact-checking can change people’s beliefs, their strength of partisanship has an effect on the strength of the change when it comes to voting intention. And, perhaps not surprisingly, the researchers wonder just what would have to happen to change voting intention in the face of strong partisan beliefs. They suggest that people “use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates”.

If you don’t think that makes sense, you are not alone (we don’t think it makes much sense either). For years, we have believed (and seen it borne out time after time) that political affiliation is not a difference that makes a difference when it comes to decision-making on litigation cases. Yet, we are seeing increasing amounts of research telling us the USA is so split along partisan lines that perhaps, at least right now, it is a difference that makes a difference. We still have not seen it in our work but you can bet we are watching it closely in ongoing pretrial research. Stay tuned.

Swire, B., Berinsky, A., Lewandowsky, S., & Ecker, U. (2017). Processing political misinformation: comprehending the Trump phenomenon. Royal Society Open Science, 4 (3) DOI: 10.1098/rsos.160802


Comments Off on Facts [still] don’t matter: the 2017 edition 

So maybe it doesn’t pay to be beautiful  

Wednesday, March 1, 2017
posted by Douglas Keene

Or at least, maybe there is no “ugliness penalty” if you are not beautiful. We’ve written a number of times here about the many benefits given to those who are seen as beautiful or attractive. This paper debunks the stereotype and says that salary goes beyond appearance and individual differences matter too.

The researchers used a nationally representative US data set (from the National Longitudinal Study of Adolescent Health, aka “Add Health”) with “precise and repeated measures of physical attractiveness”. This data set includes researcher ratings of each participant’s physical attractiveness (on a five-point scale) at four different points over a 13-year period. And what did they find? Overall, say the authors, the “beauty premium” completely disappeared when other factors (e.g., health, intelligence, better personality traits) were controlled for statistically.

“Physically more attractive workers may earn more, not necessarily because they are more beautiful, but because they are healthier, more intelligent, and have better personality traits conducive to higher earnings, such as being more Conscientious, more Extraverted, and less Neurotic,” explains Kanazawa.

Other research would say that beauty or attractiveness could account for some of these other personality characteristics as they can be shaped by how others respond to us. As the authors discuss their findings, they mention this reality and comment that (because the dataset ended at age 29) they are unable to account for the impact of life experience on Neuroticism (for example).

“To the extent that physically less attractive individuals are more likely to have negative life experiences, physical attractiveness may still be an ultimate cause of earnings via Neuroticism.”

However, there was also evidence for an “ugliness premium” (which is the opposite of an ugliness penalty)—in which the less attractive you were, the more you were paid. In this dataset, these were the people rated as “very unattractive” and, oddly, they always earned more than those rated as “unattractive”. And, even more surprising, sometimes the “very unattractive” earned more than those described as “average-looking” or even “attractive”.

The authors tell us the reason this sort of finding was not reported in earlier research was that the “very unattractive” and “unattractive” groups were often lumped together in a “below average” category that prevented researchers from seeing the benefits of being “very unattractive”.

Overall, say the authors, there is some evidence for the beauty premium but no evidence for the ugliness penalty. Further, there is strong evidence for the (very) ugliness premium. They point out that this survey did not continue after age 29 and thus cannot answer the question of whether the beauty premium or the ugliness penalty is cumulative over a working career. On the other hand, the inclusion of attractiveness ratings in a data set is highly unusual, and the authors hope more researchers will include these ratings in future data sets.

“Physical attractiveness is a very neglected variable in social science data, and no other longitudinal data sets on a representative sample measures it as precisely as Add Health does.”

From a litigation advocacy perspective, we think beauty goes a long way in a party, a witness, and even an attorney. On the other hand, there can be a beauty backlash, so you need to watch for that in pretrial research as well. The likability factor is also very important and even an unattractive witness can seem more appealing when likable. (You can see our more than 200 posts on witness preparation here.)

From a law office management perspective, this is also an area to which you need to pay special attention. You will want to modify procedures so that promotions and salary increases are based on objective performance data and not on gender, beauty, age, ethnicity, disability status and so on. (You can see 60+ posts on law office management here.)

[We want to give you full disclosure regarding the research report cited in this post. The senior author is a very controversial figure whom colleagues have criticized as unreliable and/or as a researcher who personifies “bad science”. He has been criticized for many things and fired from several writing positions due to the negative and public reactions to his work. You can make your own judgments as to the merit of this research but we wanted you to have the full picture.]

Kanazawa, S., & Still, M.C. (2017). Is there really a beauty premium or an ugliness penalty on earnings? Journal of Business and Psychology.


Comments Off on So maybe it doesn’t pay to be beautiful  

Here’s another combination post offering multiple tidbits for you to stay up-to-date on new research and publications that have emerged on things you need to know. We tend to publish these when we’ve read a whole lot more than we can blog about and want to make sure you don’t miss the information.

Juror questions during trial and the prevalence of electronic and social media research

The National Center for State Courts just published a study, authored by a judge and appearing in the Pennsylvania Lawyer, on whether allowing jurors to ask questions during trial will help resolve issues of electronic and social media research during trial. The judge-author suggests that judicial directives not to conduct any form of research (the instructions usually itemize various forms of social media as examples of “what not to do”) do not stop the research from happening; they simply make the research surreptitious rather than public. Since the publication is in the Pennsylvania Lawyer, the focus is on Pennsylvania jury instructions, but the article also discusses how other venues have used (and controlled) juror questions during trial. The article offers suggestions developed in the subcommittee on civil jury instructions. It is well worth a read if you have questions about the practice of allowing juror questions.

We should question alibis and the weight we place on them during jury deliberations

Given all the concerns about the accuracy of eyewitness testimony, it only makes sense that we should also closely examine alibis and whether we simply accept them as true. A new article in Pacific Standard magazine says we need to pay attention to alibis, as new research tells us that the accuracy of alibis resembles the vagaries of faulty eyewitness testimony. According to the new research, we tend not to remember mundane events (like where we were on August 17, 2009). The authors of the study say that the wrong people can end up in jail due to alibi inconsistency and eyewitness misidentification.

The curious impact of donning a police uniform

New research published in Frontiers in Psychology tells us that putting on a police uniform automatically affects how we see others and creates a bias against those we consider of lower social status. Essentially, say the researchers, the uniform itself causes attentional shifts (likely due to the authority the uniform communicates), resulting in biased judgment of those considered to be of lower status (in this study, those wearing hoodies were identified as having lower social status). The researchers think it possible that police officers, once in uniform, may perceive threat where none exists.

Identifying lies with fMRI machines

We’ve written about identifying deception using fMRIs frequently at this blog and here’s a four-page “knowledge brief” from the MacArthur Foundation Research Network on Law and Neuroscience. You can also download this summary at SSRN. This is a terrific (and brief) summary on everything you need to know about what fMRI machines can tell us about deception and what they cannot tell us about deception. You could think of this as a primer on fMRIs and how they work (and don’t work) as well as a guide to deposition testimony of an expert witness touting the deception-identifying abilities of the machine. This resource is very worth your time.

Civile, C., & Obhi, S.S. (2017). Students Wearing Police Uniforms Exhibit Biased Attention toward Individuals Wearing Hoodies. Frontiers in Psychology, 8 (February 6, 2017).


Comments Off on Juror questions during trial, alibis, police uniforms, and fMRIs and lie detection