Archive for the ‘Forensic evidence’ Category


When litigation cases rely on science or highly technical information, it is critical to help jurors understand the information underlying the case at a level that makes sense to them. If they do not understand your “science”, they will simply guess which party to vote for or “follow the crowd”. Here’s an example of what researchers found when they asked the crowd which fields of science it sees as most precise (and therefore reliable).

You can see from the graphic illustrating this post that too many people are watching CSI shows on TV. When the public rates forensic science as more “certain” than nanotechnology, aerospace engineering, and even mechanical physics, we have a problem! The authors actually agree with us in this press release:

“The map shows that perceptions held by the public may not reflect the reality of scientific study,” Broomell said. “For example, psychology is perceived as the least precise while forensics is perceived as the most precise. However, forensics is plagued by many of the same uncertainties as psychology that involve predicting human behavior with limited evidence.”

Be that as it may, when these Carnegie Mellon researchers set out to see which branches of science the public feels most certain about, that is what they found. It echoes the frustration we often see from our attorney-clients when what they have presented is not what our mock jurors have retained.

We’ve also talked about the newer findings on political polarization coloring reactions to almost everything. The researchers likewise found political differences in the evaluation of whole fields of science (again quoting the Carnegie Mellon press release):

While political affiliations are not the only factor motivating how science is perceived, the researchers did find that sciences that potentially conflict with a person’s ideology are judged as being more uncertain. “Our political atmosphere is changing. Alternative facts and contradicting narratives affect and heighten uncertainty. Nevertheless, we must continue scientific research. This means we must find a way to engage uncertainty in a way that speaks to the public’s concerns,” Broomell said.

In other words, people believe what they choose to believe, and you cannot predict how (or whether) they will engage. Finally, the researchers make a comment we are hearing more and more in the mass media: essentially, the responses these participants gave bore no apparent relation to the facts.

However, our results also suggest that evaluations of specific research results by the general public (such as those produced by climate change, or the link between autism and vaccination) may not be strongly influenced by accurate information about the scientific research field that produced the results.

This is an area of lament from many who deal in data and facts. We are living in a post-expert world (and some would say post-truth and post-facts). So what are you to do?

From a litigation advocacy perspective, this study tells us how important it is to make scientific research relevant and common-sense (or even counter-intuitive) to your jurors. They need to understand it and have it make sense to them or be allowed to revel in the counter-intuitive nature of the findings.

Feeling comfortable with the “science” (whatever it may be) is a much better way to ensure consistency from your jurors than relying on the chart illustrating this post to predict how listeners will react to the particular science upon which your case relies.

Broomell, S., & Kane, P. (2017). Public perception and communication of scientific uncertainty. Journal of Experimental Psychology: General, 146(2), 286–304. DOI: 10.1037/xge0000260

Image taken from the article itself

Which science is most “certain” according to the American public?

Those of us who’ve been around for a while have heard this repeatedly. But lest you think times are changing, here is some sobering data from a March 2017 report co-edited by a Michigan State University College of Law professor.

From the beginning, this is a disturbing report. Here’s how it starts:

African-Americans are only 13% of the American population but a majority of innocent defendants wrongfully convicted of crimes and later exonerated. They constitute 47% of the 1,900 exonerations listed in the National Registry of Exonerations (as of October 2016), and the great majority of more than 1,800 additional innocent defendants who were framed and convicted of crimes in 15 large-scale police scandals and later cleared in “group exonerations”.

The report focuses on murder, sexual assault, and drug crimes. To stay brief, we will give you only the highlights of the murder statistics for Black defendants. Once you see those, we think you will want to review the whole of this very recent document.

Here are the statistics on Black defendants accused of murder.

Judging from exonerations, innocent black people are about seven times more likely to be convicted of murder than innocent white people.

African-American prisoners who are convicted of murder are about 50% more likely to be innocent than other convicted murderers.

The convictions that led to murder exonerations with black defendants were 22% more likely to include misconduct by police officers than those with white defendants.

In addition, on average black murder exonerees spent three years longer in prison before release than white murder exonerees, and those sentenced to death spent four years longer.

Many of the convictions of African-American murder exonerees were affected by a wide range of types of racial discrimination, from unconscious bias and institutional discrimination to explicit racism.

If you represent Black defendants, these are realities you know. The report is not long, and reading it shows the consistency with which having black skin gives you less of a shot at justice. One day we’d like to see a report telling us that courtrooms are color-blind, but we are nowhere near that goal.

Gross, S. R., Possley, M., & Stephens, K. (2017). Race and Wrongful Convictions in the United States. UC Irvine: National Registry of Exonerations.


Your Black client is much more likely to be wrongfully convicted

When my kids were younger, I used to talk to them about the difference between intent and impact as they struggled to understand the varying reactions of people to their behavior. Back in 2009, we posted on some new research showing that we reacted more indignantly when bad deeds were done “on purpose”. Here is some of what we wrote then and you may want to visit that post in full as well:

This is an intriguing study because it speaks to the heart of telling the emotional story at trial.  You want jurors to have an emotional response—a connection to your story, to your client. You want them to ‘want to’ find for your client, and see him or her as a worthy recipient of their support. What this research tells us is that if the pain inflicted on your client was ‘intentional’, jurors may have a stronger emotional response to it. [snip]

Your goal is to light the fire of moral indignation in the minds of the jurors. You want to answer both aspects of the common juror refrain “it may be legal but it sure isn’t right”. Show them it isn’t right. Show them it isn’t legal. Give them facts to buttress their feelings in deliberations.

It is research we often consider when we hear that common refrain from our mock jurors: “it may be legal but it sure isn’t right”. But this is eight years later, and technology has advanced to the point that we now have research suggesting a brain scan can show whether someone was acting “knowingly” as opposed to “recklessly”.

We are grateful the researchers point out that their technique “represents a proof of concept, and not yet a usable tool”. Nevertheless, expect to hear this one coming to a courtroom before too long (much like the other neurolaw defenses we’ve covered here before).

Here’s what they did. The researchers used “neuroimaging and machine-learning techniques” (fMRI scans analyzed with machine-learning classifiers) to display the varying brain activity related to whether the defendant “knew he was carrying drugs” or was “merely aware of a risk that he was”. They clarify that this question of “criminal intent” is what criminal juries must determine: in other words, was the defendant’s behavior “knowing” or was it “reckless”?

While there have been studies using fMRIs before this one, the authors say there are “no fMRI studies [snip] that have attempted to determine whether and how the ‘culpable mental states’ map onto differential activations in the human brain”. In other words, if you know you are behaving illegally do different parts of your brain “light up” as compared to when you are aware you might be acting illegally but proceed recklessly.

fMRIs are expensive, but the researchers scanned 40 participants (20 men and 20 women). Half the participants were told they were carrying a suitcase containing contraband (the “knowing” condition), and half were told their suitcase might contain contraband (the “reckless” condition). After that introduction, into the fMRI machines they went.

Those in the knowing condition (who knew they were carrying contraband) were more likely to “light up” in the anterior insula (said by the authors to be involved in the assessment of risk and uncertainty) and the dorsomedial prefrontal cortex (said by the authors to be involved in assessing probabilities).

Those in the reckless condition were more likely to “light up” in the occipital cortex (said by the authors to reflect higher uncertainty).
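For readers curious about the classification step, the basic logic can be sketched in miniature. To be clear, this is not the authors’ actual pipeline (they worked with real voxel-level fMRI data and more sophisticated machine-learning tools); it is a toy illustration with invented numbers, using a simple nearest-centroid rule on two simulated “activation” features:

```python
# Toy sketch of classifying "knowing" vs. "reckless" from brain-activation
# patterns. All feature values here are invented for illustration only.
import random

random.seed(0)

def simulate_subject(condition):
    # Two invented features standing in for regional activation levels:
    # index 0 ~ anterior insula (higher in "knowing"), index 1 ~ occipital
    # cortex (higher in "reckless"), per the pattern the authors describe.
    if condition == "knowing":
        return [random.gauss(1.0, 0.4), random.gauss(0.0, 0.4)]
    return [random.gauss(0.0, 0.4), random.gauss(1.0, 0.4)]

def centroid(vectors):
    # Average activation pattern across a group of subjects.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    # Nearest-centroid rule: assign the condition whose mean pattern
    # is closest (squared Euclidean distance) to this subject's pattern.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist(x, centroids[c]))

# 20 "training" subjects per condition, 10 held-out test subjects per condition.
train = {c: [simulate_subject(c) for _ in range(20)]
         for c in ("knowing", "reckless")}
test = [(c, simulate_subject(c))
        for c in ("knowing", "reckless") for _ in range(10)]

centroids = {c: centroid(vs) for c, vs in train.items()}
accuracy = sum(classify(x, centroids) == c for c, x in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Real analyses work with thousands of voxels and careful cross-validation; the point here is only the underlying logic of classifying a held-out brain pattern by its similarity to each condition’s average pattern.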

The researchers comment on the small sample size and other issues with their study that preclude generalizability. For this show of reticence and respect for statistical realities, we are grateful. The reality is that, no matter what areas of the brain light up, we cannot know whether that reflects the difference between “knowing” and “reckless” or is simply a response to the level of risk. Not to mention, these were imagined behaviors and not real ones.

Our mock jurors have been very suspicious of neurolaw findings and whether you can “prove” they mean what the researchers say they mean. Neurolaw developments remain a very interesting but “not ready for prime time” area of research, or perhaps better said, “not ready for a Daubert challenge”. If you are interested in knowing more about neurolaw, here’s a review of the book Law and Neuroscience.

Vilares, I., Wesley, M. J., Ahn, W. Y., Bonnie, R. J., Hoffman, M., Jones, O. D., Morse, S. J., Yaffe, G., Lohrenz, T., & Montague, P. R. (2017). Predicting the knowledge-recklessness distinction in the human brain. Proceedings of the National Academy of Sciences, 114(12), 3222–3227. PMID: 28289225

Criminal defense? Brain scans could show whether “they did it on purpose”

We all want our expert witnesses to be influential with jurors. But when you have an expert testifying about forensic science (like fingerprint or DNA identification) what part of the testimony is going to influence jurors the most? Will it be the science? The technology used by the witness to interpret and understand the data? Or some characteristic of the witness? A new study tells us what jurors find most influential as they make decisions about your case.

You may find these results distressing (or you may breathe a sigh of relief over them). The researchers were interested in seeing to what degree each of the following was persuasive to jurors:

the “science” (i.e., how the method the witness used to reach their findings has been tested and validated),

the “technology” (i.e., whether the technology is older or is the new, whiz-bang technology of the latest findings), or

the “individual characteristics” of the witness (i.e., education and experience).

So was it the whiz-bang of the technology or the validation of the method used? Nope. The individual background and experience of the expert witness was what most persuaded jurors. In other words, credibility turned on whether jurors found the expert witness qualified, with the experience base to understand the science and communicate the findings effectively. And in our experience, the likability and personal appeal of the expert is a significant additional factor that goes well beyond credentials.

Courts have long asserted standards for the admissibility of scientific evidence and testimony. Jurors always insist they need to understand the science in order to judge the merits of a technological or scientific dispute, but in truth most jurors tire of trying to figure it out fairly quickly unless they have a background that gives them a head start. Ultimately, for those who struggle to understand the evidence, the messenger (i.e., your expert witness) becomes a crucial part of it.

Many of us also believe that the latest technology is more persuasive than tired, old-fashioned, low-tech ways of interpreting data. But in this study, jurors compared fingerprint experts using either whiz-bang technology (computerized matching of prints) or a visual scan of the fingerprints using the ACE-V method (analyze, compare, evaluate, and verify). While the ACE-V method may “sound” good to jurors, the National Academy of Sciences (NAS) report in 2009 stated that ACE-V was “not specific enough to qualify as a validated method”.

Despite ACE-V’s lack of validation, the experienced expert won out over the technology. The researchers thought perhaps jurors were not confident that the witness using whiz-bang technology knew enough to interpret the results accurately.

So, what it came down to both times (across two experiments) was juror evaluations of the experience of the testifying forensic scientist. Researchers said the jurors “leaned on the experience of the testifying forensic scientist to guide their assessments of the soundness of his [sic] findings”. In another section of the paper, the researchers opine that witness “experience serves as a proxy for scientific validity”.

Even more disturbing, the researchers asked participants for the “total number of college and graduate level classes in science, math, and logic that you have completed”. They thought those who were more educated in scientific methods would focus more on the scientific validity of the analysis used by the testifying expert. This was not the case.

“It may be that jurors simply didn’t perceive a connection between the scientific validation of a forensic technique and its accuracy.”

From a litigation advocacy perspective, this speaks to the importance of educating jurors, at a level they can understand, about the science behind the expert’s interpretation. We have blogged before about using skepticism in direct examination, and that approach (wherein your expert discusses the methods used by the other expert and explains why your expert’s strategy is more reliable and valid) would be a good way to discredit opposing counsel’s expert.

Overall, when your case relies on science and technology, use pretrial research to ensure jurors understand enough of the science to make educated and informed decisions about the evidence. If you do not teach at a level they understand, they will likely fall back, like the jurors in this research, on their intuition about witness experience and training (and probably on how likable, knowledgeable, confident, and trustworthy the expert is) rather than on whether the expert used credible methods to analyze the evidence.

Koehler, J., Schweitzer, N., Saks, M., & McQuiston, D. (2016). Science, technology, or the expert witness: What influences jurors’ judgments about forensic science testimony? Psychology, Public Policy, and Law, 22(4), 401–413. DOI: 10.1037/law0000103

Forensic Science Testimony: What most influences jurors?

A recent symposium for IT executives included a presentation that pitched the idea of genetically screening job applicants for traits like “honesty, leadership, being a team player, and having a high level of emotional intelligence”. While we think you may want to hang onto your checkbook if offered this sort of service, it is a disturbing outgrowth of the burgeoning research into genetic testing for almost everything. Here is a quote from the Seeker website, which brought this possibility to our attention:

Although federal and state laws prohibit employers from requesting or using an employee’s genetic information, genetic testing is mainstream. Millions of people voluntarily pay to have their genomes analyzed thanks to inexpensive DNA kits available from companies like Ancestry DNA, Genome, 23andMe, Family Tree, to name a few. And research is moving forward in fields such as psychiatric genetics, trying to find correlations between genes and behavior.

“We fully appreciated the lack of legality and some of the issues with the science,” Furlonger told Seeker by email. “Nonetheless, it seems clear that work is being undertaken and therefore the current state should not be ignored.”

We are glad they appreciate the “lack of legality”. (Some researchers do not acknowledge the legal concerns, like this group recommending you hire the “good psychopath” by testing applicants pre-hire.) The best answer here is that there is no gene for leadership (or honesty, or being a team player, or having high emotional intelligence), and there is no way testing of this sort would be useful to a company trying to figure out whom to hire.

Neurolaw researchers (like Hank Greely) are speaking up against this strategy:

“Why would an employer rely on imperfect, and generally weak, associations between genes and test scores instead of relying directly on the test scores?” said Henry Greely, director of the Center for Law and the Biosciences at Stanford University and chair of the steering committee of the Center for Biomedical Ethics. It’s like running, he said. Rather than look for genetic variations that indicate whether someone is a good sprinter, just watch the person sprint. That ought to tell you all you need to know.

We agree, and we are glad to have voices of reason speaking out against the desire to “push the hiring envelope” into areas that make no sense and violate medical privacy (as well as statistical integrity). While genetic testing cannot tell you anything about the purported target traits, it can tell you things about the person that should not be a factor in hiring (including gender, possibly ethnicity, and medical issues). Will genetic testing results become a tool that worsens the problems women and non-Asian minorities face in breaking into STEM fields? Here’s what we wrote in August 2016 when we came across the “good psychopath” workplace fit test. We think it applies to this idea too.

From a law office management perspective, we really would urge rejecting this sort of strategy. What they seem to intimate is that you want to find the 10% of the psychopathic population who have moderate psychopathic tendencies; then divide them into primary and secondary psychopaths; and then figure out which of the primary psychopaths have really good social skills, so their behaviors will not wreak havoc in your workplace.

Putting on our duly licensed Psychologist hats for a moment, the distinction seems to be a very slippery slope. Secondary psychopaths are trouble from the beginning. Primary psychopaths have better social skills so they can manage the day-to-day more successfully, but under stress they are going to create havoc, too. And we have never seen a trial team that isn’t under terrific stress. It is the nature of litigation, and stress tolerances need to be higher than average, not a potential area of weakness.

The authors put a troubling amount of faith in a psychological trait scale when you can assess the same things by looking at work history and length of relationships, and by keeping your own warning signs on high alert during the interview process. Use your intuition about whether someone will be a good fit. It is also risky to assume you can “get around” the Americans with Disabilities Act by using the PPI-R scale with job applicants when what you are measuring is psychopathy and the resulting goodness of fit in your workplace.

And a high-functioning psychopathic attorney is just the kind of person to drag you through a lawsuit by claiming that you rejected him or her based on an ADA protected factor.

Whoa! A hiring strategy we really do NOT want to see happen!