Archive for the ‘Case Presentation’ Category
We’ve written about CRISPR (aka gene editing) before, and even about Americans’ concerns about the use of emerging technologies, and while this post is sort of about CRISPR—it is also about visual evidence done right.
We often work on cases where jurors will need to understand very complex information. It may be a patent case, a complex business litigation case, or something else that is technically daunting—but jurors often need to understand something very complicated. And often that something is very technologically advanced (and thus intimidating to the jurors).
It is almost always a very difficult process for the attorneys in a complex case (in which they have often been buried for years) to see through the many details of a complicated technology and tell a simple (yet accurate) story for jurors. We often test visual evidence in our pretrial research to see what resonates with jurors, what they remember, and what helps them to make sense of abstract and esoteric technology, processes, or patented ideas.
When we see terrific examples of visual evidence (culled from many different areas), we like to share them here to show that there really is a way to take very, very complex facts and details and make them accessible to those who have no experience whatsoever in the area and may be very intimidated by even attempting to understand the information.
Here is just such a video tutorial. This video uses cartoon images and plain language to explain the gene editing technique referred to as CRISPR. While the last parts of the video place it clearly in the pro-CRISPR camp, the first parts explain the technology clearly and succinctly. Because it is in a cartoon format (with which we are all familiar from childhood) it is non-threatening. Since it is visually presented, we are able to understand a tremendous amount of technical information without jargon or numbers that make less technical viewers’ eyes glaze over.
If CRISPR can be explained in a few minutes of cartoons, you can explain anything in ways the most naïve juror can understand. All you need is a fabulous visual evidence consultant. We happen to know a few of them!
New research tells us you may not want to have slow motion videos played at trial if you are the defense attorney. However, if you are the prosecutor—push hard for that video! It’s really a simple lesson: when jurors see slowed down footage of an event, they are more likely to think the person on the screen has acted deliberately (and that will likely mean a more severe sentence and/or verdict).
The researchers say that while the slowed down footage does give the observer a better ability to see what happened clearly, it also creates a “false impression that the actor had more time to premeditate” than when the events are viewed in real time. Their experiments show (among other things) that jurors seeing a crime as calculated rather than impulsive can mean the difference between a “lethal injection and a lesser sentence”. That’s a profound impact from a video.
In a series of experiments, the researchers showed participants a video of an attempted armed robbery where the shop employee was shot and killed. When participants watched the video slowed down, they were 3x more likely to convict the defendant than those who watched the video in real-time.
The researchers reference the trial of John Lewis (still on death row for murdering a police officer in 2007) and the slowed down video used at his trial. Defense attorneys argued on appeal that the slowed down video evidence at his trial made jurors think his actions were premeditated, but prosecutors said since both regular speed and slowed down video were played—jurors were not confused. The judges sided with the prosecutors.
In this set of experiments, researchers tested the notion by showing participants both a real-time speed video and the slowed down version. What they found was that “slow motion replay, compared with regular speed replay, produces systematic differences in judgments of intent”. And, said the researchers, showing both regular speed and slow motion video was “somewhat, albeit not completely, effective in reducing the impact of slow motion on first-degree murder convictions”. In other words, showing both versions does not entirely mitigate the bias introduced by the slow-motion version.
We have all had the experience of watching an action movie in which a car crash or a fight is filmed in slow motion. We watch and think to ourselves “Oh no! Slow down!” before the crash. The slow motion allows us to imagine escape alternatives even while we see the events unfold before us. Showing a shooting in slow motion offers the same sense of “Don’t do it!” and alternatives to the bad outcome that do not occur in real time.
The researchers see this as a question of whether, and under what conditions, slow motion video should even be allowed in court, and think it imperative that the benefits be weighed against potential costs. Their results, they say, support the idea that slow-motion videos influence observers in a more punitive direction than real-time speed. Showing both speeds mitigates the effect to a degree, but not entirely (so the slow motion still has a chilling effect on juror emotions and decisions).
From a litigation advocacy perspective, this is a reference you may want to tuck away to support a motion to not allow slow motion videos to be shown at a trial where you are defense counsel. Slow motion video is most likely to be found desirable by a prosecutor or plaintiff. While it will certainly be a strategy prosecutors will want to employ (to increase the sense of premeditation and intentionality in jurors observing the video), this article was designed to test the decisions made in the John Lewis appeal and could be found persuasive to judges.
Caruso, E., Burns, Z., & Converse, B. (2016). Slow motion increases perceived intent. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1603865113
FALSE! Alas, even though Microsoft has popularized this notion of a shrinking attention span—it is simply not true. Or at least, there is no proof it is true. And the study the falsehood was based on was not even looking at attention span—it was looking at multi-tasking while browsing the web. To add insult to injury for the authors (who actually are academics), they do not even use the word goldfish in their article. Academics who’ve been misquoted or misinterpreted by the media are shaking their heads around the globe. This distorting of research by the popular press for the sake of sensational stories isn’t new, but for those who do the work, it is pretty disturbing. Reporters often do little back-checking with the geeks that make the world go ‘round, because it’s hard, and it often takes the edge off a catchy story. Once the first misinterpretation is published, the skewed reports drift farther and farther from the research they purportedly rely on. Alas…
Okay. So what happened here? Microsoft apparently commissioned a 2015 non-peer-reviewed study to examine how internet browsing had changed over time—that is, how long do surfers look at a page prior to moving on? Then it was misinterpreted (really misinterpreted) with spurious comparison information added about how adult attention spans were shrinking—an assertion unsupported and unaddressed even by the Microsoft study. This misinformation was picked up by the New York Times and Time Magazine as well as numerous other mainstream media sites. Each site represented the data as a scientific truth stemming from a paper commissioned by Microsoft. The only problem was, it wasn’t true.
The table below is another example of how the work was misinterpreted—it misrepresents human (and goldfish) attention spans as the real focus of the paper, which could hardly be further from the truth. The bottom half of the table (Internet Browsing Statistics) is actually taken from the article Microsoft commissioned to look at how browsing patterns on the internet have changed over time. The top half (Attention Span Statistics), however, is not, and is totally unrelated to the study they commissioned. And none of it has been validated or otherwise shown to mean anything at all.
(If you have trouble reading this table, here is the original source.)
You can find the text of the complete article commissioned by Microsoft here. Open it as a pdf file and search it for “goldfish”. You won’t find it. Nada. The study was not designed to look at the human attention span, nor was it designed to compare human attention spans to that of a goldfish. It was designed to look at how advances in web technology had changed how we surf the web. Because Microsoft wants to figure out how to make the most of web surfing.
We are fortunate to have fact-checkers on the web — particularly when it comes to topics like data visualization. PolicyViz does a thorough job of debunking this myth as does a writer posting on LinkedIn. They both want everyone to STOP comparing people to goldfish! We would concur. We would also love to see people using their common sense and questioning sensational claims–“the average attention span of a goldfish”? Really? Or, what is the significance of any of those memory lapse statistics? Has that always been the case? Is it different? Why should we care?
From a litigation advocacy perspective, there are two key lessons here: First, pay no attention to comparisons of your jurors to goldfish. Instead, use strategies like chunking your information into 10-minute segments—that guideline is actually supported by research on learning and not just drummed up by a marketing representative. If jurors do not pay attention, it likely isn’t their declining attention spans, but rather that your presentation did not speak to their values, attitudes, and beliefs. Test your presentations pretrial and make sure real people pay attention and understand.
And second, be very aware of how easily people are seduced by juicy factoids based on data that is unproven or false, just because they are amusing or seem to support some preexisting but uninformed suspicion. Cleverness often sells.
We are big fans of how visual evidence can take very complicated ideas and make them easy to grasp by allowing those who are puzzled to “see” the complex big picture. Recently, we saw two really good examples of how to take complex issues and make them simple enough for the layperson to grasp. Both examples are from the political realm and both are based on easily fact-checked data. But the questions they answer come from a lot of data that would be very difficult to make sense of without these images.
1. Who chose these presidential candidates?
Here’s one from the New York Times that was posted on an infographic site (FlowingData.com). As it happens, only 9% of Americans chose Hillary Clinton and Donald Trump as our two main political party candidates. For an interactive look at how you can “see” which 9% gave us these two candidates, visit the New York Times site for an interactive version of the graph. This is a very cool and very understandable way to allow viewers to grasp what is a very complicated concept! As you read it, consider how it might apply to the ways your jurors need to grasp complex ideas.
2. Which politicians lie more?
PolitiFact is an independent fact-checking website. They pay attention to what politicians say and then rate each statement: true to some degree, sort of lying, lying, or pants on fire (known to be demonstrably false). They illustrate political statements with their trademarked Truth-O-Meter graphic.
Enter Robert Mann, whose work was featured over at DataViz (described as a site to get you excited about data—one graphic image at a time). Mann took a compilation of more than 50 statements made since 2007 by well-known politicians and (using the PolitiFact ratings) put all of the statements into a comprehensive chart to show us who lies more.
When some commenters questioned the accuracy of his graph and wondered just which PolitiFact analyses were used, a colleague of his (Jim Taylor) stepped up to explain why Mann was accurate and how to fact-check the fact-checkers—which, thanks to the internet, is a pretty straightforward thing to do these days. Additionally, Mann himself has issued an explanation of the data behind the graph, which specifically explains how he chose whom to include and whether he “cherry-picked” statements (that would be a no).
From a litigation advocacy perspective, if you can get graphics that tell as much of a story as these visuals do, you go a long way toward juror persuasion. Our next post will be an example of bad data visualization, because while terrific examples like the ones in this post are more rare—there are plenty of losers out there.
Here are a few articles that did not act as a catalyst to stimulate an entire post but that tickled our fancy enough that we wanted to share them with you. Think of them as “rescue items” if you have social anxiety and want to seem scintillating… or something like that.
So have you seen this in the last second?
Here’s an interesting memory study where the researchers found that if participants didn’t know they were going to be tested on things they’d seen repeatedly, they had no idea, when asked, whether they’d seen a specific item before. Specifically, they asked participants to do a simple memory task testing memory for different kinds of information (e.g., numbers, letters, or colors). For example, participants would be shown four characters on a screen, arranged in a square, and asked to report which corner the letter was in (when the other characters were either numbers or colors). The researchers repeated this task many, many times and the participants rarely made mistakes. But then (because researchers cannot leave well enough alone) the researchers asked the participants an unexpected question: which of the four letters appearing on their computer screen had appeared on the previous screen? Only 25% responded correctly (which is chance-level accuracy when guessing among four options). The question was asked again after the following task, but this time it wasn’t a surprise, and participants gave correct answers between 65% and 95% of the time. The researchers call this effect “attribute amnesia” and say it happens when you use a piece of information to perform a task but are then unable to report what that information was as little as a single second later.
Remember that post on uninterrupted eye contact causing hallucinations?
We wrote about it in one of these ‘tidbit’ posts back in 2015 and even included a very awkward video from a Steve Martin/Tina Fey movie. This time researchers were looking for the optimal length of uninterrupted eye contact that would be experienced positively by the most people. Think of this as a potential answer to the question witnesses often have about how long to maintain eye contact with individual jurors, or just use it as a guide for comfortable eye contact with strangers at Starbucks. On average, the nearly 500 participants were most comfortable with eye contact that lasted slightly over three seconds. The majority preferred a duration of between two and five seconds, and no one liked eye contact of less than one second or longer than nine seconds. We conclude that less than a second is too furtive, and longer than nine seconds is intolerably intrusive. One problem with the study was that it used filmed clips rather than actual live interactions, but it is an approximate guide to “normal” eye contact versus “creepy” eye contact.
Oh no! There may be a problem with all those fMRI studies!!!
A new article published in the journal PNAS tells us there is an fMRI software error that could call into question 15 years (and more than 40,000 papers) of fMRI research. We know you are likely thinking of the article on that poor dead salmon who still showed brain activity. This article was cited all over the internet in July of 2016 as proof that all the work done on fMRI machines was likely flawed. Even though the bug was corrected in 2015, it went undetected for more than a decade, and the researchers thought perhaps every study should be replicated to ensure accuracy in the literature upon which we rely. The fMRI software error and the resulting shambles of the literature was seen as a devastating bombshell, with headlines like this one from Forbes suggesting “tens of thousands of fMRI brain studies may be flawed”. Fortunately, hysteria like this is likely why the Neuroskeptic was born and certainly why the Neuroskeptic blog makes such a contribution to knowledge in this field. Is this software glitch really serious? Yes, says the Neuroskeptic. It is a serious problem, but it does not invalidate years of fMRI research. In fact, in an update posted to the Neuroskeptic blog on July 15, 2016, the author of the PNAS paper had requested some corrections to the publication to avoid these sensationalist headlines, but PNAS refused, so he put the updates onto another accessible site. Visit the Neuroskeptic’s excellent blog to read a common-sense and rational explanation of what the fMRI software bug really means and how those familiar with fMRI work have known about this for some time now.
Yes, Virginia—women are still harassed for choosing STEM careers even though it is 2016
You’ve likely heard the lament that there are too few women in STEM careers and that we need to fix the problem. The Atlantic has published a very well-done article on how women are pushed out of STEM careers and that as many as 2 out of 3 women science professors reported being sexually harassed. And those are just the ones who made it through to graduation. The stories of those still in training having photos taken of their breasts, being harassed at conferences, or being hand-fed ice cream by male professors are disturbing. There is also “pregnancy harassment” and stories of PIs (principal investigators on grants who are typically faculty members) insisting pregnant postdocs return to the lab weeks after giving birth and then harassing the postdoc for having “baby brain” and questioning their experimental results. It is well worth your time to read.
Chen, H., & Wyble, B. (2015). Amnesia for object attributes: Failure to report attended information that had just reached conscious awareness. Psychological Science, 26(2), 203-210. PMID: 25564523
Binetti, N., Harrison, C., Coutrot, A., Johnston, A., & Mareschal, I. (2016). Pupil dilation as an index of preferred mutual gaze duration. Royal Society Open Science, 3(7). DOI: 10.1098/rsos.160086