Archive for the ‘Communication’ Category

Research from Twitter: Where are the most bigoted states in the country?

Last week we published a new post on terrific visual evidence from the political arena that quickly made complex (and huge) data sets understandable. This week (and no, this will not become a regular weekly feature) we mine another (and perhaps unexpected) data source: Twitter. While you may have seen Jimmy Kimmel having movie stars read mean tweets about themselves, this visualization of American intolerance is much less amusing.

Although we are based in Texas (which claims the #3 slot for derogatory tweets), we travel all over the country doing pretrial research. As we prepare for that travel, we always investigate the demographic data of a planned venue so we can recruit a sample of mock jurors that reflects the community in which we are doing research. We look at online versions of local newspapers to get a sense of what residents have been reading about, talking about, or experiencing first-hand that might have relevance to our case. And sometimes, depending on the case specifics, we do a lot more research than usual to identify themes we might hear from mock jurors that can be instructive as we write up reports. However, prior to seeing the information we are about to share with you, we had never done Twitter research. From now on, we might need to add that to the list.

Imagine that you are working on a case involving litigants who are minorities, or issues about social justice, or immigrants. You want to have a sense of the biases present in the area, and of where you can find representative opinions about your clients or the other party. You might want to understand the prevalence of biases and their social acceptability within the venue. Are slurs against black people, slurs against Hispanics or Latinos, slurs against women, slurs against the cognitively disabled, slurs against those who are overweight, or just plain nastiness and incivility in general something people feel able to express openly? How can you know? You guessed it. Twitter.

The results can be found on Twitter, courtesy of its geotagging function, which allows you to “map” hate speech. The mining was done by a residential search engine site (Abodo) so that people looking for a new place to live can either steer clear of certain localities or improve their chances of a next-door neighbor who shares a particular favorite form of bigotry. It could also, though, be a simple place to take a look at how biases (aka online hate speech) might interact with your specific case facts.
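For the curious, here is a minimal sketch of what this kind of mining could look like. This is our illustration, not Abodo’s actual pipeline: it assumes the old Twitter v1.1 streaming API as wrapped by tweepy 3.x (since retired), and the SLUR_TERMS list and coords_to_state() helper are hypothetical placeholders.

# A rough sketch (not Abodo's actual pipeline) of mining geotagged tweets
# for a list of tracked terms and tallying them by state. Assumes the old
# Twitter v1.1 streaming API via tweepy 3.x; SLUR_TERMS and
# coords_to_state() are hypothetical placeholders.
import tweepy
from collections import Counter

SLUR_TERMS = {"example_slur_1", "example_slur_2"}  # hypothetical placeholder list

def coords_to_state(lon, lat):
    # Hypothetical reverse-geocoding stub; a real version would do a
    # point-in-polygon lookup against state boundary shapefiles.
    return "UNKNOWN"

class GeoSlurListener(tweepy.StreamListener):
    def __init__(self):
        super().__init__()
        self.counts = Counter()  # matching tweets tallied per state

    def on_status(self, status):
        # Only tweets carrying exact coordinates can be mapped to a state.
        if status.coordinates:
            text = status.text.lower()
            if any(term in text for term in SLUR_TERMS):
                lon, lat = status.coordinates["coordinates"]
                self.counts[coords_to_state(lon, lat)] += 1

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=GeoSlurListener())
# Bounding box roughly covering the continental US; keyword matching is
# done client-side because Twitter ORs `track` and `locations` together.
stream.filter(locations=[-125.0, 24.0, -66.0, 50.0])

A real analysis would also need to normalize those per-state counts against each state’s overall tweet volume; otherwise populous states would dominate simply by tweeting more. Which brings us to the maps themselves.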

Some of the maps do not fit our stereotypes of various states. Take, for example, the least bigoted states: Wyoming, Montana, Vermont, South Dakota, Idaho, Arkansas, Minnesota, Maine, North Dakota, and Wisconsin. Some of these states are often stereotyped as full of very bigoted people, and yet Abodo’s data visualization maps do not flag them as exceptionally bigoted. And therein lies the issue: some areas of the country, especially rural areas, don’t tweet nearly as much as others.

Twitter participation is relatively rare among adults on the internet (only about 23% of all online adults, according to Pew Research Center). So these maps geotagging hate speech may not be representative of all your neighbors (we’re talking about the 77% of online adults who are not on Twitter). When we evaluate barriers to evidence-based jury decision-making, understanding prevalent social attitudes is crucial. These Twitter maps are often not accurate enough by themselves, but they are a small window into potentially big problems. And that was Abodo’s goal in publishing the results. Go see the rest for yourself. It’s eye-opening and disheartening, with only a few brighter spots.

https://www.abodo.com/blog/tolerance-in-america/


Myth-busting: “Today’s adults have a shorter attention span than a goldfish”

FALSE! Alas, even though Microsoft has popularized this notion of a shrinking attention span, it is simply not true. Or at least, there is no proof it is true. The study the falsehood was based on was not even looking at attention span; it was looking at multitasking while browsing the web. To add insult to injury for the authors (who actually are academics), they do not even use the word goldfish in their article. Academics who’ve been misquoted or misinterpreted by the media are shaking their heads around the globe. This distortion of research by the popular press for the sake of sensational stories isn’t new, but for those who do the work, it is pretty disturbing. Reporters often do little fact-checking with the geeks who make the world go ‘round, because it’s hard, and it often takes the edge off a catchy story. Once the first misinterpretation is published, the skewed reports drift farther and farther from the research they purportedly rely on. Alas…

Okay. So what happened here? Microsoft apparently commissioned a 2015 non-peer-reviewed study to examine how internet browsing had changed over time—that is, how long do surfers look at a page before moving on? The study was then misinterpreted (really misinterpreted), with spurious comparison information added about how adult attention spans were shrinking—an assertion unsupported and unaddressed even by the Microsoft study. The misinformation was picked up by the New York Times and Time Magazine, as well as numerous other mainstream media sites. Each represented the data as scientific truth stemming from a paper commissioned by Microsoft. The only problem was, it wasn’t true.

The table below is another example of how the work was misinterpreted—it presents the human (and goldfish) attention span as the real focus of the paper, which could hardly be further from the truth. The bottom half of the table (Internet Browsing Statistics) is indeed taken from the article Microsoft commissioned on how browsing patterns on the internet have changed over time. The top half (Attention Span Statistics), however, is not, and is totally unrelated to the study Microsoft commissioned. And none of it has been validated or otherwise shown to mean anything at all.

[Table: the widely circulated “Microsoft study” attention span and browsing statistics]

(If you have trouble reading this table, here is the original source.)

You can find the text of the complete article commissioned by Microsoft here. Open it as a PDF file and search it for “goldfish”. You won’t find it. Nada. The study was not designed to look at the human attention span, nor was it designed to compare human attention spans to that of a goldfish. It was designed to look at how advances in web technology have changed how we surf the web—because Microsoft wants to figure out how to make the most of web surfing.

We are fortunate to have fact-checkers on the web, particularly when it comes to topics like data visualization. PolicyViz does a thorough job of debunking this myth, as does a writer posting on LinkedIn. They both want everyone to STOP comparing people to goldfish! We concur. We would also love to see people using their common sense and questioning sensational claims. “The average attention span of a goldfish”? Really? And what is the significance of any of those memory-lapse statistics? Has that always been the case? Is it different now? Why should we care?

From a litigation advocacy perspective, there are two key lessons here. First, pay no attention to comparisons of your jurors to goldfish. Instead, do things like chunking your information into 10-minute segments—that factoid actually is supported by research on learning, not just drummed up by a marketing representative. If jurors do not pay attention, it likely isn’t because of declining attention spans, but because your presentation did not speak to their values, attitudes, and beliefs. Test your presentations pretrial and make sure real people pay attention and understand.

And second, be very aware of how easily people are seduced by juicy factoids based on unproven or false data, just because they are amusing or seem to support some preexisting but uninformed suspicion. Cleverness often sells.


If a picture paints a thousand words, then this post has more than 2,000 words

We are big fans of how visual evidence can take very complicated ideas and make them easy to grasp by allowing those who are puzzled to “see” the complex big picture. Recently, we saw two really good examples of how to take complex issues and make them simple enough for the layperson to grasp. Both examples are from the political realm and both are based on easily fact-checked data. But the questions they answer come from a lot of data that would be very difficult to make sense of without these images.

1. Who chose these presidential candidates?

Here’s one from the New York Times that was posted on an infographics site (FlowingData.com). As it happens, only 9% of Americans chose Hillary Clinton and Donald Trump as our two main political party candidates. To “see” which 9% gave us these two candidates, visit the New York Times site for an interactive version of the graph. It is a very cool and intuitive way to help viewers grasp a complicated concept! As you look at it, consider how it might apply to the ways your jurors need to grasp complex ideas.


2. Which politicians lie more?

PolitiFact is an independent fact-checking website. They pay attention to what politicians say and then rate each statement on a scale running from true, through shades of half-truth and lying, down to “pants on fire” (demonstrably false). They illustrate the ratings with their trademarked Truth-O-Meter graphic.

Enter Robert Mann, whose work was featured over at DataViz (described as a site to get you excited about data, one graphic image at a time). Mann took a compilation of more than 50 statements made since 2007 by well-known politicians and (using the PolitiFact ratings) put all of the statements into a comprehensive chart to show us who lies more.

When some commenters questioned the accuracy of his graph and wondered just which PolitiFact analyses were used, a colleague of his (Jim Taylor) stepped up to explain why Mann was accurate and how to fact-check the fact-checkers—which, thanks to the internet, is a pretty straightforward thing to do these days. Additionally, Mann himself has published an explanation of the data behind the graph, specifically addressing how he chose whom to include and whether he “cherry-picked” statements (that would be a no).

[Chart: Robert Mann’s compilation of PolitiFact ratings showing who lies more]

From a litigation advocacy perspective, if you can get graphics that tell as much of a story as these visuals do, you go a long way toward juror persuasion. Our next post will be an example of bad data visualization, because while terrific examples like the ones in this post are rare, there are plenty of losers out there.
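For readers who want to try this at home, here is a minimal sketch of how a chart like Mann’s gets built: one stacked horizontal bar per speaker, one colored segment per Truth-O-Meter rating. This is our illustration, not Mann’s actual code, and the speakers and percentages are hypothetical, purely to show the mechanics.

# A minimal sketch (not Mann's actual code) of a PolitiFact-style stacked
# bar chart. Segment widths are the share of each speaker's fact-checked
# statements at each rating. All numbers are hypothetical, for illustration.
import matplotlib.pyplot as plt
import numpy as np

ratings = ["True", "Mostly True", "Half True", "Mostly False", "False", "Pants on Fire"]
colors = ["#1a9850", "#91cf60", "#d9ef8b", "#fee08b", "#fc8d59", "#d73027"]
speakers = ["Speaker A", "Speaker B", "Speaker C"]
shares = np.array([
    [25, 25, 20, 15, 10, 5],   # each row sums to 100 (%)
    [10, 15, 20, 25, 20, 10],
    [5, 10, 15, 25, 30, 15],
])

fig, ax = plt.subplots(figsize=(8, 3))
left = np.zeros(len(speakers))
for i, rating in enumerate(ratings):
    # Stack each rating's segment to the right of the previous ones.
    ax.barh(speakers, shares[:, i], left=left, color=colors[i], label=rating)
    left += shares[:, i]

ax.set_xlabel("Share of fact-checked statements (%)")
ax.legend(ncol=3, fontsize=8, loc="upper center", bbox_to_anchor=(0.5, -0.25))
plt.tight_layout()
plt.show()

Swap in real PolitiFact tallies and sort the rows by falsehood share, and the picture argues for itself; that is exactly the persuasive power we want trial graphics to have.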


Another explanation for poor eyewitness IDs: Memory Blindness

This isn’t really about bad memory—it’s about something much scarier: the power of others to modify your memory without your awareness. New research out of California tells us that it is possible to change the statements of a person giving testimony in such a way that they may not even notice! To make matters worse, the altered testimony may be so firmly accepted as truth by the fact-teller that they even develop a false memory supporting the account.

The researchers label this effect “memory blindness” and define it as our failure to recognize our own memories. For those of you who remember the flurry of controversies about implanted memories and “False Memory Syndrome” (such as the McMartin preschool scandal that began in 1983), and the research done by Ralph Underwager, Elizabeth Loftus, and Richard Ofshe, this will sound familiar. Suggestibility was seen as something fairly common in children, but this research addresses how susceptible adults are, too.

You may think this impossible, but the researchers found it not only possible but disturbingly easy to achieve. They conducted two experiments. In the first, they showed 165 undergraduate participants a slideshow of a woman interacting with three different people, one of whom ultimately stole her wallet. Fifteen minutes after they had watched the slideshow, participants were asked questions similar to what police would ask (e.g., how tall was the thief, what was the thief wearing) and their responses were written down. Fifteen minutes after that, the participants were shown their responses in written form, but the researchers had randomly changed three of the answers so they were incorrect. The researchers let another 15 minutes pass and then asked the participants the same police-type questions to see if they changed their answers.

The majority of the group did not notice their responses had been altered and, when asked the questions the second time, repeated not what they had initially reported but the incorrect information inserted by the researchers.

As an aside, only 18% said they thought “something was odd” in how the experiment was conducted. The researchers do not know what the other 82% were thinking but had to assume they did not notice anything amiss with their responses.

In the second experiment, the researchers had 379 participants watch a slideshow of a man stealing a radio from a car. This time, instead of being asked what they had seen happen, participants were asked to pick the thief out of a photo lineup (with “relatively dissimilar faces”). The misinformation in this second study consisted of telling participants they had selected a different person from the lineup than the one they had originally identified.

Over half (53.7%) changed their answer in the final photo array to match the false feedback—which means that only 46.3% stuck with the person they had originally identified.

The researchers say that eyewitnesses given typed copies of their statements to sign may not notice errors (whether from typographical mistakes or more nefarious causes), and that merely reviewing an incorrect statement may contaminate their memories. Even though almost anyone would say they wouldn’t fall for this kind of mistake, the majority of participants did not notice the changes and modified their reports to match the inaccurate versions of their past statements.

Still others might say the police would never intentionally alter statements, and to them we would recommend a review of the Hillsborough disaster in the UK (more than 25 years ago), in which almost 100 people were crushed to death at a football match. A recent inquest uncovered that eyewitness testimony had been “deliberately altered” by the police.

It is disturbing to realize that our memories can be so easily manipulated by researchers, and more disturbing to see examples of the same thing done by the police. While we’ve blogged before about the unreliability of eyewitness testimony, this is certainly another reason to question the memory of those who assert they “saw it with their own eyes”. If memory can be altered with a delay as short as 15 minutes, it can certainly be altered over the time it takes a case to come to trial.

Cochran, K. J., Greenspan, R. L., Bogart, D. F., & Loftus, E. F. (2016). Memory blindness: Altered memory reports lead to distortion in eyewitness memory. Memory & Cognition. PMID: 26884087.


Gallup poll says US religious groups disagree on five key moral issues

Earlier this week we wrote a post about how to invoke morality as a persuasive strategy with your jurors. Now Gallup has helped us by identifying the moral values most Americans agree on and the five about which they most disagree.

Gallup has measured views on moral issues each year since 2001 as part of its tracking of attitude shifts on social issues. They assign respondents to one of five religious groups (no religion, Jewish, Catholic, Protestant, Mormon) and then measure their attitudes on various social issues to determine what they see as moral and not moral. True, it is not a complete religious typology, but it is an interesting start.

This time they vary a bit from their typical single (annual) survey presentation by combining all their data from 2001 through 2016: “Results for this Gallup poll are based on combined telephone interviews in Gallup’s 2001 through 2016 annual Values and Beliefs poll, conducted each May with random samples of U.S. adults, aged 18 and older, living in all 50 U.S. states and the District of Columbia”. This gives them a total sample of 16,754 Americans opining on moral issues.

Here are the moral issues on which most religious groups in the US generally agree, whether “morally acceptable” or “not morally acceptable”:

Divorce, the death penalty, wearing clothing made of animal fur, and medical testing on animals are all viewed as morally acceptable, with more than 50% of respondents agreeing.

On the other hand, suicide, cloning humans, polygamy, and extramarital affairs are seen as not morally acceptable (that is, fewer than 50% of Americans surveyed agreed they were morally acceptable behaviors).

And here are the moral issues on which religious groups in the US generally disagree (that is, some groups see them as acceptable while others do not):

Abortion, doctor-assisted suicide, cloning animals, gay-lesbian relations, and having a baby outside marriage.

We’d consider these five to be “hot button” issues, which may make jurors close their minds to the facts of your case rather than considering the circumstances involved. Intriguingly, one of the religious groups measured (the Mormons) was distinctly different when it came to views on premarital sex, stem-cell research, and gambling.

Mormons are more likely than other religious groups to view stem-cell research negatively, by a slight margin (54%). They see premarital sex as clearly morally unacceptable (71%), and gambling is viewed askance as well (63% say it is morally unacceptable).

While it is important to stay abreast of research pointing toward new litigation advocacy strategies (like our post on “making it moral”), it is also important to keep up with changing attitudes toward social issues and with how religious beliefs and affiliations may produce attitudes that differ from the norm. Know your venue, know your jurors, and keep up to date as societal attitudes shift and sway.
