Archive for the ‘Case Preparation’ Category
Most of us think we know more than we actually do, and sometimes that sense is taken to an extreme that can be annoying (as well as inaccurate). Two years ago, we wrote about a study on modulating political extremism and noted that its recommended strategy was similar to one we use to topple self-appointed “experts” in litigation research and at trial. Now we have another study that uses the same strategy but significantly shortens the time it takes for the speaker to reassess their (lack of) knowledge.
The researchers say the belief that we actually understand the workings of ordinary things (like a vacuum cleaner) when we really do not is called “the illusion of explanatory depth”. And they mention the paper we blogged about back in 2014, which recommended asking people to offer a detailed explanation of their understanding, at which point most come to realize they really do not understand (for example, how the vacuum cleaner works) as much as they thought they did. Even if they cling to their belief that they are an expert anyhow, their ability to persuade others is undermined. It works well to unseat a self-appointed expert, but it does take a little time. In truth, the goal of asking for the explanation in pretrial research isn’t to embarrass anyone, but rather to understand how someone got sidetracked onto a rabbit trail that could distract an actual juror. We discovered that it also had some salubrious secondary benefits, though…
New research tells us it really is not necessary to have people generate those full explanations that take up time. Instead, asking the “expert” to reflect briefly, but in a very specific way, on the extent of their knowledge is often enough to shake their over-confidence and help them understand they really do not understand how a “vacuum cleaner” works. The researchers conclude that
“reflection on explanatory ability is a rare metacognitive tool in the arsenal to combat our proclivity to over-estimate understanding”.
In other words, the question provides a way to get the know-it-all to stop and assess their actual knowledge accurately and acknowledge their actual lack of understanding. So, here’s how it works. The researchers asked participants in their nine experiments to
“Carefully reflect on your ability to explain to an expert, in a step-by-step, causally-connected manner, with no gaps in your story how the object works”.
And here’s what is truly amazing. It didn’t matter whether participants were asked to “reflect” for 5 seconds or for 20 seconds: either way, the prompt was a shortcut to accurate self-assessment. The researchers report that, across their nine experiments, the brief “reflecting” intervention was up to 20x faster than a full verbal explanation for high-complexity objects.
The researchers tried other instructions (like “carefully reflect on your understanding of how the object works” or “type out your full explanation as if you were explaining to an expert in a step-by-step, causally-connected manner, with no gaps in your story how the object works”) and determined that neither worked as well as the directive quoted above to “carefully reflect on your ability to explain to an expert in a step-by-step, causally-connected manner with no gaps in your story as to how the object works”.
And, as in our 2014 blog post, the strategy even works to soften extreme political beliefs and attitudes. Something about the reflection task results in participants suddenly “seeing” the complexity of an object (the vacuum cleaner) or the complexity of a political policy, and they become much more willing to back away from their self-proclaimed expert status. As an added bonus, the effect works best on high-complexity objects (e.g., the vacuum cleaner) as compared to low-complexity objects (e.g., a manually operated can opener).
The researchers think this strategy works because it requires a shift from the vague and abstract (e.g., how well do you understand how a vacuum cleaner works) to the specific and concrete (e.g., judge how well you understand how the parts of an object enable it to work). That subtle shift from abstract to concrete calls for a “mechanistic” understanding in the desired explanation, which makes the difference in the individual’s ability to accurately assess their (lack of) knowledge.
Another reason the strategy works is that the person reflecting almost immediately sees the number of steps it would take to explain how a complex object works, and realizes they would only be able to explain a small percentage of the total steps involved.
From a litigation advocacy perspective, this is a potentially powerful tool for helping jurors be open to hearing how something or some process works. You can use it directed at yourself, for example, while examining a witness.
“You know, Dr. Johnson, I really thought I knew how a vacuum cleaner worked and then I stopped to think about how I would explain how the different parts all work together to an expert in a step-by-step fashion, and I decided to call you as a witness here instead.” (This will allow jurors to check in internally and realize they also do not know how a vacuum cleaner really works.)
Then, continuing with the vacuum cleaner example, your expert witness can say something like, “It’s a lot more complicated than you might think. Do you want me to explain the whole thing in great detail, or are you asking me to talk about how this one widget in dispute works to modulate the level of suction?”
You can then instruct the witness to focus on whatever level of detail serves the cause. Perhaps s/he explains the role of the widget but also gives a brief summary of how the overall vacuum cleaner works and why the widget in dispute is essential (or not).
It’s a really amazing thing when you see how quickly and non-defensively an “expert” will acknowledge their “gaps in causal knowledge” (as the researchers call it). We have never had a mock juror become angry over being asked to educate the group but they have always sheepishly admitted they are not quite the fount of information they previously thought they were!
Johnson, D. R., Murphy, M. P., & Messer, R. M. (2016). Reflecting on explanatory ability: A mechanism for detecting gaps in causal knowledge. Journal of Experimental Psychology: General, 145(5), 573–588. PMID: 26999047
You know what ‘creepy’ is, and in the movie The Silence of the Lambs, Anthony Hopkins personified creepiness. While it may be hard to believe, no one has ever “pinned down” what makes a person creepy. Since there must be a need for such information, enter academic Francis McAndrew of Knox College (in Galesburg, Illinois), with an impressive effort.
First he educates us on what creepiness is—as though we needed him to do that. We all know what constitutes “creepiness” and what results in us being “creeped out” but he does a pretty good job of defining it.
“Creepiness is anxiety aroused by the ambiguity of whether there is something to fear or not and/or by the ambiguity of the precise nature of the threat (e.g., sexual, physical violence, contamination, et cetera) that might be present. Such uncertainty results in a paralysis as to how one should respond.”
So in order to begin what will likely be a long academic exploration (he already has tenure!) on the topic of creepiness, he constructed a measure of just what “normal people” think is creepy. He asked 1,341 people (1,029 females and 312 males, ranging in age from 18 to 77 with an average age of 28.97, via internet survey) to answer some questions about a hypothetical “creepy person” a friend had encountered. He asked them to rate the person’s physical appearance, behavior, and intentions on a scale from 1 (normal) to 5 (creepy). He later asked them to rate occupations and hobbies on a “creepiness scale”.
And here is some of what he found:
Participants were asked if “creepy individuals” were more often male or female. Both male and female participants thought men were more likely to be creepy.
Females were more likely to perceive a sexual threat or sexual interest from a creepy person than were males.
The creepiest occupations were: clown, taxidermist, sex shop owner, and funeral director. (Public service announcement: The full list of occupations deemed “creepy” was in the article and we carefully reviewed it. Neither attorneys nor psychologists were on the creepiness scale, although college professors were. Be careful out there.)
The creepiest hobbies were: collecting things (like dolls, insects, reptiles, or body parts such as teeth, bones, or fingernails); variations on ‘watching’ others, including bird watching (who knows what they are really doing?); taxidermy; and a fascination with pornography or exotic sexual activities.
Older participants had less alarm over creepy people, were less likely to feel physical or sexual threat from a creeper and had less anxiety over interacting with a creepy person.
Finally, survey participants were convinced that creepy people do not know they are creepy.
Essentially, what this research says is that it is the uncertainty or ambiguity surrounding the creepy person that leads us to think they are a potential threat. It’s good for us to recognize potential threats in our environment, although that birdwatcher wariness is a little odd, unless the concern is that they are really Peeping Toms and the birding interest is a transparent ruse. And it appears that is precisely what our alarm over encountering someone creepy serves to do: detect potential threats.
From a litigation advocacy perspective, this falls into the category of “be aware of the impression that witnesses create in jurors”. If you are prepping a witness and it occurs to you that “this person takes a while to warm up to”, consider what impression they created in you before the warmth took over.
If you conclude that you felt wary of them until they described X or Y, or told you a story about their family or background that you found reassuring—you might have a problem witness. Testing witnesses for credibility and likability is very worthwhile, and it can give you some ideas about how to reduce their potential for “creepiness”.
As an extra piece of information for you, here’s a video that is awkward but not really creepy (at least by the researcher’s definition).
McAndrew, F., & Koehnke, S. (2016). On the nature of creepiness. New Ideas in Psychology, 43, 10–15. DOI: 10.1016/j.newideapsych.2016.03.003
When I was younger, I would have moments of clarity I referred to as epiphanies. I learned pretty quickly that if I did not somehow reinforce that epiphany in my mind, I would forget it—only to (sometimes) realize it again at some point in the future.
So now, when I am working on a project and have a seemingly idle association, I write it down so it doesn’t disappear, and I often find that idle association turns out to be very informative later on. These insights are what some refer to as “aha!” moments, and today’s research article focuses on just how accurate and intuitive those moments can be for all of us. In fact, according to this research, these “insight solutions” are correct more often than our analytic solutions. Albert Einstein once described his own insights as “great speculative leaps” to a conclusion, followed by tracing back the connections to verify the idea. (You can read the entire article here courtesy of the senior author.)
Today’s researchers wanted to know how accurate insights would be when compared to analytical solutions. The researchers had participants in four studies take on puzzle-solving tasks. One study used only linguistic puzzles, one used only visual puzzles, and the last two used puzzles with both linguistic and visual elements. The participants had a set period of time (15 or 16 seconds, depending on the experiment) in which to solve the puzzles, and each experiment contained between 50 and 180 puzzles.
Here is an example of a linguistic puzzle used in the research. These words would appear on a computer screen:
crab pine sauce
Participants were asked to offer a word that would fit all of them to make a compound word (apple, in this case). As soon as the participants had solved the puzzle, they would hit a button and say their answer and then tell the experimenter whether they had derived their answer via analytical thinking or insight (they had received training in how to tell the difference between the two). The researchers say that the insight solutions were overwhelmingly more correct than the analytical thinking solutions.
In the linguistic puzzles, 94% of the responses classified as insight were correct, compared to 78% of the analytical responses.
In the visual puzzles, 78% of the insight oriented responses were correct compared to only 42% of the analytic responses.
Additionally, solutions offered during the last five seconds of the task had a lower probability of being correct and the majority of those answers were based on analytical thinking.
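Puzzles like “crab pine sauce” come from the compound remote associates paradigm, where one target word must pair with every cue. A minimal sketch of how such a puzzle can be checked might look like the following (the tiny compound-word list is our own illustrative stand-in, not the study’s materials):

```python
# A toy checker for compound remote associates puzzles: given three cue
# words, a candidate solves the puzzle if it forms a known compound word
# with every cue. The COMPOUNDS set below is a small illustrative sample.
COMPOUNDS = {
    "crabapple", "pineapple", "applesauce",
    "pinecone", "saucepan",
}

def solves(cues, candidate):
    """True if candidate forms a known compound (in either order) with every cue."""
    return all(
        (cue + candidate) in COMPOUNDS or (candidate + cue) in COMPOUNDS
        for cue in cues
    )

print(solves(["crab", "pine", "sauce"], "apple"))  # crabapple, pineapple, applesauce
print(solves(["crab", "pine", "sauce"], "cone"))   # pinecone alone is not enough
```

A real implementation would draw on a full compound-word dictionary, but the all-or-nothing check mirrors why a correct answer, once it arrives, feels obvious.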
The researchers say that insightful thinkers tend not to guess but rather, they wait for an aha! moment. And, when an aha! moment does emerge, that solution tends to seem obvious and the individual is certain the solution is correct. The researchers conclude that if you want a creative idea or solution to a problem, it is better to not have a hard deadline for completion. While a drop-dead deadline will get results, they are less likely to get creative results.
The researchers say this is because insight-oriented solutions are an all-or-nothing process, while analytical problem solving is incremental and allows partial information on which the individual can base a guess (which is often incorrect).
From a litigation advocacy perspective, we often see the aha! moment in process during pretrial research with mock jurors. We urge our attorney clients not to draw conclusions for the jurors but rather, to allow jurors to come to their conclusions and solutions. What we see over and over again is that the mock juror who is given enough information to connect the dots but not force-fed a solution—is a juror who is a fierce advocate for one side of the case or the other. You may accept what you are given, but you own what you discover for yourself.
We pay attention during pretrial research and watch for gaps in the case narrative that result in distortions or conspiracy theories about the case and plug those holes for eventual courtroom presentation. We’ve always thought the conclusions drawn by jurors with a road map of what happened were much more powerful than the conclusions presented to jurors by the attorney and this research article shows us why.
Giving jurors an aha! moment as they connect the dots in your case will result in jurors who feel confident in their conclusions and will advocate for you in deliberations.
Salvi, C., Bricolo, E., Kounios, J., Bowden, E., & Beeman, M. (2016). Insight solutions are correct more often than analytic solutions. Thinking & Reasoning, 1–18. DOI: 10.1080/13546783.2016.1141798
You likely remember the story of Pandora’s box (although it turns out the box was actually a jar) from Greek mythology. The story of Pandora was an object lesson in the possible negative outcomes of misplaced curiosity and our research article today would say we haven’t learned the lesson of Pandora’s curiosity.
Researchers in the US wanted to see if they could figure out why people often pursue curiosity even though the results of that pursuit will likely be negative. What the researchers found is that people (some more than others) are so uncomfortable with uncertainty that they will work to resolve it even when they expect negative consequences and no pleasure or long-term benefit. The researchers refer to this as the “perverse side of curiosity”.
It tracks with the old axiom that you can assure failure today, but success requires patience. The researchers conducted four separate experiments to see if they could figure out why we work so hard to resolve uncertainty.
The experiments are somewhat odd. In the first, they had participants click pens that resembled normal ballpoint pens—where each pen was marked with either a red sticker or a green sticker. The participants were told that the red sticker pens would deliver a “painful but harmless” electric shock if clicked but the green sticker pens would not. This was referred to as the certain-outcome condition. Other participants had pens with all yellow stickers and were told that some pens contained the batteries that would shock them and others did not but the outcome was completely uncertain.
While the researchers say the intuitive guess is that more pens would be clicked in the certain-outcome condition (the green or red stickers), more pens were actually clicked in the uncertain-outcome condition (the yellow stickers). The researchers conclude that “curiosity can even lead people to expose themselves to electric shocks”.
In the second study, they used the same idea but each participant was given 20 certain-outcome pens (with red or green stickers) and 10 uncertain-outcome pens (with yellow stickers). The number of pens of each ilk were chosen so that if the participant randomly chose pens to click, they would have clicked twice as many pens with a certain outcome. Again, participants clicked more of the uncertain-outcome pens than the certain-outcome pens.
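The 20-to-10 pen counts in that second study are easy to sanity-check with a short simulation (a sketch under our own assumptions: the pen counts come from the study, but the number of clicks per simulated participant is illustrative):

```python
import random

# Baseline for Experiment 2: with 20 certain-outcome pens and 10
# uncertain-outcome pens, a participant clicking pens purely at random
# should click about twice as many certain pens as uncertain ones.
random.seed(0)

pens = ["certain"] * 20 + ["uncertain"] * 10

def random_clicks(n_clicks=10):
    """Simulate one participant clicking n_clicks distinct pens at random;
    return how many of those clicks hit certain-outcome pens."""
    clicked = random.sample(pens, n_clicks)
    return clicked.count("certain")

trials = 10_000
certain_total = sum(random_clicks() for _ in range(trials))
uncertain_total = trials * 10 - certain_total
print(certain_total / uncertain_total)  # ratio of certain to uncertain clicks
```

The simulated ratio hovers around 2:1, which is exactly the baseline the participants violated by favoring the uncertain-outcome pens.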
Satisfied that the Pandora effect was robust, the researchers moved on to Experiment 3. In this experiment, the pens were abandoned and the researchers employed the sound of either nails on a chalkboard (a negative experience), water pouring into a jar (a positive experience), or an uncertain outcome in which participants would hear one sound or the other unpredictably. Oddly, to “prevent them from feeling bored”, the researchers had the computer play ‘Twinkle, Twinkle, Little Star’ at low volume in the background. (Seriously? How annoying would that be?!) The participants would press buttons to select either nails on a chalkboard, water, or a button marked with a question mark (?). You know what happened.
Despite the protection from boredom offered by a nursery school melody, the participants chose to be ‘surprised’ and pressed the ‘?’ button most often. The researchers also asked the participants to rate how they felt every so often during the experiment and found (shockingly) that the more buttons pressed, the worse the participants felt. The researchers say that “curiosity led people to ‘open the box’ and then suffer”. (They do not say whether the suffering was from pressing buttons or from that nursery rhyme melody.)
For the fourth study, the researchers raised the ante and made all the stimuli negative (“pictures of disgusting insects”), with the uncertainty condition showing a surprise “disgusting insect”. The insects used were a bedbug, a centipede, a cockroach, a mosquito, and a silverfish. (We agree with the researchers that these are disgusting, particularly when magnified.) In this experiment, the participants were told there were 30 covered photos of insects and that they must view three of the pictures. In the certain-outcome condition, the box covering the insect was labeled with its name. In the uncertain-outcome condition, the covered picture displayed only a question mark (?). Once again, participants chose to view more “uncertain condition” insects. But the researchers also found that when participants first predicted how they would feel after viewing the uncertain-outcome insects, they viewed fewer of those insects than participants who made no such prediction.
In other words, say the researchers, “predicting hedonic experiences reduced people’s tendency to open the box when the outcome was a priori uncertain”.
Overall, the researchers concluded that curiosity will lead people to open a “box” even when the outcome is uncertain and negative. However, urging them to “predict hedonic consequences” will decrease their idle curiosity. The researchers think that “curiosity resolution” is not always beneficial, and that considering the possible consequences of resolving one’s curiosity would be prudent. There are risks, they say, in seeking information.
From a litigation advocacy perspective, we think resolving idle curiosity of your jurors is not only beneficial but essential. We’ve seen idle curiosity take mock jurors down countless “rabbit trails” that are almost always extra-evidentiary and result in more confusion than clarity. So we make use of that idle curiosity that these researchers warn against—and use their curiosity about potentially distracting side issues that pop up in pretrial research to plug holes in the case narrative.
We want jurors as focused on the evidence as possible (except when we don’t!) and identifying holes in the narrative that lead to “idle curiosity and rumination” is the best way to help jurors avoid the titillation, conspiracy theories, fears, and general over-interpretation of evidence that can occur when a case narrative leaves a perilous hole into which jurors are prone to wander.
Hsee, C. K., & Ruan, B. (2016). The Pandora effect: The power and peril of curiosity. Psychological Science. PMID: 27000178
The study we’re looking at today examines how a scientist’s race, gender, and background play a part in the influence that scientist has on others. The researchers completed five separate experiments to examine whether the race (White, Black, or Asian), gender (male or female), and socioeconomic status (high or low) of scientists made a difference in credibility ratings, and why (when it did). Instead of summarizing all five studies, we will simply tell you that in total there were more than 900 participants across studies in the US, Canada, and India. The researchers had participants read a research report which (conveniently) included a photo of the researcher. As you may have ascertained, the race and gender of the “researchers” pictured in the photos varied so the researchers could test their hypotheses.
Their findings were the same across all five studies and across three countries (US, Canada and India). Essentially, the participants had definite opinions on the credibility of the researchers but it wasn’t about the appearance of the researchers (sex, race). Instead, how the participants perceived the credibility of the researcher in the photographs was dependent on the ideology of the participant themselves.
The researchers used a scale from the mid-1990s called the Social Dominance Orientation (SDO) Scale to assess whether participants were elitist (i.e., wanting to maintain the status quo) or egalitarian (i.e., wanting to level the playing field). The SDO Scale is unlikely to be approved for use in court (due to the language used in it), but the researchers offer examples of elitist and egalitarian beliefs by quoting questions from the SDO.
Sample elitist beliefs:
“Inferior groups should stay in their place”
“It’s OK if some groups have more of a chance in life than others”
Sample egalitarian beliefs:
“All groups should be given an equal chance in life”
“We should strive to make incomes as equal as possible”
As you read these questions and think about the idea of priming (which we’ve blogged about previously) you may have your own ideas as to why the researchers found what they did.
What the researchers found was that elitists thought White male researchers were more credible while egalitarians thought women and people of color were more credible. In other words, elitists were biased toward White men while egalitarians were biased toward women and people of color.
While this finding is interesting, what comes next in the article is very interesting.
A key finding in the work was that the bias toward White men (among elitists) and toward women and people of color (among egalitarians) was strongest when ideologies were at either extreme (very elitist or very egalitarian).
Second, if the researcher pictured in the photo was shown to be of higher status (and thus academically competent) the effects were neutralized.
From a litigation advocacy perspective, what this article says is you want to strike the fringe dwellers on your panel (and we’ve typically agreed with this if your goal is for the jury to reach a verdict). We’ve always said they are too unpredictable and the researchers say they are the most likely to make decisions based on ideology rather than as a considered response to evidence and testimony.
Second, this research suggests you want to clearly establish an identity for the “researcher” that jurors will perceive in the way most beneficial to your case. If it is your witness or client, the more compelling their status is to the kinds of jurors you have, the more comfortable those jurors will be in assuming they are academically competent.
We once were asked to help prepare a witness who was a world-famous expert in a highly technical area of intellectual property. For better or for worse, he was a professor at a famous university in the San Francisco Bay area, with a beard and an eye-catching head of frizzy hair. To many, he looked like Albert Einstein or some other science genius. But to the rural folks from the Eastern District of Texas, he simply looked like an aging hippy. If you are ‘preparing the battle ground’ for an opposing witness, finding ways to undermine their relatability or admirability is worth considering.
To this we would add that you also want to work with witnesses and parties so their testimony shows them to be not only credible, but also trustworthy, likable and confident (without being cocky). We think the idea of showing that your client (whether an individual or an organization) shares values with the jurors heightens their acceptance, even when they are talking about things no juror really understands. If the witness displays the universal values that are strongly held by the jurors, he or she is prone to being seen as “one of us”—and that’s a very good day in court.
Zhu, L. L., Aquino, K., & Vadera, A. K. (2016). What makes professors appear credible: The effect of demographic characteristics and ideological beliefs. The Journal of Applied Psychology. PMID: 26949817