Whether you are involved in criminal or civil litigation, before long you are likely to run into a forensic neuropsychologist and a neuropsychological exam. A new article (mostly directed at civil litigation involving adults) discusses 12 forms of bias and how to mitigate them. You may want to review it carefully (or have an expert witness review it carefully) prior to trial. The article is written by three practicing forensic neuropsychologists and is intended to assist the expert witness as well as both sponsoring and examining attorneys. For the purposes of this blog post, which is only meant to raise your awareness of this resource, we will list the 12 forms of bias the authors identify, along with their recommendations on how to mitigate each one. This is an information-rich resource, so for additional background and details, please review the article itself.
Logistical and administrative biases (or how the neuropsychologist has arranged the evaluation and the sources of information upon which they rely).
Conflating clinical and forensic roles. There is a clear distinction between these roles and they should not be mixed. The authors give specific examples and describe the differences between a treating expert and a forensic neuropsychologist charged with assessing and writing a report but not with treatment or advocacy.
Financial/payment bias. The authors describe payment arrangements on a continuum from “straightforward to murky to highly biased”. They recommend a “fee for service” arrangement and offer examples of how alternate arrangements can be questioned in open court.
Referral source bias. The authors describe “Rule 26 disclosure” and how forensic neuropsychologists repeatedly retained by a specific attorney can be seen as “hired guns” by jurors. The authors describe multiple ways you can “see” a referral source bias in a testifying expert.
Self-report bias. The authors describe how some evaluators forget the importance of verifying the examinee’s self-report against workplace, school, and family reports and prior testing to ensure the reports are accurate. They discuss secondary gain, misremembering pre- and post-injury events, and situation-specific amnesia.
Statistical biases (under-utilization of base rates and ignoring normal variance in test scores).
Under-utilization of base rates. Base rates are often confusing for jurors, and it is important that a neuropsychologist use them accurately; yet the authors stress evidence that neuropsychologists are often unaware of relevant base rates and underuse them in their evaluations.
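To see why ignoring base rates matters, consider a simple Bayes’ rule calculation. The numbers below are purely illustrative (they are not taken from the article), but they show how a seemingly accurate test can mislead when the condition it detects is rare:

```python
# Illustrative sketch only: how a low base rate changes what a "positive" result means.
# Hypothetical numbers -- suppose 5% of a population truly has the impairment,
# the test flags 90% of true cases (sensitivity), and it also flags 10% of
# unimpaired examinees (false-positive rate).

base_rate = 0.05        # P(impaired) before any testing
sensitivity = 0.90      # P(positive test | impaired)
false_positive = 0.10   # P(positive test | not impaired)

# Total probability of a positive test result
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# Bayes' rule: probability of genuine impairment given a positive result
p_impaired_given_positive = sensitivity * base_rate / p_positive

print(f"P(impaired | positive test) = {p_impaired_given_positive:.2f}")
```

With these hypothetical numbers the answer is roughly 0.32: despite the test’s 90% hit rate, fewer than a third of positive results reflect true impairment, because unimpaired examinees vastly outnumber impaired ones. An evaluator who reports only the test’s accuracy, without the base rate, invites exactly the confusion the authors warn about.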
Ignoring normal variance in test scores. Another statistical bias is failing to understand normal variance in test scores and thus drawing inappropriate conclusions.
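The variance point can also be made concrete. Even a perfectly healthy examinee who completes a long battery of subtests will usually produce a few low scores by chance alone. A minimal sketch with hypothetical numbers (and a simplifying independence assumption the real subtests do not fully satisfy):

```python
# Illustrative sketch only: with many subtests, some "abnormal" scores are expected.
# Hypothetical setup -- a battery of 20 subtests, where a score is flagged as
# "impaired" when it falls below the 5th percentile (p = 0.05 per test by chance).
# Assumes independent subtests, which overstates the exact figure but not the point.

n_tests = 20
p_low = 0.05  # chance of a sub-cutoff score on any one test for a healthy examinee

# Probability a healthy examinee shows at least one "impaired" score
p_at_least_one = 1 - (1 - p_low) ** n_tests

print(f"P(at least one low score) = {p_at_least_one:.2f}")
```

Under these assumptions the probability is about 0.64, so the majority of healthy examinees would show at least one “impaired” score. A single low score in a long battery is not, by itself, evidence of deficit, which is exactly the inferential error this bias describes.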
Cognitive, personal and attributional biases.
Confirmation bias. This is a bias we often discuss on our blog and it is also a trap for the unwitting evaluator. Essentially, confirmation bias occurs when you rely on your pre-existing beliefs to support your hypotheses rather than testing those hypotheses against the data.
Personal and political bias. While this may seem an obvious bias for the evaluator to guard against, it is commonly seen, according to the authors. Additionally, they discuss a term from the psychotherapy arena, countertransference, and warn that examinee characteristics “such as age, attractiveness, gender, ethnicity and socioeconomic status” could bias the examiner either toward or against the examinee.
Group attribution error. This occurs when the examiner makes an assumption about an individual based on the belief that the “individual’s traits are representative of an entire group”. This extends far beyond race and ethnicity with examples offered of examiners who think everyone with Alzheimer’s should present in a certain fashion or everyone with head injuries should have common symptoms, or that everyone with fibromyalgia has a somatoform disorder.
Diagnosis momentum. This is the tendency for a diagnosis to be seen as unquestionably accurate as increasing numbers of people select that specific diagnosis rather than performing a complete evaluation to ensure the validity of the diagnosis of record. This could obviously have major impact on case outcome.
Good old days bias. This is a bias held by the examinee rather than the examiner that may result in self-reports that over-report the level of past function. This makes the examination of prior records imperative and its presence is often seen as a hallmark of a “psychological process that occurs post-injury”.
Overconfidence. This bias happens when an individual neuropsychologist grows sloppy in their work because they feel experienced enough to “know the truth”.
Naming biases seems to be epidemic, kind of like coming up with clever Twitter hashtags. Ultimately, the point is that people try to make sense of confusing or disruptive thoughts and feelings as quickly and effortlessly as they can, even if it requires torturing the truth. Overall, the authors acknowledge there are countless other biases that exist and this is a starting point for assessment of a forensic neuropsychological evaluation. They offer multiple strategies for the forensic evaluator to defend against biases (and thus for the attorney who wishes to examine potential sources of bias in the report). This is a useful resource to keep on hand and use to assess biases that may be present in court-ordered forensic neuropsychological reports.
Effective trial strategies for reducing biases often come from teaching jurors what the possible biases are, and how making smart and correct judgments requires ignoring or avoiding them. Warn jurors how tempting it can be to race to conclusions, point out some of the pitfalls, and tip them off that being seduced by these false impressions will not only be a source of error but, for everyone who wants to be correct, a source of regret as well.
Richards, P., Geiger, J., & Tussey, C. (2015). The dirty dozen: 12 sources of bias in forensic neuropsychology with ways to mitigate. Psychological Injury and Law, 8(4), 265-280. DOI: 10.1007/s12207-015-9235-1
We’ve written about the bias blind spot here before and now there is an actual scale to measure your specific bias blind spot (since, as it turns out, we all have one or more). You may wish to disagree with the statement that we all have a bias blind spot. That is precisely why it’s called our bias “blind spot”—we cannot see it. In fact, in the sample of 661 adults used to develop the Bias Blind Spot Scale, only 1 person reported being more biased than the average person. The rest of the group thought themselves less biased than others.
As one of the authors says: “People seem to have no idea how biased they are. Whether a good decision-maker or a bad one, everyone thinks that they are less biased than their peers. This susceptibility to the bias blind spot appears to be pervasive, and is unrelated to people’s intelligence, self-esteem, and actual ability to make unbiased judgments and decisions.”
The scale itself is somewhat unusual for social science research. Items are written in very academic prose and describe 14 different kinds of biases. Participants completing the scale are asked to assess the degree to which they themselves exhibit this bias in judgment and decision-making and (in comparison) the degree to which the average American exhibits this bias in judgment and decision-making.
Fourteen biases are assessed in the scale: action-inaction bias, bandwagon effect, confirmation bias, disconfirmation tendency, diffusion of responsibility, escalation of commitment, fundamental attribution error, halo effect, in-group favoritism, ostrich effect, projection bias, self-interest bias, self-serving bias, and stereotyping. And again, participants were asked to rate the extent to which they individually exhibited this bias and the extent to which the “average American” exhibited the bias.
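Although this excerpt does not spell out the researchers’ scoring formula, a blind-spot-style difference score is straightforward to sketch: subtract each self rating from the corresponding “average American” rating, so that a positive value means “others are more biased than I am.” In the minimal illustration below, the item names are real biases from the scale, but the ratings, the 1-7 scale, and the scoring direction are all hypothetical:

```python
# Hypothetical ratings on a 1-7 scale for three of the scale's 14 biases.
# Each entry is (own rating, rating given to the "average American").
ratings = {
    "confirmation bias": (3, 5),
    "halo effect": (2, 5),
    "self-serving bias": (3, 6),
}

# Blind-spot score per item: other-rating minus self-rating.
# Positive values mean "others are more biased than I am."
blind_spot_scores = {
    bias: other - own for bias, (own, other) in ratings.items()
}

# Averaging across items gives one overall blind-spot index.
overall = sum(blind_spot_scores.values()) / len(blind_spot_scores)

print(blind_spot_scores)
print(f"overall blind spot: {overall:.2f}")
```

With these made-up ratings the per-item scores are 2, 3, and 3, for an overall index of about 2.67: this hypothetical respondent, like nearly everyone in the 661-person sample, rates the average American as considerably more biased than themselves.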
Here are three examples of biases included in the scale so you can get a sense of the language of scale items:
Many psychological studies have shown that people react to counter evidence by actually strengthening their beliefs. For example, when exposed to negative evidence about their favorite political candidate, people tend to implicitly counter argue against that evidence, therefore strengthening their favorable feelings toward the candidate. (Confirmation bias)
Research has found that people will make irrational decisions to justify actions they have already taken. For example, when two people engage in a bidding war for an object, they can end up paying much more than the object is worth to justify the initial expenses associated with bidding. (Escalation of commitment)
Psychologists have identified a tendency called the “ostrich effect”, an aversion to learning about potential losses. For example, people may try to avoid bad news by ignoring it. The name comes from the common (but false) legend that ostriches bury their heads in the sand to avoid danger. (Ostrich effect)
According to the researchers, the more blind one is to one’s personal bias, the lower the quality of one’s decision-making, since such people listen less to the advice of others and are less likely to use de-biasing training to improve their decision-making. Obviously, these people are not going to be open to changing their minds based on expert testimony or the agreement of peers. We blogged about a bias blind spot juror (we called her Victoria) a couple of years back. We don’t think this bias blind spot scale is useful as it stands. It is so pedantic and laborious that, for people who are not well-educated, item comprehension is in question. The idea, though, is quite interesting, and hopefully the researchers (or others in the area) will test a more ‘natural language’ version, which we can try out in pretrial research to see if any biases are particularly relevant to our specific cases.
Overall though, the researchers say that “awareness of one’s vulnerability to bias” is an important factor in being able to benefit from training in de-biasing. We interpret this as (once again) the importance of helping jurors be fair by increasing their empathy for others and helping them see the client or party or witness as espousing the same values they themselves hold dear.
Scopelliti, I., Morewedge, C., McCormick, E., Min, H., Lebrecht, S., & Kassam, K. (2015). Bias blind spot: Structure, measurement, and consequences. Management Science. DOI: 10.1287/mnsc.2014.2096
We’ve written about myside bias before here. Myside bias is a subset of confirmation bias, and these researchers say it is also “related to the construct of actively open-minded thinking”. We see myside bias so often during pretrial research and in post-verdict juror interviews that side-stepping it is always at the front of our minds as we consider case narrative development.
In a nutshell, myside bias looks like this:
When challenged, we see and hear things through the lens of our own values and personal experience.
Ultimately, we interpret the meaning of what we see and what we hear in accordance with our own biases.
In other words, we don’t take in what you say. We take in what we (already) believe to be true. We lock onto data points that confirm our own world views. This is the point in pretrial research where our clients jump out of their chairs in the observation room, pace back and forth munching handfuls of peanut M&Ms, and begin talking with their mouths full: “That is NOT what I said!”. It’s a frustrating experience for all involved.
So we were glad to see new research published on the relationship between myside bias and intelligence levels. How wonderful would it be if all we had to do was estimate intellectual function in order to know who would have the highest levels of myside bias? Let me answer that. It would be fabulous! Why, it would almost be too good to be true. And, unfortunately it is. There is absolutely no relationship between myside bias and intelligence. Bummer. In other words, smart or not, we all do it. Sigh.
Researchers summarize years of research they (and others) have done on myside bias.
“The research discussed here shows that in a naturalistic reasoning situation, people of high cognitive ability may be no more likely than people of a low cognitive ability to recognize the need to dampen myside bias when reasoning. High intelligence is no inoculation against myside bias.”
In short, if you are not warned in advance to dampen your myside bias, you don’t do it. However! When warned in advance, people with higher intelligence are more able to dampen it. The researchers say we need to find ways to assess rational thinking skills and myside biases. We say: wait! What we need to do now is think of ways to warn our jurors in advance that they need to set aside their myside biases as they listen to the evidence!
And that strategy has potential. What we are talking about is the ability to think analytically. When looking at potential jurors, we are always examining who will be a natural leader. Natural leaders are often very intelligent and able to think analytically. So we want to alert them to the need to set aside preconceived opinions without asking the tired “can you be fair?” question.
It’s already at work in some routine sorts of ways. Unfortunately, it usually takes the form of dull and empty boilerplate jury instructions such as “Don’t let bias, prejudice or sympathy play any part in your deliberations of this case.” Maybe when that was written decades ago it shocked people into arresting their myside bias, but at this point they skip over it and look for an instruction that makes sense.
And it does make sense, with an effective attorney guiding the process. The problem is that there is a tendency to gloss over it, to make it a point to mention rather than an issue to emphasize. Including in opening statements, witness examinations, or closing arguments the notion that “before I heard the facts, I was naturally inclined to assume X, only to realize after carefully looking at things just how wrong I was at first” can be compelling. It normalizes something that is truly normal, thereby inviting the jury to do the same: to recognize the need to rise above our preconceptions.
We’ve written a lot about ways to “raise the flag” for racial bias awareness. There are times you want to do that and times you don’t want to do that. The same, we would wager, is true about myside bias. We will be experimenting with ways to know when you want to alert jurors to not showing myside bias and when you want to just zip it. We’ll get back to you on this one.
Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking and intelligence. Current Directions in Psychological Science, 22. DOI: 10.1177/0963721413480174
Does font choice really reduce confirmation bias? [That’s the bias where we simply look for confirmation of what we already believe to be true.] New research says it could. What might that mean for jury deliberations?
Last year we saw research showing that when students are asked to study with worksheets containing more difficult to read fonts–they learn more. The “difficult to read” fonts used were Comic Sans Italicized, Monotype Corsiva and Haettenschweiler. While not outlandish like some highly stylized fonts, all of these fonts slowed students down enough so they ultimately received better grades. [The easy-to-read font used was Arial.]
New research (just published in the Journal of Experimental Social Psychology) tells us again that if your font is more difficult to decipher and thus promotes careful and analytic processing, confirmation bias is reduced. In other words, the reader takes a more comprehensive look at opposing views and ultimately makes less biased decisions.
Wow. This research was written up in Pacific Standard Magazine by the ever-creative Tom Jacobs. Tom suggests Congress pass a bill mandating all proposed legislation be printed in Comic Sans font. We have a different idea about what this research suggests but first, here is what the researchers did:
Participants were instructed to read some information about a suspect charged with a crime and then told they would be asked to decide on a verdict given the information at their disposal. Some were asked to provide their responses within a three-minute time frame; others were shown a list of words (simple words like guitar, eagle, glasses, mixer, ocean, table, parade, window, and baseball) and asked to keep those words in mind, as they would be asked about them later. A third group was given neither task.
Then they were asked to read testimony from the suspect’s school psychologist. The testimony described the suspect either positively or negatively. Then they read objective facts about the case, which were purposely ambiguous as to guilt. The case facts were printed in either a “fluent” font [i.e., easy to read] or a “disfluent” font [i.e., not easy to read].
Rather than using varying fonts as the researchers in the 2011 study had done, these researchers messed around with how easy the copy was for the participants to read.
“In the fluent condition, the facts were written in a Times New Roman 16-point font. In the disfluent conditions, participants received a document written in a 12-point Times New Roman font that had been photocopied recursively three times on the lowest contrast setting until the text was significantly degraded, but still readable, which has been shown to induce analytic thinking via disfluency.”
Then the participants gave their verdict [guilty or not guilty] and were also asked to rate the certainty of their verdict. What the researchers found is pretty amazing.
When the participants read the “fluent” font, they were more likely to offer a biased verdict (based on whether they had read a negative or positive description from the school psychologist).
When they read the “disfluent” font, they were less likely to offer a biased verdict–that is, their verdict was not as related to whether the description they had read was negative or positive. Interestingly, those participants with time limits and memory tasks were also more likely to respond in a biased fashion. The researchers thought this showed you need to focus on the task rather than using cognitive resources for other demands at the same time.
The researchers sum up their findings in a single [very powerful] sentence:
“Disfluency may offer an opportunity for better judgment and discourse between opposing positions, ultimately giving what was once an over-looked message, a chance to be seen.”
This research speaks to the preparation of visual evidence. We’ve written before on whether you want your jurors to think or not to think. In essence, this research speaks to the same thing.
If you want jurors to think, present your visual evidence in a slightly more difficult to process font.
If you want them to decide based on their pre-existing beliefs make that font as easy to read as possible.
There is often a lot of grumbling among lawyers when key documents are produced by the opposition with degradation due to age, multiple faxings, copyings, or a faded duplicate. Who knew that it would actually be to the advantage of the opposition when the document is thus rendered “disfluent”? It’s scary to think about presenting demonstratives in a font that’s harder to read–it may be scarier to consider allowing jurors to make decisions based on biases you may not know they have.
Keep their focus on the evidence– make it harder to read!
Hernandez, I., & Preston, J. L. (2012). Disfluency disrupts the confirmation bias. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2012.08.010
We’ve commented on economists writing about psychological principles before on this blog. We tend to enjoy their different perspective. And we especially enjoyed seeing this retraction in the Atlantic by an economist who learned about a form of bias we see often in the courtroom.
Essentially, in the first article, Buturovic and Klein said liberals were less knowledgeable on economic questions than were conservatives. And a firestorm erupted. The authors were accused of either rigging the results or thanked for validating a long-held belief about the inferiority of liberals.
They listened, and they learned. A new study (based on feedback they received from the original work) found a different pattern. For the second study, they added nine questions that balanced the survey, so that items either challenged conservative beliefs or challenged liberal beliefs. This time, the survey found no difference between the two groups. Their retraction goes as follows:
“One year ago, we reported the results of a 2008 Zogby survey that purported to gauge economic enlightenment. [snip] We also found that self-identified Progressives and Liberals did much worse than Conservatives and Libertarians, and this finding generated a lot of controversy. Those results were based on eight questions [snip] that specifically challenged leftist positions and/or reassured conservative and/or libertarian positions, while none had a clear slant against conservatives and/or libertarians.
In a new survey, conducted in December 2010, we supplemented those eight questions with another nine new questions, all specifically challenging conservative and/or libertarian positions (and often reassuring leftist positions). [snip] However, the new test consisting of all 17 questions yielded results that vitiated prior evidence of the left being worse. Now, all groups do poorly, with each group tending to do relatively poorly on the questions challenging its positions.”
So, in other words, there is no evidence that liberals or conservatives are smarter when it comes to economics. The differences found were all about the ‘myside bias’–more commonly referred to as the confirmation bias. We want to have our preexisting beliefs validated. We see it pretty consistently with mock jurors.
This week we did a mock trial and divided the jurors into deliberation groups based on age, gender, education and income. On these characteristics, they were as evenly balanced as possible. And then we sat and watched the deliberations to see some jurors responding in favor of the defense and others passionately supporting the plaintiff. As we listened, it seemed to come down to a decision-making style difference. Some emotionally focused on the morality of the issues inherent in the story, while others pounded the evidence and facts and need for personal responsibility.
The point is that, like the economist Daniel Klein, we all have biases. And to revisit that sage, Paul Simon, “a man hears what he wants to hear and disregards the rest.” When challenged, we see and hear things through the lens of our values and personal experience. Ultimately, we interpret the meaning of what we see and what we hear in accordance with our own biases. It’s a frustrating experience for all involved. Most of us are neither as open nor as gracious in acknowledging our emotional and cognitive errors as is Daniel Klein. For that, he is to be commended. Now if he can just figure out how we can all block that bias in the first place…
Buturovic, Z., & Klein, D. B. (2010). Economic enlightenment in relation to college-going, ideology, and other variables: A Zogby survey of Americans. Econ Journal Watch.
Klein, D. B., & Buturovic, Z. (2011). Economic enlightenment revisited: New results again find little relationship between education and economic enlightenment but vitiate prior evidence of the left being worse. Econ Journal Watch.
Shepherd, S., & Kay, A. C. (2011). On the perpetuation of ignorance: System dependence, system justification, and the motivated avoidance of sociopolitical information. Journal of Personality and Social Psychology. PMID: 22059846