Archive for the ‘Bias’ Category
Whether you are involved in criminal or civil litigation, before long you are likely to run into a forensic neuropsychologist and a neuropsychological exam. A new article (mostly directed at civil litigation involving adults) discusses 12 forms of bias and how to mitigate them. You may want to review it carefully (or have an expert witness review it carefully) prior to trial. The article is written by three practicing forensic neuropsychologists and is intended to assist both the expert witness and the sponsoring and examining attorneys. For the purposes of this blog post, which is only meant to raise your awareness of this resource, we will list the 12 forms of bias the authors identify, along with their recommendations on how to mitigate each. This is an information-rich resource, so for additional background and details, please review the article itself.
Logistical and administrative biases (or how the neuropsychologist has arranged the evaluation and the sources of information upon which they rely).
Conflating clinical and forensic roles. There is a clear distinction between these roles and they should not be mixed. The authors give specific examples and describe the differences between a treating expert and a forensic neuropsychologist charged with assessing and writing a report but not with treatment or advocacy.
Financial/payment bias. The authors describe payment arrangements on a continuum from “straightforward to murky to highly biased”. They recommend a “fee for service” arrangement and offer examples of how alternate arrangements can be questioned in open court.
Referral source bias. The authors describe “Rule 26 disclosure” and how forensic neuropsychologists repeatedly retained by a specific attorney can be seen as “hired guns” by jurors. The authors describe multiple ways you can “see” a referral source bias in a testifying expert.
Self-report bias. The authors describe how some evaluators forget the importance of verifying the report of the examinee against workplace, school and family reports and prior testing to ensure the reports are accurate. They discuss secondary gain, misremembering pre- and post-injury events, and situation-specific amnesia.
Statistical biases (under-utilization of base rates and ignoring normal variance in test scores).
Under-utilization of base rates. Base rates are often confusing for jurors, so it is important that a neuropsychologist use them accurately. The authors, however, stress evidence that neuropsychologists are often both unaware of base rates and underuse them in their evaluations.
Ignoring normal variance in test scores. Another statistical bias is failing to understand normal variance in test scores and thus drawing inappropriate conclusions.
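To see why under-utilization of base rates matters, here is a minimal sketch using Bayes' rule. The numbers below are hypothetical (they are not from the article): a test that correctly flags most truly impaired examinees can still produce mostly false alarms when the underlying condition is rare.

```python
# Hypothetical illustration of base-rate neglect (invented numbers, not
# from the article): a test flags 90% of truly impaired examinees
# (sensitivity) and 10% of unimpaired examinees (false-positive rate).

def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """Probability of true impairment given a positive test (Bayes' rule)."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# With a 5% base rate, most positive results are false alarms:
print(round(positive_predictive_value(0.05, 0.90, 0.10), 2))  # 0.32
# With a 50% base rate, the same test is far more trustworthy:
print(round(positive_predictive_value(0.50, 0.90, 0.10), 2))  # 0.9
```

The same test result means very different things depending on how common the condition is in the relevant population, which is exactly the point an evaluator (or an examining attorney) should not lose sight of.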
Cognitive, personal and attributional biases.
Confirmation bias. This is a bias we often discuss on our blog and it is also a trap for the unwitting evaluator. Essentially, confirmation bias occurs when you interpret the data to support your pre-existing beliefs and hypotheses rather than letting the data test those hypotheses.
Personal and political bias. While this may seem to be an obvious bias for the evaluator to guard against, it is commonly seen according to the authors. Additionally, they discuss a term from the psychotherapy arena, countertransference, and warn that examinee characteristics "such as age, attractiveness, gender, ethnicity and socioeconomic status" could bias the examiner either toward or against the examinee.
Group attribution error. This occurs when the examiner makes an assumption about an individual based on the belief that the “individual’s traits are representative of an entire group”. This extends far beyond race and ethnicity with examples offered of examiners who think everyone with Alzheimer’s should present in a certain fashion or everyone with head injuries should have common symptoms, or that everyone with fibromyalgia has a somatoform disorder.
Diagnosis momentum. This is the tendency for a diagnosis to be seen as unquestionably accurate as increasing numbers of people select that specific diagnosis rather than performing a complete evaluation to ensure the validity of the diagnosis of record. This could obviously have major impact on case outcome.
Good old days bias. This is a bias held by the examinee rather than the examiner that may result in self-reports that over-report the level of past function. This makes the examination of prior records imperative and its presence is often seen as a hallmark of a “psychological process that occurs post-injury”.
Overconfidence. This bias happens when an individual neuropsychologist grows sloppy in their work because they feel experienced enough to “know the truth”.
Naming biases seems to have become an epidemic, kind of like coming up with clever Twitter hashtags. Ultimately, the point is that people try to make sense of confusing or disruptive thoughts and feelings as quickly and effortlessly as they can, even if it requires torturing the truth. Overall, the authors acknowledge there are countless other biases that exist and offer this list as a starting point for assessing a forensic neuropsychological evaluation. They offer multiple strategies for the forensic evaluator to defend against biases (and thus for the attorney who wishes to examine potential sources of bias in the report). This is a useful resource to keep on hand and use to assess biases that may be present in court-ordered forensic neuropsychological reports.
Effective trial strategies for reducing biases often come from teaching jurors what the possible biases are, and how making smart and correct judgments requires ignoring or avoiding them. Warn jurors of how tempting it can be to race to conclusions, point out some of the pitfalls, and tip them off that getting hooked by these false impressions will not only be a source of error but, for everyone who wants to be correct, a source of regret as well.
Richards, P., Geiger, J., & Tussey, C. (2015). The Dirty Dozen: 12 Sources of Bias in Forensic Neuropsychology with Ways to Mitigate. Psychological Injury and Law, 8(4), 265-280. DOI: 10.1007/s12207-015-9235-1
Sometimes we think change goes slowly and other times it goes fast! And the older you get the faster time seems to move! So here is proof that times change quickly! Pew Research Center has announced that in 2016, 75% of the interviews for their surveys will be conducted via cellphone, to account for the fact that almost half of American households use only wireless telephone service; the proportion of interviews conducted on cellphones has risen steadily since 2009.
They include some interesting facts about just how much times are changing when it comes to telephone use.
9 in 10 Americans have a cellphone, and the share of adult Americans who are cellphone-only has increased steadily since 2004, according to the US government.
Roughly half of US adults (47%) have only wireless phone service.
People who rely only on cellphones are demographically different from those with landlines as well. They are considerably younger, less educated and lower-income. They are more likely to be Hispanic and urban. If you do not sample cellphone-only users, you do not get a sample representative of US adults. (This is why we follow Pew’s publications closely. They pay attention to societal changes.)
When adults have cellphones whose area code does not match the area they live in, it isn’t a problem in national polls but can be in regional or state polls. So respondents are always asked where they are located so the survey does not end up skewed. Sometimes there are addresses associated with cellphone numbers but not always.
And here is what is most surprising. Contrary to what you may experience when you pick up a call from an unknown number and get the background noise from a call center and a long delay prior to a person coming on the line—federal regulations say cellphones have to be manually dialed by an interviewer and not an autodialer. Obviously not everyone complies, since manual dialing adds significant costs to interviewing (cellphone interviews cost twice as much as landline interviews, according to Pew).
Overall, this is invaluable information. If people who rely on cellphones only are younger, poorer, less educated, and more likely to be Hispanic and urban—they represent a different group than the general population, and we need to pay attention to that difference as we conduct pretrial research. At the same time, the cell-only population is obviously growing well beyond these demographic profiles. And clearly, polling or sampling that doesn't incorporate cellphones is missing much of the voter/juror population.
Pew Research Center (January 5, 2016). Pew Research Center will call 75% cellphones for surveys in 2016.
It's a basic tenet of the reptile theory that you want to frighten your jurors to make them vote for your client in deliberation. [The ABA has put out an open-access primer on the reptile theory and you can see that here.] It has also been shown repeatedly that conservatives are more fearful than liberals, but now we have research telling us that if you terrify liberals, they think more like conservatives. We've seen the results of fear in multiple pretrial research projects with mock jurors, but we do not think the reptile theory particularly original. It seems to be an adept repackaging of terror management theory, but it is certainly marketed persuasively as the "only way to win".
So, on to today's research. Researchers from the UK analyzed data from two nationally representative surveys (completed about 6 weeks before and about a month after the July 7, 2005 bombings in London). As a reminder, in the London bombings, the bombs went off on the public transport system. The explosions killed 52 people and injured 770, and were part of an Al Qaeda attack carried out by several British-born Muslims and a Jamaican Muslim immigrant. (In the event you wonder why this is only being published now, the data just recently became available.)
The researchers looked at questions that represented “four moral foundations”:
In-group loyalty (“I feel loyal to Britain despite any faults it may have”)
Authority-respect (“I think people should follow rules at all times, even when no one is watching”)
Harm-care (“I want everyone to be treated justly, even people I don’t know. It is important for me to protect the weak in society.”)
Fairness-reciprocity (“There should be equality for all groups in Britain”)
Then they looked at the level of agreement with this statement on Muslims (“Britain would lose its identify if more Muslims came to live in Britain”) and this statement about immigrants (“Government spends too much money assisting immigrants”). They wondered if beliefs about Muslims and immigrants would be more negative following the terror bombings and….attitudes were more negative.
However, attitudes were not more negative for everyone! Only liberals' attitudes became more negative while conservatives' attitudes remained about the same. The researchers believe the liberals were "terrified" and thus reported more negativity directed at Muslims and immigrants, and they wonder whether, when conservatives experience terror, it works to consolidate their perspective and make them more resistant to change.
In other words, conservatives hunkered down in their pre-existing beliefs and liberals rushed to join them. The authors make this comment about implications for their research:
“For people working to tackle prejudice, it is important to be aware that terror events may have different effects on the attitudes of people who start from different political orientations. Among people who tend to be conservative, such events may consolidate their existing priorities, making them resistant to change. Among people who tend to be liberal, the same events may prompt a shift in their priorities and propel them toward more prejudiced attitudes.”
It's an interesting finding when considering the reptile approach since it would support long-standing terror management theory beliefs that say when you are threatened, you seek safety. Apparently, for liberal Brits, safety was found in numbers among their own kind (and conservative Brits were more "like them" than were the terrorist Muslim immigrants). So, let's say opposing counsel has frightened your jurors to death (metaphorically speaking). What can you do to (quickly) help them feel safe again?
While we feel a need to make clear that we have never tried any of these in the aftermath of a terrorist attack, here are a couple of strategies you can employ to counteract the reptile approach.
Ultimately, we still like this strategy (blogged about earlier) to counteract the fear purposely instilled by the litigator employing the reptile approach—we even called it the anti-reptile theory and have used it to good effect at trial.
Our colleague Ken Broda-Bahm also wrote an article in The Jury Expert on the Defense approach to the reptile theory at trial.
Van de Vyver, J., Houston, D., Abrams, D., & Vasiljevic, M. (2015). Boosting Belligerence: How the July 7, 2005, London Bombings Affected Liberals' Moral Foundations and Prejudice. Psychological Science. DOI: 10.1177/0956797615615584
We’re unsure if this strategy would work for women but it seems to work for men—at least in medical schools and teaching hospitals. We do presume those male leaders with mustaches do not have the sort of mustache illustrating this post but what do we know? We also tend to believe that if a woman were to grow this sort of mustache, she would also not be selected to advance as a leader. But, we digress. On to the real point of this blog post.
Each year, the British Medical Journal publishes a Christmas issue where they offer a more light-hearted look at important issues of the day. We posted about one of their articles on Christmas Day. Here is another important paper that (alas) reflects what women know all too well when it comes to women in leadership. These researchers (two medical residents, a professor of law, and a professor of dermatology) examined (carefully and presumably visually) "clinical department leaders (n=1018) at the top 50 US medical schools funded by the National Institutes of Health (NIH)" to see if they were male or female and whether they had mustaches. None of the women in the sample had a mustache. The researchers defined a mustache in the following way: "the visible presence of hair on the upper cutaneous lip" and they included the presence of both standalone mustaches and mustaches in combination with other facial hair. They specifically did not include facial hair such as "mutton chops" or "chin curtains" as mustaches.
According to the researchers, women accounted for only 13% of department leaders in the sample (137 women out of 1,018 department leaders).
Leaders with mustaches (none of them, as mentioned earlier, women) accounted for 19% of the sample (190/1,018 total leaders). And according to the researchers, less than 15% of men in the country have mustaches, so mustached men are over-represented among medical department leadership.
The proportion of female leaders ranged from 0% to 26% across institutions and from 0% to 36% across specialties.
Only seven institutions and five specialties had more than 20% female department leaders.
The researchers developed a novel unit of measure called the mustache index. (Essentially, this is the number of female leaders relative to the number of mustached leaders.) "The overall mustache index of all academic medical departments studied was 0.72" (p<.004). In other words, a medical department is much more likely to be led by a man with a mustache than by a woman. Only six of 20 separate medical specialties had "more women than mustaches" (a mustache index > 1).
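The arithmetic behind the index can be sketched in a few lines. The article's overall figure of 0.72 is consistent with dividing the number of female leaders by the number of mustached leaders in the sample:

```python
# The "mustache index" computed from the study's reported sample counts:
# 1,018 department leaders, of whom 137 were women and 190 had mustaches.

women = 137       # female department leaders (13% of the sample)
mustached = 190   # mustached department leaders (19% of the sample)

mustache_index = women / mustached
print(round(mustache_index, 2))  # 0.72 -- fewer women than mustaches
```

An index below 1 means mustached men outnumber women in leadership; the researchers' goal of "more women than mustaches" corresponds to pushing the index above 1.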
The researchers recommend that "mustachioed" individuals should number fewer than the women in medical department leadership (and they state they clearly do not mean a "no mustache" policy). They want to call attention to the disparity in these leadership positions between men and women—hence the tongue-in-cheek "mustache index". They offer a number of suggestions to help increase the number of women in leadership positions, including developing job criteria prior to evaluating candidates, flexible work schedules, and increased personal control over work time, and they cite the high levels of satisfaction among women physicians in specialties that allow a "controllable lifestyle", such as dermatology and anesthesiology.
From a litigation perspective, this really applies most to law office management and we’ve written before about the importance of hiring practices that do not discriminate against applicants by gender or race and ethnicity (as well as other descriptive characteristics). You can see all those posts by looking at our blog category on law office management. Do a quick count in your own office. Do leaders with mustaches outnumber leaders who are women?
Wehner, M.R., Nead, K.T., Linos, K., & Linos, E. (2015). Plenty of moustaches but not enough women: Cross sectional study of medical leaders. BMJ (Clinical research ed.), 351. PMID: 26673637
We began to see an increase in mock jurors endorsing multiple racial categories perhaps 10 years ago, and modified our questionnaires to make it easier for them to express that view. We’ve had jurors list as many as half a dozen racial categories and have had mock jurors whom we would describe as multiracial describe themselves as White (in one case due to extreme anger at the juror’s African-American mother who had abandoned the family). It’s been an issue we’ve thought about a lot but apparently we haven’t thought about it as carefully as has Pew Research Center.
Regular readers know we think highly of Pew Research and their work to measure and document changing social norms but this time they’ve done something pretty amazing. Pew now gives us six different ways to measure racial identity or the concept of being “multiracial”. It’s a fascinating comparison since each method of measuring seems to result in slightly different answers. If you ask about the individual, for example, you may get one answer, but if you ask about the racial identity of the individual’s parents or grandparents you may get a different racial category than the individual uses to describe their own race.
According to the Pew report, the most common way to measure racial identity is to simply ask a respondent to "select one or more races, with a separate question measuring Hispanic ethnicity". From this question, Pew estimates 3.7% of Americans are mixed race (which they define as self-selecting two or more races).
However, then they looked at multiple other ways to identify race in survey respondents. First they examined a question being considered for the 2020 census which does not list Hispanic origin separately. The question will simply be “mark one or more” and when using this format, Pew says 4.8% of adults indicate they are multiracial.
The next strategy is to also ask about the race and ethnicity of parents. With this method, the share of those reporting a multiracial background jumped to 10.8%! Then Pew looked at adding in grandparents' race and ethnicity by asking if "any of their grandparents were 'some other race or origin'" than their own, and the proportion leapt to 16.6%. (Pew goes into detail explaining why they believe this number overestimates the multiracial population due to the follow-up questions.)
The fifth strategy is to give respondents ten “identity points” and ask them to allocate the points across different racial and ethnic categories as they see fit. In Pew’s exploration of this method (developed by UC Berkeley political scientist Taeku Lee) “some 12.7% of adults gave points to two or more races”. And finally, Pew asked people directly, “Do you consider yourself to be mixed race; that is, belonging to more than one racial group?”. Using this strategy, 12.0% of adults identified themselves as multiracial.
Based on all these ways of measuring racial identity, Pew revised their estimate of the percentage of Americans who self-report as multiracial from 3.7% to 6.9%, and they indicate that if great-grandparents' and earlier ancestors' racial identity had been taken into account, their estimate would rise to 13.1%.
It's a long way from 3.7% to 13.1%, and it speaks to the changing demographics of our society (or perhaps to increased comfort in acknowledging being multiracial). It may also speak to some delicacy about the issue of race. It seems possible that we are seeing a contrast between what someone's ethnicity is by history, and how they view themselves culturally and ethnically today.
As jurors, if race is a factor (either because of the issues in dispute or by sheer coincidence) does it matter more that a person derives their genetics from one or more racial groups, or that they identify with a particular racial group? It’s a valuable piece of work for us since we always take a look at whether racial identity is tied to ultimate verdict (even though it infrequently is related). Our own belief is that we want to keep up with changing ideas and attitudes in the country as we craft our pretrial research questionnaires and Pew is terrific at helping us do that. Take a look at their new report.
Pew Research Center (November 6, 2015). Who is multiracial? Depends on how you ask: A comparison of six survey methods to capture racial identity. http://www.pewsocialtrends.org/2015/11/06/who-is-multiracial-depends-on-how-you-ask/