FALSE! Alas, even though Microsoft has popularized this notion of a shrinking attention span, it is simply not true. Or at least, there is no proof it is true. And the study the falsehood was based on was not even looking at attention span—it was looking at multi-tasking while browsing the web. To add insult to injury for the authors (who actually are academics), they do not even use the word goldfish in their article. Academics who’ve been misquoted or misinterpreted by the media are shaking their heads around the globe. This distortion of research by the popular press for the sake of sensational stories isn’t new, but for those who do the work, it is pretty disturbing. Reporters often do little fact-checking with the geeks who make the world go ‘round, because it’s hard, and it often takes the edge off a catchy story. Once the first misinterpretation is published, the skewed reports drift farther and farther from the research they purportedly rely on. Alas…
Okay. So what happened here? Microsoft apparently commissioned a 2015 non-peer-reviewed study to examine how internet browsing had changed over time—that is, how long do surfers look at a page prior to moving on? Then it was misinterpreted (really misinterpreted) with spurious comparison information added about how adult attention spans were shrinking—an assertion unsupported and unaddressed even by the Microsoft study. This misinformation was picked up by the New York Times and Time Magazine as well as numerous other mainstream media sites. Each site represented the data as a scientific truth stemming from a paper commissioned by Microsoft. The only problem was, it wasn’t true.
The following table is another example of how the work was misinterpreted—it misrepresents the human (and goldfish) attention span as the real focus of the paper, which could hardly be further from the truth. The bottom half of the table (Internet Browsing Statistics) is actually taken from the article Microsoft commissioned to look at how browsing patterns on the internet have changed over time. The top half, however (Attention Span Statistics), is not, and is totally unrelated to the study they commissioned. And none of it has been validated or otherwise shown to mean anything at all.
(If you have trouble reading this table, here is the original source.)
You can find the text of the complete article commissioned by Microsoft here. Open it as a pdf file and search it for “goldfish”. You won’t find it. Nada. The study was not designed to look at the human attention span, nor was it designed to compare human attention spans to that of a goldfish. It was designed to look at how advances in web technology have changed how we surf the web—because Microsoft wants to figure out how to make the most of web surfing.
We are fortunate to have fact-checkers on the web — particularly when it comes to topics like data visualization. PolicyViz does a thorough job of debunking this myth as does a writer posting on LinkedIn. They both want everyone to STOP comparing people to goldfish! We would concur. We would also love to see people using their common sense and questioning sensational claims–“the average attention span of a goldfish”? Really? Or, what is the significance of any of those memory lapse statistics? Has that always been the case? Is it different? Why should we care?
From a litigation advocacy perspective, there are two key lessons here: First, pay no attention to comparisons of your jurors to goldfish. Instead, use strategies like chunking your information into 10-minute segments—that guideline is actually supported by research on learning and not just drummed up by a marketing representative. If jurors do not pay attention, it likely isn’t their declining attention spans, but rather that your presentation did not speak to their values, attitudes, and beliefs. Test your presentations pretrial and make sure real people pay attention and understand.
And second, be very aware of how easily people are seduced by juicy factoids built on unproven or false data, just because they are amusing or seem to support some preexisting but uninformed suspicion. Cleverness often sells.
Earlier this year, we wrote about the patent squabble over CRISPR and how that new tech/old laws fight (between researchers at two major research institutions) is playing out in the sadly outdated patent law system. This month, Pew Research took to the phone lines to see just how Americans feel about CRISPR (aka gene editing) and other “biomedical technologies” (e.g., brain chip implants and synthetic blood) which claim that they will change human capabilities.
You may be surprised at how ambivalent the public is about using these new tools. As Pew says, “Americans are more worried than enthusiastic” about how these tools will be used. And, as this technology veers more and more into public awareness, being aware of the ambivalence with which Americans view this ground-breaking technology is going to become increasingly important for trial lawyers.
Here are a few of the facts from the Pew study:
Americans are more worried than enthusiastic about gene editing (even though it will theoretically reduce disease risk in babies), brain chip implants (even though they will theoretically improve cognitive abilities), and synthetic blood (even though it will theoretically improve physical abilities).
While Pew mentions that some respondents were both worried and excited, their worry was stronger than their excitement. Even when it comes to gene editing with the promise of helping prevent diseases for their own babies—48% support the idea and 50% do not.
There are multiple concerns about how “enhanced humans” may think themselves superior to those who remain un-enhanced, and there are many questions about the morality of these changes/advances. In other words, are these ideas “meddling with nature” (a more common response among the highly religious) or “no different from other ways humans have tried to better themselves over time”? The responses also hint at concerns that a class bias may be embedded in the debate, wherein the affluent will once again have access to resources and opportunities that leave the less empowered even farther behind.
When it comes to gender, women were less likely to support the new technologies than were men.
Respondents differentiated between what they saw as “elective procedures” (as in cosmetic surgeries) and the therapeutic benefits these new technologies would provide. The line between the two (elective and therapeutic) was often fuzzy, but Pew thinks it may be a good way to differentiate between reactions to these new technologies.
Overall, Pew thinks these questions about using new technologies raise the issue of what it means to be human and whether these new developments reach beyond limits set by “God, nature or reason”. Where we draw that line is the crux of the matter for many respondents.
What isn’t clear is what the public thinks of law or legal precedent when it comes to such things. That makes sense, since they have no idea what the implications of the laws could be in this strange new world. If they are frightened by the implications of these innovations, they might want laws that slow down the changes. If they are more excited than frightened, they might want to allow the marketplace to drive the innovations.
From a litigation advocacy perspective, these responses are not necessarily intuitive. While we might intuit that allowing babies to be born without diseases would be a positive thing, respondents did not necessarily agree. They see it as being more complex. Although the parents of that baby struggling with a serious disease would likely strongly support the new technology for helping their child, others might well say “that sounds good, but this is a slippery slope and where will it lead?”.
As with all “hot button” issues, this is one that will require careful pretrial research to identify the most effective way to tell a story that will not set off knee-jerk morally based reactions to the use of new technologies. People want to feel safe from disease, but also from a world where science fiction movies come to life. Equally uncertain is how people see the role of government in nurturing innovation while protecting the public from science run amok.
Pew Research Center (July 26, 2016). U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities. http://www.pewinternet.org/2016/07/26/u-s-public-wary-of-biomedical-technologies-to-enhance-human-abilities/
Here are a few articles that didn’t inspire an entire post but tickled our fancy enough that we wanted to share them with you. Think of them as “rescue items” if you have social anxiety and want to seem scintillating…or something like that.
So have you seen this in the last second?
Here’s an interesting memory study in which the researchers found that if participants didn’t know they were going to be tested on things they’d seen repeatedly, they had no idea, when asked, whether they’d seen a specific item before. Specifically, they asked participants to do a simple memory test assessing memory for different kinds of information (e.g., numbers, letters, or colors). For example, participants would be shown four characters on a screen, arranged in a square. They would be asked to report which corner the letter was in (when the other characters were either numbers or colors). The researchers repeated this task many, many times and the participants rarely made mistakes. But then (because researchers cannot leave well enough alone) the researchers asked the participants to respond to an unexpected question. Specifically, the participants were asked which of the four letters appearing on their computer screen had appeared on the previous screen. Only 25% responded correctly (which is chance-level accuracy with four options). The question was asked again after the following task, but this time it wasn’t a surprise, and participants gave correct answers between 65% and 95% of the time. The researchers call this effect “attribute amnesia” and say it happens when you use a piece of information to perform a task but are then unable to report what that information was as little as a single second later.
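That 25% figure is simply what blind guessing yields when there are four response options. A quick simulation (our own illustrative sketch, not the authors’ code or procedure) shows why chance-level performance lands there:

```python
import random

def surprise_trial(n_options: int = 4) -> bool:
    """Simulate one surprise question: the participant has not retained
    the attribute, so they guess blindly among the response options."""
    correct_answer = random.randrange(n_options)
    guess = random.randrange(n_options)
    return guess == correct_answer

random.seed(0)  # make the simulation repeatable
n_trials = 100_000
accuracy = sum(surprise_trial() for _ in range(n_trials)) / n_trials
print(f"Guessing accuracy with 4 options: {accuracy:.1%}")  # close to 25%
```

The 65%–95% accuracy on the non-surprise follow-up is exactly what you would expect once participants know the attribute matters and actually encode it.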
Remember that post on uninterrupted eye contact causing hallucinations?
We wrote about it in one of these ‘tidbit’ posts back in 2015 and even included a very awkward video from a Steve Martin/Tina Fey movie. This time researchers were looking for the optimal length of uninterrupted eye contact—the duration experienced positively by the most people. Think of this as a potential answer to the question witnesses often have about how long to maintain eye contact with individual jurors, or just use it as a guide for comfortable eye contact with strangers at Starbucks. On average, the close to 500 participants were most comfortable with eye contact that lasted slightly over three seconds. The majority preferred a duration of eye contact between two and five seconds, and no one liked eye contact of less than a second or longer than nine seconds. We conclude that less than a second is too furtive, and longer than nine seconds is intolerably intrusive. One limitation of the study is that it used filmed clips rather than actual live interactions, but it offers an approximate guide to “normal” eye contact versus “creepy” eye contact.
Oh no! There may be a problem with all those fMRI studies!!!
A new article published in the journal PNAS tells us there is an fMRI software error that could call into question 15 years (and more than 40,000 papers) of fMRI research. We know you are likely thinking of the article on that poor dead salmon who still showed brain activity. The PNAS article was cited all over the internet in July of 2016 as proof that all the work done on fMRI machines was likely flawed. Even though the bug was corrected in 2015, it went undetected for more than a decade, and the researchers thought perhaps every study should be replicated to ensure the accuracy of the literature upon which we rely. The fMRI software error and the resulting shambles of the literature were seen as a devastating bombshell, with headlines like this one from Forbes suggesting “tens of thousands of fMRI brain studies may be flawed”. Fortunately, hysteria like this is likely why the Neuroskeptic was born and certainly why the Neuroskeptic blog makes such a contribution to knowledge in this field. Is this software glitch really serious? Yes, says the Neuroskeptic. It is a serious problem, but it is not invalidating years of fMRI research. In fact, in an update posted to the Neuroskeptic blog on July 15, 2016, the author of the PNAS paper had requested corrections to the publication to avoid these sensationalist headlines, but PNAS refused, so he posted the updates on another accessible site. Visit the Neuroskeptic’s excellent blog to read a common-sense and rational explanation of what the fMRI software bug really means and how those familiar with fMRI work have known about this for some time now.
Yes, Virginia—women are still harassed for choosing STEM careers even though it is 2016
You’ve likely heard the lament that there are too few women in STEM careers and that we need to fix the problem. The Atlantic has published a very well-done article on how women are pushed out of STEM careers and that as many as 2 out of 3 women science professors reported being sexually harassed. And those are just the ones who made it through to graduation. The stories of those still in training having photos taken of their breasts, being harassed at conferences, or being hand-fed ice cream by male professors are disturbing. There is also “pregnancy harassment” and stories of PIs (principal investigators on grants who are typically faculty members) insisting pregnant postdocs return to the lab weeks after giving birth and then harassing the postdoc for having “baby brain” and questioning their experimental results. It is well worth your time to read.
Chen, H., & Wyble, B. (2015). Amnesia for object attributes: Failure to report attended information that had just reached conscious awareness. Psychological Science, 26(2), 203-210. PMID: 25564523
Binetti, N., Harrison, C., Coutrot, A., Johnston, A., & Mareschal, I. (2016). Pupil dilation as an index of preferred mutual gaze duration. Royal Society Open Science, 3(7). DOI: 10.1098/rsos.160086
The researchers in today’s featured study define contempt as “an emotional reaction when a person or a group violates one’s standards and one looks down on them with the tendency to distance and/or derogate them”.
That’s not a very user-friendly definition—so, the free dictionary website says contempt is “disapproval tinged with disgust”. And that sounds more like what we’ve all experienced when someone looks at us with contempt (or vice versa).
So today’s researchers set out to build a scale that would measure contempt, completing six different experiments (with sample sizes ranging from 165 to 1,368) to develop the measure—which ended up including only ten questions (shown in the graphic below, which was pulled from the article itself).
In each experiment, the researchers learned things that helped hone the scale to the final ten-item measure. Here is a summary of what they learned in each of the six experiments.
Experiment 1: Dispositional contempt was related to and yet distinct from similar emotional dispositions such as “envy, anger, and hubristic pride” but was found to be mostly unrelated to disgust.
Experiment 2: Dispositional contempt was related to each component of the Dark Tetrad (narcissism, psychopathy, Machiavellianism, and sadism).
Experiments 3 and 4: Contempt-prone people had low attachment security. In other words, those who were contemptuous tended toward attachment avoidance and attachment anxiety.
Experiment 5: Those who were prone to contempt were more likely to respond contemptuously (go figure) to film clips of individuals violating various standards. While average contempt rating was highest for moral failures, dispositional contempt was most predictive of contempt in response to another’s incompetence and was negatively associated with reacting compassionately.
Experiment 6: Dispositional contempt was a unique predictor of relationship functioning. While the researchers thought being contemptuous oneself would be a death knell for one’s relationship, the data showed that seeing one’s partner as contemptuous was more harmful, resulting in less commitment and satisfaction in the relationship.
The researchers think their findings offer many practical paths for further research, including research that explores reducing contemptuousness through various interventions. What they saw is that contemptuousness reduces mental and behavioral flexibility, decreases self-esteem, limits the social network, increases loneliness and depression, unhinges romantic relationships, and results in a lack of caring for others. Interventions to reduce contemptuousness should reverse all those negative effects of dispositional contempt.
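Mechanically, self-report measures like this one are usually scored by averaging the item ratings, with any reverse-keyed items flipped first. A minimal sketch of that kind of scoring, assuming a 1–5 Likert range (the actual response format, item keying, and scoring rules of the published scale are not specified here):

```python
def score_scale(ratings, reverse_keyed=(), scale_max=5):
    """Score a Likert-type scale as the mean of its item ratings.

    ratings: one respondent's answers, e.g. ten values from 1..scale_max.
    reverse_keyed: indices of items worded in the opposite direction;
    their ratings are flipped (scale_max + 1 - rating) before averaging.
    """
    total = 0.0
    for i, rating in enumerate(ratings):
        total += (scale_max + 1 - rating) if i in reverse_keyed else rating
    return total / len(ratings)

# Hypothetical respondent who answered "3" to all ten items.
print(score_scale([3] * 10))  # -> 3.0
```

Higher averaged scores would indicate greater dispositional contempt, which is what the correlational findings above are built on.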
From a litigation advocacy perspective, this is an intriguing tool for use in pretrial research.
Are there specific questions on this scale that would be more predictive of ultimate verdict on specific kinds of cases?
How would an attitude expressed through these scale items predispose someone regarding my case or my client?
It’s an intriguing concept to ponder. We know contempt is a powerful weapon when wielded interpersonally. The question is whether the “contempt-prone individual” is identifiable in some way other than through the use of a 10-item scale. We’ll work on that one.
Schriber, R., Chung, J., Sorensen, K., & Robins, R. (2016). Dispositional Contempt: A First Look at the Contemptuous Person. Journal of Personality and Social Psychology DOI: 10.1037/pspp0000101
Typical-looking faces are not the most attractive in the view of others, but they are the most trustworthy. This reminds us of the post we wrote a while back about how to appear intelligent, trustworthy, and attractive when you need corrective lenses (i.e., wear rimless glasses).
In this study (published in the journal Psychological Science), the researchers made a “digital average” of twelve attractive female faces. If you don’t know what a digital average is: the researchers used computers to combine the faces of twelve different women and came up with an “average” of their faces. (As an example, the photo illustrating this post is a digital average of male faces.) Participants in the current research (undergraduate females from universities in Israel) were asked to rate the eleven face “morphs” they were shown on both trustworthiness and attractiveness.
As the face “morphed” closer to the digital average—it was more likely to be judged as trustworthy.
Conversely, the closer the face was to the original ‘attractive’ face, the more attractive it was judged.
The researchers did two additional studies with the same results. The research participants saw the typical (i.e., “digitally averaged”) faces as more trustworthy and the original “attractive female faces” as more attractive. The researchers suggest what this means is that we are more familiar with “typical faces” and thus are more comfortable and likely to find those faces “trustworthy”. The researchers turn an old phrase around into “what is typical is good” when it comes to trustworthiness.
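In computational terms, a “digital average” is essentially a pixel-wise mean of aligned face images, and a “morph” is a weighted blend between an original face and that average. Here is a minimal sketch of the idea (our own illustration using random arrays as stand-ins for photos; the researchers’ actual stimuli were built with dedicated face-morphing software, not this code):

```python
import numpy as np

def digital_average(faces: np.ndarray) -> np.ndarray:
    """Pixel-wise mean of N aligned face images, shape (N, H, W, 3)."""
    return faces.mean(axis=0)

def morph(face: np.ndarray, average: np.ndarray, w: float) -> np.ndarray:
    """Blend a face toward the average: w=0 returns the original face,
    w=1 returns the fully averaged (most 'typical') face."""
    return (1 - w) * face + w * average

# Toy data: 12 random "faces" standing in for aligned photographs.
rng = np.random.default_rng(42)
faces = rng.random((12, 64, 64, 3))
avg = digital_average(faces)

# Eleven morph levels from fully original to fully averaged,
# mirroring the eleven stimuli the participants rated.
levels = np.linspace(0.0, 1.0, 11)
morphs = [morph(faces[0], avg, w) for w in levels]
```

The finding, then, is that ratings of trustworthiness rose as `w` approached 1 (the typical end), while ratings of attractiveness rose as `w` approached 0 (the original attractive face).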
From a litigation advocacy perspective, this is good news for most people, who we imagine are, on average, average. We have spent our lives learning that “what is beautiful is good,” so it is indeed good news to think that if your face is more typical than beautiful, you appear more trustworthy to others. Paradoxically, pretty people might be working at a disadvantage here.
And remember—if you wear rimless glasses, you appear both attractive and trustworthy (not to mention intelligent).
Sofer C, Dotsch R, Wigboldus DH, & Todorov A (2015). What is typical is good: the influence of face typicality on perceived trustworthiness. Psychological Science, 26 (1), 39-47 PMID: 25512052