Archive for the ‘Generation or Age of Juror’ Category

Thanks to us, you know researchers trick people into eating dog food, put them in MRI machines that just happen to have snakes in them, and do other nefarious things. But did you know they sometimes enlist your parents in their deception? It is sad, but apparently true. These UK and Canadian researchers did not, however, tell the helpful parents they were being used to help researchers lie convincingly to their young adult children.

Of course you know that science—and surely social science in particular—is driven by pure and noble intentions. Never does being deceived feel so warm and fuzzy. In this case, the researchers wanted to know whether "innocent adult participants can be convinced, over the course of a few hours, that they had perpetrated crimes as serious as assault with a weapon" between the ages of 11 and 14. The participants were between 18 and 31 years old, so you would think they'd have no trouble recalling whether something like this were true. But no! Even in a "friendly interview", almost three-quarters of them actually believed the false memories constructed by the researchers.

As you might imagine, this research grew out of questions related to false confessions and whether, in a lab setting, researchers could create false memories of serious criminal behavior or serious emotional trauma in young adults. The researchers cite false memories created by other researchers, including "getting lost in a shopping mall, being attacked by a vicious animal," and even (although this is hard to believe) "having tea with Prince Charles". So the researchers wondered if they could create false memories of crime in young adults if their "caregivers" (i.e., parents) reported such an event actually happened.

The researchers recruited undergraduates from a Canadian university (N = 60; 43 females; average age 20 years, with an age range of 18-31 years; all but five were Caucasian and native English speakers) and asked them to allow their parents to complete "an extensive questionnaire" reporting on various emotional events in their lives between the ages of 11 and 14. The researchers wanted to be sure of three things: that the student-participant had experienced at least one highly emotional event during those years, had never been involved in a crime, and had had no police contact during adolescence.

They actually began with more than 120 students and culled the list down to those who met all three criteria. Yes, fully half of the original group apparently flunked this screener. Since you would have to be asleep between the ages of 11 and 14 to avoid a highly emotional experience, we are left to assume that about half of the undergrads at this university had some pretty dicey behavior at a very young age! It might also explain why most of the remaining subjects were female. This really messes with my impression of gracious and polite Canadians…

The sixty students who participated in the study were told they were in a study examining various memory retrieval strategies and came back to the lab for three separate 40-minute interviews that occurred about a week apart. During the first interview, the researcher told the participant about two events s/he had experienced between the ages of 11 and 14. The catch was that only one of the events was true. Some of the students were told about a crime that resulted in contact with the police (such as assault, assault with a weapon, or theft). Others were told about a false event that was emotional in nature (such as a serious personal injury, an attack by a dog, or losing a large sum of money).

To ‘hook’ the students, the researchers seeded the stories of the false events with details that were true and had actually happened between the ages of 11 and 14. None of the students remembered any of the false events having occurred. (Thank goodness! Although when they figure out their parents aided and abetted the researchers in lying to them, we predict a long line at the campus counseling center.) So when a student did not recall the event, the researchers asked them to try harder and told them that most people can remember things they do not initially recall clearly if they just use some memory strategies (like the strategy where you imagine what it would have felt like to engage in the false events). And try these students did!

In the second and third interviews, the participants were asked again to recall as much as possible about the two events (one of them false) discussed in the first interview. The students were also asked how vivid their recollection of each event was and how confident they were in their recollection of the memories.

And here are the (shocking and disturbing) results:

30 participants were told they had committed a crime as a teenager, and 21 (70%) developed a false memory of the crime. 20 were told they had assaulted someone, either with or without a weapon, and 11 (55%) reported elaborate stories of their interactions with the police.

Students who were told of an emotional event also formed false memories (76.7%) of that event.

The criminal false events were just as readily believed as the emotional events. Students provided roughly the same number of details and had similar levels of confidence in the memory. The researchers believe that incorporating true details into the story, and having it (supposedly) corroborated by the student’s parents, was instrumental in giving the false event enough familiarity to seem plausible. (Lots of therapy. That’s all there is to say about this. We recommend that the therapy focus on implanting false memories of the parents having been abducted and forced to invent lies about their kids.) On a positive note, the students gave more details and had more confidence in their descriptions of the true memories. And while we have fun with the creative inclusion of the parents in this research, note that the researchers used the parents’ true reports as the basis for creating a false story. The parents didn’t know what was to be done with the information they provided.

The researchers explain their results by saying they think the use of the context reinstatement exercise (that’s the one where you picture what it would be like to engage in the false events) was instrumental in the results. “In other words, imagined memory elements regarding what something could have been like can turn into elements of what it would have been like, which can become elements of what it was like.”

From a litigation advocacy perspective, this is but one in a long series of reasons for you to always question the accuracy of memory-related evidence. We have studied and written about this subject before as it has grave consequences for testimony and false confessions, especially when the confessional situation is stressful or is conducted by a professional interrogator. The idea that so many of these participants were led to believe in the accuracy of false memories in such a short period of time speaks volumes about how memory is malleable and modified over time.

Shaw, J., & Porter, S. (2015). Constructing Rich False Memories of Committing Crime. Psychological Science. DOI: 10.1177/0956797614562862


If so, we can certainly suggest a few to be disregarded! We don’t write about most of the articles we consider for this blog (the reject pile grows taller every day). And when we do write about questionable pieces, we let you know if we think it’s a little ridiculous or if it’s a prospective study (statistical talk for generating hypotheses for further study in an experimental or archival context).

We’d call today’s study one whose findings won’t surprise most of our readers, and one that serves as another good reminder to look for multiple data points before assuming you see a trend.

There have been complaints for years about generalizing from undergraduates enrolled in Psychology 101 courses to the general public. Just because undergraduates think, feel, believe, or are biased in a particular direction—does that mean the general public will be too? Our own belief is that yes, sometimes it means exactly that, and sometimes, no, it doesn’t mean that at all. The justification for generalizing from this subgroup is that it is so enormous. Most US adults enroll in college for some period of time, and Intro Psychology (the source of many study subjects) is taken by a majority of college students. So, while not totally representative of the population, it is a pretty wide swath of it. The quality of the information is always in the details, so we read the literature, store findings away, and then look to see whether our mock jurors echo their younger counterparts—and whether modifying the case narrative changes their reactions.

So here is what today’s researchers found: 1 out of 10 undergraduate research participants don’t really try (i.e., “exert effort”) as they complete the research requirement for course completion.

The researchers used 77 participants (41 male and 36 female; average age 19.14 years; 78% Caucasian, 9% Asian, 6% African-American, 3% Asian Indian, 4% other; with no prior history of brain injuries or learning disabilities). Participants filled out a brief demographic questionnaire and then completed the CNS Vital Signs (CNSVS) battery on a computer.

Part of the benefit of the CNSVS for research is that there are “validity indicators” embedded in the test to make sure the individual is paying attention on different tasks.

The CNSVS is essentially a computerized neuropsychological test battery and, as such, requires effort and focus for successful completion. The researchers assume any college student should score at least some points in each area of the test. Participants also completed an arithmetic task and self-report questions about the effort they exerted on the task and how they thought their performance would compare to that of other college students taking the test. The entire experiment took about 90 minutes per participant.

12% of the participating students failed at least one validity indicator (i.e., did not score points on a subtest). The researchers say this means “more than 1 in 10 college students participating in a cognitive test battery for research showed inadequate effort, which was corroborated by poor test performance”. For those interested in such things, the complex portions of the test (e.g., the most complex trials of the Stroop test, the shifting attention test) were not the ones most often failed. Instead, participants who failed did so on simpler tasks (like finger tapping and simple reaction time measures).

We do think it important to assess the effort put into psychological research by undergraduates. On the other hand, after a semester of hearing about exciting things like invisible gorillas, oncoming trains, fMRIs, and even dog food paté—we can understand why getting out of bed for fun psychological research and then being asked to tap your finger 27 times (precisely that amount and no more) could be a little disappointing. So we wonder if there would be more effort put into an experiment on eyewitness accounts, for example.

The ultimate lessons for litigation advocacy are simple.

Don’t put all your trust in a single study. (And don’t put your trust in anyone who does.)

Know what the literature says (or know someone who does and can apply it to your case).

Do pretrial research to see if mock jurors line up with the research findings.

Modify your narrative accordingly.

DeRight, J., & Jorgensen, R. (2014). I Just Want My Research Credit: Frequency of Suboptimal Effort in a Non-Clinical Healthy Undergraduate Sample. The Clinical Neuropsychologist, 1-17. DOI: 10.1080/13854046.2014.989267


We know it’s important for you to keep up on new stereotype labels. You know what labels like metrosexual, hipster, and perhaps even lumberjack mean. But lumbersexual?

Tom Puzak, over at GearJunkie, wrote about it first a couple of weeks ago, and then the term went viral.

“He looks like a man of the woods, but works at The Nerdery, programming for a healthy salary and benefits. His backpack carries a MacBook Air, but looks like it should carry a lumberjack’s axe. He is the Lumbersexual. Seen in New York, LA and everywhere in between, the Lumbersexual is bringing the outdoor industry’s clothing and accessories into the mainstream.”

According to Sociological Images blog, the definition of the lumbersexual continues to evolve:

“Lumbersexuals are probably best recognized by a set of hirsute bodies and grooming habits. Their attire, bodies, and comportment are presumed to cite stereotypes of lumberjacks in the cultural imaginary. However, combined with the overall cultural portrayal of the lumbersexual, this stereotype set fundamentally creates an aesthetic with a particular subset of men that idealizes a cold weather, rugged, large, hard-bodied, bewhiskered configuration of masculinity.”

You may confuse this description with your stereotypes of lumberjacks. There is a critical difference, however. Sociological Images continues:

“One of the key signifiers of the “lumbersexual,” however, is that he is not, in fact, a lumberjack. Like the hipster, the lumbersexual is less of an identity men claim and more of one used to describe them (perhaps, against their wishes).”

So, the lumbersexual isn’t really a lumberjack, but more of a costume we could see as the opposite of the metrosexual. Gawker continues to educate us on the lumbersexual:

“To facilitate an easy discussion, it might help you to think of a Lumbersexual as a foil to the Metrosexual, the alleged nadir of masculinity from last decade. So, instead of slim-legged pants, envision pants with a little extra leg room (see: “regular cut”). Rather than be clean-shaven, the Lumbersexual has an unkempt beard. The Metrosexual is clean and pretty and well-groomed; the Lumbersexual spends the same amount of money, but looks filthy. Sartorially speaking, a Lumbersexual is a delicate tri-blend of L.L. Bean, Timberlake, and Sears.”

In case you have not yet figured this out, it’s a label with a bit of sneer in it. The Atlantic calls them “bearded, manly men” while the Daily Beast opines the lumbersexual represents yet more blurring of the lines between gay and straight as they are “all beards, flannel shirts and work boots”. Jezebel compiles a tongue-in-cheek reference guide to the lumbersexual subtypes (e.g., the Metrojack, the Advanced Lumbersexual, and the Urban Woodsman).

“In conclusion, it’s a nice look, but somewhat misleading—reading these pieces feels like meeting a retro sexy librarian type who isn’t actually into books. With the Lumbersexual, the very things that might draw to you such a manly dressed man are likely to disappoint when you discover he won’t be building a campfire, crafting some bookshelves, or investigating that weird noise outside the tent. But hey, fashion is fashion. And the lumberjack look is still pretty hot, right?”

As far as we can tell, the lumbersexual is an urban male (typically White and heterosexual) who dresses like a lumberjack even though he is far from one. While it is a recognizable fashion statement, there are (as yet) no attitudes, values, and beliefs attributed to the lumbersexual. While there is a sense that these are men trying to look “like real men” according to a hypermasculine definition—there is no evidence that their attitudes, values, and beliefs would line up with what we think of as stereotypically masculine.

In other words, while you know an evocative pop culture label to assign, you have no real idea who that lumbersexual really is on the inside. Appearances have limited value. Obviously, that’s not a good decision-making strategy for voir dire. Even though it might be good for a laugh.


Back in the early ‘90s, I had a job that required me to carry a beeper. The constant awareness that I was “on call” was a source of strain and led me to complain I was never really “off duty”. Flash forward to this century and I cannot imagine being without my smart phone. In fact, I often double-check to be sure I have my iPhone when I am on the go so I never leave it behind. It’s a whole different sort of anxiety about being separated from my iPhone than I felt toward that beeper.

And I am not alone. Today’s researchers examine how many of us are anxious when separated from our instant access to email, texting, the internet, and the ability to make phone calls. They go so far as to say “cell phone separation can have serious psychological and physiological effects on iPhone users, including poor performance on cognitive tests”. Further, they say, “iPhone[s] are capable of becoming an extension of our selves such that when separated, we experience a lessening of ‘self’ and a negative physiological state”. Seriously?

The researchers conducted what they call a “multistaged experiment”. They used a survey phase to recruit 208 participants from three separate journalism courses. Of those 208, 136 completed an online questionnaire that allegedly sought to “understand media usage among a sample of college students”. (In truth, the researchers were looking for iPhone users, and they found 117 among the 136 who completed the survey.)

Those 117 iPhone users were contacted again and told they could participate in a second study for additional course credit and a $50 gift card. Of the 117, 41 (73% female, average age 21.2 years, 88% White, 5% Black, 5% Asian and 2% Hispanic) agreed to participate in a 20-minute experiment.

The purpose of the 20-minute experiment was to see what happened to “perceived level of self, cognition, emotion and physiology” when participants were separated from their iPhones while the phones were ringing. The participants, however, were told they were testing the accuracy of a new blood pressure cuff while completing word search puzzles. They worked the puzzles while hooked up to the cuff, either with their iPhone in their possession or without it (in the latter condition, the researchers claimed the phone was interfering with the cuff’s operation and asked the participant to set it on a table four feet away).

The researchers found that participants separated from their iPhones showed increases in heart rate and blood pressure, reported feeling more unpleasant and more anxious, and found fewer words in their word search puzzles.

The researchers conclude that being separated from your iPhone results in poorer cognitive performance, and thus you may not want to be separated from it during tasks requiring significant mental performance (think test-taking, meetings, classes, and perhaps even jury duty). The distraction and loss of your sense of self when separated from your iPhone may make you perform more poorly on those tasks. (Somewhere, Steve Jobs is smiling.)

While we wonder if this level of intrusive anxiety and poor performance (the sort of thing the FOMO scale measures) is unique to college students who have grown up with the constant presence of various cell phones and smart phones, it does raise the question of whether jurors are distracted from their deliberations by court instructions not to use the internet, post status updates about their experiences, communicate with anyone, or quickly look up the definition of a word or phrase. And in federal courts, you are usually banned from bringing the phone into the courthouse at all.

Being told not to use your phone is along the same lines as placing your iPhone out of your reach and not being able to answer it. Does it have the same effect? Are jurors struggling with distraction over not being able to use their phones? If so, that is all the more reason to tell them why we don’t want them to use their smartphones. It probably won’t make the distraction go away, but it may help them understand why it is important.

Clayton, R., Leshner, G., & Almond, A. (2015). The Extended iSelf: The Impact of iPhone Separation on Cognition, Emotion, and Physiology. Journal of Computer-Mediated Communication. DOI: 10.1111/jcc4.12109


We follow, as you may have noticed, attitudes, values and beliefs toward a wide variety of issues. So we were surprised to see this 2012 national poll from Quinnipiac University pop up in a number of recent blog posts. According to the survey, while Americans favored the legalization of marijuana (51% to 44%), there were significant age and gender gaps.

“Men support legalization 59 to 36% but women are opposed 52 to 44%.”

Younger voters “18-29 years old support legalization 67 to 29% while voters over age 65 are opposed 56 to 35%.”

For some reason, a number of blogs picked up the survey about two years after it was completed and questioned why the gender gap in attitudes toward marijuana legalization existed. Michele Martinez Campbell at Narcolaw wonders if, as others have posited, it is “just that more men than women are potheads” and scoffs at that explanation as glib. Instead, she believes, “female opposition stems from questions about the impact legalization will have on public health, crime and the social fabric”.

Over at TheMoneyIllusion, Scott Sumner calls this “the mother of all gender gaps” and gets 47 comments. One of the commenters points out a similar gender gap on marijuana legalization in a 2014 survey in Germany (although he did not provide a URL), but still none of the commenters seem to notice the “new” survey they are talking about is 2 years old.

Finally, the discussion moves over to Marginal Revolution, where Tyler Cowen amasses 113 comments (at this writing), many of which are sexist, although some are quite funny (“it’s hard enough to get the man to take the trash out when he isn’t stoned”). And again, despite the proliferation of comments, not a single commenter mentions that the Quinnipiac survey they are hotly debating is from 2012, not 2014.

It’s a curious pattern for sure: men trending more liberal and women more conservative. It is at odds with what tends to happen, and therefore we think it could be important. But we can’t just take 2012 data and interpret it through a 2014, post-midterm-election lens. We need to see if the gender gap Quinnipiac reported in 2012 remains the same in 2014. Why? Attitudes toward marijuana legalization have been changing very quickly. In November of 2014, we simply cannot know whether the “mother of all gender gaps” still exists based on survey data from 2012.

When using survey data and hypothesizing as to meaning in the current day, you need to be very sure your survey data is also current.

And it would be wise to go to the original source rather than parroting what others have said and furthering the inaccuracies.
