Archive for the ‘Beliefs & values’ Category

At least that is the headline we’ve been reading about this research. We’ve written before about the psychopath. Psychopaths are typically characterized as scary and “other” than us, not like us at all. They have been described as being without conscience, and yet some of them work in corporations rather than sit in prison. There are researchers who argue that because the brains of psychopaths are abnormal, they should not be punished for their behavior. Today’s spotlight is on an article of that ilk. These researchers say “one in five violent offenders is a psychopath”. That number is not really surprising, since prevalence rates for psychopathy have been estimated at 15% to 25% of the male offender population. The researchers go on to say that psychopaths have higher rates of recidivism and do not seem to benefit from rehabilitation. They believe they know “why” this happens, and they hope their work will improve childhood interventions to prevent, or at least decrease, violent behavior in those with psychopathy.

They begin by reviewing the literature on the cold and premeditated aggression of the psychopath and posit that the behavior is due to abnormal and distinctive brain development that can be seen from a young age. The researchers recruited 50 men (aged 20 to 50 years; reading age higher than 10 years; no history of major mental or neurological issues) to participate in the study. Some of the men reported they were healthy; others had a documented history of violent offenses. The study used fMRI to examine the brains of healthy non-offenders and violent offenders (some with psychopathy and some with antisocial personality disorder who did not meet the criteria for psychopathy), looking for similarities and differences.

Their subjects were paid minimum wage for their time and included:

12 violent offenders with both antisocial personality disorder and psychopathy, and

20 violent offenders with antisocial personality disorder but not psychopathy, and

18 healthy non-offenders.

The offenders had been convicted of various violent crimes (e.g., murder, rape, attempted murder, grievous bodily harm) and were recruited from Britain’s probation system. The non-offenders were recruited from unemployment offices and community webpages. All participants were interviewed and scored on the Psychopathy Checklist, and the offenders’ criminal records were reviewed. Participants were asked not to use alcohol or illicit drugs for the two weeks before and during the study, and were given urine and saliva tests at each research session. They were also given an IQ test (the Wechsler Adult Intelligence Scale, third edition) and completed a reactive-proactive aggression questionnaire.

The researchers reported differences in brain regions related to empathy, the processing of prosocial emotions (like guilt and embarrassment), and moral reasoning. These regions are also associated with the ability to learn from rewards and punishment. If you don’t experience discomfort when you do something wrong, you are less likely to change your behavior the next time. As in, “why can’t that boy stay out of trouble?” These researchers believe they have an idea about why junior keeps messing up.

Contrary to the attention-grabbing headlines, it is not that psychopaths cannot learn from punishment, and they do “register” punishment. It is just that they do not modify their behavior after being punished. The researchers believe the psychopath may fail to consider the negative consequences of an action and instead focus only on the positive. When caught, the psychopath is punished, and often incarcerated. Upon release, however, the psychopath is much more likely to re-offend and thus is seen as not changing behavior as the result of punishment.

The researchers recommend that parents of children with psychopathy be taught “optimal parenting skills” in order to reduce the conduct problems among their children “except amongst those who are callous and insensitive to others”. They believe this sort of disciplined parenting, which works consistently to teach conduct disordered children the consequences of their actions, can interrupt the abnormalities of brain structure and actually modify behavior (and modify the brain at an age when the brain is more plastic and susceptible to change).

There are obvious concerns with this recommendation. In short, the impact of such a label (especially for pre-teens) is frightening. The New York Times wrote a plain-language article on whether you can call a 9-year-old a psychopath, which generated more than 600 comments. How would parents change their view of their child if they were told the child was a budding psychopath? How would teachers change their view of a child labeled a psychopath? How would parents and teachers change their behavior toward a child given that label? We know what happens when children are diagnosed with learning disorders, labeled “slow”, and so on in the school system: they are expected to perform at a lower level, and they do.

There is a huge body of literature on the “halo effect” (easily found through internet searches). Among kids in school, if even well-intentioned teachers are told that a student is a slow learner or a discipline problem, they later report that the student couldn’t understand as well as others, or had problems getting along. Conversely, if the reputation of a student is positive, the teacher is likely to spend more time and attention being helpful and supportive. Labels are dangerous, because they tend to allow people to stop looking carefully and using objective judgment. And with children, it can put them on paths for good or ill that are later very difficult to change.

From a litigation advocacy perspective, what does this mean? Let’s assume that these researchers are correct and the brain of the psychopath is different, and you can see those differences from a very young age. Does that mean psychopaths should not be held responsible for their behavior? That would likely not play well with an audience of jurors since the violent crimes of the psychopath are often heinous and clearly premeditated. Could they perhaps be thought of as legally responsible but not morally responsible? It is truly a dilemma for the attorneys involved and for the jurors who hear the case facts.

There have long been concerns about a world in which genetic coding or fMRI data is centrally and digitally maintained for entire lifetimes, and about how that data could be (mis)used. Of course it is highly confidential and protected by numerous laws, but so is my credit card information, my social security number, et cetera. If it were communicated to schools, employers, medical professionals, and others, it could permanently alter a person’s opportunities to live a successful life. And if the person already carries psychopathic markers, surely knowing that isn’t going to improve their ambition toward good citizenship.

Gregory, S., Blair, R., ffytche, D., Simmons, A., Kumari, V., Hodgins, S., & Blackwood, N. (2015). Punishment and psychopathy: A case-control functional MRI investigation of reinforcement learning in violent antisocial personality disordered men. The Lancet Psychiatry, 2(2), 153-160. DOI: 10.1016/S2215-0366(14)00071-6


Thanks to us, you know researchers trick people into eating dog food, put them in MRI machines that just happen to have snakes in them, and do other nefarious things. But did you know they sometimes enlist your parents in their deception? It is sad, but apparently true. These UK and Canadian researchers did not, however, tell the helpful parents that they were being used to help the researchers lie convincingly to their own young adult children.

Of course you know that science, and surely social science in particular, is driven by pure and noble intentions. Never does being deceived feel so warm and fuzzy. In this case, the researchers wanted to know whether “innocent adult participants can be convinced, over the course of a few hours, that they had perpetrated crimes as serious as assault with a weapon” between the ages of 11 and 14. The participants were between the ages of 18 and 31, so you would think they would have no trouble recalling whether something like this were true. But no! Even in a “friendly interview”, almost three-quarters of them came to believe the false memories constructed by the researchers.

As you might imagine, this research grew out of questions related to false confessions and whether, in a lab setting, researchers could create false memories of serious criminal behavior or serious emotional trauma in young adults. The researchers cite false memories created by other researchers, including “getting lost in a shopping mall, being attacked by a vicious animal,” and even (although this is hard to believe) “having tea with Prince Charles”. So the researchers wondered whether they could create false memories of crime in young adults if their “caregivers” (i.e., parents) reported such an event had actually happened.

The researchers recruited undergraduates from a Canadian university (N = 60; 43 female; average age 20 years, with an age range of 18 to 31 years; all but five were Caucasian and native English speakers) and asked them to allow their parents to complete “an extensive questionnaire” reporting on various emotional events in their lives between the ages of 11 and 14. The researchers wanted to be sure of three things: that the student-participant had experienced at least one highly emotional event during those years, had never been involved in a crime, and had had no police contact during adolescence.

They actually began with more than 120 students and culled the list down to those who met these three criteria. Yes, fully half of the original group apparently flunked this screener. Since you would have to be asleep between the ages of 11 and 14 to avoid a highly emotional experience, we are left to assume that about half of the undergrads at this university had some pretty dicey behavior at a very young age! It might also explain why most of the remaining subjects were female. This really messes with my impression of gracious and polite Canadians…

The sixty students who participated were told they were in a study examining various memory retrieval strategies, and came back to the lab for three separate 40-minute interviews about a week apart. During the first interview, the researcher told the participant about two events s/he had experienced between the ages of 11 and 14. The catch was that only one of the events was true. Some of the students were told about a crime that resulted in contact with the police (such as assault, assault with a weapon, or theft). Others were told about a false event that was emotional in nature (such as a serious personal injury, an attack by a dog, or losing a large sum of money).

To ‘hook’ the students, the researchers included in the stories of the false events some details that were true and that had actually occurred between the ages of 11 and 14. None of the students remembered any of the false events having occurred. (Thank goodness! Although when they figure out their parents aided and abetted the researchers in lying to them, we predict a long line at the campus counseling center.) So when they did not recall, the researchers asked them to try harder and told them that most people can remember things they do not initially recall clearly if they just use some memory strategies (like the strategy where you consider what it would have felt like to engage in the false event). And try these students did!

In the second and third interviews, the participants were asked again to recall as much as possible about the two events (one untrue/false) discussed in the first interview. The students were also asked how vivid their recollection of the event was and how confident they were in their recollection of the memories.

And here are the (shocking and disturbing) results:

30 participants were told they had committed a crime as a teenager, and 21 of them (70%) developed a false memory of the crime. 20 were told they had assaulted someone, either with or without a weapon, and 11 (55%) reported elaborate stories of their interactions with the police.

Students who were told of an emotional event also formed false memories (76.7%) of that event.

The criminal false events were just as readily believed as the emotional events. Students provided roughly the same number of details and had similar levels of confidence in the memory. The researchers believe that incorporating true details into the story and having it (supposedly) corroborated by the student’s parents gave the false event enough familiarity to seem plausible. (Lots of therapy. That’s all there is to say about this. We recommend that the therapy focus on implanting false memories of the parents having been abducted and forced to invent lies about their kids.) On a positive note, the students gave more details and had more confidence in their descriptions of the true memories. And while we poke fun at the creative inclusion of the parents in this research, the researchers used the parents’ true reports as the basis for creating a false story. The parents did not know what would be done with the information they provided.

The researchers explain their results by saying they think the use of the context reinstatement exercise (that’s the one where you picture what it would be like to engage in the false events) was instrumental in the results. “In other words, imagined memory elements regarding what something could have been like can turn into elements of what it would have been like, which can become elements of what it was like.”

From a litigation advocacy perspective, this is but one in a long series of reasons for you to always question the accuracy of memory-related evidence. We have studied and written about this subject before, as it has grave consequences for testimony and false confessions, especially when the confessional situation is stressful or is conducted by a professional interrogator. That so many of these participants were led to believe in the accuracy of false memories in such a short period of time speaks volumes about how malleable memory is and how it is modified over time.

Shaw, J., & Porter, S. (2015). Constructing Rich False Memories of Committing Crime. Psychological Science. DOI: 10.1177/0956797614562862

Well, perhaps you could rule out Bigfoot conspiracy theories, but what about the rest of them? We’ve written about some of the more unusual conspiracy theories here as well as those that simply show up routinely as we complete pretrial research. Regular readers here know that we use those cognitive leaps characteristic of the conspiracy theorist to plug holes in case narrative.

Recently, we wrote about the relationship of uncertainty to conspiracy beliefs, and today’s article covers related (though different) ground. Back in 2013, we wrote about our wariness in selecting anyone with extreme (i.e., “fringe”) beliefs to serve as a juror. Today’s article combines our fondness for the conspiracy theorist with our wariness of the fringe dweller (political or otherwise). And it gives you a way to assess just how likely you are to fall for a conspiracy theory based on your own predilections. So how could we not blog about that?!

The researchers, based in the Netherlands, conducted studies in both the Netherlands and the United States, which allows a comparison of the somewhat different cultures and makes the results useful to us here in the States. The researchers believe that political extremists have a “highly structured thinking style” that they use to try to make sense of things that occur around them. They describe political extremists as engaging in “black and white thinking” that they use to classify events as good or evil, positive or negative, and so on. They also describe the political extremist as having a “crippled epistemology” which leads them to accept and trust information about political issues mainly from their own extremist group and to ignore other sources of information. They cling to their political beliefs in a “closed-minded and rigid fashion, seeing their preferred policy as the simple and only solution to societal problems”.

In short, say the researchers, the desire for simple political solutions helps the conspiracy theorist cope with feelings of uncertainty and fear by making the world seem understandable and predictable (if only others would see the path the conspiracy theorist sees so clearly before them). So these researchers completed four studies (one in the US and three in the Netherlands), asking participants to classify themselves as politically left-wing or right-wing and then to indicate how much they endorsed “conspiracy beliefs about a range of current political issues”. The researchers hypothesized that those at the extremes (both right and left) would be more likely to endorse conspiracy theories than those with more politically moderate ideologies.

And they were correct!

In Study 1 (the US study, 207 participants, age range 18 to 76 years), they found that politically extreme people were not more “interpersonally paranoid” than those with more moderate orientations—ruling out the possibility that political extremists are more paranoid in general.

In terms of conspiracy beliefs, those with extreme political orientations (whether left-wing or right-wing) had stronger beliefs in conspiracy theories about the financial crisis. Things were more complex when it came to climate conspiracy beliefs. The extreme right tended to believe more in a climate conspiracy theory than the left, and those climate conspiracy theories tended to flourish particularly among right-wing extremist males.

In Studies 2A and 2B (independently conducted, nationally representative samples of the Dutch electorate), the researchers asked how probable or improbable participants thought a number of conspiracy theories were. Their interest was in testing the relationship between a desire for simple political solutions and the tendency to believe in conspiracy theories.

In Study 2A (1,010 participants), they again found that the political extremes were more likely to endorse conspiracy beliefs than those in the political middle, and that the political extremes were more likely to believe in simple solutions to societal problems. They also found that belief in simple political solutions mediated the relationship between political ideology and conspiracy beliefs at the political extremes.

In Study 2B (1,297 participants), they again saw that conspiracy beliefs were stronger at the extreme political right, and again, conspiracy beliefs and a belief in simple political solutions were significantly correlated. As in Study 2A, belief in simple political solutions mediated the relationship between political ideology and conspiracy beliefs (see the sketch below for what that mediation claim means).
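
“Mediation” is a statistical term that can be opaque if you don’t run regressions for a living. As a rough illustration only, here is a minimal Python sketch with simulated data and made-up coefficients (not the authors’ dataset or analysis) showing the basic logic: the apparent effect of political extremity on conspiracy belief shrinks once belief in simple solutions is taken into account, and that shrinkage is what “mediation” refers to.

# Illustrative sketch only: simulated data, not the authors' dataset or code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2015)
n = 1000

# Hypothetical 11-point left-right self-placement (1 = far left, 11 = far right).
ideology = rng.integers(1, 12, size=n)
# Political extremity = distance from the scale midpoint (6).
extremity = np.abs(ideology - 6)

# Simulate the hypothesized chain with made-up coefficients:
# extremity -> belief in simple solutions -> conspiracy belief.
simple_solutions = 0.6 * extremity + rng.normal(0, 1, size=n)
conspiracy_belief = 0.5 * simple_solutions + 0.1 * extremity + rng.normal(0, 1, size=n)

# Total effect of extremity on conspiracy belief.
total = sm.OLS(conspiracy_belief, sm.add_constant(extremity)).fit()

# Direct effect once the mediator (belief in simple solutions) is controlled for.
predictors = sm.add_constant(np.column_stack([extremity, simple_solutions]))
direct = sm.OLS(conspiracy_belief, predictors).fit()

print("total effect of extremity: %.2f" % total.params[1])
print("direct effect, mediator controlled: %.2f" % direct.params[1])
# The direct effect is far smaller than the total effect; that shrinkage is
# what "mediated by belief in simple political solutions" means.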

In Study 3, the researchers wanted to rule out a possible alternative explanation for their findings, so they tested whether those with extreme political attitudes simply hold more extreme attitudes in general. For this study, 268 participants completed online questionnaires measuring their political ideology (left-wing or right-wing), their belief in conspiracy theories, and their general attitudes toward 18 products, activities, types of food, and ideas (so the researchers could see whether the politically extreme participants had extreme attitudes about everything).

Again, political extremists were more likely to believe in conspiracy theories than those who were more moderate. There were no significant effects for most of the “non-ideological” issues measured, although political extremists did tend to “have a more positive attitude about watching documentaries than moderates”, and they tended to like smartphones and to dislike astrology. Otherwise, political extremists did not differ from political moderates in their general attitudes, and the researchers say there is no indication that extreme political beliefs generalize to extreme attitudes overall.

So what does all that mean? You are more prone to endorse conspiracy beliefs if you are at either political extreme (i.e., right-wing or left-wing). So, when it comes to assessing your own risk for endorsing conspiracy beliefs, visualize an 11-point scale ranging from left-wing to right-wing (or the opposite if you so wish) and determine your own placement. If you place yourself at either end of the spectrum, you are more likely to believe in conspiracy theories.

From a litigation advocacy perspective, while you can’t really come right out and ask in voir dire whether someone is a conspiracy theorist, you can ask about political ideology, which gives you a rough metric for the likelihood of susceptibility to conspiracy theories. You can then decide (based on your goals in that particular trial) whether you wish to retain or strike jurors with those tendencies.

van Prooijen, J., Krouwel, A., & Pollet, T. (2015). Political Extremism Predicts Belief in Conspiracy Theories. Social Psychological and Personality Science. DOI: 10.1177/1948550614567356


We’ve seen the claims that people don’t find brain scans as alluring as they used to, but here is a study that says, “not so fast!” It’s an oddly intriguing study involving not only pretty pictures of brain function but also political affiliation and how that factors into what one chooses to believe.

Much attention over recent years has been given to “an attack on science”, with many public figures (including elected officials) insisting that evolution is a hoax, climate science isn’t real, and vaccines are somehow more harmful than helpful. [For the record, here at the Jury Room we are big-time fans of science. I want to believe that our readers knew that already.]

Researchers discuss perceptions of “soft science” and “hard science” and the general sense that “hard science” is viewed as more reliable, accurate and precise. They describe multiple experiments showing people tend to prefer “hard science” data to data offered by those in “soft science”. The question these researchers focused on was whether “hard science” data (in this case, a brain scan) would be preferred over “soft science” data (in this case, cognitive test results). They also wondered if this preference (for “hard science” or “soft science” data) would be mediated by political orientation.

In the study, the participants (106 in total: 83 women and 23 men; ranging in age from 18 to 47 years, with an average age of 19.6 years; 77 identified as White, 17 as African-American, and “five or fewer” as Asian American, Latino/Latina, or other) completed an online pretest which included two questions about their political preference (both used by the American National Election Studies):

Generally speaking, do you think of yourself as a Democrat, a Republican, an Independent, or something else?

If you selected Democrat or Republican for the previous question, would you call yourself a strong Democrat or Republican, or a not very strong Democrat or Republican?

Only those participants who identified as either Democrat or Republican were eligible to participate in the study, which they were told would involve reading about an ethics violation and then making judgments about the case.

In the study itself, participants read a one-paragraph case description about a politician elected to office in a geographically distant state who had recently been cited for three ethical violations. The paragraph informed them that the ethics committee had questioned the politician’s memory and asked him to have his memory evaluated to determine whether memory issues would prevent him from carrying out his duties as an elected representative. Finally, the participants read that if the testing determined the politician was impaired, he would be forced to resign and the governor of the state would appoint a replacement to serve until the next election. The description concluded by saying the governor had announced that any replacement appointee would be a member of the same political party as the governor.

There were (you knew this was coming) several variations in the information the participants read about the politician and his situation.

Half of the participants read that the politician tested was a Democrat and the governor of his state was a Republican. The other half read that the politician was a Republican and the governor of his state was a Democrat.

The researchers also paid attention to the political identification of each participant. If a participant said they were Republican and read about a Republican politician, they were placed in an analysis group labeled in-group; if a Republican participant read about a Democratic politician, they were placed in a group labeled out-group (and the same applied, in reverse, for Democratic participants). Further, participants who endorsed a strong political affiliation were classified in the strong political identification group, and those who endorsed a weak affiliation were classified in the weak political identification group. A rough sketch of this coding scheme appears below.
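
For readers who like to see the bookkeeping spelled out, here is a minimal illustration of that coding scheme in Python. The function names and labels are ours, offered only as an illustration of the grouping logic described above, not the researchers’ actual code.

# Hypothetical helpers; a sketch of the coding described above, not the authors' code.
def code_group(participant_party: str, politician_party: str) -> str:
    """Label a participant 'in-group' when they share the politician's party, else 'out-group'."""
    return "in-group" if participant_party == politician_party else "out-group"

def code_identification(strength: str) -> str:
    """Classify political identification as strong or weak based on the pretest item."""
    return "strong identification" if strength == "strong" else "weak identification"

# Example: a Republican participant reading about a Democratic politician.
print(code_group("Republican", "Democrat"))   # -> out-group
print(code_identification("strong"))          # -> strong identification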

After reading the initial description of the situation, all participants read a two-paragraph description of an expert evaluation of the politician. The expert mentioned in this description was a “Dr. Daniel Weinberger”. The participants received differing information about how Dr. Weinberger had evaluated the politician’s cognitive function.

Half the participants read that Dr. Weinberger reviewed the politician’s medical history and gave him verbal or paper and pencil tests (commonly used by neuropsychologists).

The other half of the participants read that Dr. Weinberger reviewed the politician’s medical history and conducted an MRI of the politician’s brain. (It is important here to note that no MRI images were shown. All the participants saw were words describing the process and then, the outcome.)

The second paragraph offered a description of the results of the evaluations in ways consistent with either verbal or paper and pencil tests or an MRI. For all participants, the second paragraph ended with identical statements saying that the expert concluded the “politician was suffering from beginning-stage Alzheimer’s disease, that symptoms will continue, and the symptoms will interfere with the politician’s ability to perform his duties”.

And here are the findings:

Biologically based information (i.e., the brain MRI) was viewed more favorably (69.8% said the evidence the politician had early stage Alzheimer’s was strong and convincing) than the behaviorally based (i.e., cognitive testing) information (only 39.5% said the evidence the politician had early stage Alzheimer’s was strong and convincing).

When asked to identify the single most important reason they felt the way they did about the evidence presented, those who saw the behavioral evidence said it was subjective and perhaps unreliable or irrelevant; more than 15% said the neuropsychological testing was unreliable or irrelevant. Not a single participant who saw the biologically based evidence said the MRI evidence might be unreliable; in fact, they saw it as objective, valid and reliable. (Anyone with any knowledge of the validating research and the very detailed manuals accompanying psychological tests might find this, as the researchers say, “perplexing”. Of course, those who have that knowledge base would not qualify for inclusion in this study.)

Those participants who were in political out-group assignments (that is, Republican participants who read about a Democratic politician or Democratic participants who read about a Republican politician) were more likely to discount the behavioral science evidence than those in political in-group assignments.

In short, in this study, participants saw the MRI as more reliable and relevant than the cognitive testing, and those with strong political identities discounted the cognitive testing even more than those without the strong political sense of self.

Despite the reality that Alzheimer’s would always be diagnosed with cognitive testing, with brain scans used afterward to rule out other explanations for the impairments identified by testing, these participants preferred the verbally described brain images of “hard science” to the low-tech paper-and-pencil tests of the neuropsychologist. It’s a finding that underscores the importance of expert testimony informing jurors of how a diagnosis is made, so they know whether testing was performed for the “wow” factor of a colorful MRI or to offer a research-based assessment of brain/memory impairment.

In other words, don’t believe everything you read: jurors can still be seduced by what looks like “hard science”. Your task is to show them which scientific findings are truly backed by years of scientific research and development.

Munro, G., & Munro, C. (2014). “Soft” Versus “Hard” Psychological Science: Biased Evaluations of Scientific Evidence That Threatens or Supports a Strongly Held Political Identity. Basic and Applied Social Psychology, 36(6), 533-543. DOI: 10.1080/01973533.2014.960080


Last month we were asked to provide internet research on a very large jury panel, and to complete it overnight. What that means is that we try to find out as much as we can about the attitudes, values and behavior of those on the venire panel. We do that background research on the internet, across multiple sites. It is painstaking work and must be accurate, no matter how late (or early) it gets. In this case, we started about 3pm one day and finished up about 5am the next morning (with the help of pizza delivery, lots of peanuts, and ample caffeine).

We’ve noted before that pretrial juror research can result in hilarity only achievable in the wee hours of the morning as exhaustion sets in. In this research batch we found a plethora of duck-face selfies (mostly posted by women), and today’s research article was garnering lots of press while we were pounding our keyboards (and then sleeping).

This particular research finding does not surprise us at all. When you see men with all sorts of selfies on social media—particularly shots showing off their physique, musculature, and general buffness—what might you conclude? If you say “narcissist”—then you agree with today’s researchers.

The researchers obtained a “nationally representative sample” of 800 men aged 18-40 years (average age 29.3 years, 73.1% White; 13.3% Black; 7.6% Hispanic; 6.1% Asian; 1.3% Native American; 2.3% multiracial; and 2% other) who completed an online survey task.

The 800 participants completed scales measuring self-objectification (e.g., how much the individual values their physical appearance above other traits; to see the self-objectification scale questions, follow the link above and scroll to page 120 of the document that opens) and the dark triad (i.e., psychopathy, narcissism and Machiavellianism; to see this scale’s questions, follow the link and see the items in Table 8 on page 429 of the document that opens).

In addition to completing these scales, the participants also estimated the time they spent on social networking sites daily, reported how often they posted selfie photos, and indicated whether they edited the photos they posted to enhance their appearance.

And here are the (not particularly shocking) findings:

Men who spent more time on social media sites each day (the average in this study was 78.73 minutes, with a maximum report of 16 hours a day) were more narcissistic and higher in the trait of self-objectification.

Men who posted more selfies were more narcissistic and psychopathic.

Men who edited their photos before posting them were more narcissistic and higher in the trait of self-objectification.

The researchers opine that men high in dark triad traits (narcissism, psychopathy and Machiavellianism) will edit photos before uploading them to social networking sites so as to present themselves in the best possible light and attract short-term partners. The researchers call these “cheating strategies”, but they could just as easily be called “editing photos to be more flattering”.

We want to point out that although these men “higher in dark triad traits” did score higher in narcissism, psychopathy and Machiavellianism, their scores remained in the normal range and could simply indicate good self-esteem.

While most of the mainstream media coverage we’ve seen has reported this fact, it tends to be tucked in at the bottom of the story, and you are unlikely to see headlines trumpeting, “Men With Good Self-Esteem Post More Selfies!!!”

So therein lies the dilemma. It is easy to make assumptions about the many selfies on a man’s social networking accounts. But drawing conclusions from a single piece of data is always risky.

An example from this most recent round of research was the young male who had only one book listed on his Facebook profile under books he’d read: The Stoner Cookbook. It would have been very easy to write him off as “just a stoner”, but there were other data points that showed him to have much broader interests and to be a reasonable prospective juror.

Today’s research is just one study that has taken off in the mainstream media (like the study on criminal defendants wearing eyeglasses that was so misinterpreted). “Selfies” are one data point when you are doing juror research before voir dire. They may be an important point of data, depending on content. On the other hand, they may not be relevant, and assumptions leading to incorrect conclusions can undermine your painstaking research.

Fox, J., & Rooney, M. (2015). The Dark Triad and trait self-objectification as predictors of men’s use and self-presentation behaviors on social networking sites. Personality and Individual Differences, 76, 161-165. DOI: 10.1016/j.paid.2014.12.017
