Tuesday, January 28, 2014

Can We Trust Psychological Research?

A recent scandal suggests that data manipulation is all too common in psychology studies

Psychologist Dirk Smeesters does fascinating work, the kind that lends itself to practical, and particularly business, applications. Earlier this year, the Dutch researcher published a study showing that shifting the perspective of advertisements from the third person to the first person, so that a TV ad seems to unfold as if we were seeing it through our own eyes, makes people weigh certain information more heavily in their consumer choices. The results appeared in the Journal of Personality and Social Psychology, a top American Psychological Association (APA) journal. Last year, Smeesters published a different study in the Journal of Experimental Psychology suggesting that even manipulating colors such as blue and red can make us bend one way or another.
Except that apparently none of it is true. Last month, after being exposed by Uri Simonsohn at the University of Pennsylvania, Dr. Smeesters acknowledged manipulating his data, an admission that has been the subject of fervent discussion in the scientific community. Dr. Smeesters has resigned from his position, and his university has asked that the respective papers be retracted from the journals. The whole affair might be written off as one unfortunate case, except that, as Smeesters himself pointed out in his defense in Discover Magazine, the academic atmosphere in the social sciences, and particularly in psychology, effectively encourages such data manipulation to produce “statistically significant” outcomes.
Dr. Smeesters is not being accused of fabricating data altogether. He ran studies, but allegedly excluded some data so as to achieve the results he wished for. Insidious as this may sound, some recent analyses of psychological science suggest that fudging the math to get a false positive is all too easy. It is also far too common, as Leslie John and colleagues have shown.
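Just how easy can be seen in a toy simulation. The sketch below is purely illustrative and assumes nothing about the analyses actually at issue: the group sizes, the number of exclusions, and the .05 threshold are arbitrary choices. It draws two groups from the same population, so there is no real effect to find, then keeps dropping the most "inconvenient" observation from each group whenever a t-test misses significance; the resulting false-positive rate climbs well above the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, max_drops, alpha = 5_000, 30, 3, 0.05
false_positives = 0

for _ in range(n_sims):
    # both groups come from the same population: no real effect exists
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    for _ in range(max_drops + 1):
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1
            break
        # not "significant" yet: drop the least convenient observation in
        # each group (the lowest value in a, the highest in b) and re-test
        a = np.delete(a, a.argmin())
        b = np.delete(b, b.argmax())

print(f"False-positive rate: {false_positives / n_sims:.1%} (nominal 5%)")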
Nor is all of it, or even most of it, purposeful. As Etienne LeBel and Kurt Peters eloquently put it recently in the Review of General Psychology, the problem is not that social scientists are willfully engaging in misconduct. The problem is that methods are so fluid that psychologists, acting in good faith but carrying natural human biases toward their own beliefs, can unknowingly nudge data in the direction they believe it should go. The field of psychology offers scholars a staggering array of competing statistical choices. I suspect, too, that many psychologists are sensitive to comparisons with the “hard” sciences, and this may propel them to make more certain claims about their results even when it is irresponsible to do so.
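A second hedged sketch shows how that flexibility inflates error rates without anyone touching a single data point. The two correlated outcome measures and the rule "report whichever test comes out significant" are hypothetical stand-ins for the many defensible choices a researcher actually faces, but the pattern is general: picking among a few reasonable analyses after seeing the data raises the false-positive rate above the nominal 5%, even though every individual test is perfectly standard.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, alpha, hits = 5_000, 40, 0.05, 0

for _ in range(n_sims):
    group = np.repeat([0, 1], n // 2)        # two conditions, no true effect
    # two moderately correlated outcome measures (say, two related survey scales)
    base = rng.normal(size=n)
    dv1 = base + rng.normal(scale=1.0, size=n)
    dv2 = base + rng.normal(scale=1.0, size=n)
    # three "reasonable" ways to define the outcome, chosen after the fact
    candidates = [dv1, dv2, (dv1 + dv2) / 2]
    pvals = [stats.ttest_ind(dv[group == 0], dv[group == 1]).pvalue
             for dv in candidates]
    hits += min(pvals) < alpha               # report whichever analysis "works"

print(f"False-positive rate with flexible analysis: {hits / n_sims:.1%}")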
Then there are the more obvious pressures, including the old “publish or perish” problem in academia. Results that don’t support a study’s hypothesis rarely get published. Given that academic hiring and firing tend to rely mainly on publications and grants, many scholars may feel pressured to make sure their results are “statistically significant.” Similarly, if a scholar has just convinced the federal government that, say, cartoons may pose an imminent danger to children everywhere and to give him or her a million-dollar grant to prove it, it’s difficult to come back years later and say, “Nope, I got nothing.” Some scholars function as activists for particular causes (or take funding from advocacy groups). And of course statistically significant results tend to grab headlines in ways that null results don’t.
Many psychologists are aware of these issues and very concerned about them; in fact, most of the concern about this problem has been raised from within the scholarly community itself. This is how science works: by identifying problems and trying to correct them. Our field needs to change the culture in which null results are undervalued, and scholars should submit their data along with their manuscripts for statistical peer review when seeking publication. And we need to continue looking for ways to move past “statistical significance” into more sophisticated discussions of how our results may or may not have real-world impact. These are problems that can be fixed with greater rigor and open discussion. Without any attempt to do so, however, our field risks becoming little more than opinions with numbers.


Time
