Mother Jones
A new academic journal, the Journal of Experimental Political Science, says that it “embraces all of the different types of experiments carried out as part of political science research, including survey experiments, laboratory experiments, field experiments, lab experiments in the field, natural and neurological experiments.” Andrew Gelman applauds, but with a caveat:
This looks good to me. There’s only one thing I’m worried about. Regular readers of the sister blog will be aware that there’s been a big problem in psychology, with the top journals publishing weak papers generalizing to the population based on Mechanical Turk samples and college students, lots of researcher degrees of freedom ensuring there will be no problem finding statistical significance, and with the sort of small sample sizes that ensure that any statistically significant finding will be noise, thus no particular reason to expect that patterns in the data will generalize to the larger population.
…Just to be clear: I’m not saying that the scientific claims being made in these papers are necessarily wrong, it’s just that these claims are not supported by the data. The papers are essentially exercises in speculation, “p=0.05” notwithstanding.
And I’m not saying that the authors of these papers are bad guys. I expect that they mostly just don’t know any better. They’ve been trained that “statistically significant” = real, and they go with that.
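Gelman’s point about small samples is worth making concrete. The claim is that when a study is underpowered, the estimates that happen to clear p < 0.05 are, almost by construction, large overestimates of the true effect. Here is a minimal simulation sketching that logic; all the numbers (a true effect of 0.1, noise SD of 1, n = 30) are illustrative assumptions, not values from any particular study.

```python
import random
import statistics

# Illustrative sketch of the underpowered-study problem: with a small true
# effect and a small sample, the estimates that reach "statistical
# significance" are the ones that got lucky, so they exaggerate the effect.

random.seed(0)

TRUE_EFFECT = 0.1   # small real effect (assumed for illustration)
SD = 1.0            # per-observation noise
N = 30              # small sample size
SIMS = 20_000       # number of simulated studies

# For n = 30, a two-sided t-test at alpha = 0.05 needs |mean|/SE above
# roughly 2.045; we use 2.0 as a close-enough cutoff for this sketch.
CRIT = 2.0

significant_estimates = []
for _ in range(SIMS):
    sample = [random.gauss(TRUE_EFFECT, SD) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if abs(mean / se) > CRIT:
        significant_estimates.append(mean)

share = len(significant_estimates) / SIMS
exaggeration = statistics.fmean(abs(m) for m in significant_estimates) / TRUE_EFFECT
print(f"share of studies reaching significance: {share:.1%}")
print(f"avg |estimate| among significant results: {exaggeration:.1f}x the true effect")
```

Only a small fraction of these simulated studies reach significance, and the ones that do overstate the true effect severalfold. That is the sense in which a significant result from an underpowered study is "noise": the filter itself guarantees the published estimate is inflated.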
Call me naive, but WTF? I have no training at all, and I’m keenly aware of the problems Gelman is talking about. How is it possible to complete a PhD program and not have this kind of thing drilled into your consciousness for all time? Can there really be people out there who are being trained that “statistically significant” = real, and nothing more? It’s mind boggling. Are there any PhD programs out there that would fess up to this?
Of course, there are journals that publish some of these papers, so apparently the problem goes beyond PhD programs.
In any case, my view is that if you see the phrase “Mechanical Turk” anywhere in a paper, your BS radar should instantly go into high alert. It’s possible that there’s a reasonable justification for using MT, but not often. I’d be pretty happy to see it banned entirely from allegedly scholarly research.