Also this week: big data on possible P-hacking, a meta-analysis of active learning, Paul Krugman vs. the writing skills of Princeton undergrads, and more…
Since I’ve been stuck on the theme of frogs in boiling water, here is a great, very short story by Nancy Kress that interweaves the frog-in-boiling-water meme with themes of sibling rivalry, work-life balance, and academic theft of ideas. Kress is my favorite science fiction author these days, in part because she is a good writer and in part because she is one of the few who explore the implications of biological science instead of spaceships and computers. Go ahead – take a 5-minute break from science to read fiction!
I’m late to this, but anyway: here are big data on possible P-hacking. It’s a histogram of the distribution of (all?) P-values indexed by Google Scholar in the range 0.041-0.059. Note the apparent break point at 0.05, with values slightly below 0.05 predominating over those slightly above. I actually wouldn’t get too upset about this, in part because you expect reported P-values to be biased towards lower values due to publication bias even in the absence of P-hacking, and in part because I don’t think you can tell from these data how serious the P-hacking is. It could be serious, but it could be pretty mild – the rough equivalent of, say, using an α of 0.06 instead of 0.05. I’m also taking a leap of faith that whoever did this analysis is trustworthy, didn’t screw anything up, didn’t make any dubious methodological choices, etc. But I do think that, carefully interpreted and combined with other lines of evidence, these sorts of distributions can be useful canaries in the coal mine for widespread methodological problems in science. I’d be curious to see the distribution separated out by year, to see if the shape changes over time – though Google Scholar probably doesn’t go back far enough to look at time series data (maybe you could do it by text-mining JSTOR?). I’d also be curious to see it separated out by field. (ht @edyong209)
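To see why publication bias alone can produce an asymmetry around 0.05, here is a toy simulation of my own (not the linked analysis – the sample sizes, effect sizes, and 20% publication rate for non-significant results are all made-up assumptions): run a bunch of two-sample tests, publish all significant results but only a fraction of non-significant ones, and compare the counts of published P-values just below vs. just above 0.05.

```python
# Toy simulation: publication bias, with zero P-hacking, still makes
# published P-values just below 0.05 outnumber those just above it.
# All parameters here are illustrative assumptions, not from the linked analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_tests = 20, 50_000  # 20 observations per group, 50k studies

# Each "study" compares two groups; half the studies have a real effect (d = 0.5)
effects = rng.choice([0.0, 0.5], size=n_tests)
x = rng.normal(0.0, 1.0, size=(n_tests, n))
y = rng.normal(effects[:, None], 1.0, size=(n_tests, n))
pvals = stats.ttest_ind(x, y, axis=1).pvalue

# Publication filter: significant results always published,
# non-significant results published only 20% of the time.
published = (pvals < 0.05) | (rng.random(n_tests) < 0.2)
pub = pvals[published]

just_below = int(np.sum((pub >= 0.041) & (pub < 0.05)))
just_above = int(np.sum((pub >= 0.05) & (pub <= 0.059)))
print(just_below, just_above)  # a break point at 0.05, with no P-hacking at all
```

The break point appears because the filter discards most of the just-above-0.05 results, not because anyone nudged their P-values. That’s why the histogram by itself can’t separate P-hacking from ordinary publication bias.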
University departments often seek to hire star researchers, in hopes of indirect as well as direct benefits. For instance, the presence of the star might make it easier to attract other good researchers in the future. Do star hires pay off? Agrawal et al. tried to address this question by compiling a database of faculty hires and publication outputs for 255 evolutionary biology departments from 1980 to 2008. They argue that star hires can lead to massive increases in departmental research productivity. But unfortunately, to my eyes the methods used to infer this conclusion leave a lot to be desired. The authors are basically making uncontrolled before-vs-after comparisons, and so the whole thing is kind of an exercise in post hoc, ergo propter hoc. Still, I’m guessing many of you will be curious about this, so I thought I’d pass it on and you can judge it for yourselves. It also gives me an excuse to link back to Brian’s fun old post on department building vs. chicken breeding.
A forthcoming meta-analysis in PNAS finds that active learning in STEM courses improves student exam scores by about 0.5 standard deviations on average and reduces the odds of student failure. The effects hold across all disciplines and class sizes, but are largest in small classes, and are robust to publication bias. Those are just the headline results, click through to the paper for much more detail, including comparisons among experiments with different designs. Speaking as an instructor, these data certainly encourage me to incorporate more active learning into my courses. Especially the introductory biostats course that I’m currently teaching (and teaching rather badly, to judge by various metrics). But the estimated effect sizes have sufficiently large error bars that I do worry about making a large investment in new prep for no detectable payoff (plus, failure rates in my courses are already very low).
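To make the "no detectable payoff" worry concrete, here is a back-of-the-envelope power sketch of my own (the 0.5 SD figure is from the meta-analysis; the section sizes and the low/high effect-size guesses are hypothetical): how likely am I to detect a difference between two course sections, depending on where the true effect falls within a wide error bar?

```python
# Back-of-the-envelope power for comparing two course sections.
# Uses the standard normal approximation to the two-sided two-sample test;
# effect sizes of 0.2 and 0.8 bracket a hypothetical wide error bar
# around the meta-analytic estimate of 0.5 SD.
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power to detect standardized mean difference d."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * sqrt(n_per_group / 2)
    # Probability the test statistic lands in either rejection region
    return norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

for d in (0.2, 0.5, 0.8):  # low guess, central estimate, high guess
    print(f"d = {d}: power ≈ {power_two_sample(d, n_per_group=50):.2f}")
```

With 50 students per section, the central estimate gives decent power, but the low end of the range leaves you quite likely to see nothing – which is exactly the scenario where a big investment in new prep yields no detectable payoff.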
Paul Krugman asks “How can we prevent students from making writing mistakes like mixing up ‘principle’ and ‘principal’?” I’d have thought “Threaten to complain about their writing on your million-reader blog” would help, but I guess not. 🙂 In any case, as I’m sure Paul Krugman would be the first to admit, encountering annoying-but-minor grammatical mistakes in the essays of Princeton undergrads really is a #firstworldproblem.
And finally, “You don’t love science, you’re looking at its butt when it walks by.” 🙂 (ht Ed Yong)