This is a bit outside our usual territory, but it's a good example of null hypothesis testing in action: it turns out that even randomly generated DNA sequences mostly look "functional", at least if one adopts a biochemical definition of "functional".
We've been talking a lot about reproducibility and researcher degrees of freedom lately, and Andrew Gelman passes on the best story yet about these issues. Two psychologists interested in political ideology ran an online experiment in which people of different political persuasions (left, right, center) were shown words in different shades of grey and asked to pick out those shades from a black-white gradient. They found (P=0.01) that political moderates were most accurate, implying that political extremists literally see the world in black and white. The design and follow-up analyses ruled out the obvious alternative explanations. I'm sure you can see how this result would've made a big splash in the media, and so could the authors (one of whom was a finishing grad student with every incentive to publish a splashy result). But before publishing, the authors hesitated. Even though they'd hypothesized the result in advance, and even though their sample size was big, they worried that the analytical choices they'd made (which they hadn't decided in advance) might somehow have inflated the statistical significance of the result. So, since data collection was cheap (it was an online quiz), they did the whole thing again, with power of 0.995 to detect an effect of the original estimated size at the 0.05 level. The effect vanished (P=0.59). In their own write-up of the experience, the authors discuss how to give authors and journals incentives to publish replications rather than, or in addition to, novel results.
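In case you're wondering where a number like "power of 0.995" comes from: here's a minimal sketch of the kind of sample-size calculation involved, using a normal approximation to a two-sided two-sample test. The effect size (d ≈ 0.3) is a hypothetical number of my own, not the authors'; the function and names are illustrative, not from the study.

```python
# A hedged sketch (illustrative numbers, not the authors'): choosing a
# replication sample size to get power ~0.995 for a given effect size.
import numpy as np
from scipy import stats

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d, with n_per_group subjects per arm,
    using the usual normal approximation."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    ncp = d * np.sqrt(n_per_group / 2)  # noncentrality under the alternative
    return stats.norm.cdf(ncp - z_crit)

# Suppose the original study estimated d ~ 0.3 (hypothetical);
# scan sample sizes until power reaches 0.995:
n = 10
while power_two_sample(0.3, n) < 0.995:
    n += 10
print(n, round(power_two_sample(0.3, n), 3))  # 460 per group, power 0.995
```

The point of the sketch is just that very high power is expensive: detecting a modest effect at power 0.995 takes several hundred subjects per group, which is only feasible when data collection is cheap, as it was here.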
Speaking of reproducibility, here's John Ioannidis' latest: published biomedical animal experiments report statistically significant treatment effects twice as often as expected, given their sample sizes and plausible estimates of the true effect sizes. This suggests that selective data analysis and biased outcome reporting are serious problems. As I've noted before, there's an opportunity here for someone to do a similar analysis in ecology. Anything that's been studied sufficiently often to be a good subject for meta-analysis is a good candidate…(HT Ed Yong)
And yet another post from Andrew Gelman on reproducibility and researcher degrees of freedom. I link to it because it makes an important point that isn't made often enough. Researcher degrees of freedom isn't just about which analyses you chose to do, given the particular data that happened to occur. It's also about whether you would've done different analyses if different data had occurred. If so, then you have "many possible roads to statistical significance" and should expect to find nominally significant results more than 5% of the time even when there's no true signal in the data.
Political scientist, mathematician, and programmer Philip Schrodt has resigned a tenured faculty position at Penn State, even though his health is good, he just got two NSF grants, and his teaching is going well. And he's not retiring; he's becoming a consultant. Click through to read his blog post on why he's "going feral". (HT The Monkey Cage)
San Jose State made a splash a little while back by teaming up with Udacity to offer massive open online courses (MOOCs) for credit. They’ve now suspended the experiment, after 56-76% of students (depending on the course) failed the final exam. The news, and the implications, have been widely discussed online. This post from Reihan Salam of Reuters asks the key question: what if the only way to improve that dismal failure rate is to make MOOCs less massive and more expensive–that is, make them more like ordinary classroom-based courses? Relatedly, Crooked Timber has a discussion of what sort of skills MOOCs can’t teach.
Speaking of Crooked Timber, they ask whether peer reviews should be made public after the associated paper is published. The post was inspired by some recent posts in the ecology blogosphere. The discussions over there are always quite lengthy and meaty, so if you’re interested in this issue and are curious about the perspective of folks from the social sciences and humanities, click through and have a look.
Divisions between empiricists and theoreticians in science go back way beyond the dawn of science as a profession. Magic, Maths, and Money provides a nice little potted history of 17th century British empiricism and its Baconian love of “big data”, vs. French rationalism and its Cartesian love of mathematics. Read it and hear the distant echoes of contemporary methodological debates. (HT Economist’s View)
And finally, light saber edition: "You'll shoot your eye out!"🙂 (HT Ed Yong)
NSF has announced that its new director will be France Córdova. She is an astrophysicist and is the former president of Purdue University.
And here’s a piece suggesting that, if you want to be an urban naturalist, you just need to walk around with a toddler. I agree! I definitely would walk by many interesting things without noticing, if it weren’t for my toddler.
An interesting post from Terry McGlynn on why he doesn’t like the term “work-life balance”. It’s a term I use often, and he definitely raises interesting points that are worth considering (but I suspect I will keep using the term, because I do think it’s a convenient label).
Sorry to be missing ESA this year! I'm hoping to follow along via Twitter, though the lack of wifi at the conference center might make that hard.