Last week I polled y’all on how replicable you think ecology is, and on the sources of lack of replicability. Here are the responses!
tl;dr: There’s more disagreement about this than about any other topic we’ve polled on.
We got 118 responses; thanks to everyone who responded! Not a huge sample, and surely not a perfectly-representative sample of ecologists, or even of readers of this blog. But as usual with our polls, it’s a sufficiently large and representative sample to be worth talking about.
The first question asked readers to give the percentage chance that a typical ecological study would replicate, where “replicate” means “getting a statistically significant result of the same sign as the original, using either the same data collection process and analysis on a different sample, or the same analysis on a similar but independent dataset.” Here’s a histogram of the responses:
The responses were all over the map. But the distribution isn’t perfectly uniform. If you squint a bit, it looks trimodal. There are replication optimists: a peak of respondents who think the typical ecological study has a 70% chance of replicating. There are replication pessimists: a peak of respondents who think the typical ecological study has only a 20% chance of replicating. And there are replication, um, pessoptimists: a peak of respondents who think the typical ecological study has a 50% chance of replicating. Ok, the trimodality might just be a blip, but I doubt the broad spread of the distribution is one. Ecologists disagree a lot about whether the typical ecological study would replicate.
Which, as an aside, kind of surprises me. I mean, meta-analyses are a thing! Every respondent to this survey has presumably read numerous meta-analyses, which should give you a sense of how often two different studies of the same topic produce a statistically significant effect of the same sign. As a reader of a bunch of meta-analyses, I feel like the replication pessimists are just incorrect.
Or maybe I shouldn’t be surprised by the level of disagreement here. I mean, it’s not as if most ecologists have done what Tim Parker has done: gone through a systematic exercise to estimate the replicability of a whole bunch of ecological studies. So maybe it’s not surprising that casual guesses about the replicability of the typical ecological study are all over the map.
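If you’d rather simulate than guess, the poll’s definition of “replicate” is easy to pin down in code. Here’s a minimal sketch, assuming a simple two-group design; the effect size, sample size, and number of study pairs are all made-up illustrative numbers, not estimates for ecology:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def replication_rate(true_effect=0.3, n=50, n_pairs=10_000, alpha=0.05):
    """Estimate the chance that an exact repeat of a simple two-group study
    yields a significant effect of the same sign as a significant original.
    Every number here is a made-up illustration, not an estimate for ecology."""
    hits = originals = 0
    for _ in range(n_pairs):
        # Original study: treatment vs. control, true effect in SD units.
        t1, p1 = stats.ttest_ind(rng.normal(true_effect, 1, n),
                                 rng.normal(0, 1, n))
        if p1 >= alpha:
            continue  # only significant originals get "replicated"
        originals += 1
        # Replication: identical design, new independent samples.
        t2, p2 = stats.ttest_ind(rng.normal(true_effect, 1, n),
                                 rng.normal(0, 1, n))
        hits += (p2 < alpha) and (np.sign(t1) == np.sign(t2))
    return hits / originals

print(f"Replication rate: {replication_rate():.2f}")
```

With these made-up numbers the design is underpowered, so even exact repeats of studies of a real effect often fail to “replicate” by the poll’s definition. Dial the true effect or the sample size up and the replication rate climbs accordingly.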
Perhaps what’s going on here is that many respondents were just focusing on studies in their own subfield of ecology? So the pessimists work in less-replicable subfields, whereas the optimists work in more-replicable ones? Now I’m kicking myself for not asking respondents to say what subfield they work in.
My second question asked what fraction of ecological studies are unlikely to replicate, defined as having less than a 33% chance of replicating. Here’s a histogram of the responses:
As with the previous question, the responses were all over the map, again with a hint of trimodality.
I worry a little bit that the responses to this second question were all over the map in part because the question wasn’t sufficiently clear. A minority of respondents’ answers to this question were inconsistent with their answers to the first question. If you think that the typical ecological study has (say) an 80% chance of replicating, you can’t also think that >50% of ecological studies have less than a 33% chance of replicating. That’s mathematically impossible: if the typical (i.e. median) study has an 80% chance of replicating, then at least half of all studies have an 80% or better chance, which leaves at most half that could fall below 33%.
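If you want to see that constraint numerically, here’s a toy check; the shape of the distribution below is entirely made up, and only its median matters:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up distribution of per-study replication probabilities with
# median 0.8; the exact shape is arbitrary.
probs = np.concatenate([
    rng.uniform(0.8, 1.0, 50_000),  # half the studies at or above the median
    rng.uniform(0.0, 0.8, 50_000),  # half below it
])

print(f"median: {np.median(probs):.2f}")                # 0.80
print(f"fraction < 0.33: {np.mean(probs < 0.33):.2f}")  # can never exceed 0.50
```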
The third question asked respondents about various reasons why an ecological study might fail to replicate. For each reason for replication failure, respondents were asked if it’s involved in many or all failures to replicate, some, or few/none. Here are the responses:
According to the respondents, there are four primary reasons for replication failure in ecology, with the most common being that the original result can only be obtained under specific conditions or in a specific, unusual study system. That seems right to me. It’s what meta-analysts call “heterogeneity”, and we know it’s a big deal (Senior et al. 2016). Following closely behind heterogeneity, in the view of the poll respondents, are lack of power, p-hacking, and publication bias. Many fewer respondents see “bad luck” as a common reason why ecological studies fail to replicate. And few respondents think that fraud is involved in an appreciable fraction of failures to replicate.
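To see why heterogeneity depresses replication even without p-hacking or publication bias, here’s a sketch extending the earlier simulation: each study now draws its own true effect, with tau playing the role of the between-study standard deviation in a random-effects model. As before, all the numbers are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def replication_rate(mean_effect=0.3, tau=0.0, n=100, n_pairs=20_000, alpha=0.05):
    """Replication rate under a random-effects model: each study draws its
    own true effect from Normal(mean_effect, tau), where tau is the
    between-study heterogeneity. All numbers are illustrative."""
    hits = originals = 0
    for _ in range(n_pairs):
        d1, d2 = rng.normal(mean_effect, tau, 2)  # each study's own true effect
        t1, p1 = stats.ttest_ind(rng.normal(d1, 1, n), rng.normal(0, 1, n))
        if p1 >= alpha:
            continue  # only significant originals get "replicated"
        originals += 1
        t2, p2 = stats.ttest_ind(rng.normal(d2, 1, n), rng.normal(0, 1, n))
        hits += (p2 < alpha) and (np.sign(t1) == np.sign(t2))
    return hits / originals

for tau in (0.0, 0.2, 0.4):
    print(f"tau = {tau:.1f}: replication rate = {replication_rate(tau=tau):.2f}")
```

With these made-up numbers, cranking up tau drags the replication rate well below the no-heterogeneity baseline, even though every study is honestly conducted and analyzed. The replication isn’t failing because anyone did anything wrong; it’s failing because the two studies are estimating genuinely different effects.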
My own answers to the third question would’ve broadly agreed with those of the bulk of the respondents, had I taken my own poll. Except that I don’t think publication bias is much of a thing. Ecological meta-analyses routinely test for it and rarely find it.
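For the curious, one common such test is Egger’s regression for funnel-plot asymmetry: regress each study’s standardized effect on its precision and ask whether the intercept differs from zero. Here’s a minimal sketch on made-up data; a real meta-analysis would plug in its actual effect sizes and standard errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Made-up meta-analysis: 40 studies, true effect 0.2, no publication bias
# built in, so the test should usually come back non-significant.
se = rng.uniform(0.05, 0.5, 40)  # per-study standard errors
effect = rng.normal(0.2, se)     # per-study effect estimates

# Egger's test: regress standardized effect (effect/SE) on precision (1/SE).
# An intercept far from zero indicates funnel-plot asymmetry, which is
# consistent with (though not proof of) publication bias.
res = stats.linregress(1 / se, effect / se)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.2f}")
```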
There weren’t any obvious associations between respondents’ answers to the third question and their answers to the first two questions. For instance, the rare respondents who think fraud is involved in some or many failures to replicate were all over the map in terms of how likely they think it is that the typical ecological study would replicate.
Bottom line: ecologists as a group have little idea how replicable ecology is, or why. What, if anything, should we do to remedy that?