The “decline effect” refers to the observation that many effect sizes that scientists study seem to decline over time. In ecology, Jennions & Møller (2002) found that decline effects were the rule in ecological meta-analyses. That is, if you plotted the magnitudes of the effect size estimates included in a meta-analysis vs. the years in which those estimates were published, you typically got a negative correlation. The first published effect size estimates tended to be larger than more recent estimates. This is troubling because it suggests publication biases that, in the worst cases, might lead to wild goose chases. Everybody gets excited about a massive initial estimate of some effect, jumps on the bandwagon, and it’s only years later, after much additional research, that everybody realizes that those initial estimates were unusually large (whether because of sampling error, or for some other reason).
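To make the pattern concrete, here's a minimal sketch of that correlation check, using made-up numbers (an illustration only; the published meta-analyses use formal meta-analytic methods, not this simple correlation):

```python
# Toy check for a decline effect: correlate |effect size| with publication year.
# Hypothetical numbers for illustration only.
import numpy as np
from scipy.stats import spearmanr

years = np.array([0, 1, 3, 4, 6, 8, 10, 12])          # 0 = oldest estimate
effect_sizes = np.array([1.9, 1.4, -1.1, 0.8, 0.7, -0.5, 0.4, 0.3])

# Use magnitudes, since a "big" effect can be big in either direction.
rho, p = spearmanr(years, np.abs(effect_sizes))
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")        # negative rho = decline effect
```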
Fortunately, my own more recent compilation of data indicates that decline effects are no longer common in ecology. Further, they’re no more common than their opposite, which you might call “incline effects”: cases in which the first published effect size estimates are small in magnitude compared to subsequent estimates.
But still, I have found some cases that might represent real decline effects or incline effects. “Real” meaning “not just due to sampling error, I don’t think.” I’m curious whether you can pick them out of a lineup. After all, if someone tells you that studies of phenomenon X exhibit a decline effect, you’ll probably be able to come up with a post-hoc rationalization as to why. Everything is obvious once you know the answer. It’s much harder to predict which meta-analyses will exhibit a decline effect, or an incline effect.
Below is a list of seven ecological meta-analyses. It includes the clearest-cut decline effects I could find, the clearest-cut incline effects I could find, and a couple of cases that exhibit neither a decline effect nor an incline effect. For each meta-analysis, I’ve provided a plot of the absolute values of effect size estimates vs. the years in which those estimates were published (with the oldest being coded as “year 0”). The plots are labeled A-G in their legends, and in case it’s not totally obvious, I’ve labeled each plot as exhibiting a decline effect, an incline effect, or neither. But I haven’t told you which plot goes with which meta-analysis–it’s your job to guess! There’s an anonymous poll at the end for you to record your guesses. Good luck!
The meta-analyses:
- Crawford et al. 2019 EcoLetts (plant-soil feedbacks)
- Winfree et al. 2009 Ecology (effect of anthropogenic disturbance on bee abundance)
- He et al. 2013 EcoLetts (stress gradient hypothesis)
- Gibson et al. 2011 Nature (effect of tropical forest disturbance/land conversion on biodiversity)
- Gange et al. 2019 New Phytol (effect of fungal “plant bodyguards” on insect herbivores)
- Koricheva et al. 2004 Am Nat (trade-offs among different anti-herbivore defenses in plants)
- Munguia-Rosas et al. 2011 EcoLetts (natural selection on plant flowering phenology)
[Plots A-G appear here, followed by the anonymous poll.]
More or less a random selection… I didn’t have a clue even for my own paper! Except that I knew we didn’t have a giant sample size. Took me a while to wrap my head around the graphs too, as the correlations were in the opposite direction to what I was expecting from what you wrote about incline/decline.
Thanks for guessing–it’s early days but this looks like my least popular poll ever!
Although it’s an interesting question, I think it’s a difficult one to get any kind of insight into just from looking at the graphs and knowing the titles of the studies. Sample size was my only real point of reference, because there’s no clear reason (to me at least) why effect sizes should change for some kinds of questions but not others.
“there’s no clear reason (to me at least) why effect sizes should change for some kinds of questions but not others.”
Hmm, interesting. This poll was inspired by showing an author of one of these meta-analyses the plot showing that the meta-analysis had a decline effect. That author said (paraphrasing) “I’m not surprised, I can totally imagine that the first published effect size estimates were big because that’s what people expected to find, and it’s only as time has gone on that we’ve realized the effect isn’t usually as big as that.”
It’ll be interesting to see what others think. To have that kind of insight one needs to know a field very well, so looking across diverse fields like this may be asking too much.
“To have that kind of insight one needs to know a field very well, so looking across diverse fields like this may be asking too much.”
I suspect you’re right. On the other hand, when Jeff C. commented about the strong decline over time in estimated effects of ocean acidification on fish behavior, my first reaction was “I’m not surprised.” On the third hand, maybe that’s just my brain pretending after the fact that it totally could’ve predicted that decline effect in advance?
Looking at the list of meta-analyses in the post, I’m not sure which ones you’d pick if you were trying to use the case Jeff C. describes as a “search image”. The meta-analyses in the post include a few that have to do with conservation and global change (Winfree et al., Gibson et al., Munguia-Rosas et al.). But those meta-analyses all got at least some of their effect size estimates from studies that weren’t originally focused on estimating the effect size in question.
Jeremy – thanks a bunch for the recent discussion on the Decline Effect. We found a strong decline effect recently for ocean acidification effects on fish behaviour and demonstrated it to be largely related to publication bias (selective publishing) and low sample sizes.
Twitter thread here: https://twitter.com/biolumiJEFFence/status/1306606123932409856
Preprint here: https://ecoevorxiv.org/k9dby/
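To make the selective-publishing mechanism concrete, here's a toy simulation of it (a simplified illustration, not the model fitted in the preprint): suppose the true effect is small, early studies are small, and only "significant" early results get published.

```python
# Toy simulation: publication bias + low early sample sizes -> decline effect.
# Simplified illustration; not the analysis from the preprint.
import numpy as np

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.1

def run_study(n):
    """One two-group study with n per group; returns Cohen's d and a crude significance flag."""
    treat = rng.normal(TRUE_EFFECT, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    d = (treat.mean() - ctrl.mean()) / np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    se = np.sqrt(2 / n)               # rough standard error of d
    return d, abs(d) > 1.96 * se

# Era 1: small studies (n = 10 per group), only significant results get published.
early = [d for d, sig in (run_study(10) for _ in range(200)) if sig]
# Era 2: larger studies (n = 100 per group), everything gets published.
late = [run_study(100)[0] for _ in range(200)]

print(f"mean |d|, early published studies: {np.mean(np.abs(early)):.2f}")
print(f"mean |d|, later studies:           {np.mean(np.abs(late)):.2f}")
```

The early published estimates come out several times larger than the true effect, and the apparent effect "declines" as bigger, unfiltered studies accumulate.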
I’m with Jeff O. here – really hard to place the trends with the topics!
Thanks for the preprint link Jeff C.
One tentative thought I had was that cases in which you’d expect a decline effect are cases like the one you refer to: cases in which many people have a strong expectation that they’ll find a big effect size, and in which they have to run a study specifically designed to estimate that effect. But just eyeballing the meta-analyses I’ve looked at so far (over 260 and counting…), I don’t see any obvious trend like that. The few real decline effects aren’t all cases in which authors expected big effect sizes. And meta-analyses of hypothesis-testing experiments don’t look any more likely to exhibit decline effects than meta-analyses of any other sort of study.
But those are all just casual impressions; I haven’t actually checked any of this properly.
@Jeff C: just eyeballing your preprint, it does look to me like you’ve found one of the strongest and clearest-cut decline effects in the history of ecology.
Slight caveat that the number of effect size estimates in your compilation isn’t huge. Most of the really strong decline effects (and incline effects) in ecology come from meta-analyses with only a few effect sizes. If enough ecologists do enough meta-analyses, and some of those meta-analyses only include a few effect sizes, you’re bound to get some cases of seemingly-strong decline effects and incline effects just by chance, even if the order in which effect size estimates are published truly is uncorrelated with their magnitudes.
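Here's a quick sketch of that "by chance" point (a toy illustration, not the analysis from our compilation):

```python
# With only a handful of effect sizes per meta-analysis, strong decline/incline
# correlations arise by chance even when publication order is unrelated to magnitude.
import numpy as np

rng = np.random.default_rng(1)
n_meta, k = 1000, 6                    # 1000 null meta-analyses, 6 effect sizes each

strong = 0
for _ in range(n_meta):
    years = np.arange(k)
    magnitudes = np.abs(rng.normal(0.5, 0.3, k))   # order-independent by construction
    r = np.corrcoef(years, magnitudes)[0, 1]
    if abs(r) > 0.7:                   # call |r| > 0.7 a "strong" decline or incline
        strong += 1

print(f"{strong / n_meta:.0%} of null meta-analyses show |r| > 0.7 by chance")
```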
Thanks Jeremy! Yes, we think this is one of the clearer textbook examples of the decline effect in ecology. Good point re. inclines/declines by chance. Though I do think that, in our case, the decline effect is real and contend that effect sizes in early studies are wildly overestimated.
Just took a look at your slides as well. Super interesting project and I’m keen to see the paper when it comes out!
If a decline effect is a signal of an effect that is, in actuality, weak or absent, I would expect to find decline effects in cases where my prior for the hypothesis is low. My reluctance to take this survey comes from the fact that I don’t have enough information to have strong priors from just reading the one-line descriptions of those meta-analyses. Aside from one of these papers, which I happened to look at just yesterday, I really don’t have enough information to do anything more than blindly guess.
On a related note, your finding that decline effects are rare in ecological meta-analyses suggests to me some combination of:
1. in many sub-disciplines of ecology, researchers often test hypotheses that are relatively likely to be true (compared to, for instance, social psychology)
or
2. publication bias is low, so ecological studies that invalidate new hypotheses are readily published early on.
If #1 is true, this could be because ecologists have robust theory that generates robust hypotheses, or it could be that we are not very bold in our hypothesizing. However, the lack of boldness need not be true if #2 is also correct. That said, I’m not convinced that #2 is correct (though it may be).
By the way, I found your slides from your decline effect study interesting – when is the paper coming out? I haven’t missed it, have I?
Hi Tim, interesting thoughts. In ecology, I suspect that both 1 and 2 often hold.
There’s no paper yet from my ESA talk–we’re still compiling data! Up to 275 meta-analyses in the database now. Should be done compiling data in the next week or two, then the paper should be quick to write. Results haven’t changed much from what’s in the talk. Though we can now confirm with our own data that decline effects used to be fairly common in ecological meta-analyses back in the ’90s and oughts, but they aren’t any more. Still trying to work out why that might be. It may have something to do with long-term improvements in the precision of our effect size estimates, but I haven’t nailed that part of the story down yet.
Basically, I’m trying to create a real-life version of this cartoon: https://xkcd.com/1447/ 🙂
What is the difference between the decline effect and regression toward the mean?
Regression to the mean could be one contributing cause of the decline effect. But there could be other causes too.
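To make the distinction concrete with a toy example (my own illustration, assuming every study estimates the same true effect and the flashiest early estimate gets published first):

```python
# Regression to the mean alone can mimic a decline effect: if the largest of
# several early estimates is the one that gets published first, later unbiased
# estimates will look smaller on average.
import numpy as np

rng = np.random.default_rng(7)
TRUE_EFFECT, N_TRIALS = 0.5, 5000

gaps = []
for _ in range(N_TRIALS):
    estimates = rng.normal(TRUE_EFFECT, 0.3, 10)   # ten studies of the same effect
    first = estimates.max()                        # biggest estimate published first
    rest = np.delete(estimates, estimates.argmax()).mean()
    gaps.append(first - rest)

print(f"first-published estimate exceeds later ones by {np.mean(gaps):.2f} on average")
```

That gap is pure regression to the mean after selecting on an extreme estimate; other causes (changes in methods, or genuine change in the effect over time) would act on top of it.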