
A few months ago I posted about my side project asking whether there’s a systemic tendency for published ecological effect size estimates to decline in magnitude over time (the so-called “decline effect”). Answering that question involved compiling every published effect size estimate I could get my hands on, from every ecological meta-analysis I could find.*
The data compilation is done. I now have a 14 MB csv file containing over 114,000 effect size estimates from 470 ecological meta-analyses. That is close to (but not quite) a census of all ecological meta-analyses and their effect sizes published from 1991 through the spring of 2020. I also have the sampling variance for every effect size estimate, plus some other bits of information: the publication year of every meta-analysis, the publication year of every effect size estimate, an identifier for the original paper in which every effect size estimate was originally published, a (crude) descriptor of the response variable for each meta-analysis (e.g., “abundance”, “diversity”, “various”…), and the effect size measure used (usually Hedges’ d or g, the log-transformed response ratio, or Fisher’s z-transformed correlation coefficient).
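(To make the structure concrete: the file is one row per effect size estimate. Here’s a minimal Python sketch of how one might load and summarize it. The file name and column names are hypothetical placeholders, not my actual headers.)

```python
import pandas as pd

# Hypothetical file and column names, purely for illustration.
df = pd.read_csv("effect_sizes.csv")

# One row per effect size estimate; summarize at the meta-analysis level.
per_ma = df.groupby("meta_analysis_id").agg(
    n_effect_sizes=("effect_size", "size"),
    n_studies=("study_id", "nunique"),
    ma_pub_year=("ma_pub_year", "first"),
)
print(per_ma["n_effect_sizes"].describe())  # size of the typical meta-analysis
```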
Now what?
I feel like one ought to be able to use this database to ask some interesting descriptive/exploratory questions about both ecology and ecologists (aside: see this great old post of Brian’s on the value of exploratory statistics). But this is a new direction for me, and so I’m not 100% sure what those descriptive/exploratory questions should be, beyond the question about the decline effect described above. So I’m open to suggestions! Here are some ideas I’m mulling over:
- Really basic descriptive questions like “How many effect size estimates does the typical ecological meta-analysis contain?” and “Do more recent meta-analyses tend to include more effect size estimates?” But I kind of feel like these really basic descriptive questions aren’t that interesting on their own (?)
- Do ecologists mostly study “small” effects, or “big” effects, or what? And did we pick all the low-hanging fruit years ago, so that as time goes on we’re studying smaller and smaller effects? I actually doubt that we’ve been studying smaller and smaller effects as time has gone on–but now I can find out!
- Questions about “heterogeneity”. “Heterogeneity” refers to variation among effect size estimates attributable to sources other than sampling error. For instance, studies conducted at different places or times might tend to report different effect sizes, because the “true” mean effect size varies over space and time. We already know that most variation in effect size estimates in ecology is due to heterogeneity, not sampling error (Senior et al. 2016). But I could ask follow-up questions like “How much of that heterogeneity represents heterogeneity among studies, vs. heterogeneity among different effect size estimates reported in the same study?” And “How does the balance of within- vs. among-study heterogeneity tend to change over time as more and more studies are published?” Just offhand, I might expect among-study heterogeneity to increase over time as more and more studies of a given effect are published, because as time goes on, ecologists will study effect X in a greater range of study systems, using a greater range of methods.
- Has the sampling variance of ecological effect size estimates generally decreased (i.e., has precision improved) over time?
- Is there ubiquitous publication bias in ecology? That is, if you ran Egger’s regression, or some other test for publication bias, on every published meta-analysis in ecology, what would the distribution of results look like? Would a large fraction of meta-analyses exhibit evidence of publication bias? That question’s been asked before (e.g., Barto & Rillig 2011), using much smaller compilations of older meta-analyses, so it seems worth revisiting. This seems like an interesting question to me, but also one that could be hard to get at. It’s my impression that standard formal statistical tests for publication bias tend to lack power. And you would need formal statistical tests to address this question at a systemic level, because come on, you can’t visually inspect 470 funnel plots. Any suggestions on how best to get at this? (A brute-force first pass is sketched just after this list.)
- How do the estimated mean effect size, and its standard error, tend to change over time as more and more effect size estimates are published? How long does it typically take for ecology to converge on a stable, precise estimated mean for effect X? That is, how long does it take after ecologists first start studying an effect for them to be able to say “Ok, now we have a good handle on how big this effect typically is”? That seems like a super-interesting question to me. But I’m not sure exactly how to go about putting numbers on it. How would you do it?
- Other questions I haven’t thought of? (That hopefully would not require compiling yet more data, because this was already a ton of work for me and my very able undergrad assistant…)
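Re: the publication bias question above, here’s a minimal sketch of the brute-force approach I have in mind: run Egger’s regression within every meta-analysis, then look at the distribution of results. Python again, with the same hypothetical file and column names; treat it as a sketch, not my actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("effect_sizes.csv")  # hypothetical file and column names

def egger_test(effects, variances):
    """Egger's regression test for funnel plot asymmetry.

    Regress standardized effects (effect / SE) on precision (1 / SE);
    absent small-study effects, the intercept should be near zero.
    """
    se = np.sqrt(np.asarray(variances, dtype=float))
    if len(se) < 3:  # too few estimates to fit the regression sensibly
        return pd.Series({"intercept": np.nan, "p_value": np.nan})
    z = np.asarray(effects, dtype=float) / se
    X = sm.add_constant(1.0 / se)  # column of 1s (intercept) plus precision
    fit = sm.OLS(z, X).fit()
    return pd.Series({"intercept": fit.params[0], "p_value": fit.pvalues[0]})

# One test per meta-analysis, then look at the distribution of outcomes.
results = df.groupby("meta_analysis_id").apply(
    lambda g: egger_test(g["effect_size"], g["sampling_variance"])
)
print((results["p_value"] < 0.05).mean())  # fraction flagged as asymmetric
```

The obvious caveat: with heterogeneity as pervasive as it is in ecology, a significant Egger intercept isn’t conclusive evidence of publication bias, and a non-significant one isn’t conclusive evidence of its absence.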
Looking forward to your feedback.**
*The answer seems to be “no”.
**Especially if you think that none of this is all that interesting!
Experimental vs. observational studies?
To get at that, we’d have to go back to the meta-analyses and record whether they considered observational data, experimental data, or both. That’d be work, but it’d be doable.
Wow, this could keep you busy for a long time, nice work! Two things immediately spring to mind:
1. Where are the gaps in meta-analysis? What should/could have been done but hasn’t?
2. Given that you have the publication year of every effect size estimate, are meta-analyses getting worse at picking up the older literature? Or better because more literature is digitised now?
“2. Given that you have the publication year of every effect size estimate, are meta-analyses getting worse at picking up the older literature? Or better because more literature is digitised now?”
I’ve looked at that a little bit. It’s a little hard to say. Meta-analyses these days range much more widely in terms of sample size than meta-analyses did back in the 1990s. These days, you sometimes see huge meta-analyses, based on hundreds of studies and 1000+ effect sizes. And sometimes you see very small meta-analyses based on just a handful of studies and effect sizes. It turns out that the timespan of studies covered by a meta-analysis has a (noisy) positive correlation with the number of studies it includes. So are (some) meta-analyses these days getting worse at picking up older literature? Or are they just addressing questions for which there are only a few published studies, all of them fairly recent? I don’t know.
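(For concreteness, the correlation I mean is just this, in Python, with hypothetical file and column names:)

```python
import pandas as pd

df = pd.read_csv("effect_sizes.csv")  # hypothetical file and column names

# Per meta-analysis: timespan of the primary literature vs. number of studies.
span = df.groupby("meta_analysis_id").agg(
    timespan=("es_pub_year", lambda y: y.max() - y.min()),
    n_studies=("study_id", "nunique"),
)
print(span["timespan"].corr(span["n_studies"], method="spearman"))
```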
Re: the gaps in the meta-analytical literature, I have less idea about that than you might think, for two reasons. First, I have no idea what meta-analyses *should* be done but haven’t been yet; I only know what meta-analyses have been done. Second, I don’t really know that much about the meta-analyses that have been done, because I (and my assistant) have just been raiding them to get the information we need. We only have very brief notes on the topic of each meta-analysis.
My casual impression is that there are meta-analyses on *lots* of topics! At no point have I ever found myself saying things like “Huh, why are there no meta-analyses of plants?” or “Weird that nobody ever meta-analyzes experimental data.” Because obviously, there are lots of meta-analyses of plants, experiments, etc. But for me to have noticed a gap in the meta-analysis literature, it would have had to be some huge, super-obvious gap like “no meta-analyses of plants”. I think it’s impossible for the ecological meta-analysis literature to have a gap that big. Because if it had a gap that big, someone would’ve closed it already. 🙂 #no$20billsontheground
Make, say, 10K effect sizes publicly available, and ask students to write a short report about interesting patterns they have found. These findings can then be checked against the full dataset.
Ooh, interesting idea–split the dataset into a portion to be used for hypothesis-generation, and a portion to be used for hypothesis testing. Will have to think about how best to do that. I think you’d want to do it at the level of the meta-analyses. Make data from X randomly-chosen meta-analyses available for hypothesis generation, and hold back the remaining 470-X meta-analyses for testing those hypotheses.
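A minimal sketch of that split (hypothetical file and column names again). Sampling meta-analysis IDs, rather than individual effect sizes, keeps each meta-analysis entirely on one side of the split:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("effect_sizes.csv")  # hypothetical file and column names

rng = np.random.default_rng(seed=1)   # fix the seed so the split is reproducible
ma_ids = df["meta_analysis_id"].unique()
X = 50                                # however many meta-analyses to release
explore_ids = rng.choice(ma_ids, size=X, replace=False)

mask = df["meta_analysis_id"].isin(explore_ids)
explore = df[mask]    # released for hypothesis generation
confirm = df[~mask]   # held back for hypothesis testing
```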
To which: I sure hope so! 🙂
The less-flippant answer to that question is: any question we ask with this compilation will be a study of ecologists as well as ecology. That is, it’s a study of effect sizes and their sampling variances as recorded in the published literature. The numbers that get recorded in the literature reflect all sorts of choices by both the original investigators, and the meta-analysts. So I think the best questions to ask of this dataset are questions that you can answer without having to second-guess the choices of either the original investigators or the meta-analysts.
The dataset does contain a couple of Hedges’ d values >15,000. I do wonder whether those values could possibly be correct. But it’s just 2 effect sizes out of over 114,000, in just one meta-analysis out of 470. So I doubt that any of my results will be affected even if those two Hedges’ d values are mistaken.
Dammit, I forgot about this cartoon! I wish I’d linked to it in the post. Because yeah, it me. 🙂
Looking forward to writing a paper on this dataset, so that I can become the first ecologist to cite Quine (1960) on “semantic ascent” (https://en.wikipedia.org/wiki/Word_and_Object#Semantic_ascent) 🙂
“Semantic ascent” is when, instead of talking about things, we talk about *how we talk about things*. Analogously, instead of studying ecology, I’m studying how ecologists study ecology.
Hey Jeremy. Though I’d love to, I’m a bit too busy at the moment to really engage with the science here (homeschool insanity, teaching, preparing for a big defense). But I saw this on twitter and, knowing you don’t obsess over twitter like some of us, thought I’d drop by to say:
1) this seems like a cool treasure trove of data to ask scientific questions, and questions about ‘science’ and ‘scientists’.
2) if you come up with some cool ideas that seem focused on the science of synthesis (i.e., a meta-analysis of meta-analyses, kind of like Hillebrand et al.’s recent paper), I’d be interested for you to think about submitting to Ecology Letters as a Reviews and Synthesis piece if it seems to be heading in that direction. Let me know.
Thanks for reaching out Jon, appreciate the positive feedback.
Now cranking through cumulative random effects meta-analyses for every one of the 470 meta-analyses, so I can quantify within- and among-study heterogeneity and how they changed over time. I should’ve kept track of how many computing hours this is taking, because it’s going to take a hilariously long time. To do a cumulative meta-analysis, you do a meta-analysis on the first two published studies, then add in the third study and redo the meta-analysis, and so on. So I’m basically doing almost as many meta-analyses as there are studies in this dataset. There are over 22,000 studies in this dataset.
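For anyone curious about the bookkeeping, here’s a stripped-down Python sketch of a cumulative random-effects meta-analysis. It uses the DerSimonian–Laird estimator for concreteness (my actual analyses may differ), and it treats every effect size as independent, so it ignores the within- vs. among-study structure that the real analyses have to model:

```python
import numpy as np

def dl_random_effects(y, v):
    """DerSimonian-Laird random-effects pooled mean, its SE, and tau^2."""
    y, v = np.asarray(y, dtype=float), np.asarray(v, dtype=float)
    w = 1.0 / v                                # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)    # among-estimate variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

def cumulative_meta(y, v, years):
    """Redo the pooled estimate as the literature accumulates, in publication order."""
    order = np.argsort(years)
    y, v = np.asarray(y, dtype=float)[order], np.asarray(v, dtype=float)[order]
    # Start with the first two estimates, then add one at a time.
    return [dl_random_effects(y[: k + 1], v[: k + 1]) for k in range(1, len(y))]
```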
I recently saw a paper in Ecology based on a cumulative meta-analysis. As in, *one* cumulative meta-analysis. Without wanting to criticize that paper at all (I thought it was an interesting paper), I’m pleased with myself to be prepping a paper that will include *470* cumulative meta-analyses. I feel like Crocodile Dundee in this scene:
[looks at small knife] “That’s not a knife. This is a knife.” [pulls a machete out of his belt]
Analogously:
[looks at paper based on one cumulative meta-analysis] “That’s not a cumulative meta-analysis. This is a cumulative meta-analysis.” [writes paper containing 470 cumulative meta-analyses]
🙂
In seriousness, it’s actually an apples-to-oranges comparison. If you do a single cumulative meta-analysis, you can really dig into why the cumulative mean effect size estimate changed over time in the way that it did. You can tell a story about how the literature on that topic has developed over time. Whereas if you do 470 cumulative meta-analyses, you can’t say much of anything about why any one of them came out the way it did. Seeing the forest requires ignoring the individual trees. The two kinds of paper complement one another. Neither is a substitute for the other.
p.s. As I hope is obvious, my linking to that one line from Crocodile Dundee for purposes of making a silly analogy doesn’t constitute an endorsement of anything else about the movie. I actually don’t remember anything about the movie besides that one line.
Lots of fun ahead!
My $0.02 is the directionality of the effects of X on Y. Meta-analyses are about estimating a weighted average effect size. That tells us something about the magnitude of the effect, on average. But it is just as interesting to ask: how likely am I to observe a positive or a negative effect of X on Y? My guess is that large average effect sizes are not necessarily correlated with highly consistent directionality of effect sizes. From an applied point of view, it might be better to have high directionality.
It certainly would be straightforward to ask about signs of effects, as opposed to means and the associated standard errors.
Just offhand, I’m sure there are some cases in which the mean effect size is (say) very positive, and we have high statistical confidence that the “true” mean effect size is in fact positive, but it’s also common to observe negative effect sizes. I’m not sure how frequent such cases are among all 470 meta-analyses in our compilation.
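E.g., something as simple as this (hypothetical file and column names) would give a first look:

```python
import pandas as pd

df = pd.read_csv("effect_sizes.csv")  # hypothetical file and column names

# Fraction of positive effect sizes within each meta-analysis.
frac_positive = df.groupby("meta_analysis_id")["effect_size"].apply(
    lambda es: (es > 0).mean()
)
# Near 1 (or 0): highly directional; near 0.5: the sign is close to a coin flip.
print(frac_positive.describe())
```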
I know this is the opposite of what you said about not wanting to compile more data, but I think it would be really interesting to see whether the systems explored, and even more so the regions of the world those systems are in, have changed over time. Are we going to new places to study new taxa, or is everyone looking at the same species in the global north?
Maybe this could be partially done by screening key words and titles?
Definitely an interesting question! But yes, one that would require going back and compiling a lot more data.
I have a new undergrad independent study student who will be compiling some data to look at that first question.
I suspect there won’t be any relationship between how often a meta-analysis is cited and how many years it covers or how large the grand mean effect size is. But I could be wrong!
Cool data! If you had spatial data, even if relatively coarse, I’d be curious about whether there were effect size gradients through space and exploring how those gradients varied with life history traits, environments, species richness, etc.
Yeah, sorry, even too-crude-to-be-useful data on that would be too much work to compile, I think.
That’s what I’m trying to figure out!
Pingback: How big does an ecological meta-analysis have to be to be “big enough”? Take our poll! | Dynamic Ecology
Pingback: What the heck is up with the many ecological meta-analyses that have inverted funnel plots? | Dynamic Ecology
Pingback: Why do ecologists publish so many more meta-analyses than evolutionary biologists? | Dynamic Ecology
Pingback: Data on the life histories of ecological research programs (and their meta-analyses) | Dynamic Ecology
Pingback: I think we need a “shrink ray” for estimated mean effect sizes in ecological meta-analyses, but I’m not sure how to build one. Can you help? | Dynamic Ecology