Does publication of a meta-analysis encourage or discourage further studies?

Weird question: is there any data on how publication of a meta-analysis affects the rate at which subsequent researchers publish effect size estimates that could have been included in that meta-analysis? For instance, after someone publishes a meta-analysis of the effect of [thing] on [other thing], do subsequent researchers do fewer studies of the effect of [thing] on [other thing] than they otherwise would have? Perhaps thinking that we now know the answer, so it’s time to move on to studying something else.

Or maybe not. After all, lots of factors affect individual researcher decisions on what studies to conduct, besides “has this topic already been meta-analyzed?” And there certainly are cases in ecology in which a meta-analysis was followed years later by a second meta-analysis on the same topic, incorporating new studies that were published after the first meta-analysis.

It would be hard to prove causality here. For instance, if a meta-analysis is published, and subsequent researchers publish few studies of that topic, well, maybe that’s because interest in the topic was starting to wind down anyway. It didn’t wind down because the meta-analysis was published.

Anyone know of any data on this?

I ask about this because I’m interested in what drives the collective waxing and waning of research effort on a given topic.


12 thoughts on “Does publication of a meta-analysis encourage or discourage further studies?”

  1. Interesting question I’ve wondered about too. I certainly don’t know of data that speak to the question, but based on the thought process involved in developing new projects maybe we can speculate/hypothesize. I would say a meta-analysis with many studies discourages new studies whose main goal is to estimate that same effect size in a system well-represented in the meta-analysis. But by highlighting what we don’t know (e.g., what causes variation in the effect sizes?; are effect sizes the same in underrepresented systems?; is the same true of alternative outcomes of the same effect?) the meta-analysis encourages new studies along those lines, especially if the meta-analysis was high-profile or controversial for some reason. The new studies do produce effect sizes for the next meta-analysis, but they also go in some new directions. (A somewhat cynical take would be that the meta-analysis just prompts people to calculate a few different things from same-old data and pitch it as something new.)

  2. A search of Twitter reveals Julia Koricheva’s answer; I was hoping to hear from her:

    If Julia hasn’t seen any data on this, then I assume the data don’t exist. If these data existed, Julia would know. 🙂

  3. Alfredo Sanchez-Tojar’s answer:

    Interesting remark that high levels of among-study heterogeneity provide a strong reason to keep collecting effect size estimates even after publication of a meta-analysis.

  4. Pedro Peres-Neto speculates:

    I’m not sure I buy that. I doubt that high variance around the meta-analysis grand mean does much to encourage further research. And I kind of feel like mean effect sizes far from zero *and* close to zero would both discourage further research (insofar as they had any effect at all on further research…). If the mean effect size is far from zero, then I think researchers are going to feel like “we know the answer now”. Whereas if the mean effect size is close to zero, researchers are going to think “there’s nothing here to discover, the effect is too weak or noisy to be worth studying”. But of course, I’m just speculating here, so maybe I’m totally wrong!

    • Although one obvious exception to my speculation is if a zero mean effect size in a meta-analysis really clashes with people’s expectations. That’s a case where a finding of zero effect might spur a lot of further research:

      • I had Mark Vellend et al.’s study in mind when I wrote my original comment (“…we show that mean temporal change in species diversity over periods of 5–261 y is not different from zero…”). This result not only clashes with people’s expectations, but there is variation around the zero effect size in many cases. I don’t think that the search for changes in “local-scale plant biodiversity over time” will decelerate as a result of their findings. I’m on the speculation side here too, though I still think that variance around effect sizes is one factor to consider on the question of how meta-analyses influence (discourage/encourage) future research…only after meta-meta-analyses will we know whether variance does have an influence.

  5. Maybe not what you are looking for, Jeremy, but as clinical trials are usually required to be registered before they start, could you look at the number of trials registered (as a proxy for interest in the subject) in the years before and after landmark meta-analyses are published?

    Appreciate it’s not ecology, but could be a way of getting some data to test your ideas?

    • Good suggestion, thanks.

      And it wouldn’t be too hard to get data for cases in which one meta-analysis was followed years later by a second meta-analysis on the same topic. Such cases are a biased sample, of course. But perhaps they could be used to put a rough upper bound on the rate at which researchers continue studying an effect after it’s been meta-analyzed.

  6. Hi Jeremy,

    I have a question that is related to what you are asking here, but not to meta-analyses.

    In Orr 2005 (Nature Reviews Genetics), he says, ‘Ironically, although Fisher offered the first sensible model of adaptation, the sole question he asked of it suppressed all further interest in the model. His answer after all, suggested that micromutationism is plausible and that one could, therefore, study adaptation through infinitesimally based quantitative genetics.’ with reference to Fisher’s geometric model.

    I’m not entirely sure I buy this argument, but it’s interesting. Do you think there are examples where asking a particular kind of question actually dissuades further research on that topic? For example, if (as Orr might say) the answer to the question of topic A implies a justification of the current “bandwagon” of topic B, and then no further work is done on topic A, maybe because it is viewed only through the narrow lens of using it to justify another topic. Or, it might be that the results are so non-surprising (which might be surprising in itself!) that nobody sees any value in further pursuing this work. And if this is true, then is it also true that when a researcher begins work on some relatively untouched and unexplored topic, they also have a responsibility to frame their work in a context that does justice to the topic, or to ask audacious and risky questions?


    • That’s a very interesting question Shyam!

      Off the top of my head, I can’t think of any other cases exactly like the one Orr (2005) describes. I can think of one case that’s like it in some ways and unlike it in others: the mid-90s West et al. Science paper explaining quarter power allometric scaling. In the minds of many ecologists, the West et al. paper justified further work on the ecological implications of allometric scaling. It made that further work feel “mechanistic” in many readers’ minds, even though it wasn’t really. None of that further work actually required having a mechanistic explanation for quarter power allometric scaling of metabolic rate and body size. So in that way, West et al. was like Fisher’s geometric model. But the West et al. paper was very unlike the case of Fisher’s geometric model in that lots of people studied the West et al. model, proposed alternative mechanistic models to explain the same phenomena, etc.

      Turning to your broader questions, I wouldn’t say that Fisher’s geometric model actually dissuaded further interest in the genetics of adaptation. Fisher asked whether the infinitesimal approximations of quantitative genetics could be justified by some plausible underlying model of mutation, and the answer was “yes”. I don’t think it’s fair to blame Fisher or others for not immediately using the same model to also answer other questions about the genetics of adaptation.

      Put another way, I think it’s fine for a researcher to set out to answer a specific question, and answer it. I don’t think you have a responsibility as an investigator to try to ask and answer all questions that could possibly be asked using a particular model or approach! And I don’t think that an investigator has a responsibility to try to frame their own questions in such a way as to prompt others to ask further questions. What other people will find interesting, inspiring, or fruitful about your own work is mostly unpredictable. As a researcher, you can’t try to dictate the questions that other researchers are inspired to ask by your work. For instance, Connell (1961) is a textbook classic, hugely influential for the field experiment it reports. But the field experiment for which it’s remembered today is only one tiny part of the paper, and not at all the main focus of the paper. I doubt that Joe Connell could have predicted just how influential that one bit of his paper would be, or that he could’ve written his paper differently so as to encourage readers to pick up some other bit of his paper and run with it. See here for further discussion:

      • Thanks for the link!

        I agree with your perspective about not holding researchers accountable for how their work inspires/guides others, because it is mostly unpredictable. In addition, even if it were predictable (for example, even if we knew from studies that framing questions in a particular way for a new topic is correlated with more subsequent research done on that new topic), I’m not sure that we should do this. Every researcher has their own motivations for doing the work they do, and this creates diversity in the scientific work and philosophy of the academic community, which should be valued. Even if there exist a few methods of writing or approaching a topic that can lead to a predictable response in terms of further research, there will be wide disagreement on whether this response is what we should be working towards as a community. Of course, there can be various different arguments, debates and calls for doing research in a particular way, and average and popular opinions and methods may shift, but this is done by choice. Framing it as an expectation, or designating one particular kind of method as “responsible”, might seriously hinder that personal freedom of motivation.

        Although of course, there exist many other restrictions on this freedom. For example, if you are a student under a PI that rigidly follows a particular school of thought in their field, you may find it difficult to work on a particular topic, ask a particular set of questions or use a particular method. Or, if you do contrarian ecology, you may face resistance, misunderstanding and/or dismissal from the community at large (but with high potential reward). But none of these reasons have any “ethical” element to them, unlike holding researchers responsible for the work of others that builds off of their work – though there might also be disagreement there.

  7. Pingback: Data on the life histories of ecological research programs (and their meta-analyses) | Dynamic Ecology
