Video: most meta-analyses in ecology are too small (for most purposes)

As most of you already know, I had to pull out of this year’s virtual ESA meeting. To partially make up for pulling out, below is an extended video version of what I would’ve said.

Heads up #1: this is a full-length research talk (actually, it’s even a bit too long for that), not an ESA-length talk, so grab a drink and get comfortable. Or, you know, fast forward a lot. I didn’t bother to edit down the research talk I already had semi-prepared.

Heads up #2: Speaking of semi-prepared…it’s not an especially polished talk. In my own defense, I was aiming for a casual vibe, like if you were talking to me about ecology in a bar. But if I were giving the talk for a live audience (which I’d love to do, I really miss giving live talks…), I would definitely prepare a crisp, polished version.

Heads up #3: I had a beer during the talk. Like I said, I was aiming for a “talking about ecology in a bar” vibe. Also, you get to see my back yard. YMMV as to whether those are features or bugs. 🙂

The link to the talk:

13 thoughts on “Video: most meta-analyses in ecology are too small (for most purposes)”

  1. Thought these results were absolutely fascinating, Jeremy. I stayed with it right to the end and it was well worth it. What I really like about the results you’ve presented is that they address two key questions for ecologists – first, what do we know? And second, how do we know it?
    Your results suggest that we don’t know very much. That may be because it’s very hard to ‘know’ things in ecology. Or it could be that we aren’t doing ecology very well. I suspect it’s some of both. And your recommendations imply that you also think we aren’t doing ecology very well (as a group).
    You identify the fragmentation of ecological knowledge as one key problem with ecology. And this gets at what I see as a pervasive problem in ecology – very little concern with actually knowing what we know as a discipline. Maybe this is true in other disciplines as well but it is unquestionably true in ecology. It seems to me that all ecologists should become ‘philosophical’ Bayesians. And here I’m not recommending any particular inferential school of thought – I’m simply suggesting that we should see our next piece of research as leading to a ‘posterior’ estimate. This approach implies that we start with a clear (and perhaps quantitative) prior for any effect or set of effects that we’re interested in. And our research leads to some (usually tiny) change in that prior. This approach would force us to construct corridors to other related research.
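The "philosophical Bayesian" updating described above can be made concrete with a minimal sketch (all numbers hypothetical): treat the discipline's current estimate of an effect as a normal prior, and a new study as a noisy observation of the same effect. The conjugate normal-normal update then shows why a single study usually produces only a tiny change in a well-established prior.

```python
# Hypothetical sketch of prior -> posterior updating for an effect size.
# Conjugate normal-normal update: precisions (1/variance) add, and the
# posterior mean is the precision-weighted average of prior and study.
def update(prior_mean, prior_var, study_mean, study_var):
    post_var = 1.0 / (1.0 / prior_var + 1.0 / study_var)
    post_mean = post_var * (prior_mean / prior_var + study_mean / study_var)
    return post_mean, post_var

# A tight prior (variance 0.01) barely moves after one noisy study
# (variance 0.25), even though the study's point estimate is far away.
post_mean, post_var = update(prior_mean=0.5, prior_var=0.01,
                             study_mean=0.9, study_var=0.25)
print(round(post_mean, 3), round(post_var, 4))
```

The posterior mean shifts from 0.50 to only about 0.52: exactly the "usually tiny change in that prior" described above.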
    The other thing that occurred to me is that I think most ecological meta-analyses combine experimental studies (am I right about this?). Most ecological experiments test simple hypotheses – usually manipulating one or two putative causal mechanisms. If it’s true that most ecological phenomena have many drivers with interactions among those drivers, then inability to control any of the large number of (unknown) drivers could reverse the sign of an observed effect even if the ‘true’ effect (after controlling for all other variables) remained the same. So, perhaps ecological phenomena are too complex to be understood using many tests of simple hypotheses. That is, if the “true” model for plant growth is
Plant Growth = a*A + b*B + c*C + … + z*Z
    And we do a bunch of studies where we estimate effects for the model
    Plant Growth = a*A or Plant Growth = a*A + b*B
    Is this an effective approach to understanding what drives plant growth?

    Here I’ve only represented complexity as ‘many drivers’. But add in nonlinear relationships and interactions among drivers and initial states and I think it’s even more questionable that meta-analyses of simple hypotheses do much more than a random walk towards ‘true’ effect sizes.
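The sign-flip concern raised above can be checked with a minimal simulation (coefficients and the correlation between drivers are hypothetical, chosen just for the sketch): if the "true" model is y = a*A + b*B with both coefficients positive, but we regress y on A alone while A is negatively correlated with the omitted driver B, the estimated effect of A comes out with the wrong sign.

```python
# Hypothetical illustration of omitted-driver sign flips.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# True coefficients, both positive (chosen arbitrarily for the sketch).
a, b = 1.0, 3.0

# A and the omitted driver B are negatively correlated (r = -0.8).
A = rng.normal(size=n)
B = -0.8 * A + np.sqrt(1 - 0.8**2) * rng.normal(size=n)
y = a * A + b * B + rng.normal(size=n)

# The misspecified model y ~ A estimates a + b * cov(A, B) / var(A),
# i.e. roughly 1 + 3 * (-0.8) = -1.4: a sign flip, even though a = +1.
slope_A_only = np.cov(A, y)[0, 1] / np.var(A)
print(round(slope_A_only, 2))
```

So a study that cannot control the omitted driver can report a confidently negative effect of A even when the true, all-else-equal effect is positive.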

    Lastly, is this pretty good evidence of the ‘Tony Ives’ school of thought?

    • “Thought these results were absolutely fascinating, Jeremy. I stayed with it right to the end and it was well worth it.”

      Thanks! Glad you found it worth your time.

      “your recommendations imply that you also think we aren’t doing ecology very well (as a group).”

      No, I wouldn’t necessarily say that. Like I said at the end, maybe this is as good as it gets. Maybe we’re already doing about as well as we could be doing. One thing I didn’t get into in the talk is that there are benefits to allowing individual investigators to be creative and do their own thing. I don’t know if it would be worth it to give up those benefits. Here’s an old post talking a bit about this:

      “The other thing that occurred to me is that I think most ecological meta-analyses combine experimental studies (am I right about this?).”

      No, that’s not correct. Now, Laura and I didn’t actually count up how many meta-analyses consider observational studies, experimental studies, or both. But it’s definitely not the case that most meta-analyses in ecology only consider experimental studies. Offhand, I think the majority of meta-analyses in our compilation only consider observational studies. But I’d have to go back and count them up to be sure of that.

      “Lastly, is this pretty good evidence of the ‘Tony Ives’ school of thought?”

      Can you elaborate what you mean? I’m familiar with Tony’s thoughts about theoretical ecology as a “library of case studies”. Are you suggesting that we also think of empirical ecology as a “library of case studies”?

      • Exactly Jeremy. Doesn’t this much heterogeneity in effect sizes imply ecology is a library of case studies?

  2. Heh, here’s one vote for “I wish Jeremy had put in the time to cut this down to ESA talk length”. 🙂

Fascinating! I’ve got a few thoughts but haven’t yet organised all of them.
(1) The ‘typical’ study, m=22, is defined by the MEDIAN. I wonder whether the MODE could be a better option. If the distribution of m is unimodal, then not much would change in your interpretation. But if it is bimodal or multimodal, then you have a way to separate the ‘reliable’ studies from the ‘imprecise’ ones.
    (2) Ecologists could use such a filter to separate the reliable/unreliable studies. “The effect-size in strong studies is …, and in weak studies it is …”
    Not very helpful when weak studies get in the way of the strong studies, as nothing can be learnt at the end of the day. Do we need a (dispassionate) quantitative metric to serve as a filter?

    • Re: 1, there’s a histogram of number of studies per meta-analysis in the video. It’s a right-skewed unimodal distribution. Very right-skewed.

      Re: 2, at the end of the talk I suggest that maybe we should quit publishing meta-analyses with less than 20ish primary studies, on the grounds that they’re just not that reliable. Their results likely will change a lot as more primary studies are published. But “20ish” is a rough and arbitrary threshold.

      • Ah!
Bimodal would have helped. Good studies would stand out from the crowd, without any rule of thumb. So, m>20 is the way forward.

  4. Commenter “Yuval” sent me the following comment via email, because for some unknown reason the comment failed to post:
    Quite an interesting talk, Jeremy, thanks! I have a few thoughts, not quite related.
    1) About the magic number 20. As an anecdote, I am now going through the literature looking for a specific type of paper. I found about 20, from the last 20 years. Perhaps this is more representative than I initially thought (I was initially quite surprised I could only find 20). Also, I was a bit surprised that 20 has so little predictive power; I was under the impression that 20 is often used as a baseline for a good sample size (full disclosure, I know little to nothing about statistics).
    2) Taking the result that most meta-analyses are good for determining whether the true mean effect size is one of -/0/+, and your finalish comment about stopping doing small meta-analyses, I can’t help but ask: what’s the real purpose of most meta-analyses? Or, what are they really good for? I understand that we want to have some baseline notion of a wide range of questions and thoughts, such as “mycorrhizal fungi are good for plant growth”. But most of these (or all?) seem inherently vague, and then, what does knowing the effect size really mean anyway? So I can see the value of getting confirmation of basic notions using meta-analyses (knowing which of +/0/- is relevant), but it is less clear to me why I should really care about effect sizes, and from your results it seems that, for the most part, the estimates are not very meaningful anyway.
    3) Continuing on the “rant” of the previous point: to me all of this implies that we are not really asking the right questions. Your comment about “as good as it gets” suggests that we chose to ask either very hard, or ill-defined, or non-useful questions, for which we’ll end up getting ambiguous and “noisy” answers eternally. It brings to mind the weather vs. climate dichotomy. You say (I think) that perhaps things are just too darn noisy to get nice clean results. That’s similar to saying that weather is not predictable past a week or two. But climate is, for the most part, predictable. So maybe we need to work on “ecological climate” instead of focusing on “ecological weather” the whole time.

    • Thanks for taking the time to comment Yuval. Interesting thoughts!

      20 observations randomly sampled from a population is sometimes quoted as a (rough, arbitrary) rule of thumb for how many observations you need in order to confidently assume that your sample mean will be normally distributed. That is, “20” is a rough rule of thumb for the minimum sample size needed for the central limit theorem to apply. It’s just a coincidence that the median ecological meta-analysis includes approximately 20 studies (the median is 22). And when I suggest that 20ish studies is usually too few, I’m drawing a rough and pretty arbitrary dividing line based on eyeballing the data. It’s just coincidence that that dividing line happens to fall at 20 studies.
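That "n ≈ 20" rule of thumb can be eyeballed with a quick simulation (the choice of an exponential parent distribution is arbitrary, just a convenient skewed example): draw many samples of size 20 from a strongly skewed population and look at how skewed the sample means are.

```python
# Quick check of the n = 20 rule of thumb for the central limit theorem.
import numpy as np

rng = np.random.default_rng(0)

# 50,000 samples of size 20 from an exponential population (skewness = 2).
means = rng.exponential(scale=1.0, size=(50_000, 20)).mean(axis=1)

def skewness(x):
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Theory: skewness of the mean of n iid draws is parent_skew / sqrt(n),
# so here roughly 2 / sqrt(20), i.e. about 0.45.
print(round(skewness(means), 2))
```

The sample means are far closer to symmetric than the parent distribution, though with a parent this skewed, n = 20 still leaves visible asymmetry; the rule of thumb is rough, as noted above.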

      I agree with you that, given that our ecological questions are usually pretty vague and qualitative (e.g., “Are fungi good or bad for plant growth?”), it’s not clear that we learn all that much by answering those questions very precisely. Knowing the exact values of the effect sizes, and their sampling variances, is mostly just a means to the end of doing null hypothesis testing. (Not always; there are some ecological meta-analyses that make more use of the quantitative information a meta-analysis provides.) But it’s not clear to me what we could do about that. I mean, if ecologists were able to ask more quantitative questions (say, derived from parameterized mathematical models), presumably they’d be asking those questions already.

Thinking about it a bit, it might be useful for ecologists to get in the habit of comparing effect sizes across topics. For instance (and this is just the first example that comes to mind), Andrew Gelman has some old discussions on his blog of why he doesn’t believe reported effects of women’s menstrual cycle on their propensity to vote for Democrats vs. Republicans. He doesn’t believe the reported effects because they’re as large as effects of the most important predictors of voting for Democrats vs. Republicans (e.g., predictors such as race and education). It seems very implausible that the true effect of the menstrual cycle could be *that* large. Personally, I find this kind of comparison very helpful; much more helpful than (say) the convention that a Hedges’ d value of 0.2 is a “small” effect, 0.5 is “moderate”, and 0.8 is “large”. If you measure some effect size, tell the reader how big it is compared to some well-known effect with which the reader is familiar. For example, in ecology, maybe compare all effects on terrestrial plant growth or fecundity to effects of some standard amount of N fertilizer.
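For readers unfamiliar with the convention just mentioned, here is a minimal sketch of how Hedges' d is computed (the two data vectors are made-up numbers, purely for illustration): it is the standardized mean difference between two groups, with a small-sample bias correction.

```python
# Hedges' d: standardized mean difference with small-sample correction.
import numpy as np

def hedges_d(x1, x2):
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                  (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(x1) - np.mean(x2)) / sp
    # Hedges' correction factor J for small samples.
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Hypothetical treatment/control measurements.
treated = np.array([5.1, 6.0, 5.8, 6.3, 5.5])
control = np.array([4.9, 5.2, 4.8, 5.6, 5.0])
print(round(hedges_d(treated, control), 2))  # well above the 0.8 "large" benchmark
```

The point of the comparison-to-known-effects suggestion above is that this number, by itself, carries little intuition; anchoring it to a familiar effect (like a standard N-fertilizer response) does.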

      I’ll need to think more about your weather vs. climate analogy…
