Video: most meta-analyses in ecology are too small (for most purposes)

As most of you already know, I had to pull out of this year’s virtual ESA meeting. To partially make up for pulling out, below is an extended video version of what I would’ve said.

Heads up #1: this is a full-length research talk (actually, it’s even a bit too long for that), not an ESA-length talk, so grab a drink and get comfortable. Or, you know, fast forward a lot. I didn’t bother to edit down the research talk I already had semi-prepared.

Heads up #2: Speaking of semi-prepared…it’s not an especially polished talk. In my own defense, I was aiming for a casual vibe, like if you were talking to me about ecology in a bar. But if I were giving the talk for a live audience (which I’d love to do, I really miss giving live talks…), I would definitely prepare a crisp, polished version.

Heads up #3: I had a beer during the talk. Like I said, I was aiming for a “talking about ecology in a bar” vibe. Also, you get to see my back yard. YMMV as to whether those are features or bugs. 🙂

The link to the talk:

https://drive.google.com/file/d/1WyGYiN44JvWIHW1l-uRDof8lH_otbbdx/view?usp=sharing

8 thoughts on “Video: most meta-analyses in ecology are too small (for most purposes)”

  1. Thought these results were absolutely fascinating, Jeremy. I stayed with it right to the end and it was well worth it. What I really like about the results you’ve presented is that they address two key questions for ecologists – first, what do we know? And second, how do we know it?
    Your results suggest that we don’t know very much. That may be because it’s very hard to ‘know’ things in ecology. Or it could be that we aren’t doing ecology very well. I suspect it’s some of both. And your recommendations imply that you also think we aren’t doing ecology very well (as a group).
    You identify the fragmentation of ecological knowledge as one key problem with ecology. And this gets at what I see as a pervasive problem in ecology – very little concern with actually knowing what we know as a discipline. Maybe this is true in other disciplines as well but it is unquestionably true in ecology. It seems to me that all ecologists should become ‘philosophical’ Bayesians. And here I’m not recommending any particular inferential school of thought – I’m simply suggesting that we should see our next piece of research as leading to a ‘posterior’ estimate. This approach implies that we start with a clear (and perhaps quantitative) prior for any effect or set of effects that we’re interested in. And our research leads to some (usually tiny) change in that prior. This approach would force us to construct corridors to other related research.
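    To make that concrete, here’s a toy sketch of the kind of updating I have in mind: a normal prior on an effect size, updated by one new (noisy) study. The numbers and the normal-normal setup are invented purely for illustration.

    ```python
    # Toy Bayesian update for an effect size: normal prior, normal likelihood
    # for a single new study. All numbers are illustrative only.

    prior_mean, prior_sd = 0.2, 0.5  # prior belief about the effect
    study_est, study_se = 0.8, 0.6   # estimate and standard error of one new study

    prior_prec = 1 / prior_sd**2     # precision = 1 / variance
    study_prec = 1 / study_se**2

    post_prec = prior_prec + study_prec  # precisions add under normal-normal updating
    post_mean = (prior_prec * prior_mean + study_prec * study_est) / post_prec
    post_sd = post_prec ** -0.5

    print(f"posterior: {post_mean:.3f} +/- {post_sd:.3f}")  # ~0.446 +/- 0.384
    # The posterior sits between the prior and the new study, pulled toward
    # whichever is more precise -- one noisy study shifts belief only modestly.
    ```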
    The other thing that occurred to me is that I think most ecological meta-analyses combine experimental studies (am I right about this?). Most ecological experiments test simple hypotheses – usually manipulating one or two putative causal mechanisms. If it’s true that most ecological phenomena have many drivers, with interactions among those drivers, then our inability to control any of the large number of (unknown) drivers could reverse the sign of an observed effect even if the ‘true’ effect (after controlling for all other variables) remained the same. So perhaps ecological phenomena are too complex to be understood using many tests of simple hypotheses. That is, if the “true” model for plant growth is
    Plant Growth = a*A + b*B + c*C + … + z*Z
    And we do a bunch of studies where we estimate effects for models like
    Plant Growth = a*A or Plant Growth = a*A + b*B
    Is this an effective approach to understanding what drives plant growth?

    Here I’ve only represented complexity as ‘many drivers’. But add in nonlinear relationships, interactions among drivers, and dependence on initial states, and I think it’s even more questionable whether meta-analyses of simple hypotheses do much more than a random walk towards ‘true’ effect sizes. A toy simulation of the sign-flip problem is below.
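    Here’s a hypothetical simulation of that worry: two correlated drivers, and a “simple hypothesis” study that measures only one of them. The drivers, coefficients, and correlation are invented for illustration.

    ```python
    # Toy simulation: omitting a correlated driver can flip the sign of an
    # estimated effect. Drivers and coefficients are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    A = rng.normal(size=n)
    B = -0.8 * A + rng.normal(scale=0.6, size=n)     # B negatively correlated with A
    growth = 1.0 * A + 2.0 * B + rng.normal(size=n)  # true effect of A is +1.0

    # "Simple hypothesis" study: regress growth on A alone
    slope_A_only = np.polyfit(A, growth, 1)[0]

    # Study that also controls for B
    X = np.column_stack([A, B])
    slope_A_controlled = np.linalg.lstsq(X, growth, rcond=None)[0][0]

    print(f"A alone: {slope_A_only:+.2f}; controlling for B: {slope_A_controlled:+.2f}")
    # Prints roughly: A alone: -0.60; controlling for B: +1.00
    ```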

    Lastly, is this pretty good evidence of the ‘Tony Ives’ school of thought?

    • “Thought these results were absolutely fascinating, Jeremy. I stayed with it right to the end and it was well worth it.”

      Thanks! Glad you found it worth your time.

      “your recommendations imply that you also think we aren’t doing ecology very well (as a group).”

      No, I wouldn’t necessarily say that. Like I said at the end, maybe this is as good as it gets. Maybe we’re already doing about as well as we could be doing. One thing I didn’t get into in the talk is that there are benefits to allowing individual investigators to be creative and do their own thing. I don’t know if it would be worth it to give up those benefits. Here’s an old post talking a bit about this: https://dynamicecology.wordpress.com/2013/01/16/the-road-not-taken-for-me-and-for-ecology/.

      “The other thing that occurred to me is that I think most ecological meta-analyses combine experimental studies (am I right about this?).”

      No, that’s not correct. Now, Laura and I didn’t actually count up how many meta-analyses consider observational studies, experimental studies, or both. But it’s definitely not the case that most meta-analyses in ecology only consider experimental studies. Offhand, I think the majority of meta-analyses in our compilation only consider observational studies. But I’d have to go back and count them up to be sure of that.

      “Lastly, is this pretty good evidence of the ‘Tony Ives’ school of thought?”

      Can you elaborate on what you mean? I’m familiar with Tony’s thoughts about theoretical ecology as a “library of case studies”. Are you suggesting that we also think of empirical ecology as a “library of case studies”?

      • Exactly, Jeremy. Doesn’t this much heterogeneity in effect sizes imply that ecology is a library of case studies?

  2. Heh, here’s one vote for “I wish Jeremy had put in the time to cut this down to ESA talk length”. 🙂

    Fascinating! I’ve got a few thoughts, but haven’t organised all of them yet.
    (1) The ‘typical’ study, m=22, is defined by the MEDIAN. I wonder whether the MODE would be a better choice. If the distribution of m is unimodal, then not much would change in your interpretation. But if it is bimodal or multimodal, then you have a way to separate the ‘reliable’ studies from the ‘imprecise’ ones.
    (2) Ecologists could use such a filter to separate reliable from unreliable studies: “The effect size in strong studies is …, and in weak studies it is …”
    It’s not very helpful when weak studies get in the way of strong ones, since then nothing can be learnt at the end of the day. Do we need a (dispassionate) quantitative metric to serve as a filter?

    • Re: 1, there’s a histogram of number of studies per meta-analysis in the video. It’s a right-skewed unimodal distribution. Very right-skewed.

      Re: 2, at the end of the talk I suggest that maybe we should quit publishing meta-analyses with fewer than 20ish primary studies, on the grounds that they’re just not that reliable: their results will likely change a lot as more primary studies are published. But “20ish” is a rough, arbitrary threshold.
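      To give a rough sense of why: under a random-effects model with k equally precise primary studies, the standard error of the summary effect is about sqrt((tau^2 + sigma^2)/k), where tau^2 is the between-study variance and sigma^2 the within-study variance. The variance values in this sketch are invented, not taken from our compilation.

      ```python
      # Back-of-the-envelope: uncertainty of a random-effects summary effect
      # as the number of primary studies k grows. Equal-variance studies
      # assumed; variance values are illustrative only.
      import math

      tau2 = 0.04    # between-study (heterogeneity) variance
      sigma2 = 0.02  # within-study sampling variance per study

      for k in (5, 10, 20, 50, 100):
          se = math.sqrt((tau2 + sigma2) / k)
          print(f"k = {k:3d}: SE of summary effect ~ {se:.3f}")
      # Halving the SE requires quadrupling k, so the summary from a small
      # meta-analysis can move around a lot as new primary studies accumulate.
      ```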

      • Ah!
        Bimodal would have helped: good studies would stand out from the crowd, without any rule of thumb. So m > 20 is the way forward.
