How replicable is ecology? Take the poll!

The talk of the social science online-o-sphere this week is this long meaty polemic from Alvaro de Menand. Alvaro was a participant in Replication Markets, a DARPA program to estimate the replicability of social science research. “Replication” here refers to getting a statistically significant result of the same sign as the original, using either the same data collection process and analysis on a different sample, or the same analysis on a similar but independent dataset. Participants in the replication market were volunteers who wagered on the replicability of 3000 studies from across the social sciences, a sample of which will actually be replicated to see who was right. But in the meantime, previous prediction markets have been shown to predict replicability in the social sciences, and so in the linked post Alvaro treats the replication market odds as accurate estimates of replicability.

And he’s appalled by the implications, because the estimates are very low on average. The mean is just a 54% replication probability. The distribution of estimates is bimodal with one of the modes centered on 30%. And when you break the results down by field (Gordon et al. 2020), there are entire fields that do quite badly. Psychology, marketing, management, and criminology are the worst. (Economics does the best, with sociology not too far behind.)

The hypothesized reasons for this are pretty interesting (turns out you can learn a lot by reading a bunch of papers from a bunch of fields…). Alvaro argues that lack of replicability is mostly not down to lack of statistical power, except perhaps when it comes to interaction effects. Nor does he think the main problem is political hackery masquerading as real research, except in a few narrow subfields. And he has interesting discussions of the typical research practices in various fields. As sociologist Kieran Healy pointed out on Twitter, the replication market participants basically seem to have identified a methodological gradient across fields: the more your field relies on small-sample experiments on undergrads to test hypotheses that are pulled out of thin air, the less replicable your field’s work is estimated to be. Alvaro also has interesting discussions of variation within fields.

At the end, he has some proposals to address matters, some of them quite radical (e.g., earmarking 60% of US federal research funding for preregistered studies).

I’m curious whether all this applies to ecology. What do you think? How replicable are ecological studies in your view, and what do you think are the sources of non-replication? Take the short poll below! I’ll summarize the answers in a future post.

Here we go again – the planet is practically dead

So the 2020 version of the Living Planet Report has been released to massive headlines blaring catastrophe. The central claim is that vertebrate (i.e. fish, amphibian, reptile, bird, mammal) local populations declined, on average, by 68% from 1970 to 2016 (the report is released 4 years after the end of the data). The authors of the report have done a much better job of getting out the notion that this is an average decline; that is, they’re not claiming that there are 68% fewer vertebrate individuals on the planet, but that the average decline across populations is 68% (but see footnote)*.

To invert their claim, the average vertebrate population in 2016 is 32% (100% − 68%) of the size it was in 1970. The 2018 report says that the average vertebrate population in 2014 was 40% of what it was in 1970, and that the average vertebrate population in 2010 was 48% of what it was in 1970. So if a population in 1970 was of size N, then 2010 = 0.48N, 2014 = 0.40N, and 2016 = 0.32N. Wow! That is a 52% decline in the 40 years from 1970 to 2010, a 16.7% decline in the four years from 2010 to 2014, and a remarkable 20% decline in the two years from 2014 to 2016. The math is a little complex because the decline is exponential, not linear, but that works out to a 1.82% decline per year from 1970 to 2010, a 4.46% annual decline from 2010 to 2014, and a 10.6% annual decline from 2014 to 2016. So not only are there huge declines, but the declines appear to be accelerating (admittedly with small samples for the recent years).

If we are conservative in the face of this accelerating trend, hold the decline constant at 10.6%/year for the next 10 years (from 2016, so to 2026), and start in 2016 at 32% of 1970 numbers, then we are down to 10% of the 1970 numbers by 2026. Do you believe that? Six years from now the average population would be just 10% of what it was in 1970. (To be clear, the LPI authors did not make this claim – I did – but it is just a 10-year extrapolation from their numbers.)

You would think such a decline would be more obvious to the casual observer. I’m old enough to remember 1970 and have spent a lot of time in the woods in my life. If there were a 20% decline (or increase) I’m not sure my fallible memory would reliably detect the change (in fact I’m pretty sure it wouldn’t). But if there were on average 90% fewer birds than in my childhood, I would have thought I would notice. You would also think the world would be absolutely exploding with the things vertebrates eat (e.g. insects and plants).
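If you want to check my arithmetic, here is a short sketch of it. These are my back-of-the-envelope numbers derived from the reports’ headline figures, not anything computed by the LPI authors themselves:

```python
# Remaining fraction of the 1970 baseline, as reported in the 2016-2020
# Living Planet Reports (my reading of their headline numbers).
remaining = {1970: 1.00, 2010: 0.48, 2014: 0.40, 2016: 0.32}

def annual_decline(frac_start, frac_end, years):
    """Constant exponential decline rate implied by a change over `years` years."""
    return 1 - (frac_end / frac_start) ** (1 / years)

r_1970_2010 = annual_decline(1.00, 0.48, 40)  # ~1.8% per year
r_2010_2014 = annual_decline(0.48, 0.40, 4)   # ~4.5% per year
r_2014_2016 = annual_decline(0.40, 0.32, 2)   # ~10.6% per year

# Hold the latest rate constant for 10 more years (2016 -> 2026):
projection_2026 = 0.32 * (1 - r_2014_2016) ** 10  # ~0.10, i.e. ~90% below 1970

print(round(r_1970_2010, 3), round(r_2010_2014, 3),
      round(r_2014_2016, 3), round(projection_2026, 2))
```

Note how the exponential form matters: the 20% drop over 2014–2016 is not 10% per year but 10.6% per year, because the second year’s decline applies to an already-reduced population.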

If this isn’t happening, then what is going on? Well, for starters, it is pretty dicey to take short-term rates and extrapolate them when things grow or decline exponentially. If you do that, you are liable to find that everything is extinct or at infinity pretty quickly. So let’s go back to the core claim straight from the report – there has been a 68% decline in the average vertebrate population since 1970. Not quite as extreme, but you would still think I (and a lot of other people) would have noticed declines in vertebrates of this extent, not to mention the boom in insects and plants as they’re freed from predation.

If you don’t trust my fond recollections of my childhood nor my extrapolation to what should have happened to insects and plants (as you definitely shouldn’t!), then how about this: the LPI core result is completely different from that of other studies (which are not cited in the Living Planet Report, for what it is worth). Several, like the LPI, track thousands of populations over decades. All (like the LPI) suffer from some observer bias – scientists have more data in temperate regions, near cities, and for bigger animals – but there is no evidence to date that this is biasing the results of any of these studies. First, here is a plot very similar to the LPI plot, but for invertebrates in the UK, by Outhwaite and colleagues in Nature Ecology and Evolution:

Now these are invertebrates, not mammals, but what we see is that three broad groups have abundances higher than they did in 1970 (freshwater species showing a spectacular recovery, possibly due to clean water laws), and one broad group is down just a smidge. The overall balance across all four groups is a 10% INCREASE.

Here is a paper by Dornelas and colleagues in Ecology Letters (disclosure I am a co-author):

They (we) used a slightly different method – we calculated the slope of each timeseries and then plotted histograms of the slopes. Note that there is a lot of variability, with some real declines and real increases, but the overall trend across populations is strongly centered on (i.e. averages to) about zero (neither up nor down). In fact the title of that paper is “A balance of winners and losers in the Anthropocene”, and it finds that 85% of the populations didn’t show a trend significantly different from zero, 8% significantly increased, and 7% significantly decreased. A lot of churn in which species are up or down, but NOT an across-the-board catastrophic decline.

Maybe this is because Outhwaite and Dornelas didn’t study vertebrates? Unlikely. Dornelas et al. did pull out different taxa and found that reptiles, amphibians and mammals skewed to more increases than decreases, with no real difference from zero in birds and fish (their Figure 4). Or check out Leung et al., who analyzed a subset of the LPI data (hence all vertebrates) focusing on the well-sampled North American and European regions using a different methodology, and got more groups increasing than declining. Or check out Daskalova et al., who also found that winners and losers were balanced (and most species were neutral). Even the most extreme result among the studies I am aware of that exclusively use longer-term data to look at this question (van Klink et al.) shows a 35% decline over 45 years for terrestrial insects and a 60% increase over the same period for aquatic insects.

I think it is an interesting and challenging question why these studies received little press (despite also being published in high-profile journals), while the LPI gets enormous coverage every time it comes out.
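For readers who want a feel for the slope-histogram approach, here is a minimal sketch. The simulated data and the mix of flat/increasing/decreasing populations are my own illustration (loosely echoing the 85/8/7 split reported above), not the actual datasets or code from Dornelas et al.:

```python
# Sketch of the slope-histogram method: fit a trend slope to each
# population timeseries, then look at the distribution of slopes.
# Simulated populations, NOT the real data.
import random
import statistics

random.seed(1)

def slope(ys):
    """Ordinary least-squares slope of a timeseries against year index."""
    xs = range(len(ys))
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

def simulate():
    # Most populations flat, a few winners, a few losers (illustrative mix).
    trend = random.choices([0.0, 0.05, -0.05], weights=[85, 8, 7])[0]
    return [10 + trend * t + random.gauss(0, 1) for t in range(20)]

slopes = [slope(simulate()) for _ in range(1000)]

# A histogram of `slopes` would be centered near zero: lots of churn,
# but winners and losers roughly balance in the aggregate.
print(round(statistics.mean(slopes), 3))
```

The point of the method is that it exposes the whole distribution of trends, rather than collapsing everything into a single index the way the LPI does.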

These five other studies more closely match my childhood memories. There could be weaker trends (+ or − 10 or 20%). And for sure I could be seeing different species (winners replacing losers). But these five studies completely contradict the LPI result (all five find a robust mix of increases and decreases, and most find something like a balance between the two). So what is going on?

For one thing, I think the LPI bites off too much – it tries to reduce the state of vertebrates across continents and species to a single number (aka an index). That has to sweep a lot of complexity under the rug! There is underlying variability in the LPI too – they just don’t emphasize it, as that is not their point. And to a large extent these other papers are just unpacking that complexity by exposing the underlying high variability in trends.

But those other papers find a more neutral balance while the LPI most definitely does not. Something more has to be going on. It could be their data (but some of the aforementioned papers used the same or a subset of the data). Or it could be their methodology (but some of the aforementioned papers used similar methodologies). Personally, I think it is a complex interaction between the data they are putting in and the weaknesses of the methodology (in the sense that every methodology has weaknesses, not that their methodology is fundamentally flawed or wrong). There may be more to say about this in the future. But for now, I hope we can at least pause and think and do a sanity check.

I want to leave no doubt that I am convinced humans are hammering the planet and the vertebrates (and invertebrates and plants) that live on it. We’re removing >50% of the [terrestrial] primary production each year, have removed more than 50% of the tree biomass, modified >50% of the land, use more than 50% of the freshwater, have doubled the amount of nitrogen entering the biosphere each year, and have increased the amount of CO2 in the atmosphere by nearly 50% since pre-industrial times. But I also don’t think it is possible for there to be a 68% decline in 46 years leading to a projection of a 90% decline over 56 years (10 years from now), nor does a 20% decline in the last two years seem possible. The consequences of 68–90% gone are just too large not to be observed anecdotally and through indirect effects. And the 68–90% decline story just doesn’t align with other major, comprehensive, thousands-of-datasets analyses of this question.

What I believe the data show is that we’re creating winners and losers – some really big winners, some really big losers, and a lot in between – and that’s bad. Humans ARE massively modifying the planet in ways that all but the most biodiversity-hating people care about, and the extinctions we are causing are irreversible, so please don’t cite this blog as evidence that “everything is OK”. It’s not. Is there room for an “in between” (bad but not catastrophe) message?

But either way, please think twice before reporting that vertebrates are disappearing from the planet at these incredible rates. Because the logical conclusion is that nothing will be left in a very short time (a decade or two), and that doesn’t pass the common sense test. This is not an “all scientists agree” scenario. I personally think the balance of evidence (such as that cited above) points pretty strongly against the LPI conclusion. I worry how many more years scientists (and reporters) can report catastrophic trendlines that predict little to no life of any sort on the planet within our lifetimes and not have people notice that this isn’t actually happening.


Note: I am indebted to many colleagues who have talked about this topic with me over the years, some of them co-authors on the paper cited here, some of them co-authors on forthcoming papers, some of them not co-authors, but I want to stress that the opinions here are controversial and my own so I am not listing them here.


* The report averaged rates of decline across populations; it did not compute the total decline in number of individuals (unlike this catastrophic headline). But shouldn’t they be the same thing? Well, yes, if every population declined at the same rate: a 68% decline of 100 here (to 32) and a 68% decline of 100 there (to 32) would still give a 68% decline overall (from 200 to 64). We know in fact that the number of individuals varies wildly (100x–1000x) across populations and species, but equal rates still give the same answer: a 68% decline of 1000 (to 320) and a 68% decline of 10 (to 3.2) take 1010 to 323.2, which is STILL 68%. But now the fact that the 68% is an average comes in. What if the 1000 declined by 60% to 400 and the 10 declined by 76% to 2.4, taking 1010 to 402.4? That’s not a 68% decline but a 60.2% decline, even though averaging the rates of 60% and 76% still gives an average 68% decline. We don’t know for sure whether large populations or small populations are more likely to decline, but we do know that, at least in birds, abundant species are declining while rare species are increasing; if that holds generally, it would mean things are actually even worse than the 68% decline in terms of the total number of vertebrate individuals – but we don’t know for sure. I don’t think this is the central reason why the LPI numbers don’t match my childhood memories, nor the other studies; with such large data and no truly strong correlations between abundance and decline, most of this comes out in the wash. So theoretically this could be a mathematical reason the total number of individuals has decreased by less than 68% even when the average decline across all populations is 68%, but I don’t think it likely. In fact I think that, in a weird way, arguing this is a way of distancing the LPI from what it is really claiming/implying.
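The footnote’s arithmetic is easy to verify. This toy calculation uses the same illustrative numbers as above (two populations of 1000 and 10 individuals); it is my own illustration, not the LPI’s calculation:

```python
# Averaging per-population decline RATES is not the same as the decline
# in TOTAL individuals, once rates differ across populations.
pops_1970 = [1000, 10]

# Case 1: equal rates -> average rate and aggregate decline agree.
equal = [n * (1 - 0.68) for n in pops_1970]       # [320, 3.2]
agg_equal = 1 - sum(equal) / sum(pops_1970)       # exactly 0.68

# Case 2: unequal rates that still AVERAGE to 68%.
rates = [0.60, 0.76]                              # mean = 0.68
unequal = [n * (1 - r) for n, r in zip(pops_1970, rates)]  # [400.0, 2.4]
agg_unequal = 1 - sum(unequal) / sum(pops_1970)   # ~0.602, not 0.68

print(round(agg_equal, 3), round(agg_unequal, 3))
```

The big population dominates the total, so the aggregate decline tracks its 60% rate far more than the small population’s 76% rate.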

Jeremy Fox’s lab seeking three new grad students for 2021

I am seeking three new graduate students (M.Sc. and/or Ph.D.) to start sometime in 2021. Preferably Sept. 2021, but other start dates are possible. My lab is a good fit for students looking to do fundamental research in population and community ecology, particularly research combining theoretical modeling with experiments in model systems. Right now I’m doing a lot of work on spatial synchrony of population fluctuations, and on higher order interactions and species coexistence, but I have other irons in the fire too. See my lab website for more on what’s going on in my lab these days. And see here to learn more about my department and its graduate program.

2021 hopefully will be an exciting time to join my lab. My recent NSERC Discovery Grant renewal was very successful. An influx of new students will bring a lot of new ideas and energy into the lab. And Canada’s been doing a decent job of responding to the coronavirus pandemic, so hopefully by 2021 my lab will have safely returned to something resembling pre-pandemic normalcy.

If you’re interested in joining my lab, please have a look at my letter to prospective grad students. It talks about my approaches to science and mentoring, and includes some questions I ask of all prospective students. If it seems like my lab might be a good fit for you, send me an email with an introductory note, transcripts (unofficial is fine), and a CV. Looking forward to hearing from you!

Saturday blast from the past: the times, they are a changin’ (when it comes to post-publication review)

In light of recent discussions on Twitter and elsewhere about how journals, and individual scientists, should or shouldn’t respond to PubPeer comments (particularly those alleging data anomalies), it seems timely to re-up this old post. It’s from 2014, but I think it holds up pretty well. Verging on prescient, actually!

That old post is also a useful reminder that post-publication reviewers who allege or hint at misconduct sometimes are wrong (and sometimes, aren’t clearly right or clearly wrong). Wrong and debatable allegations can do real damage, especially when there’s no agreed formal procedure for handling them. Anecdotally, it seems to me that a lot of public discussion about how to discover and address cases of potential scientific misconduct is motivated by cases in which misconduct was eventually shown beyond any reasonable doubt. I think it’s worth also keeping other sorts of cases in mind. As discussed in the linked post, I don’t think there are any easy answers here.

Thursday evening blast from the past: remembering Tony Ives’ legendary MacArthur Award lecture

Allison Barner and others have been recalling Tony Ives’ 2013 MacArthur Award lecture, which sadly was never written up as an Ecology paper:

Totally with Allison and Chris on this–that was the best talk I’ve ever seen. So much thought-provoking content, and so unconventionally and compellingly presented. Whether you were there or not, you might enjoy the post I wrote about it at the time.

Anecdotally, it seems like the tradition of MacArthur Award winners writing up their talks as papers has fallen by the wayside. I live in hope that it’ll be revived (come on, do it Jon!). Several MacArthur Award papers are classics–they influenced many young researchers and fully deserved to.