So the 2020 version of the Living Planet Report has been released to massive headlines blaring catastrophe. The central claim is that vertebrate (i.e. fish, amphibian, reptile, bird, and mammal) local populations declined, on average, by 68% from 1970 to 2016 (the report is released four years after the end of its data). The authors of the report have done a much better job this time of communicating that this is an average decline. That is, they're not claiming that there are 68% fewer vertebrate individuals on the planet, but that the average decline across populations is 68% (but see footnote)*.
To invert their claim, the average vertebrate population in 2016 is 32% (100% − 68%) of the size it was in 1970. The 2018 report says that the average vertebrate population in 2014 was 40% of what it was in 1970, and that the average vertebrate population in 2010 was 48% of what it was in 1970. So if a population in 1970 was of size N, then 2010 = 0.48N, 2014 = 0.40N, and 2016 = 0.32N. Wow! That is a 52% decline in the 40 years from 1970 to 2010, a 16.7% decline in the four years from 2010 to 2014, and a remarkable 20% decline in the two years from 2014 to 2016.

The math is a little complex because the decline is exponential, not linear, but that works out to a 1.82% decline per year from 1970 to 2010, a 4.46% annual decline from 2010 to 2014, and a 10.6% annual decline from 2014 to 2016. So not only are there huge declines, but the declines appear to be accelerating (admittedly with small samples for recent years). If we are conservative in the face of this accelerating trend and simply hold declines constant for the next 10 years (from 2016, so to 2026) at 10.6% per year, starting in 2016 at 32% of 1970 numbers, then we are down to 10% of the 1970 numbers by 2026. Do you believe that? Six years from now the average population would be just 10% of what it was in 1970. (To be clear, the LPI authors did not make this claim – I did, but it is just a 10-year extrapolation from their numbers.)

You would think such a decline would be more obvious to the casual observer. I'm old enough to remember 1970 and have spent a lot of time in the woods in my life. If there were a 20% decline (or increase), I'm not sure my fallible memory would reliably detect the change (in fact I'm pretty sure it wouldn't). But if there were 90% fewer birds on average than in my childhood, I would have thought I would have noticed. You would also think the world would be absolutely exploding with the things vertebrates eat (e.g. insects and plants).
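The arithmetic above is easy to check yourself. A minimal sketch (the index values are the report figures quoted above relative to 1970; the function just converts a total change over a period into a constant annual rate):

```python
# Index values relative to 1970, as quoted from the 2018 and 2020 LPI reports.
index = {1970: 1.00, 2010: 0.48, 2014: 0.40, 2016: 0.32}

def annual_rate(v0, v1, years):
    """Constant annual proportional decline implied by exponential change."""
    return 1 - (v1 / v0) ** (1 / years)

r1 = annual_rate(index[1970], index[2010], 40)  # ~1.8% per year, 1970-2010
r2 = annual_rate(index[2010], index[2014], 4)   # ~4.5% per year, 2010-2014
r3 = annual_rate(index[2014], index[2016], 2)   # ~10.6% per year, 2014-2016

# Hold the most recent rate constant for 10 more years (2016 -> 2026):
proj_2026 = index[2016] * (1 - r3) ** 10        # ~0.10, i.e. 10% of 1970 levels
```

This is exactly the extrapolation described in the text, nothing more: it assumes the 2014–2016 rate simply continues unchanged.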
If this isn't happening, then what is going on? Well, for starters, it is pretty dicey to take short-term rates and extrapolate them when things grow or decline exponentially. If you do that, you are liable to find that everything is extinct or at infinity pretty quickly. So let's go back to the core claim straight from the report – there has been a 68% decline in the average vertebrate population since 1970. Not quite as extreme, but you would still think I (and a lot of other people) would have noticed declines in vertebrates of this extent, not to mention the boom of insects and plants as they're freed from predation.
If you don't trust my fond recollections of my childhood or my extrapolation of what should have happened to insects and plants (as you definitely shouldn't!), then how about this: the LPI core result is completely different from that of other studies (which are not cited in the Living Planet Report, for what it is worth). Several, like the LPI, track thousands of populations over decades. All (like the LPI) suffer from some observer bias – scientists have more data in temperate regions, near cities, and for bigger animals – but there has been no evidence to date that this is biasing the results of any of these studies. First, here is a plot very similar to the LPI plot, but for invertebrates in the UK, by Outhwaite and colleagues in Nature Ecology and Evolution:
Now this is invertebrates, not vertebrates, but what we see is that three broad groups have abundances higher than they did in 1970 (with freshwater species showing a spectacular recovery, possibly due to clean water laws), and one broad group that is down just a smidge. The overall balance across all four groups is a 10% INCREASE.
Here is a paper by Dornelas and colleagues in Ecology Letters (disclosure I am a co-author):
They (we) used a slightly different method – we calculated the slope of each time series and then plotted histograms of the slopes. Note that there is a lot of variability, with some real declines and real increases, but the overall trend across populations is strongly centered on (i.e. averages to) about zero (neither up nor down). In fact, the title of that paper is "A balance of winners and losers in the Anthropocene," and it finds that 85% of the populations didn't show a trend significantly different from zero, 8% significantly increased, and 7% significantly decreased. A lot of churn in which species are up or down, but NOT an across-the-board catastrophic decline. Maybe this is because Outhwaite and Dornelas didn't study vertebrates? Unlikely. Dornelas et al did pull out different taxa and found that reptiles, amphibians, and mammals skewed to more increases than decreases, with no real difference from zero in birds and fish (their Figure 4). Or check out Leung et al, who analyzed a subset of the LPI data (hence all vertebrates), focusing on the well-sampled North American and European regions with a different methodology, and got more groups increasing than declining. Or check out Daskalova et al, who also found that winners and losers were balanced (and most species were neutral). Even the most extreme result among the studies I am aware of that exclusively use longer-term data to look at this question (van Klink et al) shows a 35% decline over 45 years for terrestrial insects and a 60% increase over the same period for aquatic insects. I think it is an interesting and challenging question why these studies received little press (despite also being published in high-profile journals), while the LPI gets enormous coverage every time it comes out.
These five other studies more closely match my childhood memories. There could be weaker trends (plus or minus 10 or 20%). And for sure I could be seeing different species (winners replacing losers). But these five studies completely contradict the LPI result (all five find a robust mix of increases and decreases, and most find something like a balance between increases and decreases). So what is going on?
For one thing, I think the LPI bites off too much – it tries to reduce the state of vertebrates across continents and species to a single number (aka index). That has to sweep a lot of complexity under the rug! There is underlying variability in the LPI too – they just don’t emphasize it as that is not their point. And to a large extent these other papers are just unpacking that complexity by exposing the underlying high variability in trends.
But those other papers find a more neutral balance while the LPI most definitely does not. Something more has to be going on. It could be their data (but some of the aforementioned papers used the same or a subset of the data). Or it could be their methodology (but some of the aforementioned papers used similar methodologies). Personally, I think it is a complex interaction between the data they are putting in and the weaknesses of the methodology (in the sense that every methodology has weaknesses, not that their methodology is fundamentally flawed or wrong). There may be more to say about this in the future. But for now, I hope we can at least pause and think and do a sanity check.
I want to leave no doubt that I am convinced humans are hammering the planet and the vertebrates (and invertebrates and plants) that live on it. We're removing >50% of the [terrestrial] primary production each year, have removed more than 50% of the tree biomass, modified >50% of the land, use more than 50% of the freshwater, have doubled the amount of nitrogen entering the biosphere each year, and have nearly doubled the amount of CO2 in the atmosphere since pre-industrial times. But I also don't think it is possible for there to have been a 68% decline in 46 years leading to a projection of a 90% decline over 56 years (10 years from now), nor does a 20% decline in the last two years seem possible. The consequences of 68–90% gone are just too large not to be observed anecdotally and through indirect effects. And the 68–90% decline story just doesn't align with the other major, comprehensive analyses of this question, each based on thousands of datasets.
What I believe the data show is that we're creating winners and losers – some really big winners and some really big losers and a lot in between – and that's bad. Humans ARE massively modifying the planet in ways that all but the most biodiversity-hating people care about, and the extinctions we are causing are irreversible, so please don't cite this blog as evidence that "everything is OK". It's not. Is there room for an "in between" (bad but not catastrophe) message?
But either way, please think twice before reporting that vertebrates are disappearing from the planet at these incredible rates, because the logical conclusion is that nothing will be left in a very short time (a decade or two), and that doesn't pass the common-sense test. This is not an "all scientists agree" scenario. I personally think the balance of evidence (such as that cited above) points pretty strongly against the LPI conclusion. I worry how many more years scientists (and reporters) can report catastrophic trendlines that predict little to no life of any sort on the planet within our lifetimes, and not have people notice that this isn't actually happening.
Note: I am indebted to many colleagues who have talked about this topic with me over the years, some of them co-authors on the paper cited here, some of them co-authors on forthcoming papers, some of them not co-authors, but I want to stress that the opinions here are controversial and my own so I am not listing them here.
* The report averaged rates of decline in populations, not the total decline in number of individuals (unlike this catastrophic headline). But shouldn't they be the same thing? Well, yes, if there were the same number of individuals in each population and each species: a 68% decline of 100 here (to 32) and a 68% decline of 100 there (to 32) would still result in a 68% decline overall (from 200 to 64). But we know that in fact the number of individuals varies wildly (100x–1000x) across populations and species. Even then, a 68% decline of 1000 (to 320) and a 68% decline of 10 (to 3.2) gives 1010 to 323.2, which is STILL 68%. But now the fact that the 68% is an average comes in. What if the 1000 declined by 60% to 400 and the 10 declined by 76% to 2.4, i.e. 1010 to 402.4? That's not a 68% decline but a 60.2% decline, even though averaging the rates 60% and 76% still gives an average 68% decline. We don't know for sure whether large populations or small populations are more likely to decline, but we do know that at least in birds, abundant species are declining while rare species are increasing; if you assume that, it would mean things are actually even worse than the 68% decline in terms of the total number of vertebrate individuals – but we don't know for sure. I don't think this is the central reason why the LPI numbers don't match my childhood memories, or other studies. With such large data and no truly strong correlations between abundance and decline, most of this comes out in the wash. So theoretically this could be a mathematical reason the total number of individuals has decreased by less than 68% even when the average decline across all populations is 68%. But I don't think it likely. In fact, I think that in a weird way, arguing this is a way of distancing the LPI from what it is really claiming/implying.
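The footnote's arithmetic can be laid out in a few lines. This just replays the two-population example above: the average of the per-population decline rates (68%) differs from the decline in the total number of individuals (~60%) whenever rates and population sizes are correlated:

```python
# Two-population example from the footnote: sizes in 1970 and their declines.
pops_1970 = [1000, 10]
declines  = [0.60, 0.76]    # per-population fractional declines; mean is 0.68

pops_2016 = [n * (1 - d) for n, d in zip(pops_1970, declines)]  # [400.0, 2.4]

avg_decline   = sum(declines) / len(declines)        # 0.68 (what LPI-style averaging reports)
total_decline = 1 - sum(pops_2016) / sum(pops_1970)  # ~0.602 (decline in total individuals)
```

If the bigger population had taken the bigger hit instead, the total-individuals decline would exceed the 68% average, which is the "even worse" direction the footnote mentions for birds.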
It’s really hard to emphasize nuance in these politically-charged issues. I think your post does a good job trying to say that there are ways we are changing the world which are not good in any sense, but which are not as simplistic as this kind of report suggests.
Do you think some of your claims could be rallying points where most/many ecologists agree? ‘Human action directly leading to more winners and losers’ for instance? Or is it a bit messier than even this (e.g. if it is unclear whether or not such changes in ecosystem makeup would happen without humans for instance).
I think the winners-and-losers message can be communicated and is a potential rallying point. It helps to take emotionally evocative examples of each. In some ways this is what the Dirzo et al. 2014 paper does in talking about the rodentization of the world. If elephants and bison and antelope are the losers and small rats and voles are the winners, I don't think many people are on board with that (other parts of that paper talk about the defaunation of the world, which is really the same claim as the LPI, and I've already made it clear I don't really agree with that). Others talk about birds. Are we OK with more robins and grackles and fewer bright yellow warblers (because that is exactly what is happening)?
The winners and losers example is definitely compelling, but I don’t think it has to be partnered with discussions of agreement and/or disagreement. A review out this year suggested that if you have to communicate uncertainty to the general public, portraying ‘conflict among experts’ harms credibility and is received rather negatively. I think we run at the uncertainty, stick to the numbers as best we can, and lay it out there in digestible pieces.
A couple of comments. First, non-scientists, especially politicians, do not deal all that well with nuance, so parsing the data as Brian does may be all very well for academics but is unlikely to play well in more public forums. The politics of the Atlantic cod collapse, and most media discussions of human population, illustrate that point. Second, there is plenty of evidence for drastic declines in vertebrate populations, but as you say, you'd have to be paying close attention to notice. The BC population of the northern spotted owl is one example where conservationists have seen drastic declines (roughly 90% since the 1970s). Mountain caribou would be another, and I guess grassland birds would be a third. And poaching and the illegal wildlife trade are gorging their way through populations of many of the world's charismatic (and not so charismatic) species. We can debate ways in which species ought to be valued, and whether some are "worth" more than others, but the public (and human culture) notices the big, the charismatic, and the cute. So, even if the real average loss is much less than reported, the patterns of species decline are still a disaster.
You are absolutely right that there are some well-documented declines. And many the public is well aware of (e.g. around Tucson you cannot hike with your dog in some parts of the mountains due to endangered bighorn sheep, and many beaches are closed for turtle breeding). I have no desire to sweep these under the rug. Extinction is permanent and should be avoided at almost all costs, which in turn means avoiding large declines whenever possible.
But "there exist large declines" is not what is being reported right now. What is being reported is that on average (basically overall) there are large declines.
It's not accurate, though. There are well-documented increases too. In the whole northeastern US where I live, deer, beaver, and turkeys have all made spectacular comebacks from their low points (reached several hundred years ago and mostly persisting until the last few decades), to the point where some are now pests and/or above pre-European levels. And I'd have to check how the time frames match up, but I'm pretty sure the Bald Eagle and California Condor would show up-trends if they were included in the data. I think there is an interesting psychological phenomenon where humans are much more likely to notice what we're losing than what we're gaining (e.g. economists have shown we'd much rather avoid losing $1 than win $1). A lot of invasive species show large increases too (whether we think that is a good thing or not is a separate question).
So if you want to report on overall state you have to figure out the balance between increases and declines. That’s not easy. But I’d bet pretty good money the true answer is not close to the claim of overall massive decline that LPI claims.
I think you raise a really important point: if you embrace the some-win-some-lose view (which I think is the accurate view), then you have to get into a discussion of whether it is OK who is winning and losing (most humans are OK if mosquitoes are losing, and in fact we are actively trying to achieve that). But that becomes a moral discussion, not a scientific discussion. Which is why I think science avoids leading us to that discussion, even though it probably should.
Thanks, Brian. I am left wondering why it is so difficult to figure out how, with more or less the same data, one set of studies can find average changes of 0% (+/- a lot) while another finds an average change of -68% (+/- not very much, at least in the graph). This is not a subtle difference, and so presumably not one that can be ascribed to minor differences at the analysis stage.
Is there not a published equation of the form LPI = f(dN1/dt, dN2/dt…) from which one can see precisely how we end up with LPI = 0.32?
Yes, there are several peer-reviewed papers. And they have in the past published their code. So I definitely think they're being transparent. And I don't think their method is flawed per se (although I have seen an inconsequential error in their equations). I really do think that in this case the difference comes from a subtle interaction between method and data that is having a gigantic effect.
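For readers wanting the flavor of the equation being asked about: as I understand the published method, the core of the (unweighted) index chains a geometric mean of year-over-year changes across populations. Here is a minimal sketch of just that chaining step, deliberately ignoring the interpolation/smoothing, species-level averaging, and richness weighting the real LPI adds on top:

```python
import math

def lpi_sketch(populations):
    """Chain an index from per-population abundance time series.

    For each year t, average the log10 ratios N_t/N_{t-1} across populations
    (equivalent to a geometric mean of the ratios), then multiply the index
    forward. Starts at 1.0 in the first year.
    """
    n_years = len(populations[0])
    index = [1.0]
    for t in range(1, n_years):
        log_ratios = [math.log10(p[t] / p[t - 1]) for p in populations]
        d_bar = sum(log_ratios) / len(log_ratios)
        index.append(index[-1] * 10 ** d_bar)
    return index

# One population doubling while another halves leaves the index flat at 1.0,
# because the geometric mean treats a doubling and a halving as canceling.
flat = lpi_sketch([[100, 200, 400], [100, 50, 25]])
```

This also illustrates why "average decline" here is a multiplicative average of rates, not a count of individuals, connecting back to the footnote in the post.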
Is there a straightforward way to describe that method*data interaction and how it leads to a gigantic effect, or is that what you hint at when you say “there may be more to say about this in the future”? I’m super curious, having heard some takes on it, but nothing that stuck…
I found a quote from a peer-reviewed paper which I think sheds some light on things. First of all, note that on the Living Planet Index page they say: "For the global LPI, the method of aggregation has recently been revised to include a weighting system which gives trends from more species-rich systems, realms and groups more weight in the final index (McRae et al. 2017)". So they are not just averaging trends here.
From McRae et al., the abstract contains this quote: “…we also find that freshwater populations have declined by 81%, marine populations by 36%, and terrestrial populations by 38% when using proportional weighting (compared to trends of -46%, +12% and +15% respectively)”.
Note that the raw trends for two of these three groups are net positive, but with their weighting system, they become negative.
I agree with Brian, that I don’t think either analysis is “wrong” per se. That said, I personally prefer Brian’s simpler (unweighted) approach, and I find it more believable and interpretable. I am generally bothered by the fact that statistical methods in so much of ecology have become increasingly detached from the data. It feels like there’s too much focus placed on overcoming data limitations, rather than letting the data (or lack thereof) speak. That’s just my opinion… At the same time, the opposite opinion – in which data gaps represent sampling biases that should be corrected for statistically – also has some validity if you’re trying to scale things up from local to global. Just two sides of the coin, I guess.
I agree that the weightings can really move the answer around a lot. In particular, they are upweighting the tropics, because it has the most species, even though that is the sparsest and noisiest data. I'm definitely not saying it's right or wrong (although, like you, I prefer the direct representation of the data) – it's just a preference. But I think most people don't realize how much it changes the answer. They would spend a lot more time analyzing the weights if they realized the effect they have.
Weighting more species-rich systems, realms, and groups more heavily?! This has nothing to do with stuff like trying to correct for gaps in the data via data imputation, or using a Bayesian hierarchical model to “borrow strength” from well-sampled groups to improve our estimates for poorly-sampled groups, or whatever. Does it? Am I badly misunderstanding? Isn’t this just “we, the people who are putting together this index, care a lot about species richness, so we’re going to weight the populations from species-rich locations, and species-rich taxonomic groups, more heavily”?
Assuming I haven’t badly misunderstood (and it’s possible I have!), this illustrates why I could never be an ecologist who works on developing indices of vague, subjective concepts. Indices that are intended to shape public opinion and policymaking. At some point, the judgment calls you’re making as to the numbers on which to base your index, and how to weight and process them, just become soooo subjective (and, consciously or unconsciously, so oriented towards supporting your preferred narrative/policy/goal).
Not that this problem is unique to ecology, of course. Far from it! I was just reading a Twitter thread deep dive into the data underlying a prominent index of the quality of the public education system of every country in the world. It said the US was on a par with Uzbekistan in 91st place, way below every other wealthy country in the world! When you dig into the underlying raw data, that turns out to be because of (i) bad data (the Uzbekistani government’s claims about the literacy rate of their population are almost certainly overstated substantially) and (ii) subjectivity. When you ask some unidentified experts a vague subjective question about how good the US education system is, they score it pretty low. That subjective question, and the literacy data, are weighted so heavily in the index that the US ends up tied with Uzbekistan.
But I have no idea what to do instead. Policymakers and the public aren’t going to suddenly start asking for nuance. They aren’t going to stop asking for numbers and data even when no good data are available. They aren’t going to suddenly stop caring about the “big picture” and the “bottom line”. And they aren’t suddenly going to start sifting through a bunch of different (noisy, seemingly-contradictory) lines of evidence that haven’t been boiled down into a single “big picture” index.
@Jeremy you understand correctly. It is basically a weighted average by species richness, all run through a geometric mean. Note that this weighting is like the inverse of, say, a meta-analysis weighting, where we upweight the points that have the most data. Here we're likely upweighting the points with the least data.
@Jeremy, I don't think it's as arbitrary as you're thinking it is. Here's how I understand the logic. Suppose you have two regions: in region A you have trend data for 5 out of 10 total species, and in region B you have trend data for 5 out of 100 total species. Let's say for argument's sake that the 5 species in region A increase by 1% per year, and the 5 species in region B decrease by 1% per year.
If you want to calculate an average trend for these 110 species as a whole, you would want to place more weight on the trends from region B because it has more species (resulting in a net negative overall trend). Region B receives more weight (10x as much) despite the fact that you actually have trends for a much lower proportion of the total species community in region B.
This comes with the assumption that the species for which you have trends are representative of the regions from which they are drawn. That’s definitely a whopper of an assumption. Anyway, that’s my understanding of the reasoning there – I don’t think it’s arbitrarily done just because they “like” species rich communities or anything like that.
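The two-region example above is easy to make concrete. This sketch uses the hypothetical numbers from the comment (regions "A" and "B" and their trends are illustrative, not real LPI data):

```python
# Hypothetical two-region example: trends are weighted by total regional
# species richness, not by how many species were actually sampled.
regions = {
    "A": {"richness": 10,  "trend": +0.01},  # 5 of 10 species sampled, +1%/yr
    "B": {"richness": 100, "trend": -0.01},  # 5 of 100 species sampled, -1%/yr
}

total_richness = sum(r["richness"] for r in regions.values())

unweighted = sum(r["trend"] for r in regions.values()) / len(regions)   # 0.0
weighted   = sum(r["richness"] / total_richness * r["trend"]
                 for r in regions.values())                             # ~-0.0082

# Region B gets 10x the weight of region A, so the combined trend is negative
# even though equal numbers of sampled populations went up and down.
```

The same data that average out to zero unweighted come out clearly negative once richness weights are applied, which is the crux of the disagreement discussed in this thread.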
Thanks, I see your point. But even so, that’s basically an extremely ad hoc and bad way to do a hierarchical model. Particularly because, as Brian noted, the data for species from species-rich sites probably are sparser and noisier.
Definitely nothing as fancy as a hierarchical model going on here! It is probably the world's most complicated directly descriptive statistic, with no modelling at all. I'm not sure a fancy hierarchical Bayesian model adds more light, though. Data limits are what they are. Personally, I'd rather see a separate trend line for each biome, with larger error bars in the tropics. But as you note, that is not going to meet the goal of massive press coverage. In the end, for their goal, you either go unweighted and accept bias toward frequently observed groups, or you weight by species richness. I'm not sure one is better than the other. It's really the goal that is the problem.
Thanks very much for sharing this blog post Brian. As a co-author on one of the studies you mention and someone who has been pondering this issue and discussing it with you and others, I very much share your concerns that the Living Planet Reports and trends in the Living Planet Index diverge from results in the published literature about global net population trends across taxa. What I worry most about is a public credibility crisis if conservation rhetoric diverges from scientific evidence. It would be great to see more balanced messaging in the media that explains the scientific nuance that a growing body of literature is showing with respect to population and biodiversity change during the Anthropocene. But I am not sure how we as scientists can work towards redressing the balance in the public discourse.
” What I worry most about is a public credibility crisis if conservation rhetoric diverges from scientific evidence.”
Agreed. And I think we're about at that point. And although extrapolating forward is dicey, as my simple estimates show, things look even more unlikely if you play the trends out a little.
And it won't matter a bit that there are five papers pointing to an accurate picture and one (that gets all the coverage) that is inaccurate. Everybody will remember the inaccurate one.
There is a bit of a tragedy of the commons in public communication of science. The individual scientist is rewarded for sensationalizing, while the overall good of the science-public trust is polluted by sensationalizing.
Probably worth being clear that the LPI does not suggest an accelerating rate of decline, in fact the metric seems to be levelling off. The shift in assessment over time must be due to changes in methodology or data included – which would be interesting to understand – but the LPI does not look unrealistically precipitous. See Figure 1 in paper – perhaps worth appending to your analysis for clarity.
I understand what you are saying visually, but I stand by my analysis. It's in the nature of exponential decline to zero. As a rate – i.e. as a percentage of what was there the year before – they do report an acceleration (an increase in the rate of decline).
I agree that nuance is both crucially important AND also often hard to communicate beyond any specific expert bubble. But as a coauthor on a recent study based (partially) on LPI data and (fully) on LPI methodology I’d also say yes, some parts of the planet will be dead soonish, if current developments continue (https://onlinelibrary.wiley.com/doi/full/10.1111/gcb.14753). Large freshwater species are suffering and going extinct (https://www.sciencedirect.com/science/article/abs/pii/S0048969719362382), and we know pretty well what the mechanisms are. Given how freshwater is both a habitat and a very limited resource in many regions I fear we will see many species vanish as collateral damage in water-related conflicts around the world; e.g. https://www.ft.com/content/82ca2e3c-6369-11e8-90c2-9563a0613e56
First, freshwater is an important habitat, and I agree it is among the most threatened. But it is just a couple percent of the earth's surface, so I'm pretty confident it's not what is driving the LPI. Second, both the Outhwaite and van Klink papers I cited address freshwater. The stories there are not simple but contain a mixture of good and bad news.
Re the 1st: relative to the share of the earth's surface they cover (approx 1%), freshwaters host disproportionately high levels of biodiversity (approx 10%). Of the 20,811 populations considered in the LPI, 3,741 are freshwater populations (i.e. roughly 18%). Considering that the freshwater LPI for those 3,741 populations is calculated as an average decline of 84% since 1970, I would say yes, the 68% decline calculated for the overall LPI is more influenced by freshwater populations than one would expect given only the amount of the earth's surface covered by freshwaters.
Re the 2nd: I know van Klink et al's study. Considering the numbers they present for freshwater insects, I recommend checking the eLetters published alongside the paper on the Science website.
I certainly wasn’t trying to put down the importance of freshwater systems, although I’m sure the land area argument is used to do that. You are right that freshwater is overweighted by species (as you say about 10%), which in the LPI means that the freshwater records are renormalized to a 10% weighting. I still don’t think that the 68% decline is primarily due to freshwater (and I think the 84% decline for the freshwater LPI represents the same sort of overstatement found elsewhere in the LPI).
I read the eLetters. I see debate about attribution of the causes of change, and the (to me tiresome) argument of geographic bias (which also applies to the LPI data), but nothing that fundamentally undermines the reported trends. Did I miss something?
Those two studies' freshwater results were surprising. In addition to the several comments on the apparent data trend in van Klink made in Science (https://science.sciencemag.org/content/368/6489/417/tab-e-letters), I would add that if you actually examine the data sources for those trends, you find that most of the long-term data were based on reservoirs in Russia (maturing habitats) or river restoration projects in America (improving habitats), which are likely to give increasing trends. There is also a huge bias in the types of freshwater bodies in the studies included in van Klink (a function of an inherent bias in the data available): the studies are almost all on big rivers and lakes, but most freshwater species are in smaller water bodies. Outhwaite is puzzling, as the same authors analysed the same datasets for the State of Nature report and came to a different end point. Again, careful attention needs to be paid to the source of the data and the size of the water bodies it represents. Improvements due to better wastewater treatment benefit sites in national water-quality monitoring datasets more than sites that are not in those datasets.
Surprising or not, those two studies are the best available data I am aware of. Throwing stones at papers you disagree with is pretty easy (every paper has shortcomings), but I didn't see anything in the eLetters that fundamentally undermines the data. And the geographic-bias argument is a perfect example: it is regularly pulled out as a critique of large studies one doesn't like, but then not pulled out as a critique of studies one does like – I have yet to hear anybody make the argument about geographic bias in the LPI, even though its data have geographic biases as well.
If you want to convince me, you need to make an apples-to-apples argument by providing data that shows what you want to claim, not just throw stones at the papers that contradict your claim.
So I repeat, for now I would say the Outhwaite and van Klink papers are the best available data with no fundamental flaws I am aware of.
Even if the freshwater samples in van Klink et al.’s study are biased to large freshwater bodies, this alone shouldn’t lead us to assume that a hypothetical unbiased global dataset would necessarily show different results. By and large, global surface-water coverage has increased over recent decades (becomes obvious by looking into the full results tables in the SI of https://www.nature.com/articles/nature20584). We just did a reanalysis of those data (still in review) and found that total areas of both permanent and seasonal small water bodies are increasing in most world regions. Notwithstanding the many that are certainly being degraded, new ones are also popping up in all kinds of remote places that are simply off the radar of currently available freshwater indicators and population/community time-series data. I think site selection bias hugely affects our perception of how both habitats and the communities in them are changing. This is not to say that specific declining water bodies or communities that we might care about are replaceable with others – but that is IMO a different question that has more to do with how and where we value biodiversity than with the validity of published trends.
One has to look under the hood of the LPI to understand what is there. I did have a quick look and you find everything BUT true abundances. You find surveys from breeding bird volunteers, whose units are an "annual index". You find catch per unit effort from exploited populations. One data entry says: "Density estimated from questionnaires by moose hunters and an equation linking tracking data". The LPI is an index of indices…
"The LPI is an index of indices…"
I guess this gets to a similar point as Jeremy's comments above. There are much more modern and more sophisticated tools for these questions (e.g. the Outhwaite paper used a state-of-the-art wildlife occupancy model). Personally, I think there is something to be said for the simple and direct data too. I suppose in some ways the LPI is in an uncomfortable middle spot where it is fairly complex in some ways and fairly simplistic in others.
I agree with the critique of the LPI. It would be more convincing to see the trends of individual populations with error bars over time than some weighted average. And percentages or proportions can get very confusing. The idea of trying to summarise such disparate data into one index sounds attractive but is flawed because it discards useful information – similarly, we cannot measure our health, or an economy's, with one index.
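To make the "one index" concern concrete, here is a toy sketch (with made-up numbers; the real LPI pipeline additionally smooths series with GAMs and applies hierarchical weighting) of how a geometric-mean index treats every population's proportional change equally, regardless of population size:

```python
import math

# Toy illustration (NOT the actual LPI pipeline): an LPI-style index is
# the geometric mean of each population's proportional change, so every
# population contributes log10(N_end / N_start) with equal weight.
def lpi_style_index(populations):
    """populations: list of (N_start, N_end) pairs, any units."""
    mean_log_change = sum(math.log10(end / start)
                          for start, end in populations) / len(populations)
    return 10 ** mean_log_change  # index relative to 1.0 at the start

# One huge stable population and one tiny population that crashed 90%:
pops = [(1_000_000, 1_000_000), (50, 5)]
print(lpi_style_index(pops))  # ~0.316, i.e. a "68% average decline"
```

Here a 90% crash in a population of 50 animals drags the index down to roughly 0.32 even though the million-strong population is unchanged – a "68% average decline" headline produced while total abundance barely moved.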
The goal of LPI seems to be to shock their perceived audience into action. Those equally passionate take it at face value and perpetuate it. But others find it defies their observations and common sense, just like ridiculous past estimates of future and even current extinction rates. Some extinction estimates are first stated as ‘estimates with caveats’ but quickly get simplified into being ‘facts’ (such as the implausible ‘1 million species are threatened with extinction’ by IPBES now cited as a fact in uncritical scientific papers – although one might argue that almost all species are eventually threatened with extinction including our own).
Maybe I overlooked it, but the LPI data do not seem downloadable or published in a peer-reviewed journal so the results cannot be reproduced or reanalysed (e.g. using additional data). So this does not seem to be good science.
Sorry to be late coming to this fascinating debate (and thanks for the kind words on our recent paper).
Brian made a brilliant point about LPI being in a difficult middle-ground between a complex statistical model and a simple descriptive statistic. This is a general problem for biodiversity indicators: there is often a trade-off between scientific rigour (= complexity, capturing of uncertainty) and ease of communication. Which should we prioritise?
Also, knowledge in science evolves as better methods become available, but for biodiversity indicators it’s hard to change methods without undermining credibility with non-specialists.
Hmm, not sure I agree. Changing indicators certainly comes at a cost, but I do not think that credibility with non-specialists needs to be one of them.
I agree that there is often a trade-off between rigour and ease of communication, but these are not discrete choices but gradients, and we should not just assume that an optimal location in that trade-off space is very close to the ease-of-communication extreme. If our best bet at retaining credibility as a scientific community is to communicate “facts” whose underpinning assumptions cannot be touched despite evidence that they should be (as in an indicator that can never update its methodology), we have much bigger problems than the LPI. We would then essentially buy our credibility through a complete misrepresentation of how science works and what it can and cannot achieve, and that might bite our credibility in the ass down the road. IMO, we shouldn’t be afraid of communicating that every generation of scientists tries to do the best possible job, but that what can be considered good necessarily evolves. Maybe it’s time to make a case that the LPI is just not good enough anymore. We now know about things such as site-selection bias, and we increasingly have the tools to address these things with the help of ancillary data and more sophisticated models. Do people nowadays dismiss the credibility of the automobile industry because we consider cars that some decades ago were sold as “safe” as no longer safe enough for modern standards? Ultimately, we won’t progress in informing society with best-available scientific evidence (which is arguably our mandate) if we do not dare to phase out our outdated technologies.
I also think we increasingly no longer face the classical trade-off between rigour and continuity in indicators: thanks to IT, it is no longer necessary to introduce “interpretability breaks” into indicator time-series each time we make an improvement, as we can simply retrospectively update the entire time-series. For continued comparability against earlier reports etc., we can simply maintain the old indicator version in parallel and only phase that out slowly over the years. This is how software works. Again, no one would dismiss Linux’s or Microsoft’s credibility for overturning technologies that they had previously praised with better ones. IMO, science should not be afraid of communicating such “alternative truths” to non-experts, along with a simple explanation of the underlying assumptions and pros/cons in the underlying data/tools. Non-experts are capable of deciding for themselves which automobile brand they want to trust more based on tests of their relative performance – I think they can deal with changing “facts” better than we often assume.
Brian, I had a look at the Living Planet database and it is incredibly patchy. And it has populations that are sub-populations of each other. E.g. for red deer, they have an estimate for the whole of Scotland, an estimate for the entire UK, and an estimate for the north block of the Isle of Rum, a tiny sub-population on one of the smaller islands in Scotland. If I understand right, these would be roughly equally weighted as inputs to their algorithm. Also, they are all for different time periods and none are up to date:
– a survey for the whole of Scotland for 1970 through to 2000,
– a survey for the whole of UK for 1995 through to 1999, and
– a survey of a small area in the north of the Isle of Rum, a tiny island south of Skye for 1974 to 2006.
If you look at the European rabbit, they have only six records for the whole of the UK, each of a few dozen individuals. They have two records from Spain and three from France. The UK records stop in the 1980s.
For red squirrel they have six records in Belgium ending in 1986, one in Sweden ending in 1983, and three in Jersey ending in 1997. Nothing in the UK.
For the wild cat, two entries from Spain and one from Hungary. Spain ends in 1996, Hungary in 2001. That is it. Nothing from the UK.
For the giant panda, the WWF logo species, they have only two entries, from 1985 and 2000, both estimating the population as 1,000 – despite WWF themselves often blogging much more accurate population estimates for this species.
They are aware of these inadequacies and use statistical methods to try to compensate for them – but how can you pay much attention to an attempt at an estimate based on such patchy data? Even when there is much better data available that for some reason they haven’t entered into the database?
It's not just the problem of a bias towards trends in small populations. E.g. if you rely on observations of animals in one particular location – they could have just moved. With climate change as well as deforestation/reafforestation and other habitat changes, many animals are moving to new habitats, so if you set up field observations for a particular animal and then they move, you can expect the numbers preferentially to go down rather than up. So these might not even be trends at all, as they are not trying to get estimates even for small sub-populations. Basically, you can't make a silk purse out of a sow's ear, however fancy the stitching.
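The "they could have just moved" point is easy to demonstrate with a tiny simulation (hypothetical numbers): total abundance is held constant while the range centre shifts, yet a survey fixed at the original site still records a collapse:

```python
import random

random.seed(1)

# Hypothetical illustration of fixed-site bias: 1000 animals scattered
# uniformly within +/-1 unit of a range centre that shifts by one unit
# per "decade". The survey stays at site 0 and counts animals within
# +/-1 unit of it.
def fixed_site_count(range_centre, site=0.0, site_radius=1.0, n_animals=1000):
    positions = [range_centre + random.uniform(-1, 1) for _ in range(n_animals)]
    return sum(abs(p - site) <= site_radius for p in positions)

# Total abundance never changes, yet the fixed-site counts fall to zero.
counts = [fixed_site_count(range_centre=float(decade)) for decade in range(4)]
print(counts)
```

The recorded "population" declines decade by decade even though not a single animal died – the range simply drifted away from the survey site.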
Do say if I’m missing something. This is my blog post about it:
Even though I'm late, I just wanted to let you know that I modestly contributed to the papers assessing long-term trends in biodiversity estimators, in our case using quite standardized data collected within the framework of LTER network sites:
Elizabeth Wolkovich (I think it’s Elizabeth Wolkovich; the post author is listed as “Lizzie”) comments on the recent Nature paper co-authored by Brian reanalyzing the Living Planet index data. She uses a colleague’s complaint that the Nature paper is based on “bad data” to ask the broader question, “What is ‘bad data’, anyway?”
Pingback: Scepticism towards the Living Planet Index is unwarranted | The Solitary Ecologist
Pingback: Friday links: Covid-19 vs. BES journals, Charles Darwin board game, and more | Dynamic Ecology
Pingback: Friday links: marking Black History Month in ecology, financial crisis at Laurentian University, #pruittdata latest, and more | Dynamic Ecology