Friday links: Geragnostus waldorfstatleri, the biased Living Planet Index, and more (UPDATED)

Also this week: against scientific debates, painting David Attenborough, exporting anxiety (?), links from Brian (!), and more

From Jeremy:

I’m a bit late to this, sorry: Terry McGlynn reacts to recent posts by Meghan and me, explaining why he doesn’t like scientific debates and arguments, and why it’s good that ecology now seems to have fewer of them. He also comments on why ecology has fewer debates and arguments than it used to. He basically stumps for a combination of one of my hypotheses, a modified version of another, and a hypothesis I mentioned briefly but didn’t fully develop (increased collaboration = fewer arguments).

I’m also late to this: congratulations to all ASN award winners!

Brian has an old unfinished draft post doing a deep dive into the calculation of the Living Planet Index (LPI). I mention this because the peer-reviewed literature has beaten him to the punch. Buschke et al. (2021 Nat Ecol Evol) criticized the LPI as biased, because the LPI finds declining population trends even when fed simulated data exhibiting random fluctuations that are trend-free on average. Now Toszogyova et al. (2024 Nat Comm) argue that there are previously unrecognized sources of bias (or at least, lack of robustness) in the LPI, while also arguing that the LPI behaves appropriately in the simulations of Buschke et al. (2021). So there’s disagreement over both whether the LPI is biased (because presumably the creators of the LPI don’t think it’s biased), and over the causes of whatever biases it may have. Time to finish that draft post and tell everyone what to make of all this, Brian! 🙂 Also, it seems like the whole topic of putative biases in the LPI would be good fodder for debate in an undergraduate ecology or biostats course. Or good discussion fodder, if you view debates as having zero or negative pedagogical value. (Brian adds: HT to Carl Boettiger for pointing out to me that this paper is interesting not only for the science of the LPI or biodiversity indices, but for two more meta reasons. 1) This paper is effectively a peer-reviewed code review of somebody else’s computer code – the code that calculates the LPI – and to a lesser degree somebody else’s data set. That is a really rare category of paper right now, but it should become increasingly common! 2) The reviews on this paper are all publicly shared, albeit anonymously. This is interesting for reasons ranging from its potential as instructional material for students learning how to deal with (or write) reviews, to the insight it offers into how review of a controversial topic at a top journal goes down.) UPDATE: Corrected the publication date of Buschke et al. Also, in the comments Falko Buschke highlights additional results from Buschke et al., illustrated with slick animated graphs. Falko also comments on whether Buschke et al. and Toszogyova et al. should be framed as critiques or refinements of the LPI. I suspect that’s an issue on which we could have a long discussion–and perhaps we will!–because there’s scope for reasonable disagreement. More on this in a future post, maybe. /end update
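
If you want a quick feel for the chaining calculation at the heart of the dispute, here’s a bare-bones toy sketch in Python. To be clear, this is not the LPI’s actual code (which, per the papers and Falko’s comments below, also GAM-smooths individual time series and applies taxonomic and geographic weighting); the starting abundances, fluctuation size, and time span are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pops, n_years = 1000, 50
sigma = 0.05  # illustrative ~5%-per-year fluctuations; not a value taken from either paper

# Trend-free multiplicative fluctuations: each year a population is multiplied by
# (1 + e), with e symmetric around zero, so expected abundance never changes.
eps = rng.normal(0.0, sigma, size=(n_pops, n_years))
N0 = np.full((n_pops, 1), 100.0)                      # all populations start at 100
N = np.concatenate([N0, N0 * np.cumprod(1 + eps, axis=1)], axis=1)

# Bare-bones LPI-style chaining: average the log10 year-to-year ratios across
# populations, then accumulate them into an index that starts at 1.
log_ratios = np.log10(N[:, 1:] / N[:, :-1])           # one ratio per population per year
index = 10 ** np.cumsum(log_ratios.mean(axis=0))      # chained geometric-mean index

print(f"Index after {n_years} years: {index[-1]:.3f}")
# The index ends below 1 even though every population is trend-free in expectation,
# because the mean of log(1 + e) is negative when e is symmetric around zero
# (a Jensen's inequality effect). Whether effects like this matter much for the
# real, much messier LPI is exactly what the papers above argue about.
```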

New best LLM for coding assistance just dropped: Claude 3.5 Sonnet. I plan to try it out soon.

An AI “transformer” model can sometimes outperform any of the individuals used to train it. How is that possible, and what are the broader implications? Here’s a good, accessible, grounded post.

Computer scientist and statistician Jessica Hullman on LLMs as a data analytic tool.

Related to the previous three links: the political economy of AI regulation in the US. Man I’m getting old–did I already link to this recently? I can’t remember, and I’m too lazy to check. Also, has anyone seen my keys?

The future–even the near future–isn’t as predictable as many people think (even well-informed people with skin in the game). I worry that Dan Gardner’s example comes from too long ago, and from too unusual a situation, to really drive his point home. But I still found it an interesting history lesson.

Recent(ish) unreviewed preprint says that medical papers routinely report numerical values, and measures of precision, for their headline results in their abstracts. In contrast, sociology and political science papers don’t (although they’re starting to). I haven’t compiled data on ecology papers, but I’m sure they’re like sociology and political science papers. Would it be better if ecology paper abstracts changed to become more like medical paper abstracts?

Here’s the new David Attenborough portrait by Jonathan Yeo. Personally, I prefer portraits that use the background and setting to convey something about the sitter, as in many Old Master paintings. Yeo’s style is almost the opposite, at least to my philistine eyes. But YMMV.

This week in clickbait science: a preprint (in review) that quantifies and explains variation in how academics talk about political issues on Twitter. You will definitely never guess how country, university prestige, and scholarly field correlate with (the paper’s measure of) academic Twitter account toxicity.

Beyond academic sectarianism. I don’t ordinarily link to pieces on this topic, for various reasons that you can probably guess. But I think this piece is better, and more sincere, than many others on the topic, so I decided to link to it. I’m deliberately not revealing whether, or to what extent, or in what respects, I agree or disagree with it.

Is anxiety the top US export?

GeoGuessr master travels the world to visit the places he knows from playing GeoGuessr. This is strange and very much of the present moment, but in an oddly uplifting way.

Looking forward to someone who loves this blog nevertheless starting a betting market on whether all its posts are based on lies.

Species named after Muppets. 🙂

11 thoughts on “Friday links: Geragnostus waldorfstatleri, the biased Living Planet Index, and more (UPDATED)”

  1. I have to say, I love that painting of Sir David! If you scrutinise the background you can see shapes and figures emerging that, while indistinct and impressionistic, hint at his long and adventurous life. Is it my over-active imagination or are those Aztec figures on the right? And is that a monkey on the left? I really want to look at it in person to see if I can make out the details more clearly. Yeo also painted the first official portrait of King Charles, which caused a bit of a controversy, but I like that too: https://www.bbc.com/news/entertainment-arts-68981200

    • “If you scrutinise the background you can see shapes and figures emerging that, while indistinct and impressionistic, hint at his long and adventurous life”

      Your comment prompted me to go back and look again, because that would be excellent portraiture. But I gotta say, I didn’t see it.

      • “Look on his left shoulder, and on right collar. e.g.”

        I dunno, maybe I need to see the original painting, or a really high-res image. I’m still not seeing anything. Which, to be clear, might well just be me!

  2. I’d like to add a question to “Ask us anything”: How does academic sectarianism (or whatever you want to call the situation described in that article) influence the practice of ecology, and the applied implications that flow out of ecology? I suppose the “influence” could be evaluated relative to the counterfactual world in which there is no academic sectarianism. (I’m not really expecting an answer, so maybe I’m just saying this is a question that seems well worth asking.)

    • I think of your 2019 Phil Topics paper, Mark. What leads to that bias you discussed (which I think the paper underestimates)? Dan Kahan’s work would say our “cultural” values drive our world view, and that world view is all wrapped up in how we would view e.g. biodiversity and nature in general. Sectarianism attracts a certain type of person, and that person in turn will come with a constrained viewpoint on things we take for granted. Kahan often presents Mary Douglas’ matrix of culturally driven values — the sectarianism pointed out in the paper is us narrowing down to Douglas’ upper right corner more and more. That influences how we act together, and how we lean forward and back in our chairs (Vellend, 2019) at the appropriate times together.

      I wish the paper was more nuanced with regard to conservative vs. liberal: you don’t need to be a “conservative” to feel pretty uncomfortable in academia.

  3. Hi Jeremy – as the author of one of the papers on the Living Planet Index you link to (note: the date was 2021, not 2017), I’d like to point out a detail about our paper that tends to be missed.

    The LPI’s downward trend when populations fluctuate on a random walk was only one part of the paper (and, frankly, the less important part). As we explained in the paper “Even though random fluctuations caused declines in the LPI for otherwise stable populations, these effects were too small to explain empirical declines.”

    The more important part of our paper was how population fluctuations affected the way the LPI interpolates messy time-series data using a GAM. When population trends are strongly nonlinear, noise around the trend means that the GAM is unable to accurately emulate the convexity of the population trajectory. We included some illustrative animations in this blog post.

    [Animated graphs in the original comment: low fluctuations vs. high fluctuations]

    When we corrected for this effect using reshuffling null models, we found that the LPI overestimated declines by 9.6% (the figure reported in the abstract)… but the LPI still declined by more than 50%. By contrast, huge random fluctuations around otherwise stable populations (i.e. 5% per year) were barely enough for the LPI to decline by 5%.
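
    If anyone wants to tinker with this at home, here is a very rough stand-in for the interpolation step: a smoothing spline in Python rather than the actual GAM the LPI uses, with entirely made-up numbers. The point is only to let you compare how faithfully a smoother tracks a strongly nonlinear trajectory at low versus high noise; it is not our analysis or the LPI code.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)

years = np.arange(30)
# A strongly nonlinear "true" trajectory: a steep early decline that then levels off.
true_N = 1000 * 0.85 ** years + 100

def fitted_trajectory(noise_sd):
    """Add multiplicative observation noise, then fit a smoothing spline to log10(N)."""
    obs = true_N * rng.lognormal(mean=0.0, sigma=noise_sd, size=years.size)
    # Smoothing target set to roughly the expected residual sum of squares in log10 units.
    spline = UnivariateSpline(years, np.log10(obs),
                              s=years.size * (noise_sd / np.log(10)) ** 2)
    return 10 ** spline(years)

for noise_sd in (0.05, 0.50):  # illustrative low vs. high fluctuation levels
    err = np.mean(np.abs(np.log10(fitted_trajectory(noise_sd)) - np.log10(true_N)))
    print(f"noise sd {noise_sd:.2f}: mean |log10 error| of the smoothed trajectory = {err:.3f}")
```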

    I’d also like to point out that Figure 1 in the new paper by Toszogyova et al. reports similar patterns: the global LPI still declined by 36 – 69% depending on their various corrections (when they used the same weighting to account for taxonomic and geographical data biases as the LPI).

    I am worried that there is a risk of readers only skimming these papers and getting a false impression that the LPI is completely flawed. This would be a mistake. These studies raise issues about the exact quantitative predictions of the LPI (which are always going to be uncertain), not the overall trend. So, I’d prefer it if these studies were framed as refinements, rather than critiques, of the LPI.

    • Thanks for your comments Falko. I’ll correct the date of your paper in the post (not sure how I got that wrong…) and add a note pointing readers to your comments.

  4. This all seems a bit trivial compared to the LPI, but nonetheless, following up on peacorfc9f72c53f’s comment, I hadn’t noticed but it looks like a freaking image of Darwin on Sir David’s right collar! (His right).

    • “I hadn’t noticed but it looks like a freaking image of Darwin on Sir David’s right collar! ”

      Ok, I see that one! I’m not sure if it’s intentional on the artist’s part–would want to see the original portrait.
