Friday links: no, humans haven’t killed off 60% of animal species, President McCauley, and more

Also this week: gifs vs. your tenure-track job search, contractions aren’t a problem in scientific writing, why science Twitter always discusses the same topics, and more.

From Jeremy:

Congratulations to my colleague and mentor Ed McCauley on becoming President of the University of Calgary. No word yet on whether the university mascot will be changing from this to this. 😉

Terry McGlynn remarks that he’d prefer science Twitter conversations to focus on certain topics much less often (*cough* preprints *cough*) and certain other topics much more often. Which raises the question: why does science Twitter (or really any corner of Twitter) focus on certain topics to the exclusion of others of at least equal importance? Here’s one hypothesis, originally developed in a different context: “because you can always get a game” discussing preprints. If you’re not sure whether I’m joking about this hypothesis, well, I’m not sure either. 🙂

Ed Yong criticizes the reporting of WWF’s Living Planet Report. No, it doesn’t show, or say it shows, that humans have “wiped out 60% of animals since 1970”, or “killed more than half the world’s wildlife populations”, or “wiped out 60% of animal species”. Interestingly, unlike many cases of inaccurate popular media reporting of scientific findings, these inaccuracies don’t seem to have come from the report itself or the associated press release, at least as far as I can tell (did I miss something?). And while I freely admit I’m no expert on science reporting, I have to say I share Ed Yong’s frustration. This report does not strike me as even slightly difficult to summarize accurately in one sentence, even for a general news reporter rather than a science journalist. Also, I liked how at the end of his piece Ed addresses head-on the argument that he’s a pedant who’s distracting from the urgent task of waking people up to a crisis:

Surely what matters is waking people up, and if an inexactly communicated statistic can do that, isn’t that okay? I don’t think it is. Especially now, in an era when conspiracy theories run rampant and lies flow readily from the highest seats of government, it’s more important than ever for those issuing warnings about the planet’s fate to be precise about what they mean. Characterizing the problem, and its scope, correctly matters. If accuracy can be ignored for the sake of a gut punch, we might as well pull random numbers out of the ether. And notably, several news organizations, such as Vox and NBC, managed to convey the alarming nature of the Living Planet Index while accurately stating its findings. The dichotomy between precision and impact is a false one.

The next US Congress will include at least 18 members with backgrounds in STEM or demonstrated support for STEM issues (see here; scroll down to Maggie Koerth-Baker’s last comment, which I’m summarizing). Ten of the 18 are incumbents, and a few who aren’t scored upset victories. Sixteen of the 18 are Democrats. But of course it’s very hard to say how much, if any, of their electoral success can be attributed to their STEM backgrounds.

US humanities major enrollments are collapsing, even at elite colleges and universities. Here’s what some colleges and universities are trying to do about it.

Using contractions won’t make your paper less accessible to readers for whom English is an additional language.

Applying for a tenure-track job at an R1 university, explained with gifs. 🙂

10 thoughts on “Friday links: no, humans haven’t killed off 60% of animal species, President McCauley, and more”

  1. On the LPI: its authors have not always been as careful about how to interpret the index as they are today. For example, a quote from their 2014 report: “The global LPI reveals a continual decline in vertebrate populations over the last 40 years. … The weighted LPI (LPI-D) shows that the size of populations (the number of individual animals) decreased by 52 per cent between 1970 and 2010”

    • Yeah, that’s a bad one-sentence summary; it’s basically the same as one of the bad headlines Ed Yong quotes.

      What I still don’t get is how anyone writing up this year’s report (or a previous year’s report) could possibly summarize it by saying 60% of *animal species* have been killed off since 1970…

      • I think the leap to 60% of species extinct is more on the journalists than on the LPI – I have never seen the LPI folks communicate anything along those lines. That said, I don’t think scientists more generally are off the hook: the strong promotion of a “6th mass extinction” has predisposed people to assume extinctions at this level, even if the real number is a few percent.

        But I do think the claim that 60% of all vertebrate individuals have disappeared is partly traceable to how WWF chose to communicate the LPI in its early days (although they are clearly more careful now, and even issued a correction to journalists making that claim this year).

  2. Ed gives a hypothetical example in which one could alternatively characterize a change as 17% or 60%. One could imagine various other algorithms that would yield still different numbers. So how did the LPI folks settle on this particular one? Is there documentation of the “why” behind decisions like that? I note that many scientists (not just the media) start their own presentations with the LPI graph as evidence of catastrophe, without a whole lot of nuance.

    • Yes – anytime you boil 165,000 measurements across 14,000 populations, 3,000+ species, and 50+ years down to a single number and then reweight things, you make a lot of judgment calls. That doesn’t mean the calls are right or wrong (and they’re certainly not scientifically invalid), but I think we are just beginning to explore the range of interpretations that the one LPI data set could support, depending on judgment calls about how to analyze it (the toy sketch at the end of this thread makes the point concrete).

      I think if you trace the roots of the LPI, it goes back to a review paper lead-authored by Steve Buckland that surveyed a number of biodiversity indices and proposed this one as having the best properties, although the LPI team made numerous modifications of their own on top of that. So it’s definitely not made up out of nowhere. But it is a single choice from a wide range of possible choices, many of which might give qualitatively different results.

      • Brian, your remarks here have me thinking back to that recently published “many analysts, one dataset” project, which gave various research teams the same dataset and a seemingly straightforward statistical question to answer, and then watched as the teams made different (defensible) analytical choices and came up with different answers. It’s even worse here, of course, because the goal of the summary isn’t set in advance, but rather is one more thing on which reasonable disagreement is possible.

        I’m unsure what, if anything, to do about this that we don’t already do. In economics, it’s routine for papers to include many different model “specifications” (as economists call them) in an attempt to show that the estimate of the key effect of interest is at least somewhat robust to alternative plausible analytical choices. That comes at the cost of making papers a terrible slog to read, but maybe that’s a price worth paying? (more online supplements! um, yay?)

        I suppose another thing to do is to ask questions that avoid the issue of “how do we choose among many plausible measures or indices of [vaguely-defined thing]?” For instance, if I have some dynamical model that I’m trying to test, I can use simulations and derivations to derive predictions about any property of the model I want. And then I just go out and manipulate & measure those properties as needed to satisfy myself that the model is right (or right enough for my purposes).

        But of course, that option often won’t be available to many researchers. In particular, indices and summary measures seem essential in work aimed at policymakers, even if “under the hood” those indices and summary measures are inevitably based on a lot of contestable or even arbitrary judgment calls. Which means anybody who doesn’t like your conclusions can just come up with their own, equally defensible index that leads to different conclusions.

      • I am a strong advocate of what I sometimes call the robustness approach and sometimes the multidimensional approach. There are always multiple valid choices, so address the problem head-on by exploring the implications of diverse choices and reporting multiple results. If they’re consistent, great! If they’re not, that is scientifically interesting, and it’s important to figure out why.

        My 2010 chapter on SADs is quite explicit in arguing that one needs at least three measures (I analyze which) to capture the range of variation in SADs. And a recent paper with multiple co-authors in Ecology Letters makes the same argument about needing multiple measures (and multiple scales) to make claims about how biodiversity is changing.
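To make the aggregation point above concrete, here’s a minimal toy sketch, with entirely made-up population numbers – this is not WWF’s data or their actual pipeline. My understanding is that the published LPI aggregates population trends via a geometric mean; the sketch compares that choice against two other defensible one-number summaries of the same data, which is also a miniature version of the “report multiple results” robustness approach.

```python
# Toy sketch only: made-up numbers, not WWF's data or exact methodology.
# The point: the same population trends support several defensible
# one-number summaries, and the headline depends on which one you pick.

import math
import statistics

# (initial size, final size) for a few hypothetical vertebrate populations
populations = [
    (1_000_000, 900_000),  # one huge population with a modest 10% decline
    (2_000, 400),          # small population, steep 80% decline
    (500, 50),             # small population, steep 90% decline
]
ratios = [final / init for init, final in populations]

# Choice 1: LPI-style geometric mean of per-population trends
# (every population counts equally, regardless of its size).
geo = math.exp(statistics.fmean(math.log(r) for r in ratios))

# Choice 2: change in total individuals (dominated by the big population).
tot = sum(f for _, f in populations) / sum(i for i, _ in populations)

# Choice 3: median per-population trend (robust to extreme declines).
med = statistics.median(ratios)

# Report all three, in the spirit of the robustness approach.
for label, r in [("geometric mean", geo), ("total abundance", tot), ("median", med)]:
    print(f"{label:>15}: {1 - r:.0%} decline")
```

On these invented numbers, the three choices report roughly 74%, 10%, and 80% declines respectively: the same qualitative story, wildly different headlines.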

  3. This discussion is very timely – tomorrow morning I begin the first of my final year undergraduate seminars for our Biodiversity and Conservation module and we are discussing the Living Planet Report. I’ll send the students a link to this.

  4. As soon as I saw the headline about Ed McCauley I also thought “Hey, maybe they’ll change the mascot to Daphnia!” I’m a little scared that we had the exact same thought. 😉
