Friday links: hermit crabs vs. game theory, the truth about peer review, activism vs. academic careers, and more (UPDATED)

Also this week: data on ecological “publishing lives”, becoming a musical science writer, improving graduate seminars, Trump vs. NSF (but not that way), and more.

From Meghan:

Proc B and Royal Society Open Science are going to start publishing peer reviews alongside papers, for manuscripts submitted on or after January 2. In announcing the move, the Royal Society said that two-thirds of RSOS papers already appear with their peer reviews.

From Jeremy:

Chanda Prescod-Weinstein with a great thread on being an activist and an academic, especially as a grad student. Touches on everything from the value of working within the system to change the system, to the uses and misuses of anger, to the complementarity of technical expertise and activism, to why succeeding within the system is not “selling out”, and more. Sorry to hear that lots of people on Twitter are misreading it. (ht Meghan)

Interesting data on the “publication lives” of authors of papers in (a few) leading ecology journals. (UPDATE: I just became aware that the authors have been credibly accused of taking the idea for this study, and some of the data, from someone else without attribution. If they did, that’s obviously highly unethical and the paper should be retracted.) For instance, back in the 1960s it used to be that about 75% of people who published a paper in one of a few leading ecology journals would publish a first-authored paper in one of those same journals at some point. That fraction has declined linearly over time and is now below 40%. The authors interpret their data as reflecting growth in the workforce of “supporting scientists” such as technicians and lab managers. Which could be right, but I wonder if that’s the only thing or even the main thing going on in these data. For instance, there are many more ecology journals now than there were decades ago, including more “leading” journals. So these days, I’m sure it’s increasingly common for people to remain active in publishing ecology papers, including in “leading” journals, without remaining active in publishing in the specific leading ecology journals considered in the study.

(Aside: I wish the paper authors had chosen different terminology, rather than referring to people who publish in a few specific leading journals for at least 20 years as having “full” careers and others as “dropouts”. People who no longer publish papers in (a few of) the leading journals in their field often haven’t left the field! But still, the data are interesting and worth thinking about, despite the poor terminology.)

(Second aside: Lots of Twitter commentary on this paper summarizes/interprets the paper as describing career paths in ecology. Again, these aren’t data on career paths. If you want up-to-date data on career paths of people holding US PhDs in ecology, you should look at Hampton & Labou, and my commentary.) (ht Meghan)

I’m a bit late to this, but here’s the very sharp Cosma Shalizi’s two truths about peer review:

  1. The quality of peer review is generally abysmal.
  2. Peer reviewers are better readers of your work than almost anyone else.

In my opinion, and in the opinion of scientists who’ve responded to big global surveys, he’s wrong about #1. But he’s right about #2. And he’s right about this:

[M]ost papers which get published receive almost no attention post-publication; hardly anyone cites them because hardly anyone reads them. In the second place: if one of your papers somehow does become popular, it will begin to be cited for a crude general idea of what it is about, with little reference to what it actually says.

Ideas to improve a graduate seminar in ecology & evolution.

How protein folding researchers are reacting to Google DeepMind’s impressive entry into the field. An interesting window into the sociology of a scientific field, a topic I always enjoy (ht Marginal Revolution). A sample to whet your appetite:

There are dozens of academic groups, with researchers likely numbering in the (low) hundreds, working on protein structure prediction. We have been working on this problem for decades, with vast expertise built up on both sides of the Atlantic and Pacific, and not insignificant computational resources when measured collectively. For DeepMind’s group of ~10 researchers, with primarily (but certainly not exclusively) machine learning expertise, to so thoroughly rout everyone surely demonstrates the structural inefficiency of academic science.

My buddy Greg Crowther on how he became a musical science writer.

Axel Rossberg on different kinds of scientific explanation in ecology.

Kevin Lafferty’s very detailed guide to how to structure a scientific paper. Also includes extensive writing and editing advice. Think of it as a follow-up to Brian’s five pivotal paragraphs post.

Apparently Donald Trump plans to apply for an NSF grant to build the wall. This is one of those jokes that will make you laugh because the alternative is to cry. 🙂 😦

And finally, this week in Non-Scientific Holiday-Related Links I Am Probably Going To Immediately Regret: no, Baby It’s Cold Outside is not about date rape.

I don’t want to end on a downer, so here’s a BBC Earth video of hermit crabs demonstrating game theory. 🙂 (ht Matt Levine)

13 thoughts on “Friday links: hermit crabs vs. game theory, the truth about peer review, activism vs. academic careers, and more (UPDATED)”

  1. Thanks for the link to the Shalizi comments on peer review. I’d tend to agree with the excerpt you quote (saying that citations of papers tend to be for the general topic of that paper rather than for specific claims and evidence). I’m wondering whether others also agree and, if so, whether that “ruins” citation-based metrics of professional influence (e.g., the h-index) — or whether this is just one problem among many with such indices.

    • Citation-based metrics of influence are extremely crude. I think they’re crude for reasons that include, but are far from limited to, the one Shalizi points out. But that’s just a gut feeling.
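For concreteness, here is a minimal sketch of the h-index calculation being discussed (the largest h such that an author has h papers each cited at least h times). The function name and example citation counts are my own illustration, not from the linked discussion; the point is simply that the metric consumes only raw citation counts, with no information about *why* a paper was cited:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break       # counts only decrease from here, so we can stop
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note that a paper cited 100 times for "a crude general idea of what it is about" (as in the Shalizi quote above) counts exactly the same as one cited 100 times for its specific claims and evidence.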

    • I think it’s better not to read papers before you cite them. Imagine you read them carefully and find fundamental errors. Then, seriously, what can you do? Cite that paper knowing that it is wrong? Not cite it and be blamed for this by reviewers? Spend much of your own paper arguing why a paper that might be related to your research is actually fundamentally wrong? None of these are attractive options. Hence, I think, the consensus approach is to cite but not read.

      • “Then, seriously, what can you do?”

        I think the scenario you describe is rare. But when it happens, you either don’t cite the incorrect paper, or cite and explain why it’s wrong.

        The much more common scenario is for people to miscite because they haven’t carefully read the paper.

      • JF: “I think the scenario you describe is rare.”

        If we agree that
        (a) there are many zombie ideas being invoked in the ecological literature,
        (b) Zombie ideas are by definition “big, widespread errors or misconceptions that aren’t recognized as such” (https://dynamicecology.wordpress.com/2016/02/02/lets-identify-all-the-zombie-ideas-in-ecology/),
        (c) a scientific argument based on an error or misconception is wrong,
        then how can papers that are wrong be rare?

        I think the two aspects go hand in hand: as you say, papers are not carefully read; as a result, errors in the thinking of either the authors or the readers of a paper go unrecognised; this causes confusion; the confusion motivates thinking in terms of high-level ideas (zombie or not) that have many followers, implying that “technical details” are perhaps not that important in ecology; and this is why we don’t read papers carefully before citing them.

      • I have no idea if “many” zombie ideas are invoked in the ecological literature. I’ve identified some zombie ideas, but I have no idea what fraction of all ecological ideas are zombies. Probably only a small fraction, I guess?

        Miscitations are only a minority of all citations. We’ve linked to data on this (sorry, can’t find it now). If memory serves, no more than 1/4 of citations in the ecological literature are miscitations. That jibes with the anecdata in this post: https://dynamicecology.wordpress.com/2012/10/09/can-the-phylogenetic-community-ecology-bandwagon-be-stopped-or-steered-a-case-study-of-contrarian-ecology/

      • Let me add two more notes, and then I’ll definitely shut up (promise!)

        * I think a miscitation rate of 1/4 is pretty large.

        * Your analysis of the case study you cite is fantastic. For the purposes of this thread, I’d just interpret it differently. I’d lump together all the papers that cite the highly critical M&L paper but then use the criticized method regardless, because (acknowledging limitations of the method) I’d consider each of these to be committing a technical mistake. This would be your categories 2 (cited M&L in passing without taking the criticism into account), 3 (cited M&L as one paper among many), and 6 (miscited M&L). Together, they make up over 60% of the papers citing M&L. Again, I think that’s pretty large.

        Merry Christmas

      • https://dynamicecology.wordpress.com/2013/06/06/on-ignoring-criticisms/
        Note that I now regret the choice of a specific example in that post. I only meant to use the paper discussed in the post as one example among many that could’ve been used to raise the broader issue I wanted to discuss. I should’ve found a better way to raise that issue, rather than singling out one paper for criticism. Also, given that I did single out one paper, I should have first approached the authors privately with my concerns before singling out their paper. So please read this post only for its discussion of the general issue of when, if ever, it’s ok for authors to ignore or gloss over technical criticisms of their approach.

  2. Jeremy, would you have a reference to one of the “big global surveys” of views related to the proposition that “The quality of peer review is generally abysmal.” I am curious how field-specific these tend to be.
