Friday links: statistical significance vs. statistical “clarity”, philosophy of science vs. cell biology, and more (UPDATED)

Also this week: peer reviewers vs. peer reviewers, the history of logit models, philosophy vs. the Cleveland Browns, and more.

UPDATE: At the last minute Meghan added the best link of the week and I didn’t get the chance to blurb it until now. How much do you know about the “Menten” of Michaelis-Menten equation fame? If the answer is “nothing” (as it was for me, to my embarrassment), you need to follow Meghan’s link; it’s amazing.

From Jeremy:

Writing in Methods in Ecology & Evolution recently, Dushoff, Kain, & Bolker argued that statistical significance should instead be termed statistical “clarity”. I appreciate the goal; it’s really important to interpret and teach statistical inference well. And Ben Bolker’s a friend for whom I have massive respect (which I sometimes express humorously). But even leaving aside the difficult collective action problem of getting everybody to abandon standard terminology in favor of a specified alternative (has that ever been done? in any context?), I’m not convinced this terminological change would actually improve the use and interpretation of null hypothesis tests, especially in the context of low-powered studies, where type M and type S errors are likely. Personally, I think the problems with frequentist statistics have more to do with how they’re used and taught than with our terminology. I think improvement in both statistical practice and pedagogy is most likely to come from (indeed, is already coming from) responses to the “replication crisis” in psychology and other fields, a crisis that didn’t have its roots in terminology, I don’t think. But I dunno, what do you think?
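(If type M and type S errors are new to you: in a low-powered study, the estimates that happen to clear the significance threshold are systematically exaggerated in magnitude and occasionally have the wrong sign. Here’s a minimal simulation sketch of that point; the true effect size and standard error below are made up purely to give low power:)

    import numpy as np

    rng = np.random.default_rng(1)

    true_effect = 0.2   # hypothetical small true effect (made up for illustration)
    se = 0.5            # standard error chosen so the "study" is badly underpowered
    n_sims = 100_000

    # Simulate repeated effect estimates and flag the "statistically significant" ones
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates / se) > 1.96

    power = significant.mean()
    type_m = np.abs(estimates[significant]).mean() / true_effect  # exaggeration ratio
    type_s = (estimates[significant] < 0).mean()                  # wrong-sign rate

    print(f"power ~ {power:.2f}, type M ~ {type_m:.1f}x, type S ~ {type_s:.2f}")

(Calling those flagged estimates “clear” rather than “significant” wouldn’t change any of those numbers, which is roughly my worry.)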

Good PNAS piece on how philosophy of science has contributed to cell biology, immunology, and cognitive science. Related old posts from me here and here, on why ecologists should read more philosophy of science and what philosophy of science they should read.

The history of the logit model in statistics. I hadn’t realized that the logit model shares its history as well as its mathematical form with what ecologists call the logistic growth model. (ht @noahpinion)
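(In case the shared form isn’t obvious: the logistic growth curve and the logit model are the same sigmoid wearing different clothes. Roughly, in the usual notation,

\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) \;\Longrightarrow\; N(t) = \frac{K}{1 + C e^{-rt}},
\qquad
\Pr(y = 1 \mid x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}},
\]

with C set by the initial abundance. One curve describes abundance as a function of time, the other a probability as a function of a linear predictor, but it’s the same function.)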

I’m hilariously late to this, but whatever: in computer science, it turns out that only 1/5 of faculty exhibit the stereotypical trajectory of research productivity over the course of their careers: an early peak followed by a long slow decline. I’d be curious to see a similar analysis for ecology. Related old post from me, that unfortunately is based on anecdotes from ecology rather than hard data.

Stephen Heard on why conflicting advice from peer reviewers often is good for the author. #3 on his list is both correct and under-appreciated by inexperienced authors. I’d add that, when peer reviewers give conflicting advice, a good editor will give the author guidance as to how to respond.

On the other hand, conflicting advice from peer reviewers may be rarer than you think… This is from back in the fall, but I missed it at the time. Timothy Paine and Charles Fox conducted a huge randomized survey (over 12,000 respondents!) of authors of ecology papers published between 2009 and 2015. They asked how many times those papers were rejected before being published, and where they were rejected from. Turns out that journal editors are good gatekeepers, in several senses. Rejected papers that eventually get published don’t garner as many citations as those that were never rejected, even after controlling for the impact factor of the publishing journal (i.e. it’s not just that rejected papers get cited less because they’re eventually published in lower-impact journals). Papers that got rejected without review go on to garner fewer citations than those that weren’t. Papers that go on to garner unusually many citations rarely get rejected before being published, and conversely rejected papers rarely go on to garner unusually many citations after being published. Finally, the highest-impact, most selective journals are the best gatekeepers by the measures examined. Bottom line: there’s some “randomness” in the peer review process, but not nearly as much as you might think. Peer review in ecology is not a crapshoot. Now, you could argue that citations are not an appropriate measure of a paper’s scientific merit, for instance because they just reflect bandwagon-y behavior on the part of ecologists. Maybe these results merely show that ecologists all know what sorts of papers other ecologists like, but what sorts of papers ecologists like is entirely a product of purely arbitrary fashion. But I think that’s a fairly hard case to make. Related: Brian’s old post on gatekeeping vs. editing in peer review, and Meghan’s recent poll on manuscript rejections.

Video for teaching ecology, political metaphor, or life lesson? You decide. (ht @noahpinion)

And finally, Friedrich Nietzsche, Cleveland Browns fan. 🙂

From Meghan:

Wait, Menten of Michaelis-Menten fame was a woman? Read this piece by Rebecca Skloot for more on her mind-blowing accomplishments.

15 thoughts on “Friday links: statistical significance vs. statistical “clarity”, philosophy of science vs. cell biology, and more (UPDATED)”

  1. I liked that thread on the history of logit models, but I don’t agree with its author’s argument that Verhulst tested essentially arbitrary different functional forms. I just read the original paper (or rather, its English translation) and Verhulst actually derived it from basically the same starting point that I’ve always used to derive it when teaching theory: start from exponential growth, and modify it with the simplest possible limiting term, a quadratic (which is what you get from a Taylor series approximation of an arbitrary declining growth term). It’s actually a remarkably modern-reading paper, and now that I’ve gone through it, I’m going to have to keep it in mind as a good one to have students read.
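    In modern notation, the derivation goes roughly like this (a sketch, not Verhulst’s own notation): start from exponential growth, \( dN/dt = rN \), and let the per-capita growth rate decline with density, so \( dN/dt = N\,g(N) \) with \( g(0) = r \). Keeping only the leading (linear) correction in a Taylor expansion of \( g \) gives

    \[
    \frac{dN}{dt} \approx N(r - \alpha N) = rN\left(1 - \frac{N}{K}\right), \qquad K = \frac{r}{\alpha},
    \]

    i.e. the quadratic term \( -\alpha N^2 \) is the simplest possible limiting correction, and the logistic equation falls out.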

  2. Jeremy – I haven’t read the Fox piece on peer review yet, but I’ve downloaded it. On the surface it seems to me that’s a rather different result from grant review, where studies (in and out of ecology) have shown almost no predictive ability for future numbers of publications or citations. I would hypothesize that’s because grant acceptance rates are below 10%, whereas most journals are still in the 20-30% range.

    Meghan – I cannot believe I never knew about Ms Menten before! Of course I don’t really know Michaelis’ first name or gender explicitly either. I guess I’m just guilty of assuming. What a great piece you linked to and what an amazing person!

      • I think it’s much easier to evaluate “what is” (a paper) than “what might be” (a proposal). For the latter, reviewers’ biases/hopes/blood sugar level play a bigger role.

  3. Meghan, I’m both amazed and embarrassed to discover that Menten of Michaelis-Menten equation fame was a woman. Like Brian, I never thought about it and knew nothing about either Michaelis or Menten. And so in the absence of any thought or information I guess I just defaulted to assuming they were both men, without even realizing I was doing it. Great piece, thanks for sharing that.

  4. “conflicting advice from peer reviewers may be rarer than you think…”

    I don’t think this is an A+ report card for editors and reviewers.

    I’m curious: for what percentage of papers is the publication decision effectively arbitrary?

    Perhaps I’m mistaken but it looks like the Paine and Fox paper shows that the upper half of the distribution of papers at any given journal is broadly appropriately selected by editors and reviewers, but there’s still room for a fair amount of arbitrary judgement in the lower half. It would be interesting to see, for example, the distribution in number of citations for rejected papers vs accepted papers for one journal (overlapping histograms), to see where the median of the rejected papers lies on the accepted papers’ distribution.

    Also it’s interesting that nearly 10% of rejected papers – papers purportedly below the zero percentile for the rejecting journal – went on to be high impact papers (80th percentile?) in another journal. Even if it is a lower impact factor journal, that’s a large error in perceived impact. It would be interesting to have a metric that combined the change in impact (i.e., citations) from rejection to acceptance with the impact factor of the accepting journal as a way of scaling the error in perceived impact at the original journal.

  5. I liked the idea of speaking in terms of statistical clarity instead of statistical significance, thanks for the link! However, I think it might sometimes be even better to speak in terms of strength of evidence – so instead of “there was a statistically significant effect” or “there was a statistically clear effect”, saying “there was strong evidence of an effect”. Thus, instead of “species richness significantly decreased with patch size”, “there was strong evidence that species richness decreases with patch size”. Something like this. But it does look less elegant. What do you think?
