Also this week: peer reviewers vs. peer reviewers, the history of logit models, philosophy vs. the Cleveland Browns, and more.
UPDATE: At the last minute Meghan added the best link of the week and I didn’t get the chance to blurb it until now. How much do you know about the “Menten” of Michaelis-Menten equation fame? If the answer is “nothing” (as it was for me, to my embarrassment), you need to follow Meghan’s link, it’s amazing.
Writing in Methods in Ecology & Evolution recently, Dushoff, Kain, & Bolker argued that statistical significance should instead be termed statistical “clarity”. I appreciate the goal; it’s really important to interpret and teach statistical inference well. And Ben Bolker’s a friend for whom I have massive respect (which I sometimes express humorously). But even leaving aside the difficult collective action problem of getting everybody to abandon standard terminology in favor of a specified alternative (has that ever been done? in any context?), I’m not convinced this terminological change would actually improve the use and interpretation of null hypothesis tests, especially in the context of low-powered studies, where type M and type S errors are likely. Personally, I think the problems with frequentist statistics have more to do with how they’re used and taught than with the terminology. I think improvement in both statistical practice and pedagogy is most likely to come from (indeed, is already coming from) responses to the “replication crisis” in psychology and other fields. A crisis that didn’t have its roots in terminology, I don’t think. But I dunno, what do you think?
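If you haven’t run into type M and type S errors before, here’s a minimal simulation sketch of the idea (in Python; the true effect size, sample size, and 0.05 threshold are made-up numbers chosen purely to illustrate a low-powered design, not anything from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative (made-up) numbers: a small true effect studied with small samples,
# i.e. a low-powered design.
true_effect = 0.2   # true difference between group means, in SD units
n_per_group = 20    # sample size per group
n_sims = 10_000

significant_estimates = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:
        significant_estimates.append(np.mean(b) - np.mean(a))

significant_estimates = np.array(significant_estimates)
power = len(significant_estimates) / n_sims
type_m = np.mean(np.abs(significant_estimates)) / true_effect  # exaggeration ratio
type_s = np.mean(significant_estimates < 0)                    # wrong-sign rate

print(f"power ≈ {power:.2f}")
print(f"type M (exaggeration ratio) ≈ {type_m:.1f}")
print(f"type S (wrong sign, given 'significance') ≈ {type_s:.3f}")
```

With numbers like these, power is only around 10%, and the estimates that clear the threshold overstate the true effect several-fold, with an occasional wrong sign. That problem is the same whether you call the threshold “significance” or “clarity”.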
Good PNAS piece on how philosophy of science has contributed to cell biology, immunology, and cognitive science. Related old posts from me here and here, on why ecologists should read more philosophy of science and what philosophy of science they should read.
The history of the logit model in statistics. I hadn’t realized that the logit model shares its history as well as its mathematical form with what ecologists call the logistic growth model. (ht @noahpinion)
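To make the shared mathematical form concrete, here’s a tiny sketch (in Python; the parameter values are arbitrary) showing that the logistic growth curve and the logit model’s inverse link are the same sigmoid, just with different interpretations of the axes:

```python
import numpy as np

def logistic_growth(t, K=100.0, N0=10.0, r=0.5):
    """Solution of the logistic growth ODE dN/dt = r*N*(1 - N/K)."""
    return K / (1.0 + ((K - N0) / N0) * np.exp(-r * t))

def inverse_logit(x, b0=-2.0, b1=0.5):
    """Inverse link of a logit model: P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x)))."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

t = np.linspace(0, 20, 5)
print(logistic_growth(t))   # S-shaped curve rising toward K
print(inverse_logit(t))     # S-shaped curve rising toward 1
```

Divide the growth curve by K and it is exactly the inverse logit with intercept ln(N0/(K - N0)) and slope r, which is why the same name ended up attached to both.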
I’m hilariously late to this, but whatever: in computer science, it turns out that only 1/5 of faculty exhibit the stereotypical trajectory of research productivity over the course of their careers: an early peak followed by a long slow decline. I’d be curious to see a similar analysis for ecology. Related old post from me, that unfortunately is based on anecdotes from ecology rather than hard data.
Stephen Heard on why conflicting advice from peer reviewers often is good for the author. #3 on his list is both correct, and under-appreciated among inexperienced authors. I’d add that, when peer reviewers give conflicting advice, a good editor will give the author guidance as to how to respond.
On the other hand, conflicting advice from peer reviewers may be rarer than you think…This is from back in the fall, but I missed it at the time. Timothy Paine and Charles Fox conducted a huge randomized survey (over 12,000 respondents!) of authors of ecology papers published between 2009 and 2015. They asked how many times those papers were rejected before being published, and where they were rejected from. Turns out that journal editors are good gatekeepers, in several senses. Rejected papers that eventually get published don’t garner as many citations as those that were never rejected, even after controlling for the impact factor of the publishing journal (i.e. it’s not just that rejected papers get cited less because they’re eventually published in lower-impact journals). Papers that got rejected without review go on to garner fewer citations than those that weren’t rejected without review. Papers that go on to garner unusually many citations rarely get rejected before being published, and conversely rejected papers rarely go on to garner unusually many citations after being published. Finally, the highest-impact, most selective journals are the best gatekeepers by the measures examined. Bottom line: there’s some “randomness” in the peer review process, but not nearly as much as you might think. Peer review in ecology is not a crapshoot. Now, you could argue that citations are not an appropriate measure of a paper’s scientific merit, for instance because they just reflect bandwagon-y behavior on the part of ecologists. Maybe these results merely show that ecologists all know what sorts of papers other ecologists like, but what sorts of papers ecologists like is entirely a product of purely arbitrary fashion. But I think that’s a fairly hard case to make. Related: Brian’s old post on gatekeeping vs. editing in peer review, and Meghan’s recent poll on manuscript rejections.
Video for teaching ecology, political metaphor, or life lesson? You decide. (ht @noahpinion)
And finally, Friedrich Nietzsche, Cleveland Browns fan. 🙂
Wait, Menten of Michaelis-Menten fame was a woman? Read this piece by Rebecca Skloot for more on her mind-blowing accomplishments.