Also this week: Abraham Lincoln vs. confidence intervals, a double-blind review experiment, myths about applying for faculty positions, and more.
An interesting randomized experiment on double-blind review at an orthopedics journal. Reviewers were significantly (and substantially) less likely to recommend acceptance of a fake paper purportedly written by two prominent orthopedists when blinded to the authors’ identities — even though blinding had no effect on the probability that reviewers would spot the technical mistakes intentionally included in the paper. I’m curious whether the result would generalize to other fields and contexts. In particular, the ms was on a “generic” topic, so the blinding was unlikely to be seen through. And I’m curious what would happen if the “authors” weren’t prominent, since there’s a bit of evidence from other experiments that reviewers who are blinded to authors’ identities are more negative. Perhaps what we’re seeing here isn’t (entirely?) an effect of prestigious authors getting undue praise, but of anonymized authors getting undue criticism. (ht Meg)
Andrew Gelman uses a nice analogy involving Abraham Lincoln to explain what’s wrong with a common mistaken way of describing confidence intervals. I may use this analogy next time I teach intro biostats. (Aside: in the comments over there, Andrew puzzles me with further remarks that seem to me to both contradict the original post and be based on a dubious argument. But I’m probably just misunderstanding the thrust of his remarks.)
Writing in Nature, Charles Godfray reviews Ilkka Hanski’s memoir. (updated by Meg to fix a spelling mistake!)
Mike Kaspari on how he decides where to submit his papers. Related posts from Brian and me, and Brian again.
Climate scientist Kim Cobb dispels 5 common misconceptions about applying for tenure-track jobs at research universities.
Why it’s an especially bad idea for Germans to plagiarize their dissertations.
As the article points out, the German zeal for vetting doctorates can get downright perverse. Up until the late 1990s or 2000, the rules about who could call themselves “Dr.” (dating to the Nazi era) specified that only titles from German universities were valid. Around 2000 this was amended to include universities in other EU countries. I think it was amended in 2008 to include degrees from Australia, Israel, Japan, Canada, and the US (and Russia in some circumstances). But prior to that, yes, there were cases of people with Ph.D.s from Stanford or Cornell being investigated for the crime of calling themselves “Dr.”, since they clearly didn’t have proper doctorates…
(I have a friend who wanted to get a faculty position in Spain back in the early 1990s. He had to have his Ph.D. “homologated”, which in his case ended up requiring an official letter from the British government stating that, yes, the “University of Oxford” really was a legitimate degree-granting institution. (He told me the people at the British Foreign Ministry thought this was really amusing.) Apparently there was an official Spanish government list of recognized foreign universities, which didn’t include Oxford. I like to imagine this was some survival from the Reformation era, when Oxford — as a university in a Protestant country — obviously wasn’t a proper university if you were a Catholic country…)
“Reviewers were significantly (and substantially) less likely to recommend acceptance of a fake paper purportedly written by two prominent orthopedists when blinded to the authors’ identities. Even though blinding had no effect on the probability that reviewers would spot the technical mistakes intentionally included in the paper.”
What an astonishing, completely surprising result.
How many more of these kinds of studies do we need before we admit that paper reviewing is fraught with favoritism, tribalism, careerism, and other serious problems that affect the quality of the science and the paths of individuals’ careers? Double-blind review *may* be a step forward from the current situation, but if journals are really serious about ridding themselves of these problems they will go to “double open”, where everyone’s identity is known and there’s no hiding behind anonymity. The EGU series of journals appears to come closest to this standard at the moment, and it’s paying off. However, unless all prominent journals do the same, authors who want the review process to be private will just submit elsewhere.
The trouble with open review (among other things) is that many people will refuse to review if their identity will be revealed to the authors.
Re-upping this review of the literature on open peer review: http://blogs.plos.org/absolutely-maybe/2015/05/13/weighing-up-anonymity-and-openness-in-publication-peer-review/
Thanks for that article link, Jeremy — looks well considered.
This will come across as harsh, but as far as I’m concerned, if you’re not willing to say what you really mean/think regarding a manuscript’s scientific content, then you shouldn’t be a reviewer: leave the reviewing to those willing to do so, come what may.