Also this week: should we default to distrust of medical research, and more.
The evolution of peer review over several decades. From sociology, but it generalizes. Basically, it all comes down to trying to give all mss a fair evaluation, while not overworking editors and reviewers, in the face of an increasing flood of submissions. I particularly liked the authors’ comparisons of peer review to alternative evaluation systems that exist in other areas of publishing. Think for instance of literary fiction publishing, which puts much more weight on protecting editors’ time, and so forbids direct submission of mss by authors. This forces literary agents to act as gatekeepers. The agents in turn protect their own time (and income) by giving little or no consideration to most mss, especially from unestablished authors.
Ken Hughes compiles some data on how often various emotional words are used in scientific papers, and argues we should use such words more often. I have mixed feelings about this: see here and here and here. Bottom line, I think there are ways to make scientific papers more enjoyable to read without jacking up our use of words like “awesome”.
I’m very late to this (forgot to link to it when I first saw it): a new preprint in psychology reports that registered reports confirm their hypotheses much less often than do non-registered studies. Non-registered studies almost invariably confirm their hypotheses; registered reports confirm their hypotheses just under half the time. The difference remains huge even if you exclude registered reports that replicate previously published studies. I haven’t read it, though FWIW I’ve read other good work in the past by the same authors. Just passing it on if you want to read and evaluate it for yourself. Remind me: hasn’t somebody published data on how often ecology papers confirm their stated hypotheses? I feel like I’ve seen data on that somewhere, but maybe I’m misremembering? (UPDATE: This paragraph corrected because I mixed up pre-registered studies with registered reports. Thank you to Tim Parker for commenting to point out my mistake. See Tim’s comment if you’re unclear on the difference between pre-registered studies and registered reports. /end update)
I’m late to this as well: here’s an interesting news article on how early career Black atmospheric scientists created a very successful graduate program at Howard University.
Former BMJ EiC Richard Smith argues that it’s time to start assuming that clinical trials are fraudulent, until they’re shown not to be. At least, that’s how the headline puts it. Without wanting to put words into the author’s mouth, I read the piece as saying that clinical trials should be expected to pass a standardized list of quality control checks (including checks that would catch common forms of fraud). Trials that don’t pass should be ignored (so, not published, not cited, not included in meta-analyses, etc.). The data linked to in the piece were new to me, and suggest that the rate of fraud in clinical trials may be rather higher than in the scientific literature as a whole. See here for links to some other data on the prevalence and predictors of scientific fraud.
And finally, Cub are best known for “New York City”, as covered by They Might Be Giants. But this is my favorite song of theirs:
Have a good weekend. 🙂