Friday links: science is “interesting” but not “awesome”, evolution of peer review, and more (UPDATED)

Also this week: should we default to distrust of medical research, and more.

From Jeremy:

The evolution of peer review over several decades. From sociology, but it generalizes. Basically, it all comes down to trying to give all mss a fair evaluation, while not overworking editors and reviewers, in the face of an increasing flood of submissions. I particularly liked the authors’ comparisons of peer review to alternative evaluation systems that exist in other areas of publishing. Think for instance of literary fiction publishing, which puts much more weight on protecting editors’ time, and so forbids direct submission of mss by authors. This forces literary agents to act as gatekeepers. The agents in turn protect their own time (and income) by giving little or no consideration to most mss, especially from unestablished authors.

Ken Hughes compiles some data on how often various emotional words are used in scientific papers, and argues we should use such words more often. I have mixed feelings about this: see here and here and here. Bottom line, I think there are ways to make scientific papers more enjoyable to read without jacking up our use of words like “awesome”.

I’m very late to this (forgot to link to it when I first saw it): a new preprint in psychology reports that registered reports confirm their hypotheses much less often than do standard, non-registered studies. Standard studies almost invariably confirm their hypotheses; registered reports confirm their hypotheses just under half the time. The difference remains huge even if you exclude registered reports that are replications of previously published studies. I haven’t read it, though FWIW I’ve read other good work in the past by the same authors. Just passing it on if you want to read and evaluate it for yourself. Remind me: hasn’t somebody published data on how often ecology papers confirm their stated hypotheses? I feel like I’ve seen data on that somewhere, but maybe I’m misremembering? (UPDATE: This paragraph corrected because I originally mixed up pre-registered studies with registered reports. Thank you to Tim Parker for commenting to point out my mistake. See Tim’s comment if you’re unclear on the difference between pre-registered studies and registered reports. /end update)

I’m late to this as well: here’s an interesting news article on how early-career Black atmospheric scientists created a very successful graduate program at Howard University.

Former BMJ EiC Richard Smith argues that it’s time to start assuming that clinical trials are fraudulent, until they’re shown not to be. At least, that’s how the headline puts it. Without wanting to put words into the author’s mouth, I read the piece as saying that clinical trials should be expected to pass a standardized list of quality control checks (including checks that would catch common forms of fraud). Trials that don’t pass should be ignored (so, not published, not cited, not included in meta-analyses, etc.). The data linked to in the piece were new to me, and suggest that the rate of fraud in clinical trials may be rather higher than in the scientific literature as a whole. See here for links to some other data on the prevalence and predictors of scientific fraud.

Stephen Heard continues to try and fail to make me listen to Nightwish. 🙂 If you want some good evolution music, I have you covered.

And finally, Cub are best known for “New York City”, as covered by They Might Be Giants. But this is my favorite song of theirs:

Have a good weekend. 🙂

7 thoughts on “Friday links: science is “interesting” but not “awesome”, evolution of peer review, and more (UPDATED)”

  1. Very sporting of you, Jeremy, to link to *me* linking to Nightwish 🙂

    I won’t actually argue for (or against) the intrinsic musical value of “Endless Forms” – on Music Mondays, I’ve been more interested in representation. Mind you, I won’t link to anything I really hate… and if nothing else, Nightwish had a lot of fun recording that song.

    The real question is this: is my post the very first time anyone has linked to Nightwish and Miriam Makeba in the same piece??

    • “The real question is this: is my post the very first time anyone has linked to Nightwish and Miriam Makeba in the same piece??”

      I did a bit of searching to check, and I think the answer depends on how you define “piece”. I *think* there are other webpages containing the words “Miriam Makeba” and “Nightwish”, but I don’t think they’re blog posts. And it’s possible that not all occurrences of “Nightwish” refer to the band. 🙂

      • Takes me back to the days of “googlewhacking” – for those who don’t remember, googlewhacking was the sport of trying to find a search string that returned exactly one Google search result. It was challenging but possible once. It’s basically not possible now – the corpus is just too big!

  2. Hi Jeremy –

    Here are two papers that I know about that estimate the rate of support for primary hypotheses in ecology and related disciplines:
    (1) 74-78% statistically significant: Fanelli, D. (2010) “Positive” results increase down the hierarchy of the sciences. PLoS ONE 5, e10068.
    (2) 91% statistically significant: Csada, R.D., et al. (1996) The “file drawer problem” of non-significant results: does it apply to biological research? Oikos 76, 591-593.

    And an important point of clarification – the Scheel et al. preprint you mention looked at rates of hypothesis confirmation in *registered reports*, which are not the same as ‘pre-registered’ studies. With registered reports, peer review and preliminary acceptance of a manuscript are based on the introduction and proposed methods, and this evaluation is done by the journal prior to data gathering. In principle, this should reduce or eliminate bias resulting from selection of studies for publication based on their results, and the Scheel et al. paper presents data that supports this principle. In contrast, pre-registrations are not peer reviewed and do not guarantee publication – they are simply publicly archived method and analysis plans posted prior to conducting the research. Although pre-registration enhances the transparency of the scientific process, it is not expected to reduce publication bias in the way registered reports do. If you or anyone else would like to read more about this distinction, and about the benefits and challenges of registered reports and pre-registration in ecology, see: http://doi.org/10.1111/cobi.13342
