Happy New Year! Also this week: an interview with science’s best friend in Congress, some interesting suggestions for improving peer review, why nobody reads your preprints, the economics of academic books, and more. Also: what if longtime readers tire of your blog?
Economics and political science blogger Chris Blattman thinks his longtime readers are tiring of his blog, so that even as his readership has grown it’s gotten broader and shallower. Man, I hope that doesn’t start happening to us. I’m cautiously optimistic it won’t, at least not anytime soon, in part because there are three of us. But we’ll see.
An exit interview with Rush Holt, the biggest champion of basic scientific research in the US Congress, who is leaving to take up the presidency of AAAS.
Data Colada with a typically clear post on the abuse of parsimony as a way to rule out alternative explanations for one’s results. It uses an amusing example of a hypothetical series of studies purporting to show that women are taller than men, because alternative explanations for the data wouldn’t be “parsimonious”. The approach being critiqued here (“conceptual replications”) is much more common in psychology than in ecology, but the post would still be a fun example to use in any experimental design/research methods course, I think. And there’s a very interesting and much broader philosophical issue under the surface here: when do different, individually weak lines of evidence that are all consistent with the same conclusion collectively provide strong evidence (a “severe test”) for that conclusion? Maybe never! Put another way, when should we be impressed with a scientific theory because it provides a “unified” explanation for many apparently unrelated phenomena? Again, maybe never!
Arjun Raj with some interesting ideas for improving peer review. The idea of having reviewers confer with one another before finalizing their reviews is very interesting. Somewhat like grant review panels at many funding agencies, which I do think work very well for the most part. Though just because you can get people to do that for funding agencies or the highly-selective eLife doesn’t mean you’ll be able to get them to do it for just any journal. And I agree with Raj that the idea of publishing everything first in Plos One-style journals, with selective journals then just highlighting what they see as the best stuff, sounds attractive but runs into the problem that lots of good people prefer to review for selective journals (full disclosure: I do). Plus, wouldn’t it be more work for editors to sift through stuff, without any help from authors self-selecting? But my comments here may just show that I’m old.
Zen Faulkes with the story behind his new Plos One paper on the lessons learned from #SciFund crowdfunding efforts. I was interested to read that even though he and his co-authors made the data public from the start and posted a preprint on PeerJ, the most informative and detailed feedback they got was still via pre-publication peer review. And the biggest public response didn’t come until after the peer-reviewed paper was published. This doesn’t surprise me. There are lots of things I could read, so filtering is essential. So except in unusual circumstances I don’t bother reading rough drafts (which is what preprints are), much less glancing at people’s raw data. I doubt I’m alone in this attitude. Whatever you’re working on, there probably aren’t many people who care enough to want to see a rough draft nownownow, much less your raw data. Which isn’t an argument against preprints or open science, of course. But let’s not be under any illusions about the nature and magnitude of the benefits of preprints and open science. (And in case you’re now wondering why I read and write blog posts, the answer is that a blog post is a finished product, not a rough draft or incomplete version of something else. If someone were to start publishing their rough draft blog posts, or their notes for ideas for future blog posts, I wouldn’t bother reading them.)