Also this week: a great case study of forecasting errors, how to write like Peter Adler, the paradox of post-publication review, peak Plos One, and Stuff We’re Not Linking To. Also, Meg claims that the dog Daphnia ate her homework, so no links from her this week. 🙂
From Brian:
In the vein of my push for more forecasting in ecology, here is a great discussion of how the discipline of having to produce public forecasts, and sometimes being spectacularly wrong, is pushing meteorologists to compare model strengths and weaknesses, discuss how to improve them, and work out how to communicate the uncertainty they know is there to the public.
From Jeremy:
Natalie Cooper on the “one body problem” in academia.
Arjun Raj with some thoughtful comments on the “post-publication review” of a recent Science paper explaining why some human tissues are more cancer-prone than others. As Raj notes, the basic point of the paper seems to be a good and important one–which you’d never know if all you read was the social media s**tstorm. Reinforces my earlier thoughts: a lot of post-publication review at the moment is people trying to knock down high-profile papers, and their authors, including by blowing nitpicky and downright incorrect criticisms out of all proportion through the amplifier of social media (see here and here). Obviously, there’s no set of peer review practices that can somehow guarantee high quality reviews that appropriately balance praise and criticism. But like Raj, I’m not sure that post-publication review is getting better as it becomes a more established part of the landscape. That’s a totally anecdotal impression, of course, so it might well be worthless. And maybe I’m falling into the trap of overgeneralizing from a few atypical, high-profile cases. But FWIW, I worry that there’s a paradox at the heart of post-publication review. It only works if the paper attracts enough attention. But any paper that attracts lots of attention is going to have a lot of people who want to take it, and its authors, down a peg.
Interesting exercise in crowdsourced data analysis (which I think I linked to when it was proposed; now the results are in): 29 independent teams of data analysts were given the same data set and the same question (do soccer referees give more red cards to dark-skinned players?), and told to answer it using whatever analytical approach they wanted. The teams used a wide range of analytical approaches, yielding answers that varied quite widely in key respects (though not infinitely widely, and other aspects of the answers were pretty consistent). As a commenter notes, what this shows is that aspects of model specification that have nothing to do with either the data or your “priors” as conventionally understood can have a huge influence on the outcome of the analysis. And while I haven’t looked at the paper, I’m sure that all 29 teams chose analyses that are reasonable and defensible. But just because your chosen analysis is reasonable and defensible does not mean that your conclusions are correct, or even “robust” in the sense that other reasonable, defensible analyses would lead to the same conclusions! I really like this as a way to illustrate and make transparent the judgment calls involved in any data analysis; it’s a really important message for both students and professionals. It would be fun to do this exercise in ecology; there are certainly plenty of data sets on Dryad now that would suit it. One key would be to line up a critical mass of analysts; it’s not a useful exercise if you only have a few teams. One way to do it would be to get students in graduate-level biostats classes to do it as a class exercise. A second key would be to make sure that most or all of the analysts have no prior experience with the data or obvious interest in reaching a certain conclusion. We already publish dueling analyses from researchers heavily invested in their pet hypotheses; we don’t need more of that. (ht Andrew Gelman)
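If you want a feel for how defensible-but-different model specifications can pull the answer around, here is a minimal toy sketch in Python. To be clear, this is not the study’s actual data or models: the simulated numbers, the “league” covariate, the effect sizes, and the two hypothetical “teams” are all made up, and it assumes you have numpy and statsmodels installed. Both analyses below are reasonable Poisson regressions of red cards on skin tone; they differ only in whether they adjust for a correlated covariate, and they return noticeably different rate ratios from the same data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# Toy player-referee dyads. "League" is a made-up covariate that is correlated
# with both skin tone and red-card rates, so it confounds the unadjusted estimate.
league = rng.integers(0, 3, n)                           # 0, 1, or 2
skin = rng.binomial(1, 0.2 + 0.2 * league)               # darker skin tone more common in league 2
games = rng.integers(5, 40, n)                           # games played in each dyad
true_rate = np.exp(-3.0 + 0.10 * skin + 0.40 * league)   # red cards per game (made-up effects)
reds = rng.poisson(true_rate * games)

# "Team A": Poisson regression of red cards on skin tone only, with games as exposure.
X_a = sm.add_constant(skin.astype(float))
fit_a = sm.GLM(reds, X_a, family=sm.families.Poisson(), exposure=games).fit()

# "Team B": same model, but also adjusts for league.
X_b = sm.add_constant(np.column_stack([skin, league]).astype(float))
fit_b = sm.GLM(reds, X_b, family=sm.families.Poisson(), exposure=games).fit()

print("Team A rate ratio (skin tone):", round(np.exp(fit_a.params[1]), 2))
print("Team B rate ratio (skin tone):", round(np.exp(fit_b.params[1]), 2))
# The two estimates differ because of a modeling choice (adjust for league or not),
# not because of the data or anyone's priors.
```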
A reminder, if one were needed, that social media is not yet one of the most important ways by which scientists filter the literature: a randomized trial by a top medical journal found that promoting papers on social media, along with a toll-free (free-to-read) link to each paper, had no effect on how often the papers were viewed. A related old post with survey data on how people find papers to read.
We have probably seen peak Plos One: they’re publishing 25% fewer papers than at their peak in Dec. 2013, though the decline seems to have leveled off recently. Of course, there could be various reasons for the drop.
Over at BioDiverse Perspectives, Fletcher Halliday with a bunch of advice on how to write good papers, which he came up with by closely studying Peter Adler’s papers. Here’s my old post asking readers to suggest ecologists who write particularly well, and here are Brian’s tips for clear writing.
Dept. of Stuff I’m Not Linking To: I’m sure y’all saw Monday’s xkcd cartoon on P values, and anyway it’s an old joke in one form or another. It was fine, it didn’t offend me or anything, but I didn’t think it was funny enough to be worth linking to. I instead decided to take the opportunity to let y’all know just how discriminating my tastes are. Only the best links for our linkfest! 🙂