Also this week: prediction markets vs. replicability, Photoshop vs. Bill Nye, Marc Cadotte on Chinese science, honest Student, what to get Rich Lenski, Meg, and Ben Bolker for Christmas, and more!
This is a listing of (what one group considers) the 50 most beautiful campus arboretums and botanical gardens in the US. I’ve spent a lot of time in Cornell’s and Michigan’s arbs, and agree that they are gorgeous. Michigan’s Arb is right next to campus, and I love how convenient it is to get there. (The Matthaei Botanical Gardens are also beautiful, but it takes more effort to get to them.) I’ve spent less time in Michigan State’s Beal Botanical Garden and Wisconsin’s Arboretum, but enjoyed those too.
Nature is recruiting an ecology editor.
How zombie ideas survive. (ht Retraction Watch)
Among the new Canadian government’s first acts: letting government scientists talk to the press, restoring the long form census, and reestablishing the position of government science minister. These are three obvious pieces of low-hanging science policy fruit, of much more than just symbolic importance; good to see them get picked immediately. It’s too early to say what other science policy changes will be coming down the pike, or whether any will be accompanied by significant new money. As I said in a previous linkfest, I doubt the government will simply restore the pre-Harper status quo ante (and I say that as someone who liked the status quo ante).
The Reproducibility Project in psychology used a betting market to predict the reproducibility of 44 of the studies. The market involved very small sums of money and only had a few dozen participants. But it did well in absolute terms, correctly predicting the replication outcome for 71% of the 44 studies (for a particular definition of “replicate”), and handily beating a survey of market participants’ individual forecasts. Though if those surveyed had also been asked about their confidence in their forecasts, and if their forecasts had been confidence-weighted, I assume the survey would’ve more or less matched the prediction market. Note that the linked paper contains a cringe-worthy, incorrect summary of what p-values mean (and, bizarrely, later seems to criticize its own summary), but I don’t think that affects the main conclusion about the performance of the prediction market. Prediction markets don’t help us infer which hypotheses are true. But I think they’re still of interest, because it’s useful to know how well scientists’ beliefs match up with what’s true. Daring to dream here: if growing interest in prediction markets were to help create a culture in which scientists were willing to bet their beliefs, that would be a Good Thing, for reasons explained here and here.
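To make the confidence-weighting idea concrete, here’s a minimal sketch, with made-up numbers (nothing here comes from the linked paper), of how a confidence-weighted survey aggregate differs from a simple average of forecasts. Confident respondents count for more, which is roughly what a prediction market does implicitly via bet sizes:

```python
# Hypothetical survey responses for a single study. Each respondent gives a
# forecast (probability the study will replicate) and a self-reported
# confidence; all values below are illustrative, not from the linked paper.
forecasts  = [0.9, 0.4, 0.6, 0.8]
confidence = [0.9, 0.2, 0.5, 0.8]

# Unweighted aggregate: every respondent counts equally.
simple_mean = sum(forecasts) / len(forecasts)

# Confidence-weighted aggregate: each forecast is weighted by the
# respondent's confidence, then normalized by the total weight.
weighted_mean = (sum(f * c for f, c in zip(forecasts, confidence))
                 / sum(confidence))

print(simple_mean, weighted_mean)
```

Here the weighted aggregate sits above the simple mean because the most confident respondents also gave the highest forecasts; with these numbers it’s 0.7625 vs. 0.675.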
Marc Cadotte is on sabbatical in China this year. Here’s the first of what will be a series of posts on science in China. Marc discusses the reasons behind the rapid rise in the volume and average quality of China’s scientific output.
Terry McGlynn says that pre-publication peer reviewers increasingly see themselves as decision-makers rather than as advisors to the editor, and so feel increasingly free to demand revisions rather than recommend them, including demands for more substantive changes than it is reasonable to request. In the comments, I disagreed: in my own experience as an author and editor, the problems Terry highlights are rare and aren’t increasing in frequency. And I think Terry’s suggestion to ban “prescriptive” reviews goes too far and would ban a lot of stuff that I find very useful as both an author and an editor. What do you think? Are reviewers these days more likely to try to force authors to do the study the reviewers would have done, rather than evaluating the study the authors did? And are editors these days more likely to effectively cede their decision-making authority to reviewers?
The average US college student doesn’t actually spend $1200/year on textbooks. It’s more like $500–$700. This fact alone has no policy implications, obviously. But as the linked post notes, any policy argument based on bad data is likely to lead to bad policy.
This week in Stuff I Hesitate To Link To Because I’m Giving Publicity To Someone Who Shouldn’t Have It: BMC Evolutionary Biology has retracted a highly-cited paper on a phylogenetics software tool because the lead author…refused to license the software for use by academics in countries with what he views as pro-immigration policies. For his part, the author says he didn’t violate BMC’s policy of making software available to any researcher, because any researcher who wants to use it can…move to another country. Thus making them, um, immigrants.
Photoshopping Bill Nye to be a tough guy. 🙂
“Oh, that’s nothing–Fisher would have discovered it all anyway.” Yup. (ht Simply Statistics). Related: Now I know what to get Ben Bolker for Christmas. 🙂
Now I know what to get Rich Lenski for Christmas. 🙂 (ht Not Exactly Rocket Science)
Now I know what to get Meg for Christmas. 🙂 (ht Brad DeLong)
Whoops, sorry, now I know what to get Meg (and every paleontologist I know) for Christmas. 🙂 (ht Brad DeLong)
Unsure where else to post it, but I was wondering whether I could get your thoughts on this:
From my vantage point, I can see how being able to more easily work with large, disparately located data sets is valuable. On the other hand, I find the idea of using a pre-constructed model as your baseline for analysis troubling: science already has enough problems with scientists not knowing what’s going on under the hood of their data analyses. If you are going to use ‘summary’ models, they sure as hell had better be very accurate.
Sorry, too far from my expertise for me to comment. You could try sending the link to Mathbabe, this sounds kind of up her alley (http://mathbabe.org/).
Will do! Thanks.