Also this week: the boy who cried wolf vs. type I errors, pre-registered replication vs. stereotype threat, update on double-blind reviewing at Am Nat, myths of scientific software, scientific texts vs. Google Ngrams, and more.
How do you use the time before class begins? I enjoyed this piece with three ideas for things to do during that time. It might be hard to scale some of these up to a 300-person lecture, but it might be worth trying. I also like that he talks about how, early in his career, he worried about being able to fill an entire class period, but now he feels like there isn’t enough time to cover everything he wants. That has been true for me, too.
Type I error vs. type II error, as explained by the boy who cried wolf: crying wolf when there is no wolf is a false positive (type I), while the villagers ignoring a real wolf is a false negative (type II). This should help students remember the difference!
I remain fascinated with the ongoing rapid shifts in practice in psychology towards pre-registered replication. Social psychology results on “priming” appear to be having a particularly rough time of it. Effects supported by dozens of statistically significant results across bunches of papers vanish entirely when you try to find them with a high-powered, pre-registered replication. Which is perhaps not surprising in retrospect, since in at least some cases funnel plots of previously-published results just scream publication bias + p-hacking (the toy simulation below illustrates why). I note this in part because experimental studies of “stereotype threat” are one example of priming. Meg’s done a great job in the past reviewing the existing literature on stereotype threat. As best I can tell (since my knowledge here is limited to what I read on blogs), pre-registered replications, more powerful studies, and formal meta-analyses on stereotype threat are just starting to come in (e.g., Müller & Rothermund 2014, Gibson et al. 2014, Moen & Roeder 2014, Ganley et al. 2013 [discussed here from a priming-skeptic point of view], Flore & Wicherts 2015). Those new studies are a mixed bag, but overall they don’t look great for the existing literature. It’s early days, though: too early to say whether the conclusions of that literature need substantial revision. And I’m no expert, so I’m happy to be pointed towards relevant studies I missed in my skim. Finally, hopefully it doesn’t need saying, but my comments here concern one rather narrow issue. We should still be alert to subtle biases, still teach our students that they can improve, still be mindful of sending unintended messages, etc. Indeed, those things sound like good ideas to me independent of the experimental literature on stereotype threat.
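To make the publication bias point concrete, here’s a toy simulation (my own sketch, not taken from any of the linked papers; all parameter values are made up for illustration): run lots of small, underpowered studies of an effect that’s truly zero, “publish” only the ones that come out significant in the predicted direction, and the published literature looks impressively consistent, while a single high-powered, pre-registered replication finds nothing. Plotting the published effect sizes against their standard errors gives you the lopsided funnel.

```python
# Toy simulation of publication bias: many small studies of a truly null
# effect; only significant results in the "predicted" direction get
# published. All parameter values here are illustrative, not from any
# real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n = 1000, 20  # many small studies, 20 subjects per group

published_d = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(0.0, 1.0, n)  # true effect is exactly zero
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:  # the file drawer: only "wins" get written up
        d = (treatment.mean() - control.mean()) / np.sqrt(
            (treatment.var(ddof=1) + control.var(ddof=1)) / 2
        )
        published_d.append(d)

print(f"'published': {len(published_d)}/{n_studies} studies, "
      f"mean effect size d = {np.mean(published_d):.2f}")

# One high-powered, pre-registered replication of the same (null) effect:
t_rep, p_rep = stats.ttest_ind(rng.normal(0.0, 1.0, 2000),
                               rng.normal(0.0, 1.0, 2000))
print(f"pre-registered replication: t = {t_rep:.2f}, p = {p_rep:.2f}")
```

The “published” studies all report sizable, significant effects even though nothing is there; the replication comes back null.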
An update on Am Nat’s experiment giving authors the option of double-blind review. 16% of authors opt out of blinding, mostly because they think their identity is too obvious to bother trying to hide. Gratifyingly (and slightly surprisingly, to me), there’s been no uptick in people declining to review. Reviewers do often see through the blinding, at least to the level of guessing the lead author’s lab, but occasionally they only think they do (i.e., they report incorrect guesses of author identity to the editor). It’s too early to say whether the new policy is making any difference to which papers get accepted, though one handling editor thinks that famous people are now getting tougher reviews. (ht Trish Morse, Am Nat Managing Editor, via the comments)
The increasing prevalence of scientific texts in the Google Books database makes it tricky to use Google Ngrams to capture broader cultural shifts in word use. But it makes it easier to write silly, nerdy Ngrams-based blog posts. 🙂 (ht Neuroskeptic)
This is a few months old but I missed it at the time: the myths of bioinformatics software. Applies to scientific software more broadly. The myths include “somebody will build on your code”, “if you choose the right license more people will use and build on your program”, and “you used the right programming language for the task”.
In a recent linkfest I noted a new R package, statcheck, that spots statistical reporting mistakes in psychology papers. Here’s the creator’s story of how the package came to be. Unfortunately it won’t work with most papers from other fields, unless they report statistical results the same way psychology papers do (i.e., in APA format, which is what makes them machine-readable).
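If you’re curious how that trick works, here’s a minimal sketch of the idea in Python (statcheck itself is an R package that handles several test types and can read PDFs directly; the regex and tolerance below are my own simplifications): APA style mandates reports like “t(28) = 2.20, p = .04”, so you can pull out the test statistic and degrees of freedom and recompute the p-value yourself.

```python
# A toy version of statcheck's core idea (statcheck itself is an R package
# and does much more; this sketch only checks two-sided t-tests, and the
# regex and tolerance are my own simplifications).
import re
from scipy import stats

APA_T = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*([<=])\s*(\.\d+)")

def check(text, tol=0.005):
    """Recompute p from each APA-style t-test report and flag mismatches."""
    for df, t, rel, p in APA_T.findall(text):
        recomputed = 2 * stats.t.sf(abs(float(t)), int(df))  # two-sided p
        reported = float(p)
        ok = recomputed < reported if rel == "<" else abs(recomputed - reported) <= tol
        print(f"t({df}) = {t}, p {rel} {p}: recomputed p = {recomputed:.3f} "
              f"-> {'consistent' if ok else 'INCONSISTENT'}")

check("A difference, t(28) = 2.20, p = .04, and a typo, t(28) = 2.20, p = .01.")
```

Run on that example string, the first report checks out (recomputed p ≈ .036) and the second gets flagged, which is exactly the kind of copy-paste or rounding slip statcheck was built to catch.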
The Browser has posted its nominees for the best online writing of the year. You can vote for your favorites. There’s a piece from Andrew Gelman on the list, and a piece on teaching Bayes’ Theorem with lego that I’ve linked to in the past. It’s nice to have one’s taste validated. 🙂 There are other popular and semi-popular science pieces on the list. I liked this piece on why the world will only get weirder (though I disagree with the libertarian/anarchist direction the author takes it at the end), and this one on why you often can’t apply game theory to real life. I didn’t look at many of the non-science pieces, but this one on “the Copenhagen interpretation of ethics” is thought provoking. (ht Andrew Gelman)
xkcd on why biology is more than just gene sequencing. 🙂
And finally, Trudeau considers re-muzzling Canadian scientists after 3 hour conversation about rare seaweed. 🙂 (ht Not Exactly Rocket Science)