Also this week: 50 shades of grey green, the best (?) pedagogical technique you’ve never heard of (?), and more.
Do journals really need cover letters to accompany manuscript submissions? Interesting question, but contrary to the author of the linked piece I think the answer is “yes”: the EiC reads the cover letter (and probably not the manuscript) and uses it to decide whether to assign the manuscript to a handling editor or reject it outright, with “reject” being the choice ~50% of the time at leading ecology journals. I think handling editors vary in whether they read cover letters, but I don’t know. Follow that second link for advice on how to write a cover letter. (UPDATE: second link fixed)
Can someone with more pedagogical training than me give a second opinion on this? Because this is the first I’ve ever heard of the pedagogical technique of “direct instruction”. Does it actually work as well as the data in the linked post suggest?
In a new meta-analysis of undergraduate academic performance (which I haven’t read; just passing the link along), the single best predictor of undergraduate student grades is class attendance. It beats SAT scores, high school GPA, and measures of student study habits and study skills, and is not especially collinear with those other predictors either. Note however that, in a smaller sample size, mandatory attendance policies only have a modestly positive effect on grades. That’s consistent with my anecdotal impressions–students who attend class do better both because attending class helps, and because the students who attend class are the ones who would do well even if they didn’t attend. (ht @noahpinion)
BuzzFeed got hold of a bunch of emails from the lab of prominent Cornell University nutrition researcher Brian Wansink. The emails reveal him systematically ordering his trainees to p-hack, in order to come up with results that would generate media interest. This sort of thing is very much not what our own Brian McGill had in mind when he praised exploratory statistics. I think Wansink’s example is a useful one for intro biostats courses, as an extreme (and thus clear) example of what “p-hacking” means. But I think Wansink’s behavior is very unusual in the context of science as a whole (maybe he’s less unusual in the context of nutrition research or social psychology?). So I don’t think you can use Wansink’s example to make the case for a systemic problem with X, whether “X” is “p values” or “the quality of statistical training” or “incentives to publish” or “researcher integrity” or whatever (note that Andrew Gelman disagrees). Put another way, I don’t think reform of statistical training, or widespread adoption of Bayesian approaches, or whatever, would reduce the (already low) frequency of Brian Wansinks. I agree with Andrew Gelman that Wansink’s work is completely theory-free, so his research program would be a fruitless wild goose chase even if his individual experiments were all preregistered rather than p-hacked. But because Wansink has been doing intentional p-hacking, his case isn’t the best illustration of how hypothesis testing is a bad idea in the complete absence of theory.
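If you do want a quick intro-biostats illustration of why p-hacking works, here’s a minimal simulation (my own sketch, not anything from the linked articles). It uses the fact that under the null hypothesis a valid test’s p-value is uniformly distributed: an analyst who runs many tests on null data and reports only the smallest p-value will cross the 0.05 threshold far more often than 5% of the time.

```python
import random

random.seed(1)

ALPHA = 0.05
N_TESTS = 20          # outcomes/subgroups the analyst is willing to try
N_EXPERIMENTS = 10_000

# Under the null, each valid test's p-value is Uniform(0, 1).
# The "p-hacker" runs N_TESTS tests and reports only the smallest p.
false_positives = 0
for _ in range(N_EXPERIMENTS):
    smallest_p = min(random.random() for _ in range(N_TESTS))
    if smallest_p < ALPHA:
        false_positives += 1

rate = false_positives / N_EXPERIMENTS
print(f"Honest false-positive rate:   {ALPHA:.2f}")
print(f"P-hacked false-positive rate: {rate:.2f}")
```

With 20 tries, the chance of at least one “significant” result on pure noise is 1 − 0.95^20 ≈ 0.64, and the simulation lands close to that.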
Speaking of Brian Wansink, here’s one of his early, influential papers getting shredded by…[wait for it!]…the Joy of Cooking Twitter account. Yes, really. (ht @dsquareddigest)
How economists log-transform data with zeroes. I have never seen that transformation used in ecology. Interesting how different fields develop different “standard” ways of dealing with the same problem.
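I can’t vouch for which transformation the linked post favors, but a common one in economics is the inverse hyperbolic sine (asinh), which behaves like a log transform for large values yet is defined at zero. A minimal sketch:

```python
import math

def asinh_transform(x):
    """Inverse hyperbolic sine: log(x + sqrt(x**2 + 1)).
    asinh(0) = 0, so zeroes need not be dropped or fudged,
    and for large x it approaches log(2x), i.e. log(x) + log(2)."""
    return math.asinh(x)

for v in [0, 1, 10, 1000]:
    print(v, asinh_transform(v))
```

Contrast this with the ad hoc log(x + 1) fix, which also handles zeroes but whose results can be sensitive to the units of x; asinh has become something of a standard in applied economics, though (as noted) I’ve never seen it in an ecology paper.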
“50 shades of green“: John Holbo on the sexiness of Erasmus Darwin’s poetry. I found this interesting and fun, and I’ve never even read Erasmus Darwin. Best line:
It’s like Tinder, but for plants.
In the comments, place your bets on how many hours Stephen Heard will spend reading Erasmus Darwin’s poetry this morning after clicking that link. I set the over/under at two hours. 🙂 #resistanceisfutile, Stephen
The previous link alerts me to this book, which looks interesting.
And finally, Meghan just plays this on a loop in her lab. Presumably. 🙂
Kate Clancy, lead author of a really important study on the prevalence of harassment and assault during field experiences, testified before Congress this week in a hearing on sexual harassment and misconduct in science. Her testimony was 5 minutes long and is really, really worth watching. Here’s an article on the full hearing, which includes this quote from Clancy’s testimony:
While the come ons are the type of behaviors we see in articles about Harvey Weinstein and in sexual harassment trainings, the majority of sexual harassment are in fact the put downs. These are the kinds of behaviors most women in the workplace have experienced at least once in their lifetime, and many experience everyday. The offensive remarks, subtle exclusions; requests to make coffee, yes, but also starting rumors, sabotaging promotions, or ruining a career.
This matched what I’d learned in a recent seminar I attended on sexual harassment, which referenced this article when noting that, because they are so much more common, the “put downs” are at least as damaging as the “come ons”. Given this, it’s a problem that we focus almost entirely on the come ons when discussing sexual harassment.
Related to the above, the American Geophysical Union recently updated their definition of what constitutes scientific misconduct to include sexual harassment. (Here’s a Science piece on the AGU policy.) And the National Science Foundation has now set up a mechanism for reporting harassment directly to them; I think this is the portal, though it’s not totally clear to me how to report (would you just send an email to the address on that page?) and I’d love more information from others who know more. NSF also now requires institutions to report sexual harassment findings, with potential consequences for funding.