Also this week: underwater thesis defense (yes, really), database-defeating data (yes, really), why scientific papers should be longer (yes, arguably), how penguins ruined nature documentaries, and more. Including this week’s musical guest, They Might Be Giants!
There are just three wolves left on Isle Royale*, meaning that the predator part of the longest-running predator-prey study is likely to end soon.
(* If you want to pronounce this like a native, you should pronounce it the way you’d say Isle Royal. Ah, Michigan pronunciations.)
Maclean’s had a piece on why there are still far too few women in STEM, which featured work by Alex Bond. One of the points the piece makes is that women are “consistently passed over for recognition”. Its focus is on women in Canada, but this applies in the US, too. Relatedly, I’m glad that ProfLikeSubstance is also calling attention to the poor gender ratio of NSF Waterman Awardees.
I’m really glad to hear that the terHorst Lab at Cal State-Northridge organized an event to create Wikipedia pages for women in ecology and evolution! This old post of mine has a list (in the comments) of women whom people have proposed as needing Wikipedia pages, or improvements to existing ones.
Seminars from most of the speakers at the UMich EEB Early Career Scientist Symposium (which focused on the microbiome) are now available on YouTube! They include talks by Seth Bordenstein, Katherine Amato, Kevin Kohl, Kelly Weinersmith, Rachel Vannette, Justine Garcia, and Georgiana May.
PhD comics on how to write an email to your instructor or TA. (ht: Holly Kindsvater)
A lot of people think that grant review is a crapshoot, because review panel ratings of funded grants often don’t correlate strongly with the subsequent impact of the work those grants funded. But that’s a silly criticism: the whole point of grant review panels is to make (relatively) coarse distinctions so as to decide what to fund, not (relatively) fine distinctions among funded proposals. A natural experiment at NIH provides an opportunity to test how good grant review panels are at deciding what to fund. Back in 2009, stimulus funding led NIH to fund a bunch of proposals that wouldn’t otherwise have been funded. Compared to regular funded proposals, those stimulus-funded proposals led to fewer publications and fewer high-impact publications on average, and the gap is larger if you look at impact per dollar. The mean differences aren’t small, at least not to my eyes, though your mileage may vary, and of course there’s substantial variation around the means. Regular proposals also had higher variance in impact than stimulus-funded proposals; if NIH were risk averse in its choice of proposals to fund, you’d expect the opposite pattern. And if you think that NIH is biased towards experienced investigators, think again: stimulus-funded proposals were more likely to be led by experienced PIs than were regular funded proposals. I’d be very curious to see an analogous study for NSF. (ht Retraction Watch)
p.s. to the previous item: just now (late Thursday night) I see that a different set of authors has published a Science paper looking at a different NIH dataset and reaching broadly the same conclusion, even though they restricted attention to funded grants. No doubt one could debate the analysis and its interpretation, probably by focusing on the substantial variation in impact that isn’t explained by review panel scores. But together, these two studies look to me like a strike against the view that grant review is such a crapshoot, and/or so biased towards big names, as to be useless. Related old post here.
Speaking of peer review, here’s a brief and interesting history of peer review at the world’s oldest scientific journal.
How long does a scientific paper need to be? Includes some thoughtful pushback against the view, expressed in the comments here, that short papers are more readable. Also hits on something we don’t talk about enough: how online supplements are changing how we write papers. I disagree with the author that, on balance, online supplements are always a good thing.
One oft-repeated criticism of conventional frequentist statistical tests is that their design encourages mindless, rote use. So I was interested to read about mindless, rote use of a Bayesian approach in psychology. It’s an illustration of how the undoubted abuses of frequentist statistics aren’t caused by frequentist statistics per se, but rather are symptoms of other issues that wouldn’t be fixed by switching to other statistical approaches. Here, the issue is the need for agreed conventions on how we construct and interpret statistical hypothesis tests, and on the associated default settings in statistical software.
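To make the “default settings” point concrete, here’s a minimal sketch (my own toy example, not from the linked piece): a closed-form Bayes factor for a normal mean with known variance, comparing H0: mu = 0 against H1: mu ~ Normal(0, tau²). The prior scale tau is exactly the kind of number a software default quietly picks for you, and the same data can look like evidence for H1, ambiguous, or evidence for H0 depending on that one choice.

```python
# Toy sketch (assumed model, not from the linked article): Bayes factor for a
# normal mean with known sigma, H0: mu = 0 vs H1: mu ~ Normal(0, tau^2).
# Because the sample mean is sufficient, BF10 is just the ratio of its marginal
# densities under the two hypotheses.
import numpy as np
from scipy.stats import norm

def bf10_normal_mean(xbar, n, sigma, tau):
    """Bayes factor for H1 (mu ~ N(0, tau^2)) over H0 (mu = 0), sigma known."""
    se = sigma / np.sqrt(n)            # sampling sd of the mean under H0
    m1_sd = np.sqrt(se**2 + tau**2)    # marginal sd of the mean under H1
    return norm.pdf(xbar, 0, m1_sd) / norm.pdf(xbar, 0, se)

# A modest observed effect; only the prior scale tau changes below.
xbar, n, sigma = 0.3, 50, 1.0
for tau in (0.1, 0.5, 1.0, 10.0):
    print(f"tau = {tau:5.1f}   BF10 = {bf10_normal_mean(xbar, n, sigma, tau):6.2f}")
```

With these (made-up) numbers, small tau gives weak evidence for H1 while a very diffuse prior (tau = 10) gives reasonably strong evidence for H0, so “just use the default” is doing a lot of hidden inferential work.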
An MSc student at the University of Victoria will defend his thesis underwater. No, he’s not a marine ecologist. I wonder what happens if someone on his committee asks him to go to the board.🙂 (ht Marginal Revolution)
This makes me want to change my last name to NA, just to troll programmers.🙂 (ht Brad DeLong)
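For anyone who hasn’t hit this in the wild, here’s a small illustration (hypothetical names, but standard pandas behavior): many data tools treat the literal string “NA” as a missing value by default, so a surname of NA silently vanishes on import unless you tell the parser to leave it alone.

```python
# Hypothetical roster showing how a surname of "NA" gets swallowed as missing data.
# pandas' read_csv treats the string "NA" (among others) as NaN by default;
# keep_default_na=False preserves it as an ordinary value.
import io
import pandas as pd

csv = io.StringIO("first,last\nJeremy,Fox\nPat,NA\n")

defaults = pd.read_csv(csv)      # Pat's last name becomes NaN (missing)
print(defaults)

csv.seek(0)                      # rewind the in-memory file and re-parse
literal = pd.read_csv(csv, keep_default_na=False)
print(literal)                   # Pat's last name survives as the string "NA"
```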
And finally, the fact that I’m excited about this dates me in multiple ways: They Might Be Giants have a new album out! Here’s a sample, which I justify linking to on the grounds that the video includes a couple of jokes our readers will particularly appreciate: