Also this week: you’re overestimating how many hours you work (especially if you think it’s a lot), most biologists suck at experimental design (apparently), the politics of science in the US and Canada, snow leopard vs. gravity, choose your own adventure R error message, and more.
I’ve talked about having a hard time saying “no” to requests. It’s a common problem. Perhaps I need to try this approach, which seems much more fun:
This article on a lab that studies how people use their time was interesting. Among other interesting bits, there’s this:
Analyses showed that people in many countries routinely overestimate the amount of time that they spend working — in the United States, by some 5–10% on average (see ‘The truth about time’). But those who work longer hours tend to overestimate by the most: people who guess that they work 75-hour weeks, for example, can be over by more than 50%, and those of certain professions — teachers, lawyers, police officers — overestimate by more than 20%. (Scientists were not the worst exaggerators: they estimate working close to 42 hours per week on average, whereas diaries clock them at 39 hours.)
The part about people having a hard time estimating how much they’re working links with my old post on not needing to work 80 hours a week to succeed in academia. I was also interested to see the estimate of 39–42 hours per week on average for scientists. (Extra note added by Meg: I put this in the links queue last Friday, before the 80-hours-a-week post started circulating again.) [note from Jeremy: that post is the Energizer Bunny of posts. 🙂 ]
This pressed fern art by Helen Ahpornsiri is beautiful, and if you scroll down just a little, you’ll see some great animated gifs showing things like a pressed fern iguana capturing a fly.
Jeremy Kerr and Isabelle Côté with science policy advice for the new Canadian government. As I said in an old linkfest, I agree with this advice but worry that it won’t get much traction with the new government because it just reverses the previous government’s policies. As much as I liked the status quo ante, new governments want new initiatives; they don’t just want to reverse whatever the previous government did. And now’s the time to be proposing new initiatives; we’re missing an opportunity if we spend all our breath right now arguing for reversing previous policies. But I confess I’m not sure myself what those new initiatives should be. How about a reorientation of basic research funding towards supporting more technicians and postdocs and fewer graduate students? Charley Krebs suggests starting up a Canadian version of the LTER network. Other ideas? Keep in mind that the Canadian economy is currently going sideways (at best) thanks to the low oil price. So even though the incoming government has promised to run modest deficits to keep the economy afloat, anything that involves throwing a ton of new money at basic science is probably a non-starter.
Meanwhile, south of the 49th parallel: an argument that the House Science Committee is worse than the Benghazi committee. I call it a tie. A horrible, horrible tie.
This week in Ecology Is Hard: remember those headlines from a few years ago reporting that marine phytoplankton biomass had declined 40% since the 1950s, and fingering global warming as the culprit? Turns out they were wrong–over 85% of the decline was just phytoplankton acclimating to changing light levels by modulating their chlorophyll production.
Using a prediction market to predict how the REF would evaluate the research output and impact of 33 British chemistry departments. I’m linking to this only because I found the basic idea interesting (and cheeky); the actual execution and results left a lot to be desired. The market only had 16 participants, and didn’t do any better than a survey of chemists or the scores from the previous RAE (the REF’s predecessor). So this doesn’t seem like a context in which a prediction market has much to add. (ht Marginal Revolution)
In a random sample of 2000 papers indexed by PubMed, only 20% (27/134) reported randomizing assignment of experimental units to treatments where that would’ve been appropriate. The figure only rises to 33% if you restrict attention to papers published since 2008. Good fodder for an intro biostats course. Which the majority of biologists either never took or have forgotten, apparently. Note that one of the headline results (papers in higher impact journals are less likely to report randomization when appropriate) is a very weak trend at best. Disappointing that a paper aiming to document widespread lack of basic rigor in the work of others would choose to hype a result that barely exists.
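For readers who want the intro-biostats refresher: randomizing assignment just means letting chance, not convenience or order of arrival, decide which experimental unit gets which treatment. A minimal sketch (my own illustration, not from the paper; the unit and treatment names are hypothetical):

```python
import random

def randomize_assignment(units, treatments, seed=None):
    """Randomly assign experimental units to treatments in
    (nearly) equal-sized groups by shuffling a balanced allocation list."""
    rng = random.Random(seed)  # seed only for reproducibility of the example
    # Build a balanced list of treatment labels, one per unit...
    allocation = [treatments[i % len(treatments)] for i in range(len(units))]
    # ...then shuffle it so each unit's treatment is decided by chance.
    rng.shuffle(allocation)
    return dict(zip(units, allocation))

# Hypothetical example: six plots, two treatments, three plots each.
assignment = randomize_assignment(
    ["plot1", "plot2", "plot3", "plot4", "plot5", "plot6"],
    ["control", "warmed"],
    seed=42,
)
```

The point of shuffling a pre-balanced list (rather than flipping a coin per unit) is that you get equal group sizes by construction while still breaking any association between treatment and unit order.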
When should you accept the null hypothesis–that is, infer that an effect is “basically zero”? Data Colada on different interpretations of “basically zero”. I’ll have more on this next week.
Hope Jahren sure can write…her memoir. Available for pre-order.
Jeffrey Beall adds Frontiers to his list of questionable publishers. I agree–they’re a borderline operation. I’ve published with them before, but regret doing so and wouldn’t do so again. Are they as dodgy as the worst offenders out there? No, not even close–but “not nearly as dodgy as a complete scam” is very faint praise indeed. If you want an author-pays open access option (and it’s not clear you necessarily should), as an ecologist you have plenty of unquestionably-legit choices: Ecosphere, Ecology and Evolution, PLOS Biology, PLOS ONE, the BMC journals…In my view, there’s no reason for you to consider Frontiers.
Spectacular video of a snow leopard chasing its prey down what’s more or less a cliff.
Giving new meaning to the phrase “chalk talk”: a living chalkboard art project, featuring physicists. (ht Not Exactly Rocket Science)
Stephen Heard’s centrifugal theory of species diversity. I’m now imagining Stephen as Goldfinger. 🙂
Mid-career scientists exhibit an optimal level of interest in philosophy of science. Apparently. 🙂
And finally: got an R error message? Pick your poison. 🙂 (ht @kjhealy)