Also this week: why “crunch mode” doesn’t work, the difficult question of “fair” pay for postdocs, rethinking economics as a science, a high-profile ecology paper comes into question, are scientists becoming less productive, confirmation bias > you, is torture ok if you do it to ggplot, WHEN WILL I HEAR FROM NSF?!?! and more. Lots of good stuff this week!
Aaron Ellison takes issue with van Nes et al.’s suggestion that ecologists define “tipping points” as Malcolm Gladwell defines them. Ellison argues (convincingly, to my mind) that Gladwell’s definition is unhelpfully overbroad and vague. Related: my old post on overbroad ecological concepts. I suggested the examples of biodiversity affecting ecosystem function, and ecosystem engineering. Niche construction arguably is another example of an overbroad concept.
Arjun Raj with a typically thoughtful post on what constitutes “fair” pay for postdocs (or really, anyone), and how to balance “fairness” against other considerations. I like this sort of post because (i) it addresses an important issue that is both too sensitive and too complicated to be usefully discussed via Twitter, and (ii) it looks at the issue from all sides and comes to no firm conclusions. The follow-up post here includes some concrete advice on pay negotiations for both PIs and postdocs. I particularly like the suggestion to periodically discuss the lab’s finances with postdocs, and perhaps grad students too, so that they have some understanding of how the sausage is made. I think and hope that this sort of discussion would mostly be welcomed by trainees, rather than coming off as passive-aggressive on the part of the PI.
A critique of a zombie idea about innovation policy, along with an interesting explanation for why the zombie persists. If you’re an academic scientist doing basic research, it’s quite likely that you believe this zombie idea.
Do you know Nothing when you see it? Ted Talk-style intro to statistics from Amelia McNamara. Covers bootstrapping and randomization tests. Good fodder for undergrad courses. (ht Simply Statistics)
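For anyone who wants a concrete version of those two resampling ideas for a course, here’s a minimal sketch in plain Python (the toy data are made up for illustration; they’re not from McNamara’s talk):

```python
import random

random.seed(1)

# Two small toy samples (hypothetical data, purely for illustration)
a = [4.1, 5.3, 6.0, 5.5, 4.8, 6.2, 5.1]
b = [5.9, 6.4, 7.1, 6.8, 5.7, 7.0, 6.5]

def mean(xs):
    return sum(xs) / len(xs)

# Bootstrap: resample one group *with replacement* many times to get
# an interval estimate for its mean.
boot_means = sorted(
    mean([random.choice(a) for _ in a]) for _ in range(10000)
)
ci = (boot_means[249], boot_means[9749])  # central 95% of bootstrap means

# Randomization (permutation) test: shuffle the group labels to ask how
# often a difference in means this large arises by chance alone.
observed = mean(b) - mean(a)
pooled = a + b
n_iter = 10000
count = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_iter
```

The appeal for intro courses is that both procedures are just loops over resampled data; no distributional formulas required.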
Sticking with statistics: Andrew Gelman on the philosophy of science that underpins his approach to statistics, with comments from Deborah Mayo.
The Economist finds that, on a per-author basis, scientists are less productive than they used to be. That is, the ratio of total papers:total authors is declining. As the article suggests, this probably reflects changing authorship practices. Number of authors per paper is rising mostly because contributions that previously wouldn’t have been regarded as “authorial” now get you co-authorship. An old post from Meg on this, and one from me. Semi-related: no, formal statements of author contributions don’t solve the problem. At least, there’s no sign yet that they’re doing so. Possibly, the Economist’s result also means that the amount of science reported per paper is rising. (ht Retraction Watch)
Science issues an Expression of Concern about a recent paper concluding that low environmental concentrations of microplastics affect larval fish ecology. The authors say they’re unable to provide the original data files to others because the only copy was on a laptop that was stolen days after the paper was published, and less than 24 hours before Science contacted them reminding them to deposit their data in a public repository. A group of researchers alleges that the authors lied about their work; the authors deny the allegations and say their accusers are lying. A preliminary university investigation cleared the authors of misconduct, but a second investigation by a national ethics board is ongoing. Further coverage in Science. It occurs to me that one way to better prevent this sort of situation going forward might be for journals to have a policy of automatically retracting any paper for which authors fail to follow the journal’s data archiving policies. Basically, the idea would be to put data archiving on a par with the journal’s other requirements of publication, such as obtaining necessary IRB approvals. Such an automatic retraction wouldn’t imply anything about whether or not misconduct was committed. That’s a separate matter (and one that journals aren’t usually in a position to investigate fully anyway). Against that, you could argue that automatic issuance of an Expression of Concern is sufficient penalty for failure to follow the data archiving rules. Or that automatic retraction is too insensitive to individual circumstances to be a wise policy. I dunno, what do you think? Semi-related: my old post musing that, anecdotally, known misconduct seems to be especially rare in ecology and evolution.
Retraction Watch interviewed Dan Bolnick about his decision to retract a paper when he discovered an inadvertent programming error that changed the conclusions. I particularly like Dan’s points about how you can’t prevent coding mistakes from being published by requiring people to use particular software or to submit their code for peer review:
- Not every researcher uses statistical tools that leave a complete record of every step, in order. Despite the potential problems with coding errors, we shouldn’t require people to do so. That means this probably can’t be an obligatory part of review.
- Any journal that stuck its neck out and required well-annotated reproducible code + data for the review process would just see its submissions plummet. This needs to be coordinated among many top journals.
- Reviewers would either say “no” to review requests more often, or do a cursory job more often, if we required that they review R code. And many great reviewers don’t know how to review code properly either.
I’m a bit late to this, but here is my college classmate Tim Billo’s very nice remembrance of Bob Paine. (ht Greg Crowther)
As final exams approach, a reminder to students that failure is an option. (ht Emily Weigel) As Meg has noted in the past: struggling in a class needn’t hurt your long-term prospects, and indeed often is helpful in the long run because we learn a lot from our failures.
Semi-related: how many rejections should you aim for? Semi-related to that: my shadow cv (aka cv of failures), and Meg on resilience and resistance in the face of rejection.
Economist Tony Yates satirizes scientists like David Sloan Wilson, who try to tell economists that they’re Doing It Wrong.
How to make ggplot scream in pain. I presume. 🙂
And finally: a fun little test illustrating how difficult it is to avoid confirmation bias. Take the test and then think about how it applies to (i) how we do ecology, and (ii) how we generalize from our own experiences and what we read on social media. (ht In Due Course; click this link if it’s not quite clear why I think of the previous link as a test of confirmation bias)
NSF DEBrief tells us when those of us waiting on funding decisions will hear. Short version: they’re aiming to have all definite declines processed by December 20th, and will let people in the “definite award” or “gray zone” groups know about their status via phone or an email from the program officer. The post also says that, if you are getting a decline, you can see it in Fastlane first, because the emails only go out in batches at night. As someone who is already inclined to obsessively check Fastlane, I didn’t really need to know that! (While I knew that emails go out in batches at night, I had assumed that the Fastlane status updated at the same time the email went out, not sooner.) More seriously, this is a great use of the NSF-DEB blog, letting them get information out to a lot of folks who have been wondering about this.
PsycGirl had an important post reminding us that lots of folks are dealing with things we don’t know about, and that we need to be kind. (She also notes that, in her case, it might have made sense to let more people know about her ongoing health problems.) This is true of students, colleagues, staff, and, well, pretty much everyone in your academic and non-academic worlds. I especially liked this part of her post:
Sometimes, in academia (or life), you will come across people who seem like they are a disaster. Sometimes, they will frustrate you and slow you down or generally impede your progress. You might want to lash out at them, tell them how much they suck, or shame them into doing what you want. But please consider that something really difficult might be going on that is making that person a disaster. It doesn’t mean they are a disaster. So many people have so many invisible (to you) hurdles in their way. Please consider that one of them might be occurring to the person who is frustrating you. Find some empathy. Resist the urge to shame.
Why crunch mode doesn’t work: a post going into the reasons why working long hours doesn’t pay off in the long term. It includes this summary:
It comes down to productivity. Workers can maintain productivity more or less indefinitely at 40 hours per five-day workweek. When working longer hours, productivity begins to decline. Somewhere between four days and two months, the gains from additional hours of work are negated by the decline in hourly productivity. In extreme cases (within a day or two, as soon as workers stop getting at least 7-8 hours of sleep per night), the degradation can be abrupt.
Related: my old post on not needing to work 80 hours a week to succeed in academia. (ht: Greg Wilson)