Friday links: how to spot nothing, Aaron Ellison vs. Malcolm Gladwell, and more

Also this week: why “crunch mode” doesn’t work, the difficult question of “fair” pay for postdocs, rethinking economics, a high-profile ecology paper comes into question, are scientists becoming less productive, confirmation bias > you, is torture ok if you do it to ggplot, WHEN WILL I HEAR FROM NSF?!?! and more. Lots of good stuff this week!

From Jeremy:

Aaron Ellison takes issue with van Nes et al.’s suggestion that ecologists define “tipping points” as Malcolm Gladwell defines them. Ellison argues (convincingly, to my mind) that Gladwell’s definition is unhelpfully overbroad and vague. Related: my old post on overbroad ecological concepts. I suggested the examples of biodiversity affecting ecosystem function, and ecosystem engineering. Niche construction arguably is another example of an overbroad concept.

Arjun Raj with a typically thoughtful post on what constitutes “fair” pay for postdocs (or really, anyone), and how to balance “fairness” against other considerations. I like this sort of post because (i) it addresses an important issue that is both too sensitive and too complicated to be usefully discussed via Twitter, and (ii) it looks at the issue from all sides and comes to no firm conclusions. The follow-up post here includes some concrete advice on pay negotiations for both PIs and postdocs. I particularly like the suggestion to periodically discuss the lab’s finances with postdocs, and perhaps grad students too, so that they have some understanding of how the sausage is made. I think and hope that this sort of discussion would mostly be welcomed by trainees, rather than coming off as passive-aggressive on the part of the PI.

A critique of a zombie idea about innovation policy, along with an interesting explanation for why the zombie persists. If you’re an academic scientist doing basic research, it’s quite likely that you believe this zombie idea.

Do you know Nothing when you see it? TED Talk-style intro to statistics from Amelia McNamara. Covers bootstrapping and randomization tests. Good fodder for undergrad courses. (ht Simply Statistics)
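If you want ready-made fodder for a class, here is a minimal base R sketch of the two techniques the talk covers: a two-sample randomization test and a percentile bootstrap. To be clear, this is my own toy illustration with made-up numbers, not anything taken from the talk.

    # Made-up measurements from two hypothetical groups.
    set.seed(1)
    a <- c(5.1, 6.3, 4.8, 7.0, 5.9)
    b <- c(4.2, 5.0, 4.6, 5.5, 4.1)

    # Randomization test: compare the observed difference in means to the
    # differences you get by repeatedly reshuffling the group labels.
    obs_diff <- mean(a) - mean(b)
    pooled   <- c(a, b)
    n_a      <- length(a)
    perm_diffs <- replicate(10000, {
      shuffled <- sample(pooled)
      mean(shuffled[1:n_a]) - mean(shuffled[-(1:n_a)])
    })
    p_value <- mean(abs(perm_diffs) >= abs(obs_diff))  # two-sided p-value
    p_value

    # Bootstrap: resample group A with replacement to get a 95% percentile
    # confidence interval for its mean.
    boot_means <- replicate(10000, mean(sample(a, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))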

Sticking with statistics: Andrew Gelman on the philosophy of science that underpins his approach to statistics, with comments from Deborah Mayo.

The Economist finds that, on a per-author basis, scientists are less productive than they used to be. That is, the ratio of total papers to total authors is declining. As the article suggests, this probably reflects changing authorship practices: the number of authors per paper is rising mostly because contributions that previously wouldn’t have been regarded as “authorial” now get you co-authorship. An old post from Meg on this, and one from me. Semi-related: no, formal statements of author contributions don’t solve the problem; at least, there’s no sign yet that they’re doing so. Possibly, the Economist’s result also means that the amount of science reported per paper is rising. (ht Retraction Watch)

Science issues an Expression of Concern about a recent paper concluding that low environmental concentrations of microplastics affect larval fish ecology. The authors say they’re unable to provide the original data files to others because the only copy was on a laptop that was stolen days after the paper was published, and less than 24 hours before Science contacted them reminding them to deposit their data in a public repository. A group of researchers alleges that the authors lied about their work; the authors deny the allegations and say their accusers are lying. A preliminary university investigation cleared the authors of misconduct, but a second investigation by a national ethics board is ongoing. Further coverage in Science. It occurs to me that one way to better prevent this sort of situation going forward might be for journals to have a policy of automatically retracting any paper for which authors fail to follow the journal’s data archiving policies. Basically, the idea would be to put data archiving on a par with the journal’s other requirements of publication, such as obtaining necessary IRB approvals. Such an automatic retraction wouldn’t imply anything about whether or not misconduct was committed. That’s a separate matter (and one that journals aren’t usually in a position to investigate fully anyway). Against that, you could argue that automatic issuance of an Expression of Concern is sufficient penalty for failure to follow the data archiving rules. Or that automatic retraction is too insensitive to individual circumstances to be a wise policy. I dunno, what do you think? Semi-related: my old post musing that, anecdotally, known misconduct seems to be especially rare in ecology and evolution.

Retraction Watch interviewed Dan Bolnick about his decision to retract a paper when he discovered an inadvertent programming error that changed the conclusions. I particularly like Dan’s points about how you can’t prevent coding mistakes from being published by requiring people to use particular software or to submit their code for peer review:

  1. Not every researcher uses statistical tools that leave a complete record of every step, in order. Even given the potential problems with coding errors, we shouldn’t require people to do so. That means this probably can’t be an obligatory part of review.
  2. Any journal that stuck its neck out and required well-annotated reproducible code + data for the review process would just see its submissions plummet. This needs to be coordinated among many top journals.
  3. Reviewers would either say “no” to review requests more often, or do a cursory job more often, if we required that they review R code. And many great reviewers don’t know how to review code properly either.

Click through for Dan’s ideas about possible solutions that would work. Related: Meg’s brave post on discovering an error in one of her papers, and Brian on how everybody makes mistakes.
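To make the point about inadvertent coding errors concrete, here is a hypothetical toy example in R (mine, not Dan’s actual mistake) of the kind of slip that runs without a warning, returns plausible-looking numbers, and is easy to miss even on a fairly careful read of someone else’s script:

    # Goal: center each measurement on its own lake's mean.
    lake     <- rep(c("A", "B"), each = 3)
    body_len <- c(10, 12, 14, 20, 22, 24)

    lake_means <- tapply(body_len, lake, mean)   # A = 12, B = 22

    # Buggy version: lake_means has length 2, so R silently recycles it
    # across the 6 observations instead of matching each mean to its lake.
    centered_wrong <- body_len - lake_means

    # Intended version: index the vector of means by lake ID.
    centered_right <- body_len - lake_means[lake]

    cbind(lake, centered_wrong, centered_right)

Nothing throws an error or a warning, both columns look plausible on their own, and only a reviewer who re-derived the within-lake means would catch the difference.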

I’m a bit late to this, but here is my college classmate Tim Billo’s very nice remembrance of Bob Paine. (ht Greg Crowther)

As final exams approach, a reminder to students that failure is an option. (ht Emily Weigel) As Meg has noted in the past: struggling in a class needn’t hurt your long-term prospects, and indeed often is helpful in the long run because we learn a lot from our failures.

Semi-related: how many rejections should you aim for? Semi-related to that: my shadow cv (aka cv of failures), and Meg on resilience and resistance in the face of rejection.

Economist Tony Yates satirizes scientists like David Sloan Wilson, who try to tell economists that they’re Doing It Wrong.

How to make ggplot scream in pain. I presume. 🙂

And finally: a fun little test illustrating how difficult it is to avoid confirmation bias. Take the test and then think about how it applies to (i) how we do ecology, and (ii) how we generalize from our own experiences and what we read on social media. (ht In Due Course; click this link if it’s not quite clear why I think of the previous link as a test of confirmation bias)

From Meg:

NSF DEBrief tells us when those of us waiting on funding decisions will hear. Short version: they’re aiming to have all definite declines processed by December 20th, and will let people in the “definite award” or “gray zone” groups know about their status via phone or an email from the program officer. It also says that, if you are getting a decline, you can see that in Fastlane first, because the emails only go out in batches at night. As someone who is already inclined to obsessively check Fastlane, I didn’t really need to know that! (While I knew that emails go out in batches at night, I had assumed that the Fastlane status updated at the same time as the email went out, not sooner.) More seriously, this is a great use of the NSF-DEB blog, letting them get information out to a lot of folks who have been wondering about this.

PsycGirl had an important post reminding us that lots of folks are dealing with things we don’t know about, and that we need to be kind. (She also notes that, in her case, it might have made sense to let more people know about her ongoing health problems.) This is true of students, colleagues, staff, and, well, pretty much everyone in your academic and non-academic worlds. I especially liked this part of her post:

Sometimes, in academia (or life), you will come across people who seem like they are a disaster. Sometimes, they will frustrate you and slow you down or generally impede your progress. You might want to lash out at them, tell them how much they suck, or shame them into doing what you want. But please consider that something really difficult might be going on that is making that person a disaster. It doesn’t mean they are a disaster. So many people have so many invisible (to you) hurdles in their way. Please consider that one of them might be occurring to the person who is frustrating you. Find some empathy. Resist the urge to shame.

Why crunch mode doesn’t work: a post going into the reasons why working long hours doesn’t pay off over the long term. It includes this summary:

It comes down to productivity. Workers can maintain productivity more or less indefinitely at 40 hours per five-day workweek. When working longer hours, productivity begins to decline. Somewhere between four days and two months, the gains from additional hours of work are negated by the decline in hourly productivity. In extreme cases (within a day or two, as soon as workers stop getting at least 7-8 hours of sleep per night), the degradation can be abrupt.

It relates to my post on not needing to work 80 hours a week to succeed in academia. (ht Greg Wilson)

9 thoughts on “Friday links: how to spot nothing, Aaron Ellison vs. Malcolm Gladwell, and more”

  1. Spoilers on the confirmation bias quiz below…

    Hmmm… The text associated with the confirmation bias quiz is not quite “fair” to the reader. If one correctly guesses the general form of the right answer, and assumes (because it’s a fill-in-the-text-box “answer”) that the rule can’t be particularly complicated (no conditionals, bizarre exceptions, etc.), then it’s quite easy to rule out large swaths of the non-viable hypothesis space with only a few “no” answers. One is then left receiving a bunch of “yes” answers, not because one is fishing for “yes” answers, but because all of the remaining edge-case tests where one was fishing for “no” turned out to produce “yes”.

    I have a suspicion (but today’s head cold leaves me too fuzzy to prove it) that it does not take more than three, possibly only two, “no” answers to rule out any reasonable alternative hypothesis when the “truth” is that simple. I would propose that the author’s assumption (that producing more “yes” answers than “no” answers is an example of confirmation bias) is actually the better example of said bias in the experiment 🙂

    • Spoilers continue …

      I think the key here is not whether some of us can figure it out (I did as well), but the statistics they collected and reported (after you guessed) on how many people guessed without ever getting a “no”. As a scientist and a moderate puzzle aficionado, I found it obvious that you could not be certain without getting some “no” answers back as well. But based on the statistics, not many humans approached it that way. And in fairness, there is no telling how much Jeremy’s framing it as confirmation bias made me more cautious than I would have been otherwise.

      • I’m certainly not taking exception to the observation that it’s a bit depressing that so many people offered guesses without ever probing negative examples (though I’d actually suggest that they’re suffering from an interesting sub-class of confirmation bias, where they’ve produced a mental model that is a special-case version of the true model. This is distinctly different from the “ignore counter-evidence” variety of confirmation bias found in, e.g., the Due Course discussion of conspiracy theorists and cultists).

        Where I differ is on the author’s assumption that people producing more positive tests, and three or fewer negative tests, is additional evidence of confirmation bias.

        It is true that people intentionally (consciously or subconsciously) probing “assumed to be true” cases more frequently than “assumed to be false” cases will produce this pattern.

        It is also true that people who initially guessed a more restrictive initial rule (e.g., “every number is double the last”), who are probing “assumed to be false” cases, and successively refining their hypotheses when those unexpectedly return true, will produce the same pattern. The authors need to test a few other rules with different properties, rather than simply plucking a black raven from the sky and assuming the results substantiate that hypothesis.

      • “though I’d actually suggest that they’re suffering from an interesting sub-class of confirmation bias, where they’ve produced a mental model that is a special-case version of the true model. This is distinctly different from the “ignore counter-evidence” variety of confirmation bias found in, e.g., the Due Course discussion of conspiracy theorists and cultists”

        Good point. My deliberate provocation in the original post was slightly tongue in cheek. For reasons like the one you identify (and others), I actually think that fun little tests like this one tell us only a quite limited amount about human reasoning in general. I do think ecology is rife with weak confirmatory studies, but I don’t think that’s because of confirmation bias in the sense tested by this little quiz.

        I feel the same about other little quizzes testing other aspects of human reasoning.

  2. I was thinking about the confirmation bias quiz in light of the pattern-recognition exercises my son is being taught in kindergarten. If given the numerical sequence 2, 4, 6, and asked to continue it, the correct answer is supposed to be 8. Not, say, “any number greater than 6”. I was amused to imagine a kindergartner responding to this question with “You haven’t given me enough information. Many different pattern-generating rules are consistent with the sequence of numbers you provided. And to infer the next number in the sequence, I first need to infer the rule that generated the sequence.” 🙂
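     (A little R sketch making this point concrete appears after the comments, below.)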

  3. I think journals should provide some leeway to waive data archiving requirements, based on a project I was involved with. Our study touched on a subject that is somewhat politically and socially, but not scientifically, controversial. The trade group related to this issue pressured the journal, which did not have a data accessibility policy at the time, to force us to publish the data. This group has a history of trying to discredit solid research based on small errors that don’t actually affect the conclusions. They may also take data and re-analyze them in ways that suggest their products do not have negative effects. (The project leaders were able to resist this pressure in the end, and the article is highly cited. It was an interesting experience, though.)
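A footnote to the 2, 4, 6 discussion in the comments above: here is a minimal R sketch (my own illustration; the linked quiz itself isn’t reproduced here) of why positive examples alone can’t pin down a rule. Several simple candidate rules are all consistent with 2, 4, 6, and only a probe that risks a “no” starts to separate them.

    # Candidate rules, each a function that says whether a sequence fits it.
    rules <- list(
      "goes up by 2 each time"   = function(s) all(diff(s) == 2),
      "each number doubles"      = function(s) all(s[-1] == 2 * s[-length(s)]),
      "any increasing sequence"  = function(s) all(diff(s) > 0),
      "all even numbers"         = function(s) all(s %% 2 == 0),
      "any three numbers at all" = function(s) TRUE
    )

    # Which rules are consistent with a given sequence?
    consistent_with <- function(s) names(Filter(function(f) f(s), rules))

    consistent_with(c(2, 4, 6))   # four of the five rules survive
    consistent_with(c(1, 2, 3))   # a probe that risks a "no" knocks most of them out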
