Retraction Watch has the story of a large correction to a recent ecology paper. The paper estimated the cost of invasive plant species to African agriculture. The cost estimate was $3.66 trillion, which turns out to be too high by more than $3 trillion. The overestimate was attributable to two calculation errors, one of which involved inadvertently swapping hectares for square kilometers. Kudos to the authors for correcting the error as soon as it was discovered.
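(As an aside, here’s a hypothetical sketch, in Python, of how that kind of unit mix-up propagates. This is not the paper’s actual calculation; the invaded area and per-area cost below are made up, and the only point is the factor-of-100 conversion between the two units, since 1 km² = 100 ha.)

```python
# Hypothetical illustration of a hectares / square-kilometers mix-up.
# This is not the paper's actual calculation; the area and cost figures
# are invented, and only the unit arithmetic is the point.

HECTARES_PER_KM2 = 100  # 1 km^2 = 100 ha

invaded_area_ha = 5_000_000   # made-up invaded area, in hectares
cost_per_km2 = 12_000.0       # made-up damage cost, in dollars per km^2

# Correct: convert hectares to km^2 before applying the per-km^2 cost.
correct_cost = (invaded_area_ha / HECTARES_PER_KM2) * cost_per_km2

# The mix-up: treating hectares as if they were km^2 inflates the total
# by a factor of 100.
inflated_cost = invaded_area_ha * cost_per_km2

print(f"correct:  ${correct_cost:,.0f}")
print(f"inflated: ${inflated_cost:,.0f}")
print(f"ratio:    {inflated_cost / correct_cost:.0f}x")   # 100x
```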
But should the authors have found the error earlier? After all, as the linked story points out, the original estimate of the agricultural cost of invasive plant species ($3.66 trillion) is much larger than Africa’s entire GDP. The calculation error was discovered after a reader who didn’t believe the estimate repeated the authors’ calculations and got a different answer. But it’s not as if the authors were careless. They’d already double-checked their own calculations. Mistakes happen in science. And sometimes those mistakes pass through double-checking.
This isn’t the first time something like this has happened in ecology. Here’s a somewhat similar case from a few years ago.
Which raises the question that interests me here: what should you do if you obtain a result that seems like it can’t be right? Assume that the result merely seems surprising or implausible, not literally impossible. It’s not that you calculated a negative abundance, or a probability greater than 1, or found that a neutrino moved faster than the speed of light. Ok, obviously the first thing you’re going to do is double-check your data and calculations for errors. But assume you don’t find any: what do you do then?
I don’t know. I find it hard to give general guidance. So much depends on the details of exactly why the result seems surprising or implausible, and exactly how surprising or implausible it seems. After all, nature often is surprising and counterintuitive! In the past, we’ve discussed cases in which ecologists had trouble publishing correct papers, because reviewers incorrectly found the results “implausible”. I don’t think it’d be a good rule for scientists to never publish surprising or unexplained results.
Here’s my one concrete suggestion: I do think it’s generally a good idea to compare your estimate of some parameter or quantity to the values of well-understood parameters or quantities. Doing this can at least alert you when your estimate is implausible, signaling that you ought to scrutinize it more closely. I think such comparisons are a big improvement on vague gut feelings about plausibility. So yes, I do think you should hesitate to publish an estimate of the effect of X on African agriculture that massively exceeds African GDP, even if you can’t find an error in your estimate.
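To make that suggestion slightly more concrete, here’s a minimal sketch (in Python, with purely illustrative numbers; the GDP figure is a rough ballpark, not an authoritative statistic) of the kind of benchmark comparison I have in mind:

```python
# A minimal sketch of the plausibility check suggested above: compare an
# estimate against a well-understood benchmark and flag it for closer
# scrutiny if the ratio looks too large. All numbers are illustrative.

def plausibility_check(estimate: float, benchmark: float,
                       label: str, max_ratio: float = 1.0) -> bool:
    """Return True if the estimate looks plausible relative to the benchmark.

    max_ratio is the largest estimate/benchmark ratio you're willing to
    accept before going back to re-scrutinize the calculation.
    """
    ratio = estimate / benchmark
    plausible = ratio <= max_ratio
    verdict = "plausible" if plausible else "suspicious; re-check the calculation"
    print(f"{label}: estimate is {ratio:.2f}x the benchmark ({verdict})")
    return plausible

# Illustrative, order-of-magnitude figures only:
estimated_cost = 3.66e12   # the original $3.66 trillion estimate
africa_gdp = 2.5e12        # rough ballpark for Africa's total GDP

plausibility_check(estimated_cost, africa_gdp,
                   label="invasive plant cost vs. African GDP")
```

Of course, the hard part isn’t writing the check, it’s choosing a sensible benchmark and deciding how big a ratio is too big to believe. The value of doing it this way is just that it forces you to make the comparison explicitly rather than relying on a vague gut feeling.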
But it can be hard to implement that suggestion. Because your own subjective judgments as to what’s “implausible” are pretty flexible, even when disciplined by comparisons to well-understood data points. Humans are great rationalizers. Once you’ve double-checked your implausible-seeming result, you’re probably going to start thinking of reasons why the result isn’t so implausible after all. Everything is “obvious”, once you know the answer. For instance, as I said above, I feel like that massive overestimate of the effect of invasive species on African agriculture probably shouldn’t have been submitted for publication in the first place. The estimate is just too implausible. But is that just my hindsight bias talking? I don’t know.
Which I guess just goes to show why we have peer review. Your own subjective judgments as to what’s “implausible” are different than other people’s. So at the end of the day, all you can do is double-check your work as best you can, then let others have a look at it with fresh eyes. All of us working together won’t be perfect. But hopefully we’ll catch more errors than if we all worked alone.
Have you ever found a result that seemed like it “must” be wrong? What did you do? Looking forward to your comments.