Is it really that important to prevent and correct one-off honest errors in scientific papers?

Wanted to highlight what I think has been a very useful discussion in the comments, because I know many readers don’t read the comments.

Yesterday, Brian noted that mistakes are inevitable in science (it’s a great post, BTW; go read it if you haven’t yet). Which raises the question of how hard we should work to prevent mistakes, and to correct them when they occur. After all, there’s no free lunch; opportunity costs are ubiquitous. Time, money, and effort you spend checking for and correcting errors is time, money, and effort you could spend doing something else.* I asked this question in the comments, and Brian quite sensibly replied that the more serious the consequences of an error, the more important it is to prevent it:

Certainly in the software engineering world it is widely recognized that it is a lot of work to eliminate errors and that there are trade-offs. If it is the program running a pacemaker, it is expected to do just about everything to eliminate errors. But for more mundane programs (e.g. OS X, Word) it is recognized that perfection is too costly.

Which raises the sobering thought that the vast majority of errors in scientific papers aren’t worth putting any effort into detecting or correcting. At least, not any more effort than we already put in. From another comment of mine:

Yes, the consequences of an error must be key here. Which raises the sobering thought that most errors in scientific papers aren’t worth checking for or eliminating! After all, a substantial fraction of papers are never cited, and only a tiny fraction have any appreciable influence even on their own subfield or contribute in any appreciable way to any policy decision or other application.

xkcd once made fun of people who are determined to correct others who are “wrong on the internet”. It’s funny not just because it’s mostly futile to correct the errors of people who are wrong on the internet, but because it’s mostly not worth the effort to do so. [Maybe] most (not all!) one-off errors in scientific papers are like people who are “wrong on the internet”…

What worries me much more are systematic errors afflicting science as a whole, that arise even when individual scientists do their jobs well–zombie ideas and all that.

Curious to hear what folks think of this. Carl Boettiger has already chimed in in the comments, suggesting that my point here is the real argument for sharing data and code. The real reason for sharing data and code is not so that we can detect and correct isolated, one-off errors.** Rather, we share data and code because:

Arguing that individual researchers should do more error checking than they already do both runs counter to existing incentives and can only slow science down; sharing speeds things up. I love Brian’s thesis here that we need to acknowledge that humans make mistakes. Because publishing code or data makes it easier for others to discover mistakes, it is often cited in anonymous surveys as a major reason researchers don’t share; myself included. Most of this will still be ignored, just as most open source software projects are; but it helps ensure that the really interesting and significant ideas get worked over and refined and debugged into robust pillars of our discipline, and makes it harder for an idea to be both systemic and wrong.

I’m not sure I agree that sharing data and code makes it harder for an idea to be both systemic and wrong. The zombie ideas of which I’m aware in ecology didn’t establish themselves because of lack of data and code sharing. But I like Carl’s general line of thought, I think he’s asking the right questions.

*A small example from my own lab: We count protists live in water samples under a binocular microscope. Summer students who are learning this procedure invariably are very slow at first. They spend a loooong time looking at every sample, terrified of missing any protists that might be there. Which results in them spending lots of wasted time staring at samples that are either empty, or in which they already counted all the protists. Eventually, they learn to speed up, trading off a very slightly increased possibility of missing the occasional protist (a minor error that wouldn’t substantially alter our results) for the sake of counting many more samples. This allows us to conduct experiments with many more treatments and replicates than would otherwise be possible. Which of course guards against other sorts of errors–the errors you make by overinterpreting an experiment that lacks all the treatments you’d ideally want, and the errors you make because you lack statistical power. I think people often forget this–going out of your way to guard against one sort of error often increases the likelihood of other errors. Unfortunately, the same thing is true in other contexts.
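A toy simulation makes this trade-off concrete (all numbers below are invented for illustration; they are not measurements from my lab). With a fixed time budget, counting each sample faster introduces a small bias from missed protists, but buys you more samples and hence less sampling noise:

```python
# Sketch of the speed-vs-thoroughness trade-off under a fixed time budget.
# All parameters (time budget, densities, detection rates) are made up.
import numpy as np

rng = np.random.default_rng(1)

TOTAL_MINUTES = 600   # hypothetical time budget for a counting session
TRUE_MEAN = 20        # hypothetical true mean protists per sample

def mse_of_estimated_mean(minutes_per_sample, detection_prob, n_sims=2000):
    """Mean squared error of the estimated mean density, given time spent per
    sample and the fraction of protists detected at that speed."""
    n_samples = int(TOTAL_MINUTES / minutes_per_sample)
    errors = []
    for _ in range(n_sims):
        true_counts = rng.poisson(TRUE_MEAN, size=n_samples)
        observed = rng.binomial(true_counts, detection_prob)  # some protists missed
        errors.append(observed.mean() - TRUE_MEAN)
    return float(np.mean(np.square(errors)))  # MSE = bias^2 + variance

# Slow, careful counting: near-perfect detection but few samples.
# Fast counting: ~2% of protists missed, but three times as many samples.
for label, minutes, p in [("slow", 15, 0.999), ("fast", 5, 0.98)]:
    print(label, "n =", int(TOTAL_MINUTES / minutes),
          "MSE =", round(mse_of_estimated_mean(minutes, p), 3))
```

On these made-up numbers the faster protocol wins: the squared bias it introduces is smaller than the variance it removes by tripling the number of samples. Make the miss rate much larger and the comparison flips, which is exactly the crossover discussed in the comments below.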

**I wonder if a lot of the current push to share your data and code so that others can catch errors in your data and code is a case of looking under the streetlight. It’s now much easier than it used to be to share data and code, so we do more of it and come to care more about what we can accomplish by doing it. Which isn’t a bad thing; it’s a good thing on balance. But like any good thing it has its downsides.

11 thoughts on “Is it really that important to prevent and correct one-off honest errors in scientific papers?”

  1. I would add that in sharing data or code, one tends to do an extra round of error-checking. So it’s not just that other people might catch errors, but also that by sharing, you work with the data/code more or in different ways. I’m in the process of publishing data and code, which has prompted me to go back and add comments to the code, making it more reusable. And we have caught a couple of errors in the data because of the structure of the data tables we’re publishing. It’s entirely possible that we would have caught these errors at some point anyway, but by sharing, we conducted a few extra sanity checks early on, and so caught the errors early. We would much prefer to fix our own errors before publishing than to have others come back and say, “uh, your data don’t make sense.”

  2. Hi Jeremy, I love the ‘counting protists’ example because it’s a reminder that when we deal with data we just accept that “observation error” is part of the process — whether it’s the limits of our instruments or of being human and making transcription errors, etc. Of course everyone tries to reduce these errors, but in the end we just include the reality of observation error as part of the model, something we can deal with statistically rather than branding the data with a red A. We’re taught to read scientific papers critically anyway, and so it makes sense to acknowledge that they may contain human errors that we should account for in our use of them, just as we do in our use of data.

    As a side issue, I actually agree with you 100% that open data doesn’t really impact zombie ideas; where I take your definition of zombie idea to be something that all specialists have since dismissed but that still persists in the broader community — things like that delightful example of the watering plants and life expectancy, or the gold standard, etc. Such ideas serve emotional or political roles independent of the data anyway, and thus I think are more issues of education and communication than research. By systemic and wrong, I meant only to address the hypothetical worst case for being tolerant of human mistakes in published literature — that fear you have when you uncover some little error in something you have published, and unlike the cases people have shared where “fortunately it did not change the results”, the error does change the results (think Reinhart-Rogoff). One might fret how terrible it is that a human mistake could result in very influential but incorrect conclusions. In general, our scientific process is pretty good at scrutinizing the really big, influential conclusions (systemic ideas) with or without the data, though as analyses get ever bigger and more complex it does become harder. I do think open data makes it even less likely that the whole community would adopt an idea that was the result of some human accident. These wouldn’t be zombie ideas, because they haven’t been killed yet. Sure, they might come back to life as zombie ideas once we discover the mistake (perhaps the R&R thesis lives on as a zombie idea in some circles despite its death in academic economics), but that is a different issue.

  3. Regarding the protists example: Since the faster counting most likely results in a systematic deviation due to missed protists, one could in principle correct for that deviation. You would need one very slow and diligent summer student to produce as-accurate-as-possible reference numbers, and then have the same samples evaluated by people counting at their normal speed. I admit that the deviation may be a very “personal” one, since some people will make more mistakes than others at normal speed. Still, you would be able to quantify the deviation, report it in your paper, and argue that the (systematic) deviation is indeed smaller than the decrease in standard error gained from evaluating more samples.

    • Yes, that could be done. It is itself costly to do, especially since you’re suggesting doing different calibrations for different students. Student accuracy could change over time, so you could also argue for recalibrating periodically. The accuracy might possibly also depend on the species they’re counting (many different species are involved in our experiments). And so on.

      In my lab, we train the summer students and then check their accuracy at the beginning of the summer, with the species they’ll be using that summer. Which is much less thorough validation than what you’re suggesting. My professional judgement is that more validation would be tremendous overkill.

      • Definitely. I’ve had small teams of undergrads counting aphids on plants, which I imagine suffers from some of the same issues as counting protists. Each week we did 196 plants, and each week I randomly chose 6 plants and had everyone do counts on them. This allowed me to model the observation error among people and over time, which was important because it accounted for quite a bit of the variation in counts. I also shared the results weekly with the undergrads, so that they could learn speed-up-without-reducing-accuracy tricks from one another. (A rough sketch of how such recounts can be turned into per-observer correction factors appears after this thread.)

      • I agree that doing such a thing every time, and possibly even periodically, is total overkill. My reason for bringing up the suggestion was that your “protist counting” is an excellent example of a general decision one faces regularly when optimizing an experiment with different error sources under limited total resources: in this case, the systematic deviation from counting faster and missing a certain fraction of the protists vs. counting significantly fewer samples and accepting an increased statistical error. Usually in such cases there is a crossover. To illustrate the most extreme case: if you count too superficially but count very many samples, you get an enormous amount of meaningless junk data with a tiny error bar.

        I come from an entirely different field (nothing to do with ecology), and sadly in my field there are examples of an equivalent of the latter category.

        I think this nicely fits your observation that “going out of your way to guard against one sort of error often increases the likelihood of other errors”.

      • And of course I did not mean to imply that your counting produces such data. I was simply “abusing” your example. 🙂

      • No worries Chris. Sorry if I seemed a bit touchy about it. I used the same example of my own counting procedures over on Andrew Gelman’s blog in a discussion of the same issue, and had people basically telling me I was incompetent because I don’t have multiple students count every sample or something.
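Building on the reference-counter idea and the weekly-recount scheme discussed in the replies above, here is a rough sketch of how repeated counts of the same units could be turned into per-observer detection rates and corrected counts. Everything in it is an assumption made for illustration (the observer names, their detection rates, the six reference units, and the use of the maximum count as a stand-in for a slow, diligent reference counter); it is not the procedure either commenter actually used, and a hierarchical observation-error model would be a more defensible way to do this in practice.

```python
# Illustrative sketch only: estimate crude per-observer detection rates from
# repeated counts of shared reference units, then correct each observer's counts.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-observer detection probabilities (assumed, for illustration).
observers = {"A": 0.95, "B": 0.90, "C": 0.85}

# Six shared reference units (cf. the 6 plants recounted weekly in the aphid
# example), with simulated true abundances.
true_counts = rng.poisson(30, size=6)

# Each observer counts the same reference units; missed individuals are binomial.
recounts = {name: rng.binomial(true_counts, p) for name, p in observers.items()}

# Stand-in for the "slow and diligent" reference counter: the maximum count per
# unit across observers (crude, but enough for a sketch).
reference = np.maximum.reduce(list(recounts.values()))

for name, counts in recounts.items():
    detection = counts.sum() / reference.sum()  # crude per-observer detection rate
    corrected = counts / detection              # bias-corrected counts
    print(f"observer {name}: detection ≈ {detection:.2f}, "
          f"corrected mean ≈ {corrected.mean():.1f}")
```

In practice one would also want to track how each observer’s detection rate changes over the season, which is what the weekly recounts described above make possible.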
