On finding errors in one’s published analyses

Dan Bolnick just had a really important – and, yes, brave – post on finding an error in a published study of his that has led him to retract that study. (The retraction isn’t official yet.) In his post, he does a great job of explaining how the mistake happened (a coding error in R), how he found it (someone tried to recreate his analysis and was unsuccessful), what it means for the analysis (what he thought was a weak trend is actually a nonexistent trend), and what he learned from it (among other things, that it’s important to own up to one’s failures, and that there are risks in using custom code to analyze data).

This is a topic I’ve thought about a lot, largely because I had to correct a paper. It was the most stressful episode of my academic career. During that period, my anxiety was as high as it has ever been. A few people have suggested in the past that I should write a blog post about it, but it still felt too raw – just thinking about it was enough to cause an anxiety surge. So, I was a little surprised when my first reaction to reading Dan’s post was that maybe now is the time to write about my similar experience. When Brian wrote a post last year on corrections and retractions in ecology (noting that mistakes will inevitably happen because science is done by humans and humans make mistakes), I still felt like I couldn’t write about it. But now I think I can. Dan and Brian are correct that it’s important to own up to our failures, even though it’s hard. Even though correcting the record is exactly how science is supposed to work (and I did correct the paper as soon as I discovered the error), it is still something that is very hard for me to talk about.

 

To explain more about what happened in the case of my paper: while I was at Georgia Tech, we did a really huge experiment where we quantified evolution in 7 populations in response to parasite epidemics. This was paired with an intensive field survey (led by my collaborator Spencer Hall) and theoretical work (done with Chris Klausmeier). Chris and I did the theory first, leading to predictions for how evolution should vary with epidemic size. After many, many months of collecting data, when I finally plotted the results, they matched our predictions beautifully. I printed off the figure and ran down the hall to my colleague Mike Goodisman’s office. I was so excited about it that I could hardly explain what we’d found. I don’t think my attempted explanation made any sense to Mike, but he shared my enthusiasm anyway. I will forever remember that moment.

We wrote up a paper laying out how the predation and productivity environment drove epidemic size, and how that, in turn, determined the type of evolution that occurred in the host population. We thought this was really neat, and were excited that the reviewers at Science did, too. That an image I submitted was selected for the journal cover was icing on the cake. It was such a high when that paper came out.

The paper came out in the spring of 2012. That summer, I moved to Michigan. In early March, I had my second child. In early May (days after I’d turned off the autoreply on my email), I received a request from someone for the data and code related to the paper. The request was kind of strange – it seemed to be a form letter, and was from someone who seemed to be a computer science student who didn’t give any indication for why they were interested in the data and code. (My guess is that it was a study that was testing whether scientists share data and code.) Still, the person asked for the data and code, and I wanted to share it. It was a Friday evening when I got the email, but I had a little time while the baby was napping, and figured I would work on it then. I pulled up the files just to make sure everything was in order and that I could explain in an email what everything was. (We’ve since gone to always publishing data with our studies, but we hadn’t made that shift yet and this was the first time I was sharing the data for this study.) In doing that check, I scanned through the data file and something stood out to me – there were a couple of phosphorus values that were way too high. Way, way too high. I immediately realized something had gone wrong and wanted to vomit. (I may actually have vomited. I can’t remember.)

I couldn’t sleep that night – or indeed, for several nights after. I also couldn’t eat for several days (which is saying something, given that I had a two-month-old whom I was nursing). I knew we had to correct the data file, redo the analyses, and, if necessary, correct or retract the paper. But knowing that I had to do that and actually doing it were two different things. (For starters, the really high anxiety made it hard to focus well enough at first to do anything.)

In the end, we figured out what had happened (a few rows had shifted when I imported the data into Systat, since I didn’t have a “.” in the data file where we were missing phosphorus data for a particular lake-date). That I didn’t notice this is 100% my fault. I should have noticed. This experience made it abundantly clear that I should always look at summary statistics and/or figures before doing an analysis, just to get a sense for whether anything looks really off.
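
Just to illustrate the kind of check I mean (this is a made-up sketch in R, with invented file and column names and an arbitrary cutoff – not my actual data or code), even a few lines like these would have flagged those impossible phosphorus values right away:

    # Quick sanity checks before any analysis: look at summaries and a simple
    # plot, and flag values that are clearly impossible.
    # (File name, column name, and the 200 ug/L cutoff are made up for illustration.)
    phos <- read.csv("lake_phosphorus.csv")

    summary(phos$total_P)    # any absurdly large (or negative) values?
    hist(phos$total_P)       # does the distribution look plausible?

    # show any rows with phosphorus outside a plausible range
    phos[phos$total_P < 0 | phos$total_P > 200, ]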

We then, as quickly as we could (remember, I had a newborn), not only redid the analyses, but went back to the original data sheets to make sure there were no other errors anywhere. (Fortunately, there weren’t.) Fortunately for us, the main conclusions of the paper still held. I then prepared an email to the editor at Science explaining what happened. I didn’t know whether they would want a correction or a retraction, and was relieved when they said they wanted a correction.

I think this situation would have been really stressful and painful regardless of when it happened – it’s clear the experience was very hard on Dan, too. But it surely didn’t help that I discovered the error while I had a newborn (and, therefore, pretty extreme sleep deprivation) and when I was one month away from submitting my tenure dossier. I didn’t think that paper was crucial to me getting tenure, but it surely wouldn’t be ideal to have to retract a high profile paper in the middle of the tenure review process.

The process also felt very lonely at first. Not many people talk about this process, so one can feel alone while going through it. But, when I talked to a few people I trusted about it, the responses were either that they’d had a similar experience, or something along the lines of “there but for the grace of God go I”. In the case of people who said the former, I realized that, in one case, I had known about the correction, but had totally forgotten about it. That was very comforting because, at the time, my anxious, irrational response was to feel like I was branding myself with a scarlet letter and that people would forever view me as a bad scientist. For people with the latter reaction, there were several people who talked of narrow escapes – where they discovered an error right as they were about to submit a paper (or after it had been submitted but before it had been accepted). It became clear that, while no one really talks about this, it’s not such an uncommon experience.

For me, this experience has changed how I do science. It is a large part of why I have moved to making data and code publicly available right away (I truly hope that reviewers go through the data and code when reviewing a paper!) and why I’ve moved to R for most of my analyses. Part of my motivation for moving to R is that reviewers are more likely to be familiar with it (and, therefore, to catch a mistake). Another reason is that it is what my lab uses, and I will not be able to catch errors in their code if I don’t know how to use R. But it does come with the risk that caused Dan problems: because I am still not entirely comfortable in R, there is a chance that I will make a mistake and not know it. This worries me a lot. I fully agree with Dan that we need a better way to monitor the code that we use to analyze our data. My story shows that errors along these lines are not unique to R. But I think that we are more likely to find these errors when the analysis is done in R, because it’s easy to share the code and for others to evaluate it. So, for me, these errors are an argument in favor of using R. But I fully agree that it would be great if we had better systems set up for catching these mistakes.

As someone with anxiety, this experience was really, really hard. Really, really, really hard. As I said earlier, it caused a huge anxiety flare when it happened. Writing this post has made me feel jittery and I am fighting the urge to just trash it. The experience has made it harder for me to get past my perfectionism and to say it’s okay to submit a paper. It leads to a desire to check data obsessively.

Fortunately, time has made the feelings associated with the experience lessen. But I’m pretty sure that, for the rest of my career, my first reaction when I learn of a scientist who discovers an error in their work that leads to a correction or retraction will be to feel intense sympathy for them. It shouldn’t require courage to correct one’s published work, but it does. My hope is that sharing my experience will make it easier for someone else to correct their mistakes in the future.

 

Many thanks to Dan for giving me the nudge I needed to finally write this up.

14 thoughts on “On finding errors in one’s published analyses”

  1. I’ve had to issue two corrections for really basic errors in equations. In the first, I got the fundamental Baranov catch equation wrong in a paper. In a paper about that very equation.
    Wrong: C = F/(M+F)*N*exp(-M-F)
    Right: C = F/(M+F)*N*[1-exp(-M-F)]
    (C = catch, F = fishing mortality, M = natural mortality, N = numbers)
    Source: http://www.nrcresearchpress.com/doi/pdfplus/10.1139/F09-107
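
    To give a sense of how much that misplacement matters, here is a rough numerical illustration in R (the F, M, and N values below are invented, not taken from the paper):

        # Baranov catch equation, wrong vs. right form
        # (F, M, and N values are purely illustrative)
        F <- 0.2; M <- 0.2; N <- 1000
        C_wrong <- F / (M + F) * N * exp(-M - F)         # as originally published
        C_right <- F / (M + F) * N * (1 - exp(-M - F))   # as corrected
        c(C_wrong, C_right)   # roughly 335 vs. 165, not a small difference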

    In the second one, I made a coding error in R in the basic equation used to project whale numbers by the International Whaling Commission.
    Wrong: N[t+1] = N[t] + r*N[t]*((1-N[t]/K)^z) - C[t]
    Right: N[t+1] = N[t] + r*N[t]*(1-(N[t]/K)^z) - C[t]
    (C = catch, N = numbers, r = intrinsic growth rate, K = carrying capacity, z = 2.39, which ensures that maximum productivity occurs at 60% of K)
    The correction is pending; it changed the main numbers but not the main conclusions.
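
    Again, purely to illustrate (the r, K, starting abundance, and catch values below are invented, not the IWC’s), the misplaced parenthesis sends the projection off in a very different direction:

        # Pella-Tomlinson-style projection, right vs. wrong parenthesisation
        # (r, K, starting N, and the catch series are made up for illustration)
        r <- 0.04; K <- 10000; z <- 2.39
        Ct <- rep(100, 50)                   # constant catch each year
        N_right <- N_wrong <- numeric(51)
        N_right[1] <- N_wrong[1] <- 5000

        for (t in 1:50) {
          N_right[t + 1] <- N_right[t] + r * N_right[t] * (1 - (N_right[t] / K)^z) - Ct[t]
          N_wrong[t + 1] <- N_wrong[t] + r * N_wrong[t] * ((1 - N_wrong[t] / K)^z) - Ct[t]
        }

        c(N_right[51], N_wrong[51])   # one population grows, the other declines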

    As careful as we are as scientists, we are not perfect, need to own up to our errors, and correct them so that science progresses. Facts matter, equations matter, and truth matters.

  2. Thanks for a very brave post, Meg. It is indeed important for science as a whole to be self-correcting–but that often demands a lot of bravery on the part of individual scientists. It’s a hard thing to produce a culture in which people are careful to avoid mistakes *and* quick to correct them, because the very same values that make us all so careful to avoid mistakes are the ones that make us terrified of what our colleagues will think of us when the inevitable mistakes happen.

    That you, and Dan (and Janneke, in the case Brian discussed), acted as you did shows just how good you are at science. I’m in the “there but for the grace of God go I” camp so far (as far as I know!). I hope that if I ever find myself in the same situation–and I might, since after all I’m human–I’ll do the same. And maybe even find the extra courage to write a post about it.

    In reading Dan’s post, I think I disagree with him a bit on one point. Or at least, there’s a possible reading of his remarks that I disagree with. He says that he considers his post a form of public “penance” or “self-flagellation”, and suggests that this is something everyone should do (“we must own up to our failures”). I agree 100% that we need to correct our mistakes in the scientific record. Dan needed to contact the journal, so did you, so did Janneke. Hard as that was for each of you to do, it would’ve been wrong not to. But I worry that saying (in effect) “you *should* beat yourself up over your honest mistakes, you *should* be ashamed” goes too far and if anything is counterproductive. First because I don’t know *anybody* who is insufficiently ashamed when they make a mistake (the rare fraudsters and lazy corner-cutters aside, and they *are* rare). Second because beating yourself up really hard over an honest mistake isn’t healthy, and increases the likelihood that if you do make a mistake you’ll respond in the wrong way rather than the right way. Third because if you say “I am ashamed of myself and I should be”, you’re basically giving other people license to be ashamed of you too. You’re coming dangerously close to saying or implying “no, mistakes aren’t inevitable, and everybody doesn’t make them–it’s only screwups (like me) who make them.” And that’s like a hunting license to the self-righteous shame vigilantes who think, incorrectly, that science as a whole would be improved if individual scientists were publicly pilloried for their honest mistakes (and yes, there are plenty of people who think this–just look in the Retraction Watch and PubPeer threads, or think of Nathan Myhrvold or Dan Graur).

    Also, I’ll re-up this from an old linkfest. John Hutchinson’s own brave blog post on having to retract one of his papers because of an honest mistake: https://whatsinjohnsfreezer.com/2014/05/10/co-rex-ions/

    • Good points.

      That Myhrvold-linked piece is a good one, and important. I struggle with the issues it raises all the time. It’s not easy deciding how (and, in many cases, what) to criticize in scientific work. It’s not easy to do it right either, by which I mean being both correct in the criticisms themselves and *effective* in moving the science forward. I find there to be disagreement on the latter issue in particular–how to be effective.

  3. Pingback: On finding an error in my own published paper | The Lab and Field

  4. Great post, Meghan. Thanks for investing the energy required to write it up–I’m sure it wasn’t easy to relive it.

    If it makes you feel any better, people routinely make far larger mistakes, having to do with design, analysis, interpretation, etc. Such errors are less likely to arise as “honest mistakes”, and yet often nobody says anything or even seems to notice. A data copying error is child’s play compared to that stuff.

  5. Thanks for this post, Meg.

    I would only like to say that I think “there are risks in using custom code to analyze data” could easily be changed to “there are risks in analyzing data.” Doesn’t matter what you use. You can make a mistake using it. That goes for custom code, library code, graphical analysis programs, and, heck, analyses on paper.

  6. Pingback: Inference and being wrong in a post-truth era | Practical Data Management for Bug Counters

  7. Pingback: Friday links: how to spot nothing, Aaron Ellison vs. Malcolm Gladwell, and more | Dynamic Ecology

  8. Pingback: Friday links: a rare retraction in ecology, and more | Dynamic Ecology

  9. Hi Meghan, Thanks for your post. Though it’s been two years since you put it up, I had been researching statistical errors in papers and anxiety, as I am unfortunately suffering from both! I was very glad to find your piece, though my circumstances are a bit different. I just submitted my MSc dissertation in Statistics and found mistakes in two regression equations. There are four equations in all. These were typos: the equations include a term (age) that I had decided to remove from my analysis, replacing it with gender. While I changed the entire regression, output, and explanation/discussion, I forgot to change the names of the coefficients in the actual regression equation. So my regression equation shows age, while everything else after it shows the work done with gender. I do not get a second chance – resubmitting is not an option. So I was wondering if you could advise me on whether it is a serious enough error for me to fail my paper? I do not want to ask my prof, as she will be the one marking my paper! Any help you can give me will be very much appreciated. I just need to be prepared for the worst. My anxiety levels will thank you.
    Best wishes,
    Satori

  10. Pingback: Friday links: Haeckel vs. Christmas cards, green + bond = green bond, phylogeny of baked goods, and more | Dynamic Ecology

  11. Pingback: Friday links: Covid-19 vs. BES journals, Charles Darwin board game, and more | Dynamic Ecology

  12. Pingback: Friday links: Richard Lewontin 1929-2021, and more | Dynamic Ecology
