Meg recently wrote a post acknowledging that crying in science is pretty commonplace. It really touched a nerve and went viral. Meg’s opening syllogism was masterful: humans cry, scientists are human, therefore scientists will cry.
I want to touch on an even more sensitive syllogism: humans make mistakes, scientists are human, therefore scientists will make mistakes. And a corollary – some mistakes will make it into print.
People obsessed with preserving a united front against science deniers might try to pretend this isn’t true. But it is true. This rarely acknowledged truth about scientists is fresh in everybody’s minds because of a recent retraction of an ecology paper (due to an honest mistake). I’m not even going to link to it, since singling out one group of individuals is a distraction from my main point about collective responsibility (but if it’s too distracting not to know, Jeremy linked to it on Friday).
What I am finding revealing is not that a retraction occurred but other people’s reactions to the fact that a retraction occurred. There seems to be a lot of distancing and blaming. The first commenter on Retraction Watch even went one step further and very sloppily and inaccurately started throwing around the phrase “fraud scandal” (really? is the topic of mistakes so taboo that we can’t differentiate the profound difference between a mistake and fraud?).
My reactions were rather different. In order of occurrence (and probably in order of increasing profundity), they were:
- Ouch – I feel bad for the authors
- I’m impressed with the way the authors handled this – it took a lot of courage
- That’s science working the way it is supposed to
- It could have been me
There’s no need to expand on the first one (except it’s worth noting I don’t know any of the authors personally, so this was more of a one-degree-removed, member-of-my-community form of empathy).
But I think it is worth dwelling on the second one for a moment. It must have been very tempting to bluster, deny that the mistakes were substantive enough to require a retraction, and hope it all faded away. We all know this strategy has a decent shot at working. In an infamous case in evolution (UPDATE: the link in Jeremy’s post is broken – follow this link), it worked for years until a co-author took it upon himself to self-publish and blow the whistle (nobody talks about this, but the journals have an obvious interest in not highlighting a mistake). But these authors didn’t weasel in any fashion. And they thought about the good of science before the good of their careers. Good for them!
As for the 3rd reaction – this is not a failure of science. It is a success of science! It is science working as it is supposed to. And it is exactly why science has a claim to a degree of rigor that other modes of thought don’t have. The reason my syllogism doesn’t eliminate science as a paragon of correctness is that – contrary to the popular view about lone geniuses – science is not about individuals or single papers. It is about the community and the total body of evidence. One individual can be right, wrong, a crackpot, a genius, mistaken, right for the wrong reasons, etc. But the community as a whole (given time) checks each other and identifies wrong ideas and mistakes. The hive mind will get the important things right with some time. If you read the details, this is exactly what happened. Good for science!
The last reaction is the touchiest of all (it could have been me*). Of course I do not knowingly have any mistakes in print. But I could have a mistake out there I don’t know about. And I’ve caught some that came close. And I could make one in the future. Should I be thinking that? Should I be admitting it on a public blog? I sure hope your answer to both of these questions is yes. If I’m not asking the first question (and admitting the possibility), how can I be putting my best effort into avoiding mistakes? The same goes for the community context. And I’m pretty sure no honest scientist can say they are 100% sure they have never made a mistake and never will make a mistake. 95% sure – I hope so. Maybe even 99% sure. But 100% sure? I don’t trust you if that is what you claim. Every lab I’ve ever worked in or been close to (meaning dozens) has had challenges and errors with data and coding and replicability of analysis. Most of them are discovered and fixed (or sadly prevent publication). But has anybody here ever run an analysis, gotten a particular t-statistic/p-value and written it up, and then run the analysis later and gotten a slightly different number and never been able to recreate the original? Anybody have one or two sample IDs that got lost in the shuffle and you don’t know what they are? These are admittedly small mistakes that probably didn’t change the outcome. But it is only a difference of degree. And I bet most of you know of bigger mistakes that almost got out the door.
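To make that unreproducible-p-value example concrete, here is a toy sketch (invented data and names, nobody’s actual workflow) of the kind of fully scripted analysis that lets you regenerate the exact t-statistic months later, rather than depending on steps done by hand:

```r
# Toy sketch: script every step, including the rows you drop, and fix the seed
# for anything stochastic, so the numbers in the manuscript can be regenerated.
set.seed(1)  # the simulated toy data (and any later resampling) become repeatable
dat <- data.frame(treatment = rep(c("control", "warmed"), each = 20),
                  abundance = c(rpois(20, 10), rpois(20, 13)))
dat <- dat[!is.na(dat$abundance), ]  # row drops live in the script, not in a hand-edited spreadsheet
fit <- t.test(abundance ~ treatment, data = dat)
cat(sprintf("t = %.3f, p = %.4f\n", fit$statistic, fit$p.value))
```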
I want to speak for a minute more specifically about coding. In this day and age nearly every paper has some coding behind it. It might just be an R script to run the analyses (and probably drop some rows with incomplete data, etc., along the way). But it might be like the stuff that goes on in my lab, including 1000+ line computer simulations and 1000+ line big-data analyses. Software engineers have done a lot of formal analysis of coding errors. And to summarize a lot of literature, they are numerous and the best we can do is move asymptotically towards eliminating them. Getting rid of even 90-95% of the errors takes a lot of work. Even in highly structured anti-error environments like NASA or the medical field, mistakes slip through (like the mis-transcribed formula that caused a rocket to crash). And science is anything but a highly structured anti-error environment (and we shouldn’t be – our orientation is on innovation). In a future post, I will go through some of the tricks I use to validate and have faith in my code. But that would be a distraction here (so you might want to save your comments on how you do it for that post too). The bottom line though is I know enough software engineering not to fool myself. I know there are errors in my code. I’ve caught a couple of one-line mistakes that totally changed the results while I was in the middle of writing up my first draft. I think and hope that the remaining errors are small. But I could be wrong. And if I am wrong and made a whopping mistake, I hope you find my mistake!
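Just to give a flavor of the kind of cheap check I have in mind (a toy sketch with made-up data and names – the real tricks are for that future post): assertions that stop the run when something impossible happens, so a one-line slip can’t silently make it into the figures.

```r
# Toy sketch of cheap sanity checks on a data-merging step (made-up data and names).
raw_data  <- data.frame(sample_id = 1:6,
                        site_id   = rep(c("A", "B"), each = 3),
                        abundance = c(5, 3, 8, 2, 0, 7))
site_info <- data.frame(site_id = c("A", "B"), elevation = c(120, 450))

merged <- merge(raw_data, site_info, by = "site_id")
stopifnot(nrow(merged) == nrow(raw_data))      # a bad join should fail loudly, not shrink the data quietly
stopifnot(!any(duplicated(merged$sample_id)))  # duplicated sample IDs are a classic silent error
stopifnot(all(merged$abundance >= 0))          # impossible values usually mean an upstream coding mistake
```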
The software industry’s effort at studying errors was just mentioned. But the medical and airline industries have recently devoted a lot of attention to the topic of mistakes as well (their mistakes are often fatal). The Institute of Medicine released a report entitled “To Err is Hman” with this telling quote:
“.. the majority of medical errors do not result from individual recklessness or the actions of a particular group–this is not a “bad apple” problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them.”
Broad-brushing the details, both medicine and the airlines have come to the conclusion that the best way to avoid mistakes is to 1) destroy the myth of infallibility, 2) eliminate the notion that raising the possibility of a mistake is offensive, 3) introduce a culture of regularly talking about the possibility of mistakes and analyzing mistakes made for lessons learned, and 4) make avoiding mistakes a collective group responsibility.
I think arguably science figured this all out a couple of hundred years ago. But it is worth making explicit again. And per #3 it is worth continuously re-evaluating how we’re doing. In particular we do #4 extremely well. We have peer review, post-publication review (which is stronger for prominent and surprising results), attempts at replication etc. We’re professional skeptics. We also do pretty well at #2; you expect and accept your work being criticized and picked apart (even if nobody enjoys it!). #1 is more of a mixed bag. I’ve heard a lot of “it could never happen in my lab” comments recently, which is exactly the myth of infallibility. And the same for #3 – I haven’t yet heard anybody say “I’m going to change X in my lab” in response to the recent incident. And more generally across #1-#4, I would suggest that coding is novel enough in ecology that we have not yet fully developed a robust set of community practices around preventing coding errors.
In conclusion, I am sure somebody is going to say I am glorifying mistakes in science. I’m not. Mistakes* are unfortunate and we all need to (and I think all do) put a lot of effort into avoiding them. But I sincerely believe there is no way to guarantee individual scientists do not make mistakes. At the same time, I also sincerely believe that a well-constructed scientific community is robust enough to find and correct all important mistakes over time. Which means it really matters whether we respond to mistakes by finger pointing or by examining our common culture and how to improve it. The latter is the conversation I want to have.
*Probably important to reiterate here that I’m talking about mistakes, not fraud. Whole different kettle of fish. I presume most people can see that, which is why I am not belaboring it.
This is really important, Brian – well said. Spookily, I caught a mistake in a figure moments after reading this… didn’t change the analysis, but somehow I plotted the wrong column and inserted the plot into my MS draft. Can I be 100% sure there’s no similar mistake anywhere in all my published work? Nope, and neither can anyone else…
Great post, Brian.
There is a spelling mistake in your link to “To Err is Human”…but it is actually quite beautiful in a poetic way considering the theme of your post (and the link)!
I agree with you that the likelihood of spotting your own mistakes is directly proportional to your acceptance of your own fallibility. And let’s face it, given a choice, most of us would rather find our own mistakes than have them pointed out to us by others.
Marten Scheffer had a short opinion piece in PNAS last year with the following quotes by Nobel Laureate Kenneth Arrow: “If you are not wrong two-thirds of your time, you are not doing very well.” and “if you are wrong you had better find out yourself, not only because it is more pleasant, but also because it helps you to learn.”
I assumed that was the spelling used in the medical report. Pretty disappointed it is not actually.
It should have been! I can either claim a creative subconscious or blame late-night typing. I of course prefer the former. 🙂
@Andrew: Same here! 🙂
Very well said. It’s depressing and surprising that it needs saying and isn’t totally, boringly obvious to everyone. I just had a frustrating exchange with a regular commenter over on Andrew Gelman’s blog, someone who’s co-authored with Gelman (so not a crank). He was arguing quite explicitly that the only acceptable rate of mistakes is zero, and that the only reason mistakes happen in science is because scientists haven’t learned how to “audit” like businesses and governments have. The notion that even auditors (and auditors of the auditors, and etc.) also make mistakes was foreign to him.
Advocates of post-publication review won’t want to admit it, but unfortunately one big reason people comment on PubPeer and Retraction Watch is the chance to express self-righteous fury without having to feel guilty about it. You get to point the finger and rip people on the assumption that they deserve it because they must be either unethical or incompetent. So that when good scientists make honest mistakes and correct them the instant they’re discovered, the commenters at RW tie themselves in knots trying to find some excuse to keep ripping the scientists in question rather than praising them.
Such people think they’re defending science as a whole from evil/sloppy individual scientists. What they’re actually doing in cases like the one that prompted this post is making science as a whole worse, by hurting good, honest scientists and creating strong incentives for people to hide honest mistakes and refuse to admit them when they’re discovered.
The blame for review lynch mobs rests squarely on the shoulders of scientists like this – http://www.chemistry-blog.com/2013/08/07/when-authors-forget-to-fake-an-elemental-analysis/ – who, for quite some time, have gotten away with publishing fraudulent/irreproducible work with impunity.
There are two main problems in science and both stem from the toxic bully culture.
The first is that mistakes (at least by students and junior researchers) are not permitted in science (which is ridiculous). Junior researchers are considered incompetent when they make mistakes; tenured professors are ‘distracted’ or ‘overworked’ or ‘misled’ by their students. It’s not uncommon to watch senior scientists rip apart students, etc at group meetings, seminars, and the like. This behavior is by far the more likely cause of scientists being reluctant to admit to mistakes, than post-publication peer review.
It’s time to accept that mistakes are a part of research and will happen to everyone, not just junior scientists/students, and that they aren’t always the result of incompetence (overwork and lack of sleep can also cause them; 90-hour weeks are pretty common). If we want innovative and creative scientists then we need to allow for mistakes, which aren’t the same as a lack of understanding of the fundamentals.
The second problem is that some scientists erroneously believe that the majority of scientists are honest. If someone is a prof, or graduate student they get away with things that would cause undergrads to be expelled from university, because the assumption is that profs and grad students (by virtue of their positions/or perhaps importance to the university system) make innocent (if incompetent) mistakes (grad students) and oversights (profs), whereas undergrads who behave in a questionable manner are cheaters trying to fake their way through their degrees.
Academic scientists are just like everybody else, and when you create a system that can be gamed, there are always people who will step up and attempt to game it. Having academic titles doesn’t automatically mean that you’re dealing with an honest person. The fact that the department needs TAs, and PIs need people to work in the lab, doesn’t automatically mean that those people are honest (just essential to the university – which is a different thing).
There’s plenty of incentive for dishonest people to cheat. Many times there’s millions of dollars in grant money, prestigious research positions, tenure, graduation, letters of recommendation at stake. Students with abusive advisors are put under tremendous pressure to do what their advisors say, and many PIs are put under pressure by their institutions and the community at large – http://www.cancerletter.com/articles/20150109_1
Yet nobody is talking about the pain and suffering that whistleblowers who try to call out scientific fraud experience from their peers after doing the right thing. Many of them are pushed out of science altogether – http://www.timeshighereducation.co.uk/features/life-after-whistleblowing/2014776.article
Why are things so bleak for whistleblowers if academia is stocked full of honest scientists who are just making innocent mistakes?
Scientists in academia are under enormous pressure to publish and obtain grants, and the dishonest ones cheat. This really isn’t that shocking since similar behavior has been observed pretty much everywhere else. Look at Wall Street, look at politicians, look at athletes taking steroids, etc.
As such, we need post-publication peer review (much in the way professional sports needs drug testing), and that process won’t be served by automatically assuming that every mistake is an honest mistake.
I agree with you that honest mistakes ought to be treated as such, and sadly there are trolls on both sides of the fence, but there have to be more serious consequences for researchers who are busted committing fraud, and I think it’s dangerous to assume that most researchers caught up in scandals on post-pub peer review sites are simply innocent victims of the lynch mob. Some are, but many aren’t.
I think your point about whistleblowing is an important one. We usually extract a terrible price from whistleblowers. In the context of mistakes, just as it should be easy to acknowledge a mistake, it ought to be easy to suggest a mistake.
In the context of fraud, the stakes are much higher, but we have to give whistleblowers more protection. The story about Trivers that I linked to above is sad but pretty typical.
“The blame for review lynch mobs rests squarely on the shoulders of scientists like this – http://www.chemistry-blog.com/2013/08/07/when-authors-forget-to-fake-an-elemental-analysis/ – who, for quite some time, have gotten away with publishing fraudulent/irreproducible work with impunity.”
I’m sure that’s a contributing factor. But it’s a (partial) explanation, not a justification. That somebody has committed fraud is not a justification for attacking people who haven’t.
“It’s not uncommon to watch senior scientists rip apart students, etc at group meetings, seminars, and the like. This behavior is by far the more likely cause of scientists being reluctant to admit to mistakes, than post-publication peer review. ”
Clearly your experience has been very different than mine, or that of the scientists I know. The main reasons the scientists I know are hesitant to admit mistakes is not that they were bullied as students. Bullying absolutely happens, and it shouldn’t–but most senior academics aren’t bullies (see here for discussion: https://dynamicecology.wordpress.com/2013/12/21/ask-us-anything-academic-bullying-in-ecology/). Most scientists hesitate to admit mistakes first and foremost because they hold themselves to high standards, and so feel embarrassed when a mistake slips through (and no, they don’t hold themselves to high standards because they were bullied as students). And I’m sorry, but another big reason they hesitate to admit mistakes is because they’re afraid that others will leap to the conclusion that they’re incompetent or fraudsters. But don’t just take my word for it: http://whatsinjohnsfreezer.com/2014/05/10/co-rex-ions/
“The second problem is that some scientists erroneously believe that the majority of scientists are honest.”
Please provide links to data showing that the majority of scientists are dishonest.
And if the majority of scientists are dishonest, then it’s not clear why we should accept that mistakes are a part of research. There are serious tensions between your first and second points, so that I’m having trouble understanding your overall views.
“There’s plenty of incentive for people to cheat.”
Yes, and there are plenty of incentives for them not to. And in any case, incentives are one thing and actual behavior another. You seem to have confused the statement “there are incentives to cheat” with “the majority of scientists cheat”. It’s a mistake I’ve made myself in another context (incentives for scientists to do peer reviews) – falsely assuming that, because there are strong incentives for people to behave a certain way, they either do or soon will all start behaving according to those incentives. See this post: https://dynamicecology.wordpress.com/2014/03/25/do-individual-ecologists-review-in-proportion-to-how-much-they-submit/
I’m not sure why you’re talking about fraud or whistleblowers at all, given that this post is about honest mistakes. You seem to be committing the very mistake that the post (and the first part of your comment) seeks to avoid: conflating fraud and honest mistakes. You ask why no one’s talking about the pain and suffering of whistleblowers. The reason is because the treatment of whistleblowers has absolutely nothing to do with the topic of the post. We’re not discussing the treatment of whistleblowers for the same reason we’re not discussing the drought in California. EDIT: I withdraw this remark, treatment of whistleblowers has come up in the comments. Which I take it is why you brought it up? Or are you suggesting that the original post should’ve talked about the treatment of whistleblowers?

“that process won’t be served by automatically assuming that every mistake is an honest mistake.”
Neither Brian nor any commenter said or implied that we should assume every mistake is an honest mistake. If you think otherwise, please quote the relevant passage. EDIT: And note that in the case that prompted this post, there are no assumptions involved. The RW post, which quotes extensively from all participants, makes absolutely clear that this is an honest mistake. So it’s not a matter of assuming an honest mistake in the absence of any information to the contrary, or despite contrary information–we have information demonstrating that it *is* an honest mistake. And yet the very first commenter over there tosses around references to fraud. It’s this kind of behavior that motivated my comment, to which you’re replying.
“I think it’s dangerous to assume that most researchers caught up in scandals on post-pub peer review sites are simply innocent victims of the lynch mob. Some are, but many aren’t.”
Again, nobody ever said this. If you think otherwise, please provide a quote. You clearly have very strongly held views, but with respect it seems like they’re so strongly held that they’re causing you to misread the views of others. You’re trying to put words in other people’s mouths, and that can only hinder productive discussion.
We’re happy to have comments that disagree with the post. But putting words in other people’s mouths, wandering off topic, and making very strong assertions without evidence does not contribute to productive debate.
Great post Brian. I recently found a coding error in a large data analysis during a second round of review – no change in results, but if the reviewers had been easier on my paper it could have slipped into print quite easily. I was also grateful that the reviewers and editor were supportive through the process, rather than punitive. I look forward to the post on your best practices for guarding against coding errors.
Reblogged this on The Typewriter.
So here’s a question: when it comes to error checking–not just coding errors, but any sort–how do you distinguish between checks that are worth doing and checks that aren’t? Not just because every error check is costly in various ways, but because sometimes guarding against one sort of error can actually make a different sort of error more likely to occur and/or less likely to be detected. Is it something you have to decide case-by-case or are there well-established general rules of thumb to follow? Honest question.
I’ve been thinking about this this morning because of a discussion over on a baseball blog (of all places!) of “cover your ass security” against terrorism: http://hardballtalk.nbcsports.com/2015/04/14/we-no-longer-need-the-terrorists-were-now-so-good-at-terrorizing-ourselves/ That is, costly security procedures that don’t make us any safer from terrorists, or even make us *less* safe, but that get used because it’s important for those in charge to be seen to be “doing something”. I’m wondering if there are analogies here to the demands in science for increased error checking, fraud prevention/detection, replication, etc.
I also wonder if risk compensation is something we need to worry about in this context. If you build error checking procedures into your scientific workflow, is there any possibility that people get lazy in compensation? (e.g., your lab says it has gold-standard error checking procedures X, Y, and Z, causing reviewers to not bother looking closely at your data and code) Honest question to which I have no idea of the answer–I’m sure it’s something that lots of people must’ve thought about and worked on.
Certainly in the software engineering world it is widely recognized that it is a lot of work to eliminate errors and that there are trade-offs. If it is the program running a pacemaker, you are expected to do just about everything to eliminate errors. But for more mundane programs (e.g. OS X, Word) it is recognized that perfection is too costly.
Also, re your last paragraph: I know in the medical world they worry a lot about whether, if a computer is checking drug prescriptions, the humans will still bother. There have to be multiple checkers and each needs to make a good-faith effort.
Yes, the consequences of an error must be key here. Which raises the sobering thought that most errors in scientific papers aren’t worth checking for or eliminating! After all, a substantial fraction of papers are never cited, and only a tiny fraction have any appreciable influence even on their own subfield or contribute in any appreciable way to any policy decision or other application.
xkcd once made fun of people who are determined to correct others who are “wrong on the internet” (https://xkcd.com/386/). It’s funny not just because it’s mostly futile to correct the errors of people who are wrong on the internet, but because it’s mostly not worth the effort to do so. I wonder if we should consider the possibility that most (not all!) one-off errors in scientific papers are like people who are “wrong on the internet”. (EDIT: To clarify, I mean one-off errors that already get through existing error checks. I’m not suggesting that scientists quit doing most/all of the error checking they already do–that would be disastrous! I’m questioning whether we all need to do substantially more error checking than we already do.)
What worries me much more are systematic errors afflicting science as a whole, that arise even when individual scientists do their jobs well–zombie ideas and all that.
Jeremy’s comments about the consequences of the errors really resonate with me here.
Things that the authors could recognize as mistakes are only one of the reasons a result might not be true for all time everywhere; after all (most) scientific papers are not mathematical proofs. Most results have so many contingencies already — statistical assumptions are rarely met strictly, models must make approximations, data is noisy, perhaps not representative, etc. There is little utility in validating the little details at great cost while ignoring all of these contingencies. The good ideas get revisited and tested time and again (one of my physics professors once told me that Einstein’s theory of relativity was tested about once every minute). Like Jeremy says, the rest is mostly lost to time, right or wrong.
For me, this is why publishing data and methods is often more valuable – the open-source philosophy of acknowledging that all complex projects (code) have some mistakes and that “with many eyes, all bugs are shallow.” Arguing that individual researchers should do more error checking than they already do is both counter to existing incentives and can only slow science down; sharing speeds things up. I love Brian’s thesis here that we need to acknowledge that humans make mistakes. Because publishing code or data makes it easier for others to discover mistakes, it is often cited in anonymous surveys as a major reason researchers don’t share; myself included. Most of this will still be ignored, just as most open-source software projects are; but it helps ensure that the really interesting and significant ideas get worked over and refined and debugged into robust pillars of our discipline, and makes it harder for an idea to be both systemic and wrong.
“and makes it harder for an idea to be both systemic and wrong.”
Hmm, not sure I buy that last bit. The zombie ideas in ecology of which I’m aware wouldn’t have been prevented or corrected by more sharing of data and code, or even more sharing in general (e.g., everybody publishing all their ideas in some open-access venue). But I think you’re thinking along the right lines. We want science as a whole to not have, or to quickly correct, widespread systemic errors.
I don’t have any great ideas about how to make that happen. In part because widespread, systemic errors seem to come in different flavors. For instance, I’m not sure that the origins and persistence of a widespread conceptual error like r/K selection have much to do with the origins and persistence of a widespread statistical error like “the garden of forking paths” that Andrew Gelman’s always banging on about. Or maybe there are important commonalities here that I’m not seeing?
Perhaps formal retractions should only be used for cases of fraud/misconduct. It is troubling that honest errors are handled by the same mechanism. It is certainly appropriate to publish a clarification/explanation/description of mistaken conclusions under these circumstances. But I don’t see why we have to call that a “retraction”.
Completely agree… In fact, just like negative data, sometimes these errors can actually be useful to the rest of the community (i.e. prevent them from falling down the same rabbit hole), and in cases where that isn’t true, what does it hurt to differentiate innocent mistakes from retractions?
Great post, Brian. I had to correct a high-profile paper after finding an error in the main data file. When I opened the file months after the paper was published and found the error in the data file, I nearly vomited. Literally. I didn’t sleep at all that night. It was absolutely clear to me that I had to redo the analyses and correct the record, but it was also incredibly stressful to do that. And that was a case where, in the end, the results fortunately were unaffected! I reported it all to the journal and issued a formal correction.
Overall, the experience was incredibly anxiety-provoking for me, not because of anything anyone else said (people were incredibly supportive), but because it meant I hadn’t lived up to the ideal of a scientist who doesn’t make errors. But, as you said, everyone makes mistakes. We need to do our best to avoid them, of course, but we also need to create a culture where people feel okay coming forward to correct mistakes. That will not happen if people are vilified for doing so.
As far as I can tell from talking with others about it, when people find these errors in their own work, they feel like an utter failure as a scientist, even if the work hadn’t been published yet. What I try to stress when talking to students is that mistakes happen, even when we are trying very hard to avoid them. The most important thing is, as you said, to correct the mistake and analyze how it happened and ways to avoid similar errors in the future.
And, with that said, I need to go finish the readings for today’s lab meeting, where we’re starting a series of meetings on how to collect and store data in ways that reduce the possibility of mistakes. 🙂
I have the utmost respect for scientists who recognize that they have made a mistake and issue a retraction. I believe that it takes a lot of integrity to do so. Everyone makes mistakes and some of these are going to make it into publications; it’s what you do next that really matters.
“*Probably important to reiterate here that I’m talking about mistakes, not fraud. Whole different kettle of fish. I presume most people can see that, which is why I am not belaboring it.”
Two issues come to mind concerning “mistakes”. I agree some mistakes are “honest” in nature. By honest, I mean every reasonable effort was pursued to prevent a mistake, but nonetheless it occurred. Mistakes also occur as a result of slop and laziness. These are intentional mistakes, because even though no one was conscious of them when they happened, there is no excusing dereliction of duty. I have also seen mistakes become fraud, and I am still contending with that very issue. In this instance, the PI was the epitome of slop and laziness. She committed, then published, errors of a magnitude I had never seen. Then she and her employer endeavored at all costs to conceal the misconduct. That is when a mistake becomes fraud, and I am betting it happens more than we realize, given the somewhat common occurrence of inflated egos in science.
I think the nuance involving the issue of retraction is a real bugaboo. We should allow for corrections concerning honest mistakes, but should retract articles involving intentional mistakes as I have defined it. The obvious problem is, how can we ever be certain we have differentiated the two?
If what we really care about is correcting the scientific record, then arguably we should seek to do that without worrying about why the error occurred – honest mistake, incompetence, fraud, some combination, whatever. In part because the reason for the error ultimately is irrelevant as far as science as a whole is concerned. In part because detecting errors is very different than figuring out the reasons for errors. For instance, as a peer reviewer I can detect certain sorts of errors – but I generally am not in the position to make any reliable inferences about the causes of those errors (except maybe in very rare cases where the error only has one possible cause). And in part because if you start trying to police the causes of errors in an ad hoc way, without agreed procedures to ensure fairness, you end up with the Wild West – vigilante justice and witch hunts. Which is a recipe for worse rather than better error correction, even leaving aside the damage it does to individual scientists. See https://dynamicecology.wordpress.com/2014/02/24/post-publication-review-signs-of-the-times/
Which isn’t to say that nobody should ever care about the reasons for errors, but rather that the investigation of that should be decoupled from correction of the scientific record and left to the appropriate people or agencies (e.g., the ORI in cases of possible misconduct in federally-funded research in the US).
Agreed, which was why I mentioned the impossibility of ever really knowing the true source of error, short of an investigator issuing a mea culpa or a collaborator coming forward. Without divulging any details concerning the situation to which I refer about mistakes becoming fraud, as I do not wish to ensnare your blog in it, I discovered that at least in this instance folks can go to excess in concealing mistakes, investigations notwithstanding.
It might be relevant to note that in law 4 different scenarios are recognized (e.g., in the case of a car accident):
1) accident – no criminal event
2) extreme negligence – is criminal (e.g. manslaughter)
3) leaving the scene of the accident (i.e. lack of accountability) – is criminal
4) done with intent – is criminal (analogy here being to scientific fraud)
I am of course talking about #1 – honest mistakes. For #3, I do believe lack of accountability about mistakes that have been pointed out is morally wrong and culpable and should affect a scientist’s reputation negatively. I have my own story on this front: I pointed out an error to somebody and it remains uncorrected 5 years later (but it’s not important enough that I am going to go whistleblower). Conceptually I think #2 (extreme negligence to avoid mistakes in science) is possible, but I think it is rare enough, and is so often used to tar people who actually only committed #1 as if they had committed #4, that I basically don’t find it a useful category in science. I would rather keep the culture positive and biased towards encouraging reporting errors, not have people worry that if they acknowledge an error charges of negligence will soon follow.
Great way to categorize it, Brian. Yes, the issue of how far to take things like whistleblowing, at least for me, was a gut-wrencher… and I only made disclosures after exhausting all other options. I suppose that could be a topic for an entirely different blog post. But you and Jeremy are absolutely correct in that we must have an open atmosphere of trust such that people may feel secure in coming forward and correcting mistakes.
As it concerns the saga I was enmeshed with, I definitely walked away with the sense that the perpetrator just could not, under any circumstances, bring herself to admit fault for anything. Whether that was a character flaw, or the outcome of training, or both remains a mystery to me. But it has given me pause to the extent that I think we need to rethink the whole educational process in science & engineering. I suspect the pressure for perfection begins in the classroom, and if that is the case, the fundamentals of education need a fix.
As mentioned in some of the previous comments, I think we should not only think about errors in publications, but also about how we handle errors made by others in our working group/university, to create a “culture positive and biased towards encouraging reporting errors” (B. McGill). Before this post I never really thought about it, but I think I handle a few things not very well.
Yes – you are correct, people can be intimidated into not reporting mistakes. The reasons for this can be many-fold. Sometimes a person is perceived as an all-knowing guru, and even though that person might be very approachable, the intellect is intimidating. Other times a person is simply abrasive, and that edginess causes others to fear what the response would be to a blunder. Sadly, some people are simply irrational, and they play the blame game in a manner akin to a public flogging… and no one enjoys that.
The thing I always try to remember is: how did I perceive the big dogs when I was, say, 25 or 30 years old? In general, I was intimidated even when these folks were not outwardly intimidating. What has worked very well for me in recent years is self-deprecation. It kind of comes down to voting for the guy you would most likely have a beer with. Self-deprecation communicates to others that you are fallible, that you have stumbled before, and that you are accepting of faults. For example, whenever I teach any course in science, my introductory lecture includes the story of how I failed an exam in a grad-level course… and that the exam covered material I had studied & applied over the previous decade… YIKES!
Well, it happened and to this day I still can’t figure out how I managed to blow questions I could answer in my sleep. It was just one of those days. But I recovered, and received a final grade of “A” nonetheless. I found students were MUCH MUCH more willing to approach me, and communicate to me their struggles and ask for help.
Pingback: Is it really that important to prevent and correct one-off honest errors in scientific papers? | Dynamic Ecology
Great post! I completely agree.
Really fantastic post, thanks. As an Editor I would much prefer to see authors come forward with a correction for their paper in our journal than let it sit for fear of embarrassment, so that means both destigmatizing the making of mistakes (thanks for jump-starting that) and making it easy to correct mistakes. Now that journals are publishing online one would think it is easy to simply swap the original with a corrected version, with a footnote detailing the correction history like they do in newspapers (e.g., “Correction, March 26, 2015: This article originally had an incorrect version of figure 1a…the results”) as opposed to (or in concert with if necessary) publishing a formal and separate Corrigendum. One could even identify all authors that cited the original version via WOS or SCOPUS and advise them the paper has been corrected.
My only concern from the Editor’s seat is that too much of “mistakes are no big deal, they are easy to correct” might result in “don’t worry about triple-checking, because if you make a mistake it is easy to correct”. And then there are the hard cases, like the one that motivated your post – if the correction fundamentally changes the conclusions, do you retract or correct? I say for now just correct rather than retract, even if it leaves the paper without the punch that got it accepted in the first place. Doing so will help promote that cultural change for which you are advocating.
Pingback: Mistakes in papers & how to deal with them
“To err is human” (or hman) is part of a famous quote from Alexander Pope. I’m kind of surprised no one has yet quoted the whole thing:
“To err is human; to forgive, divine”.
I think there is no escape from human mistakes, and certainly we should not be afraid of them; rather, we can learn from our mistakes so that we do not make the same mistake again.
Pingback: On finding errors in one’s published analyses | Dynamic Ecology
Pingback: Friday links: how to spot nothing, Aaron Ellison vs. Malcolm Gladwell, and more | Dynamic Ecology
Pingback: Friday links: a rare retraction in ecology, and more | Dynamic Ecology
Pingback: Friday links: Haeckel vs. Christmas cards, green + bond = green bond, phylogeny of baked goods, and more | Dynamic Ecology
Pingback: Friday links: Covid-19 vs. BES journals, Charles Darwin board game, and more | Dynamic Ecology
Pingback: Friday links: Epstein fallout continues at Harvard, memes vs. intro biostats, and more (includes quick poll) | Dynamic Ecology
Pingback: Friday links: Richard Lewontin 1929-2021, and more | Dynamic Ecology
Pingback: What should you do when you get a result that seems wrong, but you can’t find any problems in the underlying data or calculations? | Dynamic Ecology