Recently I polled y’all on retracting old and superseded papers. Click that link if you need a refresher on what the issues are here, and why I thought they were worth polling on. Below are the poll results, along with some commentary.
tl;dr: the poll respondents mostly oppose retracting papers just because they’ve been superseded. But opinion varies widely on whether there should be a “statute of limitations” on retractions, and if so how long it should be and the circumstances in which it should apply.
Sample size and demographics
We got 151 responses–thanks everyone! As always with our polls, this isn’t a random sample from any well-defined population. It’s mostly sampling longtime readers of this blog, who may be fairly representative-ish of North American academic ecologists. But it’s probably best to just think of the poll as a conversation starter: a large enough slice of opinion to be worth talking about.
Respondents were 45% faculty, 24% postdocs, 15% grad students, 9% non-academic professional ecologists, 7% other.*
Experiences with retraction
I asked a couple of questions about respondents’ personal experiences with retraction, figuring those might correlate with respondents’ views on the core poll questions. But in retrospect, I probably needn’t have bothered, because most ecologists obviously don’t have any direct personal experience with retraction. Only one respondent had had a paper retracted. And only 11 respondents (7%) had ever tried to have someone else’s paper retracted.
The first core question tried to gauge respondents’ general attitudes towards retraction. 28% of respondents agreed that retractions should be more common and faster than is currently the case, vs. 21% who disagreed with that statement. The remaining 51% weren’t sure or had mixed/neutral feelings.
The next question asked if there should be a statute of limitations on retractions (i.e. sufficiently old papers can no longer be retracted), and asked respondents to choose from among several possible answers. Opinion was split. 51% said there should be a statute of limitations on retractions, except in cases of retraction for professional ethics violations such as faking data. 29% said there should be no statute of limitations on retractions under any circumstances. Only 4% wanted a statute of limitations on all retractions. None of the other options (including “I have some other view”) got more than a few percent support.
Then I asked those respondents who thought there should be a statute of limitations on retractions in at least some circumstances: how long should the statute of limitations be? Respondents had to write in their answers, which were all over the map. The modal answers were “10 years” and “20 years”. Answers ranged from “3-5 years” to “50 years”.
When asked if we should retract a paper that’s now known to be false or flawed, even though it was sound science by the standards of the time, 61% of respondents said “no” and 17% said “only if the authors want it retracted”. Another 14% said not sure/it depends. Only 3% said “yes” (the rest had some other view).
Finally, 50% want a mechanism for unretracting retracted papers that turn out to have been correct after all, vs. 15% who don’t want that and 35% who aren’t sure.
There wasn’t much variation in responses by career stage, save that the few respondents who gave their career stage as “other” almost all agreed that retractions generally should be more common and faster.
Not surprisingly, the few respondents who’ve tried to have someone else’s paper retracted almost all said that retractions generally should be more common and faster.
A few thoughts, to get the comment thread rolling:
- There’s little support for scrubbing the literature of old papers that were solid by the standards of the time, but that have since been superseded. That doesn’t surprise me. I have yet to see a strong argument for this idea. Plus, it’d be a radical change to current practice. And we know from past polls on other issues that our readers are mostly not in favor of radical changes to scientific practice. Not even when they say they want radical changes. But I’m all ears if any commenter wants to make a case for retracting superseded papers.
- I’m hoping for a debate in the comments between proponents and opponents of a statute of limitations on retractions for reasons other than professional ethics violations.
- Very long statutes of limitations seem pointless to me. After all, what fraction of 30+ year old papers are still being cited in any way other than a passing nod? A tiny fraction, surely (even in ecology, a field that doesn’t change that fast…). What’s the point of retracting a paper nobody cites any more, or even remembers? (And don’t say “30+ year old papers might still be used in future meta-analyses”, because relatively few ecological meta-analyses include any primary research papers that are 30+ years old.) Conversely, the shortest suggested statutes of limitations seem bad to me. IIRC, more than half of retracted papers go >3 years from publication to retraction (and that’s quicker than it used to be). A 3 year statute of limitations would prevent the majority of retractions! Which doesn’t seem like a good outcome to me. But I’ll definitely listen with interest if a commenter wants to make the case for that outcome!
- I expected profs to be less likely than others to support retraction of superseded papers. They were, but the difference wasn’t that big and could well just be sampling error.
- My anecdotal sense of a generational divide wasn’t borne out in the data.
- I’m a bit surprised that people’s general views on whether retractions are too slow and rare didn’t predict their answers to the other questions.
*Sigh. I’m old enough to remember when this blog was new and exciting and good, when we had a much bigger audience that included many more grad students.
I’m interested to see that not retracting superseded papers is a much more popular view than retracting them only if the author wants them retracted. This suggests that, once a paper is published, most ecologists no longer view it as belonging to the author. Rather, it belongs to the scientific community, or maybe doesn’t belong to anyone.
My thoughts exactly! I wanted to comment this on the original post, but thought I’d wait for the results of the poll first.
My view is that because authors are more familiar with the details of their papers, they might be more likely to uncover errors that invalidate their own findings. To me, this is the only reason why authors should have more say than regular readers in retracting their own papers. But it is the seriousness of the error, not the authors’ preference, that should determine whether the paper should be retracted or not (i.e. the retraction decision should be the same regardless of who identified the error).
I have mixed feelings about this myself. There’s part of me that feels as you do–you put it very well. But there’s also a part of me that appreciates why authors might feel embarrassed when a paper of theirs is refuted. Especially if it’s refuted shortly after being published. In the genetics case that inspired my posts on this, my understanding is that the now-retracted 2014 paper was refuted almost immediately by a follow-up paper from different researchers. I can understand why, as an author, it would sting to be refuted so quickly. And I can see why you might want to take the sting away by retracting, even if your paper wasn’t erroneous. It’s different than if, say, 20 years passed between publication and refutation. I don’t think anyone kicks themselves over being refuted decades later.
(I should be clear that I have no idea if I’m capturing how the authors of that 2014 genetics paper felt. I’m just imagining how I’d have felt if I were in their shoes.)
A point in favor of retracting–or at least adding a comment to–papers that have been superseded is that while professionals in the relevant field may be able to easily determine what is and is not outdated, laypeople often can’t. In the age of open access and abundant Internet access, where a lot of non-professionals pay attention to scientific papers, I think this has become a greater concern than it used to be.
Of course, actually attempting to publish retractions and corrections to address this issue–at least in any comprehensive manner–would represent a massive investment of time and effort, generate huge amounts of controversy and animosity, and do all of this in service of a goal that isn’t the primary purpose of the scientific literature.
“In the age of open access and abundant Internet access, where a lot of non-professionals pay attention to scientific papers, I think this has become a greater concern than it used to be.”
Hmm. I feel like this is a much stronger argument in medicine than it is in most other fields. And even in medicine, I don’t think it’s a very strong argument, in part for the reason you note. Doing this would indeed be a massive amount of work to make the scientific literature more legible to a relatively small number of people who aren’t the intended audience.
I think authors (both scientific authors, and other sorts) are entitled to tailor their writing to an intended audience. One side effect of that is that people outside the intended audience often will misunderstand the writing. They’ll lack the background knowledge and context needed to understand it. But there’s just no getting around that. It’s a fact of life, not a fixable problem.
Probably more of an issue in biomed, but there are a couple of local ecological/natural resource issues here, and we have folks quoting studies about beach erosion and fire management all the time. I also see a lot of layman references to scientific studies on the effects of off-shore sonar testing for oil deposits on marine mammals.
No one really cares about the role of plant hormones in basal sprout development on trees though, except me. 🙂
Yeah, my own perspective on this is no doubt skewed by the fact that nobody but me and a few other professional ecologists cares about what I work on.
I didn’t answer the poll because I wasn’t paying attention, but basically I fall in line with the majority responses noted above.
This is slightly off topic, but if a journal is going to retract an article then it really should be retracted. For example, the retraction of the Start et al 2019 paper that you linked to last Friday should show up when you search the paper itself. And yet, an hour ago when I went to the March 2019 issue of Am Nat, the original paper is still fully there. There is no retraction statement or anything. I think the retraction statement should have loaded instead of the paper, otherwise how would the standard lit search tell someone that the paper was retracted? Potentially, after the retraction statement loads, you could provide a link to the original article for the curious or for those tracing citations from other papers, perhaps to see if later papers were predicated on faulty logic/basis. But simply saying 2 or more years later that a paper was retracted without it also appearing at the DOI link of the original paper isn’t good enough. Do we now have to do a retraction search on every citation before we finalize a paper? Or before we download/read a paper? I think that’s a little much.
For old references, if they are central to my manuscript, I frequently look at more recent citations to see what the current state of the science is. Similarly, for new references I often look to see what the foundational studies stated, because I think citing the foundation is better than citing something that is, for example, a repackage of an idea in a different location. Also, I think seeing how the theory/application has changed over time is helpful. In that context, I’d prefer that superseded papers be left alone. If, for example, we retract the Intermediate Disturbance Hypothesis, then we also make nonsensical a lot of papers that cite the IDH (either pro or con).
On another side note, apparently I’ve been spelling superseded wrong for decades (with a c)! At least it’s a word that I rarely use.
“I think the retraction statement should have loaded instead of the paper,”
Agreed. Some publishers are better on this than others. The Royal Society is pretty good. They put retraction and correction notices prominently in red type at the top of the page from which you’d download the pdf of the paper (or read the html version of the paper). Not sure if they also put a “retracted” watermark on the pdf. Definitely room for some other publishers to improve on this, it shouldn’t be hard.
Not sure what to do about all the other copies of the now-retracted paper that are likely to be floating around online. Maybe there’s not much that can be done.
fwiw I like how CABI handled the correction on the Economic costs of invasive species in Africa paper you linked to in the last post. The updated tables and text are what you get when you bring up the link. At the top is a clear statement that the paper was updated and link to the corrections. The erroneous text/tables have disappeared, except on people’s hard drives if they downloaded it before the correction.
Hmm. Not sure I like that the original text and tables have disappeared completely. In this case, maybe it’s fine. But in general, you don’t want to allow any possibility that authors who faked data can “correct” the data to cover their tracks. Which means that journals need to be *very* careful about the circumstances in which they erase a previous version of the paper from their websites and replace it with a corrected version.
I believe that this is why the Data Dryad repository doesn’t allow authors to correct their repositories by deleting or replacing files–only by adding updated files to the repository.
I’m somewhat surprised that >1 in 20 respondents have tried to have a paper retracted. If 7% even approximately reflects the community of ecologists out there, that’s a very large total number of papers that someone thinks are deserving of outright deletion from the scientific record. Imagine if all those retraction attempts succeeded. It would be mayhem keeping track of things, and an incentive for an even larger number of attempts to have papers retracted (assuming one reason the figure isn’t >7% is that retraction attempts are so unlikely to succeed).
Well, unless that 7% of respondents have all tried to get the same few papers retracted. Maybe several respondents were all involved in #pruittdata! That seems quite possible to me.
Also possible that people who’ve tried to get a paper retracted were especially likely to complete the poll.
Excellent points. Comment retracted!
We can’t let you retract it just because it’s been superseded. 🙂