It’s often hard to get scientific journals to retract papers. When retractions do happen, it’s often years after the paper was first published.* And even after papers are retracted, many of them continue to be cited for years afterward. All of which has led to numerous calls to speed up the process. As the argument goes, think of all the damage that retracted papers do to the progress of science before they’re retracted, and even after. Think of all the researchers who waste time, effort, and money going down the blind alleys that now-retracted papers steered them into. Think of all the follow-up papers that are invalidated because they were built on the shaky foundations of now-retracted work. And think of all the pointless grant proposals that never would’ve been written, if only their authors had known they were based on unreliable results.
Frustration with the difficulty and speed of the retraction process is understandable and often justified.** It’s a problem. But I think we can be more precise about the problem: about its scale, and about who is primarily affected. So I decided to compile a bit of data.
I compiled data on two now-retracted papers by Jonathan Pruitt: Pruitt & Pinter-Wollman 2015 Proc B***, and Pruitt et al. 2013 Animal Behav. I chose those papers haphazardly. I picked them because I think they represent something close to a worst-case scenario: papers by a prominent author, published in leading journals, that went several years from publication to retraction, and that are among a number of papers by the same author that have recently been retracted or subjected to Expressions of Concern. But I doubt my broad conclusions are sensitive to my choice of papers.**** For each of the two papers, I skimmed all the papers that cited it (according to Web of Knowledge), looking at who cited it and how. For instance, was it cited by Jonathan Pruitt, by one of his collaborators, or by someone with no connection to him? And was it cited merely in passing, or cited in such a way that its retraction would completely invalidate the citing paper, or what?
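(To be clear about the bookkeeping: I did this tally by hand, by reading each citing paper. But if you wanted to organize such a tally yourself, a few lines of Python would do it. Here’s a minimal sketch; the file name and column names are hypothetical, and the hand-coding of each citation is still up to you.)

import csv
from collections import Counter

# Tally hand-coded citation records. Assumes a CSV file (the name
# "citations.csv" and both column names are made up for illustration) with
# one row per citing paper: "relationship" records the citing authors'
# connection to the retracted paper (self-citation / coauthor / unconnected),
# and "citation_type" records how it was cited (in passing / motivating /
# foundational).
tally = Counter()
with open("citations.csv", newline="") as f:
    for row in csv.DictReader(f):
        tally[(row["relationship"], row["citation_type"])] += 1

for (relationship, citation_type), n in sorted(tally.items()):
    print(f"{relationship} / {citation_type}: {n}")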
I don’t want to bury the lede, so here’s the tl;dr version of the conclusions: it’s mostly the coauthors of retracted papers whose research programs are set back by the retractions. In all likelihood, they comprise most or all of the people who’ve put serious time and effort into following up the two now-retracted papers in any very direct way. And they likely comprise most or all of the people who will have to go “back to the drawing board” and rethink their research programs, now that papers on which they were heavily relying have been retracted. Which isn’t to minimize the damage to science done by papers remaining in the literature for years before they’re retracted. It’s just to be precise about who gets damaged, and why. An individual scientist’s research program builds cumulatively over the years. You get ideas, you pursue them, you get results, you think about the implications of those results, you get new ideas suggested by those results, and so on. So it’s a serious setback to your research program if it turns out that years-old results of yours can no longer be relied on. But it’s usually not a serious setback to anyone else’s research program, at least not in ecology and evolution. And it’s usually not a serious setback to “science as a whole”.
For the data and details, read on:
Pruitt & Pinter-Wollman 2015 has been cited 23 times. Those citations break down as follows:
-14 self-citations from papers co-authored by Jonathan Pruitt, some of which were also co-authored by Noa Pinter-Wollman. Some of these papers have themselves been retracted or subjected to Expressions of Concern. And the citations of Pruitt & Pinter-Wollman 2015 are invariably accompanied by citations of other papers co-authored by Jonathan Pruitt. Not surprisingly, if an author has several papers retracted, it often undermines the validity and interest of their follow-up papers, or at least makes their validity and interest difficult to evaluate.
-1 self-citation, in passing, from a paper coauthored by Noa Pinter-Wollman, but not Jonathan Pruitt.
-8 citations in papers by others, all in passing. In many cases, Pruitt & Pinter-Wollman 2015 was cited in passing along with many other papers by various authors, in support of some brief throwaway remark. For instance, a review paper on quantitative genetics and social networks cites Pruitt & Pinter-Wollman 2015 once, in passing, in support of a statement that “keystone individuals” might be a thing in social species. Perhaps the most substantive of these 8 citations was in a paper that asks whether individual ants can pause their movements and thereby start a chain reaction that causes other ants to aggregate nearby; it cited Pruitt & Pinter-Wollman 2015 along with other papers by Jonathan Pruitt. Based on reading the introduction, it sounds like this paper on ant movement was inspired in part by papers by Jonathan Pruitt, including Pruitt & Pinter-Wollman 2015. But as best I can tell, the validity and interest of the paper are unaffected by the retraction of Pruitt & Pinter-Wollman 2015, or of any other paper co-authored by Jonathan Pruitt. Whether or not this paper has correctly determined whether and how individual ants can cause ant aggregations, and whether that’s an interesting and important result, has nothing to do with those retractions, as far as I can see. It’s a bit like when August Kekulé came up with his (correct) hypothesis about the structure of benzene rings after dreaming of a snake eating its own tail. I mean, one probably wouldn’t recommend “take inspiration from your dreams” as a generally reliable way to come up with good hypotheses about chemical structures. But once you’ve developed the hypothesis and found that it checks out, the fact that it originally came to you in a dream is no longer relevant. As another example, political scientists David Broockman and Joshua Kalla decided to check whether a recently retracted, high-profile result in their field would replicate, and found that it did (with a smaller effect size). The lesson here is that just because a study was inspired by some unreliable or unlikely source, whether a now-retracted paper or even a dream, doesn’t mean the study itself is unreliable.
Pruitt et al. 2013 Anim Behav has been cited 76 times. Those citations break down as follows:
-34 self-citations in papers co-authored by Jonathan Pruitt, some of which were also co-authored by other authors of Pruitt et al. 2013. These 34 papers invariably cited Pruitt et al. 2013 along with many other papers co-authored by Jonathan Pruitt. (Note: for the first couple of years after publication, these self-citations were pretty much the only citations Pruitt et al. 2013 received; only later did it start to accumulate citations from others.)
-5 citations, all in passing, by people who’ve co-authored papers with Jonathan Pruitt but who weren’t co-authors of Pruitt et al. 2013.
-22 citations, all in passing, by authors not connected with Jonathan Pruitt.
-1 citation in a paper by authors unconnected to Jonathan Pruitt, from an obscure journal I couldn’t locate. Judging from the paper title, I’m confident it only cited Pruitt et al. 2013 in passing.
-4 citations in narrative review papers (1 by co-authors of Pruitt’s, 3 by others). None of the four is altered appreciably by the retraction of Pruitt et al. 2013 specifically. All of them do also cite other papers by Jonathan Pruitt that have now been retracted or subjected to Expressions of Concern. I’d say that two of the four are altered in small ways by all those retractions taken together. The other two are altered more substantially, but by no means are they completely undermined.
-5 citations by authors unconnected to Jonathan Pruitt, citing Pruitt et al. 2013 along with other now-retracted papers by Jonathan Pruitt as motivation to look for the same (or related) behavioral phenomena in other species not studied by Pruitt. One might wonder if these studies would’ve been conducted at all, had Pruitt et al. 2013 and other now-retracted papers by Jonathan Pruitt been retracted years ago. But having read these 5 papers, in no case did I come away feeling like the validity or interest of the paper had been much undermined by the retractions. I say that for two reasons. First, these 5 papers all cited papers by various other authors as part of the background and motivation; they weren’t solely building on Pruitt et al. 2013 and other now-retracted Pruitt papers. Second, none of these 5 papers got entirely negative results; they all reported some positive discoveries. So it’s not as if any of these 5 papers was a wild goose chase. That said, in two cases the results now have a slightly different interpretation in light of the retraction of Pruitt et al. 2013: the two papers in question got different results than Pruitt et al. 2013 and tried to explain that contrast, but the contrast no longer needs explaining now that Pruitt et al. 2013 has been retracted.
-1 citation in a paper by authors unconnected to Jonathan Pruitt, which used the same behavioral assay as Pruitt et al. 2013. As best I can tell, the validity of that assay isn’t undermined by the retraction, so neither is the validity of the citing paper.
-this list doesn’t add up to 76: the categories above sum to 72, so apparently I messed up and accidentally skipped a few papers (see the quick tally below). 🙂
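(For anyone who wants to check that arithmetic, here’s the tally as a few lines of Python; the category labels are just shorthand for the list above.)

# Quick arithmetic check on the breakdown of the 76 citations of
# Pruitt et al. 2013 Anim Behav.
counts = {
    "self-citations (Pruitt)": 34,
    "in passing, by Pruitt coauthors": 5,
    "in passing, by unconnected authors": 22,
    "in passing, obscure journal": 1,
    "narrative reviews": 4,
    "motivation for studies of other species": 5,
    "same behavioral assay": 1,
}
total = sum(counts.values())
print(total)       # 72
print(76 - total)  # 4 citations unaccounted for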
Conclusions
Again, I think it’s clear from these data that, when a paper lingers in the literature for years before being retracted, it’s mostly the coauthors of that paper who are hurt by the years-long time lag. Worry about them and their science when you worry about the damage that slow retractions do. Don’t worry about the damage to “science as a whole”, except in very rare cases. “Science as a whole” will be just fine.***** After all, millions of papers are published every year; hardly any of them can possibly matter to “science as a whole” all that much. Remember, even if a retracted paper concerns a topic of broad interest to many scientists, it’s almost certainly only one among many papers on that topic, by many unconnected research groups. For instance, there are many papers by authors other than Jonathan Pruitt, documenting individual ‘personalities’ in many different species of animals. It’s very rare for the validity or interest of all research on a topic of wide scientific interest to depend on some foundational paper(s), in such a way that all work on the entire topic would be fatally undermined by the retraction of the foundational paper(s).******
Science as a whole is like a brick wall. The integrity of the whole wall isn’t threatened by a few flawed bricks, not even if it takes a while to remove and replace them. The reason we want to remove the flawed bricks as quickly as reasonably possible isn’t that we’re worried the whole wall will fall down if we don’t. It’s so that the scientists working on that bit of the wall can get on with their work.
*Though the average time to retraction is dropping; see data summarized here.
**Though occasionally it does remind me of unreasonable complaints about the speed of the peer review process.
***Technically, this paper is subject to an Expression of Concern and an “author removal correction”: Noa Pinter-Wollman has removed her name from the paper because she no longer considers the results reliable. My own view is that this is functionally equivalent to a retraction, and that the vast majority of scientists should and will treat it as such. So I’m going to just refer to this paper as “retracted”.
****Note that, for purposes of this post, it doesn’t matter why those papers were retracted, merely that they were retracted.
*****The possible analogy between “Think of the science!” and “Think of the children!” is left as an exercise for the reader. 🙂
******The only potential example I can think of would be a case in which some widely-used piece of technology turns out to have a serious flaw that went undiscovered for years. If it turned out that a widely-used R package had been spitting out mistaken numbers all these years, that would undermine the validity of a lot of scientific work.
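To illustrate with a deliberately made-up example (not any real package, or any real bug I know of), here’s a minimal Python sketch of how a subtle flaw in a shared statistical routine could silently bias every analysis built on it:

def variance_buggy(xs):
    # Supposed to return the sample variance, but divides by n instead of
    # n - 1, so it is biased low; every standard error computed from it
    # would be too small.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # bug: should be len(xs) - 1

def variance_correct(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [4.1, 3.9, 5.2, 4.7, 4.4]  # a small sample, typical of field studies
print(variance_buggy(data))    # ≈ 0.2104, biased low
print(variance_correct(data))  # ≈ 0.263
# With small samples the bias is substantial, and nothing about the output
# looks obviously wrong; that is why such a flaw could go unnoticed for years.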
Acknowledgments
Thank you to Ambika Kamath for feedback on a draft version of this post. The views expressed in the post are mine.
To which: I’m aware of cases in medicine of papers that did appreciable damage before they were retracted, because they influenced treatment of patients or public health. Andrew Wakefield’s misconduct linking vaccines and autism is perhaps the most famous and damaging example, but not the only one.
Conversely, retractions of years-old fundamental papers like Jonathan Pruitt’s obviously aren’t going to do any damage at all to applied fields.
In ecology and evolution, are there any papers that ended up getting retracted, that did significant damage to applied fields by remaining in the literature as long as they did? I know there’s a recent book arguing that Florida panther conservation policy was appreciably distorted by work that arguably should’ve been retracted. But I haven’t read the book, so don’t know any details.
I’m no expert, but outside of medicine I’m having trouble thinking of many examples of scientific papers that did appreciable damage to applied work by lingering in the literature for years before they were retracted. After all, it’s surely only a tiny fraction of all papers (and a small fraction of retracted papers) that have any important applications. I know there was that econ paper from a few years back arguing that there’s a sharp threshold in the ratio of national debt to GDP, above which GDP growth is reduced. That paper was a key driver of national spending cuts (“austerity”) during the Great Recession, but it turned out to have errors that invalidated its conclusions (though I’m not sure if it was ever formally retracted). Then there was political scientist Michael LaCour’s faked Science paper that got retracted; I linked to that story in the post. That one certainly had applied implications for political canvassing organizations. But it was retracted so quickly that it didn’t really have time to do any damage, as far as I understand from reading news reports. Plus, the results look like they might actually replicate, so maybe the paper wouldn’t have done much damage even if it hadn’t been retracted!
Re: retractions of years-old papers damaging the public perception of science (or, in cases like Wakefield’s, damaging the public perception of science while they remain in the literature): yes, I’m sure the highest-profile cases do. Though it’s hard to say how much damage they do. I mean, retractions are much more common these days than they were back in, say, the 1980s. But has public trust in science declined that much since the 1980s? And if it has declined, is that because of retractions of years-old flawed papers, or for other reasons? Heck, outside of high-profile cases in medicine like the Wakefield case, how much is the public even aware of papers that remained in the literature for years before getting retracted? Jonathan Pruitt, for instance, has now had numerous papers retracted, but is anyone who’s not a professional scientist even aware of this?
I’d argue there’s a better construction-related analogy to use here than a brick wall: a temporary shelter in the woods, built from logs, moss, etc. In a brick wall, every piece is the same, every piece only interacts directly with the two below it and the two above it, and one dodgy brick could do a lot of damage. By comparison, a shelter in the woods combines big logs (foundational studies) with twigs and thatching (studies of specific regions or processes that don’t necessarily generalize, and that only make sense sitting on top of the foundational studies). Further, a long stick can bridge disparate parts of the shelter (an interdisciplinary study, say).
In this framing, the authors with retracted papers are working on one corner of the shelter by continually adding lots of fragile sticks on top of each other. As you implied with the brick-wall analogy, that doesn’t mean the whole shelter fails, just that the one corner of it is not trustworthy.
Your analogy is definitely more detailed than mine. But I’m still happy to stick with my analogy because there’s a big brick wall near my house (a long sound barrier along a busy road) that has a hole punched in it from an old car accident. The rest of it is still standing up just fine! 🙂
But maybe a little noisier close to the hole in the wall?
YOU WIN THE THREAD
There’s a potential publication-bias issue in part of this analysis:
“Second, none of these 5 papers got entirely negative results; they all reported some positive discoveries. So it’s not as if any of these 5 papers was a wild goose chase.”
A wild goose chase that led to entirely negative results would have been very difficult to publish, so we have no way to know if this happened, or how often. I think most scientists have experienced at least a few failed projects that did not generate any publications, but it’s a tough topic to study. With a lot of work, one might be able to compare funded grants with published papers. (Though my one completely failed project does claim some papers; it’s just that there was no publication for the central experiment, since I never got it to work.)
Yes, it’s true that I can’t say how many investigators wasted time producing unpublishable negative results in an attempt to replicate or build on these retracted papers. As you say, tough to study. In the case of the two papers discussed in the post, I doubt there are many such unpublished papers, if any. But I say that just based on my own gut feelings, not based on any data.
Harm Nijveen and I did a similar inquiry (https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-016-0008-5) and found no propagated erroneous results. Later, Luwel and co-workers (https://openaccess.leidenuniv.nl/handle/1887/65213) found that a significant proportion of papers by fraudster Hendrik Schön were still cited in a positive way.
Thank you for sharing your work with Nijveen on this, looking forward to reading it.
I think you and David Steen are spot on about misconduct in narrow, fundamental fields versus applied science. In the Jonathan Pruitt affair, he severely harmed his former co-authors, mostly via opportunity costs: they could have been working on something else. If he had advisees, hopefully they’ve found new advisors.
But damage to whistleblowers shouldn’t be overlooked. Readers here will be familiar with the saber-rattling of FOIA demands by Pruitt’s lawyer that are designed to intimidate and harass (more like empty-scabbard rattling). In the Oona Lönnstedt affair with lyin’ fish and plastics-preferring perch, she and her co-authors responded to questions about their work by accusing the whistleblowers of being motivated by jealousy and vindictiveness. Her co-authors seemed to have slipped by unscathed. One of her lionfish studies was questioned for reporting tests on 50 fish when only 12 were reported on her collection permit. Her Canadian committee members traveled to the Great Barrier Reef “but did not see her actually conduct the experiments. ‘We had no reason to think anything was suspicious,’ they write. Because lionfish are nocturnal, they believed the experiments took place ‘in the middle of the night,’ when they were asleep.” (Some committee!) No obvious damage to co-authors there.
For purposes of this post, I wanted to focus narrowly on damage done to scientists and science by retracted papers remaining in the literature for years. The damage caused by authors trying to intimidate whistleblowers and investigators is a whole ‘nother kettle of fish that would need its own post…
Re: Jonathan Pruitt’s current advisees, his website lists a number of them: http://pnb.mcmaster.ca/pruittlab/index.html. But it’s not up to date (e.g. a couple of the postdocs listed on the page are now profs elsewhere).
Yes, the events of this year have been rough on Pruitt’s current advisees, even those not working with data collected by Pruitt.
Pingback: Friday links: philosopher vs. baseball, following Am Nat’s lead on data sharing, and more | Dynamic Ecology
Pingback: One year into #pruittdata, how have citing authors reacted? Here are the numbers. | Dynamic Ecology
Pingback: What I learned about scientific misconduct from reading the NSF OIG’s semiannual reports | Dynamic Ecology