The history of retractions from ecology and evolution journals (UPDATED)

In a recent post I summarized the scholarly literature on retractions, including but not limited to retractions for misconduct. In this post I’ll focus on ecology and evolution specifically. How many papers have ever been retracted from ecology and evolution journals? For what reasons? Are retractions from EEB journals similar to, or different from, retractions from other fields, such as biomedicine?

To answer those questions, I searched the Retraction Watch database for retractions and expressions of concern from a long list of ecology and evolution journals (namely, every journal with “ecology” and/or “evolution” and/or “evolutionary” in the title, plus every other EEB journal I could think of).* The database is large but not comprehensive, so I added in one retraction I know of from an EEB journal that’s not in the database. I included partial retractions for misconduct; the database classifies them as “corrections”, but I count them as retractions. I ignored four retractions due to publishers inadvertently publishing the same paper twice.

Here’s what I found:

  • 56 retractions or expressions of concern from EEB journals. I don’t know how many papers EEB journals have published, but 56 has to be some vanishingly tiny fraction of all EEB papers.
  • The oldest now-retracted paper from an EEB journal was published in 1998. At least, the oldest one I know of. That’s Anders Pape Møller’s retraction from Oikos, which isn’t in the database. The oldest EEB retractions in the database are from 2003. So even allowing for the possibility that a few older retractions are missing from the database, retractions in ecology and evolution basically weren’t a thing before I started my PhD in 1995. The timing also coincides with the internet becoming a thing. So clearly, the advent of EEB retractions is either my fault, or the internet’s fault. 🙂 #posthocergopropterhoc
  • The time from publication to retraction is getting shorter for EEB journals. EEB papers published in 2010 or earlier that ended up getting retracted (or receiving an expression of concern) took an average of over 4 years to do so. It’s down to about 1.5 years for EEB papers published from 2011 onwards, and down further to an average of about 0.5 years for papers published from 2017 onwards. It’s not just EEB; the time to retraction is dropping in all fields. It’s one big reason why the frequency of retractions is growing. Many journals these days have retraction policies and use them. And these days, the raw data for many papers gets put online, so that if it’s going to be scrutinized it can be scrutinized quickly. So papers that end up getting retracted get retracted much faster these days. Another way to say the same thing is to say that the large majority of retractions from EEB journals have happened within the last few years. It’s just that some of them were retractions of old papers. (UPDATE: as commenter Andrew McAdam notes, the mean time to retraction for papers published in the last couple of years might well increase in future, just because as of this writing it’s impossible for (say) a 2019 paper to have gone >1 year from publication to retraction. Just eyeballing the data, it doesn’t look to me like this constraint fully accounts for the drop in average time from publication to retraction. For instance, the 8 retracted EEB journal papers published in 2014 could have times from publication to retraction of anywhere from 0-6 years. But they all went 2 years or less from publication to retraction. It’s a similar story for other recent-ish years–observed times to retraction are bunched up near the low end of the range of mathematically-possible values. Further back, that’s not the case–times from publication to retraction are spread more uniformly throughout the range of mathematically-possible values.
But the mathematical constraint is worth being aware of. /end update)
  • 46% of retractions or expressions of concern were for some form of author misconduct, or possible/likely misconduct. That breaks down as nine cases of duplicate publication, nine cases of data fabrication, four cases of plagiarism, three cases of possible/likely data fabrication, and one case of fake peer review. That 46% isn’t too far from the percentage of all retractions across all fields that are due to misconduct, once you allow for the small sample size of EEB retractions. So I stand corrected. In an old post, I speculated that retractions for misconduct were especially rare in EEB compared to other fields. That speculation was based on anecdotes. Now that we have more comprehensive data, I don’t see any reason to think that misconduct, or retraction for misconduct, is notably rarer (or notably more common) in EEB than in other fields.
  • 36% of retractions or expressions of concern in EEB journals were for unintentional author error in data collection, data analysis, or mathematical derivation. That percentage isn’t too far from the percentage for all retractions from all fields that are due to author error.
  • 18% of retractions or expressions of concern in EEB journals were for some other reason, or for an unspecified reason. That includes a couple of disputes over whether all authors had given permission to publish, a case in which authors could not accurately state where their fossils were housed, a dispute over whether the authors had legal permission to collect their samples, a retraction for “copyright reasons”, and several cases for which no reason for retraction was given.
  • Retractions for duplicate publication and plagiarism from EEB journals mostly only happen in certain specific contexts. All but one of the duplicate publications were in obscure EEB journals. The remaining one was a case of someone republishing a paper previously published in an obscure non-English language journal in an English-language journal, without first getting the English language journal’s permission. All but one of the cases of duplicate publication involved authors based in low- or middle-income countries, many of which lack strong research integrity policies, and some of which have, or used to have, policies of rewarding publications directly with cash payments. And all four cases of plagiarism involved authors from low- or middle-income countries. Obviously, the sample sizes here are small, so you wouldn’t want to overgeneralize from them. But these results for EEB journals broadly line up with data from other fields.
  • Retractions for unintentional author error from EEB journals mostly only happen in certain specific contexts. Namely, they happen almost exclusively in internationally-leading, high-impact EEB journals. My interpretation of this is not that authors and editors for leading EEB journals are sloppier than others. Rather, just the opposite: leading EEB journals, and the authors who publish in them, care a lot about getting things right. Papers in leading EEB journals probably also get more closely scrutinized by more readers than do papers in obscure or highly specialized journals. So the rare serious errors in papers in leading EEB journals are more likely to be discovered than are serious errors in more obscure journals, and are more likely to lead to retraction when retraction is warranted.
  • Most of the twelve cases of fabrication or possible/likely fabrication in EEB journals came from a few repeat offenders. Five were from plant physiologists Harsh Bais and Jorge Vivanco. Bais fabricated data in numerous papers as Vivanco’s postdoc and Vivanco covered it up after he found out. Two were from a single Russian group. All authors but one asked for the two papers to be retracted after a Russian university investigation found that the remaining author had fabricated data. Two were from Jesus Lemus and Javier Grande, who may have doctored samples and invented a co-author. And it’s possible that at least two of the three remaining cases of fabrication or possible fabrication were from repeat offenders too. Anders Pape Møller only has one retraction for misconduct, but there are reasons to think other papers of his were products of misconduct (see links here). And the outcome of the Jonathan Pruitt case remains to be determined officially, but he is likely to end up with numerous retractions from EEB journals. Some ecologists who’ve looked closely at his data have publicly stated their personal view that he fabricated data (see links here**). Broadly speaking, these results for EEB journals line up with data from other fields. A disproportionately large fraction of all retractions, and an even larger fraction of all retractions for misconduct, come from a very small number of serial offenders.
  • These data give no reason for blanket distrust of specific EEB journals. Yes, some EEB journals have more retractions than others. But no EEB journal has more than five. And the EEB journal that has the most–Journal of Chemical Ecology–is in the top spot only because it was a favored outlet of serial fraudsters Bais & Vivanco. Three of their retractions were from J Chem Ecol.
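The censoring constraint flagged in the update above is easy to see in a toy simulation. The sketch below assumes a hypothetical, unchanging distribution of true publication-to-retraction delays (exponential, mean 4 years; these numbers are made up, not the Retraction Watch data) and shows that the mean *observed* delay still shrinks for recently published papers, simply because long delays haven’t had time to happen yet:

```python
import random

# Toy illustration of the censoring constraint on mean time-to-retraction.
# The *true* delay distribution is held constant across publication years;
# all numbers are hypothetical, not the Retraction Watch data.

random.seed(1)
TRUE_MEAN_DELAY = 4.0  # assumed true mean delay, in years, for every cohort


def mean_observed_delay(window_years, n=100_000):
    """Mean delay among retractions observable within window_years of publication.

    Delays longer than the window are censored: those retractions
    haven't happened yet, so they can't be counted.
    """
    delays = (random.expovariate(1 / TRUE_MEAN_DELAY) for _ in range(n))
    observed = [d for d in delays if d <= window_years]
    return sum(observed) / len(observed)


# Papers published more recently have a shorter observable window.
for pub_year, window in [(2005, 15), (2010, 10), (2014, 6), (2017, 3), (2019, 1)]:
    print(f"papers from {pub_year}: mean observed delay "
          f"= {mean_observed_delay(window):.2f} yr (window = {window} yr)")
```

Even with no real change in how fast retractions happen, the simulated mean observed delay falls steadily for more recent cohorts. That’s why the eyeball check in the update matters: the argument that times to retraction are genuinely dropping rests on observed delays being bunched near the *low* end of the mathematically-possible range, not merely being capped by it.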

*It’s harder to search for retractions of EEB papers from general biology or general science journals, so I didn’t. I’m prepared to put X amount of work into a blog post, where X is >0 but not large. 🙂

**I have my own views on the Pruitt case, which I’m choosing not to share publicly at this time. But I don’t think anyone who’s chosen to state their views publicly should be chided for doing so. There are cogent reasons to publicly express a view (if you have one), and cogent reasons not to.

21 thoughts on “The history of retractions from ecology and evolution journals (UPDATED)”

  1. Really interesting post, Jeremy. It would be useful to also consider historical examples of fraud (or suspected misconduct) which _should_ have resulted in retractions but didn’t, because retractions weren’t a thing back in the day. Two that immediately spring to mind are papers resulting from John Heslop-Harrison’s work:

    And as far as I’m aware none of the Piltdown papers have been retracted, e.g.

    Keith, A (1914). The Significance of the Skull at Piltdown. Bedrock. 2: 435–53

    Woodward, A. S. (1913). Note on the Piltdown Man (Eoanthropus Dawsoni). The Geological Magazine. 10: 433–34

    I’m sure that DE readers can crowd-think other examples.

  2. Thanks for posting. And good to see you call out my former collaborators directly. Those were not the only papers that had issues. For example, several papers were corrected rather than retracted, and Ecology Letters refused to do anything, saying that because (at that time) the paper was 5 years old, it was too old to bother with. Shameful! It’s never too late to work to correct the literature.
    Science should retract a paper that was only corrected: All but one part of that paper was reported to be incorrect (read, falsified in one way or another) yet somehow the authors convinced the editors at Science it could still be “corrected”. I wrote a letter to the editor at the time, and they echoed Ecology Letters – it had been too long to publish a letter to the editor. These papers are still available, and PDFs can be downloaded directly (original version in most cases, no indication of correction), for example from Web of Science. No one would ever know that they had been “corrected” without some really deep searching.

    • Thanks for the further background re: Bais & Vivanco. Your comments illustrate that even though journals take misconduct more seriously than they once did, there is still room for improvement.

      • I like to think, and I suspect it’s probably true, that even in the last 10 years things have improved, in part because with social media, it’s harder to sweep stuff under the rug.

      • I might even say they’ve changed on that front just in the last couple of years. There’s an analogy here to #MeToo making it more costly for institutions to sweep accusations of sexual harassment under the rug. Or at least, some institutions (not all) now perceive it as more costly, which comes to the same thing.

  3. “… and one case of fake peer review.”
    Did you notice the details? In other fields authors discovered it was expeditious to create sock puppet reviewer IDs so that the “mouse-click editors” would send them their own papers to review. Or they would create a fake email for a real person to get their own papers to review. This worked for many papers until the editor finally wondered why the reviews were unusually prompt.

    • I confess I’ve always found it ridiculous that these fake peer review scams ever work. They’re like the “Nigerian prince” emails of peer review. I’ve been an editor myself for years at two different good EEB journals, and I know a bunch of other editors at good EEB journals. None of them would ask for a review from some rando they’ve never heard of, just because the author provided an email address! And none of them would ask for a review from some real person by using a made-up Gmail address that the author provided for that person! Just like no one I know would ever be taken in by a “Nigerian prince” email scam.

      The solution to these scams seems to me to be some combination of “get editors who do their frickin’ job” and “ignore journals that employ editors who don’t do their frickin’ job”.

    • “Did you notice the details?”
      That one case of fake peer review was at Plant Ecology in 2014 (that’s the paper date, it was retracted in 2016). It involved authors from Pakistan and Malaysia.

  4. Other cases:
    One that particularly troubled me was when a group of researchers succeeded in spiking a paper on endangered bird habitat requirements by a rival group, by arguing that their rivals didn’t have permission to include a dataset that they considered proprietary. That was despite the fact that the data collection in question had been supported by public funding. The notion of publicly-funded but not publicly available data doesn’t sit well with me, although others argue “which public” and are quite happy with it.

    Is botany close enough to EEB to mention here? Skilled botanist and photoshopist Oliver Voinnet is worth reading about because he (or co-author(s)) apparently polished and embellished otherwise valid results. A real head scratcher.

    • Ah yes, thanks for the reminder of the Voinnet case. I only know a little about it, because I wouldn’t really consider Voinnet an ecologist or evolutionary biologist. But yes, another example of serial misconduct.

  5. I suspect you’ve corrected for this, but more recently published papers haven’t yet had the opportunity to be retracted long after their publication (i.e., incomplete cohorts). This would bias toward a shorter mean retraction time in more recently published papers if not accounted for.

    • True. And no, the averages in the post don’t correct for that (not sure how you would…). I had thought that was pretty obvious, but I’ll update the post to make it explicit.

      • Could perhaps subset older retractions to the same maximum length for recent years (3 yrs?: 2017–2020?).

      • See the updated post. I didn’t do a formal test, but just eyeballing the data it’s clear that, in recent years, times from publication to retraction are bunched up towards the shorter end of the range of mathematically-possible values. To a much greater extent than was the case earlier on (say, before 2009-ish).

  6. Pingback: Friday links: a cat tale (of possible scientific misconduct), COVID-19 vs. everything, and more | Dynamic Ecology

  7. Pingback: Friday links: an EEB retraction, the best/worst abstract ever, and more | Dynamic Ecology
