How long do institutional investigations into accusations of serious scientific misconduct typically take? Here’s some data. (UPDATED)

When a scientist is credibly accused of serious scientific misconduct, one or more institutions (often but not always the accused’s employer) launch a formal investigation. How long do those investigations typically take? Here’s some data I compiled.

I did some quick googling for information on institutional investigations into the top 10 researchers on the Retraction Watch leaderboard for most retractions of all time. I also googled for information on a few other cases, chosen either because they’re famous (e.g., Jan Hendrik Schön) or because they’re from ecology and evolution (e.g., Oona Lönnstedt). All the institutional investigations I looked at concluded that the researcher in question had committed scientific misconduct. But they varied in how long they took to reach that conclusion:

Yoshitaka Fujii: An anesthesia journal began investigating Fujii’s work in 2010, I think in collaboration with other journals. The journal hired a statistical consultant as part of the investigation. The journal released a report on 168 of Fujii’s papers in Mar. 2012. Then the Japanese Society of Anesthesiologists began investigating >200 of Fujii’s papers in Apr. 2012, and reported in June 2012 that most were fabricated. So that’s a ~2-year investigation, followed by a ~3-month investigation.

Joachim Boldt: Boldt was suspended from a German hospital on 10 Nov. 2010. The hospital investigation concluded in Aug. 2012. A 1 year, 9 month investigation.

Yoshihiro Sato: 4 institutional investigations began in 2017; 3 were complete by March 2018. I couldn’t find information on how long the 4th investigation took. 3 investigations of, oh, approximately 1 year each.

Jun Iwamoto: see Yoshihiro Sato above. Iwamoto was Sato’s collaborator, so I didn’t double-count those investigations.

Ali Nazari: A whistleblower alerted Swinburne University to anomalies in many of Nazari’s papers in 2017. Swinburne completed its investigation by late Oct. 2019. An approximately 2-year investigation.

Diederik Stapel: Tilburg University suspended him for data fabrication in Sept. 2011 and announced an investigation. The investigating committee released an interim report the following month. The final report was released around Nov. 2012. 1 month (interim investigation), ~14 months (final investigation)

Yuhji Saitoh: see Yoshihiro Sato, above. Saitoh was a collaborator of Sato’s. I didn’t double-count those investigations.

Adrian Maxim: Couldn’t find any information.

Peter Chen: An investigation by ETH Zurich began sometime around March 2009 (no earlier). The investigative report was completed on 15 July 2009. ~4 months

Scott Reuben: A routine audit at Baystate Medical Center turned up irregularities in May 2008. In Mar. 2009, Baystate announced that Reuben had admitted faking data. 10 months

Jan Hendrik Schön: Bell Labs investigation began in May 2002. Report released Sept. 25, 2002. 4 months

Fazlul Sarkar: This one’s a special case. Sarkar resigned from Wayne State in May 2014 to take up a position at Mississippi. Suspicions had already been raised about his work on PubPeer, but as best I can tell Wayne State hadn’t started formally investigating at that point (UPDATE: I seem to be wrong about whether Wayne State had started investigating at that point; see the comments). On June 7, 2014, Mississippi notified Sarkar that they had received allegations of misconduct from PubPeer. Later that month, Mississippi notified Sarkar that the job offer was rescinded. Sarkar subsequently attempted to unresign from Wayne State. Wayne State refused to give him his old position back, and instead reappointed him only to a 1-year position. So I guess you could say that Mississippi completed an investigation in less than a month. But really, it’s probably better to say that there wasn’t any formal investigation comparable to the others on this list. At least not in May-June 2014.

Oona Lönnstedt: Sweden’s Central Ethical Review Board began investigating Lönnstedt (and Peter Eklöv) for data fabrication in a Science paper sometime in 2016, and finished investigating in Apr. 2017. Subsequently, James Cook University began investigating Lönnstedt in late 2019, and publicly announced the results on Aug. 13, 2020. A ~1-year investigation, and a ~9-month investigation

Anders Pape Møller: The Danish Committees on Scientific Dishonesty began investigating after receiving a complaint on 29 Mar. 2001. On 25 Sept. 2002 they released their report. A 1 year, 6 month investigation.

So if you’re scoring at home, the mean and median time for these 14 investigations are both ~1 year. That’s true whether you count the time to the interim report on Stapel or the time to the final report. The range is 3 months to 2 years (counting the final report on Stapel rather than the interim one).
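For anyone who wants to check that arithmetic, here’s a quick sketch in Python. The durations are in months, as I read them off the list above; they’re all approximate, and I’ve counted Stapel to the final report rather than the interim one.

```python
from statistics import mean, median

# Approximate investigation durations in months, from the list above.
# Swap the 14 for a 1 to count Stapel to the interim report instead.
durations = [24, 3,       # Fujii (journal, then Japanese Society of Anesthesiologists)
             21,          # Boldt
             12, 12, 12,  # Sato (the 3 completed investigations)
             24,          # Nazari
             14,          # Stapel (to the final report)
             4,           # Chen
             10,          # Reuben
             4,           # Schön
             12, 9,       # Lönnstedt (Swedish board, then James Cook University)
             18]          # Møller

print(mean(durations), median(durations))  # ~12.8 and 12.0 months, i.e. ~1 year
print(min(durations), max(durations))      # 3 to 24 months
```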

Note that two of the investigations that were shorter than 1 year were preceded by longer investigations into the same individual. That might be a blip, or it might be a sign that the completion of one investigation accelerates subsequent investigations by other institutions.

No doubt there are many factors that might explain variation in investigation length. These investigations were conducted by different institutions in different countries. They were initially prompted by different things. The investigations covered widely varying numbers of papers and other scientific works. Etc.

I think that these data provide some useful context for the ongoing McMaster University investigation into Jonathan Pruitt. That investigation has been running for approximately 1 year at this point, so it’s still well within the range of durations over which broadly similar investigations have been completed in the past. Of course, that doesn’t tell you whether the McMaster investigation is proceeding fast “enough”. That’s a different conversation. But it does suggest that the factors determining the length of the McMaster investigation mostly aren’t specific to the McMaster investigation. So if you want these sorts of investigations to be shorter, you should probably focus most of your attention on systemic factors that affect investigation length.

26 thoughts on “How long do institutional investigations into accusations of serious scientific misconduct typically take? Here’s some data. (UPDATED)”

  1. I wish you had done a poll first to get people’s intuitions on this. I would have been very, very wrong – I would have guessed at a mean time of 4-5 years. I am astounded at how fast these seem to move! I wonder if I’m the only one?

      • I’m actually (pleasantly) surprised at how quickly most of these have moved too. Did you notice any correlation between the number of papers flagged and the investigation time? Along these lines, I wonder about the goal of the investigations: are they to determine *which* papers contain evidence of misconduct, or just whether *any* paper offers evidence of misconduct? If the goal is to determine the validity of each flagged paper, you would expect investigations into lots of suspicious papers to take much longer. But if the goal is only to show misconduct in >=1 paper, because that is sufficient grounds for the institution to make a decision about further employment, then it seems investigations could move a lot faster.

      • Yeah, the possibility of a correlation between number of papers flagged and investigation time occurred to me too. I didn’t look into it, but just offhand I don’t think there is one. I think there are just too many other variables that affect investigation length. For instance, the investigations into Lönnstedt only concerned a single paper or a couple of papers, but they weren’t exceptionally short investigations. Whereas the investigation into Stapel concerned dozens of papers, but the interim report was completed in a month.

        My casual impression is that none of these investigations just stopped as soon as they found strong evidence of misconduct in a single paper.

      • I am surprised that we now have two commenters who are surprised at how fast these investigations typically move. I had thought most people would either be unpleasantly surprised that the investigations take this *long* (“A year?! How can it possibly take longer than a typical peer review or a typical criminal investigation?!”), or else be unsurprised and unhappy (“Yup, just as I thought, they usually take about a year. Which sucks–that’s way too long!”).

        Now I’m really kicking myself for not doing a poll on this first, as Stephen Heard suggested.

    • I am also surprised they were so quick. I have seen harassment investigations at my own institution drag on much longer than that, and Retraction Watch is full of complaints that journal or institution investigations of alleged fraud are still hanging after 2+ years. I wonder if the fact that most of these were very high profile cases makes a difference.

      • “I wonder if the fact that most of these were very high profile cases makes a difference.”

        Could be. I have no idea. I did try to google for data on the average length of US ORI investigations, but came up empty. Most of those investigations concern what might be called “garden variety” scientific misconduct. But you’ve inspired me to have another go…

      • Ok, see here: https://cen.acs.org/research-integrity/misconduct/New-Office-Research-Integrity-leaders/98/web/2020/08

        Apparently, the US ORI receives about 200 allegations of misconduct annually. But many aren’t investigated, for instance because they’re outside ORI’s jurisdiction or because the statute of limitations has passed. The US ORI “investigates and closes 40-70 cases annually”. Now, I guess they could be taking >1 year/investigation on average, so that each year they’re opening 50ish new cases and closing 50ish old ones from >1 year ago. But I dunno, that seems a little implausible to me? So I’m guessing that a typical ORI investigation takes <1 year.
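        One back-of-the-envelope way to put numbers on that hunch, assuming ORI’s intake and closure rates are roughly steady: by Little’s law, the average number of open cases equals the closure rate times the average case duration. Here’s a tiny sketch; the ~55 cases/year figure is just the midpoint of the 40-70 range above, and the average durations are hypothetical.

        ```python
        # Little's law for a steady-state queue: open_cases = closure_rate * mean_duration.
        closure_rate = 55  # ORI closes ~40-70 cases/year (midpoint of the range above)

        for mean_duration_years in (0.5, 1.0, 2.0):  # hypothetical average durations
            open_cases = closure_rate * mean_duration_years
            print(f"{mean_duration_years} yr average -> ~{open_cases:.0f} cases open at any time")
        ```

        On a 2-year average, ORI would be carrying a standing backlog of ~110 open cases at any given time, i.e. two full years’ worth of closures.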

        p.s. I was interested to read that "most" of those 40-70 cases the ORI investigates and closes each year have no findings, or at least no "legally defensible" misconduct findings. Apparently, that's often because relevant evidence is unavailable (whether because of accidental loss or deliberate destruction).

        p.p.s. Another interesting tidbit: the large majority of misconduct accusations referred to ORI apparently involve image manipulation.

      • “Retraction Watch is full of complaints that journal or institution investigations of alleged fraud are still hanging after 2+ years.”

        It does look to me like institutional investigations of scientific fraud allegations tend to be quicker than journal retraction decisions (though the latter have gotten quicker over time).

  2. I would have guessed 6 months (and that is not totally random – it is based on knowledge of multiple investigations at two educational institutions I am familiar with, although neither was about research fraud).

    “So if you want these sorts of investigations to be shorter, you should probably focus most of your attention on systemic factors that affect investigation length.” You mean like involvement of lawyers and universities’ choice to weight avoiding lawsuits much more heavily than doing the right thing relative to the weightings used in business?

    Really, if it is a priority for a university to finish (i.e., they commit resources), and if universities are willing to ignore the marginal risk of lawsuits that they are likely to win if they do occur, there is no reason this needs to take more than six months.

    I’m curious Jeremy if in your survey you found any cases where the accused sued the university and won their job back?

    • “You mean like involvement of lawyers and universities’ choice to weight avoiding lawsuits much more heavily than doing the right thing relative to the weightings used in business?”

      Yes, that’s part of it. Though I don’t know that it’s always fear of lawsuits from the accused that drives university behavior in these cases. For instance, if the accused is a member of a union, the university is (rightly!) going to care about the union’s reaction to the investigation.

      “I’m curious Jeremy if in your survey you found any cases where the accused sued the university and won their job back?”

      That’s another post I was thinking of writing. I haven’t researched this systematically as yet. But just thinking back to the serious misconduct cases I have looked into, I can’t think of many in which the accused overturned the findings of an investigation against them, or got the sanctions reduced on appeal.

      Schön lost his PhD (among other sanctions from various institutions). He sued to get it back and won. But the university appealed and won their appeal. So Schön did lose his PhD in the end.

      The David Baltimore case is the only high profile case that comes to mind in which the outcome of an initial investigation subsequently got overturned. But that was a complicated and unusual case. Congress and the FBI got involved… Baltimore himself wasn’t accused of fabrication, but rather of failing to exercise sufficient oversight and failing to take a whistleblower sufficiently seriously….I don’t recall all the details.

      I haven’t gone back and looked at the Møller case. I vaguely recall that there were various appeals in that case, at least some of which Møller won?

      I do think you’re asking the right questions. If we want university investigations of research misconduct to go faster, well, what would that take? Are there already-existing models that could be followed? Should we throw more resources at the investigations? Should we not turn the investigations over to lawyers? Etc. And what are the likely costs and benefits of a changed approach? I feel like, in science, the conversation about this stuff is just people saying or implying “It only took *me* [X amount of time] to satisfy myself that so-and-so committed research misconduct, so any investigation that takes longer than X is too slow.” I’m a little uncomfortable with that attitude, even though I tend to agree with you that most institutional investigations could and should be rather quicker. I mean, I’ve seen 12 Angry Men…

    • Heh, coincidentally, Retraction Watch’s entry for today is about a former star researcher at the Univ. of Illinois College of Medicine who lost his position over “recklessness” (the uni investigation stopped short of finding him guilty of misconduct)*. He sued to get his job back and lost. https://retractionwatch.com/2021/03/15/falsifying-elements-prompt-retraction-of-three-more-papers-by-former-peorian-of-the-year/#more-121715

      *Well, recklessness, plus he also had a serious gambling problem that led to him gambling at casinos when he was supposed to be working. And there were accusations he mistreated employees. So it sounds to me like he lost his job for a combination of reasons, not solely because of conduct verging on scientific misconduct.

    • Perhaps part of the reason that people who committed scientific misconduct can’t usually sue to get their jobs back is because of how thoroughly they were investigated in the first place? So maybe if institutional investigations were much faster–and as a consequence, less thorough in the eyes of civil courts–more people would be able to sue successfully to get their jobs back? I dunno, just spitballin’ here.

  3. I’d have to say they take about as long as it takes to muzzle the whistleblower…

    … Confucius say “Circle the wagons, boys!”

    • No, with respect I think that’s far too cynical a view, at least when it comes to the sorts of cases covered in this post. As I said in the post, every one of the investigations covered in the post concluded that the accused had committed scientific misconduct. And the people who are found to have committed extremely serious scientific misconduct ordinarily receive significant sanctions, including losing their jobs: https://dynamicecology.wordpress.com/2020/08/10/what-happens-to-serial-scientific-fraudsters-after-theyre-discovered/

      • Well, then I’d have to say your experiences differ significantly from mine. I’ve worked in both private and public research environments, spanning the USA. Both as a casual observer and a person involved with these complaints/investigations, I witnessed and reviewed evidence of blatant research misconduct. Only once was one such violator shown the door. On these other occasions?

        The whistleblowers were shown the door, and the manner in which they were treated prior to that was nauseating. My view is not “far too cynical,” Jeremy- it’s what I and many of my colleagues have witnessed for decades in the USA. Here’s a tip of that iceberg; read it & weep:

        https://khn.org/news/research-misconduct-allegations-shadow-likely-cdc-appointee/

      • Fair enough. My earlier comment was informed primarily by looking at what happened to the worst serial fraudsters in scientific history (as measured by # of retractions). Quite possibly, that admittedly non-random set of cases is unusual. Maybe the most serious serial offenders are the only offenders who are likely to lose their jobs and receive other sanctions.

        Outcomes of more garden variety scientific misconduct investigations do seem to me to be more of a mixed bag than investigations of the worst serial offenders. NSF OIG for instance investigates about 80% of the allegations it receives, and finds misconduct in about 24% of the cases it investigates (https://dynamicecology.wordpress.com/2021/03/17/what-i-learned-about-scientific-misconduct-from-reading-the-nsf-oigs-semiannual-reports/).

        Based on my reading of the literature on this stuff, I don’t feel great about the overall landscape of institutional misconduct investigations. But nor do I feel like the landscape is so bad as to justify blanket cynicism: an assumption that misconduct is always swept under the rug, with only very rare exceptions.

        But maybe I’m wrong. Perhaps I’m not making sufficient allowance for the possibility that numerous cases get swept under the rug in such a way that they never show up in the literature on this stuff.

  4. Regarding the 17 papers from P. Mishra described in the CHM (Crystallography Horror Museum, http://cristal.org/CHM/CHM.html) and opened at the end of 2020 (the papers were commented on at PubPeer in December): an investigation officially started in Nepal in February 2021, and by February 7, 2021 I had received this: “Investigation committee is about to submit report in this week. and the matter will gain media coverage.” However, nothing more up to now…

    • Yes – I think this is exactly it. Investigations themselves don’t take that long. It is that the investigation then gets put on hold internally while lawyers negotiate over the consequences. And meanwhile science is left looking for guidance.

      • “Investigations themselves don’t take that long. It is that the investigation then gets put on hold internally while lawyers negotiate over the consequences.”

        Hmm. I’m sure that’s right in some cases. But I’m not sure if that’s generally right or not. It’s not right for the one scientific misconduct investigation with which I’m sufficiently familiar.

  5. Thread indicating that Wayne State was investigating Sarkar for a while before he left:

    The thread also argues that it’s not informative or helpful to focus, as I did, on the lengths of completed investigations. I leave it to readers to consider that point, because I had a bad day today for totally unrelated reasons and I don’t have the energy for anything but a beer right now…

  6. I tweeted a few remarks, mostly about the Fazlul Sarkar case:

    Regarding questions above about why investigations that do happen may still take a long time: usually nobody is in a hurry! Authors obstruct as much as possible, institutions worry about their reputations, often lawyers get involved. It’s also spectacularly thankless work – usually unpaid, unrecognised, often unpleasantly conflictual, etc.

  7. Pingback: Friday links: behind the scenes of the first 17 months of #pruittdata, another serious data anomaly in EEB, and more | Dynamic Ecology

  8. Pingback: BIG #pruittdata news: McMaster U investigation concluded. Jonathan Pruitt placed on paid administrative leave “until the process is complete”. Pruitt has no access to students or research funds while on leave. | Dynamic Ecology
