Some data and historical perspective on scientific misconduct (UPDATEDx4 with additional data)

There’s been much discussion recently of irregularities in the raw data underpinning numerous papers by prominent behavioral ecologist Jonathan Pruitt. No formal investigation by Pruitt’s current or former employers is yet complete; it’s far too early for that. But inevitably, discussion and speculation about the Pruitt case has morphed into broader online discussion of scientific misconduct, defined for purposes of this post as fraud, fabrication, and plagiarism.*

This post is about those broader discussions, not the ongoing Pruitt case. How prevalent is scientific misconduct, what causes it, and what if anything should be done to reduce its prevalence? From what I’ve seen, those discussions of broader issues around scientific misconduct have mostly been informed by single examples, such as the recent case of Peter Eklöv and Oona Lönnstedt. I think that’s understandable but also a little unfortunate. We’re scientists; we don’t ordinarily generalize from a sample size of n=1. There’s a literature on scientific misconduct–we should learn from it! So I spent a bit of time reading the literature on scientific misconduct, some of which I’d read before but some of which was new to me. Here’s a summary of what I found. I am by no means an expert on scientific misconduct. But hopefully this post advances the ongoing discussion in some small way, by raising awareness of the relevant literature.

You should totally grab a coffee and read the whole thing, because some of these data are probably the opposite of what you expect them to be!

Note: this is a long-ish post. (sorry!) Please do read the entire post before you tweet about it or comment on it. And please don’t leap to conclusions about my views on anything that’s not explicitly stated in the post. Scientific misconduct is an important issue on which people have strongly-held views. So I did my best to phrase this post carefully. I can’t promise I was perfect; nobody’s perfect. By all means ask in the comments if anything is unclear, and if necessary I’ll update the post and flag the updates as such.

I went with a Q&A structure:

Is scientific misconduct a recent phenomenon? How far back do cases of scientific misconduct go?

Cases of scientific misconduct go back more than a century at least. The Retraction Watch database of retractions (which is quite extensive though not comprehensive) lists 48 retracted papers from before 1980, many of them retracted for misconduct. For instance, this fake medical case report from 1923.

Notable old cases of scientific misconduct (plus a few borderline/controversial old cases) include:

  • In the “Piltdown Man” case of 1912, someone (almost certainly Charles Dawson) faked sensational Pleistocene fossils purportedly from an ancestor of modern humans. Dawson faked many other antiquities.
  • In the mid-20th century, Cyril Burt fabricated data from twin studies so as to inflate the apparent heritability of IQ.
  • Controversial psychologist Hans Eysenck is on track to have dozens of papers retracted, some of them 60 years old.
  • R. A. Fisher famously accused Gregor Mendel of fudging his genetic data to improve conformity with Mendel’s laws. Hartl & Fairbanks (2007) defended Mendel.
  • In his 1981 book The Mismeasure of Man, Stephen Jay Gould made numerous dubious data analytical choices in order to falsely accuse 19th century anthropologist Samuel Morton of scientific fraud regarding human skull measurements. Whether Gould’s dubious analytical choices themselves rise to the level of scientific misconduct is a question on which you’d probably get different answers if you asked different knowledgeable people.
  • Wikipedia’s incomplete list of notable scientific misconduct cases includes other cases that go back before 1980, besides some of those noted above. Former Harvard cardiologist John Darsee got a 10-year NIH funding ban in 1983 for a track record of serial misconduct going back many years before that. Prominent Boston University medical researcher Marc Straus admitted to using false data in 1982. Dermatology researcher William Summerlin admitted to scientific fraud in 1974.
  • Bernard Kettlewell’s famous experiments on evolution of melanism in peppered moths in the 1950s and ’60s were claimed to be fraudulent in a 2002 book by journalist Judith Hooper. My understanding from everything I’ve read is that Hooper’s claims of fraud are groundless. But the case is famous, so I’m including it here because otherwise I’m sure someone would bring it up in the comments. (UPDATE: error in Bernard Kettlewell’s name fixed now. My bad.)

Scientific misconduct is thus much older than many current features of academic science, such as the competitive academic job market or pressure to “publish or perish”. See below for data speaking to the question of whether pressure to “publish or perish” is among the main drivers of scientific misconduct.

How many scientists commit scientific misconduct? Are the ones who get caught just the tip of a very large iceberg?

All the evidence indicates that only a very small minority of scientists ever commit misconduct, though it’s hard to put an exact number on it.

In anonymous surveys, about 2% of scientists admit to having fabricated, falsified, or modified data at least once (Fanelli 2009). Note though that there aren’t that many surveys, many of them have tiny sample sizes (<200 scientists), some are highly non-random samples, and many of them focus on US biomedical researchers. Also, some people won’t admit to bad conduct even in anonymous surveys. UPDATE: On the other hand, some people will say all sorts of things in surveys just for the LOLs. 4% of Americans claim to believe that lizardmen control the Earth. So I wouldn’t necessarily assume that that 2% number is an underestimate. As that last link points out, when you’re trying to poll on the prevalence of any rare, unpopular belief or behavior, any little source of noise, such as a few jokesters, can easily distort your estimate. /end update FWIW, the random survey with by far the largest sample size–Martinson 2005 (n=3247)–found that well under 1% of US NIH-funded researchers admitted to scientific misconduct.
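To make the "jokesters distort rare-prevalence estimates" point concrete, here's a minimal simulation sketch using entirely made-up numbers (a true prevalence of 1%, a 2% jokester rate, and a sample size in the ballpark of Martinson 2005); it illustrates the arithmetic, not the actual rates:

```python
import random

random.seed(1)

TRUE_PREVALENCE = 0.01   # hypothetical: 1% of respondents really have fabricated data
JOKESTER_RATE = 0.02     # hypothetical: 2% answer "yes" regardless, just for the LOLs
N_RESPONDENTS = 3000     # roughly the size of the Martinson 2005 sample

def simulate_survey():
    """Return the fraction of respondents answering 'yes' in one simulated survey."""
    yes = 0
    for _ in range(N_RESPONDENTS):
        if random.random() < JOKESTER_RATE:       # jokester: answers yes no matter what
            yes += 1
        elif random.random() < TRUE_PREVALENCE:   # honest respondent who really did it
            yes += 1
    return yes / N_RESPONDENTS

estimates = [simulate_survey() for _ in range(1000)]
print(f"true prevalence:      {TRUE_PREVALENCE:.1%}")
print(f"mean survey estimate: {sum(estimates) / len(estimates):.1%}")
```

With those made-up numbers, the survey overestimates the true rate roughly threefold (~3% reported vs. 1% true); swap the jokesters for honest people who won't admit what they did and you get an underestimate instead. The general point is that at base rates this low, small sources of response noise can dominate the estimate in either direction.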

As you’d expect, rates of scientific misconduct estimated from the outcomes of formal misconduct investigations run much lower than that, presumably because not every instance of misconduct gets caught. Somewhere between 1 in 10,000 and 1 in 100,000 US researchers have been found to have committed misconduct in US government investigations (Marshall 2000, Steneck 2006).

How many papers are the product of scientific misconduct?

Few, though it’s of course hard to give an exact percentage because not all papers that are the product of misconduct are detected as such, or are publicly revealed to have been detected as such.

Currently, only about 0.04% of papers are retracted for any reason, so the frequency of papers retracted for misconduct has to be lower than that. And indeed, about 0.02% of papers in the PubMed database had been retracted for misconduct (Claxton 2005, Campos-Varela et al. 2019).

Back in the early oughts, 1% of submissions to the Journal of Cell Biology had improperly manipulated digital images (Steneck 2006). In a much larger study across many journals (mostly biomedical) and a greater time span, Bik et al. (2016) found that 2% of published papers had images with features suggesting deliberate, inappropriate manipulation.

UPDATE: Random audits of cancer clinical trials found that only 0.28% of trials contained “scientific improprieties”. Random audits of FDA clinical trials conducted between 1977 and 1988 found evidence sufficient to initiate a “for cause” investigation in 4% of trials. Data cited and discussed in Fanelli 2018. /end update

Is the rate of retractions for misconduct increasing? If so, is that because of increasing frequency of scientific misconduct, or because we’re getting better at detecting and responding to misconduct?

Yes, the rate of retractions for misconduct is increasing, because we’re getting better at detecting and responding to misconduct.

The absolute number of retractions has grown over time. So has the fraction of retractions that are due to misconduct as opposed to some other reason (Fang et al. 2012, and see here). About half of retractions, or perhaps somewhat more than half, are now due to misconduct (Fang et al. 2012, Li et al. 2018, and see here). So the absolute number of papers retracted for misconduct has grown over time. But of course, the number of scientific papers has grown over time, so you’d expect the absolute number of retractions for misconduct to increase for that reason alone. You want to look at the frequency of retracted papers among all papers. The frequency of retracted papers roughly doubled from 2003-2009, but stopped increasing around 2012. That increase in the frequency of retractions likely does not reflect an increase in the frequency of scientific misconduct. Rather, it likely reflects increasing efforts to detect misconduct, and to retract papers resulting from misconduct. There are several lines of evidence for this view:

  • Journals are now retracting papers much more quickly than they used to (Steen et al. 2013). That is, the average time from when a retracted paper is first published, to when it gets retracted, is dropping.
  • In 2004, just 1/4 of high-impact biomedical journals had policies on retractions. In 2009, the influential Committee on Publication Ethics (COPE) published a model journal retraction policy. By 2015, 2/3 of high-impact biomedical journals had a retraction policy (see here).
  • Errata are not increasing in frequency (Fanelli 2013). If you think of scientific misconduct as falling on one end of a continuum, with minor unintentional errors on the other end and various forms of questionable research practices in the middle, then you might expect that everything else on that continuum would increase in frequency if misconduct increased in frequency. But errata, which correct minor errors, are not increasing in frequency.
  • The number of journals that have published at least one retraction has increased dramatically over time. But among journals that have published at least one retraction, the mean number of retractions per journal has not increased (Fanelli 2013). That’s consistent with more journals starting to take both mistakes and misconduct seriously, but not with increasing prevalence of mistakes or misconduct. The latter would also lead to an increased number of retractions per journal, among journals that have had at least one retraction.
  • The number of queries and allegations made to the US government Office of Research Integrity (ORI) has increased over time, but the ORI’s frequency of misconduct findings has not increased (Fanelli 2013).
  • In 2010, the popular Retraction Watch website launched, increasing the attention paid to retractions by scientists and media outlets.
  • In 2012, the popular PubPeer website launched, providing a novel means by which potential cases of scientific misconduct could be brought to the attention of journal editors and other scientists.
  • Starting around 2004, some journals started using text-matching software to detect possible plagiarism in all their submissions. The subsequent increase in use of text-matching software is presumably what explains the post-2004 increase in the fraction of retractions due to plagiarism.
  • Similarly, many biomedical journals now routinely use software to automatically check for certain forms of image manipulation, particularly gel and fluorescence images. Presumably for this reason, the frequency of papers containing inappropriately-manipulated images has been declining since it peaked in the mid-oughts.
  • There was a big spike in retractions of conference abstracts around 2009, when the Institute of Electrical and Electronics Engineers started paying more attention to whether its many conference abstracts met its guidelines, and found that thousands of abstracts didn’t.

UPDATE: How much money does scientific misconduct cost funding agencies?

Not much.

I updated the post because I just stumbled across a paper addressing this question in the context of US biomedical research. From 1992-2012, the US NIH spent approximately $58 million of direct research funding on papers that later got retracted, and on researchers later found guilty of misconduct by the US ORI. That’s less than 1% of the NIH budget over that period. /end update

UPDATE: Looking at the semiannual reports of the US NSF Office of Inspector General, I see that in recent years OIG reports about $8-10 million annually in “questioned costs”, “investigative recoveries”, and “funds put to better use”. That includes costs and recoveries associated with financial misconduct, as well as costs and recoveries associated with scientific misconduct. Note as well that much of the financial misconduct is by institutions rather than individual PIs; it’s mostly not PIs stealing grant money for personal use. For instance, it’s stuff like institutions misspending research grant money on teaching assistants, or not properly accounting for rebates they received on equipment purchases. For context, in recent years NSF’s annual budget (not just grants, everything) has been a bit over $8 billion. So we’re talking about ~0.1% of the total NSF budget going to misconduct that later gets detected, most of which (in terms of the money involved) isn’t the sort of misconduct considered in this post. So even if 90% of scientific misconduct associated with NSF grants goes undetected, it’s still <1% of NSF’s annual budget. (Random aside: until I read the NSF OIG’s semiannual reports, I didn’t realize that universities and contractors defrauding NSF is a much bigger deal than scientific fraud is, in terms of misspending or wasting NSF money.) /end update

Is it mostly men who commit scientific misconduct?

It depends on whether you’re considering everyone who commits scientific misconduct, or just the most prolific serial offenders.

Almost all of the most prolific serial scientific fraudsters are men.

More broadly, men are somewhat overrepresented among researchers found to have committed misconduct in US government investigations (most of which are investigations of biomedical researchers), compared to their representation among all biomedical scientists. However, US government misconduct investigations focus on government grant holders. Government grant holders are more male-skewed than are all biomedical scientists for various reasons, many of which have nothing to do with propensity to commit misconduct (e.g., social forces and sex discrimination that steer women towards teaching careers rather than research careers). If you instead compare authors of retracted papers to authors of non-retracted papers from the same issue of the same journal, you find that men are not overrepresented among authors of retracted papers (Fanelli et al. 2015).

Are there other predictors of who commits scientific misconduct, and where papers based on misconduct are published? In particular, is there evidence that scientific misconduct is more common in countries with a stronger “publish or perish” culture? Are “top” researchers especially likely to commit misconduct? Are papers in “top” journals especially likely to be based on misconduct?

Based on what I’ve read, the answers to those questions seem to be (in order), “yes”, “no, just the opposite”, “no, just the opposite”, and “no, just the opposite”.

If you compare authors of retracted papers to authors of non-retracted papers from the same issue of the same journal, you find that authors of retracted papers are more likely to be based in countries that lack research integrity policies, to be based in countries in which individual publication performance is directly rewarded with cash (i.e. $X per paper), and to be in the early phases of their careers (Fanelli et al. 2015). Productive, experienced, high-impact researchers, based in countries that are thought to have a stronger “publish or perish” culture, are less likely than others to produce retracted papers, and are more likely to publish corrections to their papers for minor errors (Fanelli et al. 2015).

Other analyses are broadly in line with the results of Fanelli et al. (2015). For instance, among papers published in PLOS ONE, papers originating from the US, Canada, western Europe, Australia, Japan, and South Korea have a lower frequency of inappropriately manipulated images than expected, given the total number of papers originating from those countries. Papers originating from China, India, and Taiwan have a higher frequency of inappropriately manipulated images than expected, given the total number of papers originating from those countries.

The frequency of papers with inappropriately-manipulated images declines with journal impact factor. Note that that result comes from a random sample of many images across many journals. And among biomedical papers, the proportion of retractions that are due to misconduct, as opposed to some other reason, decreases with journal impact factor.

UPDATE: Much of the evidence cited above regarding predictors of scientific misconduct is cross-country comparative evidence. One might argue that one should also look at within-country comparisons instead. Fanelli et al. 2022 did that. Their findings reinforce the cross-country evidence. Using a similar matched-pairs design to Fanelli et al. 2015 (cited above), Fanelli et al. 2022 find that, within wealthy countries like Canada, the UK, and the US, image manipulation is not associated with measures of researcher productivity, experience, or prestige. It’s only within low- and middle-income countries that pay researchers $X/publication (especially China) that image manipulation is associated with measures of researcher productivity, experience, and prestige, in such a way as to suggest that the incentives cause misconduct. /end update

My tentative interpretation of these data is that many factors affect the prevalence of misconduct, different factors have opposing effects, and some of those factors are at least somewhat collinear. One broad implication is that there may be many different policy interventions that would reduce the prevalence of misconduct in any given context. In principle, one could imagine dialing down any of a number of misconduct-promoting factors, and/or dialing up any of a number of misconduct-reducing factors. Which factors to try to dial up or down seems like a pragmatic empirical question to me, the answer to which depends on all the usual sorts of considerations–marginal costs, marginal benefits, externalities, etc.

Is a disproportionately large fraction of scientific misconduct committed by a small number of serial offenders?

Yes.

Steen et al. (2013) found that over 40% of retracted biomedical papers were written by authors with multiple retractions to their names. And in a more recent analysis of a more comprehensive retraction database, the 500 authors with the most retractions (out of a total of 30,000 authors with at least one retraction) accounted for 25% of all retractions. 7% of all retractions in the database between 1980 and 2011 (so, >7% of all retractions for misconduct during that time) are due to a single author (!)

What do we know about the motivations and other attributes of people who commit scientific misconduct? In particular, what do we know about the motivations and other attributes of the rare serial fraudsters who become prominent in their fields?

Not much.

Above, I noted some systemic factors that predict occurrence of scientific misconduct (e.g., cash payments for publications, lack of research integrity policies). But systemic factors alone can’t fully explain occurrences of misconduct, since after all the large majority of scientists never commit misconduct.** So is there anything that individuals who commit scientific misconduct tend to have in common, something associated with them specifically rather than with the broader milieu in which they and their many honest colleagues work? In particular, what drives the very rare people who rise to prominence in their fields via serial misconduct?

Hard to say, unfortunately, beyond the fact (noted above) that the most egregious serial fraudsters are almost exclusively men. As best I can tell, serial scientific fraudsters mostly seem to deny wrongdoing, and then just leave science if and when they’re convicted of enough wrongdoing to end their careers (here’s just one of many possible examples). It seems to be rare for anyone who commits scientific misconduct to admit what they did, much less explain their own motivations.

You can make some reasonable inferences about motivation in a few cases. For instance, many of the researchers who’ve racked up dozens of retractions for misconduct were medical researchers pushing their own pet medical techniques or devices. As another possible example, two of the famous old cases of scientific misconduct listed above have to do with IQ heritability and eugenics (Burt, Eysenck). The common thread here is researchers who were super-attached to their own beliefs. Beliefs they were prepared to push at any cost, up to and including fabrication.

One notable commonality among many of the most egregious serial fraudsters is that they continued to commit fraud long after they reached secure, well-paid, senior positions. Positions that they would have kept even if they’d dialed back their research programs, or wound them down entirely. The worst serial fraudsters seem to keep committing fraud long after, and out of all proportion to, any reasonable “need” to get or keep a good job in science. In my admittedly-cursory research, I haven’t found any examples of serial scientific fraudsters who stopped voluntarily, before being caught. Are there any?

A few years ago, social psychologist and serial fraudster Diederik Stapel wrote a book confessing what he did and purporting to explain why he did it. I’ve skimmed bits of it. It’s engaging, because it’s well-written, because it’s such a rare window into the mind of a serial scientific fraudster, and, well, for the same reason it’s hard to look away from a car crash. But it’s also transparently self-serving. So I dunno. I’m not a psychologist, so I find it hard to separate honest self-reflection from dishonest excuse-making here. Stapel is now a motivational speaker (if that’s the right term…). One wonders if the book was his way of kickstarting his new career.

Conclusions

On their own, these data obviously don’t tell us what, if anything, we should do differently at a systemic level in order to prevent, detect, and punish scientific misconduct. See Dan Bolnick and Andrew Hendry’s blog for some good concrete discussion of that. But hopefully, these data inform that broader discussion. I hope to have a few thoughts of my own on what to do about misconduct in a future post, inspired (as is usual with me) by something I read that isn’t about science. In the meantime, the comments are open. Looking forward to learning from your knowledge and opinions.

Footnotes:

*I use this definition not because I oppose other, broader definitions, for instance those that define bullying and harassment as scientific misconduct. Rather, I use this definition just to keep the post to a manageable length, and to keep the focus on the specific subcategories of misconduct that have been widely discussed online among ecologists recently. If you would prefer to discuss other forms of misconduct, such as bullying, you are welcome to comment on our numerous past posts on those other forms of misconduct (for instance here and here and here). The post authors will still see your comments, and reply (or not) just as they always do.

**As Tal Yarkoni wrote in a slightly different context, “it’s not the incentives, it’s you”.

34 thoughts on “Some data and historical perspective on scientific misconduct (UPDATEDx4 with additional data)”

  1. You mentioned a few things which seemed to suggest that medical research had more of these problems (though I may be misreading!). Did any of the papers you saw compute a frequency of misconduct by field? I know lots of anecdotes about medical research, but I’m a bit skeptical because I haven’t really looked at the data compared to other scientific fields.

    • It’s funny, I do recall seeing something that said the rate of retractions for misconduct seems to be higher in biomedicine than in other fields. But I can’t find it now, so maybe I imagined it.

      • And of course, even if the rate of retraction is higher in biomedicine than in other fields, that doesn’t mean the rate of misconduct is higher. Could just be that more attention is paid to biomedicine.

  2. Kind of kicking myself that I didn’t put a poll in this post. Curious what fraction of our readers came into this post believing that misconduct is most common among “top” researchers, and that papers based on misconduct are most prevalent in “top” journals.

    I think that we get misled on this because so much publicity accrues to the rare cases of misconduct involving “top” researchers and papers in “top” journals. People who are asked to guess the prevalence of [thing] do so via how easily they can call instances of [thing] to mind. Here, [thing] is scientific misconduct. The only instances anyone ever hears about, and therefore the only instances anyone can recall, are instances involving “top” researchers and “top” journals. And the problem is, of course, that those instances attract so much attention precisely *because* they’re statistically unusual, not statistically typical.

    Now, you could of course argue that the rare instances of misconduct by “top” researchers publishing in “top” journals are the important ones to worry about. I can imagine plausible arguments for that view. But that’s a different argument than “misconduct is most prevalent among ‘top’ researchers, and among papers in ‘top’ journals”.

    • Retraction Watch contributes to creating this mistaken impression. They’ve compiled and maintain a retraction database that tries to be comprehensive. But they only post about a few of those retractions, namely the ones that seem “newsworthy”. Which means they’re much more likely to post about retractions from “top” journals, and by “top” researchers, than about other sorts of retractions.

      I don’t say this as a criticism of them. They write about the retractions that their audience cares the most about. I do the same! (our old Friday linkfests contain posts about the few “newsworthy” retractions in ecology, but not about any retractions of obscure ecology papers). And as noted in my previous comment, there’s an argument to be made that the most “newsworthy” retractions are also the ones that matter most for science. But there’s an inevitable and unfortunate side effect of everyone only writing about and paying attention to “newsworthy” retractions: it creates the mistaken impression that such retractions are the most common sort.

  3. 1. I wonder how many cases of misconduct are caught during anonymous peer review and do not become publicly known.
    2. Is it misconduct to overlook plagiarism? I have encountered this during dissertation defenses, and during abstract and article peer review. In the discussion among the peers, it may be cast as a form of cultural relativism–it might be wrong for an American student but part of the foreign student’s culture, so we will let it pass.
    3. It seems to me that self-plagiarism in particular is not well understood. For instance, one writes an annual project report and then recycles parts of it in a journal article. I also wonder how often grad assistants contribute to the report, but are left off the article.

    • No, overlooking plagiarism is not in the same category as what we are talking about here. In fact, I don’t think plagiarism itself meets the standard of misconduct we are talking about here. There are all kinds of behaviors that are bad and deserving of some kind of sanction, but faking or manipulating data results in something qualitatively different from many other types of misconduct – it results in people believing that there is evidence for something that there isn’t evidence for. It’s bad if a grad student doesn’t get credit for their work. It’s bad if I get credit for other people’s work. But if the science is good, the world hasn’t been duped. Ultimately, who gets what credit and how much matters mostly to scientists. Sure, if the incentives are messed up it will likely affect productivity and maybe even the reliability of scientists, but those kinds of misconduct are at least one step removed from the consequences of falsifying data.
      And I suspect (although based on no evidence) that few cases of misconduct get caught during peer review. I just can’t figure out how the kind of reviewing that I do would detect falsified data; if the data were faked ‘well’, I’m not sure how they would be detected. Might be an interesting – though controversial – experiment.

    • You’re absolutely right, Jim, that plagiarism was included with data falsification under the misconduct definition. But, it sounds like we might agree about the relative severity of plagiarism versus data fraud. I don’t mean to imply that plagiarism isn’t a bad thing – but I do think that the effects of plagiarism on public welfare are usually minor to nonexistent. Does it really matter to our reading pleasure if Shakespeare got credit for plays he didn’t write?
      Here’s a thought experiment – imagine a critically important piece of scientific work was shoved in a drawer, discovered a decade later by somebody cleaning out the drawer and then published under the name of the person who found it in the drawer. Given a choice between the paper (1) not being published or (2) being published by somebody who didn’t do any of the work – which would you choose? Further, what harm to the public good would have been caused by this blatant and clearly unethical form of plagiarism?
      (I get that there are other ethical alternatives, but if these were your only choices.)
      It does seem to me that academics can get all twisted up about certain kinds of plagiarism that make little sense to regular folks (e.g. copying and pasting methods descriptions from one of your own papers to another). I’m not even quite clear about why it would be such a bad thing to copy and paste somebody else’s method description if you were using an identical method. We have no problem with copying R code from the web without attribution, but copying a very technical description of a method is considered completely unethical. The distinction between those two forms of ‘copying’ seems pretty subtle to me.

  4. A point that got left on the cutting room floor: as best I can tell from reading about an admittedly-small sample of fraudsters, there are no commonalities among scientific fraudsters in terms of how well-liked they were by their colleagues before they were found out. Some were unpopular, but many seem to have been quite popular. Diederik Stapel, for instance, seems to have been generous in giving time to students and collaborators. As another example, Michael LaCour came off as unusually outgoing and keen. At conferences, he was always sitting down with people to show them his work in progress. But as best I can tell, nobody thought he was a jerk or whatever.

    My tentative inference is that there’s no correlation between scientific fraud, and “everyday” behavior/personality.

  5. Well done. I’d say you captured the gist of the issue in this post, and reports of 1-4% overt misconduct rates in published science are probably the best estimates there are. If the question were broadened from egregious misconduct to papers lacking overall scientific integrity (e.g., unreliable owing to bias, selective reporting or trimming of data, irreproducibility), the percentages would undoubtedly be higher, but I’ve never seen anyone try to put credible numbers to it.

    One silver lining in the Pruitt situation is the exemplary response of affected co-authors, editors, and colleagues in the transparent and prompt yet deliberate, careful and fair evaluations of the datasets. The community response has been far superior to that in any other incident I’ve heard of, and these are issues I’ve been following for several years now. This is the case study on how to fairly resolve data integrity questions in the Web 2.0 environment without a rush to judgement, but also without letting it drag on for years through opaque institutional investigations. I hope COPE highlights it, for it really reflects the best behavior in a scientific community in a difficult situation.

    Contrast this with the handling of questions raised about Oona Lönnstedt’s lionfish studies while a PhD student. A whistleblower noted that the methods required 86 fish for the experiments, but the collection report of the fish taken from the reef said only 12 were taken. In response, the authors issued a correction saying that contrary to the initial methods description, some fish were reused and only 40 fish were required, and they provided a collage of fish photos showing “evidence of the number of lionfish.” Then it turned out that some of the evidence of number of lionfish were duplicated images, some of which had been altered to look different, and others appeared to be different images of the same fish. This led to explanations that they were just pictures of lionfish and not really intended as evidence. Her co-authors also explained that while they traveled from Saskatchewan to the Great Barrier Reef to see the study, they didn’t see Lönnstedt actually conduct the experiments. ‘Because lionfish are nocturnal, [the co-authors] believed the experiments took place “in the middle of the night,” when they were asleep.‘ It’s been 3 years since these allegations were first made and her university ‘promised to do an investigation’ but as of September 2019 that investigation had yet to start.

    • Re: university investigations into cases of possible misconduct, it will be interesting to see if McMaster U investigates the Pruitt matter. And if they do, how quickly the investigation reaches a conclusion. AFAIK, McMaster’s public statements so far have been limited to saying they’re aware that issues have been raised, plus boilerplate statements that they take research integrity seriously and have policies on integrity. Obviously, they owe the scientific community, government funding agencies, taxpayers, and Pruitt himself a prompt, thorough, fair investigation.

    • “This is the case study on how to fairly resolve data integrity questions in the Web 2.0 environment without a rush to judgement but not letting it drag on for years through opaque institutional investigations.”

      Speaking as someone who’s participated a bit in the community response to the Pruitt situation, I find this comment both gratifying and…impractical? exhausting? (not sure what the right word is…) I completely agree that the community response has been exemplary; a lot of people have really gone out of their way to do the right thing on a lot of different fronts. But precisely because a lot of people *have* gone out of their way, I don’t know if the community response to this situation really provides a model for others to follow. I mean, yes, absolutely, having functional institutions and good written policies helps. But at some point, an appropriate community response to this sort of situation comes down to having a critical mass of good people. People who are prepared to suck it up and do what needs to be done. Even if doing it is exhausting, and no fun, and isn’t officially their job, and takes time away from their official jobs, and won’t be rewarded in any concrete way. “Have lots of people around who are willing and able to go above and beyond when the situation demands it” just doesn’t strike me as very practical, actionable advice for others to follow! But perhaps it can serve as an inspirational example for others. Inspiring examples often seem impractical for others to follow; that’s part of what makes them inspiring. 🙂

      • It would be totally impractical in a huge field like biomedicine and perhaps impractical in most settings. It’s certainly unwelcome, having to go to a lot of work quickly to clean up a dungshow created by others. Still, the forthright behavior of the affected collaborators and the leadership of EIC Dan Bolnick were remarkable. And yes, fortunately he had loyal AEs, so when he reached out for help to one particular disinterested and numerically literate AE, that AE stepped up. At first I thought this could only be done in a right-sized science niche, but the questionable Lönnstedt article was also published within the behavioral ecology niche, and the questions there linger. (There the co-authors were reportedly literally asleep on the job.) I’m sure this really sucked for everyone involved and for those unlucky to be within the splatter zone, but hopefully their stand-up and transparent behavior will be noticed and remembered outside of their circle.

  7. Not terribly relevant to this conversation, but did Kettlewell go by the nickname “George”? I’m just not sure how you get “George” from Henry, Bernard, or Davis. Typo?
