Institutional investigation finds star marine ecologist Danielle Dixson guilty of serial data fabrication

Science news story here.

I’m struck by both the similarities and differences to the Pruitt case.

An incomplete list of similarities:

-repeated data fabrication across numerous papers over many years, often taking the form of duplicated sequences of observations indicative of copying and pasting data

-current and former trainees of the accused were crucial to the investigation, going above and beyond to reveal the truth.

An incomplete list of contrasts:

-Dixson was given away in part because of the physical impossibility of her methods. It just wasn’t physically possible for her to have collected the data she claimed to have collected, in the time frame she claimed to have collected it, using the methods she claimed to have used. In contrast, I’m not aware of any instances of the Methods sections of Pruitt’s papers describing any physical impossibilities.

-Pruitt had no public defenders of any consequence, save for his own lawyers. In contrast, Dixson has–indeed, continues to have!–very vocal public defenders, including her own doctoral and postdoctoral supervisors and other prominent marine ecologists. Those defenders have defended Dixson not by addressing the specifics of the allegations against her (e.g., “Here’s why duplicated data X in paper Y don’t actually indicate fabrication”), but rather by (i) imagining that the whistleblowers have bad motives and attacking them for those purported bad motives, and (ii) talking about how hard-working, dedicated, and smart Dixson is. It’s immensely to the credit of Pruitt’s many former friends, trainees, and collaborators that all of them followed the evidence where it led.

-The University of Delaware’s institutional investigation into Dixson was much faster than McMaster University’s investigation into Pruitt.

I don’t know what larger lessons to draw from these similarities and differences, or even if any larger lessons should be drawn. I just find them striking.

23 thoughts on “Institutional investigation finds star marine ecologist Danielle Dixson guilty of serial data fabrication”

  1. I think Pruitt, like Dixson, claimed to simultaneously record behaviors from multiple individuals (as an excuse for why there were data duplications in Pruitt’s case), and to have measured the timing of behaviors with a precision that would not be possible (e.g., tenths or hundredths of a second). So at least after the fact, he claimed some improbable methods.

    • I see what you mean but I don’t think it’s quite the same. The Methods sections of Pruitt’s papers are sometimes vague or ambiguous as to exactly how he collected the data he claimed to collect. E.g., not saying whether he used multiple observers so as to simultaneously record the behaviors of multiple individuals. As far as I can recall, you can’t ever tell *just* from reading the Methods sections of Pruitt’s papers that the study could not possibly have been conducted as described. Whereas Danielle Dixson’s papers describe things like a flume that ran so fast that any larval fish would’ve been washed out the back of it, unable to swim against the current. Or you add up the amount of time required for Dixson to have collected all the observations she claimed to have collected (observing each of N fish for 5 seconds at a time or whatever), and you discover that she claims to have been doing *nothing* but observing fish for some impossibly large amount of time per day, for weeks on end. Or you find that she was purportedly observing fish for (say) 12 hours per day for X weeks, but her methods section says her study was completed in X-Y weeks. That’s what I mean by “physically impossible”–does that make sense?
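
      Here’s the back-of-the-envelope version of that timeline check, as a minimal sketch (every number below is made up purely for illustration; none are Dixson’s actual figures):

```python
# Hypothetical timeline-feasibility check: could the claimed observations
# fit into the claimed study period? (All numbers invented for illustration;
# they are not Dixson's actual figures.)

def observation_hours(n_trials, points_per_trial, seconds_per_point):
    """Total observation time implied by the claimed data, in hours."""
    return n_trials * points_per_trial * seconds_per_point / 3600

needed = observation_hours(10_000, 60, 5)  # 10,000 trials, 60 records each,
                                           # one record every 5 s: ~833 hours
available = 30 * 12                        # a 30-day study at 12 h/day: 360 hours

print(f"needed: {needed:.0f} h, available: {available} h")
# needed: 833 h, available: 360 h. Physically impossible as described, even
# before counting setup, acclimation, water changes, or breaks.
```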

  2. This is shocking (presuming it all turns out to be true of course). Thanks Jeremy.

    It does make me feel quite uncomfortable. It strikes me that these people were caught out largely because of ‘sloppy mistakes’ (e.g., copying and pasting data, documenting physically impossible amounts of fieldwork). It makes me wonder whether there are any/many more competent fraudsters out there going under the radar. Or more generally, how difficult would it be to fake this stuff convincingly, and how many people are tempted to attempt it? I suppose there’s the actual fabrication, but then also the nerve and ‘social skills’ in manipulation to pull the wool over your colleagues’ eyes…

    • “presuming it all turns out to be true of course”

      Well, a lot of very careful, competent data forensics folks (including but not limited to the original whistleblowers) have identified a whole pile of anomalies inconsistent with any explanation other than deliberate fabrication. As noted above, Dixson’s methods sections often describe physically impossible studies. A number of her own trainees over the years seem to have been suspicious of Dixson, and testified to behavior they observed that is consistent with fabrication and hard to explain in any other way. And now the university’s own investigation committee has concluded from this evidence that serial fabrication occurred. So I’m not sure why we need to “presume it all turns out to be true.” I mean, it’s already been shown to be true beyond any reasonable doubt, right? This isn’t a matter of withholding judgment while a formal investigation proceeds–the formal institutional investigation is over.

      As to whether Dixson, Pruitt, Newmaster, and other recent high-profile fraudsters in EEB were “sloppy”, I have a post on that: https://dynamicecology.wordpress.com/2021/08/25/why-are-so-many-scientific-frauds-so-easy-to-detect/. The short answer is that most of this fakery is “obvious” only in retrospect. It’s only “obvious” if you’re looking for it, and know what you’re looking for.

      That’s not to say some of these frauds couldn’t have been better disguised. I have no idea why Dixson or Pruitt copied and pasted data, when using random number generating functions in R would’ve been both easier/quicker *and* more difficult to detect. And I have no idea why Dixson would claim to have run a flume at a speed too fast for any larval fish to swim against. But it’s actually very hard to do serial scientific fraud in high profile journals without raising *any* suspicions, for *any* reason. For instance, if anyone collects data for you, you somehow have to fake/alter/replace the data they collect, without making them suspicious. That’s hard. But if you collect the (fake) data yourself, with no assistance, well, (i) in many fields that’s already weird/unusual and so potentially suspicious, (ii) are you going to be able to plausibly claim to have collected enough data all on your lonesome to support an important paper, and (iii) how are you going to avoid raising any suspicions among your labmates? (e.g., “Why do you keep coming up with weird excuses to only go to the field site alone?”) We have an old post discussing the “greatest” scientific frauds of all time, where “greatest” means (among other things) “hardest to detect”. https://dynamicecology.wordpress.com/2020/11/02/whats-the-greatest-scientific-fraud-of-all-time/ Commenters came up with some cases that seem to me to have been (even) better disguised than Dixson’s or Pruitt’s–and even in those cases there seem to have been suspicions long before the fraud was eventually exposed.
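
      To illustrate why copy-pasting is such a detectable way to fabricate, here’s a minimal sketch of the sort of duplicated-run scan data sleuths use (the window length is arbitrary, and real forensic checks are more elaborate):

```python
from collections import defaultdict

def find_duplicated_runs(values, window=20):
    """Map each length-`window` run of observations to the row positions
    where it occurs, keeping only runs that appear more than once."""
    seen = defaultdict(list)
    for i in range(len(values) - window + 1):
        seen[tuple(values[i:i + window])].append(i)
    return {run: positions for run, positions in seen.items() if len(positions) > 1}

# Copy-pasted data leave exact long repeats that this scan flags. Values drawn
# independently (e.g., from a random number generator) essentially never
# repeat a 20-observation run by chance.
toy_column = [1.2, 3.4, 0.8] * 30              # toy "copy-pasted" data column
print(len(find_duplicated_runs(toy_column)))   # 3 distinct duplicated runs
```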

      You’re absolutely right that a big part of getting away with high profile, serial scientific fraud for a long time is being bold enough to try it, and having the right combination of social skills to brazen through the inevitable questions and suspicions. As a fraudster, you’re mostly exploiting people’s trust. Serial fraud is super rare, so most people are never going to think it could possibly be happening right under their noses. Science’s, and society’s, main defense against most forms of bad behavior is “raise people to be decent, rule-following human beings, so that as few people as possible would even try to behave badly.” And I don’t think it could be otherwise, or that we should want it to be otherwise: https://dynamicecology.wordpress.com/2020/03/04/scientific-fraud-vs-financial-fraud-the-canadian-paradox/

      • When I said “presuming it all turns out to be true of course”, I was more trying to communicate my own relative ignorance on the topic than anything else (just having read an article and your blog post). If I’m honest, I didn’t really question anything in the Science article you linked to. Seems pretty cut and dry at this point, as you say.

        I agree that it seems like it would be hard to get away with it for the reasons you lay out. It would be cool to have some big clandestine research program, where a group of researchers try to publish series of convincing fraudulent papers to see if and how they turn out to be detectable… You could even vary the level of ‘convincingness’. Obviously the ethics of this would be ropey at best. Also, I don’t know who might even have the authority to give it the go-ahead, and it would take many years to assess whether it seems to be working. Plus I suppose that assessing how difficult or possible it is to get away with fraud may not actually tell you anything about whether it’s really happening (and might provide a useful resource for people who want to try it). In the course of writing this I think I’ve convinced myself it’s actually a terrible idea…

      • Heh. I keep waiting for some serial fraudster to explain themselves by saying “You finally caught me! I’ve been testing your defenses this whole time!” 😀

  3. Thanks for these insightful comments – getting pretty close to lapsing back into blogging? This long-time lurker from an adjacent field appreciates your thoughtful commentary – in any format – on topics like this.

  4. Begs a three-way contrast with the Newmaster case. UDel investigation, comparatively quick and released; McMaster, slow and not released; Guelph, quick and “move along folks, nothing to see here.”
    Closest analog I can think of for the Newmaster case is the Hap Shaughnessy character in The Red Green Show.

    • Yes, and a multiway comparison that ropes in cases from outside N. America, from other fields, etc. would get even messier! Which is why I hesitate to draw any larger lessons from a comparison of Pruitt vs. Dixson.

      Having said that, here’s one larger lesson I feel pretty confident about: mandatory data sharing makes it much easier to detect and investigate cases of possible fraud. Newmaster got away with it in large part because he was able to make fig-leaf excuses for why his raw data weren’t available. Or think of how psychologist Dan Ariely is likely to keep his job because very little of the raw data underpinning his papers is publicly available.

  5. I am quite familiar with Dixson’s work and methods. She has never claimed to have observed multiple animals simultaneously. She has claimed that she could be observing one animal at a time while the other animal is acclimating, which considerably reduces the amount of time it takes to do the experiments. They are not physically impossible. This is not a repudiation of Delaware’s ultimate findings, but a comment on this specific issue.

    I appreciate that we are trying to police ourselves here since everything we do is based on trust. However, accurate understanding of the context and evidence is essential.

    • With respect, I’m afraid you’re incorrect about her methods being physically possible. For example, quoting from the Science news story, which includes quotes from the UDelaware investigation report:

      “The committee calculated that to produce the paper’s data, which Dixson said she had collected herself, she would have had to carry out 12,920 fluming trials, generating some 860,000 data points and taking 1194 hours of observation time. The ecologist would have needed 11,628 liters of sea water to flow through the flume, which the draft report says she had to collect 2 kilometers from the shore. “It is highly unlikely that she had the time available to do all the experiments and trials as detailed in the paper,” the panel wrote.”
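
      For what it’s worth, the committee’s arithmetic is easy to verify from the quoted numbers alone. A quick sanity check:

```python
# Numbers from the quoted passage; the 5-second recording interval is not
# stated in the quote, but it is implied by the report's own figures
# (1194 h / 860,000 points is almost exactly 5 s per point).
trials = 12_920
data_points = 860_000
seconds_per_point = 5

print(data_points * seconds_per_point / 3600)  # ~1194 hours of observation
print(data_points / trials)                    # ~67 data points per trial
print(11_628 / trials)                         # 0.9 liters of sea water per trial
```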

      Another example of physical impossibility, not referred to in the passage I just quoted, is the speed at which she claimed to have run her flume. The claimed speed would’ve resulted in larval fish being washed out the back.

      With respect, I believe I do have “accurate understanding of the context and evidence”. I quoted a passage from the Science news article that supports the post’s statement that Dixson’s methods were physically impossible. I hope you will take the time to clarify your own views on this point, and provide evidence for them. Are you saying that the Delaware investigative report, as summarized and quoted by Science, is incorrect? If so, please explain the specific error in the committee’s calculations.

      More broadly, you say that your comments are not a repudiation of Delaware’s ultimate findings. Just to make sure we’re 100% clear and that everyone has their cards on the table: I accept Delaware’s finding that Dixson committed serial fabrication. Do you also accept that finding?

      • Quoting from the Science piece does not by itself make this statement correct, and the conclusion of physical impossibility regarding the data in the 2014 Science paper is flawed and lacks context. Dixson was in the field for months at a time, and the period required is less than 60 days. 12,000 L of water is about 3,000 gallons, which would be roughly 50 gallons per day, assuming 60 days. (These flumes are tiny and the flow rates very low.*) That does not sound impossible to me, nor would it to anyone who works in the field like this. One could do this in an hour with a small boat and a truck in that sort of place.

        I obviously do not know what evidence Delaware has that apparently allowed them to reach that conclusion (i.e., that it was impossible to have done those experiments), but nothing in the analysis quoted suggests this is so. Difficult does not mean it cannot be done, and as a field ecologist I have done experiments requiring hauling hundreds of cages and animals out into salt marshes. So I could easily imagine a hard-working person doing this over a period of months.
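
        To spell out that conversion (the 60-day season is my own assumption, not a figure from the report):

```python
# The conversion from the preceding paragraph, spelled out. The 60-day
# field season is an assumption, not a figure from the report.
liters = 11_628            # total sea water, per the committee's calculation
gallons = liters / 3.785   # ~3,072 gallons
per_day = gallons / 60     # ~51 gallons per day over an assumed 60-day season
print(f"{gallons:.0f} gallons total, {per_day:.0f} gallons/day")
```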

        Did Dixson fabricate the data? I have no idea.

        Based on what is publicly available, it is very clear the data of record (found in the public archives) are completely erroneous. They contain massive row and column duplications that cannot be correct. So at the very least, the data have been so mishandled that verification is impossible and retraction was wholly appropriate; that is, I agree with this “ultimate” finding.

        This sort of loss of data integrity also qualifies as misconduct, and not a trivial one. I often impress on my students that there can be no publication if there is no way to trace data in a publication back to its source. This violates a fundamental tenet of good science.

        I cannot comment on the charge of deliberate falsification (that is, the experiments were not done and the data made up), as opposed to doing the experiments and losing or otherwise mangling the data so that it cannot be recovered now. Perhaps this distinction seems trivial to some, but the level of intentionality about the misconduct seems important to me, at least.

        Again, Delaware has other evidence besides the vital stats on the experiments that you reference, but without knowing what this is, it seems unwise to come to a conclusion. I only know that the analysis of the experiment above does not prove anything, and is not even very suggestive.

        These are important issues that everyone should take seriously. I am not trying to throw stones; only to be accurate in stating what we know, and what we can or cannot easily infer.

        * I don’t recall the flow speeds, but I have seen videos of this setup and the fish definitely don’t get washed out. People who know fish larvae do not appear to have made this claim (none I know, anyway), and there might be some sort of confusion, since some of Dixson’s experiments were done with coral planulae that only swim at 10–100 µm s⁻¹.

      • You wrote in an earlier comment: “She has never claimed to have observed multiple animals simultaneously”.

        I’m afraid you are incorrect. Dixson has in fact claimed just that–though, tellingly, she only started claiming it after questions were raised about the timeframe in which her experiments were conducted. Quoting further from Science’s news piece and its summaries of the Delaware report:

        “The draft report also found misconduct in a 2016 paper on whether anemone fishes can sniff out the condition of potential host anemones, published in Proceedings of the Royal Society B by Dixson and marine ecologist Anna Scott of Southern Cross University in Australia. Again, the timeline was implausible, the committee concluded. Collecting the data would have taken 22 working days of 12 hours, it wrote, “working continuously without any breaks or doing any preparation work, recalibration, cleaning, bucket switchouts,” and so on. Yet the paper said the study was done in 13 days, between 12 November and 24 November 2014.

        Scott and Dixson posted a correction to the Proceedings B paper in early July, stating that the studies actually took place between 5 October and 7 November 2014, adding 20 days to the timeline. The correction also says two flumes were used simultaneously, effectively doubling the observation time. (Dixson reported using two flumes simultaneously in other studies as well, according to the investigative committee, which wrote it was “at a loss to understand” how she could keep an eye on two animals and record positions for both every 5 seconds.)”

        Can you explain how Dixson could have kept an eye on two animals simultaneously and recorded the positions of both every 5 seconds? Because, by her and Scott’s *own correction*, that’s exactly what she did.

        And it certainly seems like a striking coincidence that, when questions were raised about whether Dixson could’ve done what she claimed in the timeframe in which she claimed, her response has been to correct her own previous statements so as to expand the timeframe.
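
        To make the timeline problem concrete, here’s the simple arithmetic implied by the quoted passage (the per-day figures are my own division, not the committee’s):

```python
hours_needed = 22 * 12         # committee: 22 working days of 12 h = 264 h
original_window = 13           # days, per the paper (12-24 November 2014)
corrected_window = 13 + 20     # days, after the correction "add[ed] 20 days"

print(hours_needed / original_window)   # ~20.3 h/day: implausible
print(hours_needed / corrected_window)  # 8.0 h/day: plausible only with the
                                        # expanded window (plus a second flume)
```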

        In my view, the duplications and other anomalies in Dixson’s data and methods could not have arisen through sloppiness or incompetence. I don’t see how those data duplications and other anomalies could have arisen for any reason besides intentional fabrication. I think the intentionality is something we can easily infer. Put it this way: if it’s plausible that Dixson was merely extremely sloppy, it’s equally plausible that Jonathan Pruitt was extremely sloppy. Do you think Jonathan Pruitt was merely extremely sloppy, or at least that this is a reasonable possibility?

        You wrote: “Again, Delaware has other evidence besides the vital stats on the experiments that you reference, but without knowing what this is, it seems unwise to come to a conclusion.”

        I don’t see why. We don’t know every word of the investigative report. But we have Science’s detailed summary of it. And we have the report’s conclusion, which the university has accepted: that Dixson committed serial fabrication. Why should we withhold judgment regarding Dixson’s intentionality until such time (which likely will never come) when we can each read the full report ourselves? By that standard, we should also withhold judgment as to the intentionality of almost every intentional scientific fraudster in history. Again, I think the Pruitt case is a useful example here. Do you think we should withhold judgment as to whether Pruitt committed intentional fraud?

        In the (extremely implausible) event that all these duplications and other anomalies in Dixson’s work were somehow totally unintentional, I agree with you that the level of sloppiness/incompetence involved is so high as to itself constitute serious professional misconduct. I think the appropriate penalty for such sustained, serious sloppiness/incompetence is termination.

        I’m happy for this conversation to proceed a bit further, but I suspect we may be coming close to reaching a point at which further conversation would go in circles. If you have anything new to say in response to the additional points and quotations I’ve just made, I’d welcome it. But if not, it might be best if we agree to disagree rather than simply repeating ourselves.

      • 1) Can you explain how Dixson could have kept an eye on two animals simultaneously and recorded the positions of both every 5 seconds? Because, by her and Scott’s *own correction*, that’s exactly what she did.

        From accounts at meetings and from people who have observed this work, this is not what she did. She had two flumes. She observes one flume while the animals are acclimating in the second. Once the run in the first is over, the animals come out, a new one is placed in and allowed to acclimate, while she switches to the one in the second flume. This of course considerably shortens the time required; perhaps not by half, but by a large fraction. This certainly is clear to me and others, but why the committee is so taken aback by that, only they know. Either Delaware does not accept this, does not know this, or the multiple people who have said this are incorrect. The first two possibilities are somewhat worrisome and make it important to understand more clearly the evidence Delaware used to come to their conclusion.

        And yes, it is a bad sign when someone cannot accurately report the time period over which experiments were done, but nothing here by itself indicates outright fabrication to me. You may feel able to make this inference, but I do not.

        2) Put it this way: if it’s plausible that Dixson was merely extremely sloppy, it’s equally plausible that Jonathan Pruitt was extremely sloppy. Do you think Jonathan Pruitt was merely extremely sloppy, or at least that this is a reasonable possibility?

        Irrelevant. They are different cases.

        3) I don’t see why. We don’t know every word of the investigative report. But we have Science’s detailed summary of it.

        Really? You trust the press enough to summarize the detailed investigation? You trust other people to summarize work you have not read? Do you make citations based on what other people have said about a piece of work without reading it?

        All conclusions are subject to revision; if I ever get a chance to read the Delaware report it may well change my mind.

        4) By that standard, we should also withhold judgment as to the intentionality of almost every intentional scientific fraudster in history.

        This is a classic straw man argument and it’s surprising to see it here. Science is about transparency, and no conclusion can be transparent unless the evidence supporting it is made available. This is not the situation: according to the Science summary, Delaware has not released anything to anyone; only select conclusions have been made available to reporters. Without access to the evidence gathered, we should use the same decision-making framework we apply to other situations. I have attempted to do that here; you may disagree with my conclusions based on what I have said, but this always is a problem when things are uncertain.

        Of course, Delaware could resolve a lot of this by releasing the full report in a redacted form.

        I think we’ve exhausted this topic.

        Be well.

      • “You trust other people to summarize work you have not read?”

        Oh please. You’re asking (for instance) “Do I trust meta-analyses to accurately summarize the data they claim to have compiled, without having repeated the underlying data compilation and analyses myself?” Yes. Or “Do I ordinarily trust that people have done what they claim to have done in their methods sections, without going to great lengths to verify that the actual methods were as described, and the data not misrecorded or fabricated?” Yes.

        “And do I trust professional news reporters at Science not to lie about the contents of reports they’ve read, at least in the absence of any concrete evidence that they lied?” Yes. Come on.

        Re: the Pruitt case, I don’t accept that it’s irrelevant at all. It’s relevant context to allow others to assess your own professional judgment. If someone is presented with two cases in which the facts are substantively similar, and comes to different conclusions in those two cases, others are entitled to wonder if (for instance) the person has some bias that prevents them from judging one of the two cases objectively. When faced with a similar fact pattern in the Pruitt case and in the Dixson case, I came to the same conclusion in both cases. I’d like to think that reflects well on my professional judgment.

        I’m now confused about why you accept that there are extensive anomalies in Dixson’s data, indicating (at a minimum) serious serial failures in basic data management, and serial failures in fully and accurately reporting her own methods (e.g., the timeframes in which studies were conducted). I don’t understand why, on your view, we should believe *anything* in Science’s summary of Delaware’s report. I take it you’ve gone through *all* of Dixson’s raw data yourself, and verified to your own satisfaction that there are in fact serious anomalies? And if you have done so, frankly I’m unclear why you think anyone else should trust you on that. Surely, on your view, everyone ought to reserve judgment on whether those data duplications and other anomalies even exist, unless they’ve verified them for themselves. You’ve stated previously that you accept that trust necessarily is part of science. I agree. But I am struggling to grok the circumstances in which you are happy to rely on trust, and the circumstances in which you are not.

        I agree that continuing this conversation would be unproductive.

  6. Kind of on point: “No evidence that mandatory open data policies increase error correction” in Ecology and Evolution (E&E) journals, https://www.nature.com/articles/s41559-022-01879-9. There have been some humdinger retractions or stonewalled non-retractions in E&E in recent years (Lönnstedt, Newmaster, Dixson, Pruitt, more?). These were mostly discovered via data anomalies. However, these cases are lost in the averaging; hence, no detectable effect of open data. I’m sure there’s a better name than signal-to-noise for losing a story in a noisy or overbroad averaging scheme.

    • Agreed. Corrections of honest errors are rare in E&E–only a tiny fraction of papers have corrections. And insofar as corrections are becoming less rare, that’s surely for various reasons besides just mandatory data sharing policies. And as the authors of the linked paper note, compliance with mandatory data sharing policies often leaves a lot to be desired. So I’m not at all surprised that there’s no detectable signal of mandatory data sharing policies leading to an increase in corrections.

      Re: Pruitt/Dixson/Newmaster/Lönnstedt, the paper you linked to excluded retractions and expressions of concern for data fabrication; it only looked at corrections of honest errors. So it’s not that those cases were lost in the averaging–they weren’t in the dataset at all.

      Personally, I think the main reason to have mandatory data sharing policies is to aid detection of the Pruitts and Dixsons of the world. The policies aren’t too onerous for authors (at least in my experience; perhaps it’s different for others?), and they make it much harder to get away with (certain forms of) fabrication.

      • I agree that it might help with detection to some degree, at least after the fact, but how often do people really look at the raw data? In all the reviews I’ve done and all the papers I’ve read with publicly available data, I’ve never checked the data for fraud (though I know a couple people do this frequently). Seems like an effective deterrent though – if coming up through the ranks young scientists know that they’ll have to publicly share all of their data, it will likely lead to (A) more careful data management, and (B) less inclination to even consider fabrication as an option. It certainly has made me more thoughtfully organize data from the start, so that I don’t have to spend hours converting data sheets into a version that someone else could understand. No way to know the answer, but I wonder if fraud is more or less common now than 40+ years ago (extending back forever), when there was basically no data-sharing.

        To me, the most unsettling part of the Dixson story is just the realization of how much we have to rely on trust in the scientific community. For example, the report said that it was “standard practice” to record and keep videos of all behavioral experiments. It’s a little ridiculous, given technological advancement, to say that this should have been the case 15 years ago (a cabinet full of thousands of dust-covered VHS tapes or DVDs somewhere?) for some of the early Dixson work, but that’s not the point here. I mean, sure, there should be video where possible, but do we only believe observations if there is video? Is there video of someone holding a pH probe and that day’s newspaper to ensure they didn’t lie about what pH the water was? A video of someone weighing shells to make sure the scale readout matches the data? A video of a scientist counting invasive moths in a field, to make sure they aren’t making up their count data? In basically every scientific study, we just trust that the authors are telling the truth, and while recording thousands of hours of video that nobody will ever watch might be helpful in theory, it seems like an odd standard to have only for behavioral research. It is interesting that a few of the recent fraud cases have been in behavioral research though – a pattern, or do these get more scrutiny, or just easier to catch somehow?

      • @Josh Lord:

        Yes, I mean that mandatory data sharing mostly helps with detection of fabrication after the fact. Particularly, detection of fabrication once there’s some reason to look for it (because as you note, it’s very rare for readers or reviewers to check raw data for fakery, unless they have some specific reason to suspect fakery). In the days before data sharing (and even today, in journals or fields where data sharing isn’t mandatory), a common and often-effective way to stonewall a credible accusation of fabrication was to claim that the data weren’t available. Because the laptop was stolen, or the lab notebooks were lost in a move, or etc. With mandatory data sharing as a condition of peer review, or as a condition of publication, that common and effective stonewalling tactic is lost.

        As to whether widespread data sharing requirements deter fraud, I don’t know for sure. I’m not aware of much data that speak to that, though maybe those data exist somewhere. I guess mandatory data sharing might deter some fraud, but I doubt it deters a lot. Based on what I know about cases of detected fraud (and about academic misconduct by undergraduate students), I don’t think a lot of scientific fraud (or academic misconduct) is deterrable at the current margins. Scientific fraudsters mostly aren’t making rational risk:reward calculations, I don’t think. Put it this way: the kinds of people who recognize that mandatory data sharing makes it easier to catch certain kinds of fraud mostly aren’t the sort of people who would’ve committed fraud even before data sharing existed. On the other hand, IIRC there is some evidence that one specific form of fraud–image manipulation–has become less frequent over the last decade, perhaps because automated tools to detect it have become more widespread and had some deterrent effect. And in my own undergraduate teaching, I have seen a bit of a drop in academic misconduct over the years (pandemic bump aside). Perhaps because word has gotten around that I’m quite good at detecting the most common forms of academic misconduct, compared to other profs at my institution. So I dunno–the fraction of fraud that’s deterrable isn’t 0, but it isn’t 1 either. My gut instinct is that the fraction is closer to 0 than 1, but that’s just gut instinct and I could be wrong.

        The other issue here is that, if you deter one specific form of fraud (such as Dixson- or Pruitt-style copy-pasting of data), do you merely push determined fraudsters to engage in some other form of fraud instead?

        As to whether fraud is less common now than it used to be: I don’t know, it’s hard to say. The frequency of detected frauds certainly has a long-term increasing trend. No doubt because there’s more effort now to detect fraud, and more effective tools for detecting some forms of it. Hard to say if the increasing frequency of *detected* frauds is also because of an increase in fraud, or despite a decrease in fraud, or despite no change in fraud.

        You’re absolutely right that science runs on trust. I have an old post on that: https://dynamicecology.wordpress.com/2020/03/04/scientific-fraud-vs-financial-fraud-the-canadian-paradox/. tl;dr: the optimal level of scientific fraud is not zero. Because in order to get to zero fraud, we’d have to stop trusting one another, at which point science would grind to a halt.

  7. A further thought on the defense of Dixson as smart, dedicated, hardworking, etc. The implicit argument (as best I can tell; it’s rarely stated explicitly) is that no smart, dedicated, hardworking scientist would ever engage in such obvious fabrication. Because that would make no sense. There’s no rational cost-benefit calculation that could possibly lead someone who’s clearly capable of doing good honest science to do obviously-fake science. You’ll surely get caught eventually, and then your career will be over. It’s irrational, so a rational person like Dixson can’t possibly have done it.

    This (implicit) argument is wrong, for two reasons. The obvious one, discussed upthread, is that it doesn’t speak to the evidence for fabrication. A bunch of Dixson’s data were in fact copy-pasted, and the methods sections of Dixson’s papers do often describe physical impossibilities. Saying, correctly, “there’s no rational reason why anyone would copy-paste data or describe physically impossible methods” just shows that Dixson was irrational, not that she didn’t fabricate data or describe physically impossible methods. Which leads to the second reason this argument is incorrect: scientific fraudsters mostly *are* irrational. They mostly do *not* think about the costs and benefits, or risks and rewards, of fabricating data, in order to decide whether to fabricate data.

    Just because you personally would never fake data, and can’t understand why anyone else would, doesn’t mean no one ever would. Almost by definition, the people who do things you would never do are very different than you!

    • My remarks assume that describing Dixson as smart, dedicated, hardworking, etc. is in fact intended as a defense against the charge of fabrication. It might not be, of course. The description might have some other intended purpose (e.g., to show your support for a friend and colleague; to express how surprised you are that Dixson fabricated data, etc.). Here, I’m only interested in the description as a defense against the charge of fabrication, not in any other purposes it might have.
