Recently, I started a little series of posts about scientific fraud, inspired by a book about financial fraud, Dan Davies’ Lying For Money. In the first post in the series, we talked about how the optimal level of financial fraud, or scientific fraud, isn’t zero. Because the only way to have literally zero financial or scientific fraud is for no one to ever trust anyone. Which leaves everyone much worse off than if they all default to trusting each other, and tolerate the resulting non-zero level of fraud as a price worth paying. To paraphrase Steve Randy Waldman, you can have a trust-based economy that admits some level of financial fraud, or you can herd goats. Analogously, you can have a trust-based scientific research system that admits some level of scientific fraud, or you can do alchemy.
Today, let’s think about the causes of fraud. Davies suggests a simple framework for thinking about this: the “fraud triangle”.
The fraud triangle is analogous to a famous dictum from murder investigations: the murderer must have the means, motive, and opportunity to commit murder. Analogously, Davies, relying on Donald Cressey's earlier work Other People's Money, says that financial fraud happens when the following three conditions are simultaneously met:
- Need. Fraud happens when someone feels they need more money than they can come up with via honest means. There are many different sorts of “need”: greed, institutional pressure, fear of admitting that your business is a failure, etc.
- Opportunity. Weaknesses in the systems of fraud prevention are opportunities to commit fraud. A subdivision can be made between “incidental” financial fraudsters who commit frauds against targets of opportunity they stumble across accidentally, and “entrepreneurial” financial fraudsters who seek out specific weaknesses to exploit.
- Rationalization. Financial fraud is committed by people in positions of trust. Most people are averse to breaking trust unless they can somehow rationalize it to themselves. They have to mentally redescribe the crime to themselves in a way that seems to justify it. “It’s only a temporary measure until I’m back on my feet; after it’s over I’ll make good”. “Everyone else is doing it; I’d have to be a sucker not to do it too.” “The system is rigged against people like me; I’m just leveling the playing field for myself.” Etc.
I think this framework applies to scientific as well as financial fraud. Here are a couple of things I like about it:
- It puts systemic and individual factors on an even footing. When a scientific fraud happens, some people react by highlighting the systemic background conditions that they claim led to the fraud, such as pressure to publish and get grants. They emphasize that lots of people have an incentive–a “need”–to commit scientific fraud, and argue that we should reduce fraud by reducing the need to commit it (e.g., by somehow reducing pressure to publish). To which others (including me on occasion!) respond by noting that the vast majority of scientists don’t give in to that “need”. They may feel pressure to publish papers and get grants, yet they behave honestly anyway. See for instance Tal Yarkoni’s argument that, when it comes to scientific fraud (and other, lesser sorts of corner-cutting), it’s not the incentives, it’s you. This response suggests that, to prevent scientific fraud, we need to understand why rare scientists are able to rationalize fraud to themselves (which we don’t really understand, unfortunately). And still others argue that scientific frauds highlight the need for, and value of, control measures that help prevent fraud, such as mandatory data sharing (e.g., on Data Dryad) and data editors. The “fraud triangle” framework highlights that everybody is right, at least in principle. In principle, you can reduce fraud to the desired level (which should not be zero!) by reducing need, opportunity, rationalization, or any combination of those. Which factors to focus on is an empirical question of marginal costs and marginal benefits. For instance, international comparative data suggest that there are some countries that could reduce scientific fraud by removing certain strong, direct incentives to commit it, such as direct cash payments for publications. Of course, if everyone is right in that sense, that also means everyone is wrong in a different sense.
If you think that the only way to prevent scientific fraud is to reduce the (perceived) need to commit fraud, you’re wrong. If you think the only way to reduce scientific fraud is to reduce the opportunity to commit it, you’re wrong. Etc.
- It makes me feel good about how I talk to my students about scientific fraud. One thing I tell my grad students, and also the students in my undergrad courses, is that at some point you’re going to feel a strong “need” to commit misconduct (scientific fraud in the case of grad students, academic misconduct in the case of students in my undergrad courses). That is, you’re going to be tempted to commit fraud or misconduct at some point, whether because your thesis project seems to be failing, or you forgot about an assignment that’s due in an hour, or whatever. And in that moment, your panicked brain is going to work overtime coming up with a rationalization to justify the misconduct. That moment is when you need to remember three things. First, your brain is looking for a rationalization. Don’t let it find one. Instead, decide right now that, if you’re ever tempted to commit misconduct in future, you’re going to choose not to. Because right now you’re calm and thinking clearly and in a much better position to make good choices. Second: you may feel the need to commit misconduct in a moment of temptation, and you might even find a rationalization for it, but you don’t have an opportunity. You’re going to get caught. Third, the need to commit misconduct isn’t nearly as big as you think it is in that moment of temptation. One common part of the rationalization process is convincing yourself that you have some huge, urgent need to commit misconduct. But in fact, you don’t. The stakes aren’t nearly as high as you think they are. There’s some alternative thesis project you could switch to, so that you don’t have to fake the one you were originally planning. That assignment that you forgot was due in an hour is only worth a tiny fraction of your mark in one course; it’s no big deal in the grand scheme of things. Etc. (UPDATE: this paragraph edited slightly from its original version, to clarify it in response to a comment)
In what context do you talk to undergraduate students about scientific fraud? In lecture? Just with project students?
I talk to them about academic misconduct.
It seems to me your take on this is that generally good people find themselves in a tight spot and succumb to the green fraud monster. However, I suspect that there are also people who get a thrill out of the hustle and “fooling” others. The difference is that the former feel they have lost control, and the latter that they are in control. There may be a third group that made an honest mistake and found out about it later, but did not tell anyone. I suspect that effective approaches to reduce fraud would be different for each of these groups.
When I’m training my grad students and teaching my undergrads, yes, I do indeed assume that they’re most likely to commit fraud or misconduct because they find themselves in a tight spot, panic, and make a bad choice they’ll come to regret. My experience with the undergrads I teach confirms that assumption.
You’re right that a disproportionate fraction of all academic misconduct (or all scientific fraud, or all financial fraud) is committed by a small minority of serial offenders, whom Davies calls “entrepreneurial” fraudsters. People who deliberately and repeatedly seek out opportunities to commit fraud. Whether they do it because they find the “hustle” thrilling, or for some other reason, I don’t know. Davies dismisses the psychological motives of entrepreneurial fraudsters as uninteresting and unimportant, but that’s one place where I disagree with him. I think it would be very useful to know if there are any common threads in the motivations of serial fraudsters. In a recent linkfest (sorry, can’t find it just now), we linked to a Medium piece by a knowledgeable person who proposed a little taxonomy of the motivations of serial scientific fraudsters. It suggested that there were three main ‘types’ of serial scientific fraudsters, IIRC…
I find Davies’ articulation of the “fraud triangle” interesting. It mimics what I perceive to be a misunderstanding of bad actors in some other scenarios – specifically in the aspect of “rationalization”.
A common theme I have observed in various institutional failures to deal with bad actors is that many of those failures appear to be induced by a belief that the bad actor is rationalizing their behavior as acceptable – in the fashion that Davies mentions. This leads institutions to attempt to modify the behavior by addressing that perceived rationalization.
Unfortunately, some of these people seem to be bad actors simply because they “enjoy watching the world burn”. They will enthusiastically engage in behavior that is actively detrimental to their own situation, apparently because it causes more damage to others.
To me, this argues for the psychology being quite important. I’m going to have to think about how this fits into scientific fraud. I’m having a bit of trouble trying to figure out what a scientific-fraud version of a Heath Ledger Joker looks like, but if the niche exists, someone must inhabit it.
Hmm. From what I’ve read or inferred about serial scientific fraudsters, I have yet to encounter a figure at all like Heath Ledger’s Joker. Which isn’t to say such a figure hasn’t ever existed. But even among serial scientific fraudsters, I suspect that people who are committing serial scientific fraud for kicks, so that they can “watch science burn”, are pretty rare.
Which isn’t to say that you’d be able to stop serial scientific fraudsters by addressing their own rationalizations of their own behavior, of course. Frankly, I doubt you could in most cases.
A tidbit about the motives of many (though far from all) financial fraudsters: they’re relieved when they finally get caught. To preview the next post in this series: many financial frauds have a natural tendency to snowball. Committing financial fraud increases the need to commit financial fraud, because the only way to cover up the fraud you’ve already committed is with yet more fraud. Ponzi schemes are the classic example but far from the only one. Many financial fraudsters get overwhelmed by this snowballing tendency, searching with increasing desperation for a way to end their growing fraud, and are relieved when they’re caught.
I don’t think scientific frauds have the same inherent tendency to snowball (though I’m still thinking about that). Consistent with that lack of inherent snowballing tendency, I’ve yet to hear of a case of a scientific fraudster being relieved to be caught, because keeping the fraud going was just growing too overwhelming.
This is interesting as it also applies to people breaking the ‘lockdown’ rules here in New Zealand (and elsewhere probably).
This is a refreshing, honest and compassionate way to address academic integrity with students – thanks for the ideas!
If we consider Kuhn’s theory of scientific revolutions, there can come a point at which propping up a standard but increasingly dubious view becomes fraudulent. The ‘need’ to do so can be as broad as simple conservatism or as narrow as maintaining personal influence and authority. The persistence of ‘scientific creationism’ (however marginal) exemplifies a more ubiquitous concern of late 18th to early 20th century naturalists and their successors. But is it fraud if the perpetrator believes in it? Can fraud be a matter of cutting corners in service of something suspected but not understood? If so, what of ‘On the Origin of Species’, where Darwin not only knew, but admitted, that he didn’t quite have all the evidence he needed? Had he been shown, in retrospect, to be largely wrong, would we consider him a fraud, or just mistaken? Consider how Lamarck is routinely caricatured using giraffe cartoons. His ideas, mostly forgotten now, were much more fantastic (and wrong) than simple neck-stretching. Look up phlogiston theory. Look up N-Rays. Remember cold fusion? The lines between being fraudulent, being deluded, and being assertively wrong aren’t always very clear. And, with reference to the purposes of this list, ecology is still very much in the throes of picking through multiple incompatible overgeneralizations, some of which persist merely because it is professionally inconvenient or hazardous to admit (even to ourselves) that, like Darwin, et al, we suspect more than we know.
“And, with reference to the purposes of this list, ecology is still very much in the throes of picking through multiple incompatible overgeneralizations, some of which persist merely because it is professionally inconvenient or hazardous to admit (even to ourselves) that, like Darwin, et al, we suspect more than we know.”
I agree that some of the same circumstances that encourage various sorts of questionable research practices might also encourage outright fakery. And I agree that, sometimes, the conscious or unconscious motivation for both QRPs and fakery is strong belief in some scientific idea that isn’t fully supported by the current evidence. But I’m still a bit lost as to your broader point here. I mean, there are some cases of absolutely clear-cut fraud in ecology and evolution (https://dynamicecology.wordpress.com/2020/02/24/the-history-of-retractions-in-ecology-and-evolution/). It’s the causes of those cases of clear-cut fraud that are the focus of this post. I don’t think it’s true that most cases of fraud in ecology and evolution are really just cases of researchers being deluded or incorrect. And conversely, I can’t think of many cases in which someone who was merely wrong or deluded was falsely accused by others of being a fraudster.
I also struggle to connect some of your examples to your broader point. Yes, the usual shorthand way of summarizing Lamarck’s ideas is indeed a caricature–but what does that have to do with scientific fraud?
Re: cold fusion and other fringey scientific ideas, we have various old posts on that. Though again, I struggle to see the connection you’re drawing between fringe science and the topic of today’s post.
And see this recent post by Andrew Gelman: https://statmodeling.stat.columbia.edu/2020/03/06/junk-science-then-and-now/