Recently, I started a little series of posts of thoughts about scientific fraud, inspired by a book about financial fraud, Dan Davies’ Lying For Money.
In the first post in the series, we talked about how the optimal level of financial fraud, or scientific fraud, isn’t zero. Because the only way to have literally zero financial or scientific fraud is for no one to ever trust anyone else. Which leaves everyone much worse off than if they all default to trusting each other, and tolerate the resulting non-zero level of fraud as a price worth paying.
In the second post in the series, we talked about the “fraud triangle”: the three preconditions for a financial or scientific fraud.
Today, we’ll talk about Davies’ argument that the “fraud triangle” isn’t a complete basis for preventing financial fraud. You also need to consider that financial frauds tend to snowball. Which in turn motivates Davies’ proposed “Golden Rule” of financial fraud detection: be suspicious of anything that grows unusually fast. An analogue of Davies’ “Golden Rule” has been proposed in the context of scientific fraud detection, but I’m not so sure that’s a good idea. Because I’m not sure that scientific frauds are subject to the same “snowball effect” as financial frauds.
Many financial frauds have an inherent tendency to snowball, i.e. to grow exponentially over time. The reason is compound interest. Quoting Davies:
It’s intrinsic to capitalism – money goes into business, and comes out as more money. Then the increased sum is invested in business assets, and grows even more…But one key difference between fraudsters and legitimate businesses is that compound interest…is the enemy of fraud.
The reason for this is that unlike a genuine business, a fraud does not generate enough real returns to support itself, particularly as money is extracted by the criminal. Because of this, at every date when repayment is expected, the fraudster has to make the choice whether to shut the fraud down and try to make an escape, or to increase its size; more and more money has to be defrauded in order to keep the scheme going as time progresses.
Ponzi schemes are the classic example of the snowball effect, but as Davies shows, most financial frauds tend to snowball. Which leads Davies to suggest what he calls the “Golden Rule” of financial fraud detection:
Anything which is growing unusually quickly needs to be checked out, and it needs to be checked out in a way that it hasn’t been checked before.
(Aside: That last clause is there because determined financial fraudsters deliberately seek out and exploit weaknesses in fraud control and prevention systems. So if something is growing unusually quickly, but has passed the standard checks for legitimacy, you should check it out in some non-standard way.)
I’ve seen Davies’ Golden Rule suggested in the context of scientific fraud as well. Usually, the suggestion is that reviewers, editors, and other scientists should be suspicious of any paper that’s “too good to be true”, and of any scientist who publishes “too often”. And one can certainly point to examples that seem to support this Golden Rule in the context of scientific misconduct. Think for instance of serial fraudster Jan Hendrik Schön, who was averaging one new paper every eight days in 2001, before he was caught.
But I’m skeptical that Davies’ Golden Rule is a good idea in the context of scientific fraud detection. Because I don’t think scientific frauds tend to snowball.
When scientific frauds grow, they tend to grow linearly, not exponentially. The cumulative size of the fraud grows by some constant additional amount per unit time, not some constant multiplicative percentage. Each fraudulent paper you publish doesn’t create an inherent need for you to publish (say) two more fraudulent papers to cover for the previous fraudulent paper. Faking your way into a research grant doesn’t mean that you have to then fake your way into (say) 20% more grant money every year in order to pay back the previous year’s grants. Because after all, you don’t have to pay back the previous year’s grants. There’s no scientific equivalent of a Ponzi schemer having to pay returns to investors. So yes, it’s true that some serial scientific fraudsters publish a lot, or get lots of grants, or publish results that seem too good to be true. But not even the worst serial scientific fraudsters publish at an exponentially-increasing rate, or obtain exponentially-increasing amounts of funding, or publish exponentially more amazing results over time.
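The contrast can be sketched with a toy model (illustrative numbers of my own choosing, not data from any real case): a Ponzi-style fraud that must cover a promised return out of new money compounds, while a fabricating scientist who only needs a steady paper count accumulates linearly.

```python
# Toy model (made-up parameters): cumulative size of a Ponzi-style financial
# fraud, which must compound to pay earlier investors, vs. a paper-fabrication
# fraud, which only has to keep up a constant pace.

def ponzi_liability(initial=100.0, promised_return=0.20, years=10):
    """Money owed to investors if every promised payout is rolled forward."""
    owed = initial
    liabilities = []
    for _ in range(years):
        owed *= 1 + promised_return  # compound interest: the snowball
        liabilities.append(owed)
    return liabilities

def fabricated_papers(per_year=5, years=10):
    """Cumulative fake papers at a constant (linear) rate of fabrication."""
    return [per_year * (y + 1) for y in range(years)]

fin = ponzi_liability()
sci = fabricated_papers()

# Year-over-year growth: a constant multiplicative factor for the Ponzi
# scheme, vs. a ratio that shrinks toward 1 for the constant-rate fraud.
fin_growth = [b / a for a, b in zip(fin, fin[1:])]  # all ~1.20
sci_growth = [b / a for a, b in zip(sci, sci[1:])]  # 2.0, 1.5, 1.33, ... -> 1
```

The point of the sketch is just that sustained multiplicative growth is a distinctive signature; a constant additive rate, however high, never produces it.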
Which I think makes it difficult in practice to distinguish between serial scientific fraudsters and honest productive scientists, just on the basis of how productive they are. Sustained exponential growth is a pretty reliable sign that something has escaped existing control systems and needs to be checked out–but sustained linear growth is not. And so I think that trying to apply Davies’ Golden Rule to scientific fraud would just encourage unwarranted suspicion of every productive scientist.
For instance, a quick glance at Google Scholar will reveal to you many successful ecologists who have years in which they’ve published a dozen papers or more. That’s a lot! But is that suspicious? Heck, there are prominent senior ecologists whose Google Scholar pages have 500+ entries, or even 1000+, meaning that they’ve averaged well over 10 papers (and preprints, book chapters, etc.) per year over their entire careers. Is that suspicious? Personally, I don’t think any of those cases are suspicious. But if you’re serious about using “being too productive” as a sign of possible scientific fraud, then I don’t see how you can avoid blanket suspicion of every leading ecologist. And I don’t see how blanket suspicion of every leading ecologist (or more broadly, of every leading scientist) would appreciably reduce the already-low incidence of serial scientific fraud, or lead to appreciably earlier detection of serial scientific fraud. At least not without creating costs that wouldn’t be worth paying. (note: paragraph edited from its original version, to avoid using specific named individuals as examples.)
p.s. One interesting consequence of the snowball property of financial frauds is that many (though far from all) financial fraudsters are relieved to be caught. They feel overwhelmed by the ever-increasing amount of fraud they need to do to cover up their past fraud. Not overwhelmed enough to turn themselves in, usually, but enough to feel relief when they’re finally found out. In contrast, I’ve yet to hear of any example of a serial scientific fraudster who was relieved to be caught. Which is consistent with the fact that scientific frauds don’t tend to snowball. Keeping scientific frauds going involves some roughly constant amount of work per unit time on the part of the fraudster.
Publishing is also incredibly idiosyncratic: sometimes a single paper can take the same amount of “work” as dozens of others, or papers can be stuck in review for long periods, or people take long breaks from research or… I’d be wary of over-relying on heuristics like this Golden Rule, simply because every academic’s career will have strange twists and turns, and as you’ve mentioned previously, that only rarely indicates fraud.
Hi Jeremy, great post!
Do you think that if some aspect of the way science is conducted and funded changes, then fraud might grow exponentially?
One situation I can think of: suppose a major determinant of grant funding were a track record of productive research and fulfilling the aims of previous grants, with an evaluation of those aims at some point (and some kind of punitive measure if you haven’t met certain criteria – for example, withdrawal of 50% of the money, or blacklisting the author from future grants across various agencies). Then anyone who has committed fraud and wants to stay in research will have to commit even more fraud. Now that I think about it, I’m not sure whether fraud would grow exponentially – but the main idea is that one fraud wouldn’t be enough to sustain a career – you’d have to keep doing it consistently, and it becomes a positive-feedback kind of thing (and hence snowballing).
I have almost no idea about the grant application and funding processes, so if what I said seems ridiculous that might explain it.
“Do you think that if some aspect of the way science is conducted and funded changes, then fraud might grow exponentially?”
Oh lord I hope not.
As for your specific scenario: no, I don’t think that creates an exponential growth scenario. Because something like your scenario already exists in Canada. Here in Canada, 1/3 of the score on your NSERC Discovery Grant (the standard single-investigator grant for basic research in non-biomedical fields) is based on your track record of productivity over the previous 6 years. So yes, to maintain your funding you do have to show productivity over the previous 6 years. But it doesn’t have to be ever-increasing, exponentially-growing productivity, whether within a 6 year period or over some longer period across multiple grant cycles. So someone who wanted to build a scientific career based on fraud in Canada wouldn’t have to keep doing ever-increasing amounts of fraud per unit time in order to maintain funding. Some constant amount of fraud per unit time would work just fine.
Oh I see. Yes, that makes sense, a linear rate of increase of total fraud with time should be enough.
Pingback: Scientific fraud vs. financial fraud: is there a scientific equivalent of a “market crime”? | Dynamic Ecology
Pingback: Scientific fraud vs. art forgery (or, why are so many scientific frauds so easy to detect?) | Dynamic Ecology