A few months ago I read economist Dan Davies’ Lying For Money, a popular history of financial fraud. I read it just because it sounded fun and interesting (and it was!), and because I’d read and liked some of Davies’ blog posts on other subjects (for instance). I didn’t think it would have any relevance to my professional life.
Then this year happened. Recent events in which I have some involvement have prompted a broad discussion among ecologists about scientific fraud. This post contributes to that broad discussion; it’s not about the specific recent events that kicked it off. Are there any commonalities between financial fraud and scientific fraud? What can we scientists learn about fraud, and how to prevent it, from the world of business and finance?
As a scientist, one reason to think about fraud in a non-scientific context is to get a bit of distance, and so hopefully get some objectivity. Like most scientists, whenever someone tells me about a case of scientific fraud, I unconsciously start interpreting that case in light of whatever I already thought or believed. And not just what I already thought or believed about scientific fraud specifically, but everything I thought or believed about scientists and science more broadly. “This just goes to show [thing I already believed]!” is one of the most tempting sentences for anyone to say, about anything, which is why it gets said so often. But I don’t have any strong preexisting beliefs about financial fraud, and I’m guessing you don’t either. So thinking about financial fraud is a way for me, and you, to think about “fraud in general” without any preconceptions. And then once we have some general insights in hand, we can apply them to scientific fraud, thereby hopefully coming to a better understanding of scientific fraud.*
Hence this, the first of what I hope will be a little series of posts on insights about scientific misconduct that I took away from Lying For Money. Today: the “Canadian paradox”.
Canada is a high-trust society. We Canadians mostly assume (usually without even thinking consciously about it) that the businesses we deal with are honest, that laws are fair and will be enforced, that contracts will be honored and debts repaid, that strangers we meet are trustworthy, etc. Yet Canada is infamous for financial fraud. (Illustration: the Vancouver Stock Exchange was widely known as the “scam capital of the world.”) Contrast Canada with low-trust societies, in which large-scale financial fraud is rare and people do business deals on a handshake, but only with people they know well. These observations raise two questions, which together comprise the “Canadian paradox”. First, how come Canada has so much financial fraud, despite the fact that we have laws against it and robust institutions to enforce those laws? Second, how come we Canadians still trust each other, given all the financial fraud around?
The answer to both questions is that economies run on trust, in large part because they run on division of labor. Societies in which nobody trusts strangers are societies in which people only do business with close friends and relatives, with whom they’re willing to do handshake deals. Which makes those societies poor. So yes, trust creates opportunities to steal from others by abusing their trust. But trust also creates the wealth that’s worth stealing in the first place. So Canada has financial fraud because it is a high-trust society. And fighting that fraud by becoming a low-trust society would rob Canadians of much more wealth than fraudsters ever have or ever will. Paraphrasing Davies: it would make no more sense for all Canadians to check the legitimacy of every financial transaction they engage in, than it would for them all to sew their own clothes and grow their own food.
The generalizable insight here for scientific fraud is that there’s some optimum level of scientific fraud, and it’s not zero. The prevalence of scientific fraud is already pretty low, even allowing for the possibility that a large majority of fraud goes undetected. Science is like a “high trust society”–most scientists unconsciously default to assuming that the scientists they meet and the papers they read are trustworthy. And science is like a wealthy society with a lot of division of labor: science as a whole has made a lot of progress over the decades. Scientific fraud exists because science runs on trust–but science would be much worse off if scientists were untrusting. Now, these considerations don’t prove that current scientific safeguards against misconduct are optimal. But they do indicate that the current system is pretty good, and that any possible improvements are likely to be marginal.**
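To make that cost-benefit logic concrete, here is a minimal numerical sketch (entirely made-up numbers of my own; nothing like this appears in Davies’ book). Treat the effort scientists collectively put into vetting one another’s work as a knob: the cost of vetting rises with effort, expected losses to fraud fall with effort, and the question is what effort level minimizes the total.

```python
# A toy cost model (made-up numbers) of why the cost-minimizing level of
# fraud checking still leaves some fraud undetected.
import numpy as np

effort = np.linspace(0, 1, 101)           # fraction of maximum possible vetting effort
checking_cost = 100 * effort              # cost of vetting rises with effort
fraud_loss = 50 * np.exp(-5 * effort)     # expected losses to fraud shrink with effort

total_cost = checking_cost + fraud_loss
best = np.argmin(total_cost)              # effort level with the lowest total cost

print(f"cost-minimizing vetting effort: {effort[best]:.2f}")
print(f"fraud losses remaining at that effort: {fraud_loss[best]:.1f}")
# With these numbers the optimum effort is well below the maximum, and the
# remaining fraud losses are positive: driving fraud to zero isn't worth the price.
```

The particular numbers are arbitrary; the point is just that whenever extra checking has rising costs and diminishing returns, the cost-minimizing amount of checking stops short of eliminating fraud entirely.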
That’s not a very original point–many scientists have made the same point recently. But as Lying For Money shows, the same point applies to finance, not just science. Which makes the point more compelling, at least to my mind.
*Or, you know, not. 🙂 Elsewhere, Dan Davies himself argues that history holds no lessons for the present day, because it’s so hard to tell which historical episodes were sufficiently similar to whatever present-day episode we’re trying to understand. If we’re trying to solve some present-day problem, we might as well just try to solve the problem, without trying to learn from history how to solve that problem. Because trying to learn from history how to solve the problem will be at least as difficult and error-prone as just solving the problem. This argument generalizes to all attempts to learn from analogies, such as the analogy between financial fraud and scientific fraud. I don’t entirely buy this argument; I think it proves too much. But I do think there’s something to it.
**Which doesn’t mean we shouldn’t discuss and implement possible improvements, of course. After all, a lot of cumulative progress is based on making marginal improvements!
Interesting! This is a generalized version of Type 1 and Type 2 errors, with the take-away that you can’t minimize both simultaneously. Undetected fraud is a Type 2 error: we accept as honest a paper that was in fact produced dishonestly. As usual, the costs of making either error, and the costs of reducing those errors, determine what the optimum level of oversight should be. It is a worthwhile discussion to have for scientific fraud. My personal view is that the parallel issue of oversight of animal use in research has gone too far toward reducing Type 2 errors; the costs to ethical researchers now often exceed the benefits of catching the very rare cases of unethical behavior. One hopes the move toward transparency, which is currently very good for the field, doesn’t overshoot similarly.
Agree, the analogy to Type I and Type II errors is a good one.
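To put toy numbers on that trade-off (my own illustration, with made-up score distributions; nothing like this appears in the post or in Davies): imagine every paper gets a “suspicion score”, and we flag as possibly fraudulent any paper whose score crosses a threshold. Lowering the threshold misses less fraud but falsely flags more honest work; raising it does the reverse.

```python
# Toy illustration (made-up numbers): sliding a suspicion threshold trades
# false accusations of honest papers against fraudulent papers that slip through.
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, 10_000)    # suspicion scores of honest papers
fraudulent = rng.normal(2.0, 1.0, 100)   # fraudulent papers score higher on average

for threshold in (0.5, 1.0, 1.5, 2.0):
    honest_flagged = np.mean(honest > threshold)      # Type 1-style error: honest work flagged
    fraud_missed = np.mean(fraudulent <= threshold)   # Type 2-style error: fraud not caught
    print(f"threshold {threshold:.1f}: "
          f"honest flagged {honest_flagged:.1%}, fraud missed {fraud_missed:.1%}")
```

Whatever the real distributions look like, the two error rates move in opposite directions as the threshold slides, so the question is never “how do we get both to zero?” but “what mix of the two errors is least costly?”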
Dammit, just thought of a scientific (well, proto-scientific) example I should’ve included in the post: alchemists. Alchemists didn’t trust one another. So they didn’t collaborate, didn’t share full information about their results, etc. Alchemists didn’t defraud one another either, at least as far as I’m aware (did they?). Yet despite that comparative lack of fraud, alchemists didn’t make much progress.
Ok, you can of course argue that they didn’t make much progress because they lacked background knowledge and background theory, or had incorrect background knowledge and background theory. And you’d be right. So maybe alchemists aren’t a particularly good example. I dunno; I’m curious to know if any historians of science have tried to tease out the extent to which alchemists’ collective failure to make progress was due to their lack of trust in one another.
Question which sounds like trolling but is actually serious: are there any contexts in science in which the level of fraud is *too low*? That is, are there contexts in which there’s too little trust, and the costs of that lack of trust outweigh the benefits of fraud detection/prevention?
Possible example: oversight rules and associated paperwork vetting even very small grant purchases to make sure they’re legitimate.
I shared this post quoting the sentence I found most true and provocative: “The generalizable insight here for scientific fraud is that there’s some optimum level of scientific fraud, and it’s not zero.” However, on further reflection I wonder how funding agencies, politicians, and more generally the public would view such a statement. I think most scientists are nuanced enough to see your argument (here I am trusting them 😉 ), but I worry that this may be difficult to convey to non-scientists, or more generally to people who don’t have time to dig into your justification for this claim.
Hopefully I didn’t contribute to tabloids printing “Canadian Scientist advocating fraud!” 😉
Heh. 🙂 Yes, the way to make this argument to policymakers and the public is not the way I made it in the post. The way to make it (well, *a* way to make it) is to talk about the costs of burdensome regulations and bureaucracy.
As you note, my deliberately-provocative way of putting it–“there’s some optimum level of scientific fraud, and it’s not zero”–is actually a little misleading. Because “amount of fraud” is not some exogenous parameter that we can dial up or down. As Davies says, the amount of fraud is an “equilibrium phenomenon”–it’s affected by the system parameters, not a parameter itself. “Optimum level of fraud” really means “whatever the level of fraud happens to be in a system in which the parameters that affect the level of fraud (and that affect lots of other things too) are set to their optimum values”.
“The way to make it (well, *a* way to make it) is to talk about the costs of burdensome regulations and bureaucracy.”
My possibly-wrong anecdotal impression is that this argument doesn’t have much traction these days, in any context. For instance, criticisms of airport security procedures as “security theater” don’t seem to have much traction with the powers that be. When was the last time any airport security rule was rolled back?
The only context I can think of in which this sort of argument has some traction these days is in the context of streamlining FDA approval of new drugs. But I don’t know much about that (or about airport security!). I actually have no idea if we have the right amount of airport security, or medical drug approval regulations, or whatever! Just noting my casual impression that, in general, if you argue “this well-intended regulation to prevent this bad thing is actually doing more harm than good”, you don’t seem to get a lot of traction these days.
Ooh, just thought of a context in which the argument “these regulations are too burdensome on too many people; they’re not worth the rare malfeasance they catch/prevent” seems to be winning converts: no longer requiring doctor’s notes from students to excuse missed course work. I don’t have systematic data, but my casual impression is that, these days, more and more profs are worrying about making many honest students go to the doctor for sick notes. Fewer are worrying about rare dishonest students lying about illnesses to get out of course work. And my own uni recently changed its policy so that now students can sign a declaration that they were ill (which they can do for free and conveniently at several places on campus), instead of getting a sick note from a doctor.
There’s a common thread between the cases of streamlining FDA drug approval, and not requiring doctor’s notes from students who miss course work due to illness. In both cases, one can point to specific vulnerable/powerless people who are harmed (or at least, plausibly might be harmed) by regulations intended to prevent malfeasance. Sick patients who could be helped by a new drug, or powerless/busy/cash-strapped students who are being forced to spend money on a doctor’s visit they wouldn’t otherwise need. That is, the cost of the regulatory burden doesn’t fall on powerful people, or on “everybody”, or at least it isn’t seen to do so.
In contrast, consider the case Brian brings up elsewhere in this thread: public universities and government agencies spending a lot of time and money auditing even very minor expenses, to make sure nobody buys pens for personal use or whatever. Much of the burden of following those regulations falls on people (like me!) who aren’t particularly vulnerable or otherwise badly-off. Or contrast the burden of “security theater” at airports. It falls on everybody, at least everybody who flies.
The application of this tentative line of thought to scientific misconduct (specifically, to new regulations and other reforms intended to stop scientific misconduct) is left as an exercise for the reader. 🙂
I think the oscillation in speed limits over the past 40 years might qualify as an example of ebb and flow in some sort of assessment (likely superficial) of the costs of “errors” (accidents) and the cost of preventing them.
Interesting example.
Debates over the age at which people should start getting screened for certain medical conditions is one context in which the costs and benefits of type I vs. type II errors get weighed. At least, that’s my casual impression. I don’t recall seeing anybody who’s skeptical of (say) universal prostate cancer screening for 40 year old men getting dismissed as “pro-cancer”.
I always think of this in terms of reimbursement (for travel, etc.). A fair amount of money is spent on accounting, and the more rigorous the accounting, the higher the costs (both for the core team of teachers and researchers in a university, and for accounting staff). It is very easy to spend more on preventing fraud than the amount of fraud prevented. There is a clear trade-off. The cost of making sure nobody steals paper clips or rounds up a blank taxi receipt is quite high, and the losses prevented wouldn’t begin to justify it. But we are pretty likely to catch somebody funneling thousands of dollars into their own bank account. In general, in my experience, business is much more comfortable with this trade-off (which is viewed as a purely $ trade-off) and invests less in fraud prevention than universities (which, as government institutions, are often also factoring scandalous newspaper headlines into the cost side).
Yes, good example.
Here in Canada NSERC has just announced a policy change that, if I’m understanding it correctly, will now allow things like pens and whiteboard markers to be purchased on grants, as long as the purchases are used for work related to the grant. Previously, purchase of such items on NSERC grants was banned because such items aren’t solely for scientific use. So NSERC at least seems to agree with your point.
Here’s a striking bit of data from a Guardian piece by Dan Davies:
“Since the invention of stock markets, there has been surprisingly little correlation between the amount of fraud in a market and the return to investors. It’s been credibly estimated that in the Victorian era, one in six companies floated on the London Stock Exchange was a fraud. But people got rich. It’s the Canadian paradox. Although in the short term, you save your money by checking everything out, in the long term, success goes to those who trust.”
From here: https://www.theguardian.com/news/2018/jun/28/how-to-get-away-with-financial-fraud
A Twitter thread from a non-scientist on the application of Davies’ book to science. I disagree with some of it, but this is a topic on which there’s scope for reasonable disagreement, so I wanted to share it:
You mention that science in general is a “high-trust society”. I’m having a hard time fully accepting that. I don’t think blind faith is very common among scientists, nor is ignorance. Don’t most scientists always have a skeptical part of their brain on alert, trying to spot mistakes or errors in another’s work?
I think this varies heavily by field. In areas of applied mathematics that I work in, we rarely ever check the fine details of a paper unless we really need to use similar reasoning. Far more often we defer to the referees to have vetted the details and just trust the results. Of course this varies quite a bit depending on how one plans to use the results or the methods, but I have definitely cited quite a lot of related work which I never bothered to check very carefully, as this would be enormously time consuming.
I suspect similar things happen in empirical work – as Jeremy mentioned earlier, no one would have casually noticed Pruitt’s errors unless they went looking for them.
Of course we are broadly skeptical, but rarely can we be comprehensively skeptical. I think our trust is in methods and ideas outside of our specific expertise.
That is true. Good point, yet the term “high-trust” still doesn’t quite resonate with me. Perhaps it doesn’t matter so much anyway.
“Don’t most scientists always have a skeptical part of their brain on alert, trying to spot mistakes or errors in another’s work?”
Most have a skeptical part of their brain regarding alternative hypotheses that might explain others’ work, or whether others’ work had a large enough sample size, or etc. But nobody is routinely suspicious that others’ work is fraudulent, or routinely checks others’ data for evidence of fraud.
Pingback: Scientific fraud vs. financial fraud: the “fraud triangle” | Dynamic Ecology
Pingback: Scientific fraud vs. financial fraud: the “snowball effect” and the Golden Rule of fraud detection | Dynamic Ecology
Pingback: Friday links: four (!) more #pruittdata papers, remembering Ben Nolting, and more | Dynamic Ecology
Pingback: Scientific fraud vs. financial fraud: is there a scientific equivalent of a “market crime”? | Dynamic Ecology
Pingback: Friday links: Covid-19 vs. BES journals, Charles Darwin board game, and more | Dynamic Ecology
Pingback: Friday links: behind the scenes of the first 17 months of #pruittdata, another serious data anomaly in EEB, and more | Dynamic Ecology
Pingback: Friday links: critiquing your own papers, why scientists lie (?), and more | Dynamic Ecology
Pingback: Friday links: a major case of fake data in psychology, the Avengers vs. faculty meetings, and more | Dynamic Ecology
Pingback: Why are scientific frauds so obvious? | Scientist Sees Squirrel
As I was saying: https://bam.kalzumeus.com/archive/optimal-amount-of-fraud/