A few months ago I read economist Dan Davies’ Lying For Money, a popular history of financial fraud. I read it just because it sounded fun and interesting (and it was!), and because I’d read and liked some of Davies’ blog posts on other subjects. I didn’t think it would have any relevance to my professional life.
Then this year happened. Recent events in which I have some involvement have prompted a broad discussion among ecologists about scientific fraud. This post contributes to that broad discussion; it’s not about the specific events that kicked it off. Are there any commonalities between financial fraud and scientific fraud? What can we scientists learn about fraud, and how to prevent it, from the world of business and finance?
As a scientist, one reason to think about fraud in a non-scientific context is to get a bit of distance, and so hopefully get some objectivity. Like most scientists, whenever someone tells me about a case of scientific fraud, I unconsciously start interpreting that case in light of whatever I already thought or believed. And not just what I already thought or believed about scientific fraud specifically, but everything I thought or believed about scientists and science more broadly. “This just goes to show [thing I already believed]!” is one of the most tempting sentences for anyone to say, about anything, which is why it gets said so often. But I don’t have any strong preexisting beliefs about financial fraud, and I’m guessing you don’t either. So thinking about financial fraud is a way for me, and you, to think about “fraud in general” without any preconceptions. And then once we have some general insights in hand, we can apply them to scientific fraud, thereby hopefully coming to a better understanding of scientific fraud.*
Hence this, the first of what I hope will be a little series of posts on insights about scientific misconduct that I took away from Lying For Money. Today: the “Canadian paradox”.
Canada is a high-trust society. We Canadians mostly assume (usually without even thinking consciously about it) that the businesses we deal with are honest, that laws are fair and will be enforced, that contracts will be honored and debts repaid, that strangers we meet are trustworthy, etc. Yet Canada is infamous for financial fraud. (Illustration: the Vancouver Stock Exchange was widely known as the “scam capital of the world”.) Contrast Canada with low-trust societies, in which large-scale financial fraud is rare and people will do business deals on a handshake. These observations raise two questions, which together constitute the “Canadian paradox”. First, how come Canada has so much financial fraud despite the fact that we have laws against it and robust institutions to enforce those laws? Second, how come we Canadians still trust each other, given all the financial fraud around?
The answer to both questions is that economies run on trust, in large part because they run on division of labor. Societies in which nobody trusts strangers are societies in which people only do business with close friends and relatives, with whom they’re willing to do handshake deals. Which makes those societies poor. So yes, trust creates opportunities to steal from others by abusing their trust. But trust also creates the wealth that’s worth stealing in the first place. So Canada has financial fraud because it is a high-trust society. And fighting that fraud by becoming a low-trust society would rob Canadians of much more wealth than fraudsters ever have or ever will. Paraphrasing Davies: it would make no more sense for all Canadians to check the legitimacy of every financial transaction they engage in, than it would for them all to sew their own clothes and grow their own food.
The generalizable insight here for scientific fraud is that there’s some optimum level of scientific fraud, and it’s not zero. The prevalence of scientific fraud is already pretty low, even allowing for the possibility that a large majority of fraud goes undetected. Science is like a “high-trust society”–most scientists unconsciously default to assuming that the scientists they meet and the papers they read are trustworthy. And science is like a wealthy society with a lot of division of labor: science as a whole has made a lot of progress over the decades. Scientific fraud exists because science runs on trust–but science would be much worse off if scientists were untrusting. Now, these considerations don’t prove that current scientific safeguards against misconduct are optimal. But they do indicate that the current system is pretty good, and that any possible improvements are likely to be marginal.**
That’s not a very original point–many scientists have made it recently. But as Lying For Money shows, the same point applies to finance, not just science. Which makes it more compelling, at least to my mind.
*Or, you know, not. 🙂 Elsewhere, Dan Davies himself argues that history holds no lessons for the present day, because it’s so hard to tell which historical episodes were sufficiently similar to whatever present-day episode we’re trying to understand. If we’re trying to solve some present-day problem, we might as well just tackle it directly, without first trying to learn from history how to solve it, because extracting the right lessons from history will be at least as difficult and error-prone as just solving the problem itself. This argument generalizes to all attempts to learn from analogies, such as the analogy between financial fraud and scientific fraud. I don’t entirely buy this argument; I think it proves too much. But I do think there’s something to it.
**Which doesn’t mean we shouldn’t discuss and implement possible improvements, of course. After all, a lot of cumulative progress is based on making marginal improvements!