Scientific fraud vs. financial fraud: is there a scientific equivalent of “control fraud”?

Continuing my little series of posts on the analogies between scientific fraud and financial fraud, inspired by Dan Davies’ book Lying For Money. As with past posts in the series, the hope is that looking at scientific fraud through the lens of financial fraud provides some novel and useful insights. Just thinking out loud here, trying ideas on for size.

Today: is there a scientific equivalent of a “control fraud”? For British readers, a more clickbait-y title for this post would be “Is there a scientific equivalent of the PPI mis-selling scandal?”

A “control fraud” is a financial fraud that’s only possible for someone who controls one or more economic entities, such as a bank trading desk (think Nick Leeson) or a savings & loan (think Charles Keating). Because you control the entity, you can extract money from the entity “legitimately”. Well, legitimately in the sense that it’s normal for whoever controls some economic entity to extract money from it, usually in the form of salary and benefits. The trouble is that the profits that would support and justify the salary and benefits are fake.

For instance, in a “cash for trash” scam, Keating’s S&L would loan borrowers much more money than their real estate collateral was worth on the open market. In exchange, said borrowers would use some of the money to buy similar real estate at an inflated price from a firm that Keating also controlled. That’s a great deal for both Keating’s real estate firm, and Keating’s S&L. The real estate firm makes an enormous profit on the sale. And because the sale inflates the apparent “market value” of that particular sort of real estate, the borrower’s collateral can now be revalued to a value greater than that of the loan it secures.

The distinctive feature of “cash for trash”, and other control frauds, is that there’s nothing illegal about the individual transactions comprising the fraud, at least not necessarily. For instance, there’s nothing illegal about someone paying substantially more for a piece of real estate than anyone else is willing to pay. The fraud is at the level of the whole interlocking network of component transactions.

Most of the biggest financial frauds are control frauds. If you want to steal a lot of money, control fraud is the way to go.

So, are there scientific equivalents of control frauds? I’m not sure; I’m having trouble thinking of any really clear-cut examples. Someone like Diederik Stapel doesn’t quite fit. Yes, he controlled his lab. But he used control of his lab to conceal the fact that he was providing his grad students with fake data he’d purportedly collected on their behalf. You don’t have to be a lab PI to fake data. And faking data–the most common sort of scientific fraud–is analogous to a fraudulent financial transaction. It’s not analogous to a network of interlocking financial transactions that’s only illegitimate at the level of the whole network.

Maybe a journal editor who systematically forces submitting authors to cite other papers from the journal, in order to boost the journal’s impact factor, could be considered a control fraud? Or a journal editor who arranges for the journal to publish many of their own papers?

It’s interesting to reflect that the very rare scientific frauds that did significant damage to science as a whole mostly weren’t control frauds. That seems like a contrast between scientific fraud and financial fraud.

But there’s another, more abstract sort of control fraud that might have a scientific analogue, or at least so some would argue: a distributed control fraud. Quoting Davies’ book:

Now consider this – what would happen if, rather than organizing the fraudulent inflation of the corporation yourself [as in a “cash for trash” scam], you simply set up a system of (non-) checks and balances, such that other people were likely to inflate it for you? In other words, rather than committing crimes yourself to inflate the value, you just created a massively criminogenic environment in the firm, and let nature take its course?…That’s pretty abstract. But there’s a level of abstraction even higher than that. What if…there really was no intention to create a criminogenic scheme at all? If you were lucky enough to set up a company with bad incentives and internal controls by accident…[then] it would be possible for a massive control fraud to take place purely by accident, without any criminal responsibility at all. This would present a really unattractive [legal] case; there would be huge amounts of criminality and misrepresentation, but all of it would be carried out by relatively low-level employees, most of whom would hardly have profited from doing so, and many of whom could credibly claim that they were not sophisticated enough to realize that what they were doing was illegal. Meanwhile, you would have a top tier of fantastically rich senior managers, who should have known what was going on, and about whom everyone has a strong suspicion that they ‘must have known’, but no possibility whatsoever of being able to meet criminal standards of evidence of them having done so, because they in fact didn’t know.

Davies suggests the Payment Protection Insurance (PPI) mis-selling scandal as an example of a distributed control fraud. Briefly, large numbers of British people who took out mortgages, loans, or credit cards were sold insurance that would cover their repayments under certain conditions, such as job loss. But the insurance was expensive, structured to make payout on claims unlikely, pushed on people who didn’t understand what they were buying, and sometimes sold under false pretences. Davies discusses the broader “criminogenic” circumstances that led to this, summarizing them as “the natural result of what happens when a dysfunctional industry meets a weak management structure, under [competitive] pressure”, adding “To the frustration of all, it is not a crime to set stupid targets for your sales force, and it is not a crime to fail to check up on them.”

It’s easy to see an analogy to science here; the question is just how seriously to take the analogy. There are surely some who would take it very seriously indeed, and who would argue that academic scientific research as a whole–all of it–is analogous to a distributed control fraud! Just for the sake of argument, let’s take that possibility seriously for a moment. Put as starkly as possible, the argument goes something like this: competitive pressures to get grants and publish papers lead lab PIs to set unreasonably high expectations for their grad students and postdocs without exercising much oversight of them. The result is that grad students and postdocs crank out tons of low-quality science, using questionable research practices and in the worst cases outright fakery.

Without wanting to suggest that the incentives and control measures in academic science are perfect, I don’t buy this analogy. I don’t buy it because I think academic science as a whole is too high in aggregate quality for the analogy to work. Scientific misconduct is just too rare for science as a whole to be analogized to a distributed control fraud, even if one defines “misconduct” broadly enough to include questionable research practices and thinks that most misconduct goes undetected. I do think there are rare cases of scientific fields becoming sufficiently dysfunctional that the analogy to distributed control fraud miiiiiight fit (*cough* social psychology *cough*). But even in those cases I don’t think the analogy really fits. I think the replication crisis in social psychology has revealed some collective, self-reinforcing blind spots in that field, but not collective behavior that looks unethical and fraudulent when you step back from it, in contrast to the case of PPI mis-selling. And like I said, I definitely don’t think the analogy to distributed control fraud fits science as a whole. But this is an argument about gradations and degrees, not about something black and white. There’s no clear bright line between “distributed control fraud” and “a basically well-run competitive industry”.

Looking forward to your comments as always.*

*Note: if you’ve never read this blog or commented here before, welcome! Please do have a look at our “About” page and familiarize yourself with our commenting policy. Note that I, in my role as moderator, have a low tolerance for politically-motivated conspiracy theories. Nor do I want this blog to become yet one more forum for the same old flame wars that have been going on since the days of Usenet. In the interests of promoting a productive conversation, I’ll block anyone claiming that climate science, evolutionary biology, mainstream vaccine research, etc. are rife with fraud. And if you want to argue about IQ research, please take it elsewhere, this isn’t the appropriate venue.

11 thoughts on “Scientific fraud vs. financial fraud: is there a scientific equivalent of “control fraud”?”

  1. Maybe I don’t really understand it, but wouldn’t predatory publishers count as a form of control fraud?

    I’m not referring to the obvious spam we receive daily, but to those questionable for-profit publishers that blur the boundary between reputable and predatory publishing (I won’t name these publishers, but I’m sure you know which ones I am referring to). Most of these journals have a veneer of peer review, with nominally independent editorial boards and impartial peer reviewers, but the standards are so low that you can’t help but wonder if it’s just a way of extracting page fees from desperate researchers.

  2. Hmmm. I propose a scenario in which an editor for a journal (or possibly multiple journals, since, at least in my world, many review/associate-level editors sit on multiple boards) intentionally chooses lenient/uncareful reviewers for manuscripts in a particular area, where the claims go beyond the support of the data. This creates an emergent (poorly supported, but not yet refuted) literature base around a “hot new topic”, which the editor then uses to drum up interest in their own funding applications.

    I’d actually be surprised if variations of this weren’t happening constantly – not necessarily at the outright “I intend to profit from this” level of fraud, but rather at the “I’m interested in this, therefore I will pick reviewers who I expect to be kind to it” level, which inevitably does bias what gets published, and therefore the fundability of the areas of the editor’s interests.

  3. I love this discussion! What you refer to as “control fraud” I typically call “institutionalized fraud”.

    The answer is: Yes! Yes! Yes! One little, well-placed, intentional spin of science can cause billions of dollars of institutionalized fraud across multiple types of policies, practices, physician educational materials, and litigation.

    Below is the root of a massive scam that I have been tracking for sixteen years. It’s a bogus toxicological risk-assessment model created in 2001 by a Big Tobacco scientist and a retired deputy director of CDC NIOSH: the “Veritox Hypothesis”. It survives because of the dishonest financial benefits it provides in many places, and because of the embarrassment it would cause the many respected, but unclean, hands that have enabled it to continue.

    The Veritox Hypothesis
    In single-dose in vivo studies, S. chartarum spores have been administered intranasally to mice [31] or intratracheally to rats [76,77]. High doses (30 x 10^6 spores/kg and higher) produced pulmonary inflammation and hemorrhage in both species. A range of doses were administered in the rat studies and multiple, sensitive indices of effect were monitored, demonstrating a graded dose response with 3 x 10^6 spores/kg being a clear no-effect dose. Airborne S. chartarum spore concentrations that would deliver a comparable dose of spores can be estimated by assuming that all inhaled spores are retained and using standard default values for human subpopulations of particular interest [78]: very small infants, school-age children, and adults. The no-effect dose in rats (3 x 10^6 spores/kg) corresponds to continuous 24-hour exposure to 2.1 x 10^6 spores/m^3 for infants, 6.6 x 10^6 spores/m^3 for a school-age child, or 15.3 x 10^6 spores/m^3 for an adult. If the no-effect 3 x 10^6 spores/kg intratracheal bolus dose in rats is regarded as a 1-minute administration (3 x 10^6 spores/kg/min), achieving the same dose rate in humans (using the same default assumptions as previously) would require airborne concentrations of 3.0 x 10^9 spores/m^3 for an infant, 9.5 x 10^9 spores/m^3 for a child, or 22.0 x 10^9 spores/m^3 for an adult.

    In a repeat-dose study, mice were given intranasal treatments twice weekly for three weeks with “highly toxic” s. 72 S. chartarum spores at doses of 4.6 x 10^6 or 4.6 x 10^4 spores/kg (cumulative doses over three weeks of 2.8 x 10^7 or 2.8 x 10^5 spores/kg) [79]. The higher dose caused severe inflammation with hemorrhage, while less severe inflammation, but no hemorrhage, was seen at the lower dose of s. 72 spores. Using the same assumptions as previously (and again ignoring dose-rate implications), airborne S. chartarum spore concentrations that would deliver the non-hemorrhagic cumulative three-week dose of 2.8 x 10^5 spores/kg can be estimated as 9.4 x 10^3 spores/m^3 for infants, 29.3 x 10^3 spores/m^3 for a school-age child, and 68.0 x 10^3 spores/m^3 for adults (assuming exposure for 24 hours per day, 7 days per week, and 100% retention of spores).

    The preceding calculations suggest lower-bound estimates of airborne S. chartarum spore concentrations corresponding to essentially no-effect acute and subchronic exposures. Those concentrations are not infeasible, but they are improbable and inconsistent with reported spore concentrations. For example, in data from 9,619 indoor air samples from 1,717 buildings, when S. chartarum was detected in indoor air (6% of the buildings surveyed) the median airborne concentration was 12 CFU/m^3 (95% CI 12 to 118 CFU/m^3) [80].

    Despite its well-known ability to produce mycotoxins under appropriate growth conditions, years of intensive study have failed to establish exposure to S. chartarum in home, school, or office environments as a cause of adverse human health effects. Levels of exposure in the indoor environment, dose-response data in animals, and dose-rate considerations suggest that delivery by the inhalation route of a toxic dose of mycotoxins in the indoor environment is highly unlikely at best, even for the hypothetically most vulnerable subpopulations.
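    The dose-to-concentration conversion in the quoted passage can be sanity-checked with simple arithmetic: equivalent airborne concentration = dose per kg of body weight ÷ daily air intake per kg of body weight (given the passage’s assumptions of continuous 24-hour exposure and 100% spore retention). A minimal Python sketch; note that the per-kg air intakes here are back-calculated from the quoted figures, not taken from the actual defaults in the paper’s reference 78, which is not reproduced above:

```python
# Sanity check of the quoted conversion from a per-kg dose to an airborne
# concentration, assuming continuous 24-hour exposure and 100% retention:
#
#   concentration (spores/m^3) = dose (spores/kg) / air intake (m^3/(kg*day))
#
# The per-kg intakes are back-calculated from the quoted numbers; they are
# NOT the original defaults cited by the paper (its reference 78).

no_effect_dose = 3e6  # spores per kg body weight (quoted no-effect dose in rats)

# Quoted equivalent 24-hour airborne concentrations, spores/m^3:
quoted_concentration = {"infant": 2.1e6, "school-age child": 6.6e6, "adult": 15.3e6}

# Implied daily air intake per kg body weight, m^3/(kg*day):
implied_intake = {
    group: no_effect_dose / conc for group, conc in quoted_concentration.items()
}

for group, intake in implied_intake.items():
    print(f"{group}: implied air intake ~{intake:.2f} m^3 per kg per day")
```

    By construction the implied intakes (roughly 1.43 m^3/(kg·day) for an infant, 0.45 for a school-age child, and 0.20 for an adult) reproduce the quoted concentrations exactly; the point is only to make the conversion explicit, and smaller bodies with proportionally higher air intake per kg come out with lower equivalent concentrations, as in the quote.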

    Below is a link to a five-page 2007 Wall Street Journal article about it, how it’s mass-marketed as legitimate science, and some of its uses:

    “Court of Opinion, Amid Suits Over Mold Experts Wear Two Hats, Authors of Science Paper Often Cited by Defense Also Help in Litigation”

    Click to access 02.%20Amid%20Suits%20Over%20Mold%20Experts%20Wear%20Two%20Hats%2002.07.pdf

    If you would like me to tell you more about how this applies to what you are working to understand, I would be happy to!

  4. There’s an example on Retraction Watch:

    The papers are, if reports are to be believed, exceedingly bogus. But it looks as though the fraud investigation capabilities of the authors’ institution have been captured by the authors, so no institutional report will be forthcoming: and the journals don’t want to act without an institutional report.

    This is maybe closer to “regulatory capture” but it’s a fair match for control fraud as well.

      • I know this has happened in a program supporting research in a particular geographic region. It was only broken up because the funding line got eliminated. It was not a fair way to allocate the funds, but it did seem to produce a lot of publications, because those involved were able to build a base of support in the region and do longer-term projects.

        I am uncertain that society or science was hurt by this. Its biggest “crime” was wasting the time of outsiders who submitted proposals. I am hesitant to give details because those involved would likely say they only gave grants to the most qualified proposals and that my concern was just jealousy because my proposals were not good enough. This could have been resolved by the panel being ineligible to receive funding from the program and/or having panel appointments expire.

      • My postdoctoral advisor, a big name in phylogenetics, feared for many years that cladists would monopolize the NSF grant panel and deny non-cladists grants. I don’t know if it ever actually happened.

        True story: I gave a guest lecture at a distant university many years ago, when this conflict was a bit hotter. Of course I was scheduled to chat with many local scientists, which was fun. Then at the end of the day my host drove me across a river to a dark, spooky, otherwise closed Museum of Natural History to talk to two more colleagues. “Just so you know,” he said before driving away, “they’re cladists.”

        So we had an awkward chat for about five minutes. Really awkward. They had not cared for my talk at all…. And then I said, “Have you seen the attempts to use parsimony analysis in linguistics? What do you think of them? I’m really not convinced–” and we spent the rest of the hour cheerfully agreeing that we didn’t think this approach was sound. Whew!
