Prominent botanist Steven Newmaster seems to have faked and exaggerated his work for many years, according to a major investigation by Science (UPDATED)

Kudos to then-grad student Ken Thompson for speaking up about the anomalies that kicked off this investigation, and kudos to Science for pursuing the investigation so well.

I thought #pruittdata was jaw-dropping, and honestly I’m not sure it measures up to this story. Jesus.

Hopefully Science’s investigation will make it hard for Guelph to sweep the matter under the rug, as they seem to be trying to do. I guess we’ll see. Doesn’t it depend how shameless the Guelph administration is? I mean, what if they just don’t care about people (including many senior members of their own staff) criticizing them in public?

I hadn’t realized that Science’s outstanding investigative journalism team is funded in part by a donation from emeritus biology prof Daniel Pinkel. That suggests a thought: maybe one way to improve investigations into scientific misconduct is to fund more investigative journalists. Obviously, that only works for high profile cases, but it’s something.

UPDATE: You can donate here to support Science’s investigative journalism. I just did. I hope you’ll consider it.

24 thoughts on “Prominent botanist Steven Newmaster seems to have faked and exaggerated his work for many years, according to a major investigation by Science (UPDATED)”

  1. Oh jeez – this is terrible. Thanks for posting, Jeremy; I’d missed this story. It’s highly unfortunate, for all kinds of reasons. Not the least of which is that it may smear completely conscientious scientists who have also used DNA tests to show that products on sale were fraudulent (e.g., showing that supposedly legal meat and sushi in Japan were in fact from endangered whales and other illegal sources).

    • Yes, the “fraudulently accusing others of fraud” bit was pretty jaw-dropping. Though not totally without precedent; I’m thinking of Stephen Jay Gould apparently fudging skull measurements so as to be able to accuse someone else of fudging skull measurements (https://journals.plos.org/plosbiology/article/info:doi/10.1371/journal.pbio.1001071).

      The thing is, that bit’s got a lot of competition for the title of “most jaw-dropping bit”! The bit where Newmaster claims to have sequenced SARS-CoV-2 in the summer of 2019 probably takes the cake for me.

      As I said in an old post about what I learned from reading old NSF OIG reports summarizing scientific fraud cases at NSF: you read about cases like Newmaster, or #pruittdata, and they’re just so bonkers, it’s exhausting. It’s almost a relief to read about garden variety scientific fraud. A bit like how it seems like every movie these days is some tentpole franchise in which the fate of the entire universe is at stake. It’s exhausting. Can’t we have a movie that’s about, like, normal people facing normal problems?

      • Cheers for that, hadn’t seen that. Looks interesting.

        Having only read the abstract of Kaplan et al. as yet, my first thought is that Kaplan et al. and Lewis et al. could both be right. Both of the following can be true at once:

        1. Gould consciously or unconsciously put a thumb on the scale with his analytical choices. He suspected that Morton was guilty of putting a thumb on the scale, and that suspicion led him to consciously or unconsciously put a thumb on the scale himself in order to confirm his suspicions.

        2. There is not any single “best” or “objective” or “most appropriate” analysis of these data. Because you can’t answer any meaningful biological question by measuring these skulls.

        Where I suppose I might differ from Kaplan et al. a bit (not sure; I haven’t read the paper yet) is that I don’t think #2 excuses #1. If you’re faced with a situation in which there’s no one “right” data analysis (here, because there’s no possible data analysis that would actually address a meaningful scientific question), I still don’t think that’s an excuse for picking an analysis that points to your desired conclusion.

        I’m thinking of how, in some of these “many analysts, one dataset” exercises, analysts provided with the same dataset and asked to address the same scientific question come up with answers that are all over the map. Which is consistent with #2: maybe the answers are all over the map because there is no “best” answer, either because the scientific question is too vague or because the dataset isn’t suitable for answering it. But reassuringly, in all the “many analysts, one dataset” exercises I’ve seen, there’s never any correlation between the analysts’ pre-existing beliefs and the analytical outcomes. That is, the answers are *not* all over the map because analysts’ pre-existing beliefs are all over the map and every analyst just finds in the data whatever they already believed would be there. I’m very glad that that’s the case: that data analysts faced with what Andrew Gelman calls a “garden of forking paths” choose their paths in an unbiased way, rather than consciously or unconsciously choosing the path (set of analytical choices) that leads to the analyst’s preferred conclusion.
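
        Here’s a minimal simulation sketch of that point (the data, variable names, and analytical choices are all invented for illustration): one dataset, analyzed under every combination of a few individually defensible choices, yields a range of effect estimates.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n = 120
        x = rng.normal(size=n)
        z = 0.6 * x + rng.normal(size=n)                    # covariate correlated with x
        y = 0.15 * x + 0.5 * z + rng.standard_t(3, size=n)  # heavy-tailed noise

        def estimate(trim_outliers, control_for_z):
            # One "path": optionally trim outliers, optionally control for z,
            # then report the OLS coefficient on x.
            keep = np.abs(y - y.mean()) < 2.5 * y.std() if trim_outliers else np.ones(n, dtype=bool)
            cols = [np.ones(keep.sum()), x[keep]] + ([z[keep]] if control_for_z else [])
            beta, *_ = np.linalg.lstsq(np.column_stack(cols), y[keep], rcond=None)
            return beta[1]

        estimates = [estimate(t, c) for t, c in itertools.product([False, True], repeat=2)]
        print("effect of x across 4 defensible analyses:",
              f"{min(estimates):+.3f} to {max(estimates):+.3f}")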

  2. Wow wow and wow. Thanks Jeremy. The article is so in depth it is hard to read the whole thing. We gotta scrutinize the system, not this individual.

    • Well, except that the vast majority of individuals working in this same system–including the vast majority of prominent individuals–are honest. Cases like Newmaster’s get tons of attention precisely because they’re unusual. I think it’s probably a bad idea to use unusual cases as the only (or even main) spur or data point to inform our thinking about systemic issues. If you want to think about systemic issues, I think you should pay most attention to systemic data (https://dynamicecology.wordpress.com/2020/02/17/some-data-and-historical-perspective-on-scientific-misconduct/)

      Having said that, if I could wave a magic wand and change something systemic about how scientific fraud is handled in Canada, I’d want to see a well-funded national investigative body, run and staffed by scientists rather than (say) lawyers. But Canadian scientists have been calling for that in the wake of every high-profile fraud case in Canadian history, and it never happens, so it’s probably wishing for a pony. More realistically, I’m most into tweaks like “giving journal data-sharing requirements more teeth”. Am Nat is leading the way on this; I hope other journals follow. As I said in the post, I also like the idea of giving Science money so they can do more investigative reporting.

      • This really hits the nail on the head, Jeremy. As you note above, this is a particularly egregious and unique case. And yet, the University of Guelph still failed to take a proper look when I was the lone complainant. If this case cannot even proceed to an investigation without a major article in Science (Martin Enserink’s article in June 2021) putting the pressure on, there’s no way that more typical cases (and other whistleblowers) will stand a chance. This should strike anybody as a major problem.

        Even the nature of the data renders our claims quite a bit more ‘verifiable’ than with #PruittData. With behavioural ecology data you can always say you lost your notebook or your laptop was stolen, so it really requires the detailed forensics that the co-authors used to generate a preponderance of evidence that could not be overcome (kudos to them). With DNA sequence data, there are institutional databases that track every single sample through time, so it really is possible to prove or falsify individual claims about data being generated. There either is a record, or there is not. In our case, the ‘lost notebook’ was a ‘mis-specified sequencing core facility’ and, lo and behold, the facility where the sequencing “actually” took place deletes all of their records every three years.

        When I wrote to the University of Guelph in Feb 2020 I really was just asking them to take a look at these databases—they refused to do this. If we care at all about scientific integrity in Canada, we will move quickly to wrest power away from institutions when it comes to addressing allegations about their faculty.

  3. I think I disagree with the very commonly expressed view that the real problem here is the systemic incentives for fraud created by the rewards of scientific success, and that the fix is to remove those incentives. It’s a very common view, but I’ve never been able to grok it.

    An analogy: no one ever says “Banks store a lot of money, which creates a systemic incentive to rob them. That’s the real problem, and we need to fix it by removing that incentive.” No one ever says that because it’s incorrect. The threat of rare bank robberies should not lead us to get rid of banks, or to force all banks to store no more than some small amount of money, in order to reduce the incentive for bank robberies! Not least because there are also strong *disincentives* to rob banks, such as “alarms”, “security cameras”, and “if you get caught, you’ll go to jail”. The analogy to scientific fraud cases is straightforward. (And yes, I do think that if you want to call the rewards of scientific success “incentives” for scientific fraud, you need to call safeguards and sanctions against scientific fraud “disincentives”.)

    If you don’t like the bank robber analogy, here’s another version that should hit closer to home for many scientists. No one ever says “The rewards associated with getting an ‘A’ grade create a strong incentive for undergraduates to cheat on exams. We need to get rid of those incentives so as to reduce the frequency of cheating on exams.”

    A further broad point: even if you do want to insist that systemic incentives are the real problem here, you don’t actually *have* to fix the systemic underlying causes of some bad behavior in order to reduce the frequency of that bad behavior to some low, socially optimal level. There are plenty of examples of successful policy interventions in all walks of life that don’t address the systemic underlying causes of societal problems.

    Another problem with the argument that the “real” problem is the incentives created by the rewards of scientific success: it presumes that we know Steven Newmaster’s motivations, and that he did a rational cost-benefit calculation before deciding to commit fraud. I don’t think we should presume that. I mean, what rational cost-benefit calculation led him to claim *he sequenced SARS-CoV-2 in the summer of 2019*?!

    Finally: most scientific fraud isn’t committed by prominent scientists. So even if we did reduce the rewards associated with becoming a prominent scientist, we wouldn’t be making much of a dent in scientific fraud writ large.

    • Jeremy, I disagree on each point! I’ll address one.
      You said: “No one ever says ‘The rewards associated with getting an A grade create a strong incentive for undergraduates to cheat on exams. We need to get rid of those incentives so as to reduce the frequency of cheating on exams.’”
      First: that “no one ever says” something has never been evidence that it is not a valid point, which seems to be your metric. No one ever said gravity bends light, but one would look pretty silly using that as an argument to tell Einstein he was wrong.
      But I also disagree that nobody ever said it. There is a whole movement in education to reduce the emphasis on grades and put more on learning, because grades can change the student mindset and the entire educational model. Similarly for our reward system in research: lots of citations, lots of grants, 20 papers a year, accolades. What drives people? That is work for psychologists and sociologists. So I think your example makes the point that there is something in the system to consider.
      And… I can’t help addressing one more. That not everybody who commits fraud is famous (where famous = success as interpreted by scientists; perhaps it’s $) does not mean fraud is not a strategy for increasing success. In fact, if people use fraud to become famous, one would expect that many would fail.

      • Fair enough Scott. So rather than debating analogies further, let’s talk more directly about the topic of interest. What incentives would you remove from US or Canadian science in order to appreciably reduce the frequency of scientific fraud in the US or Canada?

    • You’re absolutely right, Jeremy – scientists will be rewarded for doing good science – with jobs, promotions, grant money, attention…it seems absurd to suggest they shouldn’t get rewarded. And a minority will find ways to ‘fake’ good science. And if we change the incentives some people will find a way to ‘counterfeit’ the ticket for getting the incentives. The solution can’t be to get rid of incentives.

      • I’m hoping that Scott or someone else will reply to my question upthread: exactly how would you change the incentives in the US or Canada to cut fraud?

        One answer to that question that I’ve heard elsewhere is: eliminate certain really big rewards for the most successful scientists. So in Canada, no more Canada 150 Chairs or Canada Research Chairs, for instance. Internationally, no more Nobel Prizes. I’m not aware of any good reason to think that getting rid of major awards and chairs would be very effective at reducing fraud in the US and Canada, but perhaps there’s evidence I’m unaware of. And I tend to be reflexively suspicious of those answers, because in my admittedly-anecdotal experience the people giving them tend to dislike research chairs and awards for all sorts of reasons having nothing to do with fraud. I get the sense that, for them, the real goal is to get rid of research chairs and awards, for any and all reasons that seem handy. And maybe getting rid of chairs and awards is a good goal! But I’d prefer that people argue for that goal explicitly (“Here are 5 reasons why we should stop awarding Nobel Prizes”). I’d rather that discussions about how to reduce scientific fraud remain focused on that topic, rather than getting hijacked by discussions about how to achieve some other goal.

        Note that there are other countries in which incentives do seem to encourage fraud, or have done so in the past before the incentives were changed. My (possibly incorrect) understanding of the cross-country comparative data is that paying scientists large amounts of money per high-impact paper (e.g., $X per Nature paper, where X is some large number relative to a typical scientist’s salary) is associated with higher rates of fraud.

        Note as well that there could be an “interaction term” between incentive structures and personal attributes. One might imagine that the large majority of US and Canadian scientists would never commit fraud no matter what incentives they were faced with. And that a small minority would commit fraud under pretty much any set of incentives they were faced with. But maybe there’s another small minority for whom the incentives matter on the margin–who would commit fraud under some incentive structures, but not under others.

      • Me 3. Living in a competitive context is simultaneously unavoidable and not in any direct way the creator of fraud – not when 99.99% of scientists don’t do it.

      • Brian, this issue of 99.99% is one that interests me. Because my sense (no empirical evidence) is that the incidence of unethical science is higher than 0.01%. But it doesn’t happen at the level of outright data fabrication. It is about putting our thumbs on the scale to get a preferred result, even if we’re not completely aware of it. For example: my research finds some very exciting evidence that A causes B, but only if I include covariates C and D; if I also include covariate E, the evidence is weak to non-existent. There are similarly strong logical reasons for including each of covariates C, D and E, but there is also a general argument that as you include more covariates your parameter estimates get worse. So there is a defensible rationale for more than one set of covariates. I decide to split the difference and include C and D but not E. That is, I pick the path that provides maximum publishability.
        My sense is that we are often in this situation, and I would guess that people would frequently choose the most self-serving path – no idea how frequently that would be. I think it’s plausible that more harm is done to scientific progress (if not to the reputation of science) by the accumulation of these “garden of forking paths” decisions than by the Newmaster/Pruitt cases. But maybe I’m in the minority of folks who think this is a widespread and common problem.
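
        A minimal simulation sketch of that scenario (all variable names and effect sizes are invented for illustration): if E drives both A and B, the estimated effect of A on B looks exciting when controlling only for C and D, but is weak to non-existent once E is added.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 150
        E = rng.normal(size=n)                       # E drives both A and B
        A = 0.7 * E + rng.normal(size=n)
        C = rng.normal(size=n)                       # plausible but irrelevant covariates
        D = rng.normal(size=n)
        B = 0.05 * A + 0.8 * E + rng.normal(size=n)  # the true effect of A on B is tiny

        def effect_of_A(covariates):
            # OLS coefficient on A, controlling for the given covariates.
            X = np.column_stack([np.ones(n), A] + covariates)
            beta, *_ = np.linalg.lstsq(X, B, rcond=None)
            return beta[1]

        print(f"controlling for C, D:    {effect_of_A([C, D]):+.3f}")     # looks 'exciting'
        print(f"controlling for C, D, E: {effect_of_A([C, D, E]):+.3f}")  # weak to non-existent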

      • I agree that there is a blurry grey area between “suboptimal or mistaken, but totally honest, scientific practices” and straight-up fraud. Although I don’t think the greyness extends as widely as you appear to suggest (perhaps I’ve misunderstood you?). I agree that we should be concerned about whether our results are robust to the different analytical decisions we might’ve taken, and that we should be more worried about robustness (or the lack thereof) than about straight-up fraud. I just don’t think we need to define “walking through the garden of forking paths” as a form of misconduct (not even borderline misconduct) in order to justify worrying about it.

        Especially because I’m not so sure that, when faced with a garden of forking paths, investigators routinely choose self-serving paths. Indeed, in the “many analysts, one dataset” exercises I’ve seen, there’s no correlation between investigators’ prior beliefs about the topic, and the answers they report. On the other hand, there certainly are cases in which we should worry about everyone who works on a topic having similar biases, that bias the answers a field reports without anyone ever doing anything even borderline fraudulent. I’m thinking for instance of habitat fragmentation researchers’ tendency to emphasize negative effects of fragmentation on biodiversity in their abstracts.

        So I dunno. I think it’s hard to say exactly how often, or in exactly what contexts, scientific fields go off the rails because of biases shared by everyone working in the field.

        I think ongoing “many analysts, one dataset” exercises will give us more insight into how the garden of forking paths plays out in practice. Though one issue I’ve been wondering about lately is how representative the analysts participating in those exercises are of working scientists. Perhaps the analysts in “many analysts, one dataset” exercises tend to be unusually impartial, because when they participate in these exercises they’re not working on topics directly related to their own research?

      • Yeah, biased path choice in the garden of forking paths is detrimental to science and a problem. But it’s a qualitatively different order of magnitude of problem than outright data fakery, in my opinion. Among other things, readers can form judgments and use caution against garden-of-forking-paths issues (although perfect information and full disclosure would certainly be better), but they really have no realistic defense against data fakery. If we want to talk about a world where incentive structures cause the problem – it is the garden of forking paths and the need to crank out papers every year regardless of how results turn out. Incentives like getting into Nature or having a tenure-worthy track record are not causing people to fake data. But having a tenure-worthy track record probably is driving a bit of the garden of forking paths problem. In short, I have to assume that on some level scientists know about the garden of forking paths and perceive it as an acceptable compromise. Not defending it. But I can protect myself against it: personally, when I see papers that aren’t obviously a priori hypothesis testing (or explicitly exploratory), have small effect sizes, and report contrasting results in different years or locations or taxa, I just move on.

  4. Expanding on a thought in the post: imagine that the Guelph administration really is shameless enough to think to themselves “We need to protect Steven Newmaster, no matter how much people rip us in the media, because he brings in a lot of money. We want to keep that money coming in. We also need to protect him because we want our other top money-acquiring scientists to feel valued and supported. We don’t want our other top money-acquiring scientists to think that we’re suspicious of them or going to start checking up on them more. Because then they might leave and take their money with them.” If the Guelph administration, or really any uni administration, really was that shameless, what could be done about it, realistically? Honest question.

    And yes, it’s a hypothetical–I don’t actually know that the Guelph admin is *that* shameless. Hopefully they’ll turn out not to be! But Guelph profs quoted in the Science piece seem to be worried that the Guelph admin is that shameless, or at least pretty shameless. So it doesn’t seem like such an outlandish hypothetical that there’s no point in thinking about it.

  5. Put me in the skeptics’ camp on outsourcing fraud policing to journalists. Clearly they did a good job here (indeed, they moved the needle when other avenues weren’t working). But ultimately, journalists’ incentives aren’t the same as scientists’. Just for example, are they ever going to devote attention to a grad student who committed fraud on their first paper, or just to really big names with lots of papers? And if it’s a complex or borderline case, is the press really the best place to sort it out?

    If scientists want fraud accountability, it has to come from scientists. And if scientists need a big stick to get the ball rolling, I would think funding agencies are closest to scientists and have the most aligned interests. An independent body (run by the National Academy/Royal Society/etc.), with funding agencies agreeing to respect its decisions, seems like the best outcome.

    But for sure, leaving it where it is now, with university employees, is the worst option.

  6. Thanks for highlighting this Jeremy. I read it the same day that I watched The Tinder Swindler documentary on Netflix and there are interesting parallels between the two cases. Specifically, it is clear that there are some people who are able to commit the most outrageous, high stakes, high claim fraud, out in the open, in plain sight without feeling any remorse. And the fact that it’s so out in the open and of such a large scale means that most of us cannot comprehend the fact that it’s fraudulent until it’s revealed.

    If you’ve not seen that documentary, I highly recommend it. And I think it supports your view that it’s not that the system creates fraudsters; it’s that the system attracts fraudsters, because their psychopathy (if that’s what it is) sees opportunities that most of the rest of us would not consider pursuing.

  7. Andrew Gelman on Newmaster: https://statmodeling.stat.columbia.edu/2022/08/28/the-latest-scientific-fraud-story-as-is-so-often-the-case-people-had-been-warning-us-about-this-guy-for-a-long-time-but-the-people-in-charge-werent-listening/

    Includes some further examples of Newmaster’s fabulism, which apparently have been known for many years. On his UGuelph biography page, he falsely claims to have done a postdoctoral fellowship in matrix mathematics in Australia, with an organization that has no record of him. His CV also inflated the length and value of his NSERC grants, which are a matter of public record.

    And it turns out colleagues complained about him to the university admin as far back as 2010! So Guelph has been knowingly covering for this liar for over a decade.
