Friday links: a major case of fake data in psychology, the Avengers vs. faculty meetings, and more (UPDATEDx2)

Also this week: automating ecology, data transformation vs. global warming, Simpson’s paradox vs. Covid vaccine efficacy, vaccine hesitancy (polio edition), the case for pandemic optimism, another retraction for Denon Start, and more.

From Jeremy:

Shu et al. 2012 PNAS, an influential psychology paper on how the right “nudge” can reduce dishonest reports of information, is…dishonest. Yes, really. The fakery (and that’s definitely what it is) was discovered when the raw data were included in the supplement of a 2020 PNAS paper by different authors. The 2020 paper failed to replicate the 2012 findings; now we know why! This case illustrates what I now think of as the main argument for mandatory data sharing: data sharing makes it harder to get away with faking data. Kudos to the anonymous authors of the linked piece for some excellent data forensics (I say this as someone with a bit of experience in data forensics myself…). Noticing the use of two different fonts in the data file, and using that clue to work out exactly how some of the data were faked, is an especially good bit of sleuthing. The linked post in turn links to responses from four of the authors of Shu et al. You should read them as well. And here’s first author Lisa Shu’s Twitter thread on the case. There is strong evidence pointing to the source of the fakery: it seems to be either co-author Dan Ariely (a very famous psychologist at Duke), or the insurance company from which Ariely says he obtained the data. In his statement, Ariely blames the insurance company, and indicates that he would welcome an investigation by Duke. Note that there are some apparent conflicts between Ariely’s statement and the available evidence. We’ll have to wait and see if an explanation for those apparent conflicts is forthcoming. Hopefully there will be a quick and thorough investigation that will get to the bottom of all this. It would sure speed the investigation if Ariely would just name the insurance company… This isn’t the first time that questions have been raised about data that Ariely claims he got from a private company (and as an aside, that story illustrates one reason why Ariely might not want to name the insurance company…).
Meanwhile, others have started doing deep dives into Ariely’s other papers, somewhat hindered by the fact that he ordinarily only provides summary statistics, not raw data. Anyway, as in all such cases, you feel terrible for the co-authors who’ve been burned for doing what we all do (and could hardly avoid doing): assuming that the people we’re working with are honest. A bit of further context: Ariely received an Expression of Concern for another paper of his just last month, due to a bunch of statistical discrepancies that couldn’t be fully explained because Ariely couldn’t provide the raw data. And finally, what would you bet that PNAS will retract Shu et al. 2012? That’s not a rhetorical question: you literally can bet money on it. UPDATE: BuzzFeed identified the insurer that purportedly provided the data. It’s The Hartford, which didn’t reply to multiple requests for comment. /end update

UPDATE #2: The Hartford confirmed that it partnered with Dan Ariely on a “small project” in 2007-8, but says it can’t locate any data, results, or other deliverables from the project. /end update #2

Start et al. 2019 Am Nat has been retracted at the request of all authors but Denon Start. The retraction notice is admirably detailed. Kudos to Ben Gilbert and Art Weis for doing the right thing here, after spending considerable effort getting to the bottom of the problems with the data and analyses. Sorry that the burden of correcting the scientific record fell on them, rather than on Denon Start where it belonged. Am Nat EiC Dan Bolnick has a tl;dr version of the retraction notice. Add this paper to the growing list of papers by Start that have been retracted, corrected, or subjected to expressions of concern (see here for a very incomplete list; there are PubPeer threads about 16 different papers of his). As I’ve noted previously, most EEB researchers seem to have stopped citing any of Denon Start’s papers.

This is old but I missed it at the time, and it’s newly relevant in light of some of the links above: maybe scientific funding agencies should allocate a small amount of their research budgets (say, 1%) to researchers who want to reproduce or double-check the work of others. What do you think?

Sociologist David Weakliem with a series of three blog posts on hesitancy to take the polio vaccine back when the vaccine was first introduced in the 1950s (part 1, part 2, part 3). Very interesting historical comparative context for Covid vaccine hesitancy.

Timothy Keitt and Eric Abelson on automating ecology.

The Guardian interviews self-described “militant covid centrist” Prof. Francois Balloux.

Reasons to be optimistic about the pandemic.

Simpson’s paradox vs. Covid vaccine efficacy. Good example for teaching Simpson’s paradox, and more broadly the importance of covariates.
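If you want a self-contained classroom illustration to go with that link, here’s a toy calculation (all numbers invented, not taken from the linked piece): a vaccine that is 80% effective within each age group can look worse than useless in the pooled data, simply because vaccination is concentrated in the high-risk older group.

```python
# Toy numbers (invented for illustration): an older, mostly vaccinated
# group with a high baseline risk, and a younger, mostly unvaccinated
# group with a low baseline risk.
def efficacy(vax_cases, vax_n, unvax_cases, unvax_n):
    """Vaccine efficacy = 1 - (risk in vaccinated / risk in unvaccinated)."""
    return 1 - (vax_cases / vax_n) / (unvax_cases / unvax_n)

# Within each age stratum, the vaccine is 80% effective.
young = efficacy(vax_cases=2,    vax_n=100_000, unvax_cases=100, unvax_n=1_000_000)
old   = efficacy(vax_cases=1900, vax_n=950_000, unvax_cases=500, unvax_n=50_000)

# Pooled over both strata, the very same data make the vaccine look
# harmful, because most vaccinated people are in the high-risk group.
overall = efficacy(vax_cases=1902, vax_n=1_050_000,
                   unvax_cases=600, unvax_n=1_050_000)

print(f"young: {young:.0%}, old: {old:.0%}, overall: {overall:.0%}")
# young: 80%, old: 80%, overall: -217%
```

The pooled comparison is confounded by age; stratifying (or otherwise adjusting for the covariate) recovers the true within-group effect.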

Ooh, here’s a good example for my collection of statistical vignettes. Wolkovich et al. 2021 GCB show how failure to use a linearizing transformation makes it look like biological processes are becoming less sensitive to temperature change as the earth warms. Here’s a blog post with the story behind the paper. Fascinating (and depressing, and unsurprising) to learn that there was so much resistance to this paper from the reviewers.
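A minimal toy version of the phenomenon (my own sketch, not the paper’s actual model; the threshold and temperature ranges are made up): if leafout happens when accumulated warmth crosses a fixed degree-day threshold F*, then leafout day is roughly F*/T, so a raw-scale regression of day on temperature yields a slope whose magnitude shrinks as the climate warms, even though the underlying process never changes. A log-log (linearizing) transformation removes the artifact.

```python
import numpy as np

# Hypothetical growing-degree-day model: with constant daily temperature
# T, leafout occurs on day F_STAR / T once cumulative warmth crosses
# the threshold. F_STAR and the temperature ranges are invented.
F_STAR = 200.0                     # thermal threshold, degree-days

T_cold = np.linspace(5, 10, 50)    # cooler climate, deg C
T_warm = np.linspace(15, 20, 50)   # warmer climate, deg C
day_cold = F_STAR / T_cold
day_warm = F_STAR / T_warm

def ols_slope(x, y):
    """Slope of an ordinary least-squares line fit."""
    return np.polyfit(x, y, 1)[0]

# Raw-scale "sensitivity" (days per deg C) looks much weaker in the warm
# climate, even though the mechanism is identical in both...
sens_cold = ols_slope(T_cold, day_cold)   # strongly negative
sens_warm = ols_slope(T_warm, day_warm)   # much closer to zero

# ...while on the linearized log-log scale both slopes are exactly -1,
# correctly showing that the process itself hasn't changed.
log_cold = ols_slope(np.log(T_cold), np.log(day_cold))
log_warm = ols_slope(np.log(T_warm), np.log(day_warm))
```

Run on these toy inputs, the raw-scale slope in the warm climate is several times smaller in magnitude than in the cold climate, while both log-log slopes sit at -1.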

Need a version of this for profs like me who’ve suddenly realized they need to start prepping to teach. 🙂

The Avengers vs. faculty meetings. 🙂

And finally, this is lovely:

Have a good weekend. 🙂

Bonus! I’ve linked to this a couple of times before, most recently in the comments on a post earlier this week. But it’s funny and timely so I’m going to link to it again:

5 thoughts on “Friday links: a major case of fake data in psychology, the Avengers vs. faculty meetings, and more (UPDATEDx2)”

  1. Dan Ariely!?!? Say it ain’t so. I’ll be watching this one. Thanks for the other refs on suspected poor science. That dentist one is disturbing.

  2. “We have worked on enough fraud cases in the last decade to know that scientific fraud is more common than is convenient to believe, and that it does not happen only on the periphery of science. Addressing the problem of scientific fraud should not be left to a few anonymous (and fed up and frightened) whistleblowers and some (fed up and frightened) bloggers to root out.” (Data Colada). While I still prefer to believe this is uncommon, there’s a commonality among some retracted ecology and other “field study” datasets suspected of fabrication (or at least shown on close scrutiny to be highly implausible): a single person collected or compiled the data and provided them to co-authors, and no original lab notebooks, field forms, or the like were retained (Lönnstedt, Pruitt, LaCour, Stapel). Otherwise data fudgers can get tripped up by technicians who pay attention.

  3. The way retractions are published, it’s possible to have an article retracted and few would know. The links to the 2019 Am Nat paper by Start, Weis, and Gilbert don’t obviously show it as being retracted – one has to dig around. https://www.amnat.org/an/newpapers/MarStart.html; https://www.journals.uchicago.edu/doi/full/10.1086/701785
    The full text of the retraction notice doesn’t even link to the paper. The web landing page for the abstract has a link “Corrections to this article,” but it has to be clicked to reveal that the “correction” is actually a retraction. Neither the Scopus nor the Web of Science record for the article shows it as being retracted, and the retraction notice doesn’t even appear among the citing documents. To me the proper way to publish retractions is “Retraction: Indirect Interactions Shape Selection in a Multispecies Food Web,” so that a search for the original turns up the retraction. The retraction was only published 8 days ago, so hopefully this will change, but… Worth checking back in a month or so.

    • I too feel like it should be possible to improve matters here. I obviously don’t know much about web design, or how DOIs work, or etc. But I feel like publishers ought to be able to do better on this.

      Whether anyone could get Google Scholar or Web of Science to adjust their algorithms and outputs in such a way as to better highlight retractions, I dunno. And I don’t see any feasible way to make sure that “retracted” gets stamped on all the random copies of a given paper that might be floating around on JSTOR or university servers or preprint servers or whatever.
