Also this week: a fake Science paper replicates (?!), and more. Most of it not good. Sorry, it’s been that kind of week.
So, I guess we’d better talk about what some are calling #PruittGate (EDIT: and others are calling #PruittData)… Here’s a statement from Jonathan Pruitt’s past students, postdocs, and collaborators regarding the recent retractions of three of his papers, and the serious questions that have been raised about many others (including at PubPeer). It’s unfortunate that so much of the work of correcting the scientific record is falling on the shoulders of Jonathan Pruitt’s past students, postdocs, and collaborators, when they’re not the ones who introduced the errors into the scientific record. They didn’t collect the raw data that’s been questioned; Jonathan Pruitt did (as an aside, note that not every paper on which Jonathan Pruitt is a co-author contains data he collected). It’s to their great credit that they’re taking on this unasked-for burden of correcting the record. Editors at various journals, and other people, are also investigating. I’m glad to see that there have been so many public expressions of support for Pruitt’s former students, postdocs, and collaborators, to help them get through an extraordinarily difficult time.

Kate Laskowski, one of the collaborators in question, has a blog post detailing the full backstory of how the initial retraction came about. You should definitely read it. It’s an alternately fascinating, heartbreaking, and horrifying deep dive into how you find out that the data in your own paper can’t be trusted. And here’s the latest Retraction Watch story. Discussion of the case is now all over Twitter, and some discussants are departing from the diplomatic wording of the previous links to speak bluntly. If after reading up on this you need a laugh (because the alternative is to cry…), here you go. 🙂 😦

[pause, deep breath] Ok, I don’t think it’s really a secret anymore, so I may as well tell you that I’ve been involved in the investigation. After the initial retraction from Am Nat, at EiC Dan Bolnick’s request I took a close look at the datasets from Pruitt’s other Am Nat papers. Dan asked me to do this because I’m on the Am Nat editorial board and reasonably quantitatively sophisticated, but I’m not a behavioral ecologist, hadn’t handled any of Pruitt’s papers, and had never met Pruitt. So I came into this with fresh eyes. I started with Pruitt’s Am Nat papers, but then expanded to other papers, because I felt I needed to understand as well as possible what sorts of irregularities there might be in the Am Nat papers. I spent something like 20-30 hours spread over several days, looking carefully at the Am Nat papers and about a dozen other haphazardly-chosen papers for which data were available on Data Dryad. Like other folks, I found numerous irregularities and possible irregularities in datasets that Jonathan Pruitt collected (I didn’t find any irregularities in the datasets I looked at that he didn’t collect). Some of those irregularities are…let’s say very difficult to explain via normal biology, data collection, or data processing. Which is why they need investigation. So, as others have done, I passed on my findings to the EiCs of the journals concerned. Those EiCs in turn have done their jobs under COPE guidelines, which includes alerting the academic integrity officers at Jonathan Pruitt’s current and former institutions. It’s obviously not my place to determine the root causes of any data irregularities. I continue to pitch in on the ongoing collective effort to identify data irregularities and correct the scientific record.
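For readers wondering what “looking for data irregularities” can mean in practice, here is a minimal, purely illustrative sketch of one simple kind of screen: flagging long runs of values that recur verbatim within a single column. To be very clear, this is a toy example of my own, not the actual procedure I or anyone else used in this case; the column values and run-length threshold are made up for illustration.

```python
from collections import defaultdict

def repeated_runs(values, run_length=5):
    """Return every length-`run_length` window of `values` that occurs at
    more than one starting position, mapped to those positions."""
    positions = defaultdict(list)
    for i in range(len(values) - run_length + 1):
        positions[tuple(values[i:i + run_length])].append(i)
    # Keep only windows that appear at two or more places in the column.
    return {window: idxs for window, idxs in positions.items() if len(idxs) > 1}

# Toy column: a block of five "measurements" recurs verbatim later on.
column = [1.2, 3.4, 2.2, 5.1, 4.4, 0.9, 1.2, 3.4, 2.2, 5.1, 4.4, 2.7]
for window, idxs in repeated_runs(column).items():
    print(f"values {window} repeat at row indices {idxs}")
```

Again, that’s just to give a flavor. No single mechanical check settles anything: a flagged pattern is a starting point for case-by-case judgment and investigation, not proof of anything on its own.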
I hasten to add that others have spent far more time on this than I have, for which they deserve a ton of credit. Am Nat EiC Dan Bolnick has been a sort of unofficial coordinator of these efforts. Here’s his blog post on the current state of play, and his detailed timeline of events. There is now a public view-only spreadsheet that a number of people (including me) are working together to populate, compiling information on the current status of every one of Pruitt’s papers: who has looked at the data, whether any concerns were found, what action has been taken, etc. A key goal of this spreadsheet is to let others know which papers have irregularities and which don’t. That way, hopefully no one feels they have no choice but to throw up their hands and disbelieve every single paper ever co-authored by Jonathan Pruitt or anyone he’s ever worked with. If you have information that you think should be added to the spreadsheet, please email Dan Bolnick (firstname.lastname@example.org).

Please be patient with all the individuals and organizations working to correct the scientific record and sort all this out; these things take time. Please trust that everybody concerned is doing as much as they can, as fast as they can.

For those of you who are wondering how so many data irregularities could possibly have gone unnoticed for so long, allow me to confirm from personal experience what others have already said: most of these irregularities are only easy to spot once you know what to look for. They’re mostly not the sort of thing you’d be likely to notice in the course of an ordinary data analysis. None of Pruitt’s collaborators deserve any blame for not noticing these irregularities sooner. (UPDATE #1: And if you need further confirmation of that, Erik Postma and the other folks who were the first to start looking into these data did so not because they stumbled across irregularities in the course of an ordinary data analysis, but because they were tipped off by an anonymous junior scientist with inside information. /end update #1)

Finally, via Dan Bolnick: Jonathan Pruitt is currently in the field with family and friends; he’s taking steps to support himself. As others have already said better than I could, everyone involved is human, including Jonathan Pruitt. There’s a personal side to all of this. When he returns from the field, I hope that Jonathan Pruitt will find the strength to provide a full explanation for others to hear.
UPDATE #2: Science has a story on the situation that includes some quotes from Jonathan Pruitt.
UPDATE #4: And a second, very good post from Joan Strassmann, asking “what’s the alternative to trusting your collaborators?” (answer: there isn’t one). (UPDATE #5: to clarify something that came up in the comments, the part of Joan’s post that I really like is the bit about how we have no choice in the end but to trust our collaborators. As discussed in the comments, I can appreciate why Joan says what she says about Jonathan Pruitt’s publication rate, since it’s a line of thought I’ve heard from a couple of other people, but I don’t agree with that bit. Meaning, I don’t think it would be a good idea to have blanket suspicion of the validity of anyone’s scientific work purely on the grounds that they publish “too often” in selective journals.)
UPDATE #6: Behavioral ecology graduate student Alexandra McInturf comments. Eloquent piece.
UPDATE #7: Globe and Mail story. Not much new information.
UPDATE #8: And here’s the Nature story. New quotes from some of the people leading the collaborative investigation into the integrity of Pruitt’s data. No new quotes from Pruitt, who told Nature he would comment but then never did, and didn’t respond to follow-up emails. As Dan Bolnick says in the linked piece and others have said elsewhere, this lack of response from Pruitt is totally inadequate. A bunch of his former students, postdocs, and collaborators, as well as other concerned parties, have set aside their own research and other obligations and are knocking themselves out trying to correct the scientific record. Ghosting them because “you’re in the field” just does not cut it. Whatever Jonathan Pruitt is doing in the field, it’s not nearly as important as sorting out this mess. (And yes, I know he may well have received legal advice to stay mum. If so, he’s of course entitled to do what’s in his own best legal interest, just like anyone else is. Just speaking generally, I’m glad to live in a world in which people can do what’s in their own best legal interests. But it’s not inconsistent to also lament that, sometimes, doing what’s in your own best legal interests means leaving other people with burdens that they don’t deserve to bear.)
UPDATE #9: Leticia Aviles on why the mounting problems with papers for which Jonathan Pruitt collected the data shouldn’t cause you to question/doubt/ignore all research on social spiders or animal personality. I confess I’m a little surprised this needed saying, though it’s really not my field. I’m now thinking back to my old post on how my side won the “microcosm wars”, without me even noticing. Now, I wouldn’t criticize anyone working on social spiders or animal personality who is worried about the broad perception of these topics. It’s only natural to be a little worried about that in the wake of #PruittData! But I do wonder a little how well-founded that worry is. Imagine you polled behavioral ecologists, or ecologists and evolutionary biologists more broadly, on whether #PruittData has changed their views on social spider research, or on animal personality research. What fraction would say “I now have a lower opinion of all research on social spiders” or “I am now skeptical of all animal personality research”? I feel like it would be a small fraction; am I wrong? What do you think?
An editor at JTB (the Journal of Theoretical Biology) has been found to have committed “editorial malpractice”. Click through and read the whole thing. It involved repeated, appalling misconduct at every stage of the review process, with the goal of boosting the editor’s publication and citation counts. Reading it, I was left wondering a little why some of the abuses weren’t caught sooner.
Also this week in wild stories that I would discourage you from overgeneralizing from, because I doubt that they tell us much about any broader trend.
Big new meta-analysis of 492 studies finds that (i) interventions to educate people about, and reduce, their implicit biases have little effect on measures of implicit bias, and that (ii) changes in measured implicit bias have no effect on measures of either explicit bias or actual behavior. I’m still mulling over what conclusions/implications to draw from this.
Many of you will recall that, a few years back, Michael LaCour faked the data in a high-profile Science paper purporting to show that prejudice can be appreciably reduced, for months, by a single conversation with a canvasser. The political scientists who discovered the fakery have been repeating the study themselves and finding some success, though it’s still early days.