Many retracted papers continue to be cited years after they’re retracted. Some are even cited about as often after retraction as before; i.e., the retraction appears to have had no effect on how often they’re cited. See here for discussion and links to some data. Which is a pretty depressing commentary on scientists’ citation practices.
But #pruittdata is an unusual case. Approximately a year ago, very serious concerns were raised about anomalies in the data underpinning dozens of papers co-authored by Jonathan Pruitt. Those concerns were widely reported in the scientific press and even made it into some newspapers, and were widely discussed by scientists on social media. Numerous papers of Pruitt’s have now been retracted, corrected, or subjected to Expressions of Concern (EoC), and various investigations are still ongoing. How have citing authors reacted to #pruittdata? Are papers co-authored by Jonathan Pruitt still being cited? And are citing authors differentiating between papers for which Jonathan Pruitt collected data, and papers for which he did not collect data (e.g., review papers)?
I was prompted to look into this a bit after seeing a tweet noting that, according to Google Scholar, Pruitt has been cited much less often in 2020 than he was in 2019. It’s very unusual for an active researcher who publishes regularly in selective journals to be cited much less often one year than in the previous year. That certainly suggests that the #pruittdata news caused many people in Pruitt’s field to stop citing his papers. But of course, looking at Google Scholar citation counts is only a crude first pass. It doesn’t differentiate self-citations from others. It doesn’t differentiate review papers from others. It counts citations from sources other than peer-reviewed papers. And it doesn’t differentiate citing papers that were already in review or in press when the #pruittdata news broke from citing papers that were submitted after the news broke.
So I did some digging on Web of Science. I looked at how often each of Jonathan Pruitt’s 20 most-cited papers (i.e. most cited all time) were cited in 2019 vs. 2020. Those 20 included 4 review/perspectives papers and 16 research papers. I also looked at whether the 2020 citations were from early or late in the year, and whether or not they were self-citations. I also spot-checked 8 other haphazardly-chosen papers of Pruitt’s from 2016-19 (1 review paper, 7 research papers), since his 20 most-cited papers were all from 2015 or earlier. I didn’t count any citations from notices of retraction/correction/EoC. And I didn’t count two citations from a book review. I recorded citation dates the way WoS records them: by the date the peer-reviewed paper first appeared online and was added to the WoS database, even if that was before it appeared in a paginated journal issue.
Here’s what I found:
- Citations to review papers co-authored by Pruitt haven’t dropped much. Well, citations to a couple of them dropped somewhat from 2019 to 2020, citations to a couple of them were about the same, and citations to one of them (the most recent one) increased. So, no pattern to speak of. Further, citations to review papers were an appreciable fraction of all of Pruitt’s 2020 citations. So if you’re wondering why his citations didn’t drop even more from 2019 to 2020, that’s the main reason why. Because…
- Citations to previously well-cited research papers co-authored by Pruitt cratered as soon as the #pruittdata news broke. Pruitt’s 16 most-cited research papers got 150 citations in 2019 vs. just 64 in 2020. Further, most of those 2020 citations were from papers that were published in the first half of 2020 and so likely were submitted before the #pruittdata news broke. Those 16 research papers only received 13 citations from July 2020 on, much lower than the 32 you’d have expected if the 2020 citations had been distributed evenly throughout the year (the back-of-the-envelope arithmetic is sketched just after this list). Further, those 13 late-in-2020 citations included 1 from a philosophy journal. Philosophy papers often go years from submission to publication (yes really). So really, it’s only 12ish citations that seem like they might date from post-#pruittdata. (I say “12ish” because of course it’s possible that some papers published in the first half of 2020 were submitted after #pruittdata started, and that some papers published in the second half of 2020 were submitted before #pruittdata started). Further still, even the 51 citations these 16 papers received in the first half of 2020 are well down from the 75ish you’d have expected if these 16 papers had continued to be cited as often as they were in 2019. Finally, if you eyeball the pre-2019 citation data, these 16 papers mostly looked to be on flat citation trajectories in the years before 2020. So I don’t think these results are even partially attributable to a long-term trend of decreasing annual citation counts for Pruitt’s older papers. But just to be sure, I checked some recent papers (see following bullet).
- Citations to Pruitt’s more recent research papers also dropped when the #pruittdata news broke, but not quite as much. The 7 recent research papers I haphazardly checked were cited a total of 41 times in 2019 vs. just 29 times in 2020. Of those 29 citations in 2020, only 10 were from the second half of the year. Further, several of these recently-published papers looked to have been on an upward citation trajectory prior to 2020. In the absence of #pruittdata, you’d have expected them to be cited more often in 2020 than in 2019. So it looks to me like citations of Pruitt’s recent papers were down just as much as citations of his older papers, relative to how often you’d have expected them to be cited in 2020.
- I didn’t see any obvious temporal pattern in the 2020 self-citations that distinguished them from the other 2020 citations. And just offhand, it didn’t look to me like self-citations were an appreciably larger, or smaller, fraction of the 2020 citations than they were of the 2019 citations.
- There was no big obvious difference in citations between papers subject to retraction/correction/EoC and other papers.
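For concreteness, here’s a minimal sketch of the back-of-the-envelope expectations in the bullets above. The citation counts are the ones reported in this post; the assumption that citations accrue uniformly across the year is the simplification behind the “you’d have expected” figures, so treat these as rough benchmarks rather than a model.

```python
# Rough expected-citation benchmarks for Pruitt's 16 most-cited research papers.
# Counts are from the post; uniform accrual across the year is the assumption.

citations_2019 = 150  # total 2019 citations to the 16 papers
citations_2020 = 64   # total 2020 citations to the 16 papers

# If the 2020 citations had been spread evenly across the year, half of
# them would have fallen in July-December:
expected_late_2020 = citations_2020 / 2   # 32.0 expected; 13 observed

# If the papers had kept being cited at their 2019 rate, the first half
# of 2020 alone should have produced about half the 2019 total:
expected_early_2020 = citations_2019 / 2  # 75.0 expected; 51 observed

print(expected_late_2020, expected_early_2020)
```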
In conclusion, it looks to me like most people in Pruitt’s field stopped citing Pruitt’s research papers as soon as the #pruittdata news broke in late January 2020. That’s a measure of just how widely and rapidly the news spread, and just how seriously the news was taken by people in Pruitt’s field.
Further, it looks to me like most people in Pruitt’s field have adopted the heuristic “don’t cite any research papers co-authored by Jonathan Pruitt”. Rather than, say, only avoiding citation of papers that have been formally subjected to retraction/correction/EoC. And rather than distinguishing between papers by Pruitt for which Pruitt collected data, and papers by Pruitt for which he did not collect data. None of which surprises me. Anecdotally, I’ve heard from several ecologists who said (in so many words) “Pruitt has over 100 research papers. I don’t have the time or ability to keep track of which ones are subject to what level of concern. So I’m just going to err on the side of caution and not cite any of Pruitt’s research papers.” I think that’s a totally understandable approach for citing authors to take. Especially if you’re just looking for a citation in support of some broad point for which you could cite many papers, or a citation in support of some passing remark. Of course, not citing any of Pruitt’s research papers has the unfortunate side effect of cutting citations for some papers co-authored by Pruitt that remain reliable. I sympathize with Pruitt collaborators who will lose a few citations for that reason. But I don’t see any realistic way of avoiding that side effect. After all, people use all sorts of heuristics all the time when choosing papers to cite or not cite. It’s not realistic to expect them to not use a heuristic in this very unusual situation. Fortunately, I doubt anyone is going to suffer any career consequences just from losing a few citations. Nobody’s career outcome is so sensitive to the exact frequency with which they’re cited that a few lost citations will make any appreciable difference at the margin. There absolutely are substantial negative consequences to Pruitt’s collaborators and trainees from #pruittdata–but those negative consequences mostly aren’t to do with a few lost citations to still-reliable papers that were co-authored by Jonathan Pruitt but for which Pruitt didn’t collect any data.
So if you’re concerned that journals have yet to retract some #pruittdata papers about which serious concerns have been raised (which I am), or that some journals have allowed corrections in cases where they arguably should’ve retracted (which I am), well, try to keep your concerns in perspective. Because all signs are that, whatever formal decisions journals make, most authors have already stopped citing Jonathan Pruitt’s research papers. And I wouldn’t be surprised if the rate at which they’re cited continues to drop (asymptotically) in future.
I’m more concerned about a possibility this post doesn’t address: that some people will stop citing papers by Pruitt’s collaborators and trainees that weren’t co-authored with Jonathan Pruitt. Nick Keiser says he’s seeing this happening to his papers, which is really unfortunate. That’s something that actually could have career consequences, not just because of the lost citations themselves but because those lost citations are a symptom of lost reputation. It would be really perverse for the reputations of Pruitt’s collaborators to suffer because of #pruittdata. Many of Pruitt’s collaborators and trainees have been at the forefront of identifying and addressing anomalies in papers for which Pruitt collected the data, at considerable cost to themselves in terms of time and emotional well-being. They’re good scientists who ended up in a terrible situation through no fault of their own, and who’ve been going above and beyond to do the right thing. If anything, I think you should trust their non-Pruitt-co-authored papers more than you would have before #pruittdata broke, not less. As I said above, I totally get that people need heuristics to decide what papers to cite. But “Don’t cite anything written by anybody who’s ever worked with Jonathan Pruitt” is a bad heuristic. Nobody should adopt that heuristic, no matter how cautious they want to be about only citing reliable work.
I am curious; do citations appear (yet) of the form “Pruittpapers X, Y, and Z purported to show this Thing to be true, but they have since been retracted”? That is, explaining to a reader why those papers about the Thing are not being cited.
Hi Hal,
I didn’t read any papers to see exactly how Pruitt’s papers are being cited. I just looked at the dates of the citations and the authors of the citing papers. Offhand, I’d be surprised if anyone ever cites Pruitt in the way you describe. It just isn’t how scientists usually write. But we’ll see I guess!
That seems strange. Pruitt made a big impact in his field (I gather; it’s not a field I know anything about), and collaborated with lots of people. On topics where his work led to important conclusions, I think there is a responsibility to explicitly point out where those conclusions have been retracted.
I don’t think it’s strange. For a lot of his broad conclusions, there are many other papers one could cite. You don’t become prominent in your field by being the only person working on topic X, and so the only possible source of citations relating to topic X.
Pretty much the only people who cite Pruitt’s research papers in support of anything other than broad points, or else in support of unimportant passing remarks, are Pruitt himself and his own collaborators and trainees. I have an old post with some data on this: https://dynamicecology.wordpress.com/2020/10/05/how-much-damage-do-retracted-papers-do-to-science-before-theyre-retracted-and-to-who/
And I don’t think any of this is unique to Pruitt. I think the same would be true of most prominent EEB researchers.
Jeremy — I think we have very different views on the purposes of citations in scientific writing. “For a lot of his broad conclusions, there are many other papers one could cite.” I don’t think that’s the point. Would your description of the literature and the “broad conclusions” have been viewed, pre-pruittgate, as deficient if you didn’t cite him? Then I think it is appropriate, post-pruittgate, to point out the retraction[s] of his work that is relevant to what you are writing about.
As I watch the unfolding discussions about this whole mess, I keep reminding myself that the purpose of those retracted papers was not to document how many spiders jumped when and where, although that’s where the problems with the data appear. Their purpose was to draw some kind of “broad conclusions” about behavioral ecology. Had they not done so, Pruitt would not have become so prominent. So, I think that there will often be a need to explicitly address the retractions, regardless of whether there are other papers one could cite or not.
“Would your description of the literature and the “broad conclusions” have been viewed, pre-pruittgate, as deficient if you didn’t cite him?”
From looking in detail at how a couple of his papers have been cited, I’d say the answer to that question is “yes” for only a minority of citations to his papers. And that most of that minority are self-citations.
I’ve now heard from someone who had a paper in review when #pruittdata broke that cited a bunch of Pruitt’s papers. Both reviewers recommended that the citations be removed. So that could be another mechanism contributing to the cratering of citations to Pruitt’s research papers in 2020.
I’m now wondering if any authors of papers that were in review or in press when #pruittdata broke revised their papers to remove citations to Pruitt, without being asked to do so by reviewers or editors.
About a dozen readers have clicked the link to my year-old post from when #pruittdata first broke. It’s amusing to wonder if any of them are just hearing about this for the first time (say, because they’re brand-new grad students or something). If so, I wouldn’t want to have to be the one to get them up to speed, it’d be like Inigo in The Princess Bride. “Let me explain…No, there is too much, let me sum up.” 🙂
An anecdotal observation that I decided to leave out of the post: a small number of Pruitt’s citations from July 2020 on were from authors whom you might’ve expected not to have heard about #pruittdata. That citation from a philosophy paper that the post mentions. A citation from a veterinary paper on cow behavior. A citation from a theoretical physics paper.* I wouldn’t be surprised if lots of philosophers, veterinarians, and theoretical physicists haven’t heard about #pruittdata. Going forward, I’m sure Pruitt’s research papers will continue to be cited a little bit, and I’ll be curious what fraction of those citations come from authors whom you wouldn’t have expected to have heard about #pruittdata. Such authors might include brand-new grad students in behavioral ecology, as well as more experienced researchers in fields far removed from behavioral ecology.
*Don’t look so surprised. There’s a whole subfield of theoretical physicists building models of ecology and evolution and publishing them in theoretical physics journals.
TREE editor here
1. I’ve had reviewers ask me to check that the data in the Pruitt paper being cited wasn’t under question. In one case, it was a key, foundational reference, so couldn’t be easily swapped over. Co-authors confirmed all was fine.
2. If the most recent review paper you mention is Pruitt’s 2019 TREE paper on tropical cyclones, it’s (almost) entirely being cited by cyclone researchers, not behavioural ecologists; again, people who might not be so aware of the Pruitt concerns.
Interesting, thanks for sharing. Glad to hear the reviewers and editors at TREE are paying such close attention to this.
No, the recent review I looked at wasn’t Pruitt et al. 2019 TREE. Interesting that it’s almost entirely being cited by cyclone researchers rather than behavioral ecologists. That could be because cyclone researchers might not have heard about #pruittdata. But it could also be because it’s a review/opinion paper that people still feel ok with citing because they know it’s not based on data collected by Pruitt.
Co-author of the TREE review here. Given it is mainly ideas and not data-based, with the data making up a figure taken from other published work, I would hope that it will still get cited. However, I appreciate that hope may be optimistic, and I agree that there will be minimal hit to my career if that review gets fewer citations. However, all my Pruitt co-authored stuff getting read and cited less? Seems detrimental.
I don’t think anyone has adopted the heuristic of “Don’t cite anything written by anybody who’s ever worked with Jonathan Pruitt.”
You bring up the example of Dr. Keiser, but the fact is he only has about 8 publications without Pruitt as a coauthor compared to ~30 with him as coauthor.
Not citing any paper with Pruitt as a coauthor is a reasonable heuristic to adopt given the number of his publications that have been found to be problematic. That view is bolstered by the length of the investigations that still have no outcome, the number of papers with unaddressed issues on pubpeer, and the instances where “corrections” to Pruitt papers have been issued, only to have it revealed that they completely missed glaring issues. For example:
https://pubpeer.com/publications/41B0D0284D234750447DC3809E3F77
https://pubpeer.com/publications/6CAE0D5A1FE835EEA2BADFFB126872
“You bring up the example of Dr. Keiser, but the fact is he only has about 8 publications without Pruitt as a coauthor compared to ~30 with him as coauthor. ”
Could be, I don’t know. To really get at this, you’d need to look at citations of many papers by Pruitt’s collaborators and trainees, including many papers co-authored by Pruitt and many papers not co-authored by Pruitt. It’s certainly doable, but I don’t have time to do it right now (I’ve already procrastinated too much just compiling the data for this post).
Nick Keiser’s comments no doubt reflect his own experiences seeing how others react to him and his work in various contexts (e.g., conversations at conferences; job interviews). He’s talked about those experiences in various Twitter threads. I won’t try to summarize because I don’t want to put words into his mouth; I’d encourage anyone interested to read Nick’s threads for themselves. And I wouldn’t necessarily assume that Nick’s experiences are (or aren’t) representative of those of other Pruitt collaborators and trainees. Further, even a single individual’s experiences can be mixed. From lurking on Twitter and in comment threads on Retraction Watch, I’ve seen both a lot of praise and support for Pruitt’s collaborators and trainees, and also a small but distressing minority of comments making totally baseless accusations against them. I think it’s a normal human response for negative comments to loom larger in one’s mind than praise. So if some of Pruitt’s collaborators and trainees worry that their own reputations have been tarred (or even just a little bit tarred) by their association with Pruitt, well, I think that’s a natural worry for them to have. And if my comments in this post help ensure that that worry doesn’t come to pass, well, that’s surely a good thing, however likely that worry was to come to pass.
” and the instances where “corrections” to Pruitt papers have been issued, only to have it revealed that they completely missed glaring issues. ”
Data from one paper being in part relabeled and duplicated from a completely unrelated paper published in a different journal is not a “glaring” issue. Investigators (of which I’m one) have been working on individual papers. They’ve worked very hard and turned up a *ton* of anomalies, some of which are obvious *once you know what to look for*, and others of which are subtle *even when you know what you’re looking for*. Looking for duplication of data *across* papers is much harder still. Hats off to the PubPeer commenter who caught that duplication, that’s some very sharp data sleuthing!
The fact that many investigations still have not had an outcome has nothing to do with how effectively the various investigators have identified data anomalies, or with lack of awareness of anomalies that have been identified by anonymous PubPeer commenters. Were you suggesting it does? (aside that probably doesn’t need saying, but just in case: the lack of outcome for many #pruittdata papers about which serious concerns have been raised has to do with the willingness of the institutions and individuals with decision-making power to act on those concerns, often in the face of real or perceived legal jeopardy. Now, if someone wanted to argue that some individuals and institutions have been over-cautious in their decisions or the speed with which they’ve made them, well, I might agree, or might disagree. Depends which individuals and institutions we’re talking about, and which decisions we’re talking about.)
My point about the length of investigations is that they don’t help restore trust, since any work that is done is hidden to outside observers. There are steps people have been taking to restore trust in their work (and they’ve done a successful job, in my opinion). For example, Dr. Laskowski and Dr. DiRienzo have both done similar things, in that they:
1. Publicly released the data
2. Stated where the data came from and who collected what
3. Described what they did to check and validate the data
4. Listed any anomalies or potential anomalies that they found
While some other Pruitt coauthors have done similar things, others have not. Realistically, due to the current lack of responses from “formal” channels such as universities and journals, the onus is on individual researchers to build back trust in their work. Whether or not you think that’s something we should be expecting from everyone, that is a concrete action they can take. Personally, I think it’s reasonable to expect that, given that Pruitt’s coauthors likely received an initial benefit from publishing with him, such as increased numbers of publications and citations. However, now that the benefit has been found to be derived from unethical means (even if it is through no fault of their own), they should recognize that unearned privilege and take steps to be open and transparent about the whole process.
To return to a previous point, I still maintain that the issues missed in the corrections were indeed glaring. In the case of one (https://pubpeer.com/publications/6CAE0D5A1FE835EEA2BADFFB126872), a simple check for duplicate values led to finding the new issues. In the case of the other (https://pubpeer.com/publications/41B0D0284D234750447DC3809E3F77), which had the data recycled from another paper, it is actually much simpler than it appears. Though it was not mentioned in the pubpeer post (it is vaguely mentioned in the correction), that paper has identical segments of ~10 duplicate values within the dataset. That was very similar to another dataset that had segments of ~10 duplicate values with a slight offset (https://pubpeer.com/publications/BF638A197BC80D145674D8118BE37F). It was no great intellectual leap to check if those segments of duplicate values were the same (they were). I would expect anyone who has investigated both those datasets to make the same connection.
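(For what it’s worth, here’s a minimal sketch of the kind of duplicate-segment check described above. The ~10-value run length comes from the comment; the function name, toy data, and exact matching criterion are illustrative assumptions of mine, not anyone’s actual sleuthing code.)

```python
# Minimal sketch: find runs of run_len consecutive values that reappear
# verbatim, within one dataset or across two datasets. run_len ~ 10
# matches the segment length mentioned above; everything else is illustrative.

def duplicate_runs(a, b, run_len=10):
    """Return (i, j) pairs where a[i:i+run_len] == b[j:j+run_len]."""
    # Index every run in b by its tuple of values for O(1) lookup.
    runs_in_b = {}
    for j in range(len(b) - run_len + 1):
        runs_in_b.setdefault(tuple(b[j:j + run_len]), []).append(j)
    hits = []
    for i in range(len(a) - run_len + 1):
        for j in runs_in_b.get(tuple(a[i:i + run_len]), []):
            if a is not b or i != j:  # skip a run trivially matching itself
                hits.append((i, j))
    return hits

# Within-dataset check: pass the same column twice.
x = [3.1, 2.4, 5.0, 3.1, 2.4, 5.0]  # toy values, not real data
print(duplicate_runs(x, x, run_len=3))  # -> [(0, 3), (3, 0)]

# Cross-dataset check (matches at arbitrary offsets): pass columns from
# two different papers' datasets as a and b.
```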
Finally, while I’m sure there are many things going on behind the scenes, outsiders are not privy to that information. Given that lack of information, it’s reasonable to be suspicious of publications by Pruitt where the datasets are not public or issues have been flagged and the only publicly available information is in the form of some sporadic posts on pubpeer. Plus, in at least some cases, promised data has not yet been released, or the responses by authors are incomplete, flippant, or even combative. To take a couple of examples, see this recent response by a Pruitt coauthor (https://pubpeer.com/publications/0FBCA715359182E62F8568E65D94C0) that took 10 months, and to me at least, feels incomplete. Do you think that response builds trust or suspicion?
Or another example, where a Pruitt coauthor appears to admit to some level of mistake (https://pubpeer.com/publications/A6682428F6257063784D8AFE7B97E), but with no further explanation or release of data in the 8 months since. Perhaps there are valid reasons for the lack of followup, but it certainly doesn’t build trust.
It sounds to me like you think a good professional norm would be “authors have a professional obligation to monitor PubPeer comments and respond promptly on PubPeer, to the full satisfaction of PubPeer commenters.” Do you agree with that? I don’t want to put words into your mouth.
I guess my concern here is that people vary in how trusting they are. I’m not sure that “you’re not doing enough to establish trust, unless the least-trusting people trust your results” is really a feasible norm. 100% of people are never going to trust any result. There’s always going to be some equilibrium level of distrust.
Like everyone (including me), PubPeer commenters aren’t infallible. Because of my involvement in the investigation of #pruittdata for Am Nat, I’ve had occasion to investigate, and respond to, numerous PubPeer comments and anonymous emails alleging anomalies in Pruitt’s papers, and in papers by Pruitt’s trainees that weren’t co-authored by Pruitt. So I’ve been keeping a *very* close eye on PubPeer comments on Pruitt’s papers. Many of those PubPeer comments, and some of the anonymous emails I’ve received, did indeed identify real anomalies. Others were mistaken–the comment author thought they’d identified anomalies, but actually just hadn’t read the methods section carefully, or lacked background knowledge of the study system. More than once, I’ve patiently explained to mistaken anonymous commenters why they were mistaken. Sometimes they accepted my explanations. And sometimes they responded by attacking me personally, without responding to the substance of my explanations. Of course, you’ll have to trust that everything I’ve just said in this paragraph is true…
The point here isn’t that anonymous commenters are uniquely prone to error. I don’t think they are. The point is just that, I don’t think “some anonymous PubPeer commenters are dissatisfied with the speed and thoroughness with which authors respond to their comments” is really good evidence for a widespread breakdown of trust. It’s evidence that some people are less trusting than others, and that the least trusting will never be satisfied. I suspect that the least-trusting people are going to be both quickest to think they’ve spotted anomalies (=highest power, and also highest type I error rate), and the most reluctant to accept that any non-anomalies they find weren’t actually anomalies.
Personally, I think formal procedures and policies are a good way to build trust. “Rules of the game” that everyone follows and can be seen by others to follow. See here: https://dynamicecology.wordpress.com/2014/02/24/post-publication-review-signs-of-the-times/ I think it was brave and admirable of, say, Kate Laskowski to post as she did when #pruittdata first broke. But I don’t think the fact that not every one of Pruitt’s collaborators and trainees has posted like that in response to every single PubPeer comment is something that should be held against them.
In terms of the specific author response you link to, Ambika Kamath’s response doesn’t strike me as at all combative or flippant, and it seems to me to address the question asked. Now, you might not consider Ambika’s response fully satisfactory. But personally, and speaking as someone who’s spent a *lot* of time looking at purported data anomalies along these lines this year, the comment to which Ambika is responding doesn’t really identify a super clear-cut anomaly in my view. Perhaps if I looked more closely at the data myself I might feel differently. But that’s the point–I’m not going to bother looking at the data myself because I trust Ambika.
As for the notion that Pruitt’s co-authors and trainees owe it to PubPeer commenters, or to the field as a whole, to be especially public and detailed in their responses to concerns about Pruitt’s work, because they benefitted from working with Pruitt…I mean, *that’s what they’re doing*. They *are* ALL spending a *lot* of time helping to correct the scientific record! The fact that they haven’t always done it in public, or as fast as some PubPeer commenters would like, just shows that some PubPeer commenters are impatient and untrusting, frankly. When this is all over, Pruitt’s papers are going to have an average time from initial allegation to correction/retraction that’s faster than is typical, even compared to other recent similar cases. (many years ago, retractions used to take much longer than they do these days) And again, insofar as some institutions haven’t taken corrective action as quickly as I think they should have, that’s almost never because Pruitt’s collaborators and trainees have been dragging their feet. Pruitt’s collaborators and trainees aren’t the bottleneck.
You seem to be hinting that Pruitt’s trainees and collaborators are even now net beneficiaries of having worked with Pruitt, rather than the primary victims. Are you suggesting that even now they’re deliberately slow-walking correction of the scientific record, because they want to keep the undeserved benefits of having worked with Pruitt, without having to pay the costs involved in correcting the scientific record? Is that your claim? That they’re in the same position as someone who receives a mistaken money transfer from a bank and then tries to keep it rather than paying it back? I mean that as an honest question; apologies if I’ve misunderstood you. I want to make sure I’m understanding you and don’t want to put words into your mouth. But I also don’t want to have a discussion in which people hint at claims they aren’t explicitly making.
Oops, here is the correct link to the last pubpeer example: https://pubpeer.com/publications/A6682428F6257063784D8AFE7B97E9
I don’t think we are quite on the same page on the point I’m trying to make. What I’m trying to say is that there is currently a cloud over a large body of research. It is a reasonable reaction for other scientists to avoid citing that research, whether it has been directly implicated or not. There are steps that Pruitt’s coauthors could take to help dispel the cloud that not all of them are taking. This could take many forms, one of which includes responding to comments on pubpeer. However, this could take other forms, as demonstrated in the case of Dr. Laskowski, who has done a good job addressing issues despite never posting or responding on pubpeer.
You seem very hung up on pubpeer, and its supposed potential to ruin reputations. I disagree with that view, since I trust other scientists to do as you did, and read the comments and make informed decisions. Do the authors have an obligation to respond? No. But it’s probably in their best interest to address concerns in some way (again, not necessarily on pubpeer) if they want people to cite their work. I also see no issue with publicly discussing published works. To quote the pubpeer FAQ: “Authors who don’t want their work discussed should consider not publishing it.”
My point about flippant and combative responses was meant in general, not about the example by Dr. Kamath specifically. To cite a couple specific examples:
Flippant: https://pubpeer.com/publications/13DCC6D036F123D650233182C6ADF4#2
Combative: https://twitter.com/Thatsregrettab1/status/1256366169595850752 There were a few more comments along the lines of the twitter example, but they have all been moderated by pubpeer.
Thanks for taking the time to comment further.
“There are steps that Pruitt’s coauthors could take to help dispel the cloud that not all of them are taking.”
I disagree. Sadly, I don’t think there’s much that Pruitt’s co-authors can do that would cause others to adopt some other heuristic regarding which papers to cite.
More broadly, it’s perfectly possible to praise Kate Laskowski for posting as she did, without explicitly or implicitly criticizing other Pruitt co-authors for not doing the same.
More broadly still, I think Pruitt’s co-authors are their own best judges of what’s in their own self-interest. As an outsider, I don’t think I’m in any position to judge if, say, Nick Keiser or Ambika Kamath or whoever would be better off spending more time responding to PubPeer comments, or writing Kate Laskowski-style blog posts. Rather than, or in addition to, privately assisting with journal investigations, rethinking their own research programs, writing up new papers, doing teaching prep, or any of the thousand other things people do with their lives. And I think it’s a little weird for an outsider (you) to want to debate with another outsider (me) what’s in the best interests of Pruitt’s co-authors.
“I also see no issue with publicly discussing published works.”
Don’t see why you’re bringing that up. Seems like a bit of a straw man to me. Can you quote any of Pruitt’s co-authors saying “we don’t want our papers discussed in public”? (And no, just because some of Pruitt’s co-authors haven’t responded to PubPeer comments in any public forum in the way that you personally would’ve liked doesn’t mean that any of them don’t want their work discussed publicly.)
“My point about flippant and combative responses was meant in general,”
I personally don’t find that first Nick Keiser comment flippant. As for the second one, with respect I think you need to put yourself in Nick Keiser’s shoes. In various public and non-public fora, Nick (and other Pruitt collaborators and trainees) has been anonymously and baselessly accused of all sorts of bad behavior. Are those accusations a minority among all the public and private commentary about #pruittdata? Yes. Is it nevertheless a very natural human reaction to be hurt by those comments, and to be very sensitive to anything that even hints that Pruitt’s collaborators and trainees are anything other than the biggest victims of #pruittdata? Yes.
I mean, think about it: as a Pruitt co-author, you’re already spending a *lot* of time and emotional energy dealing with the #pruittdata fallout. And now somebody says, or hints, that you ought to be doing even *more* to deal with the fallout. How would you feel?
“My point about flippant and combative responses was meant in general, not about the example by Dr. Kamath specifically.”
Then why link to a comment from Ambika Kamath to support your point?
I hope you aren’t claiming anything “in general” (your words) about how Pruitt’s co-authors have behaved based on that one understandably-testy comment from Nick Keiser, are you? I’m sure you wouldn’t want me generalizing about PubPeer commenters, or anonymous commenters more generally, just based on the very worst minority of comments I’ve seen, would you?
I think we’ve both said what we have to say, and I doubt that further conversation would be productive. I think it’s clear the points on which we agree, and those on which we’ll have to agree to disagree. I’m also increasingly uncomfortable with this entire topic of conversation, which has become focused on the behavior of specific individuals rather than on broader issues of scientific interest. Thank you again for taking the time to comment. I consider this subthread closed now.
A reminder that it really is exceptional just how much and how fast Pruitt’s citations have dropped:
Speculating, I wonder if the reason why Lönnstedt is still being cited reasonably often is that, while highly publicized, her case wasn’t nearly *as* publicized as #pruittdata. Presumably that’s in part because Lönnstedt herself was a grad student with “just” a few dodgy papers, not an already high-profile mid-career researcher with dozens of dodgy papers. The news about Lönnstedt was *just* about her papers. The news about Pruitt was also about *him* (*the* Jonathan Pruitt, famous behavioral ecologist), and so was much bigger news.
I also wonder if it’s to do with coherence of different research subfields. “Behavioral ecology” is a reasonably well-demarcated subfield into which Jonathan Pruitt’s work clearly falls (without also falling into any other subfields, save maybe arachnology), and with which lots of researchers self-identify. So the big news about #pruittdata quickly made it onto the radar of everyone in behavioral ecology, which is pretty much all the people who would ever cite Pruitt. Is the same true for Lönnstedt’s work? Is there a coherent, well-demarcated subfield of, say, “microplastics research” with which lots of people self-identify, and into which Lönnstedt’s work clearly falls without also falling into any other subfields? I don’t think so, but it’s far from my area of expertise so I don’t really know. What do others think?
A correspondent reminds me of a paper I’d seen but forgotten: https://www.sciencedirect.com/science/article/abs/pii/S0048733317301154
Broadly speaking, the fact that Pruitt seems to have experienced a bigger drop in citations than Lönnstedt is consistent with previous experience. That paper finds that eminent scientists who commit fraud (as Pruitt is believed by many to have done) experience a larger drop in citations than do less-prominent scientists who commit fraud.
Is there no news about an investigation of all this at McMaster University?
I know that the McMaster investigation is ongoing.
I heard a while back that Tennessee was investigating Pruitt’s PhD work, but I don’t know anything about the status of that investigation.