Friday links: Jonathan Pruitt retraction fallout, AE “malpractice” at JTB, and more (UPDATEDx9)

Also this week: a fake Science paper replicates (?!), and more. Most of it not good. Sorry, it’s been that kind of week.

From Jeremy:

So, I guess we’d better talk about what some are calling #PruittGate (EDIT: and others are calling #pruittdata)…Here’s a statement from Jonathan Pruitt’s past students, postdocs, and collaborators regarding recent retractions of three of his papers, and the serious questions that have been raised about many others (including at PubPeer). It’s unfortunate that so much of the work of correcting the scientific record is falling on the shoulders of Jonathan Pruitt’s past students, postdocs, and collaborators, when they’re not the ones who introduced the errors into the scientific record. They didn’t collect the raw data that’s been questioned; Jonathan Pruitt did (and as an aside, note that not every paper on which Jonathan Pruitt is a co-author contains data he collected). It’s to their great credit that they’re taking on this unasked-for burden of correcting the record. Editors at various journals, and other people, are also investigating. I’m glad to see that there have been so many public expressions of support for Pruitt’s former students, postdocs, and collaborators, to help them get through an extraordinarily difficult time. Kate Laskowski, one of the collaborators in question, has a blog post detailing the full backstory of how the initial retraction came about. You should definitely read it. It’s an alternately fascinating, heartbreaking, and horrifying deep dive into how you find out that the data in your own paper can’t be trusted. And here’s the latest Retraction Watch story. Discussion of the case is now all over Twitter, and some discussants are departing from the diplomatic wording of the previous links to speak bluntly. If after reading up on this you need a laugh (because the alternative is to cry…), here you go. 🙂 😦

[pause, deep breath] Ok, I don’t think it’s really a secret anymore, so I may as well tell you that I’ve been involved in the investigation. After the initial retraction from Am Nat, at EiC Dan Bolnick’s request I took a close look at the datasets from Pruitt’s other Am Nat papers. Dan asked me to do this because I’m on the Am Nat editorial board and reasonably quantitatively sophisticated, but I’m not a behavioral ecologist, hadn’t handled any of Pruitt’s papers, and had never met Pruitt. So I came into this with fresh eyes. I started with Pruitt’s Am Nat papers, but then expanded out to other papers because I felt I needed to understand as well as possible what sort of irregularities there might be in the Am Nat papers. I spent something like 20-30 hours spread over several days, looking carefully at the Am Nat papers and about a dozen other haphazardly-chosen papers for which data were available on Dryad. Like other folks, I too found numerous irregularities and possible irregularities in datasets that Jonathan Pruitt collected (I didn’t find any irregularities in the datasets I looked at that he didn’t collect). Some of those irregularities are…let’s say very difficult to explain via normal biology, data collection, or data processing. Which is why they need investigation. So, as others have done, I passed on my findings to the EiCs of the journals concerned. Those EiCs in turn have done their jobs under COPE guidelines, which include alerting the Academic Integrity officers at Jonathan Pruitt’s current and former institutions. It’s obviously not my place to determine the root causes of any data irregularities. I continue to pitch in on the ongoing collective effort to identify data irregularities and correct the scientific record.
I hasten to add that others have spent far more time on this than me, for which they deserve a ton of credit. Am Nat EiC Dan Bolnick has been a sort of unofficial coordinator of these efforts. Here’s his blog post on the current state of play, and his detailed timeline of events. There is now a public view-only spreadsheet that a number of people (including me) are working together to populate, compiling information on the current status of every one of Pruitt’s papers–who has looked at the data, whether any concerns were found, what action has been taken, etc. A key goal of this spreadsheet is to let others know which papers have irregularities, and which don’t. That way, hopefully no one feels they have no choice but to throw up their hands and disbelieve every single paper ever co-authored by Jonathan Pruitt or anyone he’s ever worked with. If you have information that you think should be added to the spreadsheet, please email Dan Bolnick (daniel.bolnick@uconn.edu). Please be patient with all the individuals and organizations working to correct the scientific record and sort all this out; these things take time. Please trust that everybody concerned is doing as much as they can, as fast as they can. For those of you who are wondering how so many data irregularities could possibly have gone unnoticed for so long, allow me to confirm from personal experience what others have already said: most of these irregularities are only easy to spot once you know what to look for. They’re mostly not the sort of thing you’d be likely to notice if you were doing ordinary data analyses. None of Pruitt’s collaborators deserve any blame for not noticing these irregularities sooner. (UPDATE: And if you need further confirmation of that, Erik Postma and the other folks who were the first to start looking into these data did so not because they stumbled across irregularities in the course of an ordinary data analysis, but because they were tipped off by an anonymous junior scientist with inside information. /end update) Finally, via Dan Bolnick: Jonathan Pruitt is currently in the field with family and friends; he’s taking steps to support himself. As others have already said better than I could, everyone involved is human, including Jonathan Pruitt. There’s a personal side to all of this. When he returns from the field, I hope that Jonathan Pruitt will find the strength to provide a full explanation for others to hear.

UPDATE: Science has a story on the situation that includes some quotes from Jonathan Pruitt.

UPDATE: Joan Strassmann comments. So does Terry McGlynn.

UPDATE: And a second, very good post from Joan Strassmann, asking “what’s the alternative to trusting your collaborators?” (answer: there isn’t one) (UPDATE #5: to clarify something that came up in the comments, the part of Joan’s post that I really like is the bit about how we have no choice in the end but to trust our collaborators. As discussed in the comments, I can appreciate why Joan says what she says about Jonathan Pruitt’s publication rate–it’s a line of thought I’ve heard from a couple of other people–but I don’t agree with that bit. Meaning, I don’t think it would be a good idea to have blanket suspicion of the validity of anyone’s scientific work purely on the grounds that someone publishes “too often” in selective journals.)

UPDATE #6: Behavioral ecology graduate student Alexandra McInturf comments. Eloquent piece.

UPDATE #7: Globe and Mail story. Not much new information.

UPDATE #8: And here’s the Nature story. New quotes from some of the people leading the collaborative investigation into the integrity of Pruitt’s data. No new quotes from Pruitt, who told Nature he would comment to them but then never did, and didn’t respond to follow-up emails. As Dan Bolnick says in the linked piece and others have said elsewhere, this lack of response from Pruitt is totally inadequate. A bunch of his former students, postdocs, and collaborators, as well as other concerned parties, have set aside their own research and other obligations and are knocking themselves out trying to correct the scientific record. Ghosting them because “you’re in the field” just does not cut it. Whatever Jonathan Pruitt is doing in the field, it’s not nearly as important as sorting out this mess. (And yes, I know he may well have received legal advice to stay mum. If so, he’s of course entitled to do what’s in his own best legal interest, just like anyone else is. Just speaking generally, I’m glad to live in a world in which people can do what’s in their own best legal interests. But it’s not inconsistent to also lament that, sometimes, doing what’s in your own best legal interests means leaving other people with burdens that they don’t deserve to bear.)

UPDATE: Leticia Aviles on why the mounting problems with papers for which Jonathan Pruitt collected the data shouldn’t cause you to question/doubt/ignore all research on social spiders or animal personality. I confess I’m a little surprised this needed saying, though it’s really not my field. I’m now thinking back to my old post on how my side won the “microcosm wars”, without me even noticing. Now, I wouldn’t criticize anyone working on social spiders or animal personality who is worried about the broad perception of these topics. It’s only natural to be a little worried about that in the wake of #pruittdata! But I do wonder a little how well-founded that worry is. Imagine you polled behavioral ecologists, or ecologists and evolutionary biologists more broadly, on whether #pruittdata has changed their views on social spider research, or on animal personality research. What fraction would say “I now have a lower opinion of all research on social spiders” or “I am now skeptical of all animal personality research”? I feel like it would be a small fraction; am I wrong? What do you think?

An editor at JTB has been found to have committed “editorial malpractice”. Click through and read the whole thing. It involved repeated, appalling misconduct at every stage of the review process, with the goal of boosting the editor’s publication and citation counts. Reading it, I was left wondering a little why some of the abuses weren’t caught sooner.

Also this week in wild stories that I would discourage you from overgeneralizing from, because I doubt that they tell us much about any broader trend.

Big new meta-analysis of 492 studies finds that (i) interventions to educate people about, and reduce, their implicit biases have little effect on measures of implicit bias, and that (ii) changes in measured implicit bias have no effect on measures of either explicit bias or actual behavior. I’m still mulling over what conclusions/implications to draw from this.

Many of you will recall that, a few years back, Michael LaCour faked the data in a high-profile Science paper purporting to show that prejudice can be appreciably reduced for months just by a single conversation with canvassers. The political scientists who discovered the fakery have been repeating the study and finding some success, though it’s still early days.

I need a drink.

40 thoughts on “Friday links: Jonathan Pruitt retraction fallout, AE “malpractice” at JTB, and more (UPDATEDx9)”

  1. Quite sad news all around, though hopefully this is evidence of our systems working to correct these things. I am very grateful for your *tremendous* efforts in trying to correct all of these things while still remaining compassionate.

    • “hopefully this is evidence of our systems working to correct these things.”

      Yes, this is absolutely a case of the system working as it should. It’s also a reminder that “system” here means, in large part, “good people who do the right thing”.

  2. A few further, broader thoughts inspired by the Pruitt case:

    -this case illustrates one virtue of open data. This case began when someone downloaded some of Pruitt’s data from Dryad and noticed irregularities. As others have pointed out on social media, it’s a striking coincidence that the initial retraction in this case was from Am Nat, because Am Nat led the push to found Dryad.

    -having said that, I would not expect the rate at which problematic data are detected to increase all that much now that open data is routine. That’s for two reasons. One, for most papers hardly anybody ever looks at the raw data, even when those data are on Dryad or Figshare or wherever. Two, as noted in the post, unless you’re looking at the data *for the purpose of trying to find irregularities*, you’re not likely to notice irregularities. At least, not the sorts of irregularities that have been found in a number of Pruitt’s datasets.

    -which leads to another broader point, one that Kate Laskowski makes well. Science can’t work without trust. Ordinarily, we scientists all default to trusting that everybody’s data is on the up-and-up. That has to remain the default, because otherwise science ceases to function. It would be a tremendous waste of effort, with massive opportunity costs, if everybody defaulted to assuming that everybody else’s data was mistake-laden or fraudulent until they convinced themselves otherwise. Which, yes, means that sometimes problematic data are going to get published, and go undetected for a long time (maybe even forever!). Put another way, there’s some optimum rate at which problematic data will get published, *and it’s not zero*. Dan Davies is good on this in the context of financial fraud, in his recent book Lying For Money. He calls it the “Canadian paradox”. Why is there a decent amount of financial fraud in a high-trust society like Canada? Shouldn’t Canadians stop trusting each other in light of those frauds? To which the answer is, no, they shouldn’t. Those frauds happen in part *because* Canada’s a high-trust society, and a low-trust Canada would have less financial fraud–but a low-trust Canada would also be a lot poorer. The level of financial fraud in Canada is an “equilibrium phenomenon”, as Davies puts it, and the Canadian equilibrium is a good one, not a bad one. The same argument applies in the context of data irregularities in science, including both irregularities that arise from mistakes, and those that arise from misconduct. (And yes, I know some folks are working on an R package to automate checking for the sorts of irregularities that are cropping up in Pruitt’s datasets. I appreciate what they’re trying to do. But personally I don’t think “checking for bad data” is a task that can be automated to any meaningful degree. There are far too many ways in which data can be bad.)
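    (To make the “automation only gets you so far” point concrete, here is a minimal, purely illustrative sketch of the kind of single, narrow check such a tool might run: flagging long runs of identical consecutive values in a column. It’s in Python rather than R, it is not the package mentioned above (which I haven’t seen), and the column name, threshold, and data are all made up. The point is that each check you can automate covers only one of the very many ways data can be bad.)

```python
import pandas as pd

def flag_repeated_runs(series: pd.Series, min_run: int = 10):
    """Return (start_index, run_length) pairs for runs of identical consecutive
    values at least min_run long. A long run isn't evidence of anything by
    itself; it's just a prompt to look at the raw data more closely."""
    runs = []
    values = series.tolist()
    start = 0
    for i in range(1, len(values) + 1):
        if i == len(values) or values[i] != values[start]:
            if i - start >= min_run:
                runs.append((start, i - start))
            start = i
    return runs

# Made-up example: a behavioral score that repeats exactly 12 times in a row.
df = pd.DataFrame({"boldness": [3.2] * 12 + [1.1, 2.4, 3.0, 2.2]})
print(flag_repeated_runs(df["boldness"]))  # [(0, 12)]
```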

    • I strongly agree. Another way to look at it: at a guess, there are conservatively at least 20,000 ecology/evolution researchers worldwide (9,000 members of ESA alone, plus other countries and evolutionary biologists). So this is basically a 1 in 20,000 or 0.005% scenario. Almost any energy spent protecting against something that rare is going to be wasted energy. (Although I’m sure it doesn’t feel like that to the people affected right now.)

  3. Have to admit, I’m pretty disappointed in your take on this story. It seems obvious, and there appears to be overwhelming evidence, that Pruitt cheated. There should be consequences for his actions and we should all be hoping he can get the help he needs to get through this very difficult time.

    On a blog that I regularly follow, and whose in-depth discussions of mental health, the stress of academia, and the pressure to publish or perish I appreciate, I expect better than gossipy hashtags and links to tweets from random people mocking and attacking Pruitt. Rather than make a public mockery of him, shouldn’t we use this time to look inward and talk about how his alleged actions are the result of a deeply flawed system?

    Maybe the 2020 reader survey results are related to this unnecessary addition of drama seeming to creep into this blog?

    • I confess I’m confused by your comments. Perhaps you can clarify? You say that there’s “overwhelming evidence” that Pruitt cheated, and yet I think you’re also complaining about me linking to someone who’s said exactly the same thing on Twitter. Yes, the person I linked to on Twitter made his statements about the evidence for fraud in a mocking tone; I don’t endorse that tone, as I would hope the post made clear. But one of the roles for this post is to provide a summary of the current state of affairs, which in my view means taking passing note of the range of views being expressed, even if I don’t always endorse those views or the tone in which they were expressed.

      You say that there should be consequences for his actions, and that he should get the support he needs to get through this difficult time. I agree, and said so in the post. So again, I don’t understand why you’re disappointed in the post.

      As for “links to tweets from random people”, most of the links in the post are to tweets and blog posts by people who are directly involved in the situation–Dan Bolnick, Kate Laskowski, Ambika Kamath, and so on. None of them has mocked Jonathan Pruitt. So I’m afraid I’m confused. Perhaps you can clarify exactly which of the individuals linked to in the post are “random people” who are “mocking and attacking” Pruitt?

      There are two links that go to threads of people (including some of those who are doing the most work sorting out this situation, such as Noa Pinter-Wollman) making silly jokes about the situation, which seems to me to be a very human response to stress.

      As for whether Pruitt’s alleged actions are the result of a “deeply flawed system”, I’d merely note that everyone involved in investigating this situation works in the same system and is subject to the same incentives. Just speaking generally, I agree with this post: https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/

    • Re: the reader survey results being related to “unnecessary addition of drama creeping into this blog”, you may be unfamiliar with, or have forgotten, many of our old posts, which predate the drop in our year-on-year traffic that seems to have started about 11 months ago. So while I’m not sure exactly what you mean by “drama”, if you mean “controversies about the behavior of individual scientists that are currently being widely discussed on social media”, I can tell you that we’ve long posted about “drama”, among many other things. Here’s a small sample, all of which are more than a year old:

      Evolutionary biologist Francisco Ayala resigns from UC Irvine for serial sexual harassment

      Chill out about Jingmai O’Connor’s criticism of bloggers (UPDATEDx2)

      Guest post: The day I broke some twitter feeds: insights into sexism in academia, Part 1

      I of course appreciate that some readers don’t like those posts; others do. There’s no pleasing everyone, so I’m sorry if the first topic in today’s linkfest isn’t your cup of tea. But as you’ve no doubt noted from seeing other comments about this linkfest entry here and on social media, other folks–including many of those principally involved in sorting out the situation with Jonathan Pruitt–appreciated this linkfest entry. So again, sorry, all I can do is the best I can, recognizing that it will never please everyone.

      Getting back to the hypothesis that our recent choice of post topics has driven our traffic down, as I said in my post on the reader survey results, some readers no doubt have the subjective impression that we post about “drama” more frequently than we used to, and are turned off as a result. But there’s nothing we can do to address that impression, because it’s not grounded in anything objective. As illustrated by the fact that many readers have the opposite subjective impression–that we’re always posting on the same old stuff.

      EDIT Also, regarding the effect of today’s post on our traffic: this post is drawing more traffic than any post of ours has for months. Not that that means it’s a good post, or a bad one, of course. Sometimes bad posts draw traffic, and sometimes good ones don’t. But it does rather cut against your suggestion that the reason our traffic is down lately is because of posts like today’s (whatever “like today’s” might mean).

  4. I read Laskowski’s blog about this last night and I must say I was amazed by her demeanor and strength in handling this sort of situation. As a first year TT assistant professor, I shiver thinking about this happening with papers I have worked on with collaborators, let alone papers in such well respected journals. As we all know, there is enough pressure on academics to begin with without the fear of your credibility and past labors falling apart around you. I’m so glad to see Laskowski getting well-deserved support through this process.

    For Pruitt, there is little question that some form of scientific ethics was violated (at least from what I’ve read) and actions should be taken to right these wrongs. Yet as you mentioned Jeremy, everyone involved in this is human, and regardless of how angry something like this might make us, we must remember that these sorts of situations can be very mentally dangerous for the responsible party. I hope he gets the help he needs.

    It’s really an unfortunate circumstance all around but as previous comments have mentioned, at least a takeaway is that open data is truly an accountability system for both publishers and scientists.

      • I agree as well, but I am confused then by your link to Joan Strassmann’s post, which does its best to judge and condemn Dr. Pruitt as a rather diabolical villain (well before all of the information has been analyzed). She does so by citing his “meteoric rise” (I agree that is an unfortunate turn of phrase) as evidence that his entire body of work must be dismissed as fraudulent. She even compares his number of publications to her own in a way that, to a lay person like myself, smacks more of envy/jealousy than actual scientific concern. If she couldn’t achieve his success in the same amount of time, he was clearly cheating, right? Her near constant snarky jabs are very off-putting.

        I agree that any data manipulation or modification brings a stain on your entire community, but so does the naked envy, jealousy and desire to debase a fellow scientist I have seen on display. It makes me think so much less of a community of people whose work I used to enjoy discovering.

      • Links don’t always equal endorsements. I sometimes link to things that I think are worth reading, even though I don’t agree 100% with them.

        Personally, I have mixed but mostly negative feelings about the notion that anybody who publishes sufficiently often in good journals should be regarded with at least slight suspicion, just for that reason alone. I emphasize that I’m speaking generally here, not specifically about the Pruitt case. (Because I think the main reason to talk about topics like this is to identify more broadly-applicable lessons…) On the one hand, one can certainly point to historical cases of serial fraudsters whose track records of rapid publication look in retrospect like red flags (was it Schon who was publishing a paper every 8 days or something, or am I misremembering?). I can see how it’s tempting to look at those historical cases and think “We should be suspicious of anyone who publishes too often in good journals”. On the other hand, a lot hinges on what rate of publication is “sufficiently fast” to arouse suspicion. For instance, our own Meghan Duffy has published 12 papers in the last year, during her sabbatical. Many are in selective journals. Should she come under suspicion? As another example, everybody who’s ever won the ESA Eminent Ecologist Award has spent decades publishing often in the top journals in the field. Should they all come under suspicion?

        I think what’s going on in Joan’s post is that, once serious irregularities are discovered in several papers by the same author, there’s a tendency to then retrospectively read everything else about that author and their work as either a warning sign that went unnoticed, or as the surprising *absence* of a warning sign (e.g., I’m thinking of various historical cases of serial scientific fraud perpetrated by people who, before the fraud was discovered, were popular with their colleagues. Those colleagues were surprised that such a nice person could also be a serial fraudster.) Duncan Watts’ book Everything Is Obvious (Once You Know The Answer) is good on this in many different contexts.

        My other reason for being generally skeptical of any purported “warning flags” of sloppy or fraudulent science is that it smacks of profiling to me. Trying to use profiling of any sort to flag potential cases of anything is a terrible idea when that thing is rare (as sloppy and fraudulent science are). You’re mostly going to turn up false positives and waste a lot of effort following up those false positives. And if the thing you’re trying to detect is some form of bad behavior, you’re going to unfairly stigmatize a lot of innocent people who fit the profile but never engaged in the bad behavior.
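        (To put toy numbers on the base-rate problem: suppose, purely for illustration, that 1 in 1,000 researchers commits serious fraud, and that a “publishes suspiciously often” profile flags 90% of fraudsters but also 5% of honest researchers. These figures are invented, not estimates. Even so, almost everyone the profile flags is innocent, simply because honest researchers vastly outnumber fraudsters.)

```python
# Toy base-rate calculation. All three numbers are invented for illustration.
base_rate = 0.001            # assumed prevalence of serious fraud
sensitivity = 0.90           # assumed P(flagged | fraudster)
false_positive_rate = 0.05   # assumed P(flagged | honest)

p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
ppv = sensitivity * base_rate / p_flagged
print(f"P(fraudster | flagged) = {ppv:.1%}")  # ~1.8%, i.e. ~98% of flags are false positives
```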

        As to whether National Academy of Sciences member Joan Strassmann is jealous of Jonathan Pruitt…two things. One, the thought that, in retrospect, maybe we should’ve been suspicious of the validity of Pruitt’s data given how often he publishes, is a thought I’ve heard from a couple of professionally-successful ecologists whom I know well, and who are *definitely* not jealous of Jonathan Pruitt! So it’s false to assume that anyone who would think that must surely be jealous of Jonathan Pruitt. Two, my own admittedly-anecdotal experience as a blogger is that people who try to guess my attitudes from reading a single blog post are usually wrong. They’re usually reading into the post something that wasn’t there. In fact, they often read into the post something the post *explicitly denied*–there’s an example earlier in this comment thread. So if Joan Strassmann’s post comes off to you as jealous, then with respect I think you might want to reflect on whether your reading perhaps says more about you than about her. I say this as someone who doesn’t know Joan at all.

      • Twelve papers isn’t really excessive for an active PI with students and collaborations. Chou, the one in the JTB/Bioinformatics case, was publishing an average of over 30 papers per year, with nearly 20 already in 2020. Still *possible*, but a little fishy, especially when you see most have 200+ citations within two years of publication.

  5. I am working with others to have a fraudulent paper retracted from a Taylor & Francis journal. The paper concerns the breeding biology of the Basra Reed Warbler (an endangered bird species which breeds only in Iraq); see https://osf.io/5pnk7/ for background.

    It’s a clear case: the first author has admitted that he is willing to retract the study, and there are no raw research data. The case has remained unresolved for almost 5 years, because none of the key players wants to discuss the whereabouts of this non-existent set of raw data, in the widest possible sense.

    It is therefore very encouraging that many parties in the Pruitt case are right now scrutinizing the available data (great that these data are available!!).

    It is excellent that several high profile biologists are showing their students and the rest of the world that they are indeed leaders in the field, and thus do not hesitate to retract papers when the underlying dataset is unreliable. Great!!!

    • Agreed, this is a case in which the system is working as it should, thanks to a lot of good people who are willing to take on a lot of unasked-for and stressful work. Whether this example will lead to better handling of other similar situations by other people in future, I don’t know. It’s just one example, there’s probably a pretty low limit to how much it will change the world for the better. But all anybody can do is the best they can. So personally, I’m just trying to do my bit as best I can, take heart that others working on this situation are also doing the best they can, and leave it at that. I can’t control whether everybody else in the world, now and in future, is going to do the best they can in situations like this one, so I try not to worry about that. I just worry about what I can control.

      • Thanks for the friendly responses. Spending lots of time scrutinizing files with raw research data from Jonathan Pruitt’s papers is not a waste of time. In my opinion it is the other way around: checking these files for unexplained anomalies is a key element of conducting science. It is about ensuring that the scientific body of knowledge (aka papers in peer-reviewed journals) does not get polluted. Besides, the people who are right now spending so much time scrutinizing these files can publish papers about their findings, together with recommendations. It is of course an unexpected event, but dealing with unexpected events is common practice for ecologists who are doing field work (in remote areas).

  6. Seems like we should be wary of trusting coauthors with self-reporting and checking the data, particularly data that is not publicly available.

    For example, see the recent PubPeer post highlighting potential problems in a paper currently shown as cleared in the Google Doc with the comment “Data not collected in Pruitt’s lab”:

    https://pubpeer.com/publications/9BBCDE0A1E31EC56052A8C2937EE8E

    It follows that one could still potentially manipulate data even if they weren’t involved in data collection.

    • Are the people who are taking the lead on checking these data always going to be perfect and never miss anything of potential concern? No. That’s one reason why they’ve invited others to notify them of issues, and why they’re keeping an eye on PubPeer. There’s even a column in the spreadsheet for “please flag this paper if concerns have been raised in a public venue”. I really do not think we need to worry about any of Pruitt’s co-authors engaging in a cover-up! After all, several of them have already come forward with problems they discovered in data that weren’t publicly available!

      Kate Laskowski’s remarks about trust being essential to science also extend to investigations of breaches of that trust. At some point, you are *always* just going to have to trust *somebody*.

  7. Pingback: Unordered thoughts on the Pruitt situation | Small Pond Science

  8. Took a look at the J Theor Biol editorial that you linked to. As you said, it seems surprising that such over-the-top misconduct by a handling editor could carry on for so long without being noticed.

    I was also a bit puzzled that the fairly detailed account in the editorial studiously avoided naming anyone. After all, part of what this is all about is a sort of reputational credit as a good potential collaborator; that’s one of the currencies of the research world. I suppose you could work out who the person was within a couple of hours by listing people who had come off the JTB editorial board recently and finding out which of them had developed a “well-known algorithm” that they wanted to increase citations to. But still, I couldn’t quite see why the editorial was so cautious about naming names. Maybe there’s some sort of legal issue…

    • It only took about 15 minutes to figure out who it was, by checking the Wayback Machine and seeing who was on the editorial board then but not now. There are only five who left, only one of whom has 200 papers since 2014, many with 200+ citations in just a couple of years. The algorithm is “Chou’s five steps rule”.

  9. With all this bad news, let’s not forget about the coronavirus. One bit of good news there, perhaps, is for biology teachers (including uni and grad). I think this outbreak could provide valuable material and timely data for teaching core concepts like R0, transmission chains, genomes, phylogenies & more. Students can even analyze and/or interpret new incoming data themselves to draw their own inferences. To see what I mean, check out the superb multi-slide presentation of concepts, data, and inferences based on the first 42 sequenced genomes of the virus, produced and made freely available by the @nextstrain team here:
    https://nextstrain.org/narratives/ncov/sit-rep/2020-01-30

  10. A very good and balanced post. One comment on the update:

    “And if you need further confirmation of that, Erik Postma and the other folks who were the first to start looking into these data did so not because they stumbled across irregularities in the course of an ordinary data analysis, but because they were tipped off by an anonymous junior scientist with inside information.”

    This is not very meaningful by itself. What is ordinary data analysis of someone else’s research data in the first place?

    Can you tell us what kind of inside information the anonymous junior scientist had? Did they look into the data and see something suspicious, or did they have first hand information about data collection practices? Or something else?

  11. Pingback: Trust your collaborators? | Sociobiology

  12. “I don’t think it would be a good idea to have blanket suspicion of the validity of anyone’s scientific work purely on the grounds that someone publishes “too often” in selective journals.”

    Agreed. There are very, very few gifted folk. But for the VAST majority, a prolific publication rate likely means either (a) the author(s) have not taken sufficient time to consider the logical and conceptual issues under scientific scrutiny and/or (b) the audience has failed in a similar manner.

    A generality — more thought and less production would greatly benefit the social sciences (among others). For example, psychology is a conceptual quagmire at present — both with regard to its “mission” as well as its approach (to what exactly? The effects of mind on behavior? What then is a mind?)

  13. Pingback: Some data and historical perspective on scientific misconduct | Dynamic Ecology

  14. Pingback: The history of retractions from ecology and evolution journals | Dynamic Ecology

  15. Pingback: Scientific fraud vs. financial fraud: the “Canadian paradox” | Dynamic Ecology

  16. Pingback: Here’s the letter Jonathan Pruitt’s lawyers have been sending to journals and his former collaborators #pruittgate #pruittdata | Dynamic Ecology

  17. Pingback: Friday links: another retraction for Jonathan Pruitt, journal cover art, and more | Dynamic Ecology

  18. Pingback: Friday links: #pruittdata rolls on, philosophy vs. Jeremy’s papers, Kate Winslett vs. Mary Anning, and more | Dynamic Ecology

  19. Pingback: One year into #pruittdata, how have citing authors reacted? Here are the numbers. | Dynamic Ecology

  20. Pingback: Friday links: RIP Philip Grime, the end (?) of #pruittdata at Am Nat, negative logging, and more | Dynamic Ecology

  21. Pingback: Schneider Shorts 7.05.2021 – For Better Science

  22. Pingback: Friday links: tell me again what “follow the science” means, another serious fraud accusation in ecology, and more | Dynamic Ecology

  23. Pingback: A bit of #pruittdata news: Jonathan Pruitt’s doctoral dissertation has been withdrawn | Dynamic Ecology

  24. Pingback: #pruittdata latest (and last?): Jonathan Pruitt resigns from McMaster University | Dynamic Ecology
