Writing in Science, Lambert et al. report that they were unable to recreate the results of Scheele et al., who attributed declines in many amphibian species to chytrid fungus based on a synthesis of various lines of evidence. Scheele et al. reply. I’m interested in this exchange itself. But I’m more interested in a broader issue it raises, regarding how to structure comment-reply exchanges in the scientific literature.
Ok, regarding the exchange itself: It’s striking because it’s not to do with the interpretation of the data (as is usual for most comment-reply exchanges), but with what the data actually are and whether the analyses are reproducible. Now, I am far from an expert on amphibian conservation; I’m merely a curious bystander here. So while I certainly have my own opinion on who gets the better of the exchange, it’s probably not worth much.*
(UPDATE: Scheele et al. have updated the data file and associated references on Science’s website. The original data file is no longer available, which seems like bad practice to me.)
I’m more interested in this exchange because it raises a broader issue. Reading the exchange, it seemed to me that Scheele et al.’s reply often talked past Lambert et al.’s comment rather than directly addressing it. Or at least, I found it hard to match up Lambert et al.’s criticisms with Scheele et al.’s replies to those criticisms. And there’s one point at which Scheele et al. reply to a criticism that Lambert et al. not only didn’t make, but explicitly disavowed.** I think it’s bad when authors reply to a comment on their work by talking past the comment. In my admittedly-anecdotal experience, that happens fairly often in comment-reply exchanges. Authors address the questions they think were raised, or the questions they wish had been raised, rather than the questions that were actually raised. It’s easy to see how this can happen. As an author, it’s often annoying when somebody comments on your paper. It’s easy for that annoyance to cause you to misunderstand the comment. But whatever the reason for authors talking past commenters, it confuses scientific debates when it happens.
So, how should comment-reply exchanges in journals be structured? The current approach at many journals is that authors are entitled to say more or less whatever they want in reply to a comment on their work. Comments and replies do get peer-reviewed, but at the end of the day most editors prefer to publish comments they deem meritorious, let the authors reply however they want, and let readers make up their own minds. There’s certainly a lot to be said for that approach. It’s loosely analogous to how, at a criminal trial, the defendant is entitled to defend themselves however they want (within various legal bounds). The defendant isn’t obliged to respond point-by-point to every claim made by the prosecution.
But there’s an alternative possibility, to which I confess I’m partial: structure comment-reply exchanges like peer reviews and replies to reviews. That is, the comment should ideally be structured as a series of numbered points, and the reply has to respond point-by-point to the comment, with the reply to each point preceded by an extended quotation of that point. Just like how, when you reply to peer reviews, you are expected to quote each point in the review before replying to it, you can’t skip replying to any points (on pain of seriously annoying a good editor), and you can’t reply to any points that the reviewers didn’t actually make (again, on pain of seriously annoying a good editor). Quoting each point before replying prevents authors from ignoring or talking past comments. It also helps prevent authors from accidentally misreading comments on their work. If you want readers to make up their own minds, well, shouldn’t you put them in the best possible position to do so, by providing them with a clear series of points and counterpoints?
So I think commenters should be obliged to structure their comments as a series of numbered points. And I think any authors who don’t reply to every point by quoting it and then providing an on-point reply should have their reply rejected. In that case, the comment would be published along with a note from the editor reading “The authors were offered the opportunity to reply point-by-point to this comment. They chose not to do so.” Again, I’m assuming here that any comments that don’t merit a reply will be rejected by the editor, without being forwarded to the authors. Much like how a good editor will do everything possible to ensure that authors receive only thoughtful, professional peer reviews.
Like I said, I can see good arguments for both these possibilities. There’s a bit of a tension between allowing authors to defend their work however they see fit, and ensuring that comment-reply exchanges are maximally informative for readers. How do you think this tension ought to be resolved?
*FWIW (not much), I find Lambert et al.’s comment more convincing than Scheele et al.’s reply. Indeed, it seems to me that Scheele et al. more or less grant Lambert et al.’s criticisms; Scheele et al. just see those criticisms as features rather than bugs. But again, I’m not an expert here!
**Lambert et al. go out of their way to emphasize that they’re not criticizing the use of expert knowledge in conservation, writing “We are not critiquing the importance of expert opinion, but failing to clearly report how and when expert opinion is used impedes conservation efforts.” In reply to which, Scheele et al. write, “Lambert et al. treat expert knowledge…as unreliable (at best) and suspicious (at worst)”.
I think you raise a really important point here, Jeremy. Some exchanges I’ve been involved with in the past have been very unsatisfactory, in part because authors refused to address the major issues. In one case this resulted in the authors critiquing one of my own papers in return, which led to us responding to their critique…. A structured set of responses, as you suggest, would have certainly helped.
I do wonder about the role of editors in this, though. Two things stand out:
1. You wrote: “I’m assuming here that any comments that don’t merit a reply will be rejected by the editor, without being forwarded to the authors.” Editors acting as gatekeepers worries me, because some editors are not even allowing critiques to go to authors or be published, even when they make important points. It’s worth noting that editors are not neutral in this regard if they supported the publication of papers that turn out to be flawed.
2. How carefully are editors, or even reviewers, reading these critiques and responses? Your very last footnote (**) illustrates this perfectly. If I had been the editor I would have responded to Scheele et al. that, no, Lambert et al. did not “treat expert knowledge…as unreliable (at best) and suspicious (at worst)” – go back and rewrite this. Isn’t this an editor’s job, to ensure that the critique and response match up?
Re: your #1, at some level, that’s what a good EiC is for. A good EiC who realizes that it enhances rather than degrades the reputation of the journal if seriously flawed papers are not allowed to stand unchallenged. Any institution is going to have rules and procedures, but at some point the institution’s functioning and reputation is going to depend on good people who do the right thing.
Re: your #2, I think your suggestion complements and builds on mine. Yes, if my suggestion in the post is going to fly, the editor absolutely will need to take a strong hand (just as they do during peer review, if they’re good editors). I like the idea of a comment-reply system in which the editor can say to the authors things like you imagine the editor saying to Scheele et al. And in which the editor can say to the commenters something like “Your points 1-8 are cogent and important enough to merit a reply. Your point 9 is inappropriate for reasons XYZ. Please remove it so your comment can be published.”
*waves hand* As the Lambert of Lambert et al., I wanted to give a quick note. I’ve had the weird chance to write two rebuttal-type papers over the past year. One is this one; the other is in Conservation Biology (link below).
The editorial process was way different between the journals. In both cases, the comment and rebuttal both went to peer review, and in both cases I never saw the rebuttal to me at any stage. But at Conservation Biology, both the peer reviewer and the editors provided explicit feedback on tone and messaging. This never happened at Science. At Conservation Biology, the process was actually delayed because the editor wasn’t sure the rebuttal to us was a legitimate response to the points we made, and gave the authors a second try to actually respond to them.
I don’t know how much of that variation is journal-specific or editor-specific.
Thanks all for this discussion!
Thanks for sharing your contrasting experiences Max. My own sense is that this variation is mostly journal-specific, rather than variation among editors within journals. But I’d be curious to hear from others on this.
(replying to @lambertmr comment)
I’ve had a reply published in Science (https://science.sciencemag.org/content/360/6387/389.1), but it was to a Perspective piece, which I think has a different editor to the research articles. For this, we received good comments on tone & style from editor before publication, but I didn’t know about the second critique article, or about the commissioned reply from original authors until they were all published.
I think this is a really important area that is usually done really poorly. Most of my experiences (either as an author of a critique or of a critiqued paper) have left me unimpressed. Yet, this kind of direct dialog is important for the advancement of science.
After much thinking here is what we do at GEB:
1) When we receive a critique we assess it for the same degree of fit and novelty as any other paper. If it meets that standard then:
2) We commission a reply from the authors
3) We send the package out to review (via an associate editor). The reviewers are instructed that we can publish neither piece, both pieces or just the critique. We get the two involved parties as reviewers but we also get independent reviewers. After peer review the AEs make a recommendation (which is then reviewed and usually accepted by EiC).
Advantages of this approach:
a) from the reader’s view it all appears as one package simultaneously
b) it is assessed for novelty and importance just like any other publication
c) vague, non-responsive replies to critiques are not guaranteed publication
d) reviewers inject an independent review and can substantially improve both pieces.
In short, rather than using a very explicit mechanism (point-by-point) like you suggest, we use a robust peer review process that treats the two pieces of correspondence as a package, so everybody has the full picture to ensure responsiveness. More than once reviewers have said to publish the critique but not the response. That’s a real incentive to be constructive. We’ve also several times had a joint correspondence emerge, which I think is usually a win-win.
That’s an interesting alternative approach to get to the same endpoint I want–a comment-reply exchange that’s worth reading and that is helpful to readers.
As you say, the review and editorial process has to be robust for this to work. Which means there has to be a real possibility of rejection. Good for GEB that you’ve been willing to publish the critique and not the response, when the response was a non-response.
I like the sound of this process a lot and now I’m going to try to go find an exchange in GEB to read through it. It seems like a valuable opportunity for all authors to get their points well represented and for readers to end up with the best information.
I was surprised to hear that Lambert et al. weren’t allowed to read the reply to their comment ahead of time. Which is more often the norm?
Interesting to hear that Max Lambert and his co-authors themselves were frustrated with Scheele et al. talking past them and putting words into their mouths. I was totally unaware of that until I saw your comment Colin. As a neutral bystander, I have to say, I can totally see why Lambert et al. are frustrated.
Many of the examples I indirectly referred to of bad experiences are in “glossy” journals. They can be enormously time consuming for an author over very small points. And the processes don’t always result in robust, transparent, balanced discussions.
I could see a potential pitfall with one aspect of the GEB-style approach (present company excepted): “When we receive a critique we assess it for the same degree of fit and novelty as any other paper.” But a critique is not the same as any other paper: it’s a criticism of a paper that the editors have already decided was sufficiently novel and is a fit for the journal. So wouldn’t it get a pass there? If it’s a substantive, well-supported criticism of the science (not the scientists!), written in a measured tone, why wouldn’t it go forward for review? While an editor might not wish to publish a clutter of repetitive comments, filtering out valid criticisms on the grounds of novelty seems troublesome.
Consider for example the flurry after Science published research arguing that a bacterium could grow using arsenic instead of phosphorus: the editor’s note said that among the extensive comments received, they selected 8 to publish. In that case, the journal’s interest in novelty may have gotten ahead of the data.
Fair points. But I don’t think there are that many cases where it is as black-and-white simple as the paper being right or wrong (although Jeremy might have led with one). In my mind, if a paper was interesting enough for us to publish and we receive a critique that clearly shows or strongly suggests it’s wrong, that would be interesting enough to publish too. Plenty of other critiques are interesting as well (I haven’t done a study, but my impression is that even with that standard GEB publishes more critiques than average). Critiques that don’t disprove the paper but add a new interpretation or improve the science of the original paper are certainly also of interest. In general, bringing diverse points of view into dialogue is of value.
But so many critiques that claim to show that a paper is wrong are really just small tweaks or arguments over interpretation. Very often in these less black/white scenarios, publishing a new research paper is a better outcome for everybody than publishing a critique/rebuttal (and oftentimes the reason authors pursue it as a critique rather than an original paper is that it wouldn’t be valued very much as a stand-alone paper). I’ll stand by the notion that we only want to publish critiques that are going to be of interest to our readers – we don’t have an obligation to take every form and type of disagreement with a paper just because we published the original paper (or somebody else did – GEB is unusual in taking critiques of papers published elsewhere, although again it has to rise to the level of being interesting).
That said I would certainly agree that I’ve seen journals who filter critiques through a rather different lens (like whether it makes them look bad or not). But I don’t think the problem is that they want critiques to be “interesting and novel”.
Interesting comment on Twitter here about viewpoints versus flaws: https://twitter.com/hooliamonk/status/1242098257863548936
I have seen good point by point comments and measured responses in agreement and disagreement, although I can’t put my finger on an example. A couple of other thoughts.
Two of the problems in the Science Scheele-Lambert-Scheele exchange are inequity and time. First, the inequity: as lambertmr noted in the comments, the authors of the paper being commented on get the last word. They get to see and rebut the comment, but it’s not even-handed. Second, the time it takes to publish a formal comment seems far too long. The original articles stand for a full year or more to accrue reads and citations. Once the commenter submits a formal comment to the journal for consideration, speaking out on blogs, PubPeer, and such would seem poor form and invite rejection of the formal comment. Consider the Scheele-Lambert-Scheele timeline: time from Scheele publication to Lambert’s comment submission, 2 months; time from Lambert’s submission to Scheele’s, 1 month; time for Science to review and ponder both, almost 9 months; time from acceptance of both to publication, 1 month. Max Lambert’s ConBio example was just as slow, with a full year from comment received to publication, including 9 months between acceptance and publication. Not all journal editors exactly embrace critical conversations (see “How to Publish a Scientific Comment in 123 Easy Steps”).
Not all journals even publish letters to the editor anymore, and OA journals may demand full payment, even to point out an invalidating error. This brings further inequity, where scientists backed by monied interests would be better able to publish challenges than peers who just see something amiss.
My experience (together with Mike Hutchings) with a reanalysis of the Fraser et al. 2015 productivity-diversity paper in Science was very similar to Lambert et al.’s. We never saw the authors’ response before it was published, and when I finally saw it, their arguments struck me as evasive and kind of cynical, and my initial reaction was also – how did this pass the editor? The full story with all the references etc. can be found here: https://niinemetslab.wordpress.com/2015/12/11/new-paper-published-comment-on-worldwide-evidence-of-a-unimodal-relationship-between-productivity-and-plant-species-richness/
Re: diversity-productivity relationships and comment-reply exchanges about them, I have some relevant old posts that I’ll use this opportunity to shamelessly re-up. 🙂
That last link focuses on a different critique of Fraser et al. 2015.
Something that I think could help (and that I’ve been surprised to find isn’t routine): if a manuscript sets out to attack Bloggs et al. (20XX) for something, shouldn’t it be sent to Bloggs to double-check that’s what they actually said? And if Bloggs et al. deny they said that, then the editor should insist the manuscript proceed via verbatim quotes from Bloggs et al., not by writing out its own version of Bloggs’s opinions or arguments.
I’m not suggesting Bloggs et al should have rights to review the attack manuscript in general, much less rights to reject it, only that they should have rights not to be misrepresented.
I have been involved in this sort of thing as Editor (more than once) and author (once). As an Editor, I feel it is important to let the original author get a chance to review and respond to the comment/critique. As an author, I didn’t get a chance to rebut: when the critique (by the late Denis Owen) appeared, the journal that published it wouldn’t let me do a rebuttal, and I had to publish mine in another journal! OK, this was many years ago, and I think things have changed since then.
Going back to Jeremy’s post, I like the point by point rebuttal/critique/comment approach.
Important discussion… (I haven’t read the papers you frame the post within, and I have no amphibian expertise), but this is a really important issue generally. I understand the benefits of allowing original authors right of reply, but don’t really understand why so many journals appear to just publish that reply as standard, even when it fails to actually respond to major flaws (which seems to happen frequently). I like the approach of GEB, as described by Brian, but that does rely on the robustness of the peer review system and doesn’t seem to happen at all journals.
As a related note, what incentive is there now for authors to bother with submitting critiques to the original journal? Most critiques aren’t listed/linked to in Google Scholar, and most journals don’t provide ‘related links’ on the original paper. This means very few future readers actually find the (valid or not) critiques of older papers. From my own field, the flawed ‘global insect decline’ paper in Biol Cons is a good example. Although most scientists with relevant expertise recognised the flaws in the original paper, most (including myself) chose to highlight those flaws in other journals or popular science avenues, because the in-house journal response system rarely reaches the target audience.
And totally biased opinion, but I think that blogs are becoming increasingly more important as a venue for publishing critiques (as per our paper on benefits of science community blogging 🙂 https://royalsocietypublishing.org/doi/full/10.1098/rsos.170957).
I gather from your prior comments, Jeremy, that you staunchly support the “gate keeper” system of peer review. I realize most do, in part because this is the way things have been done for a very long time. That said, I believe it is time for science to catch up to the 21st century concerning peer review procedures.
I fully endorse open access, open data, and voluntary open review of manuscripts. Within this rubric, authors would post manuscripts and data in “online open depots” consistent with their area of specialization. Peers would access those documents and volunteer as reviewers, and those reviews would be posted online and available to anyone. Authors would respond and modify manuscripts in the open format too. Once approved, the manuscript would be published in an online, open access journal.
I understand you and many others say science is not yet at the point where such a system would operate to such an extent that we could ensure the quality of published material. I kindly disagree, and assert the quality would vastly improve.
Thank you for taking the time to comment. I appreciate your perspective, although as you guessed I don’t agree with it.
One issue with your model is that most papers would get no reviews at all, and most that did get reviews would get only very cursory, superficial reviews. As illustrated by data on post-publication review:
Also, some fraction of papers would only get reviews from the authors’ collaborators, close colleagues, and personal friends. Which aren’t the sort of reviews I personally would put much stock in.
Another problem with that model is that many people, especially early career researchers, will be reluctant to provide negative reviews.
Another problem with that model is that most people don’t want it. That of course could change in future–or it might not. Here’s some poll data on various aspects of the model you propose:
A final problem with that model is who pays for it, and how to transition from the current system to a totally different one. Many authors don’t have the funds to pay typical open access fees for even one paper/year. Here are some relevant old posts:
I’m all in favor of experimentation with alternative models of publishing and peer review. I’ve published in author-pays open access journals, and I was on the editorial board of Axios Review (https://dynamicecology.wordpress.com/2013/12/02/im-joining-the-editorial-board-of-axios-review-heres-why/). Let people vote with their feet and see what mix of outcomes emerges. And give people with different preferences different options, so that everybody can satisfy their own preferences. I don’t have a crystal ball, so I have no idea how the current period of experimentation will shake out in the long run. I suppose one possible long-term future is one not too different from the current state of affairs: the publishing and peer review model you prefer coexists with other models (both the traditional one, and other newfangled ones), as part of a diverse publishing and peer review ecosystem. If that happens, I think it would be a perfectly good outcome. But I can imagine that it might be a pretty dissatisfying long-term outcome for someone who’d like to see a rapid and total revolution.
Excellent points, Jeremy! These should be incorporated into the model I propose. One point you made I am not in total agreement with. Most, if not all, sub-disciplines in science are already highly “inbred”. By this I mean there is always a common core of persons publishing in the same journals, going to the same conferences, and regularly chit-chatting about ideology and approaches – to such an extent that friends already evaluate friends rather frequently. This inbreeding effect is further exacerbated within grant review study groups, and so the effect self-perpetuates.
My point being all models are flawed.