A couple of recent incidents have gotten me thinking about how norms of scientific criticism are changing, with one side effect being nasty fights.
Here’s the first incident (ht Retraction Watch). Computational biologist and blogger Lior Pachter has a series of posts strongly criticizing the network research of A.-L. Barabási, Manolis Kellis, and others. This is very prominent work, reported in a long string of papers in Nature and other top-tier journals. In particular, Pachter digs into one recent paper by Kellis and co-authors in great detail and argues that it’s “nonsense”. And he goes further and accuses Kellis and co-authors of misconduct: being intentionally vague about their methods in order to get the paper into a leading journal and then trying to cover their tracks by “correcting” the paper post-publication. Pachter has taken to his blog because his correspondence with the authors left him unsatisfied and journals rejected the comments he submitted. A reply from Kellis is here, and a reply to the reply is here.
The second incident is the recent dust-up over some prominent papers on dinosaur growth rates. Nathan Myhrvold, a millionaire dinosaur hobbyist, has strongly criticized a series of papers by paleobiologist Gregory Erickson and colleagues for numerous omissions of methodological details that make it impossible for others to fully reproduce the data analyses. And also for various mistakes (including two different figures in which the fitted curve shown in the figure doesn’t come close to matching the equation provided in the figure legend). Myhrvold has described his concerns publicly and in great detail in a peer reviewed paper, and apparently has had private correspondence with Erickson. But he also took his criticisms to the New York Times, and in an interview with the Times said the errors in Erickson’s work were “consistent with scientific misconduct”. And while some of the co-authors of the criticized work have said that Myhrvold has identified some important issues, Erickson’s public response has been limited to noting that his papers were peer-reviewed.
I’m not qualified to judge the first dispute, and while I do feel I’m qualified to judge the second I don’t really want to get into that.* I just found both incidents to be striking signs of the times. Whatever the rights or wrongs of these two particular incidents, this kind of incident is going to become more common, I think. Because like Bob Dylan sang, the times they are a changin’. What follows are my admittedly anecdotal impressions and probably-rash generalizations. I’m tossing them out there in the hopes that others will chime in with their own impressions.
It seems like more and more people are increasingly demanding about the level of detail in which published work should be documented. They’re increasingly less likely to see any mistake or omitted detail as minor, instead taking the view that all mistakes and omissions are serious by definition, or at least that they should all be corrected and their seriousness left to the reader to judge. They’re increasingly disinclined to trust authors, editors, or the pre-publication peer review process, preferring to at least have the in-principle option of checking everything themselves.** They’re increasingly disinclined to voice post-publication criticisms through the “proper” channel (i.e., by submitting formal comments for peer review), instead taking the view that once something’s published, it’s fair game, and that peer review of comments is at best too slow and at worst functions as a way to suppress legitimate criticism, so that it’s better to just publish criticisms of peer-reviewed papers in whatever venue seems most convenient and visible. And they’re increasingly comfortable with strongly-worded language, and disinclined to care about tone, often viewing worries about tone as a distraction from discussion of substantive issues.***
I’m actually not sure if more and more people feel this way, or if thanks to the internet people who feel this way are merely more visible and vocal than they used to be. Could be some of both. In any case, I do think times are changing, and that they’re going to keep doing so. So if you don’t like the way things are changing, well, for better or worse I think you’re going to end up on the wrong side of history in the long run.
We’re living through a culture clash, I think. People who feel more or less as described above certainly can give reasons for thinking that science would be better off if everyone felt the same. And conversely, people who don’t feel that way can give reasons for thinking that science will be better off if everyone felt as they do. And I actually think everyone’s reasons (including mine!) mostly are post hoc rationalizations. For instance, people who are used to scientific communication working a certain way would like it to continue working in that familiar way. And people who are used to other things working a certain way (e.g., they’re used to the informality and openness of social media) would like scientific communication to work in that familiar way. Put it this way: do you often see people arguing that scientific communication should change in a way that they personally would hate? And culture clashes are a recipe for arguments.
But having said that, just because people’s proffered reasons are post hoc rationalizations doesn’t mean they’re merely post hoc rationalizations. It’s by giving reasons that we can (sometimes) get others to understand where we’re coming from and appreciate our own point of view. Even if we don’t ultimately hold that point of view for rational reasons. And through trying to rationalize your point of view and thereby justify it to others with different preferences, you’ll often come to realize that you are rationalizing. Which is a useful thing to recognize. So if indeed there is a culture clash here, I don’t think that means there’s no hope for mutual understanding or compromise.
On balance, I think the trends I’ve described are good for science, but my feelings are somewhat mixed. Which isn’t surprising, since I’m something of a ‘tweener when it comes to novel ways of communicating and criticizing science. So here are my post hoc rationalizations for my own views. 🙂
I think the bar always gets raised rather than lowered in science. Our standards for everything go up over time, and that’s good. Right now, the bar is getting raised on the detail and completeness with which authors are expected to report their methods. I think that can only be good for science.
I also think it’s good for science if we all get more comfortable with mutual criticism and vigorous debate. As Andrew Gelman recently remarked, the only way for the vaunted self-correcting nature of science to work is if we actually allow corrections. Various lines of evidence show that science self-corrects more slowly than it could or should (e.g., this quite interesting paper I just found [UPDATE: link fixed]). And I don’t like the view that any post-publication criticism of peer reviewed work necessarily is nitpicking, and in any case is never worth publishing because any seriously flawed work will be ignored and so needn’t be publicly criticized. I think that view (which is common though far from universal) is based on false premises, and holds back scientific progress. And I don’t think we need to worry about most authors having to deal with a barrage of post-publication criticisms. The vast majority of papers aren’t read often enough or carefully enough to attract any post-publication criticism. (Which still leaves significant concerns about the minority of authors who do have to deal with such criticisms; see below.)
And I think it’s a good thing on balance if people have less ability these days to dismiss criticisms on the basis of the critic’s credentials or the venue in which the criticisms were published, or any other basis besides the substantive content of the criticism itself. Science is supposed to be all about logic and evidence. Logic and evidence don’t care what credentials you have, or the venue in which they’re published, or etc.
On the other hand, I do think we need some professional norms or other about post-publication review. Because otherwise post-publication review isn’t going to work well. Pre-publication review works because everybody agrees to participate voluntarily. It’s part of what you’re signing up for when you choose to do science. But while at some level everyone’s aware that once they publish something others are free to read and react to it as they see fit, lots of people don’t feel like they signed up for post-publication “review”, at least not in all its forms. I mean, do you feel like being accused of serious mistakes and possible misconduct in the New York Times just “comes with the territory”?
Pre-publication review also works because it’s a system with formal and informal rules, not a free-for-all. For instance, it’s private, which saves people from the embarrassment of having their mistakes pointed out publicly, thereby encouraging them to participate. And there’s an editor involved, which helps ensure (though of course doesn’t guarantee) that referees and authors address each other professionally, and obliges authors to respond point-by-point to referees rather than just ignoring them or whatever. The fact that it’s a system with rules is a big part of what encourages people to participate voluntarily. People know what they’re signing up for.
Which is why I don’t think you can just tell people to “have a thick skin” or “don’t take it personally” when it comes to post-publication review. As this very good piece about another recent post-publication review dust-up points out, post-publication review these days seems to slip quite easily into personal criticism of the author. Which isn’t something you should be expected to “have a thick skin” about. Nobody should be expected to “have a thick skin” about implications or accusations that they’re incompetent or unethical. And further, sufficiently-strong criticism of someone’s work is awfully hard to distinguish from an attack on their competence or integrity, whether or not you preface it by saying “Don’t take what I’m about to say personally” (again, see the very good piece I just linked to).**** Further still, just as authors sometimes make mistakes, sometimes because they’re incompetent or unethical, so do critics. It seems like advocates for “anything goes” in post-publication review presume that critics are always right. They’re not. So what’s needed isn’t for everyone to have a thick skin. A thick skin is what you need to survive perceived or actual attacks. What’s needed are new norms and practices that everyone buys into.***** So that, as far as reasonably possible, people don’t get attacked and don’t feel they’ve been attacked, and so don’t need to have thick skins.****** For instance, in the post linked above, Andrew Gelman offers some suggestions on changes to how journals publish criticisms. In the absence of widely-accepted norms and practices, effectively critiquing the published literature is a really tricky thing to do.******* This isn’t (just) about sparing people’s feelings or protecting their reputations–it’s about creating a post-publication review system that actually works. That actually does effectively correct the scientific record, and that is seen by all to do so.
I don’t have any answers here–I don’t know what the new norms and practices will be, or should be. And I don’t know how to create a set of norms and practices that everyone would buy into and happily participate in. Attempts to codify rules of good commenting behavior on the internet are infamous failures. So we may have no alternative but just to let things sort themselves out through some sort of quasi-evolutionary process. But until we have wide agreement on new norms and practices, expect more dust-ups like the two described above.
*For the record, I’ve read Myhrvold’s paper carefully, and I feel like I’m competent to judge it because his concerns all relate to data processing and statistical analysis. Assuming that he’s truthfully described what he did (and he does seem to have described it in sufficient detail for anyone to reproduce it), then I think he has indeed identified some mistakes that really ought to be corrected. And while the mistakes may not alter the broadest qualitative conclusion (“dinosaurs grew fast”), the quantitative changes seem big enough to me to be more than trivial. As for the various methodological omissions he’s identified, I suspect that this is an example of changing reporting standards. The work was originally reported in enough detail to satisfy the referees that it was done competently. But like an increasing number of readers, Myhrvold wants to see methods reported in sufficient detail so that others can exactly reproduce all of the results without any need to contact the authors.
**W. Edwards Deming supposedly said “In God we trust; all others must bring data.” That’s never been more true, and it’s going to get truer.
***Although there’s an irony, noted by a commenter on Pachter’s blog, that people who feel comfortable expressing even minor criticisms in strong language sometimes also complain about authors overhyping their results. Leaving one wondering why it’s not ok to hype results, but is ok to hype criticisms of those results.
****I’m probably one of the guilty parties here. One mistake I’ve probably made at times in the past is phrasing criticisms of people’s ideas so forcefully that it sounds like I question the competence of anyone who disagrees with me. Even though I’m usually careful to say that I’m not questioning anyone’s competence.
*****Some would disagree. For instance, the comments on that piece I just linked to include the view that not only will scientific debates get personal sometimes, but that they should get personal. I don’t buy that at all. But the fact that that view is even out there is another sign of the times.
******A bit of a thick skin is always going to be needed, no matter what our norms and practices are. Some people find pre-publication review quite hard to take. Which is a feeling they have to learn to get over, since pre-publication review is standard practice.
*******”Effectively” is a key word here. If you want your post-publication critique to actually convince people rather than falling on mostly-deaf ears, you can’t just voice it however you want and then write off anyone who doesn’t like your tone as a wimp. If you want to be convincing, you have to write for the audience you have, not the audience you wish you had. See this old exchange of comments.
I more or less agree with you here. Post-pub review is probably good on the whole, and some form of it is certainly inevitable. My concern is the same as with all of these “let’s change how science is done” proposals. The proposed new system is going to have cracks, just like the old one. Even if it is an improvement overall. Better to try to address those things now than to wait until down the road.
With regard to PPR, there clearly need to be some guidelines about what is fair game and what is not (at least without serious evidence). Accusations of unethical behavior need to be held to a high standard. Other changing standards, such as publishing data, should help with this.
“The proposed new system is going to have cracks, just like the old one. Even if it is an improvement overall.”
I agree 100%. But unfortunately, if you try to say something like this to a “true believer” in some new way of doing things, you get tarred as a “concern troll”.
And yeah, we definitely need guidelines about when it is fair to accuse someone of misconduct (or even raise it as a possibility), and the proper way to do that. But I think that will be a tough conversation to have, as there’s a highly vocal minority who will see any attempt to have that conversation as an attempt to defend unethical behavior. We may almost need some kind of unfortunate, high profile incident to bring such people to their senses. The scientific equivalent of that witch hunt associated with the Boston Marathon bombing, when a bunch of Reddit users all homed in on some completely random, totally innocent person as the likely culprit. Or maybe we can somehow oblige everyone who wants to accuse someone of misconduct to first watch the movie “12 Angry Men”. 🙂
One of the things that I think you at least alluded to, but is occasionally ignored, is that there will almost certainly be differential post-publication review. It takes a lot to step up and take Big Name Person on, even in private peer review, where much less concern is given to eviscerating someone with much less standing in the field. As this also gets entangled with issues of diversity, I think the “Come what may, grow a thick skin” crowd may be somewhat idealistic in terms of the outcomes of that kind of post-publication review.
That’s a good point, and yes, it’s something I’ve alluded to but probably should’ve emphasized more. Post-publication review is highly differential. In all sorts of ways. For instance (and this is just one example), it’s mostly high-profile papers that get subjected to close post-publication scrutiny. So while it does take courage to take on a Big Name Person, I think that effect is outweighed by the “people mostly only notice and critique high profile papers” effect. For instance, both of the examples in the post concern critiques of high profile work by famous authors. But your broad point is one I very much agree with. Another key feature of pre-publication review is that everyone, and every paper, has to go through it. Not so with post-publication “review”.
All great points. I’d be hesitant to accuse someone of misconduct or fraud for something like post hoc rationalization of a method (even the hidden, “arbitrary” changing of model parameters) that gives a desired result. I think our brains do this quickly, effortlessly, and not entirely consciously unless we are very on guard. This is why I think post-publication reviews are valuable: they will make all of us stay much more on guard and more honest with ourselves.
The problem with PPR is that it opens a lot of space for people who just want to “make a name” by attacking silverbacks. In fact, PPR is already here, grows stronger each day with the help of social networks, and is not going away. The question is how to tame PPR and make it more civilized, so that it helps us point out and correct mistakes without attacking reputations lightly. Peer review has much in common with a legal trial: in order to accuse someone, people need to collect evidence, make a strong case, and go through the official channels. PPR on Facebook is like a witch trial in the street.
The analogy to a legal trial is one I thought about making in the post, but didn’t. It’s a very useful analogy here, I think.
As for accusations of misconduct, one possible norm might be: don’t ever publicly accuse someone of misconduct unless it’s via some appropriate “official” channel. The idea being that, if the purpose of PPR is to correct the scientific record, accusations of misconduct are totally irrelevant. Misconduct has to do with *why* a mistake or omission was made, not *that* a mistake or omission was made. And it’s only the fact of the mistake or omission, not the reason for it, that needs to be addressed in order to correct the scientific record. And further, reading the paper and supplementary material ordinarily only allows one to determine *that* a mistake was made, not why. (Well, maybe except for really dead-obvious cases of misconduct like figure duplication and obvious plagiarism.)
Which isn’t the same as saying nobody should ever accuse somebody else of misconduct. It’s just to say that accusations of misconduct should be handled differently. How accusations of misconduct should be handled isn’t an easy question, of course.
One could make a similar argument about incompetence. If the goal is to correct the scientific record, there’s never any reason to state or imply that anyone is incompetent. Although at least there the stakes are usually (not always) lower.
I don’t actually agree that nobody should *ever*, under *any* circumstances, publicly accuse a fellow scientist of misconduct or incompetence, outside of some “official” channel. But I don’t think it would be a terrible norm to follow, either.
Of course, all this is kind of like wishing for a free pony, given that nobody has any idea how to tame PPR and make it civilized!
Agreed. Unfortunately, social networking will surely make PPR much more poisonous in the coming years. But maybe one day the scientific community will find a middle way. It’s always like that with human behavior: exaggeration to one side (no PPR at all), then exaggeration to the other (ferocious PPR), and finally a little balance (well-thought-out pre- and post-publication review through official and non-official channels). I tend to be optimistic anyway.
Jeremy, very interesting post that raises some good questions. I largely agree with your take on the situation (such discussions are valuable but should be civil) but I’m unconvinced about the “changing times” part. You mention that people “Are increasingly disinclined to care about voicing post-publication criticisms through the “proper” channel”, but it sounds like the critics in both examples tried that first unsuccessfully?
You also raise a good question about whether the bar for reproducibility is going up. Of course it is difficult to generalize about such things, but I’m inclined to see this as a time lag rather than a change in scientific norms. Most really classical literature I’ve read has been perfectly reproducible (though filtered by the test of time, I know!), but it’s also been simple. Methods today have simply outgrown what we can easily capture in a nice methods paragraph in the typical article or supplementary material, and it is only now that we’re getting the infrastructure to share these really big, complicated processes more precisely.
I’m sure scientists have been uncivil in the past and have expressed themselves in the communication media of the times before. Do you feel these examples reflect changes in scientific norms and processes, or do you think we would be able to read disagreements much like these if our predecessors of 50, 100, 200 years ago had blogs?
Yes, both critics in the incidents I mentioned did try to go through traditional peer review, and in Myhrvold’s case succeeded. I guess the question I’m asking is about what’s appropriate when one doesn’t get satisfaction through that route. It’s not clear to me that the appropriate answer is always “just publish your criticisms in some public, non-peer-reviewed place”. At least not if your criticisms include accusations of misconduct or incompetence.
Good point re: some of today’s methods having outgrown what could be easily encapsulated in traditional methods sections.
Good question re: levels of civility now vs. in the past. Were past scientists less inclined to voice strong, public criticisms of their peers? Or maybe such criticisms were voiced just as often back in the day, but merely less widely noticed? Or maybe there’s actually no real difference at all between today and the past, it’s just that we’ve forgotten all the many highly-visible incidents of public incivility in the past? I don’t know the answer, though I do think there is an increasing tendency today for people to say in public what they might’ve previously only said in face-to-face conversations or private correspondence.
As for the last point (past vs. present), it seems obvious to me that past criticisms must have had a much more limited scope and resulted in less public exposure. There were fewer scientists, communicating in more closed circles (scientific journals were not easy to obtain), no direct “publishing” (as e.g. blogs), and newspapers/magazines had narrower circulation. Also, there was no possibility to do direct searches on the scientific record along with comments on research results (e.g. Google). This means that criticism would mostly spread to those in the field (and probably with lower penetration). The current situation allows researchers to lose face on an unprecedented scale, and possibly to be unjustly attacked with the whole world as your audience (well, maybe all of them won’t be interested…). However, this is only me speculating, and it would be interesting to hear comments from researchers with experience from e.g. the 1960s-80s.
I’m sure that researchers were uncivil and voiced criticism (justified and unjustified) just as strongly in the past as well, but maybe the negative consequences became evident in other ways. Since the number of working scientists within a field was much smaller, I can imagine that cronyism and disfavouring through direct intervention as a result of disagreements (coming from senior scientists) could have had a larger influence than today (which is not to say that those factors are unimportant now). However, it seems plausible that e.g. “Darwin’s bulldog” would have blogged if he had the means to. Come to think of it, historical disagreements over evolution would probably be a good way to study the civility of public scientific criticism of the past.
“However, it seems plausible that e.g. “Darwin’s bulldog” would have blogged if he had the means to.”
Thomas Henry Huxley would *totally* have had a blog! 🙂 Though I doubt he’d have limited himself to that. He and his buddies started their own journal (Nature) to promote their ideas, no reason why he couldn’t do the same today.
Which gives me a good idea for a fun, silly post: Which past scientists would’ve made the best bloggers?
A passing remark from Data Colada (http://datacolada.org/2013/10/14/powering_replications/) that I like, and that seems apropos:
“Replications in particular and research in general are not about justice. We should strive to maximize learning, not schadenfreude.”
Pretty interesting piece! I am relatively new to social media. I tend to like what I am seeing, but I feel it is tricky. With the ‘social’ component also comes the ‘political’ aspect where the publicity an idea gathers becomes more powerful than its actual soundness.
I am mostly worried about papers that do not get the attention needed to detect flaws. Should these papers be published in the first place? Two reviewers and a single editor usually improve the quality of a paper (at least in my limited experience), but that’s probably not enough to protect against important flaws. Should we put papers in a pre-print purgatory to see if they recruit enough ‘scrutiny’? I feel this could limit the strategy of creating new terminology and hot topics the way we create hashtags, and prioritize the work a community needs to review in order to move forward.
“I am mostly worried about papers that do not get the attention needed to detect flaws. Should these papers be published in the first place?”
You may have a point for the growing number of papers published in the growing number of new, low-profile journals. But for higher-profile journals I think pre-publication peer review catches flaws pretty well, and isn’t getting any worse at doing so. Just my own impression.
Wanted to pass on a good point from a conversation with a colleague. My post implicitly assumes that people doing PPR only care about what’s written in the papers they’re critiquing. So while they might be quick to infer incompetence or scientific misconduct from what they’ve read, they won’t, say, attack the author because of her gender or race or sexual orientation or etc. Which isn’t always a valid assumption, and is perhaps less likely to be valid for research on certain topics. Just as researchers shouldn’t be expected to have a thick skin about attacks on their competence or professional integrity, they shouldn’t be expected to have a thick skin about misogyny or racism or etc. Another feature of pre-publication review, which post-publication review lacks, is that the norms and practices of pre-publication review are pretty effective at preventing things like misogyny and racism.
Very nice post Jeremy, and again many good comments. I much appreciate folks’ thoughts on this subject because we are involved in ongoing debates and I expect my collaborators and I will be getting into some additional ones in the near future. It is indeed challenging to get the tone right. As you and the others commenting have emphasized, the key lines to avoid crossing are the accusations of general incompetence or deceitful behavior. When I review proposals I make sure to criticize the proposal, not the investigator. I suppose that is a good approach when involved in debates: to criticize the paper or critique, not the investigator. Not sure I have been as good as I should be in that regard and your comment reminds me to avoid personalizing the exchanges as much as possible.
Pingback: Which pre-blog ecologists and evolutionary biologists would’ve made the best bloggers? | Dynamic Ecology
Pingback: Questioning the value of biodiversity | Dynamic Ecology
The thoughts here seem in contrast to posts like this, where there’s a one-sided discussion about some papers: https://dynamicecology.wordpress.com/2013/06/15/angela-moles-vs-zombie-ideas-about-latitudinal-gradients-in-herbivory-and-plant-defense/
I would call this a sort of post-publication review, not just of the Moles paper but of several others mentioned in the comments. But other than Jordano’s response, I don’t see this as an example of “mutual criticism and vigorous debate.” Seems like you would have to talk to or notify the authors of criticized papers in order for it to become a debate rather than a self-congratulatory trash-talking session. What do you think? How do you square those two posts? Thanks for your thoughts.
Thanks for your comment CAB. That’s a very fair question. And in passing you’ve raised (or at least I think you’re raising) a number of related issues.
In part, you’ve noticed that my own views have changed a bit over time. By that I mean both my views on how to effectively do post-publication review, and my views on what sort of blog I want Dynamic Ecology to be. My views on both are still in flux, too.
Having said that, I don’t know that my views have radically shifted. In the post you referred to, I discussed in a positive way two peer-reviewed papers by Angela Moles, and tried to link those papers to broader issues that I often discuss. So in this particular case, I’m not sure why you think I should have notified the authors of other papers and invited them to comment. And while the comments do mention other specific papers, the discussion is all about contrasting peer-reviewed papers by Angela Moles, and peer-reviewed papers by other people. So the discussion is about a debate within the peer-reviewed literature. A debate in which I do take a side. Various other discussions on this blog over the years have been similar. For instance, in my zombie ideas posts, I’m not actually making any original criticisms of (certain versions of) the intermediate disturbance hypothesis. Rather, what I’m doing is calling attention to, and indicating my agreement with, criticisms that are already in the peer-reviewed literature.
In contrast, when I think of ‘post-publication review’, I think of cases like those highlighted in the post, which are very focused on detailed technical critique of specific papers, to the exclusion of larger issues.
If we were ever to do post-publication review of a specific paper (and I emphasize that we have no future plans to do that), we probably would notify the authors and invite them to reply. That’s actually something I’ve changed my mind about recently. Previously I probably wouldn’t have bothered to do that, so that’s a point on which my views have shifted. But because I don’t see posts like the one highlighting Angela Moles’ papers as representing post-publication review of specific papers, I don’t see any need to invite specific people to comment. We of course welcome any and all comments we get, as long as they’re productive. But it’s not feasible for us to err on the side of caution and invite comments from anyone and everyone who works on a topic mentioned in one of our posts. I’d actually LOVE it if we got lots more comments from lots more people, including people who’ve published on the issues we discuss. Many of our best comment threads have involved back-and-forth between people who are experts on the issue under discussion (see below for links to some examples). Believe me, I do NOT want this blog to just become a place where the only comments come from people who agree with the posts! But in order for debate to happen in the comments, I think it’s just going to have to happen “naturally”, rather than by us trying to invite lots of comments from specific individuals who might disagree with the posts. Plus, if we started inviting lots of comments, we’d risk giving the mistaken impression that we only welcome comments from select people, and we definitely don’t want to give that impression.
One thing we have done occasionally, and that we may try again in future, is inviting guest posts (as opposed to comments) from folks who want to express a dissenting point of view. For instance, I did this with Doug Sheil and David Burslem, in order to continue a conversation that began in the peer-reviewed literature: https://dynamicecology.wordpress.com/2013/09/03/a-thumbs-up-for-the-intermediate-disturbance-hypothesis-guest-post/. I think we’d only do this to further debate on general issues though, not to debate the merits of specific papers. Unfortunately, in our experience it’s very hard to get people to write guest posts, on anything. Over the years we’ve invited lots of guest posts; most invitees have eagerly agreed to write them, and then not followed through, despite prodding from us. People are busy, and not many people are actually prepared to allocate time to writing a guest post, even for a blog with a pretty big audience such as we have. I’m guessing this is something you’d like to see us do more of?
I get the sense that you think the comments here are just self-congratulatory, and that we suppress debate or dissent? Or at least cross our fingers and hope that people who disagree with us don’t find out about our posts? If not, apologies for misreading you. But if so, all I can say is that I respectfully disagree and that I hope you’ll read the comment threads on the following posts and change your mind about that:
I get the sense that you’re also uncomfortable with the tone of some of our posts or comments? If so, all I can say is that that’s something I do worry about. My natural way of writing and speaking is forceful, I have a snarky sense of humor, and I try to write in a breezy style I hope readers will enjoy. Obviously, writing in that way runs the risk of putting off some people. Indeed, I know from comments, private correspondence, and a reader survey we did that there are some people who find my tone to be really off-putting and even offensive. But I also get an awful lot of positive feedback on the way I write. It’s impossible to please everyone, so I just try to strike an appropriate balance as best I can, and apologize when I get it wrong. I’m well aware that anyone can read anything I write on this blog, and that I have to be prepared to live with the consequences of that.
You describe my post on Angela Moles’ work as “one sided”. I’m a little unclear what you mean by that. Yes, my comments on Angela’s work were “one sided” in the sense that I agree with her views, and that I didn’t go out of my way to highlight or cite the views of those who disagree. But this is a blog, not a newspaper. It’s a place where I and my fellow bloggers express our own opinions and discuss them with anyone who’s interested. I don’t think I’m under any obligation to never “take sides”, or to otherwise ensure that all sides in any debate are included in a post. Were you suggesting that I should? (honest question) In saying that, I freely recognize that this blog is not what some readers are looking for. I’m sure some ecologists see no value, or perhaps even negative value, in opinion pieces, especially those that haven’t been through peer review. Which is absolutely fair enough; anyone who feels that way is perfectly entitled to their opinion. And it’s a defensible opinion, even though I don’t personally agree with it. But anyone who feels that way shouldn’t read this blog.
In conclusion, just wanted to say thanks again for asking a very fair question. Brian, Meg, and I want Dynamic Ecology to set a good example and become the go-to place for vigorous professional discussion of issues ecologists care about. I think we’re doing a pretty good job of that, but the only way we can keep improving is via this sort of feedback.
Thank you for the response! I definitely don’t think that your blog suppresses dissent. I’m personally uncomfortable with wading into the discussion and critiquing papers in any form of “post-publication review.” I agree with you that “we need some professional norms or other about post-publication review. Because otherwise post-publication review isn’t going to work well.” I would like to be involved in the debate over papers sometimes, but simply don’t feel comfortable doing it in this setting. That’s no fault of the blog, it’s my own preference.
You state, “I don’t think I’m under any obligation to never ‘take sides’, or to otherwise ensure that all sides in any debate are included in a post. Were you suggesting that I should? (honest question)”
Response: Hmm, perhaps I would like to suggest something kind of like that. I called that particular post one-sided because hypotheses/arguments/evidence against your opinion are not even refuted. They don’t even seem to exist. Whatever they are, they are simply wrong, you say, because of these two papers. In a post like this you cover the other sides and refute them, so even though you certainly take a side, it’s a well-rounded discussion that comparatively makes the Moles post seem rather “one-sided.” If this type of one-sidedness doesn’t trouble you, then you’re right, I simply shouldn’t read the blog because it’ll irritate me too much when hypotheses that I happen to like for good scientific reasons (and that are very difficult to test) are branded as zombies because of a couple of refuting papers, without a well-rounded discussion of the arguments and evidence.
Thanks for the discussion!
Ack, link not working; when I say “post like this” I’m referring to http://oikosjournal.wordpress.com/2011/06/17/zombie-ideas-in-ecology/ (your post on Oikos blog, June 17, 2011 discussing IDH)
You’re absolutely right that there is a contrast between my own zombie ideas posts on the IDH and my post on Angela Moles’ work. In the former case I spelled out detailed arguments; in the latter case I merely called readers’ attention to someone else’s arguments that I personally had found convincing. So although both posts are about “zombie ideas”, I do see them as two quite different sorts of posts, each of which has its place in my view. It sounds like you much prefer the former sort of post to the latter, which is absolutely fair enough. Honestly, I prefer the former sort myself–they provide more “added value” than if I merely call attention to and briefly comment on peer-reviewed papers I liked. But none of the bloggers here have the time to write that sort of “meaty” post all the time (though we wish we did!), and so we fill in the space with less-meaty posts that raise issues we think are worth talking about.
It seems to me most people would prefer to publish peer-reviewed comments if possible, but there is little incentive for journals to publish critical comments. Especially for prestige journals, the comments can’t just be correct, they also have to be exciting! Sometimes the truth may not be as exciting as the fiction. Andrew Gelman has also written about this problem (http://www.stat.columbia.edu/~gelman/research/published/ChanceEthics8.pdf). As you point out, it seems that the commenters in both of your examples tried to submit comments through official channels and were rebuffed. I don’t know how to locate the data, but it would be interesting to see if it has gotten harder to publish comments through official channels in recent years. I could imagine that impact-factor chasing by journal editors plays a role here. Technical comments aren’t likely to get a lot of citations in comparison with other, flashier content.
I would be very curious to see data on how many comments are submitted to journal X, how many are accepted, etc.
I heard a rumor many years ago that there are submitted comments about almost all Nature papers, the vast majority of which aren’t published. But that was nth hand and I have no idea if it’s true. Maybe it is! But maybe it isn’t.
I kind of doubt that it’s gotten harder to publish comments over time. But I’m totally guessing; I have no data and could be wrong.