Recently, we opened the floor so you could ask us anything. Here’s the next question, and our answers!
UPDATE: I see that a couple of folks (one in the comments here, another on Twitter and Tumblr) have misread this post as arguing that it’s more important for you to be nice to your colleagues and have them think well of you than it is for science to seek the truth. I’m pretty sure I speak for Brian as well as myself when I say that correcting the scientific record and seeking the truth is what matters. But like it or not, if your colleagues ignore your criticisms or dismiss them out of hand, for whatever reason (e.g., because they think you’re a jerk), your criticisms won’t actually have any effect. It’s precisely for the sake of maximizing your ability to correct the scientific record that I suggest the advice I do. Further, that I think it’s difficult to effectively correct the published record doesn’t mean I think you shouldn’t ever bother trying. Plenty of things that are difficult to do are well worth trying to do–such as science! And in case any readers doubt my sincerity here and think that the folks who write this blog really do prefer making nice to our colleagues over critiquing the published literature, well, read this. And this. And this. And this. And this. And this. And this. And this. And this…
How do you counter bad science in the peer reviewed literature, and do so without spending your whole career on it and being seen as a jerk? (Alex Bond)
Brian: I’m not clear if you mean counter as in publications or counter as in conversations with colleagues and in teaching classes. Either way, I think the core is to focus on it as a scientific question answerable by empirical methods (the opposite of this is to go at it as an ad hominem/win-lose/opinion-filled process). If you’re focused on advancing science by asking and answering questions, putting out alternatives, and generally pushing the frontiers of science forward, then it’s hard to argue with you or take objection. I think another key point is to make sure that this is not all you do. If your colleagues see you sticking your neck out trying to put good science out there as an alternative, they will treat you very differently than if all you do is critique other people’s work, which in the end is too easy to do, gets old, and gets you labelled a curmudgeon (I’m not going to name names, but I’m sure you can think of people in your field who have this reputation).
Jeremy: As Alex’s question implies, the whole idea of “post-publication review” is vastly overrated by its advocates. Unfortunately, I don’t know any way to keep people from selectively citing only the stuff that supports their own views, misciting stuff, citing stuff even after it’s been retracted or superseded, or assuming that any empirical study that contradicts conventional wisdom “must” have been done wrong. The most important thing you can do is teach and train your own students and trainees as best you can, and do the most conscientious job you can in your own work and as a pre-publication reviewer of the work of others.
Beyond that, it’s hard. Brian’s right that doing good work of your own will give you “standing” in the minds of your colleagues to critique the work of others. Not that you should need standing; after all, the validity of a critique has nothing to do with who writes it. If you’re right, you’re right, no matter who you are or what else you’ve done; if not, not. In the comments on an old post I think Brian noted how unfair it was that his first paper (critiquing a then-popular method for testing “neutral models” in community ecology) was dismissed by some simply because it was his first paper and he was a grad student. But unfortunately, that is the way it is.
In terms of putting effort into correcting errors in the published literature, you have to pick your battles. Is it important enough to be worth your time? And if you do it too often, will it change how others perceive you? (that’s definitely something I worry about). For instance, I’ve learned from personal experience on this blog that just raising a previously-unrecognized caveat or drawback to a basically-sound approach or idea tends to get you labeled as a concern troll (or whatever the offline equivalent is, if you do it in print). No matter how valid your point and no matter how much you emphasize that you agree that the approach or idea is still basically sound overall. I don’t think that’s always fair. Personally I always like having a complete understanding of all the advantages and drawbacks of any approach or idea. But that’s the way it is. So “raising a caveat” usually isn’t important enough to bother with. As another example, there are some ideas and approaches in ecology that I think are wrongheaded or dead ends, but that seem to be pursued only by a particular group of people–fairly small “research programs” or “schools of thought” if you like. If there’s not really any sign of those problematic ideas or approaches being taken up by anybody outside that “school”, is it really worth it to write a critique? (And let me emphasize that I’m sure some people feel precisely that way about my work! I’m sure someone out there can’t be bothered to write a critique of protist microcosms, because it’s only a few people who waste their time on microcosms!)
Slightly contra Brian, I don’t think critiquing other people’s work is easy–not if you’re doing it well. If anything, I find it harder than doing “regular” science, precisely because many people don’t want to hear it. Critics (or as I once called them, “short sellers”) are never popular. Plenty of ecologists, whether they would admit it or not, think that the pre-publication peer review stage is the only stage at which it’s legitimate to criticize someone else’s work. They think that any criticism of published work necessarily is nitpicking, necessarily amounts to making the perfect the enemy of the good (this is another reason why post-publication review mostly is a non-starter). It’s not easy to get your audience to notice and take seriously something that they are strongly inclined to ignore or dismiss out of hand. Relatedly, a commenter here once expressed the view that critiquing the work of others is cowardly, whereas it takes guts to do your own science. I’d say just the opposite. Precisely because doing your own science is highly valued by others, whereas critiquing the published work of others often is frowned upon, it takes a lot of guts to critique published work.
So besides picking your battles and only voicing critiques that seem important enough to need voicing, what can you do? Here are some suggestions:
(i) Be sure of your ground–make sure you really do have good evidence and good arguments on your side, and that you explain them very clearly. Running your critiques past critical colleagues (not your best friend who is predisposed to agree with you!) helps a lot with that. Blogging has helped me with that–it’s how I developed my critique of the IDH into a publishable form. But of course, even blogging about a critique means voicing it publicly.
(ii) Another thing you can do is get cover–get other people (including established people, if possible) to join you in your critique. That’s one overlooked function of joint letters that we forgot to discuss in this old post–to provide safety in numbers to those writing them. Plus, a critique that comes from a group, especially a group that includes some well-known people, will (for better or worse!) be taken more seriously than one coming from a junior individual. Even if your critique doesn’t explicitly come from a group, if you know that you’re giving voice to something a lot of other people are thinking, you’ve got some cover. This is part of what gives me the courage to critique the IDH, or to critique large chunks of phylogenetic community ecology–I know that I’m only saying what a lot of other people are thinking (and in some cases have already said in the literature, although perhaps not as clearly or forcefully).
(iii) Another thing you can do is direct your critiques not at particular papers or the work of particular individuals, but at approaches or lines of research that many people are pursuing. Again, in critiquing the IDH or phylogenetic community ecology I’m not picking on any particular individual. Those are often especially important critiques to make, since by definition the bulk of our science is comprised of popular lines of research. But they can also be the least-risky critiques to make, because individuals are less likely to feel that you’re picking on them personally and so are less likely to see you as a jerk (maybe?). Of course, if you think there are serious technical flaws that are unique to a specific paper, you have no choice but to critique that paper. But that sort of thing isn’t all that unusual, so I wouldn’t be too scared of doing it (though in that case it’s often best to first write to the author privately with your technical comments, before writing to the journal).
(iv) Another thing you can do is address your own critique. If you can raise an issue, and then show how to solve it, people often like that. Similarly, if as part of your critique of existing research you can also suggest an exciting new avenue for research, people often like that. Of course, the problem is that most people don’t like it when a critique doesn’t have a simple solution. They don’t like being told “stop doing X” unless in the next breath you tell them “do Y instead–it’s an easier and better way to do the same thing X does!” Which is a problem because sometimes there is no simple solution to a valid critique, nor any exciting new research avenue to pursue instead. I personally don’t like the attitude that it’s only ok to point out a problem if you also know how to solve it, but it is what it is and there’s no changing it.
(v) It’s worth noting that the right sort of “meaty” critique often won’t be seen as a critique at all. For instance, in an old post I noted the excellent work of Gilbert & Bennett 2010 and Smith & Lundholm 2010, both critiquing a popular approach in metacommunity ecology by showing that it fails when applied to simulated data. Except that I bet to many readers it didn’t feel like a “critique”–it felt like an original study. Even though the take-home message–“this popular approach has serious problems and quite possibly should be abandoned”–is very critical. Along the same lines, “how to” papers laying out best-practice methodology generally are seen as original contributions, even if they also function as an explicit or implicit critique of previous work that fails to follow best practice. (I’m tempted to suggest that this connects to some people’s dislike of work based on the data of others. If your critique is based on new data or models that you developed yourself, that’s original work and probably fine. But if your critique is only based on citing data or models collected or developed by others, that’s just you carping from the sidelines and totally out of line!)
(vi) Finally, if possible you can use your own work as an example–critique yourself. For instance, think of Peter Adler’s critique of ecologists “selling” their work via superficial linkages to climate change, linked to above. Peter was hardest on himself. Or a while back I talked about an Oikos paper in which the authors found a problem in their own work that also crops up in the work of many others, and so wrote a critique that used their own research as the main example.