Recently, we opened the floor so you could ask us anything. Here’s the next question, and our answers!
UPDATE: I see that a couple of folks (one in the comments here, another on Twitter and Tumblr) have misread this post as arguing that it’s more important for you to be nice to your colleagues and have them think well of you than it is for science to seek the truth. I’m pretty sure I speak for Brian as well as myself when I say that correcting the scientific record and seeking the truth is what matters. But like it or not, if your colleagues ignore your criticisms or dismiss them out of hand, for whatever reason (e.g., because they think you’re a jerk), your criticisms won’t actually have any effect. It’s precisely for the sake of maximizing your ability to correct the scientific record that I offer the advice I do. Further, that I think it’s difficult to effectively correct the published record doesn’t mean I think you shouldn’t ever bother trying. Plenty of things that are difficult to do are well worth trying to do–such as science! And in case any readers doubt my sincerity here and think that the folks who write this blog really do prefer making nice to our colleagues over critiquing the published literature, well, read this. And this. And this. And this. And this. And this. And this. And this. And this…
How do you counter bad science in the peer reviewed literature, and do so without spending your whole career on it and being seen as a jerk? (Alex Bond)
Brian: I’m not clear if you mean counter as in publications, or counter as in conversations with colleagues and in teaching classes. Either way, I think the core is to focus on it as a scientific question answerable by empirical methods (the opposite of this is to go at it as an ad hominem/win-lose/opinion-filled process). If you’re focused on advancing science by asking and answering questions, putting out alternatives, and generally pushing the frontiers of science forward, then it’s hard to argue with or take objection to. I think another key point is to make sure that this is not all you do. If your colleagues see you sticking your neck out trying to put good science out there as an alternative, they will treat you very differently than if all you do is critique other people’s work, which in the end is too easy to do, gets old, and gets you labelled as a curmudgeon (I’m not going to name names, but I’m sure you can think of people in your field who have this reputation).
Jeremy: As Alex’s question correctly notes, the whole idea of “post-publication review” is vastly overrated by its advocates. Unfortunately, I don’t know any way to keep people from selectively citing only the stuff that supports their own views, misciting stuff, citing stuff even after it’s been retracted or superseded, assuming that any empirical study that contradicts conventional wisdom “must” have been done wrong… The most important thing you can do is teach and train your own students and trainees as best you can, and do the most conscientious job you can in your own work and as a pre-publication reviewer of the work of others.
Beyond that, it’s hard. Brian’s right that doing good work of your own will give you “standing” in the minds of your colleagues to critique the work of others. Not that you should need standing; after all, the validity of a critique has nothing to do with who writes it. If you’re right, you’re right, no matter who you are or what else you’ve done; if not, not. In the comments on an old post I think Brian noted how unfair it was that his first paper (critiquing a then-popular method for testing “neutral models” in community ecology) was dismissed by some simply because it was his first paper and he was a grad student. But unfortunately, that is the way it is.
In terms of putting effort into correcting errors in the published literature, you have to pick your battles. Is it important enough to be worth your time? And if you do it too often, will it change how others perceive you? (That’s definitely something I worry about.) For instance, I’ve learned from personal experience on this blog that just raising a previously-unrecognized caveat or drawback to a basically-sound approach or idea tends to get you labeled as a concern troll (or whatever the offline equivalent is, if you do it in print). No matter how valid your point and no matter how much you emphasize that you agree that the approach or idea is still basically sound overall. I don’t think that’s always fair. Personally I always like having a complete understanding of all the advantages and drawbacks of any approach or idea. But that’s the way it is. So “raising a caveat” usually isn’t important enough to bother with. As another example, there are some ideas and approaches in ecology that I think are wrongheaded or dead ends, but that seem to be pursued only by a particular group of people–fairly small “research programs” or “schools of thought” if you like. If there’s not really any sign of those problematic ideas or approaches being taken up by anybody outside that “school”, is it really worth it to write a critique? (And let me emphasize that I’m sure some people feel precisely that way about my work! I’m sure someone out there can’t be bothered to write a critique of protist microcosms, because it’s only a few people who waste their time on microcosms!)
Slightly contra Brian, I don’t think critiquing other people’s work is easy–not if you’re doing it well. If anything, I find it harder than doing “regular” science. Precisely because many people don’t want to hear it. Critics (or as I once called them, “short sellers”) are never popular. Plenty of ecologists, whether they would admit it or not, think that the pre-publication peer review stage is the only stage at which it’s legitimate to criticize someone else’s work. They think that any criticism of published work necessarily is nitpicking, necessarily amounts to making the perfect the enemy of the good (this is another reason why post-publication review mostly is a non-starter). It’s not easy to get your audience to notice and take seriously something that they are strongly inclined to ignore or dismiss out of hand. Relatedly, a commenter here once expressed the view that critiquing the work of others is cowardly, whereas it takes guts to do your own science. I’d say just the opposite. Precisely because doing your own science is highly valued by others, whereas critiquing the published work of others often is frowned upon, it takes a lot of guts to critique published work.
So besides picking your battles and only voicing critiques that seem important enough to need voicing, what can you do? Here are some suggestions:
(i) Be sure of your ground–make sure you really do have good evidence and good arguments on your side, and that you explain them very clearly. Running your critiques past critical colleagues (not your best friend who is predisposed to agree with you!) helps a lot with that. Blogging has helped me with that–it’s how I developed my critique of the IDH into a publishable form. But of course, even blogging about a critique means voicing it publicly.
(ii) Another thing you can do is get cover–get other people (including established people, if possible) to join you in your critique. That’s one overlooked function of joint letters that we forgot to discuss in this old post–to provide safety in numbers to those writing them. Plus, a critique that comes from a group, especially a group that includes some well-known people, will (for better or worse!) be taken more seriously than one coming from a junior individual. Even if your critique doesn’t explicitly come from a group, if you know that you’re giving voice to something a lot of other people are thinking, you’ve got some cover. This is part of what gives me the courage to critique the IDH, or to critique large chunks of phylogenetic community ecology–I know that I’m only saying what a lot of other people are thinking (and in some cases have already said in the literature, although perhaps not as clearly or forcefully).
(iii) Another thing you can do is direct your critiques not at particular papers or the work of particular individuals, but at approaches or lines of research that many people are pursuing. Again, in critiquing the IDH or phylogenetic community ecology I’m not picking on any particular individual. Those are often especially important critiques to make, since by definition the bulk of our science consists of popular lines of research. But they can also be the least-risky critiques to make, because individuals are less likely to feel that you’re picking on them personally and so are less likely to see you as a jerk (maybe?). Of course, if you think there are serious technical flaws that are unique to a specific paper, you have no choice but to critique that paper. But that sort of thing isn’t all that unusual, so I wouldn’t be too scared of doing it (though in that case it’s often best to first write to the author privately with your technical comments, before writing to the journal).
(iv) Another thing you can do is address your own critique. If you can raise an issue, and then show how to solve it, people often like that. Similarly, if as part of your critique of existing research you can also suggest an exciting new avenue for research, people often like that. Of course, the problem is that most people don’t like it when a critique doesn’t have a simple solution. They don’t like being told “stop doing X” unless in the next breath you tell them “do Y instead–it’s an easier and better way to do the same thing X does!” Which is a problem because sometimes there is no simple solution to a valid critique, nor any exciting new research avenue to pursue instead. I personally don’t like the attitude that it’s only ok to point out a problem if you also know how to solve it, but it is what it is and there’s no changing it.
(v) Worth noting that the right sort of “meaty” critique often won’t be seen as a critique at all. For instance, in an old post I noted the excellent work of Gilbert & Bennett 2010 and Smith & Lundholm 2010, both critiquing a popular approach in metacommunity ecology by showing that it fails when applied to simulated data (for a toy illustration of this simulation-based style of critique, see the sketch after this list). Except that I bet to many readers it didn’t feel like a “critique”–it felt like an original study. Even though the take-home message–“this popular approach has serious problems and quite possibly should be abandoned”–is very critical. Along the same lines, “how to” papers laying out best-practice methodology generally are seen as original contributions, even if they also function as an explicit or implicit critique of previous work that fails to follow best practice. (I’m tempted to suggest that this connects to some people’s dislike of work based on the data of others. If your critique is based on new data or models that you developed yourself, that’s original work and probably fine. But if your critique is only based on citing data or models collected or developed by others, that’s just you carping from the sidelines and totally out of line!)
(vi) Finally, if possible you can use your own work as an example–critique yourself. For instance, think of Peter Adler’s critique of ecologists “selling” their work via superficial linkages to climate change, linked to above. Peter was hardest on himself. Or a while back I talked about an Oikos paper in which the authors found a problem in their own work that also crops up in the work of many others, and so wrote a critique that used their own research as the main example.
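To make (v) concrete, here’s a minimal, generic sketch of the simulation-based style of critique: simulate data where the truth is known by construction, apply the method under scrutiny, and count how often it gets the answer wrong. To be clear, this is not the actual analysis from Gilbert & Bennett or Smith & Lundholm–it’s a hypothetical toy example (naively testing for a correlation between two independent autocorrelated series):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi, rng):
    """AR(1) series: x[t] = phi * x[t-1] + white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_sims, n, phi = 1000, 100, 0.9
false_positives = 0
for _ in range(n_sims):
    x = ar1(n, phi, rng)
    y = ar1(n, phi, rng)  # independent of x by construction
    r = np.corrcoef(x, y)[0, 1]
    # Naive correlation t-test that (wrongly) treats observations as independent
    t_stat = r * np.sqrt((n - 2) / (1 - r**2))
    if abs(t_stat) > 1.98:  # approximate two-sided 5% critical value, df = 98
        false_positives += 1

# With the naive test, the false-positive rate comes out far above the nominal 5%.
print(f"False-positive rate: {false_positives / n_sims:.2f}")
```

Nobody reading that result would call it nitpicking; it reads as a finding about the method, which is exactly why this style of critique tends to land.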
Thanks, Jeremy & Brian, for fielding my question – and good advice all ’round.
One very positive change that’s coming is being able to critique papers that archive their data. You can then make your case by reanalyzing the original data and showing that there’s a problem. It’s much easier to avoid the subjective ‘I’m right, you’re wrong’ argument when you’ve based your case on the data. My favourite example is this paper by Eric Anderson: http://onlinelibrary.wiley.com/doi/10.1111/mec.12609/abstract
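The basic workflow can be very simple: pull down the archived data, re-run the analysis as described in the methods, and compare what you get to what was reported. A toy sketch of the idea (the file name, column names, and “reported” value below are hypothetical placeholders, not taken from the Anderson paper):

```python
import pandas as pd
import statsmodels.api as sm

# Load the archived dataset (hypothetical file; real data might sit on Dryad, etc.)
data = pd.read_csv("archived_dataset.csv")

# Re-fit the model as described in the paper's methods section
# (these column names are made up for illustration)
model = sm.OLS(data["abundance"], sm.add_constant(data["temperature"])).fit()

# Compare the recomputed estimate to the published one
reported_slope = 0.42  # placeholder for the value reported in the paper
print(f"Recomputed slope: {model.params['temperature']:.3f} vs. reported {reported_slope}")
```

If the recomputed number doesn’t match the published one, the disagreement is about the data and the analysis, not about anyone’s opinion.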
Outstanding answer, Jeremy.
“For instance, I’ve learned from personal experience on this blog that just raising a previously-unrecognized caveat or drawback to a basically-sound approach or idea tends to get you labeled as a concern troll (or whatever the offline equivalent is, if you do it in print). No matter how valid your point and no matter how much you emphasize that you agree that the approach or idea is still basically sound overall. I don’t think that’s always fair. Personally I always like having a complete understanding of all the advantages and drawbacks of any approach or idea. But that’s the way it is. So “raising a caveat” usually isn’t important enough to bother with.”
It’s *not* fair. Don’t give in to such criticisms! (Even though your reasons are very understandable.)
One of your greatest strengths is the way you thoroughly scope out the various sides of an issue, and this is *extremely important*, and the mark of a good thinker, IMO. I suppose it can sometimes be a fine line between pedantry and careful analysis, but I don’t see you engaging in the former at all. As long as it’s clear that you’re weighing the pros and cons of Issue X, as part of the intellectual scoping and weighing process to constrain and conceptualize the issue in your mind, then if Joe Reader thinks you’re being a “concern troll” or whatever, that’s Joe’s problem, not yours. This is important, because people love to over-simplify and pan-chromatize things in my experience, thereby forgetting that various nuances do exist that can be important, both for the issue at hand and for our mental processes. Doesn’t necessarily mean you have to include such considerations in your model (or that you even can), but you do have to be aware of them.
This is nonsense, the goal of science is to seek the truth. You Nancies are too concerned with feelings and careerism judging by the length of the answer. If the science is weak, it needs to be called out–post-publication comments are going to be great. People should be fearful that the fluff they publish is going to get called out.
I agree that the goal of science is to seek the truth. But the question is how you do that most effectively. If everybody ignores what you have to say, perhaps because they consider you a jerk, or a professional nitpicker, or someone who hasn’t published enough of their own work to have earned the “standing” to be taken seriously, well, then your critique will fall on deaf ears. You’ll be able to pat yourself on the back for having spoken up, and for being a more honest seeker of truth than all those oversensitive careerists you see yourself as being surrounded by–but you won’t have actually changed anything.
As to whether Brian and I are “Nancies”, I think I can safely say that you’re the first person ever to accuse Brian and me of this. 🙂 You may wish to check out the following posts and comment threads, and then come back and let us know whether or not you still think we’re “Nancies”:
https://dynamicecology.wordpress.com/2012/09/11/statistical-machismo/
https://dynamicecology.wordpress.com/2013/01/11/is-using-detection-probabilities-a-case-of-statistical-machismo/
https://dynamicecology.wordpress.com/2011/06/17/zombie-ideas-in-ecology/
https://dynamicecology.wordpress.com/2011/08/26/why-the-spandrels-of-san-marco-isnt-a-good-paper/
As to whether “comments are going to be great”, and people should fear that the fluff they publish is going to get called out, the evidence so far totally goes against you on that. Again, you might wish it otherwise (and for the record, I do)–but wishing don’t make it so:
https://dynamicecology.wordpress.com/2012/10/11/in-praise-of-pre-publication-peer-review-because-post-publication-review-is-an-utter-failure/
And if you respond to this by saying, well, then ecologists themselves need to change, I’d respond by saying that the same advice applies. I don’t think you change the prevailing culture in an entire field by calling people careerist Nancies, or doing nothing but criticizing the work of others. FWIW, one of my hopes for this blog is that it leads by example and helps in some small way to change the culture in ecology towards one in which more ecologists are more comfortable with vigorous-but-professional debate, including debate about published papers.
Pingback: Civil discourse | My Track Record
Excellent post, though I would take issue with the idea that post-publication peer review is not effective. Post-publication peer review is pretty much what the post is about! More formal tools are only just being developed. First-generation sites/tools were the work of individuals, but we now have community efforts, for example PubPeer. The latter is making inroads into people’s reading habits and lab journal clubs. As for the zombies, they do seem to be able to live beyond death, so even Max Planck may have been wrong here – his quote is “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” For untruths, it seems at times that they can indeed rise from the dead.
“though I would take issue with the idea that post-publication peer review is not effective. Post-publication peer review is pretty much what the post is about!”
I believe both that it’s important to try to do post-publication peer review as well as possible, and that even when it’s done as well as possible it’s rarely effective. Follow the links for various lines of evidence demonstrating that post-publication peer review is largely ineffective. That may change in future, of course–and I certainly hope it will–but if it does I think it will change very slowly. Just as individuals rarely change their minds about scientific truths, they rarely change their attitudes about peer review or other publication practices. And just as zombie ideas can live on in the next generation via the influence of one generation on the next, so too can attitudes about peer review and publication practices.
Pingback: A half thought out critique | Ecology for a Crowded Planet
Pingback: Post-publication “review”: signs of the times | Dynamic Ecology
Pingback: Post-publication review is here to stay–for the scientific 1% | Dynamic Ecology