Think plant-animal interactions are more “intense” in the tropics? In particular, think that herbivory levels are higher in the tropics, and that plant chemical defenses against herbivory are stronger there? Think again. Writing in the new issue of Ideas in Ecology and Evolution (open access), Angela Moles shows that those are zombie ideas that survive because of sheer dogmatism. Only 37% of papers find the expected higher herbivory at low latitudes, and the meta-analytic effect size is not significantly different from zero (Moles et al. 2011). And plant chemical defenses actually are stronger on average at high latitudes (Moles et al. 2011). But papers finding the expected pattern are cited several times more often than other papers, even if you restrict attention to papers by the same lead authors, published in the same journals! There are even cases where sloppy peer review allowed authors to claim support for this zombie idea despite being flat-out contradicted by their own data. For instance, if your data say, with P=0.23 or P=0.85, that you fail to reject the null hypothesis of no relationship between herbivory and latitude, you should not be claiming that “we found support for the hypothesis that plants suffer greater herbivore pressure at low latitudes”! (Especially not in PNAS!)
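(Aside for the statistically minded: to make “the meta-analytic effect size is not significantly different from zero” concrete, here is a minimal fixed-effect meta-analysis sketch. The per-study effect sizes and sampling variances are invented for illustration; they are not Moles et al.’s data.)

```python
# Minimal fixed-effect meta-analysis sketch: pool per-study effect sizes by
# inverse-variance weighting and ask whether the 95% CI excludes zero.
# The effect sizes and variances below are invented for illustration; they
# are NOT the data from Moles et al. 2011.
import numpy as np

# hypothetical per-study latitude-herbivory effect sizes (on a common scale)
# and their sampling variances
effects = np.array([0.40, -0.25, 0.10, -0.35, 0.05, 0.20, -0.15])
variances = np.array([0.04, 0.06, 0.03, 0.05, 0.02, 0.07, 0.04])

w = 1.0 / variances                          # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)     # pooled effect size
se = np.sqrt(1.0 / np.sum(w))                # its standard error
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

print(f"pooled effect = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# If the CI spans zero (as it does here), the pooled effect is not
# significantly different from zero, no matter how many individual studies
# reported a "significant" gradient in one direction or the other.
```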
Peer review is supposed to be the scientific literature’s defense against errors. But if the reviewers are in the grip of the same zombie ideas as the authors, then far from eradicating zombie ideas, peer review actually reinforces them.
I’ve also been wondering whether our respect for “classic” papers and “textbook examples” isn’t part of the problem here. Just because a paper is a “classic”, meaning it was the first to suggest an idea, or is highly cited, or was especially well-conducted or clearly argued, doesn’t make it representative or typical. When it comes to evaluating purportedly general empirical patterns like latitudinal gradients in herbivory and plant defense, we ought to care much more about what’s typical, and much less about what “classic” papers say.
Kudos to Angela Moles for fighting the good fight. But if my own experience is any guide, she’s got an uphill battle ahead of her. Hope this post will help a bit.
UPDATE: And as noted in the comments, if a recent AREES review (Schemske et al. 2009) reinforces the zombie idea, you’re really in for an uphill battle! Angela’s IEE paper discusses why Schemske et al. 2009 shouldn’t be relied on.
HT Jarrod Cusens, via Twitter.
It is noteworthy that a recent review on the same topic (Schemske et al. 2009, Annu. Rev. Ecol. Evol. Syst.) concluded that there was overall support for the latitudinal hypothesis, although it was not a quantitative review. Were these authors just drawing conclusions based on their existing prejudices? Did they ignore studies or results that did not fit the expected hypothesis?
Angela comments on the Schemske et al. review in her new IEE paper. Schemske et al. apparently cited just three papers on latitudinal gradients of herbivory, all from the same lab. Now, one of those papers was a meta-analysis and the other two were data compilations. But none of the three was comprehensive even at the time, and all are now well out of date.
Unfortunately, the fact that Schemske et al. is an AREES review is probably going to do a lot to reinforce this zombie.
Plus, one wonders about additional reporting/publication bias, given that there are firm beliefs about which way the result should go.
Reminded me of another recent paper that also found the expected latitudinal effect reversed: “Specialization of mutualistic interaction networks decreases toward tropical latitudes” http://dx.doi.org/10.1016/j.cub.2012.08.015
Yes, publication biases likely would further reinforce Angela’s case.
I strongly suspect so. Hard to quantify, but my own experience was that my global empirical study on the latitudinal gradient in defence was rejected 7 times before being published, and the reviews mostly had the tone “well, we know this gradient goes the other direction, so your study must be flawed, perhaps because you didn’t account for X”. It was only accepted after my meta-analysis and literature review showed that empirical results going in the “wrong” direction were not actually unusual.
If ecology in general is anything like fisheries, rebuttals just have no power at all to shift scientific opinion: the original articles are cited 17 times more often than the rebuttals, and uncritically accepted 95% of the time. Even when the rebuttal is mentioned together with the original, 8% of the time the authors believe the rebuttal supports the argument of the original paper! The point is that there must be a new and believable narrative, not just a criticism of the original paper.
Source: Banobi, J. A., T. A. Branch, and R. Hilborn. 2011. Do rebuttals affect future science? Ecosphere 2(3):art37. doi:10.1890/ES10-00142.1.
Yep.
And the trouble is that a new narrative is really hard to construct if the truth is “it depends” or “the data are a shotgun blast, there’s no pattern at all” or something like that.
I also had a thought (and this is something I may post on at some point): how come the particularism of many ecologists, the “but at MY field site things are different” attitude, doesn’t act as a strong barrier to the establishment of zombie ideas about non-existent “general patterns”?
Publication bias is certainly an issue. I was also going to mention the Schleuning et al. paper that Florian posted, but also to add some historical context. The idea that tropical plant-pollinator interactions are on average more specialised than temperate ones is a long-standing one that I had hoped was put to bed over a decade ago, when I published an analysis of two independent data sets showing that the perceived tropical specialisation was due to under-sampling in the tropics (Ollerton & Cranmer 2002, Oikos 98: 340-350). Unfortunately, at the same time Pedro Jordano and Jens Olesen published a paper showing the opposite: that the tropics were more specialised. But, fine and respected scientists though they are, they did not take sampling effort into account. Now, depending on what you wish to claim in a study, it’s possible to cite one or the other paper to support your argument. And I’ve this week reviewed a manuscript that did just that, which annoyed the hell out of me!
PS: I’ve just checked WoK, and that Oikos paper has been cited 78 times since 2002, so it’s not as if authors can claim it’s obscure and no one knows about it!
I’m planning a post where I ask readers to name other zombie ideas and briefly summarize the evidence that they’re zombies. With a view to either posting on those zombies myself, or inviting readers to write guest posts on them.
OK, consider that one as an addition to the pot!
Jeremy, just a quibble about the interpretation of null hypothesis significance testing (NHST). Your statement “For instance, if your data say, with P=0.23 or P=0.85, that there’s no relationship between herbivory and latitude…” seems to imply that NHST can show the null hypothesis is true (I’m assuming the null hypothesis was one of no effect). In fact, NHST can only reject a null hypothesis. Failure to reject the null only lends support to the null if power is high. I’d guess that power was not calculated (it almost never is in ecology), so it is impossible to say what the support for the null was (regardless of the P-value).
Hi Mike,
Yes, that’s sloppily phrased, I’ll update.
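(To illustrate Mike’s point about power, here is a minimal simulation sketch. The assumed true effect, r = 0.3, and sample size, n = 20, are hypothetical numbers chosen for illustration, not values from any study discussed here.)

```python
# Minimal simulation sketch of Mike's point: a non-significant P-value only
# "supports" the null if the test had decent power to detect a real effect.
# The true effect (r = 0.3) and sample size (n = 20) are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
true_r, n, alpha, n_sims = 0.3, 20, 0.05, 10_000

rejections = 0
for _ in range(n_sims):
    latitude = rng.uniform(0, 60, n)                  # fake study latitudes
    z = (latitude - latitude.mean()) / latitude.std() # standardized latitude
    noise = rng.normal(0, 1, n)
    # build herbivory with population correlation true_r to latitude
    herbivory = true_r * z + np.sqrt(1 - true_r**2) * noise
    _, p = pearsonr(latitude, herbivory)
    rejections += p < alpha

print(f"power to detect r = {true_r} with n = {n}: {rejections / n_sims:.2f}")
# With n = 20 the power comes out around 0.25, so P = 0.23 or P = 0.85 says
# almost nothing about whether the latitude-herbivory relationship is real.
```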
Speaking of other zombie ideas, a few years ago Tim Wootton gave a talk at ESA where he traced back the evidence that the ocean was acidifying under increasing atmospheric CO2. Obviously, basic chemistry says this ought to happen. But according to Tim (and hopefully my memory is accurate here!), when you actually look at the literature, the data are shockingly scant. It’s all just people citing other people who’ve *said* that the ocean is acidifying. Now, I haven’t looked into this myself at all, and at the time I found Tim’s claim so stunning I could hardly believe it. And it could well be that the data are better now. But I just wanted to throw that one out there as a candidate, in case anyone knows more about it than I do. (Let me emphasize that I am going by memory here and my memory could be *way* off! Do not go blogging or tweeting “Jeremy Fox and Tim Wootton say that the oceans aren’t acidifying” or “Jeremy Fox and Tim Wootton say there’s no evidence that the oceans are acidifying”!)
Not really a zombie idea sensu stricto in any case, since Tim’s talk was about lack of evidence for a claim, rather than a claim made in the face of contrary evidence. But perhaps one step removed from being a zombie.
Thanks for the post, Jeremy. Re the specialization-generalization debate that Jeff mentions: broad patterns like large-scale latitudinal trends are quite difficult to assess. It’s great that we now have much more data and evidence available for more rigorous tests of these ideas than we had back in 2000. There was no way to account for sampling effort when we tested the hypothesis with our 29 datasets, in contrast with Jeff’s evidence based on just 2 datasets (even with appropriate ways to assess sampling robustness). Besides, our evidence for a (slight but significant) trend toward more specificity in low-latitude interactions was for the animals’ side, not the plants’. But I had already discussed and advocated how important it is to assess sampling, in an AmNat 1987 paper. 😉
Re the Schleuning et al. paper referred to by Florian: a potentially confounding aspect is that they measured “specialization” rather than “specificity”. But that’s another debate… I myself have some problems with the way their specialization metric gives more weight to frequently observed than to rarely observed species, and with how it is weighted by interaction frequencies. And this results, not unexpectedly, in higher “specialization” (not specificity) at high latitudes. Details are important when you aim to falsify zombies.
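(A minimal sketch of the contrast Pedro is drawing, assuming a raw Kullback-Leibler-based index of the kind that, as I understand it, underlies the normalized d’ used by Schleuning et al.; the toy interaction matrix is invented.)

```python
# Sketch of frequency-weighted "specialization" (the raw Kullback-Leibler d,
# which I take to underlie the normalized d' used by Schleuning et al.)
# versus simple "specificity" (number of partner species). Toy data only.
import numpy as np

# rows = animal species, columns = plant species, entries = visit counts
web = np.array([
    [40, 0, 0],   # animal A: many visits, all to one commonly used plant
    [1,  1, 1],   # animal B: few visits, spread over three plants
])

q = web.sum(axis=0) / web.sum()       # each plant's share of all visits

for name, row in zip("AB", web):
    p = row / row.sum()               # this animal's visit distribution
    nz = p > 0
    d = np.sum(p[nz] * np.log(p[nz] / q[nz]))   # raw KL-based d
    degree = int(nz.sum())            # unweighted specificity: # of partners
    print(f"animal {name}: degree = {degree}, raw d = {d:.2f}")

# Animal A has degree 1 (highly specific) yet a raw d near zero, because all
# its visits go to the most-used plant; animal B visits three plants yet gets
# a much higher d, because two of its partners are rarely used overall.
# Weighted "specialization" and unweighted "specificity" need not rank
# species the same way.
```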
Science is an incremental process. To me it’s a bit short-sighted to label *any* idea a zombie, because all ideas have an intrinsic value. Some ideas are, unfortunately, more resistant to falsification than others, for reasons unrelated to science (e.g., academic power, schools of thought, etc.); that’s the business of science, after all, as a human-driven process. The great thing is that we now have much better scientific communication avenues to test the “zombies” and disseminate the results. I don’t know, but my gut feeling is that a bunch of apparently “zombie” ideas in ecology simply deserve a proper test, as convincing as Angela’s analysis. We’ll fail to falsify some of them, and we’ll robustly falsify others, but the process will undoubtedly generate many, many new ideas in turn.
Hi Pedro – yes, agreed, latitudinal trends in interactions are not at all easy to study as the data are currently very limited. My feeling is that we’re only just beginning to understand that these are questions worth asking, and that dedicated data collection will be needed before we can really gain answers. Must go back and re-read your 1987 paper soon 😉
Pingback: How do you teach controversial scientific ideas? | Dynamic Ecology
Pingback: WIWACS vs. zombie ideas | Dynamic Ecology
Pingback: Ask us anything: how do you critique the published literature without looking like a jerk? | Dynamic Ecology
Pingback: On progress in ecology | Dynamic Ecology
Pingback: Is the notion that species interactions are stronger and more specialized in the tropics a zombie idea? (guest post) | Dynamic Ecology