A big bandwagon in community ecology over the past decade or so has been the idea that one can infer the determinants of community structure from whether locally-coexisting species are more or less similar phenotypically. Coexistence of dissimilar species supposedly indicates an important role for interspecific competition, while coexistence of similar species supposedly indicates abiotic “habitat filtering”. While this is an old idea, going back at least to Darwin, Webb et al. (2002) can be given much of the credit for getting the current bandwagon rolling. They suggested a simple recipe for applying this idea using phylogenetic data (since phenotypic traits often are phylogenetically conserved).
Unfortunately, this bandwagon is based on dubious, outdated ideas. And I’m far from the only one who thinks so. For instance, Mayfield & Levine (2010) is a recent paper pointing out that this bandwagon is based on a fundamentally-flawed conceptual picture of how coexistence works. A little while back, I did a post looking at how M&L has been cited since it was published. Is there any evidence that this particular bandwagon has been stopped, or forced to change direction, by their critique? Not really. Most subsequent papers on this topic either didn’t cite M&L, cited it only in passing, cited it only as one surmountable technical critique among others, or (in a couple of cases) miscited it.
In this post, I want to ask why that is. Not just with respect to M&L, but more generally. Why do ecologists often write in such a way as to ignore or gloss over critiques of their approach, interpretations, and conclusions?
The publication of a new paper on inferring coexistence mechanisms from patterns of phenotypic similarity (Smith et al., in press at Ecology) prompted me to post on this. One of the co-authors of Smith et al. is Nathan Kraft, who’s done a fair bit of work on this topic, and who was kind enough to engage in a good-natured debate with me in the comments on my post on M&L. I was struck by the apparent contrast between those comments and the content of Smith et al. Nathan is obviously well aware of M&L–but his paper struck me as basically glossing over their critique. I’m guessing that the authors of Smith et al. either think they have legitimate reasons to gloss over M&L, or don’t see themselves as glossing over M&L at all. (UPDATE: Since this post was published, I’ve been made aware that Smith et al. isn’t a good example to illustrate the general point I wanted to make, for which my apologies. I think the general issue is still worth thinking about.)
I emphasize that I’m not trying to pick on Smith et al. here or single them out for criticism. As I’m sure was clear from my old post, I disagree with the authors of Smith et al., and many other workers in this area, on a number of issues. This post isn’t about those disagreements–it’s about how we write about them. Disagreement, including among smart, open-minded, well-informed, well-meaning people, is a normal part of science. In this post I’m interested in how we deal with that fact in our papers–or maybe how we gloss over or ignore it!
First, let me lay out the apparent contrast that struck me. In the comments on my old post on M&L, Nathan suggested that I’d underestimated the impact of M&L. He said that it was no longer possible to publish a paper based on the overly-simple ideas critiqued by M&L:
In short, there are many processes that can generate similar phylogenetic patterns…my sense is that the M&L paper has prevented more papers then you might think from making it through review- I think it is ratcheting up the bar for contributions in this area. I agree with your sentiment that going forward there is likely a limited role for simple null model analysis of phylogenetic structure in the absence of other information (functional traits, experiments, demographic modeling, etc) in all but the most challenging of systems to study, but I think things have been trending this way for a while now…
This is a fairly common reaction to my posts on bandwagons and zombie ideas: readers often respond by questioning whether anyone is still riding those bandwagons, or still believes in those zombie ideas. Indeed, I myself once wondered if a zombie idea I was critiquing was actually a dead horse (to mix a metaphor).
But it turns out one still can publish papers based on the ideas critiqued by M&L–as illustrated by Nathan’s own paper! Here’s a quote from the abstract of Smith et al.:
Thus, dispersion of trait values, or functional diversity (FD) of a community can offer insights into processes driving community assembly. For example, underdispersion of FD suggests that abiotic “filtering” of species with unfavorable trait values restricts the species that can exist in a particular habitat, while even spacing of FD suggests that interspecific competition, or biotic “sorting,” discourages the coexistence of species with similar trait values…We develop a set of null model tests that discriminate between FARs generated predominantly by environmental filtering or biotic sorting and indicate the scales at which these effects are pronounced…
And here’s a quote from the second and third paragraphs of the introduction:
Typically, studies of community assembly focus on several different processes, including the effects of interspecific competition or “biotic sorting” and abiotic “habitat filtering” of species with unfavorable traits (e.g., Cornwell and Ackerly 2009). Under the biotic sorting hypothesis, differentiation in trait values facilitates the coexistence of species that partition available resources, meaning that species with similar trait values will be unlikely to coexist (Brown and Wilson 1956, MacArthur). Over short time spans biotic filtering can lead to competitive exclusion of species with trait values too similar to competitively superior species (e.g., Pyke 1982), leaving the distribution of FD within a community more evenly spaced than would be expected by chance. Over long time spans it can lead to ecological character displacement as a result of evolutionary divergence of trait values, and thus facilitate coexistence (Dayan and Simberloff 2005). Either process will lead to more evenly-spaced gaps in the distribution of FD than expected by chance (Stubbs and Wilson 2004).
Alternatively, the habitat filtering hypothesis states that the abiotic environment limits the successful establishment of all but a group of species with a specific set of trait values, resulting in a distribution of FD that has less variation than one generated from a random sample of trait values in the region (van der Valk 1981).
The quoted passages articulate precisely the same ideas that underpin Webb et al. (2002) and that M&L critiqued. Yet M&L is cited just once, in passing, in the Discussion, in a short paragraph that also mentions other unrelated caveats to the authors’ approach.*
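For concreteness, here is a minimal sketch of the kind of null-model dispersion test the quoted passages describe. This is not Smith et al.’s actual code; the statistic (mean nearest-neighbor trait distance) and the null model (random equal-richness draws from the regional pool) are generic textbook choices of my own, just to make the logic explicit:

```python
# Illustrative null-model test of trait dispersion (NOT Smith et al.'s
# implementation). Statistic: mean nearest-neighbor trait distance.
# Null model: random equal-richness draws from the regional species pool.
import random
import statistics

def mean_nnd(traits):
    """Mean distance from each species' trait value to its nearest neighbor."""
    return statistics.mean(
        min(abs(traits[i] - traits[j]) for j in range(len(traits)) if j != i)
        for i in range(len(traits))
    )

def dispersion_test(local_traits, regional_pool, n_null=999, seed=1):
    """Return the observed statistic and its quantile within a null
    distribution built from random draws out of the regional pool."""
    rng = random.Random(seed)
    obs = mean_nnd(local_traits)
    null = [mean_nnd(rng.sample(regional_pool, len(local_traits)))
            for _ in range(n_null)]
    return obs, sum(x < obs for x in null) / n_null
```

Under the pattern-to-process mapping at issue here, a low quantile (trait clustering) would be read as “habitat filtering” and a high quantile (even spacing) as “biotic sorting”. M&L’s point, of course, is that this mapping itself is suspect, however carefully the null model is constructed.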
In passing, Smith et al. also perpetuate a zombie idea. They say in the discussion that their results “concur with the general expectation that competition will be reduced in harsh environments, where abiotic constraints dominate community assembly”. This is incorrect; there is no such expectation. Competition may well be weaker in harsh environments–but it also takes less competition to produce exclusion there, so there’s no reason to expect competition to matter less. See here and here for details.
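The underlying logic can be shown with a toy calculation (all numbers invented for illustration; this is a sketch of the general point, not a model from any particular paper). An invader persists only if its intrinsic low-density growth rate exceeds the competitive reduction in that growth rate, and in a harsh environment there is less growth to offset:

```python
# Toy illustration with invented numbers: "weaker" competition can still
# exclude an invader in a harsh environment, because intrinsic growth is
# low to begin with.
def invader_persists(intrinsic_growth, competitive_effect):
    # Invader's low-density growth rate is intrinsic growth minus the
    # competitive reduction; it persists only if the net rate is positive.
    return intrinsic_growth - competitive_effect > 0

benign = invader_persists(1.0, 0.8)  # strong competition, invader persists
harsh = invader_persists(0.1, 0.2)   # much weaker competition, yet exclusion
```

So measuring weak competition in a harsh environment doesn’t, by itself, license the conclusion that competition matters less there.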
So with that background, here are my thoughts, in no particular order:
1. It sure looks to me like it’s still possible to publish papers in leading ecology journals based on the same simple ideas that underpin Webb et al., without citing (or only citing in passing) contrary evidence and arguments! Similarly, whatever you think of the correctness of the claim that competition is irrelevant in harsh environments, it’s still perfectly possible to make that claim in a leading journal without citing contrary evidence and arguments. As to whether or not that’s a bad thing, and whether or not anything could be done about it (two separate questions), read on…
2. I’m curious whether Nathan sees any contrast between his blog comments and the content of Smith et al. I’m guessing he doesn’t–but if not, why not? I can imagine several possibilities. I’ll frame them more generally, so as to emphasize that I’m not picking on Smith et al. Why might authors who are aware of criticisms of their approach nevertheless cite and discuss those criticisms only in passing or not at all? I think the following list covers most (all?) of the possibilities:
(a) They don’t agree with the criticisms.
(b) They don’t think the criticisms apply to their work.
(c) They don’t see the criticisms as especially serious.
(d) They take the view that no approach is perfect, and so set to one side criticisms of their approach on the view that we have to do this in order to get anywhere in science. Hopefully future work will address or resolve the criticisms.
(e) They don’t want to write a defensive-sounding paper, or let their “take home message” get lost. Instead, they take the view that it’s their paper, they’re entitled to state their views and interpret their data as they wish, so long as they don’t make any clear-cut technical mistakes. If others have criticisms, they can express them in their own papers, and readers will make up their own minds.
(f) They are using an approach used by others, and take the view that this is legitimate until such time as everyone in the field agrees that the approach should no longer be used.
(g) They see the criticisms as beyond the scope of the paper to address.
(h) Intellectual dishonesty: the authors ignore criticisms they know are valid, in the hopes that the referees and editors won’t notice. I mention this only for completeness. I’m sure it happens, but I’m also sure it’s very rare.
Some remarks on these:
(a) On its own, disagreeing with a criticism is not a legitimate reason for not citing or discussing it.
(b-c) I suppose these reasons for ignoring criticisms might be legitimate, but only if the authors are correct that the criticisms are inapplicable or of minor importance. And people will disagree on that. For instance, I’m guessing that Smith et al.’s view on M&L is some combination of (b) and (c) (perhaps with a bit of (d) and (g) thrown in). If so, then I disagree with them on that. So do M&L, who explicitly emphasize the fundamental importance of their critique as compared to the less-fundamental issues raised in other critiques of work in this area. In light of the fact that people often will disagree on how applicable or important a criticism is, I think it’s best for authors to take the time to acknowledge such criticisms and explain why they think those criticisms are inapplicable or unimportant. And of course, authors do often do this, but I’m not sure they do it often enough.
(d) Brian and I have a good-natured, long-running disagreement about this. He argues that it’s often best for researchers to push ahead despite critiques of their approach, in the hopes that we’ll learn something along the way and that the critiques will somehow be addressed by future research. You don’t want to make the best the enemy of the good, or the good-enough. Whereas I think that’s too often an excuse for people to focus on putative shortcuts, approaches that falsely but seductively promise an easy path to inferring process from pattern. I worry not about making the best the enemy of the good, but about letting the apparently-good be the enemy of the actually-good. And both Brian and I can point to cases where we’re right. There are cases where temporarily setting aside critiques and pushing ahead has proven useful. And conversely, there are cases where it’s merely prolonged the time until it’s widely recognized that we’ve gone down a blind alley.
Can anything general be said about the circumstances in which Brian’s likely to be right, vs. the circumstances in which I am? I don’t know, but that seems like an important question to ask. Here are some tentative answers.
- If you can think of a specific way in which a critique could be addressed, at least in principle, that’s a sign that it’s worth pushing ahead in the hopes that the critique will eventually be addressed (it’s not definitive, but it’s a sign). Conversely, if you can’t, that’s a sign (again, not definitive) that pushing ahead just means going further down a blind alley. The word “specific” is key here. Because we can always express the vague hope that “future research” will overcome problems with current approaches. I mean yes, sometimes future research addresses the flaws and limitations of current research in ways current researchers never could have anticipated. But I don’t know that relying on that sort of deus ex machina (future research ex machina?) is a good policy in general. For instance, I don’t see any concrete, straightforward way to address the critique of M&L. Just having trait data as opposed to phylogenetic data, or having data at multiple spatial scales, or whatever, doesn’t really address the critique.
- If pursuing a line of research will yield important, novel information no matter whether a critique is addressed or not, that’s a reason to keep pushing ahead. I’d only add that, if you’re pushing ahead for this reason, it’s important to be crystal-clear about that. Don’t pretend (consciously or unconsciously) that you’re pushing ahead for a different reason. In particular, observational studies in ecology often are justified on two distinct grounds. One is that the patterns they document are interesting in their own right. Another is as a shortcut to insight about processes or mechanisms: we can infer process from pattern, based on easily-collected observational data and easy-to-apply statistical methods. The latter justification is almost always wrong: reliably inferring process from pattern is never easy and can be done only in quite specific circumstances. I think that if there are multiple motivations for pursuing your research, some of which are on firmer ground than others, then you should clearly lay out and separate those motivations, and be up front about how firmly they’re grounded. So for instance, I’d like to have seen Smith et al. make more of a case for studying the scale dependence of patterns of phenotypic similarity, independent of our ability to infer anything from those data about the underlying processes that generated the data.
(e) The view that peer review should only be for correcting clear-cut technical mistakes, and that otherwise we should just let the authors say whatever they want and let “marketplace of ideas” sort it out, seems to be increasingly common. Sort of like evolution by natural selection–just let authors throw random “variant” claims out there, and let “selection” by readers pick which “variants” to keep. Peer review should only be for filtering out the “inviable” variants. We might almost call this the “adversarial” model of science, by analogy with the adversarial model of legal proceedings. Each side argues their own case however they see fit, within certain bounds (e.g., no perjury, no hearsay evidence, etc.). The judge or jury listens to both sides and chooses one or the other.
I don’t agree with this model of the “marketplace of ideas”, for a couple of reasons. First, there’s often substantial disagreement as to what constitutes a “technical” mistake. For instance, I think Smith et al.’s passing claim about the unimportance of competition in harsh environments is a clear-cut technical mistake–but I presume they’d disagree! As another illustration, both I and a colleague of mine have had papers rejected from Plos One, which professes to judge mss only on their technical soundness. Obviously, my colleague and I don’t agree that our papers weren’t technically sound, given that we chose to submit them in the first place!
Second, this argument implicitly assumes a lot of things about the “marketplace of ideas” in science, including some things that aren’t true. Markets in economics can function less-than-optimally for all sorts of reasons. Lack of appropriate, well-enforced laws and norms to guarantee appropriate behavior by market participants. Imperfect information. Externalities. Tragedies of the commons. Incomplete markets (i.e. no markets in some goods). Transaction costs. Conflicting future plans and expectations of market participants. Etc. Similarly, the “market of ideas” in science can function less-than-optimally for all sorts of reasons. As evidenced by the fact that bandwagons and zombie ideas exist in science. They represent market failures in the marketplace of ideas, perhaps analogous respectively to financial “bubbles” and to persistent failures of real-world markets like the labor market to “clear”. Arguably, they arise in part because the scientific marketplace lacks any mechanism for “short selling”. And in evolution there are all sorts of well-known circumstances in which an evolving population will fail to evolve or maintain an optimal phenotype. So rather than just blindly trusting in the “marketplace of ideas”, I think it’s worth discussing what sorts of rules, norms, and practices would help ensure that the marketplace of ideas functions as well as possible. For instance, elsewhere I’ve laid out why I think pre-publication peer review is important to a well-functioning marketplace of ideas in science. And if one effect of this blog is to bring the “marketplace of ideas” a bit closer to “perfect information” about which ecological ideas are zombies, I’ll be thrilled!😉
Having said that, it certainly is true that a paper can be bogged down by over-defensive language and by paying too much attention to caveats and criticisms. So while I don’t think it’s a good idea for authors to just be allowed to say whatever they want as long as it’s “technically correct”, nor do I think it’s a good idea for authors to only be allowed to say things that everybody, or almost everybody, would agree on. I confess I’m unsure what to do about this.
(f) Personally, I don’t like appeals to the collective authority of the majority, though I’m not above using them myself in cases where I also think I have substantive grounds for my position. Worth noting that this is precisely how defenders of zombie ideas often defend them: not on substantive grounds, but by arguing “everyone has long believed this, so it must be true, and anyway we can’t afford to stop believing it.” The problem with appealing to the “wisdom” (read: madness) of crowds is that the crowd can be wrong. When it is, you end up perpetuating widespread mistakes. Just because lots of people make a mistake doesn’t mean it’s any less of a mistake.
(g) In many cases it’s legitimate to set aside, or even totally ignore, certain issues as beyond the scope of the paper. For instance, every ecologist writing a paper attempting to causally explain some phenomenon just ignores Robert Peters’ claims that causal explanation is not a legitimate goal of science. And I think that’s fine; no paper can cover everything and you have to draw the line somewhere. The question is where to draw it.
Papers in philosophy and mathematics often note foundational issues but then set them to one side. For instance, in philosophy, it’s common for authors to debate secondary issues while acknowledging that the whole debate could be rendered moot depending on how some more fundamental issue is resolved. Analogously, in mathematics one might write a paper asking “what else would be true if theorem X were true?”, even though theorem X has yet to be proven, and the whole paper would be moot if theorem X turned out to be false. For instance, a large literature in number theory was built on the assumption that Fermat’s Last Theorem is true, before Andrew Wiles famously proved its truth. And this style of writing isn’t unknown in ecology and evolutionary biology. Charles Darwin famously devoted an entire chapter of the Origin to addressing objections to his theory, objections that he explicitly said would prove fatal to his entire theory if they were true. This seems like an admirably honest and forthright way of writing to me. For instance, imagine if, rather than briefly citing M&L in passing, Smith et al. had said something like the following: “Our paper is based on fundamental assumptions about the mapping between species’ traits and the underlying coexistence mechanisms. Recently, Mayfield & Levine (2010) have strongly criticized those assumptions. Addressing those criticisms is beyond the scope of our work, which focuses on issues of scale dependence. However, we acknowledge that the validity of our work hinges crucially on the correctness of these assumptions.” On the other hand, it’s been argued that this style of writing in philosophy creates an unfortunate tendency to focus on trivialities while ignoring foundational issues. If everyone always acknowledges a critique but sets it aside as beyond the scope of their work, in practice it’s as if everyone’s just ignoring the critique completely.
As I said, people often disagree on when it’s legitimate to just ignore a criticism as beyond the scope of the paper. For instance, I’ve ignored or briefly dismissed such criticisms of my own work. There are ecologists who think that the Price equation is trivial, that we should give up on community ecology, and that microcosm experiments are always and everywhere valueless or worse. Sometimes I’ve been told by editors to add discussion of such issues–and sometimes I’ve included discussion of such issues only to be told by the editor to delete it! I’ve struggled with this as an editor myself. For instance, we already know on theoretical grounds that one cannot infer anything from local-regional richness relationships about the determinants of local community structure. So does that mean I should reject as moot a paper proposing a new statistical approach for estimating local-regional richness relationships, or a paper using that new approach to re-analyze the published empirical literature on this topic (see this post)? I freely admit I don’t have any easy answers here.
3. As I noted earlier, Smith et al. perpetuate a zombie idea about environmental harshness and the importance of competition as a passing remark; their paper isn’t really about that topic. Lately I’ve been wondering how important such passing remarks are for the perpetuation of zombie ideas, and how closely they’re policed by reviewers and editors. Also how closely they can be policed. By definition, an author’s passing remarks often concern topics only tangentially related to the main point of the ms–which means such remarks often concern topics on which both the author and the referees don’t have great expertise (at least not relative to their expertise on the main topic). So I wonder a bit if passing remarks are both particularly likely to be wrong, and particularly likely to slip through the peer review process when they are wrong. I don’t know the answer to those questions.
Conclusion. Let me emphasize again that I’m not trying to pick on anyone here. I enjoyed the discussion on my old M&L post and found it valuable, precisely because Nathan and others who disagreed with me were kind enough to participate and push back. I hope readers and the other participants in the discussion felt the same way, and will welcome the chance to continue it. Let’s keep pushing each other–it’ll make everyone’s science better.
UPDATE: Apparently, great minds think alike: The Lab and Field just posted on basically the same topic. That post links the issue to the rarity of corrections and retractions in ecology, arguing that far too many clear-cut technical mistakes are getting through peer review, never to be corrected or retracted. I’ve talked about the rarity of retractions in ecology before, but only in the context of retractions for misconduct, not for technical errors. And I’m flattered to see that, if anything, The Lab and Field is even more passionate than me about ridding the literature of zombie ideas–he hints that they ought to be purged via mass corrections and retractions!
*Smith et al. also have an appendix purporting to show that their approach correctly separates simulated data generated by biotic filtering vs. habitat filtering. The appendix isn’t yet available online so I can’t evaluate it. I suspect that the results they report in the appendix will be sensitive to how they modeled “biotic filtering” and “habitat filtering”. (In general, I think you want the simulation model generating your data to be “structurally dissimilar” to the analytical approach you’re trying to validate. Drew Tyre and Volker Grimm call this “virtual ecology”. For instance, using an individual-based dynamical simulation model to generate community dynamics, rather than, say, a model that just assigns “traits” to species and then decides which species persist or not based on how similar their traits are. Otherwise, you risk effectively assuming the scientific validity of the very approach whose validity you’re trying to test.) But the general points made in the post are all independent of the content of the appendix of Smith et al. As I said, the post isn’t really about whether Smith et al. in particular is correct or not.
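For what it’s worth, here is a minimal sketch of the kind of “structurally dissimilar” data generator I have in mind (model choices and parameter values are entirely my own, purely for illustration). Community membership emerges from simulated Lotka-Volterra dynamics with a trait-similarity competition kernel, rather than from a rule that directly deletes species by trait similarity:

```python
# Minimal "virtual ecology"-style data generator (illustrative only; all
# parameters invented). Survivors emerge from Lotka-Volterra dynamics with
# a Gaussian trait-similarity competition kernel, so using the output to
# validate a trait-gap analysis doesn't presuppose that analysis's logic.
import math
import random

def simulate_community(n_species=20, steps=5000, dt=0.01,
                       kernel_width=0.15, seed=0):
    rng = random.Random(seed)
    traits = [rng.random() for _ in range(n_species)]
    # Pairwise competition declines with trait distance (Gaussian kernel).
    alpha = [[math.exp(-((ti - tj) / kernel_width) ** 2) for tj in traits]
             for ti in traits]
    dens = [0.1] * n_species
    for _ in range(steps):
        # Euler step of dN_i/dt = N_i * (1 - sum_j alpha_ij * N_j)
        change = [dens[i] * (1.0 - sum(alpha[i][j] * dens[j]
                                       for j in range(n_species)))
                  for i in range(n_species)]
        dens = [max(0.0, dens[i] + dt * change[i])
                for i in range(n_species)]
    survivors = [traits[i] for i in range(n_species) if dens[i] > 0.01]
    return traits, survivors
```

One could then feed the survivors’ traits (with the full trait list as the “regional pool”) into whatever null-model test is being validated, and ask whether the test recovers the known generating process.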