A big bandwagon in community ecology over the past decade or so has been the idea that one can infer the determinants of community structure from whether locally-coexisting species are more or less similar phenotypically. Coexistence of dissimilar species supposedly indicates an important role for interspecific competition, while coexistence of similar species supposedly indicates abiotic “habitat filtering”. While this is an old idea, going back at least to Darwin, Webb et al. (2002) can be given much of the credit for getting the current bandwagon rolling. They suggested a simple recipe for applying this idea using phylogenetic data (since phenotypic traits often are phylogenetically conserved).
Unfortunately, this bandwagon is based on dubious, outdated ideas. And I’m far from the only one who thinks so. For instance, Mayfield & Levine (2010) is a recent paper pointing out that this bandwagon is based on a fundamentally-flawed conceptual picture of how coexistence works. A little while back, I did a post looking at how M&L has been cited since it was published. Is there any evidence that this particular bandwagon has been stopped, or forced to change direction, by their critique? Not really. Most subsequent papers on this topic either didn’t cite M&L, cited it only in passing, cited it only as one surmountable technical critique among others, or (in a couple of cases) miscited it.
In this post, I want to ask why that is. Not just with respect to M&L, but more generally. Why do ecologists often write in such a way as to ignore or gloss over critiques of their approach, interpretations, and conclusions?
The publication of a new paper on inferring coexistence mechanisms from patterns of phenotypic similarity (Smith et al., in press at Ecology) prompted me to post on this. One of the co-authors of Smith et al. is Nathan Kraft, who’s done a fair bit of work on this topic, and who was kind enough to engage in a good-natured debate with me in the comments on my post on M&L. I was struck by the apparent contrast between Nathan’s comments on my post, and the content of Smith et al. As demonstrated by his comments on my post, Nathan’s obviously well aware of M&L–but his paper struck me as basically glossing over their critique. But I’m guessing that the authors of Smith et al. either think they have legitimate reasons to gloss over M&L, or don’t see themselves as glossing over M&L at all. (UPDATE: Since this post was published, I’ve been made aware that Smith et al. isn’t a good example to illustrate the general point I wanted to make, for which my apologies. I think the general issue is still worth thinking about.)
I emphasize that I’m not trying to pick on Smith et al. here or single them out for criticism. As I’m sure was clear from my old post, I disagree with the authors of Smith et al., and many other workers in this area, on a number of issues. This post isn’t about those disagreements–it’s about how we write about them. Disagreement, including among smart, open-minded, well-informed, well-meaning people, is a normal part of science. In this post I’m interested in how we deal with that fact in our papers–or maybe how we gloss over or ignore it!
First, let me lay out the apparent contrast that struck me. In the comments on my old post on M&L, Nathan suggested that I’d underestimated the impact of M&L. He said that it was no longer possible to publish a paper based on the overly-simple ideas critiqued by M&L:
In short, there are many processes that can generate similar phylogenetic patterns…my sense is that the M&L paper has prevented more papers than you might think from making it through review–I think it is ratcheting up the bar for contributions in this area. I agree with your sentiment that going forward there is likely a limited role for simple null model analysis of phylogenetic structure in the absence of other information (functional traits, experiments, demographic modeling, etc.) in all but the most challenging of systems to study, but I think things have been trending this way for a while now…
This is a fairly common reaction to my posts on bandwagons and zombie ideas. In my experience, it’s common for readers to respond to my critiques of bandwagons and zombie ideas by questioning whether anyone is still riding those bandwagons, or still believes in those zombie ideas. Indeed, I myself once wondered if a zombie idea I was critiquing was actually a dead horse (to mix a metaphor).
But it turns out one still can publish papers based on the ideas critiqued by M&L–as illustrated by Nathan’s own paper! Here’s a quote from the abstract of Smith et al.:
Thus, dispersion of trait values, or functional diversity (FD) of a community can offer insights into processes driving community assembly. For example, underdispersion of FD suggests that abiotic “filtering” of species with unfavorable trait values restricts the species that can exist in a particular habitat, while even spacing of FD suggests that interspecific competition, or biotic “sorting,” discourages the coexistence of species with similar trait values…We develop a set of null model tests that discriminate between FARs generated predominantly by environmental filtering or biotic sorting and indicate the scales at which these effects are pronounced…
And here’s a quote from the second and third paragraphs of the introduction:
Typically, studies of community assembly focus on several different processes, including the effects of interspecific competition or “biotic sorting” and abiotic “habitat filtering” of species with unfavorable traits (e.g., Cornwell and Ackerly 2009). Under the biotic sorting hypothesis, differentiation in trait values facilitates the coexistence of species that partition available resources, meaning that species with similar trait values will be unlikely to coexist (Brown and Wilson 1956, MacArthur). Over short time spans biotic filtering can lead to competitive exclusion of species with trait values too similar to competitively superior species (e.g., Pyke 1982), leaving the distribution of FD within a community more evenly spaced than would be expected by chance. Over long time spans it can lead to ecological character displacement as a result of evolutionary divergence of trait values, and thus facilitate coexistence (Dayan and Simberloff 2005). Either process will lead to more evenly-spaced gaps in the distribution of FD than expected by chance (Stubbs and Wilson 2004).
Alternatively, the habitat filtering hypothesis states that the abiotic environment limits the successful establishment of all but a group of species with a specific set of trait values, resulting in a distribution of FD that has less variation than one generated from a random sample of trait values in the region (van der Valk 1981).
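(To make the recipe in those quoted passages concrete, here's a minimal sketch, in Python, of the kind of null model test being described. This is my own illustrative code under simplified assumptions–a single trait axis and a fixed regional species pool–not the authors' actual method.)

```python
import numpy as np

rng = np.random.default_rng(0)

def fd_null_test(community_traits, regional_traits, n_null=999):
    """Compare observed trait dispersion against a null model of
    random draws from the regional species pool.

    Returns the quantile of two FD metrics within the null distribution:
    - trait range (a low quantile is read as "underdispersion", i.e.
      "habitat filtering" under the pattern-based logic at issue)
    - variance of nearest-neighbor trait gaps (a low quantile is read
      as even spacing, i.e. "biotic sorting" under that same logic)
    """
    obs = np.sort(np.asarray(community_traits, dtype=float))
    pool = np.asarray(regional_traits, dtype=float)
    k = len(obs)

    def metrics(traits):
        gaps = np.diff(np.sort(traits))
        return traits.max() - traits.min(), gaps.var()

    obs_range, obs_gapvar = metrics(obs)
    null_range = np.empty(n_null)
    null_gapvar = np.empty(n_null)
    for i in range(n_null):
        # Null community: k species drawn at random from the pool
        null_range[i], null_gapvar[i] = metrics(
            rng.choice(pool, size=k, replace=False))

    # Quantile of each observed metric within its null distribution
    q_range = (null_range < obs_range).mean()
    q_gapvar = (null_gapvar < obs_gapvar).mean()
    return q_range, q_gapvar
```

Note that the test itself only quantifies pattern: low trait range relative to the null, or unusually even spacing. Reading "habitat filtering" or "biotic sorting" off those quantiles is a separate inferential step–the very step M&L criticize.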
The quoted passages articulate precisely the same ideas underpinning the work of Webb et al. 2002 and critiqued by M&L. M&L is cited once in passing in the Discussion, in the context of a short paragraph that also mentions other unrelated caveats to the authors’ approach.*
In passing, Smith et al. also perpetuate a zombie idea. They say in the discussion that their results “concur with the general expectation that competition will be reduced in harsh environments, where abiotic constraints dominate community assembly”. This is incorrect; there is no such expectation. Competition may be weaker in harsh environments–but in harsh environments it takes less competition to produce exclusion, so there’s no reason to expect competition to matter less in harsh environments. See here and here for details.
So with that background, here are my thoughts, in no particular order:
1. It sure looks to me like it’s still possible to publish papers in leading ecology journals based on the same simple ideas that underpin Webb et al., without citing (or only citing in passing) contrary evidence and arguments! Similarly, whatever you think of the correctness of the claim that competition is irrelevant in harsh environments, it’s still perfectly possible to make that claim in a leading journal without citing contrary evidence and arguments. As to whether or not that’s a bad thing, and whether or not anything could be done about it (two separate questions), read on…
2. I’m curious whether Nathan sees any contrast between his blog comments and the content of Smith et al. I’m guessing he doesn’t–but if not, why not? I can imagine several possibilities. I’ll frame them more generally, so as to emphasize that I’m not picking on Smith et al. Why might authors who are aware of criticisms of their approach nevertheless cite and discuss those criticisms only in passing or not at all? I think the following list covers most (all?) of the possibilities:
(a) They don’t agree with the criticisms.
(b) They don’t think the criticisms apply to their work.
(c) They don’t see the criticisms as especially serious.
(d) They take the view that no approach is perfect, and so set criticisms of their approach to one side on the grounds that we have to do so in order to get anywhere in science. Hopefully future work will address or resolve the criticisms.
(e) They don’t want to write a defensive-sounding paper, or let their “take home message” get lost. Instead, they take the view that it’s their paper, they’re entitled to state their views and interpret their data as they wish, so long as they don’t make any clear-cut technical mistakes. If others have criticisms, they can express them in their own papers, and readers will make up their own minds.
(f) They are using an approach used by others, and take the view that this is legitimate until such time as everyone in the field agrees that the approach should no longer be used.
(g) They see the criticisms as beyond the scope of the paper to address.
(h) Intellectual dishonesty: the authors ignore criticisms they know are valid, in the hopes that the referees and editors won’t notice. I mention this only for completeness. I’m sure it happens, but I’m sure it’s very rare.
Some remarks on these:
(a) On its own, disagreeing with a criticism is not a legitimate reason for not citing or discussing it.
(b-c) I suppose these reasons for ignoring criticisms might be legitimate, but only if the authors are correct that the criticisms are inapplicable or of minor importance. And people will disagree on that. For instance, I’m guessing that Smith et al.’s view on M&L is some combination of (b) and (c) (perhaps with a bit of (d) and (g) thrown in). If so, then I disagree with them on that. So do M&L, who explicitly emphasize the fundamental importance of their critique as compared to the less-fundamental issues raised in other critiques of work in this area. In light of the fact that people often will disagree on how applicable or important a criticism is, I think it’s best for authors to take the time to acknowledge such criticisms and explain why they think those criticisms are inapplicable or unimportant. And of course, authors do often do this, but I’m not sure they do it often enough.
(d) Brian and I have a good-natured, long-running disagreement about this. He argues that it’s often best for researchers to push ahead despite critiques of their approach, in the hopes that we’ll learn something along the way and that the critiques will somehow be addressed by future research. You don’t want to make the best the enemy of the good, or the good-enough. Whereas I think that’s too often an excuse for people to focus on putative shortcuts, approaches that falsely but seductively promise an easy path to inferring process from pattern. I worry not about making the best the enemy of the good, but about letting the apparently-good be the enemy of the actually-good. And both Brian and I can point to cases where we’re right. There are cases where temporarily setting aside critiques and pushing ahead has proven useful. And conversely, there are cases where it’s merely prolonged the time until it’s widely recognized that we’ve gone down a blind alley.
Can anything general be said about the circumstances in which Brian’s likely to be right, vs. the circumstances in which I am? I don’t know, but that seems like an important question to ask. Here are some tentative answers.
- If you can think of a specific way in which a critique could be addressed, at least in principle, that’s a sign that it’s worth pushing ahead in the hopes that the critique will eventually be addressed (it’s not definitive, but it’s a sign). Conversely, if you can’t, that’s a sign (again, not definitive) that pushing ahead just means going further down a blind alley. The word “specific” is key here. Because we can always express the vague hope that “future research” will overcome problems with current approaches. I mean yes, sometimes future research addresses the flaws and limitations of current research in ways current researchers never could have anticipated. But I don’t know that relying on that sort of deus ex machina (future research ex machina?) is a good policy in general. For instance, I don’t see any concrete, straightforward way to address the critique of M&L. Just having trait data as opposed to phylogenetic data, or having data at multiple spatial scales, or whatever, doesn’t really address the critique.
- If pursuing a line of research will yield important, novel information no matter whether a critique is addressed or not, that’s a reason to keep pushing ahead. I’d only add that, if you’re pushing ahead for this reason, it’s important to be crystal-clear about that. Don’t pretend (consciously or unconsciously) that you’re pushing ahead for a different reason. In particular, observational studies in ecology often are justified on two distinct grounds. One is that the patterns they document are interesting in their own right. Another is as a shortcut to insight about processes or mechanisms: we can infer process from pattern, based on easily-collected observational data and easy-to-apply statistical methods. The latter justification is almost always wrong: reliably inferring process from pattern is never easy and can be done only in quite specific circumstances. I think that if there are multiple motivations for pursuing your research, some of which are on firmer ground than others, then you should clearly lay out and separate those motivations, and be up front about how firmly they’re grounded. So for instance, I’d like to have seen Smith et al. make more of a case for studying the scale dependence of patterns of phenotypic similarity, independent of our ability to infer anything from those data about the underlying processes that generated the data.
(e) The view that peer review should only be for correcting clear-cut technical mistakes, and that otherwise we should just let the authors say whatever they want and let “marketplace of ideas” sort it out, seems to be increasingly common. Sort of like evolution by natural selection–just let authors throw random “variant” claims out there, and let “selection” by readers pick which “variants” to keep. Peer review should only be for filtering out the “inviable” variants. We might almost call this the “adversarial” model of science, by analogy with the adversarial model of legal proceedings. Each side argues their own case however they see fit, within certain bounds (e.g., no perjury, no hearsay evidence, etc.). The judge or jury listens to both sides and chooses one or the other.
I don’t agree with this model of the “marketplace of ideas”, for a couple of reasons. First, there’s often substantial disagreement as to what constitutes a “technical” mistake. For instance, I think Smith et al.’s passing claim about the unimportance of competition in harsh environments is a clear-cut technical mistake–but I presume they’d disagree! As another illustration, both I and a colleague of mine have had papers rejected from PLOS ONE, which professes to judge mss only on their technical soundness. Obviously, my colleague and I believed our papers were technically sound, given that we chose to submit them in the first place!
Second, this argument implicitly assumes a lot of things about the “marketplace of ideas” in science, including some things that aren’t true. Markets in economics can function less-than-optimally for all sorts of reasons. Lack of appropriate, well-enforced laws and norms to ensure good behavior by market participants. Imperfect information. Externalities. Tragedies of the commons. Incomplete markets (i.e., no markets in some goods). Transaction costs. Conflicting future plans and expectations of market participants. Etc. Similarly, the “marketplace of ideas” in science can function less-than-optimally for all sorts of reasons, as evidenced by the fact that bandwagons and zombie ideas exist in science. They represent market failures in the marketplace of ideas, perhaps analogous respectively to financial “bubbles” and to persistent failures of real-world markets like the labor market to “clear”. Arguably, they arise in part because the scientific marketplace lacks any mechanism for “short selling”. And in evolution there are all sorts of well-known circumstances in which an evolving population will fail to evolve or maintain an optimal phenotype. So rather than just blindly trusting in the “marketplace of ideas”, I think it’s worth discussing what sorts of rules, norms, and practices would help ensure that the marketplace of ideas functions as well as possible. For instance, elsewhere I’ve laid out why I think pre-publication peer review is important to a well-functioning marketplace of ideas in science. And if one effect of this blog is to bring the “marketplace of ideas” a bit closer to “perfect information” about which ecological ideas are zombies, I’ll be thrilled! 😉
Having said that, it certainly is true that a paper can be bogged down by over-defensive language and by paying too much attention to caveats and criticisms. So while I don’t think it’s a good idea for authors to just be allowed to say whatever they want as long as it’s “technically correct”, nor do I think it’s a good idea for authors to only be allowed to say things that everybody, or almost everybody, would agree on. I confess I’m unsure what to do about this.
(f) Personally, I don’t like appeals to the collective authority of the majority, though I’m not above using them myself in cases where I also think I have substantive grounds for my position. Worth noting that this is precisely how defenders of zombie ideas often defend them: not on substantive grounds, but by arguing “everyone has long believed this, so it must be true, and anyway we can’t afford to stop believing it.” The problem with appealing to the “wisdom” of crowds is that the crowd can be wrong. When it is, you end up perpetuating widespread mistakes. Just because lots of people make a mistake doesn’t mean it’s any less of a mistake.
(g) In many cases it’s legitimate to set aside, or even totally ignore, certain issues as beyond the scope of the paper. For instance, every ecologist writing a paper attempting to causally explain some phenomenon just ignores Robert Peters’ claims that causal explanation is not a legitimate goal of science. And I think that’s fine; no paper can cover everything and you have to draw the line somewhere. The question is where to draw it.
Papers in philosophy and mathematics often note foundational issues but then set them to one side. For instance, in philosophy, it’s common for authors to debate secondary issues while acknowledging that the whole debate could be rendered moot depending on how some more fundamental issue is resolved. Analogously, in mathematics one might write a paper asking “what else would be true if theorem X were true?”, even though theorem X has yet to be proven, and the whole paper would be moot if theorem X turned out to be false. For instance, a large literature in number theory was built on the assumption that Fermat’s Last Theorem is true, before Andrew Wiles famously proved its truth. And this style of writing isn’t unknown in ecology and evolutionary biology. Charles Darwin famously devoted an entire chapter of the Origin to addressing objections to his theory, objections that he explicitly said would prove fatal to his entire theory if they were true. This seems like an admirably-honest and forthright way of writing to me. For instance, imagine if, rather than briefly citing M&L in passing, Smith et al. had said something like the following: “Our paper is based on fundamental assumptions about the mapping between species’ traits and the underlying coexistence mechanisms. Recently, Mayfield & Levine (2010) have strongly criticized those assumptions. Addressing those criticisms is beyond the scope of our work, which focuses on issues of scale dependence. However, we acknowledge that the validity of our work hinges crucially on the correctness of these assumptions.” On the other hand, it’s been argued that this style of writing in philosophy creates an unfortunate tendency to focus on trivialities while ignoring foundational issues. If everyone always acknowledges a critique but sets it aside as beyond the scope of their work, in practice it’s as if everyone’s just ignoring the critique completely.
As I said, people often disagree on when it’s legitimate to just ignore a criticism as beyond the scope of the paper. For instance, I’ve ignored or briefly dismissed such criticisms of my own work. There are ecologists who think that the Price equation is trivial, that we should give up on community ecology, and that microcosm experiments are always and everywhere valueless or worse. Sometimes I’ve been told by editors to add discussion of such issues–and sometimes I’ve included discussion of such issues only to be told by the editor to delete it! I’ve struggled with this as an editor myself. For instance, we already know on theoretical grounds that one cannot infer anything from local-regional richness relationships about the determinants of local community structure. So does that mean I should reject as moot a paper proposing a new statistical approach for estimating local-regional richness relationships, or a paper using that new approach to re-analyze the published empirical literature on this topic (see this post)? I freely admit I don’t have any easy answers here.
3. As I noted earlier, Smith et al. perpetuate a zombie idea about environmental harshness and the importance of competition as a passing remark; their paper isn’t really about that topic. Lately I’ve been wondering how important such passing remarks are for the perpetuation of zombie ideas, and how closely they’re policed by reviewers and editors. Also how closely they can be policed. By definition, an author’s passing remarks often concern topics only tangentially related to the main point of the ms–which means such remarks often concern topics on which both the author and the referees don’t have great expertise (at least not relative to their expertise on the main topic). So I wonder a bit if passing remarks are both particularly likely to be wrong, and particularly likely to slip through the peer review process when they are wrong. I don’t know the answer to those questions.
Conclusion. In conclusion, let me emphasize again that I’m not trying to pick on anyone here. I enjoyed the discussion on my old M&L post and found it valuable, precisely because Nathan and others who disagreed with me were kind enough to participate and push back. I hope readers and the other participants in the discussion felt the same way, and will welcome the chance to continue the discussion. Let’s continue pushing each other–it’ll make everyone’s science better.
UPDATE: Apparently, great minds think alike: The Lab and Field just posted on basically the same topic. That post links the issue to the rarity of corrections and retractions in ecology, arguing that far too many clear-cut technical mistakes are getting through peer review, never to be corrected or retracted. I’ve talked about the rarity of retractions in ecology before, but only in the context of retractions for misconduct, not for technical errors. And I’m flattered to see that, if anything, The Lab and Field is even more passionate than me about ridding the literature of zombie ideas–he hints that they ought to be purged via mass corrections and retractions!
*Smith et al. also have an appendix purporting to show that their approach correctly separates simulated data generated by biotic filtering vs. habitat filtering. The appendix isn’t yet available online so I can’t evaluate it. I suspect that the results they report in the appendix will be sensitive to how they modeled “biotic filtering” and “habitat filtering”. (In general, I think you want the simulation model generating your data to be “structurally dissimilar” to the analytical approach you’re trying to validate. Drew Tyre and Volker Grimm call this “virtual ecology”. For instance, using an individual-based dynamical simulation model to generate community dynamics, rather than, say, a model that just assigns “traits” to species and then decides which species persist or not based on how similar their traits are. Otherwise, you risk effectively assuming the scientific validity of the very approach whose validity you’re trying to test.) But the general points made in the post are all independent of the content of the appendix of Smith et al. As I said, the post isn’t really about whether Smith et al. in particular is correct or not.
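(To make the “virtual ecology” point concrete, here’s a minimal sketch of the sort of structurally-dissimilar data-generating model I have in mind. It’s purely illustrative, not anyone’s published model: a simple Lotka-Volterra competition simulation in which competition strength declines with trait distance, so the trait distribution of the surviving community is an emergent outcome of the dynamics rather than an assumption baked into the model. One would then feed the survivors’ traits to whatever pattern-based test one is trying to validate.)

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_community(traits, t_max=2000, dt=0.1,
                       sigma=0.15, extinct=1e-4):
    """Lotka-Volterra competition dynamics in which the competition
    coefficient between two species declines with the distance between
    their trait values (Gaussian competition kernel). Returns the trait
    values of the species that persist to the end of the simulation.
    """
    traits = np.asarray(traits, dtype=float)
    n = len(traits)
    # Competition matrix: stronger competition between similar species
    d = traits[:, None] - traits[None, :]
    alpha = np.exp(-(d / sigma) ** 2)
    r = np.ones(n)           # equal intrinsic growth rates
    N = np.full(n, 0.1)      # initial abundances
    for _ in range(int(t_max / dt)):
        # Euler step of dN/dt = N * r * (1 - alpha @ N)
        N += dt * N * r * (1.0 - alpha @ N)
        N[N < extinct] = 0.0  # treat tiny populations as extinct
    return traits[N > 0]

# Generate a "virtual" community: which species survive, and how their
# traits are spaced, emerges from the dynamics rather than being assumed.
regional_pool = rng.uniform(0, 1, 30)
survivors = simulate_community(regional_pool)
```

The point is that nothing in this model “knows” about trait ranges or even spacing; any such patterns among the survivors emerge from the population dynamics, which is what makes it a fair test bed for a pattern-based inferential method.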
This is an excellent post and it is something that I’ve struggled with for much of my professional career. The “glossing over” of papers that critique a line of research is a real and widespread phenomenon and, in my experience, it has a net negative effect on the science because it substitutes good stories for facts – regardless of whether it benefits some individuals who practice it.
In my experience it revolved around long-term large scale experiments manipulating fire ant populations (in collaboration with Walter Tschinkel) and habitat conditions to understand the impact of fire ants, habitat, and their combined impacts on native ant communities (http://king.cos.ucf.edu/wp-content/uploads/2011/08/2006KingandTschinkelJAE.pdf, http://king.cos.ucf.edu/wp-content/uploads/2011/08/2008KingTschinkelPNAS2.pdf, and http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2311.2012.01405.x/abstract). In sum, the outcome of this work showed that fire ants are a symptom of habitat and their impacts on other ants (in the absence of habitat disturbance) are weak or non-existent (in cases where they cannot colonize undisturbed habitat). This work is very unusual for ant communities (long-term, manipulations of habitat and fire ant, and only fire ant populations). It also transforms, in my opinion, our understanding of how some exotic ants have become so successful and dominant.
Yet, going through the publications that cite this work, I find that most of the actual citations of it (especially in the invasive ant literature) are dismissive or it is only mentioned in passing as a sort of “exception.” Why? I’m not sure, but I note similar overlap with the subject you’ve chosen – an emphasis on purely observational work in much of the invasive ant literature and a strong emphasis on the “competitive abilities,” such as large colony size and other factors. I remain perplexed as to why others have not used these experiments as a springboard to better understanding the synergy between exotic ants and habitat features (i.e. land-use change), not to mention what it might suggest about ant community assembly in general (another topic focused almost entirely upon interspecific competition as the driving force). I used to be quite angry about this, but I’ve moved on to other things and now I’m really just kind of amazed by this willful ignorance, so to speak, that seems to be commonly practiced by ecologists (this was my own specific experience, but what you describe is widespread in ecology). I’m not sure that there is a solution. For my students, I try to teach them to be willing to pursue experiments and test their questions rigorously, and be willing to interpret the data objectively, without being swayed by popular sentiment. It seems there is otherwise not much to be done about it except perhaps, keep complaining?!
Hey, a comment! Thanks Josh. Based on traffic levels so far I was worried this post was way over the TLDR line. 😉
Interested to hear that you’ve had similar experiences, similar even down to the fact that both have to do with people doggedly sticking to their preferred interpretation of observational data. Although in other fields one can certainly find examples of similar issues arising when people doggedly stick to a certain modeling approach (e.g., DSGE in economics, or string theory in physics). And the zombie idea of the intermediate disturbance hypothesis is an example of people doggedly sticking with an idea in the face of huge masses of contrary *observational* evidence, as well as experimental and theoretical evidence.
Agree that there’s no obvious solution. As you say, you can train your trainees as best you can to be rigorous and skeptical. But presumably, everybody already trains their trainees as best they can, and always has. So “better training” clearly isn’t the answer (any more than “better education” is the answer to public misunderstandings of science; see this old post: https://dynamicecology.wordpress.com/2013/05/09/book-review-the-pseudoscience-wars-by-michael-gordin/)
Historians of science will tell you that a key way scientists win scientific debates is through sheer doggedness: repeating themselves over and over. Another way is coordinating with friends and colleagues and getting them to repeat the same message, over and over. So yeah, keep complaining, basically!
My read of the philosophy of science–Kuhn’s idea of scientific revolutions (which I sort of agree with), or Lakatos’ less-binary, less-point-in-time idea of research programs (which I strongly think reflects how scientists actually work)–is that science is ultimately a collective enterprise. We are the Borg! Any one scientist at any one point in time is very likely to be wrong. But the body of all scientists over the long haul is very likely to get it right. This is where the power of science, and whatever scientific method there is, comes in. It’s annoying as all get out to those of us stuck in the middle of transients, where we can see (or at least think we can see) that most people are getting something wrong. But I do find it comforting that, at this coarser grain, science collectively eventually gets most things right.
By the way see here for a very similarly themed post today by LabAndField.
And Jeremy, I agree, the idea that harsh (or, more often I hear, newly colonized) environments don’t have competition is such a recurring theme it makes me want to scream. I always want to tell people: if you believe in exponential growth, all environments are at “carrying capacity” (whatever that might be, but it’s a useful heuristic in this context) within a few generations, at which point competition is an important force. It’s one of the most basic laws of ecology.
This collective, long-term view of science is comforting. But it doesn’t let us off the hook as individuals for trying to tip science in the right direction and shorten the transients. How best to do this is an interesting question, one which Jeremy seems to be homing in on in multiple posts on bandwagons and zombies. Certainly in cultures like physics, ideas are called out in the strongest possible words and battles are epic. We ecologists are rather more genteel (the null model debate, the last notable exception, was 40 years ago)–I think this is probably both for the better and for the worse. But on the whole, a little more calling out of ideas is probably good.
Thanks Brian. I think your comment gets back to an old comment thread (maybe on one of my old macroecology posts?) where you and I talked about whether the collective hive mind of science takes up good ideas and rejects bad ideas as rapidly as it could or should. I don’t think we came to any firm conclusion on that question.
As I’m sure is obvious, I’m an impatient sort. I’d prefer it if science advanced not just one death (or retirement) at a time, but through people changing their minds while they’re alive. That’s part of what this blog is about. And hopefully, by reaching a lot of students, this blog also reduces the “reproduction rate” of bad ideas by preventing them from being “vertically transmitted” from zombies to their students. So hopefully this blog creates both fertility and viability selection against zombies! 🙂
Of course, that cuts both ways–what if people in the grip of zombie ideas start taking up blogging in a big way? But even if that happened, I consider that progress. Because it would mean that the general online culture of vigorous debate had started to penetrate into ecology. That would be a good thing, in my view.
Re: research programs, I’m sure you’re on to something there. In which case, the question can be reframed as: since it’s so hard for external critics to get research programs to change direction, is there any way to make it more likely that research programs will be established on solid foundations from the get-go? Probably not. Although perhaps if people were obliged to write their papers a bit differently, as the post tentatively suggests, we might force those within any given research program to take a bit more notice of “outside” ideas.
Re: ecology being too genteel, well, one purpose of this blog is to try to change that! To show by example that, hey, it’s not only *ok* to tell your colleagues in public that they’re totally wrong, it’s actually both productive and enjoyable for all concerned!
Kuhn is a good call – one further point of Kuhn is that paradigms cannot be removed by proving them wrong, but only by replacing them with a different paradigm, because research on a topic would have to stop otherwise.
It seems to me that this often happens on a much smaller scale as well: people stick with techniques, methods, and mental models that have been demonstrated to be flawed, because those tools allow them to make certain statements or predictions that would otherwise not be accessible.
Oh absolutely. As I noted in the post, and in other posts, there’s definitely a divide in ecology between people who think it’s always ok to stick with current approaches until a replacement comes along, and people like me (probably the minority) who think that sometimes we should just admit that a question is intractable with all current approaches and quit trying to pretend otherwise.
I’d have more sympathy for the former view if it was more common for people to give up on easy-but-seriously-flawed ways of doing things in favor of harder-but-less-flawed ways of doing things. In other words, I wish people who think that “there’s no alternative to the current approach” would more often admit that in fact there *are* alternatives, it’s just that those alternatives require some actual work on your part beyond running some newfangled statistical analysis on some handy observational data you already happened to have. Many of my heroes in ecology are people who’ve gone to the trouble of doing things the hard-but-right way, if necessary by totally switching study systems. Peter Morin and Dave Tilman are two good examples.
Seems we were on similar pages of late! Though I don’t know the literature specific to your post, I can say that the Marketplace of Ideas seems to be the norm, at least in the literature I keep tabs on. But as you point out, it’s not a terribly good market because it’s left to individual “consumers” (i.e., readers) to assess every product (i.e., paper), and we lose any collective information (i.e., rebuttals, corrections, errata, corrigenda, retractions).
Heck, I wrote what I thought was a death-knell of a rebuttal to a paper, got it published in Ecological Applications, and yet it’s often ignored.
And as Brian found, I have some thoughts of my own.
Thanks Alex, just updated the post to link to you. Great minds definitely thinking alike this morning.
This issue reminds me of a couple other issues that have been floating around the internet lately.
1. Small Pond Science’s post on ‘Pretending you planned to test that hypothesis the whole time’ (http://smallpondscience.com/2013/06/04/pretending-you-planned-to-test-that-hypothesis-the-whole-time/) seems like another version of this problem. The way science is written encourages you to frame your problem in the best way possible (glossing over critiques) and to claim you planned to address specific hypotheses the whole time.
2. Which leads to another internet thing – the overly honest methods hashtag on Twitter. It was fun to see people admit that the reason they chose a specific speed for their centrifuge was because it shorted out if it went faster. I’d love to see that with justifications for the research framework. I’d expect to see things like ‘because we were trained in this method as grad students and we haven’t really moved on’; ‘one co-author is a little obsessed with this method and took out a whole bunch of self-critical points we made’; ‘we didn’t really read this very closely and just relied on what our second-year graduate student wrote, I’m sure it’s fine’; and of course, ‘we weren’t going to be published unless we pretended to be way more sure of ourselves than we really are*’
Overall, I think that scientists like to pretend they are much more objective than they are. Reading my sister’s research in nursing made me appreciate acknowledging biases and philosophical approaches in one’s own research. In her field, there’s no expectation of the one and only Truth, so it’s important to define the context in which you’re working on your bit of truth. I’d like to see science get a bit more of this philosophy, but I hold little hope.
*I admit that I struggle with this issue myself. I am pretty familiar with the flaws in my research and the shortcomings of my research framework. I try not to say ‘why bother?’ but I can see my final papers coming across as way more self-assured and cocky than what I really feel about my research.
Nice post. A general absence of retractions, bold-faced critiques, and glossing-over of criticisms seems like just what an economist would predict given the incentive structure of these issues, right?
I agree that the natural selection/marketplace of ideas isn’t the solution, for the reasons you say. An exponentially growing literature isn’t really a very efficient hive-mind for scientific knowledge, so of course it is easy to ignore its contradictions.
A paper presents data, methods and then draws conclusions. Put the data in an open repository with good metadata, express all computational models in open code, and express experimental methodology and conclusions in well-defined semantics. Then (a) such a marketplace could function and (b) this problem of publishing flawed ideas largely goes away, because (1) I can easily rerun the “correct” method on your data and compare conclusions, and (2) I can extract conflicting conclusions from the semantics and have to justify those differences by pointing to problems in their data, methods, or conclusions. (claims that we’d again want semantically linked to the original paper).
Of course we’ll never reach such a science fiction, but we could be a lot closer to it 😉
Interesting ideas as always Carl. Though I suspect the kinds of criticisms they can deal with are rather limited in scope, to certain sorts of technical issues. For instance, the whole “competition can’t matter in harsh environments” zombie is a conceptual mistake. It’s not the sort of mistake you can correct by rerunning code or reanalyzing data. And while I might be able to extract semantics to automatically identify conflicts between this zombie idea and the non-zombie alternatives, doesn’t that just leave us back where we started? With authors (and readers!) often choosing to gloss over semantically-identified conflicts, unless they’re forced to do otherwise?
I agree, a technical solution works much better for technical problems (your statistical method is wrong, etc.) than for conceptual differences. Doing anything with semantics will be much harder, but it might at least better pin down disagreements. Of course people will continue to gloss over literature that refutes them or undermines their assumptions. It just might be a little easier for the hive-mind to identify when that happens.
An off the wall question: What, if anything, does the limited success of “thumbs up/thumbs down” systems for comments on sites like YouTube tell us about the likely effectiveness of various possible post-publication review systems in science?
I’d go with “nothing.” I don’t think I’m qualified to evaluate the success of those systems (defined how? in the increase in dollars raised by sites implementing them?).
I think we’re already on the same page about the relative ineffectiveness of post-publication peer review (e.g. the Hilborn group paper discussed earlier on this site, 10.1890/ES10-00142.1)
If technology can make it harder for funders to ignore refutations, and draw attention to those critiques, then we have made progress. We’ve already proven that we (funders, authors, readers) will pay rather significant attention to citation metrics once they are stuck in front of our noses, so I suspect we’d find it similarly hard to ignore numbers like “this paper has been refuted 20 times”, or “this paper relies on a paper that has been refuted 20 times”. When it’s all just one more citation, it’s easier to brush under the rug.
I was thinking of “success” of thumbs up/down systems in terms of promoting productive comment threads, by highlighting intelligent comments and weeding out comments that are unproductive for whatever reason (they contain mistakes, insults, are off-topic, etc.). But you’re probably right, the analogy to any plausible form of post-publication review is probably too loose to be useful.
Very interesting suggestion that putting numbers on refutations might make people pay more attention. As I’ve argued at length in another thread today, I’m skeptical that quality (or “impact” or “influence” or etc.) of science can ever be “objectively” summarized in one or even many numbers. But numbers can give you some useful information. And as you rightly point out, people find it hard to ignore numbers. I’m kind of relishing the idea of a little counter at the top of every IDH paper. “This paper has been refuted X times”. With the counter periodically ticking upwards. 🙂
JF said “In passing, Smith et al. also perpetuate a zombie idea. They say in the discussion that their results “concur with the general expectation that competition will be reduced in harsh environments, where abiotic constraints dominate community assembly”. This is incorrect; there is no such expectation.”
There is indeed such an expectation as formally predicted by many, e.g., Menge and Sutherland 1987, Bertness and Callaway 1994, etc. The prediction may or may not be supported by data or newer theory, but it seems disingenuous to pretend it doesn’t exist. And this is pretty much what you seem to be accusing Smith et al of, no? Ignore contradictory ideas and evidence?
“Competition may be weaker in harsh environments–but in harsh environments it takes less competition to produce exclusion, so there’s no reason to expect competition to matter less in harsh environments.”
Perhaps, but this is a straw man; they didn’t say anything about exclusion or how much competition is needed to cause it. Regardless, I am not sure you are right on this. I certainly see your point – as outlined in your older IDH zombie posts and your IDH paper – but I also think it depends on what you mean by harsh, e.g., physical harshness, physiological harshness, etc.
Don’t you think that in some systems, where there is intense physical disturbance that leads to low densities, competition is effectively nil? Not lowered, but non-existent? The smaller populations may indeed be more likely to go locally extinct, but it couldn’t be due to competition if there isn’t any. Alternatively, what about a physiologically stressful habitat where there is a positive relationship between density and fitness? Again, couldn’t competition conceivably be “weaker” in the presence of more “harshness”?
Wow John. With all due respect, you’re in the grip of a zombie idea. Seriously, you are. If you haven’t read my old posts and TREE paper, I really think you should. And if you don’t find them convincing, read Chesson and Huntly 1997 Am Nat.
In particular, I have an old post that deals directly with your notion that some environments could be so harsh that competition is non-existent:
And again with respect, the papers you cite in support of your claims are *not* formal. They are graphical models, in the sense of “the authors made a verbal argument and drew a picture to illustrate it”. If you think they are actually formal arguments, then I invite you to prove it by converting them into a dynamical model.
I admit I find your comments here really distressing John! You’re a loyal reader and a sharp guy. If you’re still in the grips of these zombie ideas, that makes me really depressed about the effect my blogging is having. If I’m not changing the minds of folks like you, I might as well just quit bothering and write about other stuff. 😦
I don’t think I’m a zombie idea zombie, but then again, how would I know? Are zombies aware of their zombiness?
Maybe you’ve overestimated me, as I’ve read your blog posts and IDH paper and I truly don’t understand why, if organisms are prevented from competing by low density, competition would be as important as it is when densities are high, resources are limiting, etc. Could you please explain?
Imagine physical disturbance limiting barnacle density (a la Menge and Sutherland 1987) to, say, 1/m2. At this density, there is no competition. So explain how competition would be just as strong as when density is high, e.g., 1000/m2.
Also note, adding the requirement of exclusion is bogus. The stress gradient models I’ve described above predict the relative importance of different process across stress gradients. NOT the absolute likelihood of any causing extinction. So let that straw man lie.
And they generally, although not exclusively, predict relative importance, meaning that even if the “strength” of competition stays the same, its relative importance could decrease if the absolute importance of some other population-limiting factor increases.
In your post/quote, you didn’t say anything about “formal”. You said “This is incorrect; there is no such expectation.” I guess it depends on how you formally define “expectation” and “formal” (and “dynamical”). If expectation = prediction, are you arguing that predictions in science, or at least ecology, cannot come from anything other than “formal” models? And what is a formal model if IDH, predator stress models, etc are not? I am guessing what you meant to say was mathematical models. If so, it seems pretty extreme to de facto invalidate any idea or prediction ever made using words or based on field observation and not math. And you don’t seem to be arguing they are wrong or inferior – you are implying they don’t even exist! But if they never existed (even as null models) or otherwise don’t count, why did you, or how could you have invalidated them?! Would that mean that your rebuttal also does not exist (as in there is no such paper as Fox 2013 TREE)? It seems to me the predictions either exist or they don’t. They can exist and be flawed or unsupported but how can documented ideas simply be waved into oblivion with a blog post?
I am more confused than ever but I want to believe Jeremy! Help me understand the insights of your vastly superior intellect!
your humble zombie, John
PS, insulting your readers when they ask genuine questions may be your prerogative, but it doesn’t do much to spread your ideas or increase the civility of ecology. This is something I have thought about since first reading your posts on the IDH, and I was hesitant to ask the question, suspecting that you’d respond pretty much how you did – with arrogance and derision.
I’m sincerely sorry you found my tone arrogant and dismissive. I should have expressed myself differently so as to encourage you to engage.
Re: competitive exclusion being a straw man, I’m afraid I’m unclear why you say that. The intermediate disturbance hypothesis says that diversity peaks at intermediate disturbance. In both theoretical and empirical work, “diversity” here is typically (not always) interpreted to mean “species richness”. Which means that competitive exclusion, far from being a straw man, is very much at the heart of the IDH. It’s a claim about the conditions under which competitive exclusion will or will not occur.
Re: low density and the strength of competition, you’ve misunderstood what I’m saying. Yes, of course, when densities are low competition is weak, meaning that per-capita growth rates are not much reduced by competition. (If you are solely concerned to establish that when densities are low competition is weak, with absolutely no concern for the consequences that weak competition might or might not have, then I’m happy to agree with you on that much and apologize for the confusion. You need not read any further in that case.) But the sorts of environmental stresses and physical disturbances that we generally think of as reducing densities to low levels also reduce the per-capita growth rates species can achieve in the absence of competition, thereby reducing the strength of competition needed to make per-capita growth rates go negative.
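If it helps, here is the arithmetic of that last sentence as a toy calculation. The logistic-style growth form and every number in it are mine, purely for illustration; nothing here comes from any particular paper:

```python
# Per-capita growth rate under simple logistic-style competition:
# (dN/dt)/N = r - a*N, where a*N is the competitive reduction.
# Per-capita growth goes negative once a*N > r, so the competitive
# pressure "needed" for exclusion is just r itself.
def competition_needed_for_exclusion(r):
    return r

r_benign = 1.0  # intrinsic growth rate in a benign environment (made up)
r_harsh = 0.1   # harsh environment: stress cuts r tenfold (made up)

# The harsh environment needs only a tenth the competitive pressure
# to push per-capita growth negative:
threshold_benign = competition_needed_for_exclusion(r_benign)  # 1.0
threshold_harsh = competition_needed_for_exclusion(r_harsh)    # 0.1
```

So competition can be much weaker in the harsh environment and still be strong enough to produce exclusion, which is the point of the sentence above.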
You seem to be treating observed densities as if they were exogenously determined. That is, you start by assuming “here’s what species’ densities are” and then ask what follows from that assumption. I’d suggest that that way of approaching the issue, while common, is likely to lead to confusion. I suggest that it’s better to start from a focus on population growth rates. That is, start by asking: what has to be true about the processes driving population growth (immigration, emigration, births, deaths) in order for a system to be in a state with a bunch of species all coexisting at really low densities on average, and to remain in that state for a really long period of time? That is, what has to be true about those rates for multiple species to coexist and all break even on average in the long run (i.e. immigration rate + birth rate balances emigration rate + death rate on average)?
I can imagine several possibilities. The simplest one, which is a zombie idea, is dealt with in that old “dialogue” post. If you’re imagining a closed system (so no immigration or emigration) in which densities are very low simply because per-capita density-independent mortality rates are extremely high, well, it’s not true that you can get coexistence of multiple species that way.
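To illustrate why that first possibility fails, here is a toy model of my own construction (not the model from the dialogue post): two consumers competing for one resource, chemostat-style, with Monod uptake and a shared density-independent mortality rate. All parameter values are invented. Cranking up the mortality rate lowers everyone’s density, but it does not rescue the inferior competitor:

```python
# Two consumers on one shared resource. Species i grows at rate
# mu_i * R / (k_i + R) and dies at the shared rate m. The winner is
# whoever breaks even at the lower resource level R*_i = m*k_i/(mu_i - m).
def run(m, steps=400_000, dt=0.005):
    R, N1, N2 = 10.0, 1.0, 1.0
    S, D = 10.0, 1.0  # resource supply point and turnover rate (made up)
    mu1, k1 = 1.0, 1.0  # species 1: lower R*, the superior competitor
    mu2, k2 = 1.2, 3.0  # species 2: faster max growth, but higher R*
    for _ in range(steps):
        g1 = mu1 * R / (k1 + R)
        g2 = mu2 * R / (k2 + R)
        R += (D * (S - R) - g1 * N1 - g2 * N2) * dt
        N1 += N1 * (g1 - m) * dt
        N2 += N2 * (g2 - m) * dt
    return N1, N2

n1_low, n2_low = run(m=0.2)    # mild density-independent mortality
n1_high, n2_high = run(m=0.8)  # harsh mortality: much lower densities,
                               # but species 2 is still excluded
```

In both runs species 2 is driven extinct; the harsh-mortality run just produces a (much) sparser monoculture of species 1. High density-independent mortality changes the densities, not the outcome.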
Now, in dismissing that first possibility, I’m emphatically not saying that you can’t have lots of species all coexisting in the long run at really low densities on average! All I’m saying is that you can’t explain that observation by saying “high mortality rates hold down densities, therefore competition is weak or absent and so competitive exclusion can’t happen”. There has to be something else (perhaps something very different!) going on.
There are various other possibilities which can work, and they aren’t mutually exclusive. I’m no expert in rocky intertidal systems, so I wouldn’t venture to guess which of these applies in the rocky intertidal specifically. I’ll just try to list some of the possibilities, which hopefully will help to clarify the way I and others think about this issue.
One possibility is that the system is a sink habitat for all species (i.e., per-capita mortality rate exceeds birth rate for all species), but species persist at low densities because they’re subsidized by immigration from elsewhere. Note that immigration of a given species at some total rate that’s independent of the local density of that species actually acts like a source of intraspecific density dependence (e.g., as a simple example, if the total rate of immigration is I, and any variation in immigration rate is independent of variation in local population size N, then the average per-capita rate of immigration is just (mean I)/(mean N), which of course is negatively density-dependent). Note as well that competitive exclusion isn’t impossible just because you have some non-zero rate of immigration–whether a given species will persist depends on the rate of immigration relative to the net of local birth and death rates. And note as well that “immigration” on its own isn’t an ultimate explanation for coexistence, in that you ultimately need to say something about where the immigrants come from.
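A quick arithmetic illustration of that per-capita immigration point, with arbitrary numbers:

```python
# Constant total immigration I acts like negative density dependence:
# the per-capita immigration rate I/N falls as local density N rises.
I = 50.0  # total immigrants arriving per unit time (arbitrary)

per_capita_immigration = {N: I / N for N in (10, 100, 1000)}
# Rare populations get a big per-capita boost, common populations a
# small one -- a rare-species advantage, which is what stabilizes.
```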
Another possibility is that densities are low on average, but that fluctuations in density are crucial to long-term coexistence because of the nonlinearities and/or nonadditivities in the system. There are a couple of text boxes in my TREE paper that talk about this, and I also have a series of corresponding blog posts that starts here. And Bob Holt has papers on how this possibility can interact in surprising ways with the first, an idea referred to as “inflationary sinks”.
Another possibility is that the system is best thought of as a spatial system in which sedentary individuals occupy sites (one individual per site), produce dispersing propagules that land elsewhere, and die. Individuals only compete or otherwise interact with any other individuals that happen to be sufficiently nearby, and such interactions might affect fecundity and/or mortality. And maybe there are some sites that for whatever reason are either temporarily or permanently unsuitable for occupancy. Obviously, I’m not describing any specific biological system in detail, but I’m sure you can imagine various systems to which this general sort of picture might apply. It’s perfectly possible in this sort of model to make assumptions about mode of dispersal, rates of fecundity, sources and rates of mortality, etc. etc. so as to produce a system in which most sites tend to be empty and individuals are very scattered, so that most individuals have few or even no neighbors. But here’s the kicker: in such systems, you do not get long-term coexistence just because densities are low. You don’t even necessarily get long-term coexistence any easier in such systems when the assumptions and model parameters are such as to keep densities low. In such systems, you get stable coexistence only when there’s some sort of trade-off among species (say, between fecundity and mortality rates, or between fecundity and the ability to overgrow one’s neighbors, or some other sort of “competition-colonization trade-off”) that allows each species to tend to increase when sufficiently rare. One thing it’s easy to forget about such systems is that there can still be competition even when individuals have no neighbors. If an occupied site is one in which a dispersing propagule can’t settle, then insofar as other species occupy sites in which your propagules could otherwise settle, they’re competing with you. 
Shea, Roxburgh, Wilson, and Miller are among those who’ve written papers about the IDH over the last decade using valid models of this type.
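For concreteness, here is a sketch in the spirit of that class of models. The structure follows the classic two-species patch-occupancy setup of Hastings (1980) and Tilman (1994); the parameter values are invented for illustration:

```python
# Patch-occupancy model with a competition-colonization trade-off.
# p1 = fraction of sites held by the competitive dominant, which can
# colonize (and displace the subordinate from) any site it doesn't
# already hold; p2 = the subordinate, which colonizes only empty sites
# but disperses faster. Simple forward-Euler integration.
def simulate(c1, c2, m1, m2, steps=200_000, dt=0.01):
    p1, p2 = 0.1, 0.1
    for _ in range(steps):
        dp1 = c1 * p1 * (1 - p1) - m1 * p1
        dp2 = c2 * p2 * (1 - p1 - p2) - m2 * p2 - c1 * p1 * p2
        p1, p2 = p1 + dp1 * dt, p2 + dp2 * dt
    return p1, p2

# Trade-off present (subordinate colonizes 5x faster): stable
# coexistence, with equilibrium occupancies p1 = 0.5 and p2 = 0.3.
p1, p2 = simulate(c1=0.4, c2=2.0, m1=0.2, m2=0.2)

# No trade-off (equal colonization rates): the subordinate is excluded,
# even though plenty of sites sit empty.
q1, q2 = simulate(c1=0.4, c2=0.4, m1=0.2, m2=0.2)
```

The point of the contrast: disturbance-generated empty sites alone don’t maintain diversity in this model; the trade-off does.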
What these possibilities all have in common is that low densities, and the concomitant weakness (or apparent weakness) of competition, are not the correct explanation for how coexistence is maintained. Rather, it just so happens that the conditions that generate the low densities also generate a stabilizing tendency for each species to bounce back when sufficiently rare.
As to why I favor formal models: I favor them because then I know exactly what’s being assumed, and what the predictions are. Verbal models, and pictures illustrating verbal models, generally do not provide the necessary precision, in my experience. I suggest that, if the verbal models on which you’re relying are indeed precise, it should be possible to demonstrate this by expressing them in the form of mathematics, or to cite someone who has done so, and then to analyze the resulting mathematical model in order to confirm that the verbal assumptions do actually imply the predictions the verbal argument claimed, and that the implied predictions come about for the reasons the verbal argument claimed. (By the way, I’m actually very familiar with Menge and Sutherland 1987, and have been since grad school. I’m also very familiar with Bertness and Callaway; I have a grad student working on that topic. My unwillingness to simply take those verbal models at face value does not stem from lack of familiarity with them.)
I’ve been reading these posts with some interest, particularly because Bruno cogently gathered and communicated a number of half formed thoughts and reactions I experienced when reading your initial post above.
Here are a few thoughts:
1) Intermediate disturbance theory, or any theory that operates as part of a closed system, is wrong. So it really is a straw man that you’re beating on. Unless you’re modeling a NASA biosphere or the entirety of planet Earth, none of our study systems are closed. Metacommunity theory is an extension of this idea, and the patch dynamics literature is, in many ways, an update on classic IDH theory. You mention in your previous posts that spatial heterogeneity and temporal variation can contribute to coexistence; this is patch dynamics. Some simple models (Mouquet, Loreau, etc.) of competition-colonization trade-offs show that disturbance can play a role in coexistence here, and that intermediate-sized disturbances vs. huge disturbances is another dimension that comes into play at this scale. I bring these thoughts up to say that I think you’re going overboard in throwing out IDH theory completely. All models are wrong, some models are useful. I think what’s happening here is that people have updated it quite a bit in recent years. I also don’t think there is a community ecologist actively working today who isn’t at least tangentially aware of the role of space in community dynamics. When I explained IDH theory to my ecology class this spring I started there and then built up to patch dynamics and spatial variability. Conceptually the IDH is a great stepping stone, a piece that allows you to make the mental leap to the more complex things happening in real systems.
2) This goes back to your comments about formal and informal predictions. When formalized mathematically, while certainly precise, allowing for ease of communication between ecologists able to decipher your equations, these will still not be an accurate representation of nature. No matter how complex or excellent they are, they are still abstractions used to demonstrate concepts or mechanisms under controlled circumstances. I think that you’re a bit too dismissive of verbal arguments. Any verbal argument can be parameterized, but that won’t make it any better or worse as a model. Trying to parameterize a model is a great exercise to help focus ideas, but ultimately all models are wrong, some are useful. Models are designed to deepen our understanding of ecological systems, and verbal and conceptual models definitely have a place in ecology.
3) “But the sorts of environmental stresses and physical disturbances that we generally think of as reducing densities to low levels also reduce the per-capita growth rates species can achieve in the absence of competition, thereby reducing the strength of competition needed to make per-capita growth rates go negative.” — I’d disagree with this quote. If you get a wave that scours a rock of sessile organisms and you go back to your rock, the “per capita growth rate” is not reduced at all! It may increase for those lucky organisms who survived the disturbance event. Disturbances that are short-term and catastrophic shouldn’t necessarily have a lasting press impact on the conditions at the site; in fact they may improve them (i.e. a fire releasing nutrients into the soil and opening the canopy, allowing for rapid growth of colonists and the survivors).
4) I think many of your arguments about rates of mortality, growth, etc make sense in a continuous temporal model but not in a discrete model. If something kills 90% of the population of all species every 10th generation this will have a different effect on the population than if you average that intermittent mass mortality event out into your mortality constant in a differential equation.
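A toy calculation of point 4, with invented numbers: kill 90% of a population every 10th generation, versus spreading that same average mortality evenly over every generation.

```python
lam = 1.3                   # per-generation growth factor (invented)
s_smooth = 0.1 ** (1 / 10)  # per-generation survival carrying the same
                            # long-run average mortality, spread out

N_pulse, N_smooth = 100.0, 100.0
peak_pulse = 0.0
for gen in range(1, 31):
    N_pulse *= lam
    peak_pulse = max(peak_pulse, N_pulse)
    if gen % 10 == 0:
        N_pulse *= 0.1      # pulsed mass-mortality event
    N_smooth *= lam * s_smooth

# Both trajectories end in essentially the same place after 30
# generations, but the pulsed population swings through far higher
# peaks (and, after each crash, far lower troughs) along the way.
```

So the long-run average growth is identical in the two scenarios, but the dynamics are not, and anything nonlinear that depends on those dynamics can differ, which is the commenter’s point.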
So now, going back to the points you raise above in your discussion with Bruno: I’d agree that your initial tone came off as a teensy bit arrogant, and it seems like you’re arguing semantics. I suppose that makes sense if you’re a modeler who is continually annoyed by shoddy communication or misinterpretation of cleverly constructed verbal models……
Thanks for your interest and for commenting at such length. I’ll respond to your comments in order:
1. I’m not setting up a straw man by assuming a system closed to immigration and emigration. All of the zombie ideas I’m critiquing either explicitly assume a closed system (e.g., Huston 1979), or else make no explicit reference to the need for immigration and emigration (e.g., as in standard textbook explications). So I’m addressing those ideas on their own terms, not strawmanning them. But you’re absolutely right that immigration and emigration can, under appropriate circumstances, combine with disturbances to lead to stable coexistence. Have you looked at my TREE paper? In that paper, I devote a subsection to commenting on competition-colonization trade-off models in this context.
2. Re: all models being wrong, and formal vs. informal models, the trouble is that no one’s ever been able to specify a valid formal model of any of the verbal models I’m critiquing. Every time someone has tried to convert the standard verbal arguments into formal math, the verbal arguments have been found to be invalid. This means that the assumptions of the verbal models do not actually imply the conclusions they had been thought to imply. Invalid models are non sequiturs; they can’t even be said to be empirically “right” or “wrong” at all. This is totally different from the usual sort of falsity to which all models are subject, as you rightly point out. Indeed, the possibility that an argument could simply be invalid probably isn’t one that ordinarily occurs to most ecologists. Which is perhaps one reason why the invalidity of the verbal arguments I’m critiquing hasn’t been more widely noticed.
Now of course, it’s possible that the formal versions of the verbal arguments haven’t correctly captured the assumptions of the verbal arguments. But in that case, the onus is on those making the verbal argument to clarify exactly what it is they’re assuming.
Further, formal models have identified mechanisms by which disturbances can promote diversity, but those mechanisms are not the ones identified by informal verbal models. This is an important point which I’ve noted more than once, but which seems to get missed often, so let me take the opportunity to note it again in case it needs noting (I’m not sure from your comments if it does or not). It’s not that disturbance can’t affect diversity or promote coexistence–far from it! It’s just that it can’t do so via any of the three mechanisms which I’ve termed “zombie ideas”.
I would also note that some of the formal models to which I’m referring are actually models of broad classes of situations. I think this point is underappreciated. I myself haven’t emphasized this point as much as I perhaps should have, so let me take this opportunity to do so, as I think it will help address your comment here. The key work here is that of Peter Chesson. His theoretical results are powerful because, rather than specifying a specific model and then analyzing its behavior, he’s derived results that apply to any model within a very broad class. Basically, his results apply to any system (open or closed, deterministic or stochastic, disturbed or undisturbed, etc.) with stationary dynamics (do you need me to clarify what those are?). See in particular Chesson and Huntly 1997, Chesson 2000, and (for the most rigorous and formal treatment) Chesson 1994. Note that assuming “stationary dynamics” doesn’t restrict the biological applicability of Chesson’s results–it’s not like assuming discrete generations, or a sedentary adult stage, or asexual reproduction, or whatever (Chesson’s assumptions about the biology of the system are so broad as to apply to any living system). Chesson’s results include a proof that the classes of coexistence mechanisms he has identified for models in this class are exhaustive. There aren’t any more remaining to be discovered, if only we’re imaginative enough in specifying our dynamical assumptions. Not if our dynamical assumptions lead to stationary dynamics (I’m glossing over some technical details, but that’s the gist). And the coexistence mechanisms he’s identified don’t include the zombie ideas I’m critiquing. So those zombie ideas basically can’t work in any world with stationary dynamics. (Aside: the real world is indeed nonstationary. But it’s not even clear what “coexistence” might mean in a nonstationary world, as Peter Chesson himself pointed out at a talk at last year’s ESA meeting.
The whole issue of nonstationary environments and their consequences is actually a massive gap in all ecological theorizing, whether verbal or mathematical–a huge and really important problem that hardly anyone has thought about. In all my commentary on the IDH, I've set the issue of nonstationarity to one side, because none of the zombie ideas I'm critiquing depends at all on the world being nonstationary.)
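For concreteness, the stationary-environment results I'm citing are usually summarized via Chesson's partition of an invader's long-term low-density growth rate. This is my rough gloss on the partition in Chesson 2000, with notation simplified, not a full statement of his results:

```latex
\bar{r}_i \;\approx\; \bar{r}_i' \;+\; \Delta N_i \;+\; \Delta I_i
```

Here $\bar{r}_i$ is species $i$'s long-run average per-capita growth rate when rare (all competitors at their resident dynamics), $\bar{r}_i'$ collects the fluctuation-independent mechanisms (e.g. classical resource partitioning), $\Delta N_i$ is relative nonlinearity of competition, and $\Delta I_i$ is the storage effect. Stable coexistence requires $\bar{r}_i > 0$ for every species in the invader role, and the point of my argument above is that the zombie mechanisms contribute to none of these terms.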
So you're absolutely correct that all models are false but some are useful–I agree 100%. But in this particular context, that slogan doesn't have the implications you suggest. Models that are false in useful ways are false because their assumptions are false (or only approximately true, or only true in some circumstances, etc.). The verbal models I'm critiquing are not false in this way. Their assumptions might well be true (or approximately true, or true in some circumstances, etc.)–but those assumptions do not actually imply the conclusions they've been claimed to imply. In fact, the assumptions of the verbal arguments I'm critiquing imply that disturbance is irrelevant to the maintenance of diversity. (Aside: here's an old post I did on the many useful ways in which models can be false.)
3. I think we're talking past one another here. The passage you quoted was a brief attempt to explain why it's incorrect to say "disturbances lead to low densities, and low densities mean that competition is weak, and weak competition can't lead to exclusion". I wasn't trying to comment on the specific biological situation you describe. You are of course correct to note that a wave that scours out part of a rocky shore thereby makes space available for colonization, creating a temporary opportunity for subsequent increase by any species able to colonize the cleared area. I agree with all that. I would merely add one clarifying remark (with which I suspect you'll agree): whether disturbances that clear space in the manner you describe can promote stable coexistence depends on the assumptions you make about other features of the system. For instance, disturbances of this sort are of course a key feature of models of competition-colonization trade-offs, and of the "successional niches" model of Pacala & Rees 1998. But if, for instance, there is no competition-colonization trade-off, disturbances of this sort won't actually promote diversity. I add this remark because one popular zombie idea about how disturbances promote coexistence is that they do so by "interrupting" competition and "setting back" the process of exclusion, preventing it from ever "going to completion" and giving every species a temporary opportunity to increase. Huston 1979 and Connell 1978 are among those who have made this claim, which is invalid. I discuss this idea in my TREE paper.
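To see that it's the trade-off, not the disturbance per se, that does the work, here's a minimal sketch of a Hastings/Tilman-style patch-occupancy model with a strict competitive hierarchy. This is my own illustration, not taken from any of the papers cited above, and the parameter values are arbitrary:

```python
def simulate(c1, c2, m, p1=0.1, p2=0.1, dt=0.01, steps=50_000):
    """Two-species patch-occupancy model with a competitive hierarchy.

    c1, c2: colonization rates; species 1 is the superior competitor
    and displaces species 2 on contact. m: disturbance (patch mortality)
    rate, identical for both species. Returns final occupancies (p1, p2),
    integrated by forward Euler.
    """
    for _ in range(steps):
        # Species 1 colonizes any patch not already occupied by itself.
        dp1 = c1 * p1 * (1 - p1) - m * p1
        # Species 2 needs empty patches, and is also displaced by species 1.
        dp2 = c2 * p2 * (1 - p1 - p2) - m * p2 - c1 * p1 * p2
        p1 += dt * dp1
        p2 += dt * dp2
    return p1, p2

# Classic result: species 2 persists only if c2 > c1**2 / m
# (here the threshold is 1.0**2 / 0.2 = 5.0).
print(simulate(1.0, 8.0, 0.2))  # trade-off steep enough: both persist
print(simulate(1.0, 2.0, 0.2))  # trade-off too weak: species 2 excluded
```

The inferior competitor hangs on only because it colonizes faster than the superior one, i.e. only because of the trade-off. Raising m (more disturbance) lowers the persistence threshold, but if you set c1 = c2 (no trade-off), species 2 is excluded no matter how disturbed the system is.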
4. My arguments do not depend on the distinction between continuous and discrete time, either explicitly or implicitly. Chesson's proofs, for instance, apply in both continuous and discrete time. Apologies if anything I've written has suggested otherwise. If there's a discrete-time model you have in mind that you think shows otherwise, please pass it along; I'd be happy to have a look at it.