#ESA100 impressions (UPDATED)

My take-home impressions from #ESA100:


  • I’ve been expecting and hoping for this for a couple of years, and this is the year it finally happened: modern coexistence theory, as developed by Peter Chesson and collaborators, is going mainstream. I’ll even go out on a limb and predict that it’s the next big thing in community ecology. Deborah Goldberg stood up in front of a huge Ignite session crowd and named it as one of the two most important ideas in community ecology right now. A number of people besides the usual suspects gave talks on it, including talks about how to apply it to new problems. Steve Ellner has invented a new statistical approach that should make estimates of the temporal storage effect (a particularly important component of modern coexistence theory) both easier to do and more accurate. And Peter Chesson presented what may be a major extension of the theory. I’m planning to do my part to get this bandwagon rolling–and help steer it clear of pitfalls–by writing a series of posts explaining modern coexistence theory with minimal (but not zero) math. The emphasis will be on giving you the gist, but in a more precise way than is possible if you just avoid math entirely or rely entirely on illustrative examples. I did the first few a while back, so while you wait for me to write the rest, now would be a good time to review the old ones (or read them for the first time).
  • Variance partitioning as a way to infer the processes driving metacommunity structure is dead. At least it should be, in my view. It’s now failed three major attempts to validate it using simulated data generated by known processes–Gilbert & Bennett 2010, Smith & Lundholm 2010, and now Eric Sokol’s very good talk at this meeting. And the reasons it fails probably aren’t fixable. Others would disagree, of course. And Eric himself thinks it might be possible to use other statistical approaches to infer process from pattern here, but personally I’ll believe it when I see it. Variance partitioning as a way to infer the processes driving metacommunity structure was a creative idea worth trying out. But we’ve tried it out, and it doesn’t work, not well enough to be useful at any rate. We should stop doing it. And before you say it, no, the fact that we’ve got lots of data sitting around that it would be really nice to make use of is not a good reason to keep on keepin’ on. If an approach doesn’t work, it doesn’t work, no matter how great it would be if the approach actually did work. And no, the purported lack of alternative approaches to accomplish the same goal isn’t a good reason to keep on keepin’ on either. If an approach doesn’t work, it doesn’t work, even if there aren’t any other approaches that would work. Plus, there actually are plenty of alternative ways to study the processes generating metacommunity structure–you can do all sorts of different experiments, you can collect all sorts of other data, you can do all sorts of other analyses, you can do theoretical modeling… (UPDATE: my comments on variance partitioning aren’t as clear as they should’ve been. What’s dead, in my view, is one popular use of variance partitioning–as a diagnostic tool for metacommunity structure. See the comments and this post for more on this.)
  • A few other talks I really enjoyed: Michael Cortez has a wonderfully simple, elegant idea for how to partition the stability of eco-evolutionary systems. His approach lets us address questions like whether evolution stabilizes or destabilizes the ecological dynamics (and vice-versa). Brett Melbourne showed tightly-linked models and experiments on how well a species in a changing environment will track the shifting environmental conditions to which it is best-adapted. Always cool to see someone develop a simple model that totally nails what’s going on. The alarming upshot is that standard “niche modeling” approaches for predicting species’ range shifts in response to climate change are likely to fail especially badly for precisely those species we’re most concerned about. Even in the absence of more familiar complications like interspecific interactions and barriers to dispersal. Lauren Shoemaker’s talk on how demographic and environmental stochasticity can alter the strength of spatial coexistence mechanisms was very good too. (Note: I saw lots of other very good talks, and I’m sure I missed many as well. Please don’t read anything into it if I didn’t list your talk here, even if you saw me in the audience.)


  • Biggest ESA meeting ever, or very close to it, from what I hear. More sessions than Portland a few years ago, which would seem to imply at least as many attendees.
  • The quality of Ignite talks is more variable than that of regular talks, I think for various reasons.
  • Thanks again to Ulli Hain and Emma Young for the guest posts on where to eat and drink. Those posts got a lot of views, and I heard from a lot of people who followed their advice and were glad they did. I followed several of their suggestions and can confirm that, yeah, the crab cakes at Faidley’s are amazing, and Pitango’s gelato is so good it should be illegal.🙂
  • My one quibble with the organization this year: I didn’t like having big plenary lectures–including Mercedes Pascual’s MacArthur Award lecture–scheduled at noon. I don’t like forcing attendees to choose between lunch and the MacArthur Award lecture (or between a late lunch and the first half of the afternoon sessions). A big reason people come to ESA is to see their colleagues and friends, which they do over meals.
  • I think the over/under on attendance in Ft. Lauderdale next year is 2500. With a big meeting this year, and a popular location (Portland) coming up in 2017, I suspect attendance in Ft. Lauderdale is going to be limited to folks who never miss an ESA meeting. That’s not a criticism of the choice of location–there are good reasons why the meeting needs to move around the country, and why it’s usually held in hot places. It’s just the reality–the meeting isn’t going to be equally huge every year.

Finally, a big thank you to the organizers, who have a big difficult job and who do it very well. I love the ESA meeting, and this year was no exception!

p.s. I’m on holiday until Aug. 21. Posting will remain light and comment moderation may be slow.

16 thoughts on “#ESA100 impressions (UPDATED)”

  1. In the verbal descriptions of the storage effect, the thing that confuses me most is buffered population growth. To me, it looks like it assumes the presence of some mechanism for populations to persist (e.g., a seed bank). But I feel that if you assume that populations persist, then of course all populations will coexist. I’d like this confusion to be clarified.

    • Duly noted. The short answer has a couple of parts. One is that “buffered population growth” doesn’t require some persistent stage like a seed bank or long-lived adults. Overlapping generations will do. Another is that a persistent stage like a seed bank doesn’t guarantee persistence of all species, or even any given species. It only helps in combination with other ingredients. The storage effect identifies and quantifies those ingredients. I’ll incorporate a longer answer into the posts.

    • Buffered population growth usually represents a life history strategy – although other aspects of biology can do the trick – that connects variation in the physiological activity of individuals to their competitive effect on others. Basically, one fraction of the population has to be highly sensitive to environmental and competitive effects (larvae in the original lottery model and growing individuals in annual plants) and some other fraction has to be less sensitive (adults in the lottery model and seeds in annual plants). Note that the highly sensitive fraction responds strongly to BOTH variable environmental and competitive conditions.

      Because the population is subdivided this way, the highly sensitive fraction maximizes benefits when the going is good and the less sensitive fraction minimizes losses when the going is bad. The going is only good when species partition their responses to the environment. Often the going is bad because of competition.

      Ultimately, the subdivision of the population into fractions that contribute to population growth at different times is the critical feature of buffered population growth. Population subdivision as the central feature of buffered population growth is detailed in Chesson (1990) Geometry, Heterogeneity, and Competition in Variable Environments.

      More specifically to your question, for a species to persist simply means that the species is viable in a fluctuating environment. Many species are viable in isolation, but may be excluded by competitors. Going extinct because you have a poor strategy for the environment is different from going extinct because better competitors force exclusion.

  2. Darn, meant to hit Chesson’s talk, but got side-tracked. Looking forward to your next blog posts on the topic.

    I second the no-big-talks-at-lunchtime request. I went to get lunch and then wasn’t able to get into the room because it filled up by the start of the talk! Not only shouldn’t “regular” folks sit around hungry, but for certain groups of people (diabetics, pregnant women, others…) it is just not an option to skip lunch.

    Related: Can we PLEASE get bigger rooms? We all know that certain events — like the MacArthur Award lecture — are going to be well-attended. Most convention centers have areas that are very large — or that can become very large by combining two smaller areas. I had to uncomfortably squeeze myself into several overflowing talks and didn’t get to see the MacArthur Award lecture at all. Very disappointing.

    And a big issue for many in the online community: ESA really needs to address the broader communication issue. This year, presenters were told in emails (and this was in the program, too) that only members of the media were allowed to convey information presented at the conference to the public. This meant a no-tweet policy, as well as a no-photo policy, and a no-record policy. When a member of the ESA communications staff expressed interest in my poster, I told her she could take a picture of it to share. She told me that ESA was discouraging staff from doing so because it sent the message that photos were okay! (So I gave her a paper version of my poster, which she discreetly photographed offsite to post to Twitter.) This is all so ridiculous! We’re in a new age and ESA needs to catch up. I want to record my talk so I can put it online for others to see. I want people to tweet my talk. I want people to take pictures of my poster and share them. I think the majority of people would love to have their research disseminated more broadly. Those that have concerns should be able to have an opt-out. But the default should not be “do not communicate with the public”.

  3. Would you throw away your stone ax because the chainsaw hasn’t been invented?

    Hi Jeremy,

    I usually find your posts thought-provoking even if I don’t always fully agree with them. Your post on the ‘death’ of variation partitioning to analyze metacommunities, however, has me stumped. I’m trying to understand your point here and not finding a way to do that.

    I gather you’ve been convinced by the papers of Gilbert and Bennett 2010 and Smith and Lundholm 2010, as well as the presentation by Eric Sokol at ESA, that variation partitioning is so plagued by problems that it doesn’t have anything to say about metacommunity structure (you say processes, so maybe you mean something different though) and should be ‘dead’. I’m pretty sure I disagree.

    Here are some thoughts and questions:
    1) Variation partitioning describes patterns in data in the same way that plain regression does. How one interprets those patterns is a different question that links these analyses to other forms of knowledge (e.g. theory and/or experiments). I agree that there has been excessive enthusiasm for the second part of this, with results being used as a ‘diagnostic’ test for metacommunity theories, but this just means that folks should be more thoughtful and circumspect about how they interpret the results; it’s not a fundamental problem with the method itself.
    2) I agree that more theory, modeling and experimental work are needed to study metacommunities. But I disagree that these can generally substitute for variation partitioning. Theory can provide logical verification of hypotheses and their predictions, but it needs data to tell us whether this is actually happening. Experiments provide data, but they almost always come from somewhat contrived settings. Variation partitioning is useful because it lets us see whether predictions from theory and experiments match up with patterns in nature (less contrived settings). I guess direct modeling of metacommunity dynamics in a known setting is a possibility, but the emerging efforts I know of require knowledge that isn’t really available for highly diverse natural metacommunities. Maybe someday… then maybe we can ditch variation partitioning.
    3) I also agree that variation partitioning has its problems in its current form, especially as implemented in most of the R packages that folks have been using. As I read the papers by Gilbert and Bennett and Smith and Lundholm, though, I think the problem is relatively clear and solutions are likely (I won’t comment on the presentation by Sokol since I was not at ESA to hear it, so let me know if there is something substantially different).
    The problem here is that it combines a relatively crude way to quantify environmental predictors (based on linear multivariate models such as RDA) with very sophisticated spatial models (especially those using eigenvector maps, for example). Many data sets (and simulated ones in particular) have long gradients with species that have narrow niches. Thus many species that occur at the midpoints of gradients will not contribute to the predictive capacity of the environmental variables, even if those species in fact show strong responses to those gradients (their presence tells us we are at the midpoint of the gradient). This means that environmental prediction is underestimated compared to what could be obtained with more sophisticated models of environmental distributions. That’s sort of okay, except when those environmental gradients are spatially structured. That’s because then part of the environmentally determined variation can show up as ‘pure spatial’; and this can be substantial, depending on how important the environmental variables are and how many species occur at the midpoints of these gradients. Overall, in theory this can be fixed, and it should be implemented in the more popular packages.
    3) There are probably some other issues with variation partitioning that are less obvious. I don’t know much about these, but until someone identifies them I’d be reluctant to ‘kill’ variation partitioning.
    4) Insufficient attention has been given to interpreting variation partitioning even if it were improved. Right now, it seems like the main thing people want to do is ‘distinguish’ between niche sorting along gradients and neutral theory. There are a whole lot of other possibilities, including the other two ‘dogmas’ of metacommunity ecology (competition-colonization and mass-effects theory), but we haven’t yet really gotten a handle on how they work in general. For example, colonization-extinction dynamics can also occur in environmentally heterogeneous metacommunities. Depending on the extinction rate (relative to the colonization rate), this can change predictors in a variation partitioning analysis (see Leibold and Loeuille 2015 at http://www.esajournals.org/doi/10.1890/14-2354.1). Other things, such as the presence of alternate stable states or related phenomena, also likely affect results. Even local adaptive evolution of niche relations probably plays a role. Then there are effects of biogeographic history. My point is that these alternatives will prevent variation partitioning from being able to ‘diagnose’ metacommunity processes, but we should have known that all along, and it doesn’t have anything to do with the technical shortcomings that have been identified. Nevertheless, variation partitioning can still give us insights into these possibilities by identifying how they affect variation components and by identifying the spatial (and conceivably temporal) scales at which they do.
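    A minimal sketch of the midpoint-species mechanism described above: a purely environmental, unimodal response gets absorbed into the ‘pure spatial’ fraction when the environmental predictor enters the model linearly but the spatial basis is flexible. This is an illustration only (in Python rather than R, with a cubic polynomial standing in for flexible spatial eigenfunctions; all names and data here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def r2(X, y):
    """R^2 of an OLS fit of y on X (with an intercept column)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - np.var(y - X1 @ beta) / np.var(y)

# a perfectly spatially structured environmental gradient
space = np.linspace(0.0, 10.0, 300)
env = space  # the environment IS the spatial gradient here

# a "midpoint" species: Gaussian niche centred on the gradient, plus noise
y = np.exp(-0.5 * ((env - 5.0) / 1.5) ** 2) + 0.05 * rng.normal(size=env.size)

X_env = env[:, None]                                    # crude linear environmental model
X_space = np.column_stack([space, space**2, space**3])  # flexible spatial basis
X_both = np.column_stack([X_env, X_space])

# variation that only the flexible spatial basis can capture shows up as
# "pure spatial", even though the process generating y is 100% environmental
pure_spatial = r2(X_both, y) - r2(X_env, y)
```

    In runs of this sketch, the linear environmental model explains almost nothing while the spatial basis soaks up most of the entirely environmental signal.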

    Let me know if I’ve misinterpreted your thinking but I don’t see anything in the published papers or in your comments that warrants declaring (or wishing) it dead. I’d hate to think we haven’t learned anything from the work that’s been done so far, nor to think that future work will be dismissed because of a premature public obituary. That said, more work does need to be done to strengthen the patient.

    • Thanks for taking the time to comment Mathew. I appreciate hearing from someone who knows more about this than me and has thought about it harder. And I appreciate the chance to clarify my too-brief remarks.

      We don’t actually disagree, at least not as much as you might think. My big problem with variance partitioning–the approach that I think should die–is using it as a diagnostic test for which kind of metacommunity one is studying. I don’t think that works at all, and I don’t think it can be made to work if only we appropriately extend/modify Karl Cottenie’s original diagnostic scheme. One reason (among others) that I think that diagnostic approach won’t work is the results of Smith & Lundholm, Gilbert & Bennett, and now Sokol. Another reason I think the diagnostic approach won’t work is your final point about the many other processes that affect variation partitioning results (I have other reasons too, some of which I suspect you’d disagree with…). Sorry that my comments weren’t clearer on that. I don’t think that people should stop using variance partitioning on their data. I just think they should treat it as a descriptive tool rather than a diagnostic tool.

      Re: your #3, I think I disagree a bit that the issues identified by Smith & Lundholm and Gilbert & Bennett are basically fixable statistical issues. I think that their work (and Sokol’s) reveals a deeper issue: we have faulty intuitions about how a given combination of processes should translate into a given variance partitioning result. Sokol for instance emphasized how the relative and absolute magnitudes of the terms in a variance partition often change non-monotonically as one varies a parameter in one’s metacommunity model (say, a dispersal rate parameter, or a parameter governing how close the modeled metacommunity is to neutrality). But I’m not as familiar with the statistical issues here as you are.

      Hope this clarifies my views, and apologies for provoking confusion–that’s not the sort of provocation I was aiming for.🙂

    • A quick further thought re: the possibility of a premature public obituary. This blog doesn’t have nearly enough influence for there to be any need to worry about that. No one post–or even many posts–from us has the power to shut down or even substantially slow a popular line of research. For instance, I’ve been calling the IDH a “zombie idea” online and in print at TREE for several years now. There’s been no detectable effect on published research on the IDH, at least not that I can find (and I’ve looked).

      That’s not to say we have zero influence or power–we certainly have some. But it’s the power to provoke thought and discussion, not the power to change behavior.

      I have an old post on this issue more broadly. It’s very rare in science for a popular approach or line of research to be *prematurely* stopped in its tracks by outside criticism:


      (Edit: and just to be clear, I don’t think that our lack of influence on others’ behavior means we should just feel free to say deliberately-outrageous things, safe in the knowledge that it won’t matter much! I’ll cop to sometimes writing in a provocative style, and sometimes trying ideas on for size. But I would never write anything I didn’t believe and didn’t think I had good reason to believe.)

    • Just a quick comment re your point about the relatively crude ways in which environmental predictors are included in linear multivariate methods such as RDA.

      This really reflects poorly on how the users of RDA etc. think about representing environmental predictors in the model; it is not a failing of RDA per se, nor something due to its being a linear model.

      Spatial eigenvectors are nothing more than a basis expansion of the x and y coordinates. They are a pretty sophisticated basis expansion, especially when compared with the earlier spatial polynomial basis that was used in the original variation partitioning literature from Daniel Borcard, Pierre Legendre, et al.

      There is little stopping ecologists from representing environmental predictors via some basis expansion, and the linear model behind RDA even makes this trivial to do. In the same way that you could add spline functions of predictors to a linear regression, for example, you can add them to an RDA. In fact, you could probably do this right now without any modification to vegan’s rda() function if you used functions from the splines package that ships with R and used rda()’s formula interface.
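      As a rough sketch of the idea (in Python rather than R, building a truncated-power spline basis by hand instead of using the splines package; the data and names are invented for illustration): the same ordinary least-squares machinery that fails on a unimodal species response with a raw linear predictor does fine once the predictor is basis-expanded.

```python
import numpy as np

rng = np.random.default_rng(7)

def r2(X, y):
    """R^2 of an OLS fit of y on X (with an intercept column)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - np.var(y - X1 @ beta) / np.var(y)

env = np.linspace(0.0, 10.0, 200)
# unimodal (Gaussian) species response to the gradient, plus a little noise
abundance = np.exp(-0.5 * ((env - 5.0) / 1.5) ** 2) + 0.05 * rng.normal(size=env.size)

# truncated-power spline basis: a linear term plus hinge terms at the knots
knots = [2.5, 5.0, 7.5]
X_spline = np.column_stack([env] + [np.maximum(0.0, env - k) for k in knots])

r2_linear = r2(env[:, None], abundance)  # raw linear predictor misses the hump
r2_spline = r2(X_spline, abundance)      # same linear machinery, spline basis
```

      The same trick carries over to RDA, since RDA is just this linear machinery applied to multiple response variables at once.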

      The main stumbling block to doing this in practice is that if you have just a couple of environmental predictors and want complex spp ~ env_i relationships, you’re going to run out of “degrees of freedom” and thus have an unconstrained model. We don’t have a good way of choosing the complexity of spline functions of environmental variables automatically in ordination like the mgcv package has for GAMs. I suppose one could envisage doing some form of forward selection on the basis expansion or a decomposed version of it, but I’d need to think this through some more.

      My gut feeling is that we need more thinking on how we choose which eigenfunctions to retain from the fancy spatial eigenfunction methods; selecting on the basis of explanatory power over the set of all species means we might be accumulating lots of small, irrelevant amounts of variance explained per species that cumulate into something statistically important when we consider all taxa. The complexity of spatial eigenfunctions/basis expansions may well be a significant fly in the variation partitioning ointment if users are not exceedingly careful.

      (Now I should probably go and read those papers on variation partitioning to see what the problems are… However, it has always struck me as a crude approach in general [I don’t do metacommunity work] compared to how we might go about doing the same thing in univariate regression models.)

      • Cheers for this. This is the sort of statistical issue that’s quite beyond me. I certainly get the gist, but not enough of the technical detail to be able to offer any useful comments.

  4. Another vote for “Not dead yet!”

    James Stegen and I used simulation models to evaluate the utility of variance partitioning as well


    and like Smith & Lundholm and others found that the “traditional” approach was flawed in that explained variance was not necessarily a monotonic function of the strength of environmental filtering or other processes (in part for reasons Mathew stated above).

    Furthermore, conducting variance partitioning on taxonomic composition data alone failed to diagnostically identify the combination of processes generating the pattern.

    HOWEVER, layering in additional information, such as variance partitioning of phylogenetic and functional diversity, was sufficient in our model to correctly identify the generating processes.

    So I would argue that interpreting variance partitioning from a single type of data alone, as has traditionally been done, may be inadequate, but there is some hope that as you fold additional patterns into your expectations, your ability to discriminate and diagnostically identify processes goes up. For example, for a particular taxonomic variance partitioning result, there are multiple parts of “process space” that could have generated that pattern. However, for a particular taxonomic variance partitioning result plus a particular functional variance partitioning result PLUS a particular phylogenetic variance partitioning result, we found that there was only a small region of process space that was consistent, which coincided with the parameter set that generated the data.

    The question I would have asked Jeremy if he had given his originally slated ESA talk would have been whether it would be possible to examine meaningful functional traits for his species and use the combination of taxonomic and functional composition patterns in concert as we did in our paper with perhaps a better result. (I’m assuming the species involved are super disparate phylogenetically such that treating them as a clade that has co-evolved doesn’t make much sense, but by all means that could be folded in too.)

    • “The question I would have asked Jeremy if he had given his originally slated ESA talk would have been whether it would be possible to examine meaningful functional traits for his species”

      Depends what you’re prepared to count as a “trait”. For instance, I’d count R* values as a very important trait in the context of experiments like the one I was planning to talk about (see Fox 2002 Am Nat). But I suspect that’s a broader definition of “trait” than most people would want to adopt.

      I doubt that looking at phylogenetic relatedness of the species involved in my microcosm experiments would tell you much, but I could be wrong. Lin Jiang’s done some microcosm work on that, focusing on bacteria.

  5. Variation partitioning should not die, it should evolve!

    Some people have already contacted me asking whether variation partitioning is no more! I don’t think Jeremy tried to say that. He commented that “Variance partitioning as a way to infer the processes driving metacommunity structure is dead”! However, even there, I don’t quite agree with him.

    Let me start with a teaser: Metapopulation ecologists have been inferring habitat quality (environment) and spatial connectivity (dispersal) for quite some time. This seems not to have gotten them into trouble. Why is that?

    Variation partitioning:

    – Can infer environmental filtering! This is great!
    – Allows (if properly used) for proper inference under spatial autocorrelation (i.e., spatial nuisance).
    – Is a tremendous heuristic tool in the sense that it allows us to know how much we don’t know about the potential spatial (and non-spatial; see below) processes driving metacommunity dynamics (i.e., spatial legacy).

    The main criticisms of variation partitioning were generated by Gilbert and Bennett (2010) and Smith and Lundholm (2010), and I’ll provide detailed comments on their findings. I will also comment on ways worth exploring to make variation partitioning better!

    A – VARIATION PARTITIONING IS A REGRESSION MODEL! This realization is important because regression is one of the most important inferential tools we have. Historically, variation partitioning was created “to identify common and unique contributions to model prediction and hence better address the question of the relative influences of the groups of independent variables considered in a regression model” (I’m quoting myself, as in Peres-Neto et al. 2006). It was created outside the realm of ecology under the name of commonality analysis (Mood 1969, Kerlinger and Pedhazur 1973). As far as I know, Borcard et al. (1992) were the ones who renamed “commonality analysis” as “variation partitioning”.

    Two pet peeves of mine:

    – It is not “variance” partitioning because they are not true variance estimates.
    – The common fraction is not an interaction.

    So, variation partitioning as a way “to identify common and unique contributions to model prediction and hence better address the question of the relative influences of the groups of independent variables considered in a regression model” is a good thing!!!
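    As a reminder of how simple the underlying arithmetic is, here is variation partitioning in its barest regression form (a Python sketch with made-up data; [a] and [c] are the unique fractions and [b] the common fraction, following the usual notation):

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(X, y):
    """R^2 of an OLS fit of y on X (with an intercept column)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - np.var(y - X1 @ beta) / np.var(y)

# made-up data: an environmental and a (correlated) spatial predictor
n = 500
env = rng.normal(size=(n, 1))
space = 0.6 * env + 0.8 * rng.normal(size=(n, 1))
y = (env + 0.5 * space + rng.normal(size=(n, 1))).ravel()

r2_E = r2(env, y)                       # fractions [a] + [b]
r2_S = r2(space, y)                     # fractions [b] + [c]
r2_ES = r2(np.hstack([env, space]), y)  # fractions [a] + [b] + [c]

a = r2_ES - r2_S         # unique contribution of environment
c = r2_ES - r2_E         # unique contribution of space
b = r2_E + r2_S - r2_ES  # common fraction: shared prediction, NOT an interaction
```

    Note that [b] falls out purely by subtraction, which is exactly why it is a shared contribution to prediction and not an interaction term.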

    One major issue is that we use an OLS model with abundance and presence-absence data, though one would be quite surprised to see how robustly it compares to more appropriate GLMs for ecological inference.


    It seems that what gets us ecologists into trouble is when we consider a set of spatial predictors in a variation partitioning scheme and then make inferences about processes. I think that was Jeremy’s original point. We infer space as dispersal dynamics, we infer dispersal dynamics as neutral. Can we? Should we? More below!

    The criticisms stem from two main sources: a) bias in the contrast of fractions between environmental and spatial variation (Gilbert and Bennett 2010), and b) the interpretations (inferences) we give to the spatial fraction are incorrect (Smith & Lundholm 2010). I’ll discuss these two studies because they seem to be the foci of attention.

    1 – Contrast of fractions: Gilbert and Bennett (2010; GB hereafter) produced a set of simulations in which the relative amounts of environmental and spatial contribution were preset and then estimated by variation partitioning. Their general conclusion was that the spatial component estimated from the sampled data explained more variation, in contrast to environment, than was originally preset in the metacommunity. In my mind, their conclusion points to a major issue: “ecologists need to pay more attention to how they treat environmental predictors in variation partitioning”, rather than to a flaw in the method per se.

    – Species environmental responses were simulated according to a Gaussian response curve (i.e., non-linear). As such, species responses are non-linear in relation to environment under an RDA. They acknowledge this issue and also used PCNM to model environmental variation. PCNM is not appropriate in these situations. Any pair of sites that is more environmentally distinct than a certain threshold (usually set as the maximum distance in a minimum spanning tree) will be treated as equally different (all distances above the threshold are fixed to the same value in PCNM). This is a reasonable assumption for geographic variation (i.e., the connections between geographically distant sites are severed and made all equal), but it makes little sense environmentally. It makes sense geographically because beyond a particular distance (the threshold in PCNM, or the range in spatial statistical models), information on species distributions at one site cannot predict variation at other sites; this is why a number of variogram models have a range beyond which spatial covariance becomes zero! Now consider a PCNM applied to the environment. Imagine that the range (based on a minimum spanning tree or some other method) establishes a threshold of 4 degrees of temperature. In this case, given any particular site (local community), all other sites that differ by 4 degrees or more in temperature are given the same truncated distance in the distance matrix used to extract “environmental” PCNMs. So, sites with 4, 8, 12 or more degrees of difference in temperature are set to have the same effect on species distributions. That does not seem ecologically sound, and it may actually produce non-linear responses to the environment that provide a poorer fit for environment in contrast to space.

    By setting appropriate transformations for the environmental predictors, a linear model would work quite well! If such transformations had been applied, the conclusion would have been that “ecologists need to pay more attention to how environmental predictors are treated in variation partitioning”. BTW, splines with appropriate penalization schemes would have performed quite well in modelling their environmental data. I’ve been working on a variation partitioning scheme using MARS (multivariate adaptive regression splines) that works quite well.

    – Degrees of freedom. GB (and others, including Peres-Neto and Legendre 2010) have raised the following issue: “how to penalize the inclusion of multiple eigenvectors, when several eigenvectors may be modelling a single spatial process” (excerpt from GB). In fact, by penalizing each spatial eigenvector with one degree of freedom, the importance of the spatial fraction may be underestimated in many cases (contrary to GB’s findings). There are two ways in which ecologists seem to select (usually by forward selection) spatial eigenvectors (predictors) to include in a variation partitioning: 1) select them while already considering the environmental predictors in the model; 2) select them independently of the environmental predictors. The second seems to be more widely used and allows one to properly measure how much the environmental and spatial sets overlap in predictive space (i.e., shared variation, fraction [b]). The first has the motivation of giving more importance to the environmental set, given that we know its origins. Moreover, it treats spatial autocorrelation as a nuisance, i.e., space is not the main interest, but accounting for it allows proper inference on the regression coefficients of the predictors of interest (also known as spatial filtering). Both have important repercussions for the interpretation of fractions that I won’t have the time to discuss here. Lasso and cross-validation can become really useful here and should be tried out. Cross-validation can be used to estimate the unknown degrees of freedom of a set of predictors and is actually directly related to adjusted R2s.
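    To see why the per-eigenvector penalty matters, here is a small sketch of the standard (Ezekiel) adjusted R² formula, with hypothetical numbers: the same raw R² is charged for 3 versus 30 retained eigenvectors. If several of those 30 eigenvectors are really modelling one spatial process, the penalty is too harsh and the spatial fraction looks smaller than it is.

    ```python
    def adjusted_r2(r2, n, k):
        """Ezekiel's adjusted R^2: each of the k predictors (e.g. each
        retained spatial eigenvector) is charged one degree of freedom."""
        return 1 - (1 - r2) * (n - 1) / (n - k - 1)

    # Hypothetical: raw R^2 = 0.40 from n = 50 sites.
    print(adjusted_r2(0.40, 50, 3))   # ~ 0.36: mild penalty
    print(adjusted_r2(0.40, 50, 30))  # ~ -0.55: heavy penalty, fraction wiped out
    ```

    Cross-validation sidesteps the problem by estimating the effective (rather than nominal) degrees of freedom directly from predictive performance.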

    – I assume (as usually happens in many multispecies simulations) that many species present in the whole landscape would not appear in GB’s samples. This in itself can bias the variation components. A fair contrast would be to delete species from the whole landscape that were not present in the sample and to re-estimate the “true” (expected) variation components.


    Logue et al. (2011) reviewed the three main analytical frameworks currently used to empirically analyse metacommunities: variation partitioning, null models of species co-occurrence, and tests of neutral theory. They also point out that although variation partitioning is the only one that aims at distinguishing species sorting (via differences in environmental affinities) from dispersal, in reality only the former can be clearly distinguished. I don’t disagree with that at all! And it happens because variation in species composition across communities that is spatially structured can result not only from dispersal but also from unmeasured spatial environmental variation (i.e., missing environmental predictors that are themselves spatially structured). The problem is that when neutral dynamics arrived, ecologists started inferring that the spatial component was due to neutral dispersal dynamics. The technique, however, was proposed and used almost a decade before neutral dynamics. So I do agree here with Jeremy on “I don’t think it can be made to work if only we appropriately extend/modify Karl Cottenie’s original diagnostic scheme.” I also think we will have a hard time making variation partitioning into a tool that can contrast neutral versus filtering dynamics, but let me give you some places to perhaps start:

    Connectivity versus spatial eigenvector predictors. In Jacobson and Peres-Neto (2010) we pointed out that one should build spatial predictors closer to dispersal dynamics, as we do in metapopulation models. One such approach has been implemented by Layeghifard et al. (2015).
    Let’s assume that a metacommunity is driven by both niche and neutral dynamics. In this case (see Smith & Lundholm 2010), if parts of the environmental variation that drive metacommunity structure are themselves spatially autocorrelated, then the fraction shared with “space” ([b]) may contain three types of covariation: a) covariation with spatial predictors because the measured environmental predictors are themselves spatially structured; b) covariation with spatial predictors that represent missing environmental predictors that are themselves spatially structured; c) covariation due to spurious autocorrelation arising from the neutral component of the dynamics. Although slopes in OLS models are unbiased in the sense that their estimates (on average) under sampling variation equal the population values (though often with little accuracy, as their confidence intervals are inflated), R2 estimates are biased by spatial autocorrelation. This latter point is little acknowledged, but it means that the shared fraction will contain some variation due to spurious autocorrelation. There are ways to separate these three components and we have been working on these implementations, but I won’t be able to cover them here.
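    The R² inflation under autocorrelation is easy to demonstrate with a small simulation (all settings hypothetical; an AR(1) series along a transect stands in for a spatially autocorrelated variable): two completely independent autocorrelated series share far more “explained variance” on average than two independent white-noise series.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ar1(n, phi, rng):
        """AR(1) series: a crude stand-in for a spatially autocorrelated
        variable sampled along a transect."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + rng.normal()
        return x

    def ols_r2(x, y):
        """Squared correlation = R^2 of a simple OLS regression of y on x."""
        x = x - x.mean()
        y = y - y.mean()
        return (x @ y) ** 2 / ((x @ x) * (y @ y))

    n, reps = 50, 500
    r2_auto = [ols_r2(ar1(n, 0.95, rng), ar1(n, 0.95, rng)) for _ in range(reps)]
    r2_white = [ols_r2(rng.normal(size=n), rng.normal(size=n)) for _ in range(reps)]

    # Both pairs of series are independent, yet the autocorrelated pairs show
    # a much larger average R^2: "spurious" shared variation.
    print(round(np.mean(r2_auto), 3), round(np.mean(r2_white), 3))
    ```

    This is exactly the kind of variation that ends up inside the shared fraction even when no process links the two variables.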

    C – Variation partitioning as a tool to make our science better. Assuming we use variation partitioning properly, we can estimate how much spatial variation is of unknown origin. It is up to us to investigate further the origins of this variation. We are very bad at that though! Moreover, spatial predictors can be used to map spatial structure at different scales and point out important directions for further investigation.
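    For readers less familiar with the fractions discussed throughout, the classic partition computes [a] (pure environment), [b] (shared), [c] (pure space), and [d] (unexplained) from three fits. A minimal numpy sketch with hypothetical data, in which the environmental predictor is itself spatially structured so the shared fraction is large:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100

    # Hypothetical predictors: a spatial gradient S and an environmental
    # variable E that is itself spatially structured; the community response
    # y is driven by E only.
    S = np.linspace(-1, 1, n)
    E = 0.7 * S + 0.3 * rng.normal(size=n)
    y = 1.0 * E + 0.5 * rng.normal(size=n)

    def r2(y, *cols):
        """R^2 of an OLS fit of y on the given predictor columns."""
        X = np.column_stack([np.ones(n), *cols])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    r2_E, r2_S, r2_ES = r2(y, E), r2(y, S), r2(y, E, S)

    a = r2_ES - r2_S         # [a] pure environment
    c = r2_ES - r2_E         # [c] pure space
    b = r2_E + r2_S - r2_ES  # [b] shared (not a true R^2; can even be negative)
    d = 1 - r2_ES            # [d] unexplained

    print(round(a, 2), round(b, 2), round(c, 2), round(d, 2))
    ```

    Because E tracks S, a sizeable [b] appears even though space plays no causal role here, which is why [b] alone cannot diagnose process.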

    IN CONCLUSION, and in agreement with others earlier, variation partitioning should not die (at least not before some more good investment, at least in my mind). That said, we do have to work harder at evolving it.

    • Thanks for the lengthy comments Pedro. I’m glad my remarks sparked a useful discussion even though they initially weren’t as clear as they should’ve been.

      I agree with much but not all of what you have to say. Or perhaps I’m not understanding some of it–you may mean something different by “environmental filtering” than what I think should be meant. My own preferred definition is the same as that used by Kraft et al. for “habitat filtering” in their recent paper; you can’t learn anything about habitat filtering in their sense from variance partitioning of observational data from unmanipulated natural systems (https://dynamicecology.wordpress.com/2015/06/29/steering-the-phylogenetic-commuinity-ecology-bandwagon/).

      And I’m not sure that “spurious” spatial autocorrelation is really a thing. I think if you have spatial autocorrelation in your data, it arose for some reason, and whether you want to subtract it out of your analysis depends very much on the goals of the analysis. Brian has an old post on this in another context (https://dynamicecology.wordpress.com/2013/10/02/autocorrelation-friend-or-foe/). But in saying this, I may actually be agreeing with your remark C at the end of your comment, noting that we need to investigate the causes of spatial autocorrelation of unknown origin.

    • Lots of good points and information in there Pedro!

      The main point I want to highlight and thoroughly agree with is that “space” is convenient to throw into a regression, but “space” is NOT a process. It may be unmeasured environmental variables that are spatially structured; it may be dispersal, but dispersal need not equate to dispersal limitation. And dispersal definitely need not be neutral.
