INTECOL 2013 – Tuesday Recap #INT13

I missed the first plenary today, by Ove Hoegh-Guldberg, as I was finalizing my own talk to be given immediately after it, so this is a second-hand account. But by all accounts it was a great talk. It was on coral reefs and the doom they face. Unlike many such talks, however, it struck a positive note by arguing that you have to reach people’s hearts to get them to care about coral reefs, and by demoing an extension to Google Maps that lets you take a “virtual dive” on the reefs.

I was part of a session on “Reinvigorating macroecology with process-based approaches”, giving the kick-off talk. I attended only this session today, partly because I consider it polite to sit all the way through your own session and partly because there were lots of good talks I wanted to see. That meant I missed talks in at least three other sessions I wanted to go to (it’s the conference version of Murphy’s law). Rob Colwell, John Harte and I all gave a strong pitch for the important role of stochastic processes in macroecology (and for seeing them as mechanisms). Sean Connolly explicitly ruled out random placement of species ranges (which both Rob and I cited as an example of a process, albeit a stochastic one) as not being a process, but gave a nice talk built on Pueyo 2006, a paper which he, and I, would say has not been properly appreciated. It basically builds a Taylor-series-like expansion around stochastic null models to increasing levels of complexity (literally higher-order terms). Sean tested the framework on a bunch of datasets and showed that you need to add the higher-order terms (which lead to a lognormal) for most data.

One of the differences I’ve noticed from ESA is that at ESA the symposia are all invited speakers with 30-minute slots. Here at INTECOL (or maybe it is a BES tradition) half the speakers are invited, the other half are selected from people who submitted abstracts for an oral session, and all but one speaker has only 15 minutes. I have mixed feelings about this. It is nice in that it gives early-career people a chance to speak at the symposia, and exposes the audience to new work instead of some of us (like me) repeating old work. But for audience members it somewhat reduces the distinction of attending a symposium – namely that a selection filter was applied for people who are mostly good speakers and who have 30 minutes to build their ideas more carefully, leading to lots of good talks.

Regardless, there were some excellent talks in my symposium (which spanned morning and afternoon). Julia Blanchard talked about work deriving the power-law distribution of body sizes with a −2 exponent using McKendrick–von Foerster type equations, and extended this to show how the body size distribution changes under climate change and fishing pressure. Samraat Pawar presented his 2012 Nature paper deriving optimal foraging parameters from body size allometries (I’m biased, having published a paper in this area, but I think this is a really cool, potentially groundbreaking paper). Marius Somveille showed a nice study on what controls species diversity in migratory birds (high seasonality). Esther Sebastian-Gonzalez gave a talk on frugivore networks. My first thought was “oh no, not another network statistics talk” (such talks are running rampant everywhere), but she actually went the next step: she carefully tied traits of the birds and berries to their roles in networks (e.g. specialist vs generalist) and examined network variation across latitude (which showed no pattern), which I thought was a great example of where network theory should be going (i.e. connections to traits and geography). There were several other nice talks in my session – I really enjoyed them all – but space limits prevent more details. Overall, I found it a very heartening session. Although some see macroecology as a plug-and-chug, regress-then-publish field, there is a lot of good deeper thinking going on.

The second plenary was by Nancy Grimm of the ASU sustainability group and the Phoenix Urban LTER, speaking on themes near to my own heart. She emphasized that both climate change and the rapid rate of urbanization (we recently passed 50% of the world living in urban areas and may hit 90% by 2100) place increased pressure on water delivery and management. She argued that the traditional approach of building hard grey infrastructure (lots of concrete) – designed to make failure (e.g. flooding) rare, but allowed to fail catastrophically when it does fail (think Hurricane Sandy in New York as a recent example) – needs to be replaced by a focus on resilience achieved through diversity and green infrastructure, and gave some examples. She emphasized that ecologists have an important role in this, and that we need to start looking at the ecology of built infrastructure and working more closely with engineers. I found this theme inspiring.

The third plenary was by Ilkka Hanski. He gave a nice overview of the history of spatial ecology. Striking to me (especially given my recent post) was that prior to 2005 the two most highly cited papers on “spatial ecology” were Wiens’s and Levin’s papers on spatial scaling, while the three most highly cited papers since 2005 were all on niche modelling. He then spoke about his recent work on the evolution of dispersal in (what else) patchy metapopulations. In his final section he talked about how, because metapopulations can collapse (go extinct) when patch density is too low, the traditional species–area approach to estimating extinctions from lost area is an underestimate – beyond the species going extinct due to loss of area, additional species will go extinct due to fragmentation and loss of viable patch structure (too much distance between patches, and patches too small).
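For readers who haven’t seen it, the classic species–area extinction estimate works like this: if richness scales as S = cA^z, then shrinking habitat from A0 to A1 predicts a surviving fraction of (A1/A0)^z. Hanski’s point is that this number is a floor, not an estimate, once fragmentation is in play. A minimal sketch (all numbers illustrative, z = 0.25 is just a conventional textbook value):

```python
# Classic species-area estimate of extinctions from habitat loss.
# S = c * A^z, so the fraction of species expected to persist when
# area shrinks from A0 to A1 is (A1/A0)^z. Fragmentation pushes
# real losses above this baseline.

def sar_survivors(S0, A0, A1, z=0.25):
    """Species expected to persist under the species-area relationship."""
    return S0 * (A1 / A0) ** z

S0 = 1000  # species in the intact region (illustrative)
survivors = sar_survivors(S0, A0=100.0, A1=50.0)  # lose half the area
print(round(survivors))  # ~841, i.e. only ~16% predicted extinct
```

Losing half the habitat “costs” only about 16% of species under this formula, which is exactly why it reads as reassuring – and why Hanski’s fragmentation correction matters.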

This is another difference I have noticed from ESA. There are lots of plenary sessions – two or three every day – and these plenaries are all big-name ecologists talking about their work. And everybody goes. My recollection (or more precisely, my perception of my own behavior at ESA) is that the MacArthur award lecture is about the only thing at ESA that matches this. There are plenaries for awards, plenaries for opening ceremonies (often with dignitaries), etc. But I really like the high number of plenaries that are basically just high-level departmental seminars. They are all (by definition) excellent, informative speakers, and it sort of ties the conference together (everybody went to the talk and has something to talk about).

One final theme that I see really emerging from this conference is that nobody is willing to squint their eyes and ignore curvilinearities in supposed power laws any more. If data follow a power law – y = ax^b, and these abound in ecology – a log-log plot gives a straight line. But almost always there is a subtle curvilinearity in the data. Joel Cohen talked about this yesterday in his plenary on Taylor’s law. John Harte emphasized its existence in species–area relationships. Marcus Viera gave a talk where he argued that mammal allometries should show a break at a body size of 100 g due to the Brown–Marquet–Taper argument that this is the optimal body size for mammals. He took an allometry for distance travelled vs body size which had no data below 100 grams (I guess nobody radio-collars mice) and went out and got data below 100 grams, and sure enough there was a clear change in slope (from negative to positive) in distance traveled vs body size at 100 g. Moral of the story – sweep the deviations from straight lines in power laws under the rug at your own risk!
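One quick, non-authoritative way to take that moral seriously in practice: fit both a line and a quadratic in log-log space and see whether the quadratic term is doing real work. The synthetic data below have a deliberate bend built in (all parameters are made up for illustration, not from any talk at the meeting):

```python
# A straight line on a log-log plot is the signature of a power law
# y = a * x^b. To check for hidden curvilinearity, fit a line and a
# quadratic in log-log space and inspect the quadratic coefficient.
import numpy as np

rng = np.random.default_rng(0)
logx = np.linspace(0, 3, 50)
# Synthetic "power law" with a deliberate quadratic bend (-0.05) plus noise.
logy = 1.0 + 0.75 * logx - 0.05 * logx**2 + rng.normal(0, 0.02, 50)

lin = np.polyfit(logx, logy, 1)   # pure power-law fit (slope, intercept)
quad = np.polyfit(logx, logy, 2)  # allows curvature; quad[0] is the x^2 term

# A clearly nonzero quadratic coefficient flags a bent "power law";
# here it recovers something close to the -0.05 used to generate the data.
print(quad[0])
```

If `quad[0]` is distinguishable from zero, the straight-line exponent from the linear fit is an artifact of where your data happen to sit on the curve – which is exactly the trap the talks above were warning about.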

PS – if you want to follow the conference on Twitter look for hashtag #int13

9 thoughts on “INTECOL 2013 – Tuesday Recap #INT13”

  1. Nice recap, thanks. Wish I was there (sigh).

    I agree with you re: the way the symposia are structured making them not that much different from ordinary oral sessions. If I recall correctly this may be a BES thing, but I’m not sure. ESA of course has its own hybrid session of sorts, the organized oral sessions, in which just two (rather than half) of the slots are reserved for people who applied to give regular talks. But in practice those organized oral sessions usually function more as a device to ensure that similar talks are grouped together than as proper symposia, though there are exceptions.

    Re: stochastic processes in macroecology, I’m with Sean Connolly. The mid-domain effect (random placement of species’ ranges) is not a process. It’s just a bad “null model”, arbitrarily “nullifying” certain effects of the underlying stochastic processes that generated the data while leaving other effects of those same processes intact. But you already knew how I felt about that.🙂 (and see subsequent comments from Sean Connolly himself)

    That aside, the general theme of that macroecology symposium sounds really interesting. Related to a lot of stuff we’ve talked about on the blog too.

    • FWIW, Rob’s talk was not on mid-domain. It was his work with Thiago Rangel, where species are randomly assigned niches in several environmental dimensions, then placed in geography based on their niches, and then evolution and speciation are allowed to happen. I definitely oversimplified by calling it randomly placed ranges. There is a lot more process in these models than in the mid-domain models, but I’m still not sure it is process-based enough for you, Jeremy!

    • Re “stochastic processes in macroecology, I’m with Sean Connolly” – are you really throwing all stochastic processes out the window, or just mid-domain?

      • For the record, I’d call the stuff Thiago and Rob have been doing process-based, since the approach produces estimates of things like temperature tolerance ranges and so forth which can in principle be tested with data other than the species ranges used to fit the model.

        This was essentially my diagnostic for calling a model “process-based” – parameters have a biological interpretation and thus are independently estimable, at least in principle. I’m not overly wedded to this definition, but when I tried to reverse-engineer my subconscious working definition by asking what the shared features of the models I considered process-based were, that’s what I came up with. Often such models are reductionistic, but I don’t think this necessarily has to be the case.

        Null models need their own sort of treatment here, since many are parameterless, but things like decisions about what to constrain to observed values in a null model, or what probability distribution to use to draw randomized values play a role akin to choice of parameter value or functional form. Could one somehow go and, independently of species range data, work out what that constraint should be (e.g., use observed range sizes but choose locations from uniform distribution, or choose both from a uniform distribution, or use something besides a uniform distribution, etc.)? I can’t work out how.

        I think for a lot of models the natural intuitive “nullness” would disappear if, instead of saying vaguely that we generate some aspect of the data “randomly” or “stochastically”, we were more specific and specified the probability distribution we were assuming (usually uniform within some bounds dictated by aspects of the data). Sometimes that statistical expectation makes biological sense, but sometimes it just seems weird. For instance, if I believed that species ranges were specially created by a divine being with an inordinate fondness for uniform distributions, then subsequently shifted in response to the environment, that would be a good reason to use a static randomization of species ranges as a starting point.

        As I said before though, I don’t think Rob Colwell ever intended those randomization algorithms to be the final word on what MDEs should look like. I think he’d consider the use of “process-based” stochastic colonization extinction type models to be a natural refinement/development of theory for what species richness patterns should look like under the “no environmental gradient” hypothesis. Even the niche conservatism models like Rangel et al have a family resemblance to this. Temperature matters in those models, but there is no tendency for species’ niche preferences to gravitate towards, say, hotness rather than coldness.

        Sorry for the excessive use of scare quotes. I blame the jet lag.

  2. I write from the INTECOL meeting in London, where Sean Connolly kindly suggested I might want to have a look at what was being written here, about my two INTECOL talks and their relation to geometric constraints (GC)/MDE theory.

    The mid-domain effect is a pattern forced by geometric constraints on the location and overlap of ranges within bounded domains. To my mind, a constraint is not a process, but constraints can shape the outcome of processes. The geometric constraint on the location of real geographic ranges within the physical limits of real geographical domains (such as the elevational, sea-to-mountaintop, gradients that I discussed in both my INTECOL talks) scales with range size. Small ranges are little constrained by geometry, and their midpoint location is thus free to respond to environmental and historical drivers. Midpoints of, and thus overlap among, larger ranges are increasingly constrained in location as range size approaches domain extent, and thus often bear the signature of GCs. In one of my talks, I described a novel method (by Wang and Fang 2012) for modeling interactions between GCs and environmental drivers, for empirical data. The interaction has been very simply modeled by Rangel et al. (2005, see also Rangel & Colwell 2009), with evolutionarily dynamic ranges, an approach that was later elaborated in two process-driven, stochastic models (Rangel et al. 2007, Colwell and Rangel 2010). Under the realistic assumption that extinction is a stochastic, inverse function of range size, the patterns that arise from these models clearly bear the imprint of geometric constraints, which are just about as hard to avoid as death and taxes.

    Hurtt and I (Colwell and Hurtt 1994) pointed out the importance of extinction in our models two decades ago: “The assumption in model 2 that species’ ranges cannot, on the average, become arbitrarily small as a hard boundary is approached, is supported not only by the general absence of empirical patterns of significant range contraction at the extremes of distributions but also by theories of extinction and minimum viable population size (e.g., Lande and Barrowclough 1987). Ultimately, then, what we have called nonbiological richness gradients, to emphasize the absence of assumptions about environmental gradients, may depend on this fundamental biological principle.”

    With regard to the assumptions of “classical” GC/MDE null models, Jeremy Fox and other geometry-deniers are apparently still confused, despite repeated published efforts by several of us to clarify matters. These null models assume only that the LOCATION of ranges within a bounded domain is independent of any environmental or historical factors that vary spatially within the domain. These models do NOT assume that real range SIZES are determined by a deity, but rather by interactions between natural selection, environmental drivers, evolutionary and environmental history, and unspecified factors that collectively we call “stochastic.” For theoretical studies, we may draw range sizes from specified statistical distributions, but for null models aimed at assessing the role of “pure” GC on empirical range location and (thus) richness patterns in nature, we take what nature gives us in terms of range sizes: the empirical range size frequency distribution (RSFD). Narcissus effect? Please re-read Colwell and Winkler (1984). The concern, there, was that the prior effects of competition on the mainland pool would be “reflected” (and thus hidden) in island null models that draw from the pool that aim to detect competition. Classical MDE models are in no way aimed at modeling patterns of range size, but rather patterns of range location. In the process-based models that have grown out of the early models, location and size of geographic ranges are modeled independently (to the degree permitted by the GC). In the classic models, failing to match modeled RSFD to the empirical RSFD simply confounds range size with range location, instead of revealing the effects of GCs.
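    For concreteness, the classical null model described above can be sketched in a few lines: keep a fixed range-size distribution (standing in for the empirical RSFD; here drawn arbitrarily for illustration), place each range’s midpoint uniformly at random in a bounded domain subject only to the geometric constraint that the range fit inside it, and tally richness along the domain.

    ```python
    # Sketch of a classical mid-domain null model: range locations are
    # independent of any environmental gradient; only the geometric
    # constraint (the range must fit within the bounded domain) applies.
    # Range sizes here are a stand-in for an empirical RSFD.
    import random

    random.seed(1)
    D = 1.0                                                  # domain extent
    sizes = [random.uniform(0.05, 0.9) for _ in range(500)]  # stand-in RSFD

    richness = [0] * 100  # richness at 100 evenly spaced sample points
    for s in sizes:
        mid = random.uniform(s / 2, D - s / 2)  # midpoint constrained by size
        lo, hi = mid - s / 2, mid + s / 2
        for i in range(100):
            x = (i + 0.5) / 100
            if lo <= x <= hi:
                richness[i] += 1

    # Richness peaks toward the domain centre: the mid-domain effect.
    print(richness[50] > richness[5], richness[50] > richness[94])
    ```

    The hump arises purely because large ranges have nowhere to put their midpoints except near the centre, which is the geometric constraint at work.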

    Colwell, R. K. and T. F. Rangel. 2009. Hutchinson’s duality: the once and future niche. PNAS 106:19651-19658.

    Colwell, R. K. and T. F. Rangel. 2010. A stochastic, evolutionary model for range shifts and richness on tropical elevational gradients under Quaternary glacial cycles. Philosophical Transactions of the Royal Society B 365:3695–3707.

    Rangel, T. F. L. V. B. and J. A. F. Diniz-Filho. 2005. An evolutionary tolerance model explaining spatial patterns in species richness under environmental gradients and geometric constraints. Ecography 28:253-263.

    Rangel, T. F. L. V. B., J. A. F. Diniz-Filho, and R. K. Colwell. 2007. Species richness and evolutionary niche dynamics: a spatial pattern-oriented simulation experiment. American Naturalist 170:602-616.

    • Thanks for your comments, Rob – I, and I’m sure many readers, appreciate this discussion.

      As you note, the debate over the mid-domain effect seems rather resistant to resolution. Both sides have made their arguments repeatedly in peer-reviewed papers and other venues, yet each has failed to convince the other. I appreciate that you find it frustrating that I and others on the other side of the debate remain, in your view, confused, but I’m sure you also recognize that in long-standing debates both sides tend to get frustrated at what they see as the continuing confusion of the other side.

      I don’t have any comments to make about the interpretation of classical randomization-based null models of the MDE that haven’t already been made in the peer-reviewed literature, so I won’t comment further on that point as my comments wouldn’t add any value.

      It’s my sense that the general direction of research in this area has been towards models that incorporate the processes and constraints thought to influence range size frequency distributions, species richness gradients, and other biogeographical patterns, and away from attempts to make inferences from comparisons between data and simple randomization-based “null” models. You’ve cited several papers along those lines, and there’s also Sean Connolly’s work and other papers as well. I think this shift in direction is a good thing: it allows for much richer and more informative comparisons between model and data. I agree 100% that “constraints can shape the outcome of processes”. And so while I suspect I disagree with you as to the way in which geometric constraints shape the outcome of birth, death, and movement processes in heterogeneous environments, that interplay certainly can be fruitfully explored with process-based models, and I look forward to following future developments in this area.

  3. Pingback: #EUMacro and macroecology reaching a self-conscious adolescence | Dynamic Ecology
