We’re very excited to announce a new feature here at Dynamic Ecology: invited guest posts. There are lots of ecologists who, while they don’t want to become bloggers, have some ideas they’d like to blog about, especially if they could reach a large audience. A guest post on Dynamic Ecology is a perfect fit for them. And from our perspective, it’s a great way to provide more of the substantive posts that you, our readers, want.
Our first invited guest post, below, is by Peter Adler. I’ve been bugging Peter since the summer to do a post for us, but it’s well-timed now because it picks up on some themes of Brian’s recent post on how ecologists need to get better at prediction. Ecologists often claim that their work will help improve predictions. Peter takes a hard look at these claims, and he doesn’t spare himself from scrutiny.
“Ecological forecasting” has become a ubiquitous buzzword for good reason: skillful predictions of future ecological changes would be tremendously valuable in terms of conservation success and real dollars. Should land management agencies modify their critical habitat designations for endangered species? How much money should they budget for fighting fires or invasive species in the future? Will land-atmosphere feedbacks dampen or accelerate climate change? Where should carbon traders invest? The list of critical questions goes on and on.
I suspect that I am not alone in justifying my research as an important step towards useful ecological forecasts. But basic research questions are my ultimate motivation, which makes it hard for me to claim that I am really serious about forecasting. I suspect that I am not alone here either, and that has me worried about our field making promises that we don’t intend to keep.
My current NSF project provides a case in point. The title of the project is “Forecasting climate change impacts on plant communities: When do species interactions matter?” It is ostensibly about ecological forecasting, but the real emphasis of the work is on understanding niche differences, the strength of species interactions, and the magnitude of indirect effects. These are topics that NSF review panels get excited about, but they may not be so important for forecasting. In fact, my work shows that when niche differentiation is strong, the indirect effects of climate change are very weak compared to its direct effects, which could be captured by single-species models. Similarly, Bill Murdoch has shown that single-species models work well for generalist consumers but not specialists. So it’s the species with strong, specialized interactions that will require forecasting models which deal explicitly, not just implicitly, with interspecific interactions. We may know enough right now to identify many of these special cases, things like Canada lynx and snowshoe hare or whitebark pine and the mountain pine beetle.
If I were really serious about forecasting, I would take a much different approach. Instead of studying species interactions, which may or may not be important for different species, I would focus on a factor that we know is important for all species in my semi-arid study systems: water availability. If I had good data on how much water a species needs, when in the year it needs it, and where in the soil profile it gets it, I could project the effects of future changes in climate on soil water availability and, in turn, on species performance. Those predictions could address changes in abundance as well as changes in distribution. My PhD advisor, Bill Lauenroth, and his current collaborators have some nice examples of this approach. Yes, information about species interactions and other conceptually interesting complications might improve these projections, but for most semi-arid plant species I would bet that a first order approximation based on water relations would give the best predictive return per dollar of research investment.
The problem with this approach is that it’s not sexy. We already know that water is the key limiting factor in these ecosystems. A proposal to pursue this kind of research cannot promise a conceptual advance or a new way of thinking about ecology; it just promises to fill in lots and lots of details. Knowing that water is the limiting factor is not enough for prediction; we need to be able to quantitatively describe the shape of species’ reaction norms. Unfortunately, collecting this kind of data is tedious and expensive, especially at the spatial scales relevant to ecological forecasting.
The NSF programs I am familiar with do not reward this kind of work, nor do the high impact journals in our field. My impression is that the rewards and incentives are all focused on breakthroughs in understanding nature, not advances in predictive skill. Ecologists like to assume that deeper understanding will lead to more skillful prediction. That’s a common theme in the Broader Impacts section of NSF proposals. But this may be a bad assumption—witness the success of purely empirical forecasting approaches such as machine learning. Leo Breiman’s classic “Two cultures” paper emphasizes that understanding and prediction are different goals that often require different approaches. Mechanistic understanding can improve prediction in some (many?) cases, but we shouldn’t assume that it always will.
So far I have been arguing that NSF ecology programs do not support research with purely predictive goals that involves collection of reams of boring but useful data. Well, what about NEON? It’s a $400 million investment in ecological forecasting, right? Does this mean that the conceptually-focused culture of ecology is changing? I’m not sure. I have a hard time finding anyone who is excited about NEON, but the kinds of complaints people offer are revealing. One common refrain is, “$400 million and no hypotheses!” This represents what I am calling our traditional ecological research values, emphasizing conceptual advances over prediction. On the other hand, many colleagues I talk to about NEON don’t question its objectives. Rather, they worry that problems in the network’s design will doom it to failure. This is a concern about means, not ends, and may indicate that a portion of our discipline is willing to commit seriously to the ecological forecasting challenge.
I would never argue that we all should be working on ecological forecasting. I’m not even sure that I want to do it, and I would have no problem arguing that basic research is just as important, if not more so. My point is that we should be careful about justifying our intellectual curiosity and basic research obsessions under the guise of ecological forecasting. And those of us who do tackle the forecasting challenge should be pragmatic, prioritizing skillful prediction over mechanistic understanding.