Ecological forecasting: why I’m a hypocrite and you may be one too

We’re very excited to announce a new feature here at Dynamic Ecology: invited guest posts. There are lots of ecologists who, while they don’t want to become bloggers, have some ideas they’d like to blog about, especially if they could reach a large audience. A guest post on Dynamic Ecology is a perfect fit for them. And from our perspective, it’s a great way to provide more of the substantive posts that you, our readers, want.

Our first invited guest post, below, is by Peter Adler. I’ve been bugging Peter since the summer to do a post for us, but it’s well-timed now because it picks up on some themes of Brian’s recent post on how ecologists need to get better at prediction. Ecologists often claim that their work will help improve predictions. Peter takes a hard look at these claims, and he doesn’t spare himself from scrutiny.

-Jeremy

****************************************

“Ecological forecasting” has become a ubiquitous buzzword for good reason: skillful predictions of future ecological changes would be tremendously valuable in terms of conservation success and real dollars. Should land management agencies modify their critical habitat designations for endangered species?  How much money should they budget for fighting fires or invasive species in the future? Will land-atmosphere feedbacks dampen or accelerate climate change? Where should carbon traders invest? The list of critical questions goes on and on.

I suspect that I am not alone in justifying my research as an important step towards useful ecological forecasts. But basic research questions are my ultimate motivation, which makes it hard for me to claim that I am really serious about forecasting. I suspect that I am not alone here either, and that has me worried about our field making promises that we don’t intend to keep.

My current NSF project provides a case in point. The title of the project is “Forecasting climate change impacts on plant communities: When do species interactions matter?” It is ostensibly about ecological forecasting, but the real emphasis of the work is on understanding niche differences, the strength of species interactions, and the magnitude of indirect effects. These are topics that NSF review panels get excited about, but they may not be so important for forecasting. In fact, my work shows that when niche differentiation is strong, the indirect effects of climate change are very weak compared to its direct effects, which could be captured by single-species models. Similarly, Bill Murdoch has shown that single-species models work well for generalist consumers but not specialists. So it’s the species with strong, specialized interactions that will require forecasting models which deal explicitly, not just implicitly, with interspecific interactions. We may know enough right now to identify many of these special cases, things like Canada lynx and snowshoe hare or whitebark pine and the mountain pine beetle.

If I were really serious about forecasting, I would take a much different approach. Instead of studying species interactions, which may or may not be important for different species, I would focus on a factor that we know is important for all species in my semi-arid study systems: water availability. If I had good data on how much water a species needs, when in the year it needs it, and where in the soil profile it gets it, I could project the effects of future climate change on soil water availability and, in turn, on species performance. Those predictions could address changes in abundance as well as changes in distribution. My PhD advisor, Bill Lauenroth, and his current collaborators have some nice examples of this approach. Yes, information about species interactions and other conceptually interesting complications might improve these projections, but for most semi-arid plant species I would bet that a first-order approximation based on water relations would give the best predictive return per dollar of research investment.
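To make that chain concrete, here is a minimal sketch of the idea in Python. Everything in it is a placeholder: the projected soil water series, the Gaussian-shaped reaction norm, and all parameter values are invented for illustration, not estimates from any real system.

```python
import numpy as np

# Hypothetical growing-season soil water (mm) under a future climate scenario,
# e.g. downscaled precipitation projections run through a simple bucket model.
projected_soil_water = np.array([180.0, 160.0, 140.0, 125.0, 110.0])

def performance(soil_water_mm, w_opt=200.0, breadth=80.0):
    """Toy reaction norm: relative performance as a Gaussian function of
    growing-season soil water. The shape and parameters are illustrative;
    in practice they would have to be estimated from field data."""
    return np.exp(-((soil_water_mm - w_opt) ** 2) / (2.0 * breadth ** 2))

baseline = performance(220.0)               # performance at historical soil water
future = performance(projected_soil_water)  # performance under the projections
print("Proportional change in performance:", future / baseline - 1.0)
```

The arithmetic is trivial; the expensive part is estimating each species' response curve from field data, which is exactly the tedious work discussed below.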

The problem with this approach is that it’s not sexy. We already know that water is the key limiting factor in these ecosystems. A proposal to pursue this kind of research cannot promise a conceptual advance or a new way of thinking about ecology; it just promises to fill in lots and lots of details. Knowing that water is the limiting factor is not enough for prediction; we need to be able to quantitatively describe the shape of species’ reaction norms. Unfortunately, collecting this kind of data is tedious and expensive, especially at the spatial scales relevant to ecological forecasting.

The NSF programs I am familiar with do not reward this kind of work, nor do the high impact journals in our field. My impression is that the rewards and incentives are all focused on breakthroughs in understanding nature, not advances in predictive skill. Ecologists like to assume that deeper understanding will lead to more skillful prediction. That’s a common theme in the Broader Impacts section of NSF proposals. But this may be a bad assumption—witness the success of purely empirical forecasting approaches such as machine learning. Leo Breiman’s classic “Two cultures” paper emphasizes that understanding and prediction are different goals that often require different approaches. Mechanistic understanding can improve prediction in some (many?) cases, but we shouldn’t assume that it always will.
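For a concrete illustration of that second, purely empirical culture, here is a minimal sketch in Python: a random forest (Breiman’s own algorithm) fit to simulated data and judged only on held-out predictive skill, with no mechanism anywhere. The covariates and response are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Simulated stand-ins for environmental covariates and an abundance response.
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=(n, 3))  # e.g. precipitation, temperature, soil depth
y = 2 * X[:, 0] - X[:, 1] * X[:, 2] + rng.normal(scale=0.2, size=n)

# Fit on one subset and evaluate predictive skill on another; the model is
# judged entirely by how well it predicts, not by what it explains.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=300, random_state=0)
forest.fit(X_train, y_train)
print("Held-out R^2:", r2_score(y_test, forest.predict(X_test)))
```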

So far I have been arguing that NSF ecology programs do not support research with purely predictive goals that involves collection of reams of boring but useful data. Well, what about NEON? It’s a $400 million investment in ecological forecasting, right? Does this mean that the conceptually-focused culture of ecology is changing? I’m not sure. I have a hard time finding anyone who is excited about NEON, but the kinds of complaints people offer are revealing. One common refrain is, “$400 million and no hypotheses!” This represents what I am calling our traditional ecological research values, emphasizing conceptual advances over prediction. On the other hand, many colleagues I talk to about NEON don’t question its objectives. Rather, they worry that problems in the network’s design will doom it to failure. This is a concern about means, not ends, and may indicate that a portion of our discipline is willing to commit seriously to the ecological forecasting challenge.

I would never argue that we all should be working on ecological forecasting. I’m not even sure that I want to do it, and I would have no problem arguing that basic research is just as important, if not more so. My point is that we should be careful about justifying our intellectual curiosity and basic research obsessions under the guise of ecological forecasting. And those of us who do tackle the forecasting challenge should be pragmatic, prioritizing skillful prediction over mechanistic understanding.

48 thoughts on “Ecological forecasting: why I’m a hypocrite and you may be one too”

  1. Hi Peter – great post, and obviously a topic I’m thinking a lot about too right now.

    You touch on many things. I wanted to highlight two.

    1) You give the example of predicting growth of plants in arid environments, where species interactions are a second-order factor and water balance is the first-order factor. This is a nice example, and for me it highlights the success of basic research. We know what is most important and roughly how it works. We know what is second most important and how it works (and even how and when it interacts with the first factor). This to me is an impressive story: a success story, and one measured by the litmus test of whether we can predict outcomes. It nicely exemplifies my previous claim that worrying about whether we know enough to predict is a good challenge and guide for basic science.

    2) Then you raise the question about who should do prediction and how it interacts with getting funding. This is a whole separate question from #1. As you quite correctly point out, many ecologists at research universities and our main funding agency in the US, NSF, often pay the goal of prediction more lip service than true devotion. But I also think that is OK. Basic research is what got us to #1. Prediction in an applied context (that is, making specific numerical predictions for specific locations and contexts) is a different beast. Think of the split of responsibilities between physics and engineering. So to me what you’re saying is that ecology does not have a robust engineering branch*. And in the end, I think that is a failing of society, not ecologists. Society pays $100s of millions of dollars (maybe billions of $)/year for weather forecasts of the very specific kind I mentioned. And we sure pay a lot of money for an engineer to tell us whether a bridge will fall down or not (something a physicist would be terrible at even if it uses the laws of physics). We don’t pay much of anything for ecological forecasting (nor, a related issue, for monitoring). This is a deep issue involving science education, human shortsightedness, etc. As a result, what we get is a bit of fly-by prediction done on the side by basic-research scientists. Now whether that makes us hypocrites for cloaking ourselves in prediction to get funding, I don’t know. But I’m glad we do it once in a while as nobody else is bothering.

    In short, I strongly think basic research needs the goal of prediction as a metric for our success as scientists (one of the main posts of my series on prediction), but I think using these principles to make specific, place-based (and species-based) predictions is a different job. This latter is a society-wide issue although hopefully basic-research ecologists can provide some leadership towards this goal.

    A great and thought provoking post.

    *Is conservation biology the engineering branch of ecology? Some would argue yes, but I’m not sure most managers on the ground would agree; at least it’s not there yet. Or maybe academic conservation biology is the analog of academic/research engineering? Restoration ecology is arguably the discipline closest to prediction engineering today.

    • I always think it’s interesting that ecologists go to ‘conservation biology’ as their interpretation of the applied or engineering side of the field. Why not fisheries or wildlife biologists? The people who most often are applying ecological principles to active management, and whose fields predate conservation biology by decades? And who are often quite quantitative and prediction-oriented (think Ray Hilborn if you’re fishing for examples).

  2. Peter,
    What an honest and meaty post. I have thought about these issues quite a bit, especially since I’ve taken a post where I am forced to produce results that are immediately useful to partners (not an easy task for a theoretical ecologist). I’m not sure we can separate mechanistic understanding from successful forecasting in the way you suggest, for two reasons. 1) Forecasting without mechanism is a prescription for disaster (see: the financial crisis). 2) Without an improvement in prediction, how can we separate true “conceptual” advances from mere navel gazing (or finding new ways to get excited about tiny effects)? As a field, we need to keep the focus on the goal of understanding natural systems. I argue that the best way to do this is by showing that we can successfully make quantitative predictions of the effects of interventions (i.e. use mechanistic knowledge to make and test forecasts). This can be at odds with our individual incentive structure, which is to stay funded by appearing to do something sexy; a tough knot to cut.

    • Minor quibble Don: it’s not at all clear that failure to forecast the financial crisis was a matter of lack of mechanistic understanding. Indeed, there’s a strong argument to be made that at least some aspects of financial crises (e.g., stock market crashes) cannot be forecast reliably…

  3. Brian and Don both point out the way that basic research and forecasting “need” each other. This is a great point and one that I haven’t thought about much (which further underlines my lack of applied bona fides). I also agree with Brian’s call for an “engineering branch” of ecology. As Eric points out, one could argue that this already exists in our natural resource management agencies. From this perspective, it is the mission-oriented agencies, not NSF, that should fund this kind of work. And they do fund a little…but probably not anything proportional to the challenge.

    I think that wildlife and fisheries researchers have been quite successful in providing managers with useful tools based on fundamental population theory. But my impression is that most of these tools assume stationarity: If we know carrying capacity, we can predict maximum sustainable yield and lots of other more sophisticated metrics as well. Unfortunately, it’s no longer safe to assume that K is stationary. The challenge is predicting how carrying capacity is changing, which leaves the fisheries and wildlife researchers in the same pickle as us basic researchers.
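    To make the stationarity point concrete, here is a toy numerical sketch. The logistic-growth rule that maximum sustainable yield equals rK/4 is standard, but all of the numbers below are invented for illustration.

    ```python
    # Under logistic growth with intrinsic rate r and carrying capacity K,
    # maximum sustainable yield is MSY = r*K/4, taken at a stock size of K/2.
    # If K drifts (e.g. with climate), a quota tuned to the historical K can
    # exceed what is actually sustainable. Illustrative numbers only.
    r, K_hist = 0.8, 1000.0
    msy_hist = r * K_hist / 4.0      # 200: the quota under the historical K

    K_future = 600.0                 # hypothetical reduced carrying capacity
    msy_future = r * K_future / 4.0  # 120: what is actually sustainable now
    print(msy_hist, msy_future)      # the old quota overshoots by a wide margin
    ```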

    • Well, I guess I’d ask: has anyone quantified how expenditure on basic ecology by NSF compares to grant funding for applied ecology by other sources? EPA, DOD SERDP, NOAA, USGS, USDA, etc.? There’s an assumption in the initial post and Brian’s reply that the only way this work gets done is by NSF-funded basic ecologists at universities. Hence: “Now whether that makes us hypocrites for cloaking ourselves in prediction to get funding, I don’t know. But I’m glad we do it once in a while as nobody else is bothering.” Neither NSF nor those more blatantly applied funding sources or agencies may be “proportional to the challenge,” but it’s a bit constrained and disciplinary to dismiss entire related fields and agencies at ecology’s periphery.

      I take some issue with the “nobody else is bothering” basis of Brian’s reply and limited NSF-focus of the initial post, while agreeing with the overall message that prediction and application are probably tacked on and over-stated in most NSF applications for basic ecology. But I disagree with the interpretation from Brian that “Society pays $100s of millions of dollars (maybe billions of $)/year for weather forecasts of the very specific kind I mentioned. And we sure pay a lot of money for an engineer to tell us whether a bridge will fall down or not (something a physicist would be terrible at even if it uses the laws of physics). We don’t pay much of anything for ecological forecasting (nor, a related issue, for monitoring).” We do as a society support ecological monitoring and forecasting; we even do it through the same organizations we support for weather forecasting (NWS and fisheries in NOAA). There are obvious biases towards systems and species that society values, or that have a legal mandate for protection (USFWS and the ESA; EPA and the Clean Water Act), but it is something that exists.

      Now what kind of science do these agencies do or fund? Not to get into the (very diverse) weeds, but if you take issue with the work that Forest Service biologists or the USGS Biological Resources Division or NOAA Fisheries do, there’s a solution to it: train graduate students who do the work you’re calling for, and who are interested in research and management agency jobs rather than academia. Those agencies may all be slow ships to steer, but you can influence applied ecology through your own academic tree and outside of the university system and shadow of NSF support. And the better that applied ecologists are at doing applied ecology, maybe the less incentive or need there will be to over-state predictive implications in basic ecology NSF grants?

      • Eric,
        Part of my motivation for writing this post was my feeling that a lot of the research necessary for forecasting is not being done or funded. It’s not conceptually interesting enough for NSF. And although the agencies have funded a lot of applied research in the past, I don’t think you can call it “forecasting” research. Figuring out stocking rates, harvesting rates, and critical habitat needs in a stationary world is much different from figuring out how to set those policies in a non-stationary world. The agencies are now funding more true forecasting work. The Dept of Defense has supported large climate change experiments and USDA has supported a lot of CO2 work over the past decade. The new USDA programs (AFRI) are certainly targeting climate change more. But I still think these agencies would balk at funding some of the basic data collection that would be required for real forecasting (“Don’t you guys know this stuff already?!?”). I am also suspicious about how applied the research produced by these programs really is. The folks I know involved in DoD and USDA supported climate change projects are responding to pretty much the same sets of incentives and rewards that I am. They are promoted on the basis of the quality and quantity of publications, not the effect of their research on management. More on this below in response to Jeremy’s quote from Ellner…

      • Hi Eric,

        Good points, and thanks for calling me out on using limited language. First, if you don’t know my background I have spent half my career in natural resource departments (and the other half in biology departments) and have received funding from agencies other than NSF. So perhaps from this perspective I tend to use words like ecology and conservation biology more broadly than others would (i.e. to me these are all inclusive terms). I do know some people in a wildlife department would object to this and some wouldn’t (and marine sciences including fisheries is a whole ‘nother artificial split I don’t have space to get into), but in the rush of blogging between meetings I was probably using these terms in a quite general way without being very clear about it.

        As to your points, I do agree that natural resource departments and funding agencies like USDA are doing a great job on monitoring that is not being done in basic science/biology/EEB departments. And this is really important (see my next post on data). However, it would be interesting to have a conversation about whether they’re really doing forecasting in the sense of Peter’s (and my) post, and if so how well those forecasts are going. I know wildlife scientists (mostly working for state agencies) do forecasts of populations of game animals, and of course there are forecasts of fisheries populations that drive regulations. And population viability analyses are done, sometimes in universities, sometimes outside. So I am willing to stand corrected that these do occur. And maybe this is unfair since they’re the only ones sticking their necks out on even these kinds of forecasts, but many of these forecasts, especially fisheries, have been famously bad. More importantly, I would agree with Peter that these are a very narrow type of forecast focused on populations in the very short term (1-5 years from now). Forecasting the long term, or responses to land cover change, climate change, invasive species, etc., is pretty much still a 95% empty landscape (there are exceptions, but rare enough to be the kind of exception that proves the rule) no matter how broad the group of ecologists you include and how many funding agencies you include. I am curious, Eric: do you think that is fair?

      • Peter: But I still think these agencies would balk at funding some of the basic data collection that would be required for real forecasting (“Don’t you guys know this stuff already?!?”).

        -> Don’t the agencies often do (useful) basic data collection and monitoring? Forest Inventory Analysis? USGS Status and Trends? I see these datasets regularly applied by agency and academic researchers for forecasting with respect to land use change, climate change, etc. I’m not arguing that these couldn’t or shouldn’t be better, only pushing back that they exist. I chime in on this blog when I think the state of something is being a little misrepresented, albeit never intentionally by an otherwise engaging group of contributors. Do we need more/better data for forecasting? Sure; you wouldn’t see so many people making presence-only shortcuts from museum records for distribution models if this wasn’t the case. But let’s give credit to what resources do exist (and protect their funding!).

        Peter: The folks I know involved in DoD and USDA supported climate change projects are responding to pretty much the same sets of incentives and rewards that I am.

        -> I guess the difference I would offer is: shouldn’t grants to academics through those avenues (above) respond less to novelty or sexiness (as for fundamental ecology at NSF) and more to application and usefulness? You still want to publish someplace good (if you’re an academic); it just may be the Journal of Applied Ecology instead of Ecology. But I don’t think review at DoD or USDA is responding to the same grant elements as NSF, and I don’t think researchers at those agencies are as motivated or incentivized by Science/Nature/Ecology Letters/etc. publications – publishing a couple of times a year in your professional society’s journal is usually plenty to advance. But that disagreement is based on my very subjective experiences; I don’t have much to back it up and may be far off.

        Peter: I am also suspicious about how applied the research produced by these programs really is.

        -> Sure; absolutely fair. And I don’t dispute the Ellner quote given by Jeremy in the least, regardless of funding source, place of publication, etc. Something funded or published under a very applied mechanism can still be (often be?) spectacularly useless and disconnected from actual needs.

        Brian: First, if you don’t know my background I have spent half my career in natural resource departments (and the other half in biology departments) and have received funding from agencies other than NSF. So perhaps from this perspective I tend to use words like ecology and conservation biology more broadly than others would (i.e. to me these are all inclusive terms).

        -> My first comment was less on splitting hairs between ecology/conservation biology and other disciplines (I’d probably take that dig back as misjudged – albeit set off by comments like “nobody else is bothering” and “hopefully basic-research ecologists can provide some leadership”), and more on pointing out that as a society we do have ecological equivalents of weather forecasters at state and federal agencies. I guess my sensitivity on this is a perception that sometimes ecologists don’t see some of these researchers as in a shared field (and the inverse!), which can lead to slighting the work that they do, suggesting it doesn’t exist, or arguing no ‘ecologists’ are doing it. Jeremy covers the opposite problem elsewhere (applied fields disregarding basic research as useless; see below).

        Brian: However, it would be interesting to have a conversation about whether they’re really doing forecasting in the sense of Peter’s (and my) post, and if so how well those forecasts are going… More importantly, I would agree with Peter that these are one very narrow type of forecast focused on populations in the very short term (1-5 years from now). Forecasting the long term or responses to land cover change, climate change, invasive species, etc, are pretty much still a 95% empty landscape (there are exceptions but rare enough to be the kind of exception that proves the rule) no matter how broad the group of ecologists you include and how many funding agencies you include. I am curious Eric, do you think that is fair?

        -> I think it’s inaccurate, but that may be a consequence of the researchers I run with and the articles I read, although I consider there to be more than enough of them to disqualify “exception” status. You can’t swing a stick without hitting state/federal/academic researchers publishing non-NSF funded forecasts of implications of land use change/invasive species/climate change on species/communities/ecosystems in journals from Conservation Biology to Ecological Applications to Journal of Applied Ecology on down to obscure society journals. If this is as uncommon as is being sold here, how do you get papers like Elith et al. 2006 in Ecography being cited 1343 times (at present)? Mostly by people wanting to do things like forecast spread of invasive species or forecast species and community responses to climate change. Or similarly, Phillips et al.’s 2006 MaxEnt paper cited 1205 times? You’re certainly welcome to find fault with those tools, but you can’t say there isn’t evidence of people pursuing ecological forecasting, especially via out of the box methods (Maxent) that probably aren’t sexy enough (if you aren’t developing it, or developing an alternative) to have an NSF grant at their core.

        I guess I think the world is being represented as a little inverted in the comments on this post. I view NSF funding of basic research in ecology as a concession that this fundamental work is hard to support by other channels, whereas applied ecology is easier to support by a long, long list of funding acronyms (that should care about results and not sex appeal). Ecological forecasting is far from the only thing that that long list of acronyms funds, but I don’t think it’s that hard to find. So why sell a forecasting application for management in a grant to NSF? Especially when the initial blog post is predicated on the observation that most of these forecasting tools should be simpler, not more complex (i.e., omit complex ecology-based mechanisms), precluding them from NSF’s expectations for earth-shattering novelty? If you’re interested in simple, applied tools for management, there are lots of places you can go for funding, and lots of agencies and researchers who would benefit from your insights and the students you can train.

        This is a long way of getting to agreeing (I think) with Peter: if you feel guilty doing it (shoehorning a forecasting application into your NSF grant), maybe you shouldn’t be doing it. And some fault probably falls on NSF on that: looking for broader implications on grants that are (in my perception) intended to balance the favoritism already given to simpler, routine, applied work elsewhere.

      • Eric – you make good points. And I think you’re right. I made overly strong statements that aren’t supported and I need to back off them. I think what I’m really trying to get at is that our current prediction tools are woefully inadequate (more of the quality point that you raised), and that I find it frustrating that nobody seems to be doing anything about it. Government scientists (an imprecise term as not all of them are in government) are indeed, as you say, cranking out predictions, but I think the foundations/tools of the predictions are not great and the track record is not great (although again, those not making predictions at all should not be throwing stones at the glass house of those that are). Although I know there are scientists in USGS and CSIRO and NOAA and some other places doing research to advance the methods, I have a feeling the vast majority are content to use the current models without advancing them (Elith et al 2006 being cited >1000 times being a case in point). Conversely, I think basic ecologists (and NSF) do not fundamentally care about the prediction project in any meaningful way and are not advancing the field either (as Peter argued). I think this is really unfortunate for society, and really unfortunate that nobody is holding basic scientists’ feet to the fire to buckle down and tackle these fundamental issues. So in the end, I would agree strongly with your point that the different groups of ecologists need to recognize each other and work with each other more, because there is a *lot* of work that needs to get done!

        And thanks for calling me out in a civil fashion on the predictions that do exist. I definitely spoke too casually and hastily and therefore incorrectly on this.

      • This is actually something I think will be really interesting, Brian: people are making lots of ecological predictions and forecasts (1,000+ for Maxent! That’s not all ecology -some evolution papers as well- but a lot of it is for climate change/invasive species/etc). And you can bet over the next 10-50 years we’re going to be evaluating which of those predictions were right (did species invade? which species tracked climate change as predicted?), and trying to disentangle why. The method? Particular species or particular species traits? Or are they just going to fail in bulk? This is something that I anticipate people who do ecoinformatics spending a lot of time on down the road: evaluating which of our predictions were good and which were bad and looking for reasons why.

        And I do think a lot of government (and applied academic) ecologists are critical of some of the methods being used for these things, and trying to develop alternatives. For example, you can see USGS wildlife biologists pushing back against the ubiquity of Maxent and offering a maximum likelihood alternative for presence-only scenarios:

        http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2011.00182.x/full

        Similarly, there was another Maxent-related fight between USGS biologists concerned over the potential spread of invasive pythons in the southeastern United States and the pet trade/other academics who disagreed with them. The USGS invasive snake crew argued that pythons would establish throughout much of the southeastern United States, a conclusion with implications not only for management (control) but an outright ban on import and trade – going back to one of Jeremy’s recent posts, this is making a bet on your forecast! But it’s not a friendly bet between ecologists; it’s a bet with an industry or a stakeholder group’s money or livelihood that something they’re doing carries unacceptable ecological risk. So you shouldn’t be surprised when they push back! Which happened when another academic published the Maxent version of the python distribution with the typical 19 WorldClim layers and produced a much reduced prediction of its potential US range (i.e., south Florida). Which led the USGS biologists to then outline a long list of grievances with Maxent that you’d likely agree with:

        http://link.springer.com/article/10.1007%2Fs10530-008-9228-z?LI=true

        http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0002931

        http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0014670

        Anyway, I absolutely think there’s an important place for fundamental ecology to inform and improve these kinds of applied predictions – but I don’t know that Peter agrees in his original post. Or at least, the argument there seems to be that promising forecasts with applications in an NSF grant is a little disingenuous, and that what applied ecologists need is reliable, context-dependent prediction independent of mechanism or complex ecological insight (and that sounds an awful lot like Maxent to me). That’s not sexy enough for an NSF grant; I’m just offering the consolation that if it’s the kind of work you’re interested in, there are plenty of avenues willing to fund it. A final lingering question (and one I think Jeremy and Peter touch on below, although I’ve just skimmed it) is how much these kinds of predictions (above) are really needed or implemented by managers, but I’ve taken up more than enough of your (and my) time here.

  4. Question for Don and Brian (and anyone else who cares to chime in): do you see “fundamental” or “basic” research as the same as “mechanistic” research? I do think Don’s point–that we often need mechanistic models in order to make good quantitative predictions–is well taken. Particularly in a non-stationary world (non-stationarity being an issue Peter raised in the comments). But my own view, and I think Peter’s view as well (Peter, correct me if I’m wrong!), is that “fundamental” (aka “basic”, “question-driven”, or “conceptual”) research isn’t necessarily mechanistic. For instance, much of Peter’s own work on coexistence mechanisms can be seen as quite phenomenological–just estimating species’ average rates of increase when rare, or what those rates would be in a constant environment, without worrying (at least in the first instance) about the underlying mechanisms that determine those rates. Don and Brian, it seems like the sort of “basic” research that you see (rightly) as being helpful for prediction is actually the sort of “fill in the mechanistic details” stuff that, as Peter notes, NSF panels tend to find boring. Whereas the sort of conceptually-motivated work that folks like Peter (and me!) do tends not to be that relevant for forecasting purposes, whether or not it happens to be mechanistic.

    Aside: perhaps worth noting that my old post on “why do fundamental research” doesn’t list “to improve forecasting” among the answers. Although I do suggest that fundamental research has applications…

    https://dynamicecology.wordpress.com/2012/03/16/why-do-fundamental-research-in-a-world-with-pressing-applied-problems/

    • Jeremy, I agree: Not all mechanisms of conceptual interest are necessary for forecasting, and not all mechanisms (or at least not all the details) necessary for forecasting are of conceptual interest. A few years ago, I was optimistic that one carefully parameterized empirical model could be used to answer all sorts of questions, basic and applied. Lately I’ve been realizing that every question might require its own specific model, if you really want to do it right.

    • So, I would agree that basic research does not need to be mechanistic. I would also agree that much of population dynamics, where density dependence is a curve fit to the data or a carrying capacity is fit to the data, is phenomenological, but I know many people who would call that mechanistic.

      What I don’t see is that the kind of basic research that is helpful for prediction is necessarily non-conceptual. Peter’s work is a good example (and yours as well, Jeremy). Maybe the only divide I can see is that I would be really interested in the variance partitioning or the meta-analysis that asks how commonly the conceptual factor is important, or how much variance it explains, and would still call this basic research, but you might not. But to me the conceptual approach cannot stop just at “this process exists” – it has to try to go down the road of “this process is important”. It might be an interesting question whether NSF cares about this step or not, but certainly NCEAS and top journals care about it, so I think it is within the sphere of basic ecology.

  5. Worth noting a comment from Steve Ellner in an interview from last year, on what the most important question in ecology is–and how ecologists’ answers to that question are mostly bulls–t:

    “How can we contribute to improving the state of the planet, now and in the future? We have an answer – it has to do with studying or predicting the ecological effects of climate change (or some other aspect of the human footprint) – and it’s wrong. An increasing fraction of ecological research is motivated or justified with reference to climate change, but a very high fraction of that work will have no effect on outcomes or responses because there’s no actual link to policy or management. Is that the best we can do, or can we do more useful things?”

    The full interview is here: http://sarcozona.org/2011/08/31/esa-interviews-steve-ellner/

    • Great quote for two reasons: 1) The links to policy and management really are a critical missing piece, and 2) the leadership of our field has chosen to “justify our existence” based on application rather than knowledge for its own sake.

      As for links to policy and management, I’ve recently waded into the growing cottage industry of climate change vulnerability assessment (agencies are begging for information, and the lack of it makes it clear that whatever applied research we have been doing has not met the need), and one of the lessons I’ve learned is about the scale mismatch between research and management: researchers, both basic and applied, strive for generality. It’s hard to publish a paper based on what’s happening to one species or population in one very specific, unique, and small location. But most managers are making decisions about specific populations in specific places and complain that none of the research is relevant. It may be the case that we should not try to integrate climate change considerations into these fine-scale decisions, and instead should target the land-use planning process, which often spans 20-year time periods and is conducted higher in the agency hierarchies.

      As for justifying our existence, this was another motivation for me in writing the post. It would be a lot easier to accept the fact that NSF doesn’t fund and high impact journals don’t publish predictive work if the luminaries in our field weren’t signing letters to Science saying that we need to do a better job on ecological forecasting! This is the hypocrisy–as a community we are not willing to prioritize the kind of work that our leaders say is most important. Ellner is right on in suggesting that we either need to get better at linking research and management, or (this is implied) we need to get on with the business of basic research without the dubious PR campaign.

      • “the leadership of our field has chosen to ‘justify our existence’ based on application rather than knowledge for its own sake”

        Small anecdote that illustrates this point (which I think is right): The official “theme” for the ESA meeting every year is some variation on “sustainability”–sustaining a changing planet, sustaining our fragile biosphere, something like that. Which has precisely no effect on the actual content of the meeting, except for obliging those proposing symposia to wave their arms and pretend that what they’re proposing has something to do with the meeting theme. But presumably the ESA chooses a theme that emphasizes the global, applied relevance of ecology because that’s what they think the public, funding agencies, and policymakers want to hear.

        In fairness, it is *not* easy to articulate an honest case for fundamental, question-driven research that doesn’t sound self-indulgent, elitist, and out-of-touch. I did the best I could in that old post (https://dynamicecology.wordpress.com/2012/03/16/why-do-fundamental-research-in-a-world-with-pressing-applied-problems/), and I sincerely believe the case I made. I actually don’t have any problem looking myself in the mirror and believing that what I do really is worth society spending money on. But I am under no illusions that anyone not already convinced would read that post and be convinced.

  6. Is forecasting species range shifts in response to climate change one forecasting challenge that’s proven of interest to funding agencies and conceptually-oriented journals? Not really an area I follow, but I have the impression it’s a “hot” topic, with lots of debate about how mechanistic your “niche model” (lord, what an unfortunate name!) needs to be, what sort of information about species interactions needs to be included, etc. And are forecasts of species range shifts primarily of interest to academics, or are there “end users” outside of academia who want them?

    • I knew it was inevitable that species distribution models (SDMs) would come up. I probably should have addressed this head on in my original post–I think that omission is what prompted Eric to push back (Eric, if you are still following, correct me if I’m wrong). First, some easy answers to Jeremy’s questions:

      1. Yes, papers about species range shifts do make it into Science and Nature (though I am not sure these headline grabbers are of much use to managers).
      2. Yes, these projections are of great interest to agencies, especially in the context of climate change vulnerability assessments.
      3. Yes, you can get some funding to generate these projections, but probably not to collect the underlying data (more on this below).

      So how do I reconcile all this forecasting work with my pessimistic post? The first part of the answer is that I find SDMs so unsatisfying that I didn’t want to get into a long discussion about them. Even if we believed the output, and had no concerns about assumptions, techniques, accuracy, etc., information about future distribution is just the tip of the iceberg. What about changes in dominance (i.e. “habitat” if we are talking about plants)? In disturbance regimes? In ecosystem services? SDM projections are being used in very interesting ways to complement other approaches, but I am pretty cynical about their value as stand-alone products. The second part of my answer is that agency interest in these projections seems to prove how little forecasting information is available. The agencies are being pressured from Washington to do something about climate change, and the species distribution models are essentially the only quantitative tool that researchers have made available to them. I was in a graduate seminar last week with a state-level BLM employee and I wish I could describe the sour look on his face as we answered his questions about what our SDM projections really meant. It’s frustrating not to have more to offer. And even though the agencies are funding people to turn the cranks on these models, I do not think they are funding anyone to collect field data. As Brian (I think) has pointed out in a previous post, the biggest limitation of this approach is the availability of training data.

      I probably should have put this all in a massive footnote at the bottom of my post. Rookie mistake?

      • Thanks Peter, this is very helpful (and sobering) for someone like me, who is far more “ivory tower” than you or probably just about anyone else around here. I have to say, as hard as you are on yourself, I’m glad you haven’t turned your piercing gaze toward me! I’ve never even *met* someone from a state-level BLM or the Canadian equivalent. They may not like that you don’t have more to offer–but I have nothing concrete to offer at all! As I said in a previous comment, I’ve made my peace with that, but still…

        No, omitting a footnote about SDMs isn’t a rookie mistake. Posts exist to start conversations, not to anticipate all issues anyone might raise and thus prevent conversations. Man, you’re tough on yourself! If you were here I’d, I don’t know, give you a hug or something… 😉

      • Well, you did include it in there, Peter: “witness the success of purely empirical forecasting approaches such as machine learning.” I don’t disagree on the (often overlooked) limitations of SDM methods, and also think they’re just the tip of the iceberg for what we need. On meeting with agency biologists: I think we all struggle with that implementation gap. Even if you have a good, robust, important prediction for the effects of climate change on a species of conservation concern at a continental scale, what does that mean for an on-the-ground manager in a restricted jurisdiction? Usually not much. I guess in part I see more application for SDMs in the invasive species literature, because anticipating whether or not something will survive in a new region is pretty important for permitting or prohibiting its importation. But again: how much do SDM results get applied in white list or black list processes? Probably not that much.

  7. Peter – Great post, and the conversation that follows is fantastic! I like to think of myself as working on ecological forecasting, both using SDMs and single species population models. Your comments were a ‘virtual hug’ for my choice to stop worrying about not getting NSF funding for my work!

    I have received plenty of funding from state and federal mission agencies for this work, but there’s a catch there. These agencies are ‘mission oriented’, which means that they want results! Now! So I’ve learned that one has to be careful what one promises in proposals – taking funding from mission agencies is much more like consulting (doing what you know how to do) than research (if we knew how to do it, it wouldn’t be research). Most agencies aren’t interested in funding something that might not work. So doing research into the methods for forecasting has to be largely ‘on the side’. There might well be funding for such work out there, but I haven’t run into it yet. Maybe if I were a ‘proper statistician’ I could get funding from the mathematical sciences at NSF.

    On the topic of the engineering branch of ecology, I think the consensus is about right – fisheries, wildlife, conservation, and I would add pest management. The missing part of ecological engineering is community and ecosystem level stuff – what happens to nutrients in streams and so forth. I think we do a pretty good job of that on the basic level, but (to my knowledge) it isn’t getting translated upscale to where it might matter (like managing flows on big rivers). The other thing is that we don’t train these ecological engineers in engineering! We teach them how to identify, trap, count, and collect organisms, but not how to forecast future abundance/distributions and make decisions. That’s changing, but slowly.

    The last thing I wanted to say is that interfacing science with policy is hard, as has been pointed out. I’ve had the great fortune to work with Sarah Michaels, a colleague in our political science department here, and it’s been very enlightening. There’s a whole field of research related to the science-policy interface, and my sense is that ecologists are largely unaware of it. Which is fair enough – the science-policy interface people are largely unaware of ecology! Sarah and I have a paper where we tried to put these two fields together: How indeterminism shapes ecologists’ contributions to managing socio-ecological systems
    (I hope that link comes through – couldn’t find any help on embedding links in comments).

    • Yep: that is absolutely a big distinction between the two funding pathways. The expectation to explore and discover vs. the expectation to produce applied results on time (not that NSF isn’t keeping tabs on if/when/where you publish!). But I think there’s room to work on the side to advance and improve the forecasts even under those mission-oriented umbrellas.

      And on something Peter mentioned in his initial post: “The NSF programs I am familiar with do not reward this kind of work, nor do the high impact journals in our field. My impression is that the rewards and incentives are all focused on breakthroughs in understanding nature, not advances in predictive skill.” If you’re looking for reasonably high impact journals that will get excited about predictive skill, they exist: Global Ecology and Biogeography; Ecography; Diversity and Distributions; etc. In many cases they match or exceed the impact factor at Ecology, and they’re going to care a lot if your forecasting method seems better than what else is available, regardless of reason.

      • The journals you list that are excited to publish improved forecasts are all biogeography journals. Presumably they’re only excited about forecasts of biogeographical variables…

      • Sure – and if you want to forecast species interactions, I think you can find good venues at Conservation Biology / Ecological Applications / Ecological Modelling / etc. Which again (by impact factor, not to open that can of worms) shouldn’t carry any penalty at tenure or promotion. And I don’t think it’s uncommon to see papers in D & D / Ecography / etc. address species interactions and more typical ecology rather than biogeography content (though the latter admittedly dominates).

  8. Thanks for this post – gave me the push I need to actually comment (I see Brian in person every Wednesday and have admitted my reluctance to wade into the comment section on the blog).

    I pretty much agree with everything Eric and Peter have said and I’m glad to see that the more applied parts of ecology are getting some time. In my experience, there is a lot of money and interest from government and non-governmental organizations for species distribution modelling and conservation planning with and without climate change. But, as the blog post notes, there’s definitely a lack of good predictive models of how climate change will affect species, especially when you’re talking about complex systems.

    Another issue that applied biologists have to deal with is finding the sweet spot between a complex enough model (including multiple species and habitats) and one that is simple to explain and get people to use. I was recently at a meeting that presented a ‘spatially explicit climate change impact index’ for a number of species. However, when the stakeholders asked what that red (=important) dot represented, the researchers said they would have to check their notes; they couldn’t say whether that was important for wetland or forest conservation. That kind of thing turns stakeholders off using the products.

    Finally, I want to reiterate that a lot of this data is difficult to gather and that no one seems interested in funding it. I would like really good data on how, e.g. salamander populations respond to different habitats and precipitation levels but most of our current data is on rare or endangered species. For more abundant species, I’m left quoting papers from the 1970s and hoping things haven’t changed too much. Also if someone could fund databases (creation of, and data entry in), that would be really helpful too.

    • Thanks for the comments ATM, and for the anecdote about model complexity and communication. It’s consistent with the Ellner quote we keep referencing, that the science-management links may be more of a challenge than the science itself.

  9. Hi Peter, this is a topic near and dear to my heart. And when I see it discussed there is usually an explicit comment that prediction and understanding are clearly different things. While they may not be the same thing in theory, I am increasingly convinced that they are in practice. I would say (and do more often than I probably should) that the only way to demonstrate understanding is with prediction. So far, I haven’t heard convincing arguments against. Jeremy has mentioned that we can demonstrate understanding by explaining data/patterns that we already have/know about. I would still consider those predictions. A model that does a good job of predicting data we already have is only suspect if we believe that the model was, in part, designed to predict those data. And even then, if we are comparing 2 models and one does a good job of predicting past data and one doesn’t (even if the first was designed to predict those data), that is still a reasonable basis for preferring the first model.
    So, if the only way to demonstrate understanding is with prediction then shouldn’t even scientists whose primary goal isn’t forecasting be expected to show evidence that they have increased our understanding when they make that assertion? That prediction somehow gets relegated to sub-disciplines seems to ignore the fundamental role that prediction plays in demonstrating understanding. Prediction should be as important in the ivory tower as it is down in the mud and the blood and the beer. Best, Jeff Houlahan
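    A minimal sketch of that model-comparison logic, with simulated placeholder data: two candidate models are fit to one subset of the data and preferred according to how well they predict the held-out subset.

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as P
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=200)
    y = 5 * x / (2 + x) + rng.normal(scale=0.3, size=200)  # saturating "truth"
    train, test = slice(0, 150), slice(150, 200)

    # Candidate 1: a straight line, fit to the training subset only.
    c_lin = P.polyfit(x[train], y[train], 1)
    mse_lin = np.mean((y[test] - P.polyval(x[test], c_lin)) ** 2)

    # Candidate 2: a saturating curve, also fit to the training subset only.
    def sat(x, a, b):
        return a * x / (b + x)

    (a_hat, b_hat), _ = curve_fit(sat, x[train], y[train], p0=[1.0, 1.0])
    mse_sat = np.mean((y[test] - sat(x[test], a_hat, b_hat)) ** 2)

    # Prefer whichever candidate better predicts the data it was not fit to.
    print("Held-out MSE, linear vs. saturating:", mse_lin, mse_sat)
    ```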

