Ecologists need to do a better job of prediction – part II – partly cloudy and a 20% chance of extinction (or the 6 P’s of good prediction)

So before the holidays I started a series of posts on ecologists needing to do a better job of making predictions. I argued that we should predict more, both for the benefit of applied uses and for the advancement of basic research. I also argued that ANOVA (at least as usually used) is a big blockage to a culture of prediction. Shortly after my post, guest commentator Peter Adler wrote a great post on prediction and the degree to which basic researchers are serious about making predictions vs using prediction as a front to get funding.

I have at least two more posts planned after this one (one on mechanistic vs phenomenological/statistical prediction and one returning to some questions raised in my first post about statistics).

But in this post, I want to look at a scientific field that I would argue has been the most successful at making predictions: meteorology. As Jeremy has noted in the past, one should worry when ecologists start reasoning by analogy to other fields of science instead of talking about ecology, but I have a specific goal here. I want to derive what I think are some good practices about prediction and think about the degree to which they do or don’t fit into ecology. Indeed to make it catchy and sound simple, I will boil it down to the 6 P’s of good prediction. And I will talk about how these apply to ecology.

OK – so first weather prediction. There are a number of good papers (like this and this) and even a book reviewing the history of weather prediction. Weather prediction is very different in one way – we know the laws. There are 7 equations that describe the behavior of air (see the first review paper). The problem is that they are continuous in space over the whole globe and they are chaotic. Despite this, the bottom line is that early predictions were worse than the obvious null models (tomorrow will be the same as today; tomorrow will be the same as the 30-year monthly average, i.e. climatology). Now they are way better than this for the 3-day-out prediction, also better for the 5-day-out prediction, and even the 7-day-out prediction is slightly better than the null.
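To make the null-model comparison concrete, here is a rough sketch of how a forecast gets scored against the persistence ("tomorrow = today") and climatology ("tomorrow = the long-term average") nulls. All data here are invented toy numbers (a sine-wave seasonal cycle plus autocorrelated noise standing in for real observations); the point is the scoring, not the values.

```python
# Toy skill-score comparison (invented data) of a forecast against the two
# null models: persistence and climatology.
import numpy as np

rng = np.random.default_rng(42)

# Fake daily temperatures: seasonal cycle plus autocorrelated "weather" noise.
days = np.arange(3 * 365)
climatology = 10 + 15 * np.sin(2 * np.pi * days / 365)
noise = np.zeros_like(climatology)
for t in range(1, len(days)):
    noise[t] = 0.8 * noise[t - 1] + rng.normal(0, 2)
observed = climatology + noise

# Forecasts for each day: persistence, climatology, and a hypothetical model
# that recovers part of tomorrow's anomaly (a stand-in for a real model).
truth = observed[1:]
persistence = observed[:-1]
clim_forecast = climatology[1:]
model_forecast = climatology[1:] + 0.6 * noise[1:] + rng.normal(0, 1, len(truth))

def skill(forecast, reference):
    """Skill score: 1 = perfect, 0 = no better than the reference null."""
    return 1 - np.mean((forecast - truth) ** 2) / np.mean((reference - truth) ** 2)

print("model skill vs persistence :", round(skill(model_forecast, persistence), 2))
print("model skill vs climatology :", round(skill(model_forecast, clim_forecast), 2))
```

A score above zero means the forecast beats that null; the history sketched above is weather forecasts moving from below zero to well above it.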

Reproduction of figure 4 in Simmons and Hollingsworth 2002. X-axis is year; y-axis is a correlation coefficient on air pressure deviations. D+3 is a prediction 3 days into the future.

Weather’s record of prediction is enviable both in absolute level (high correlation) and in the trend of constant improvement. If you read the histories, these improvements are a combination of three things:

  • Better computers leading to a finer resolution grid approximation to the continuous differential equations (the first models were a 3 degree x 3 degree grid and one vertical layer – modern global models are 1 degree x 1 degree with 5-7 vertical layers)
  • Some improved modelling tweaks
  • More data on initial conditions

I would argue that weather prediction has been such a success (Nate Silver’s new book on prediction also holds up weather prediction as a uniquely good success) because they follow the 6 P’s of good prediction (that I invented for a talk I gave a few months ago). These are:

  • Precise enough to be possibly wrong – Jeremy asked me in the first post what defines a prediction. And my answer was that it's not black and white, but a spectrum. Or as Lakatos said, a good prediction for testing a theory must be risky. The more risky the prediction (and also the more predictions) a theory makes, the better the test. Weather predictions are indubitably precise enough to know if they are right or wrong, making them risky. They are maybe not the most risky predictions imaginable, but there are a lot of them (like 365 a year). Now compare this with ecological predictions: e.g. predation can, but not necessarily will, induce oscillations of some kind. Not very risky! (And not very many predictions from one theory.) Who is really putting their neck on the chopping block with their predictions?
  • Probabilistic – weather forecasters do something almost no other predictors do. They put percentages and error bars in their predictions (20% chance of snow with a high temperature between 25 and 30). You might think this is an escape from the first point of being risky, but only in the short term. If you predict a 20% chance of rain and then it rains, you seemingly have an out. But not if you have 10 years of data. Then you really ought to see rain 20% of the time on days you said there was a 20% chance of rain (see the calibration sketch after this list). In fact the weather service gets this right to within a percent or two. My main point is that a good prediction includes an estimate of its uncertainty – it has error bars. Some branches of ecology do this well (e.g. PVA analyses provide ranges of extinction probability) but many branches of ecology don't.
  • Prolific data – if you look back at the figure you see the Northern Hemisphere predictions have gradually gotten more accurate. It is very hard to tell how much this is due to better computing power vs more data. But the Southern Hemisphere predictions have gotten better at a much faster rate and have now converged to being almost as good. This is almost entirely attributable to having better/more input data to the models (it's the same model and computer for both hemispheres). Weather forecasters have devoted enormous efforts to collecting data. They have more stations but also collect more kinds of data at these stations. It is impossible to get better at prediction without voluminous data! NEON in the US may be an attempt at this, but it is kind of sobering to realize that NEON wouldn't even have a sensor in every one of the 3 degree x 3 degree cells of the oldest weather models, and is nowhere close to covering modern grids of 1 degree x 1 degree (and NEON is focused on a subset of ecological data). The breeding bird surveys and forest inventories sample a little more densely, but are a very limited subset of measurements (it would be like trying to predict temperature by only measuring overnight low temperatures once a year to input into the model). We have to get *REALLY* serious about data if we care about prediction.
  • Proper scales – I find it fascinating that the early weather modelers had a very explicit sense that the most tractable problem was to focus on regional-scale pressure variation (i.e. the high and low pressure systems and the fronts). Other things like precipitation depend to a much greater degree on micro-scale processes (e.g. local convection and evaporation). What is really fascinating is that even though precipitation was probably the ultimate goal, the weather modellers followed their noses, modelled the scales that were most tractable first and got those right, and only later started trying to add in details specific to precipitation (and anybody who has lived in an arid landscape and seen how spotty rain can be knows how hard it would be to get this really right). I'm pretty sure we ecologists are not this scale-detached. We insist on modelling the scales we want answers at, not the ones that are amenable to modelling.
  • Place specific – Here is something that will be controversial. Weather forecasts explicitly reject the Robert May strategic modelling approach. They make forecasts that are specific to a time and a place and thus highly dependent on initial conditions, parameter values and specificities. And the National Weather Service pays big bucks to have local experts who look at the computer outputs and "correct" them for local idiosyncrasies that these local modellers have come to understand relating to mountains, oceans, etc. Now it might seem that in ecology we only need to make place-specific predictions on the applied side. But I would argue it is just as important for the basic research side. The main reason goes back to predictions that are precise enough to be wrong. To take the ecological prediction that I picked on (some predator-prey systems will cycle), this could be a very precise prediction if we said predator-prey systems will cycle in boreal and tundra ecosystems but not elsewhere. And I'm not wedded to place – it could be condition dependent: "predator-prey systems will cycle when there is a 30 degree difference between summer and winter temperatures" is condition dependent rather than place dependent, but it serves the same purpose. So I think even for the good of basic science, we need place-specific (or condition-specific) predictions (and of course this will make applied scientists happy too).
  • Public even when worse than random – But more important than anything else, I think weather forecasters get big credit for, and have received big benefit from, the fact that they don't hide when their models are wrong. Going way back to the first real weather prediction, which was hand-calculated by an ambulance corps volunteer (Lewis Fry Richardson) during World War I – he published his result even though it was very wrong. This making of public predictions leads to a strong culture of figuring out what went wrong and making things better. This incremental improvement is exactly what you see in the figure above. Weather predictions started out worse than null models and now are much better than null models. All sorts of factors contributed to this, but most of these factors got invoked because of the rigor of public, risky predictions on a repeated basis. This is a central theme of Nate Silver's book. But really, if you think about it, it is the central theme of good science!
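Here is the calibration sketch promised under the probabilistic P – a minimal illustration with invented forecasts and outcomes, showing the bookkeeping behind "on days you said 20%, it should have rained about 20% of the time":

```python
# Minimal calibration check (all numbers invented). For each issued forecast
# probability, compare it to the observed frequency of rain on those days.
import numpy as np

rng = np.random.default_rng(1)

# Ten years of daily rain forecasts drawn from typical forecast bins.
bins = np.array([0.0, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9])
forecast = rng.choice(bins, size=3650)

# Pretend the forecaster is well calibrated: rain occurs with the stated chance.
rained = rng.random(3650) < forecast

for p in bins:
    days = forecast == p
    print(f"said {p:.0%}: rained {rained[days].mean():.1%} of {days.sum()} days")
```

A well-calibrated forecaster's observed frequencies track the stated probabilities; the claim above is that the weather service hits this within a percent or two.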

OK – so I have somewhat presumptuously (and ponderously?) given 6 P's of good prediction. I've commented a bit along the way on how ecology is doing, but I wanted to expand the application to ecology a bit. To really assess how ecology is doing on prediction, I think there are two cultures of prediction in ecology, and their strengths and weaknesses are rather different and need to be broken out. The first culture is the one found in theoretical ecology, that finds May's strategic modelling approach inspiring and makes predictions like "predator-prey systems can have cycles" – I'll call this the strategic prediction culture. The second culture is centered in government agencies and NGOs, although it certainly extends into universities. I stuck my foot in my mouth in the comments to Peter's post by not really recognizing this type of prediction culture (which is embarrassing because I've done some of it and certainly have colleagues down the hall doing it), but fortunately Eric Larson took me to task. I'll call this the management prediction culture.

First the strategic prediction culture. I've been creating a little bit of a straw man by characterizing this culture as predicting "some predator-prey systems will cycle". That is too simplistic. But by how much? This approach really falls down on the issue of public, risky predictions. The P's of precise, place-based and public are all weak here. The goals of this group are all basic research, so I won't hold them accountable for applied relevance, but even for basic research, are these stick-your-neck-out predictions? Are they specific enough to be falsifiable? Or is there room to wiggle and say "something else was going on" every time the predictions fail? I think these predictions have also failed on the probabilistic "P" – most of these models produce no sense of error bars or degree of confidence in the prediction. I would also argue that the proper scale "P" is largely ignored. There has been very little discussion of the scales at which noise trumps signal or vice versa (and it is mostly raised by macroecologists, who feel scoffed at for raising it). Probably a mixed bag on the prolific data P. Some of these modellers care immensely about testing their models with real-world data and are hungry for more data. But a good many are not. My thought is that a more rigorous prediction culture would cause this field to advance faster, and there is a lot of room for improvement.

Now the management modelling culture. This group regularly makes predictions that are requested by and then used to inform management decisions about endangered species (listing and management), invasive species, climate change, acceptable harvesting levels, etc. How does this group do? They do make precise, public, place-specific (and species-specific) probabilistic predictions on a regular basis. This is to their great credit. They very often have no choice about the proper scale at which they are asked to model, but probably don't have a healthy enough respect for the ensuing limits this entails. And I think you would have to give this group a mixed grade on prolific data. Much of the prolific data we have (breeding bird surveys, forest inventories, etc) comes from management contexts. But management also has reams of place-specific monitoring data sitting in drawers and could probably do a better job of using its privileged position (policy makers want their predictions) to push the data agenda further. And I think one has to ask if they really accomplish the underlying goal of public, precise, place-based predictions, which is to have a critical culture of model evaluation and model improvement driven by clear model failures. This piece of the feedback loop is, I think, weaker than it should be (the slope of the line of improving prediction is rather flatter than the one for the weather forecasters in the figure above). So many predictions are for 20 years in the future and never really checked. And even the short-term predictions are last year's work, not followed up in a detailed way (unless an embarrassingly bad prediction makes it into the news). The modelling of ocean fisheries is an interesting example. It is complex, and I am not an expert by any stretch, so I would like to hear the opinions of those who are. But my impression is that while politics absolutely drove many of the decisions, one cannot escape the fact that the scientific predictions regularly underestimated the threat of overfishing and overestimated the rebound potential, thereby also playing a role in the current mess. And while one cannot use a broad brush to characterize a large population of scientists, and I know there is research on improving and fixing models, my understanding is that there is a real culture of inertia resisting change and improvement to the prediction models. My colleagues at Maine would suggest that a big part of the problem with current models is that they are at the wrong scale, but I cannot offer a strong opinion on that. So having picked on fisheries scientists for a minute, let me reverse course and reiterate that this group (and their colleagues doing similar things for deer populations, etc) are, in my opinion, closer to my 6 P's of prediction than any other group in ecology.

So, a rather long post. Three main things I would love to hear comments on – do you agree that the 6 P’s of prediction are all important and good or am I missing anything big? How do you think the strategic modelling culture is doing with prediction and the 6 P’s? How do you think the management prediction culture is doing with prediction and the 6 P’s?

32 thoughts on “Ecologists need to do a better job of prediction – part II – partly cloudy and a 20% chance of extinction (or the 6 P’s of good prediction)”

  1. Great post, Brian. I don't have a lot to add, beyond this: on "management also has reams of place-specific monitoring data sitting in drawers" – I absolutely agree, and I really think the push by people like Stephanie Hampton and Josh Tewksbury for data accessibility, data transparency, and data management is tremendously important.

    http://www.esajournals.org/doi/pdf/10.1890/1540-9295-10.2.59

    Is that only a call for fundamental ecologists? Hopefully not; professional societies on the applied side are also calling for improved consistency in data collection to (hopefully) facilitate data sharing or comparisons between studies, regions, time periods, etc. I give a (dated) example from freshwater fisheries because that’s where I work, but I assume (hope?) the same is true in other systems.

    Click to access Bonar_Hubert_2002.pdf

    Standardizing field methodologies is of course only one step; another is making those reams of place-specific monitoring data more accessible to other agencies, researchers, etc. Despite my objections on the past posts (or in their comments), understand that I share your frustration at the gap between what is available and what would be ideal. I spent a good chunk of my dissertation trying to track down 30+ years of unpublished and functionally invisible lake monitoring data for an entire state by combining data from academic researchers, county agencies, state agencies, and federal agencies. It's still a hope to eventually make the database we assembled public, but there's obviously less career incentive to do that than to just get the papers out.

  2. Great post Brian. Great, great post.

    A few thoughts:

    As you note, a big (I’d say *the* big) difference between weather prediction and most ecological situations is that meteorologists know the dynamical equations, and there are only seven of them. There are rare cases in ecology (say, certain well-studied cycling populations) in which we have what’s effectively a low-dimensional dynamical system with a strong deterministic signal in its behavior. In those cases, we have been able to figure out (and then parameterize) the equations, and thus make good predictions. Seems to me that your 6 P’s, important as they are, only come into play once you’ve gotten over this hump.

    Re: starting with tractable predictions (and following on from my previous comment), does that mean we should focus on trying to predict the dynamics of model systems in which we can figure out and parameterize the governing equations? So, systems like microcosms, mesocosms, atypical natural populations like lynx and hares…That is, only try to emulate meteorologists in systems where it’s feasible to emulate meteorologists. And quit trying to do so, or feeling guilty about not doing so, in other systems!

    Re: PVA analyses making probabilistic predictions: yes, they do. But does anyone ever check that those probabilities are right? Do populations go extinct within 100 years 20% of the time when PVAs say that they have a 20% chance of extinction within 100 years? I'm no expert on this literature, but the only experiment I'm aware of that's ever checked this found that PVA probabilities were way off. This is very nice work from David Lodge's group (I think? I'm getting old, memory cells dying…) where they just grew a whole bunch of replicate Daphnia populations in a constant environment in the lab in small culture vessels, used a typically short period of data collection to do a standard PVA, and then followed the populations to see the distribution of times to extinction.

    Re: fundamental theory making weak or unfalsifiable predictions, I think that’s unfair. The goal of fundamental theory mostly isn’t to make predictions, it’s to promote understanding. The Rosenzweig-MacArthur model may not be able to predict the dynamics of any real predator-prey cycles in nature, because it’s not a close enough approximation to the true equation governing the dynamics of any natural predator-prey system. But the Rosenzweig-MacArthur model is essential to our understanding of a key mechanism that helps generate predator-prey cycles in nature. Prediction absolutely is a valuable goal of science–but understanding *why* the world is the way it is is valuable too.

    Further, understanding often (not always, but often) is a necessary prelude to successful prediction. For instance, Robert Peters-style purely phenomenological statistical models often do a crap job of prediction in ecology. We can make better predictions if we build models that incorporate our understanding of the underlying mechanisms driving changes in the state variables. Fundamental theory and the understanding it provides also alerts us to cases where prediction is impossible. If you don’t know what “chaos” is, you might be tempted to think that, if we only had more weather data, we could predict the weather accurately months or years in advance. Fundamental theory and the understanding it provides also helps us make good predictions by preventing us from mis-specifying the problem or testing baseless predictions. Population ecologist Dennis Chitty’s memoir, Do Lemmings Commit Suicide? Beautiful Hypotheses and Ugly Facts, is a chronicle of his admitted failure, after a lifetime of research, to figure out why small mammal populations often cycle. And the ultimate reason for his failure, in my view, is that he simply didn’t understand how stochastic dynamical systems work. In his book, he talks about rejecting predator-prey cycles as a cause because he once observed a vole population that failed to crash as expected despite there being lots of bird predators present at the time (or something like that, I may be misrecalling the details). Chitty is all about testing predictions, that’s what he spent his entire career doing. But his predictions often lacked grounding–they didn’t actually follow from the hypotheses or assumptions Chitty was making. For instance, in a stochastic predator-prey system, there will in fact be occasions on which prey fail to crash “on schedule”. Modern population ecology, based on a thorough fundamental *understanding* of how and why stochastic dynamical systems behave the way they do, has made massive progress, including *predictive* progress, in the last 20 years.
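    To make the "failing to crash on schedule" point concrete, here is a rough simulation sketch (all parameter values invented for illustration) of a Rosenzweig-MacArthur model with environmental noise on prey growth: identical parameters, different noise realizations, noticeably different crash times.

```python
# Sketch (illustrative parameters only): in a stochastic predator-prey model,
# replicate runs crash at different times, so one prey population failing to
# crash "on schedule" does not by itself falsify the predation hypothesis.
import numpy as np

rng = np.random.default_rng(7)

def rosenzweig_macarthur(T=200.0, dt=0.01, sigma=0.1):
    """Euler simulation with multiplicative noise on prey; returns prey series."""
    r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.4, 0.6, 0.4  # chosen to give limit cycles
    n = int(T / dt)
    N, P = 4.0, 1.0
    prey = np.empty(n)
    for i in range(n):
        f = a * N / (1 + a * h * N)                 # type II functional response
        dN = r * N * (1 - N / K) - f * P
        dP = e * f * P - m * P
        N = max(N + dN * dt + sigma * N * rng.normal(0.0, np.sqrt(dt)), 1e-6)
        P = max(P + dP * dt, 1e-6)
        prey[i] = N
    return prey

# Time of the first prey "crash" (prey falling below 10% of K) per replicate.
crash_times = []
for _ in range(20):
    prey = rosenzweig_macarthur()
    below = np.nonzero(prey < 1.0)[0]
    crash_times.append(round(below[0] * 0.01, 1) if below.size else None)

print("first-crash times across replicates:", crash_times)
```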

    • Hi Jeremy – lots to respond to.

      Weather is governed by known equations but: a) it is not really low dimensional because it is spatially explicit – in fact it is estimated that the initial conditions for the model require 10^7 data points! b) the 7 equations are known to be overly simplistic (e.g. a frictionless fluid is assumed). There is room for many more equations if they wanted them – it is the incremental approach of starting simple and adding when required that I admire. They may have it easier in some ways, but I'm not so convinced these are really the key part of their success.

      Definitely not opposed to theoretical or mechanistic underpinnings to prediction but not adamant about them either. Maybe we should save this topic for my next posting which is exactly on this issue.

      OK – Rosenzweig-MacArthur was a great advance in understanding when it was published in the 1960s. How much have we used this understanding, with 40+ more years, to now make predictions? Ecologists seem to me very happy to wallow in this vague cloud of claimed understanding, but it will never get precise unless we push ourselves to make predictions. Or in other words, how good is your understanding really if you can't at least stick your neck out to make a prediction into the future that has some moderate chance of being right? Philosophers "UNDERSTAND" things. Scientists seek over time to understand things well enough to model and make quantitative predictions. At least that's my take. This, I think, is my whole posting theme in a nutshell. And it seems to me, judging by your last sentence, that you don't necessarily disagree.

    • Oh – and yes – I’m on board with focusing on predictions in model systems for now (and maybe the next 100 years). At least for the basic researchers.

      • Ok, so now that you’ve given me the opening, I’m going to try to bait you into offending 95% of the people reading this blog: 😉

        Is the ultimate problem here that ecologists, at least the basic researchers, just wouldn’t enjoy doing what would need to be done in order for us to make more and better predictions? They’d find it boring or uninteresting or just no fun? Is the problem that, ultimately, most people just don’t feel like working in the few model systems that are tractable for the sort of predictions you argue for, or else don’t feel like doing the sort of work needed to make their system into a predictively tractable one? Instead, people mostly just do what they enjoy, and then find a rationale for it that others will hopefully find compelling?

        If that’s the ultimate issue here, is that a bad thing? (I don’t know that it is) And if it is bad, what’s the alternative? Presumably taking away most basic research funding and putting it towards contract work–soliciting proposals to build validated predictive models of specified variables in specified systems?

      • Jeremy – thanks so much for the opportunity to offend 95% of people!

        I’m only going to go half way though. Here’s my opinion.

        As a field, ecology needs to have people who are driven to push to prediction, or as a field we do not advance. I don't agree with taking away funding for basic research or turning everybody into an applied scientist. Other fields (physics, chemistry) have people who span from wild imaginations about string theory to rigorous predictions in the basic sciences to engineering. Ecology does not seem to fill the same range. This is a MAJOR problem in my opinion. But the solution is not obvious. You cannot force everybody to go there (and wouldn't want to if you could). Ultimately, I suspect the shortcoming is the quantitative deficit in ecology. A quantitative nerd like me just isn't happy until I create new numbers (aka predictions) and see if they pan out. But there are a lot of people who don't need to scratch that itch.

        So far so cautious. Let me get a little more offensive so you're not totally disappointed, Jeremy ;-). One could also attribute this deficit to ecology being full of people who are lazy, less altruistic, or just communing with their favorite warm cuddly organism, etc. I suspect that's there, and we need to have a candid conversation about it and root it out because that's not ultimately how a field progresses, but it is definitely not all (or probably even most) of what is going on. So beyond that, I don't have the answers. All I know is that ecology needs to find a way to get 30% of its people with that quantitative prediction itch, or we're not going anywhere!

      • Probably best that you didn’t entirely rise to the bait Brian. As Peter notes, ecologists certainly do follow the money just like other scientists. If NSF panels really wanted to fund truly predictive work, there’d be a lot more truly predictive work proposed, even without any shift in the composition of the population of academic ecologists towards a greater proportion of quantitative forecasting types.

        No, ecologists aren’t lazy or selfish! And while probably a fair number just like their favorite organism, is that really so different from what you or I do? You like communing with numbers. It’s not that you’re sucking it up and trying to do quantitative predictive work that you don’t enjoy because you’re convinced on a purely intellectual level that that’s what ecology needs. Similarly, I like working in microcosms (not because I like protists, but because I like working in a system where one gets clean answers and gets them fast; it’s immensely satisfying to me). So if you say you want to see more quantitative predictive work, or we both say we want to see more work in tractable model systems, we’re arguing that ecology just so happens to need more of the sort of stuff that we personally like. Which might be a happy coincidence for us. The fact that, as Peter has noted, ecologists all pay lip service to prediction suggests that even folks who don’t enjoy doing predictive work agree at some level that they ought to be doing it. But it might not be a happy coincidence for us, it might just be us finding post-hoc justifications for stuff we’d be doing anyway.

  3. Great post Brian, looking forward to the follow-ups.

    Would be interested to hear if you have the same enthusiasm for climate predictions, which, after all, use more or less the same models, but on a time scale more comparable to ecology.

    • Reminded me of a talk I just posted on http://theoreticalecology.wordpress.com/2013/01/09/lessons-from-meteorological-and-climate-models-for-predictive-ecology/

      If I remember correctly, Tim Palmer mentions there, among other interesting things, that weather forecasting is essentially an initial value problem: forecast uncertainties arise to a large extent from our inability to constrain uncertainties on the initial values of the atmosphere. Climate, by contrast, is a forced (boundary value) problem, where uncertainties arise from our uncertainty about boundary conditions (radiative forcing, feedbacks with other compartments of the earth system), in addition to the uncertainty about the description of atmospheric processes that is of course present in both cases. I'm not sure whether ecological problems tend to be more of the former or the latter type; if we're unlucky we might be in the middle of both for many questions, with initial values of population sizes, distributions etc. being as important as boundary values and processes.

      Another random thought: there was a special issue in Phil. Trans. R. Soc. B. on predictive ecology last year, might provide some interesting examples as well http://rstb.royalsocietypublishing.org/content/367/1586.toc

      • Thanks Florian (and I enjoyed your post).

        I'll talk a little about climate forecasting in my next post. But absolutely – the record in climate forecasting is not as good as in weather forecasting (there seem to be some practical limits where the method used just will not go past 7-10 days, maybe ever). It is interesting that even the very first forecaster (the aforementioned ambulance corps person) foresaw this limit.

        And you make a nice point – if you look at the framing from my scaling-up post you can write down X_{t+1} = F(X_t, θ). Then for prediction (looking at X_t for t in the future), we need to know the initial conditions (X_0), the parameters (θ) and the model (F). In weather forecasting, F and θ are known fairly well (at least the first-order model – there has been active research on small improvements). It is a question of figuring out X_0 (the initial conditions).

        Now in ecology, I think we have active research on F, X_0 and θ. Unlucky, as you say. Or I prefer to think of it as rising to a great challenge 😉
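        To make that concrete, here is a toy sketch (my own invented example using the Ricker map, not anything from weather modelling): even with F and θ known exactly, a 1% error in X_0 wrecks the forecast within a few dozen time steps once the dynamics are chaotic.

```python
# Toy illustration (invented example): sensitivity of X_{t+1} = F(X_t, theta)
# forecasts to the initial condition X_0 when F is chaotic.
import numpy as np

def ricker(x0, r=3.0, steps=40):
    """Ricker map x_{t+1} = x_t * exp(r * (1 - x_t)); chaotic for r around 3."""
    x = np.empty(steps)
    x[0] = x0
    for t in range(steps - 1):
        x[t + 1] = x[t] * np.exp(r * (1 - x[t]))
    return x

truth = ricker(0.5)            # the "real" trajectory
forecast = ricker(0.5 * 1.01)  # same F and theta, X_0 off by just 1%

for t in (5, 10, 20, 30):
    print(f"t={t:2d}  truth={truth[t]:.3f}  forecast={forecast[t]:.3f}")
```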

  4. Pingback: Lessons from meteorological and climate models for predictive ecology « theoretical ecology

  5. Hi!

    A fantastic series of posts and comments – I can't wait to read more. I agree that in basic ecological research we somewhat have an anti-prediction culture, but is that because we don't want to predict (as Jeremy and Brian seem to suggest) or because doing it requires so much time that obtaining funding and building a career on it is difficult? Meaning the anti-prediction attitude is externally imposed on us?
    Here are two pieces of predictive work in consumer-resource theory where F and θ were known very well, and also X_0 to some extent. It took both groups of researchers ca. 20 years to get there.

    Click to access 610.full.pdf

    Click to access 1743.full.pdf

    This work also suggests that, in contrast to what Brian seems to say, the predictive power AND the explanatory power of consumer-resource (or predator-prey) theory are actually quite good. Probably because we have a solid understanding.

    best,
    Arne

    • Thanks Arne for the examples. I agree, we can do a good job of prediction when we try. And as you note, the same theory I was picking on (basic predator-prey equations) can lead either to weak or to risky, strong predictions. It's really on us.

    • Arne's comment encapsulates what I was trying to get at in my post about prediction: I worry that the current career incentives and funding constraints in our field don't promote this kind of predictive work, basic or applied. Although we all have our own pet obsessions and preferences in research, I'm certain that offering more rewards – funding or prestige – would tempt more talented people into tackling these challenges.

      Love the post Brian, looking forward to the next.

      One more comment about monitoring data sitting in management agency drawers: I know from painful experience that some of it is useless due to changes in monitoring protocols over time or just poor sampling technique. It can be very frustrating to try to resurrect.

      • I've found in working with certain historical data sets that, in the process of trying to milk all the information I could from the data, I've learned a lot about statistics and quantitative analysis in general. I would argue that the development of quantitative methods tailored to a data set deserves much more attention than it gets. Historical data has a value like no other – if you can extract meaning from it.

  6. Jeremy – we've exceeded the 3-level nesting for comments – so I am starting a new comment here. Although I can't disprove it – my claim is for others to judge – I certainly feel like I am saying something more than "you should do the kind of science I happen to like". I am claiming that good science that helps a field advance has, as a central part of the mix, risky quantitative predictions which are tested, with failures leading to refinements of theory (and not just in the applied world). Why other fields of science have this and we don't in ecology is a complex issue and, as you say, more a problem of the mixture of people than of some people doing the wrong stuff.

    Jeremy & Arne & Peter all address the incentives coming from NSF. And I agree. But it is important to remember that NSF panels are mostly made up of … us ecologists. In the end, in a collective sense, we give ourselves grants. Other fields strongly expect and reward prediction. One of my college roommates went to grad school in physics. There were some very specific theoretical predictions about what would happen within half a degree of absolute zero (Bose-Einstein condensates) that had never been tested. He, his professor and the entire lab, and another competing lab, were pulling in major NSF bucks just to do the empirical work to test the prediction, because physicists considered that really important. If ecologists valued prediction and testing, prediction would be rewarded with grants. So all of these comments about NSF really just equate in my mind to saying ecologists collectively don't value prediction.

    I think the first step in ecology is to convince people that the prediction agenda is important. If this battle is won, then hopefully the mix of people, the training and the grants we give ourselves will start to change.

    • I know you didn’t mean to argue that everyone should just do the sort of science you happen to like. As I said, it could well be that we really do need more predictive science of the sort you’ve argued for so well, and it’s just a coincidence that that just so happens to be the kind of science you like. I just wanted to note the coincidence, because in other posts I’ve been quite tough on other ecologists for suggesting what I consider to be weak post-hoc rationalizations of their scientific choices. It’s only right that we subject ourselves to the same scrutiny!

      As we've discussed in other threads, NSF panels don't value certain sorts of prediction, like forecasting the future abundances of the species at some particular site. But they certainly do value testing of a priori hypotheses. So I'm not sure your example of testing the predicted behavior of Bose-Einstein condensates at 0.5 degrees Kelvin is the best example of the sort of thing that gets funded in other fields but not so much in ecology. That sounds to me like exactly the sort of hypothesis-testing work the NSF ecology panel likes!

      But your broader remark that NSF panels are us is well-taken

      • Hi!

        A remark*: the work I cited in my comment was actually done by scientists who were just doing what they loved and who were driven by an urge to understand things. However, they were also strongly application-oriented right from the start. Bill Murdoch started working with California red scale because he wanted to understand consumer-resource interactions and because he wanted to get a handle on this economically important pest (http://press.princeton.edu/titles/7569.html). Cheryl Briggs is motivated by the extinction threat to amphibians worldwide. Andre de Roos and Lennart Persson are concerned with the effects of ontogenetic individual development for populations/communities, but also with the sustainable exploitation of natural resources, especially fish populations (http://press.princeton.edu/titles/9915.html).

        There are several points I think one can make. First, I don't think there is a divide between basic and applied ecology – or better, the divide is an artificial one, created by cultural differences between schools and funding structures. The same goes for differences in prediction-versus-explanation attitudes, which, according to the posts I read here, are strongly associated with an applied or a basic orientation. Applied ecologists do basic research, they just don't call it that: many data from, for example, restoration/conservation projects could be used to test ecological concepts (alternative stable states, coexistence theory, invasion ecology, source/sink dynamics) by making predictions based on these concepts, even quantitative predictions. But these data are often not gathered or made available because funding structure and cultural attitude prevent this. And basic ecologists do applied stuff, and again don't call it that, by making predictions with their mathematical models for real-world systems (see my examples), but they rarely publish in applied journals or go to the relevant conferences. Again, funding issues and attitude. A stronger collaboration between these two schools and a willingness to understand each other would be very helpful and efficient. Robert Cabin makes this case in his book "Intelligent tinkering: a personal account of the science and practice of ecological restoration".

        Second, the people mentioned above could predict things because of their detailed understanding of the underlying ecological processes. They have the correct explanations for the patterns they see. Moreover, because of this mechanistic (and, I like to point out, general) understanding it is now possible to suggest measures for reaching a given management objective. This is something that a purely statistical, prediction-oriented approach can never give you (prediction in the sense of what will come next). Future patterns in species distributions, exploited fish stock dynamics or disease spread may be predicted well by simple autocorrelations and other statistical functions, but these cannot tell you why what comes next actually comes (which I think is another important objective of predictions) or what to do.

        Third, funding structure: I predict that NERC, VR, DFG, ESF and other European research councils will soon follow suit and jump on the bandwagon of predictive ecology. And since, as Brian pointed out, it is us who do this, it is up to us to stop that. Because it is dishonest, as Peter said in his post. It is not possible to do the type of predictive work I have in mind (good predictions with good explanations) within the duration of a typical grant. It takes many such grants instead. And if people get these grants they actually will make and test predictions. If research councils are really serious about it, they should fund longer projects and/or give more money to applied projects to gather meaningful data and/or fund collaborations between applied and basic ecologists, instead of making us pay lip service to stuff we can't do.

        Fourth, attitude: while I make a case here that some ecologists stick out their necks and do predictive work by doing what they love, I agree that there are problems in how it is done and how frequently it is done. There are issues like an obsession with ANOVA and p-values, categorical testing instead of process-oriented surface models (see Brian's first post), the non-probabilistic framework of many mathematical models, and the absence of model-outcome checking/refinement based on good data and precise predictions (this post of Brian's), which hold our field back. My examples (another is Edward McCauley's work on Daphnia-algae dynamics) are maybe beacons, but they are published well and will, together with posts like Brian's and Peter's, hopefully change things.

        Overall, I don't think we need to stop doing what we love; just let's do it differently.

        Best,
        Arne

        *OK, this turned out to be a long comment. I just started commenting on blogs and I really enjoy this. But can you do me a favour and let me know when I cross some lines or do things badly? I have read Jeremy's guidelines on how to blog and comment (https://dynamicecology.wordpress.com/2012/09/22/comments-on-comments/), but direct and more personal feedback is always better. That is why I am sceptical about MOOCs (https://dynamicecology.wordpress.com/?s=education+music+industry&submit=Search).

      • Thanks Arne (as far as I'm concerned, as long as you're saying new, interesting, relevant things – which you are here – there is no line to cross). At some point you have to worry about whether anybody will take the time to read it, but I don't think you're there either. To me the main reason I blog is because I learn from the discussion, so thank you for taking the time.

        I agree with most of what you say. I see applied vs basic as a spectrum, not a divide. There are people at either end doing things that are pretty much 100% applied or basic, but there are plenty of people located somewhere in the middle of the spectrum. I'm not familiar with Cabin's book but it sounds interesting. You (like Jeremy) touch on mechanistic vs statistical prediction, which is an important enough topic that I am planning a whole post on it, so I will save my thoughts for there. It will be interesting to see how things change in ecology. Certainly NEON has put a stake in the ground for prediction, although since they haven't turned on yet, it remains to be seen if they are serious or just cloaking themselves in a fundable concept, as Peter's post points out many ecologists do.

        Thanks

      • Arne: sorry I missed this earlier (yes, sometimes I can’t keep up with my own blog), but this is a very perceptive comment. Will be interested to see if your prediction at the end comes true. And judging by what you’ve written here, you certainly don’t need advice from me or anyone about how to comment! 🙂

  7. Outstanding post Brian, both in terms of choice of topic and explication. Covers many topics that I think about and think are absolutely critical. Many thanks indeed.

    The most important of these Ps in my mind is prolific data. I think the development of the data assimilation system in weather forecasting has to rank among the most important methodological developments in all of earth/environmental science. I don't see any way they could have attained the accuracy they have without this constantly iterative procedure they go through to assimilate new observations and re-forecast. They should be very proud of that way of doing things, and it should indeed serve as a model for how to proceed in other environmental sciences.

    Exactly how that translates in ecology I don’t know but the system that appears to come the closest to it, in terms of ground-collected data, is the FIA forest inventory system in the US, with its rotational inventory schedule in each state. [But I’ll disagree on the thoroughness of the data collected–it’s very good IMO; you can do a *lot* of things with that data]. Satellite imaging also qualifies but that’s a different beast designed for a generally different set of questions at different spatial scales.

    On "proper scales" I think of that as a hierarchy issue. I don't doubt that the weather forecasters were looking to what was most tractable first, but I'm also guessing that they must have had an eye towards hierarchical relationships in the atmosphere (I don't know, I'm postulating). Yes, precipitation undoubtedly depends on evaporation and convection (and aerosol loads and topography etc), but it's also true that large-scale pressure systems provide the boundary conditions on regional precipitation regimes – e.g. you don't get precipitation under high pressure. You need to get those large-scale boundary conditions modeled right first, and then start piecing in the smaller-scale precip events that are more difficult to predict. Or perhaps this was your point?

    One of my favorite books in grad school was "A hierarchical theory of ecology". I think the weather modelers had this sense of a definite hierarchy of cause and effect, and I think this way of thinking absolutely has to be part of any real advancement in ecological prediction and truly valuable theorizing/modeling.

    On the need to be precise and place-specific, I've never been of any other mindset. I don't know how science can advance any other way, and I am dubious of the final usefulness of non-specific approaches, such as May's – I don't see how they are necessarily any more "strategic" than more specific models are.

    • Thanks Jim. Just to clarify, I wasn't knocking the US FIA. I use it myself. But it contains only counts and sizes of trees. There's a lot of biological information missing (I'm thinking mammals, birds, insects, worms, etc., and even in trees leaf damage, phenology, etc). Weather forecasting improved significantly as stations started adding relative humidity, radiosonde balloons, etc. We ecologists need to go for depth as well as breadth – not that I'd expect you'd disagree.

      • No, I know you definitely weren't knocking FIA data. Yes, it's definitely only designed to be a vegetation/plant inventory – but within that context it is very good and thorough IMO, compared to what else is out there. Phenological observations would be too problematic – too timing-dependent, which would not work logistically with the field crews. And phenology is amenable to remote sensing inventory as well.

        Definitely agree with your last point.

  8. Pingback: The road not taken – for me, and for ecology | Dynamic Ecology

  9. Pingback: Answers to reader questions: part I | Dynamic Ecology

  10. Pingback: Ecologists need to do a better job of prediction – Part III – mechanistic or phenomenological? | Dynamic Ecology

  11. Pingback: Ecologists need to do a better job of prediction – Part IV – quantifying prediction quality | Dynamic Ecology

  12. Pingback: In praise of exploratory statistics | Dynamic Ecology

  13. Pingback: Friday links: 7-year postdoc, every major’s terrible, citation Matthew Effect, KILL BLOGS WITH FIRE, and more | Dynamic Ecology

  14. Pingback: In praise of a novel risky prediction – Biosphere 2 | Dynamic Ecology

  15. Pingback: Why ecology is hard (and fun) – multicausality | Dynamic Ecology

  16. Pingback: Prediction Ecology Conference | Dynamic Ecology
