“Null” and “neutral” models are overrated

Recently I reviewed an interesting paper proposing a new model of X.* X is an ecological phenomenon that we’d like to understand and predict. X has been modeled before in various ways, with different models making different ecological assumptions about the factors that govern X, and different simplifying assumptions about other things. The main goal of this new paper was to develop a simple model of the effects on X of some factors ignored by previous models. All of which is fine (like I said, I found the new model quite interesting), and none of which is what prompted this post.

What prompted this post was that, in a couple of places, the authors referred to their new model of X as a “neutral” model. I admit that I wasn’t 100% clear on what they meant by this. But I got the impression that the authors felt their model had some sort of special status compared to previous models of X. Perhaps they viewed their model as a “limiting” or “baseline” case: the factors included in their model are always at work, whereas the factors included in other models might or might not be at work. Or perhaps they felt that their model should be treated as a “null” model, to be tested and rejected before we are entitled to infer that some other process, not included in the model, matters? As I say, I’m not clear exactly what they meant, and the authors didn’t make a big deal of it, so it wasn’t a huge concern for me.

But this paper is just one example of what seems to me to be a growing trend, although its roots go way back. In the wake of Steve Hubbell’s very influential application of a neutral population genetics model to ecology, ecologists seem increasingly keen to develop “neutral” or “null” models for all sorts of ecological phenomena. In practice, this usually means a simple model which omits, or sets to zero, the effects of one or more ecological factors or processes, while explicitly or implicitly retaining the effects of others, just as neutral models in population genetics set selection to zero but include (or can include) the effects of other evolutionary forces, like mutation, migration, and drift. And then it’s claimed or implied that the resulting model has some sort of special status, that it’s somehow different from other models of the same phenomenon, and so should be treated differently.

This trend kind of bugs me. Developing ecological models that omit or set to zero the effects of some ecological processes often is very useful, I have no problem with that. But I really wish we’d quit calling the resulting models “neutral” or “null” models, and treating them differently than we treat other models on which we haven’t slapped those labels.

The issue here is one of which research strategies are effective in which contexts, or for which purposes. There absolutely are contexts in which it makes sense to treat some particular simple model as a “null” model, which ought to be rejected as a first step, before we are entitled to infer the operation of any processes or factors not included in that particular model. But there are many other contexts in which that research strategy is not only ineffective, but likely to be positively misleading.

To explain why, let’s consider a canonical case in which it really does make sense to start with a null model that you will try to reject before doing anything else. In simple statistical contexts, the null hypothesis describes how you’d expect the data to look if there was nothing going on except sampling error. Sampling error is of no scientific interest. It’s a nuisance, pure and simple. If we could completely and accurately census the statistical populations of interest, we would. But unfortunately, complete and accurate censuses ordinarily are impossible, so sampling error is ubiquitous. Further, its effects aren’t always obvious or easily recognized. So in order to avoid getting fooled into seeing patterns that aren’t really there, it makes sense to first rule out the possibility that any apparent patterns in the data arose from sampling error alone. And in order to do this we need to be as sure as we can be that our null hypothesis correctly describes the effects of sampling error, and doesn’t include the effects of anything else besides sampling error. Because otherwise we will be seriously misled.
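To make that concrete, here’s a toy simulation (all the numbers are invented) of what a pure sampling-error “null” looks like: two sites with identical true densities will routinely appear to differ, just because of which quadrats you happened to sample.

```python
# Toy illustration: apparent between-site differences generated by
# sampling error alone. The density and sample size are made up.
import numpy as np

rng = np.random.default_rng(42)
true_density = 50   # same true mean density at both sites
n_quadrats = 10     # quadrats sampled per site

# Null world: any observed difference is pure sampling error.
null_diffs = []
for _ in range(10_000):
    site_a = rng.poisson(true_density, n_quadrats)
    site_b = rng.poisson(true_density, n_quadrats)
    null_diffs.append(site_a.mean() - site_b.mean())

# 95% of apparent between-site differences fall in this interval,
# even though ecologically there is nothing going on at all.
lo, hi = np.percentile(null_diffs, [2.5, 97.5])
print(f"sampling error alone yields differences in ({lo:.1f}, {hi:.1f})")
```

Only an observed difference falling well outside that interval licenses the inference that something besides sampling error is at work.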

Of course, sampling error isn’t the only possible “nuisance” in science. A “nuisance” could be any factor that, for whatever reason, is totally irrelevant to the question being asked. So in general, we can say that a “null” model is one that includes the effects of any “nuisance” processes or factors that are of no scientific interest, but just get in the way of detecting effects that are of scientific interest. Unfortunately, these “nuisances” are ubiquitous or nearly so (otherwise why would we worry about them?), and have non-obvious effects (otherwise why would we need to model them to detect them?). To be useful, the null model must correctly describe the effects of these “nuisances”, and must not include any effects of non-nuisance factors. Indeed, insofar as the null model doesn’t correctly describe the effects of “nuisances”, or includes effects of non-nuisances, it can be worse than useless. It can be positively misleading. And of course, all of this assumes that we can all agree on what’s a “nuisance”, for purposes of the question asked.

In practice, I think “neutral” models in ecology often are intended to function as “null” models in the sense just described. Which is a big problem, I think. Because can you think of any ecological model (as opposed to a statistical model of sampling error) that actually fits the description I just gave? I can’t.

For instance, all neutral (in the sense of selection-free) models of which I’m aware include the effects of other processes of scientific interest–drift, migration, mutation, etc. These processes are of interest both in their own right, and due to their interactions with selection. And further, those other processes aren’t necessarily ubiquitous; there are real-world situations in which some or all of drift, mutation, and migration are negligible. And further still, different models omitting different processes often can produce similar-looking data. This is a really crucial point. For instance, there are models with selection but no drift, mutation, or migration that produce realistic species-abundance distributions. In ecology, the world often is overdetermined, by which I mean simply that many different combinations of processes are sufficient to generate the observed data, with no one of them being necessary. When the world is overdetermined, it is a very bad research strategy to default to assuming that certain processes matter while others might or might not. So if you’re trying to understand the processes that generated your data, I don’t see why you’d ordinarily want to confer special “null” status on a model omitting any one of those processes. Not when that “null” model is simply one model among others that might have generated the data.

But at least neutral models in population genetics do in fact omit selection, while retaining drift, migration, etc. Many other putatively “neutral” or “null” models in ecology don’t even manage that. For instance, randomization-based “null” models for detecting effects of interspecific competition are infamously problematic because it’s totally unclear what effects they actually eliminate and what effects they retain. As a second example, the “mid-domain effect” is a strange “null” model that admittedly nullifies only some of the effects of environmental gradients on species’ geographic ranges. I could keep going, but you get the idea.

I sometimes see ecologists argue that one always has to have a null model. You always have to rule out “noise” before you can claim that there’s a “signal” worth studying. One problem with this argument is that it gets deployed in contexts in which what counts as “noise” is highly debatable. If by “noise” you mean, not “sampling error”, but “ecological processes that I personally happen not to be interested in”, you really should not be deploying this argument. A second problem with this argument is that it’s deployed to defend null models that the users themselves admit are imperfect, e.g., because they include effects of “non-nuisance” processes. Again, having a bad null model often is worse than not having one at all, because it’s positively misleading. In such cases, your best bet is to find some other way of addressing the scientific question of interest. For instance, back in the 1980s community ecologists famously abandoned randomization-based null models and other observational approaches for inferring the operation of competition, in favor of field removal experiments to directly test for competition.

I also sometimes see ecologists giving special status to simple “null” models on grounds of parsimony. I don’t buy that. I wonder if people who make this argument have thought sufficiently carefully about precisely what “parsimony” means and why we might care about it. (There is an extensive philosophical literature on this.) Personally, I generally don’t care about simplicity (parsimony) for its own sake. I care about the truth, or at least a good enough approximation to the truth for my purposes. And the truth, or a good enough approximation to it, might well be complicated! For instance, if the truth is that the world is not neutral, so that selection is among the processes that actually generated my data, why should I care if a simple model that omits selection can reproduce certain features of my data? Especially since, thanks to overdetermination, different “null” or “neutral” models that omit different factors often will all be able to reproduce those same features of my data. Which means you can’t argue that the factors omitted from any one of those models are irrelevant. (Too often, “parsimony” is invoked not as a substantive argument but simply as a way to shift the burden of proof.) And if you say that simpler models are to be preferred only when all else is equal, you’ve just admitted that parsimony is irrelevant in practice, since in practice all else is never equal when it comes to comparison of substantive scientific models. Bottom line: the reasons for favoring simple models over complex ones, independent of how close they are to the truth, are extremely limited at best.**

None of the above is intended as an argument against statistical hypothesis testing in ecology. Even in an overdetermined world, it still often makes good scientific sense to start by ruling out the possibility that your data could’ve arisen from pure sampling error. Traditional statistical ideas about sampling error are pretty much always relevant.

Don’t get me wrong, I know as well as anyone that all models are false, are imperfect approximations to the unknown and unknowable truth. And there absolutely are good reasons why, when trying to learn about how the world works, we might want to start by developing and testing simple models rather than starting out with more complex ones. This post is emphatically not an argument that we should aim to develop literally-true models (that’s impossible), or models that are as complex as possible! But the whole point of having a false model, or a bunch of different false models, is to home in on the particular ways in which they’re false, and leverage those falsehoods to get closer to the truth. Too often, that’s not how purportedly “neutral” or “null” ecological models are used. It’s usually a bad research strategy to set up one particular model among others as a “null”, just because it happens to be simpler than the others or just because it omits some particular process that other models include. It’s often far more useful to start with a suite of alternative models, none of them privileged with the label “null”, in order to get a sense of the range of models that might have generated the data (e.g., the recent work of Storch et al., to pick one possible example among many).

*Obviously, I can’t go into any further detail without violating confidentiality.

**As illustrated by the fact that popular statistical methods for model selection, such as AIC, are not methods for choosing “parsimonious” models. They’re not methods for choosing “simple” models, independent of how close they are to the truth. They’re not even methods for choosing models that represent some sort of optimal “compromise” between simplicity and closeness to the truth, though they’re often described that way. Rather, they are methods for choosing the model that’s closest to the truth, period. A model can be false by being simpler than the truth, or by being more complex than the truth (as in cases of “overfitting” the observed data, also known as “fitting the noise”). That, and not “parsimony”, is why AIC includes a penalty term for the number of free parameters a model has. AIC scores for alternative models are estimates of the relative Kullback-Leibler divergence between the alternative models under consideration and the unknown true model that generated the data.
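(For readers who want the formula: with maximized likelihood L̂ and k free parameters, the standard definition is

```latex
\mathrm{AIC} = -2\ln\hat{L} + 2k
```

and differences in AIC among candidate models estimate their relative expected Kullback-Leibler divergences from the unknown true model.)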

20 thoughts on ““Null” and “neutral” models are overrated”

  1. “Bottom line: the reasons for favoring simple models over complex ones, independent of how close they are to the truth, are extremely limited at best.”

    Allow me to disagree on this one (or just to point out one obvious reason): the potential of a theory for refutation (i.e. to be shown to be wrong) clearly hinges on its parsimony. If, from field data, you found that A -> B -> C -> D -> X and A -> E -> X are two equally likely models (in terms of whatever you like, AIC or the like), you might go with the second model when going experimental (i.e. manipulating the system) just because it is way simpler. This is very practical, and I guess it might be one reason why refuting simple models first is always the path chosen in science.

    (Despite this minor disagreement, I really enjoyed reading this post – this is the kind of thing that tends to irritate me now and then)

    • Thanks for your comments Francois. The notion that simpler models are more easily falsifiable or testable is indeed one common pragmatic justification for preferring simpler models. And I agree that starting with the simpler hypothesis because it’s the most easily testable can be a pragmatic thing to do. But not necessarily. For instance, ecologist Earl Werner once argued to me that one should start with more complicated hypotheses and then try to pare them down. His feeling was that, if you start with a simple idea and then have to make it complicated, you’ll often incorporate biological complexities incorrectly–you’ll misdescribe them. Earl argued that it’s better to start by trying to describe all the biological details as accurately as you can, and then later simplify as needed. I’m not saying that Earl’s approach is always the way to go, either. But it’s not true that “start with the simpler model” is always the path chosen in science. Sometimes, starting simple is the practical thing to do. But sometimes it’s impractical, just a recipe for biasing your conclusions or misleading you.

  2. Hi Jeremy,
    I usually agree with your comments (indeed, you and I are out of the same intellectual ‘school’ that derives from the rise of experimental ecology of the 1980s, Wilbur, Werner et al). However, I feel the need to channel Dan Simberloff, Nick Gotelli, Rob Colwell and others to respectfully disagree with your characterization of ‘null’ models with randomizations to examine things like community patterns as being rather useless because they have limitations in how they are structured (i.e., what goes into the species pool). Certainly, they do have limitations, and this led to much of the null model wars in the 1980s and the move towards reductionism and experimentalism. I would argue, however, that the pendulum probably swung a bit too far with regards to pairwise removal experiments and the like. In fact, while there are certainly limitations of the null model approach, I have relatively recently become convinced that they are in fact necessary for a number of the questions we are trying to address. The problem is that when we are faced with certain patterns, we automatically want to ascribe some mechanism to explain that pattern—I know I have, and in general, it’s some very deterministic/predictable mechanism. However, what we forget is that often the patterns we observe are, in fact, what we would expect from simple chance, and there is really nothing to explain beyond probability.
    I came to the realization of the fundamental importance of null models later in my career. Prior to this time, I felt much like you. And, ironically, you were one of the people who helped me to realize the problem of not using null models as part of a family of hypothesis tests. I had just given a talk at ESA on how various stresses influenced beta-diversity in my pond communities and I was making arguments about how this could help us understand the relative influence of niche versus neutral processes in the structuring of communities. While people generally liked my talk, you and Jon Shurin came up to me immediately afterwards and took the wind out of my sails by pointing out that the value of the metric of beta-diversity I was using—Jaccard’s index of pairwise dissimilarity—was in fact heavily influenced by the local diversity within the system of interest. So, you suggested you didn’t believe my results because they could have happened by random chance alone (due to the effect of those stresses on local diversity). I was dismayed and rather annoyed with you, but also realized that I had a problem. Indeed, months earlier, I had received similar feedback from Richard Condit after a talk I gave at NCEAS. Clearly, I wasn’t thinking the process through carefully enough. After about a year of banging my head and thinking to myself “how can I prove to Jeremy Fox and those like him that these effects are real”, and with the help of people smarter than me, I realized that null models were essential to solve this problem. In this case, the null model asks ‘what would beta-diversity (measured as Jaccard’s index) look like if the stress only influenced local diversity, but had no other influence on community assembly?’ Deviations from this null expectation can thus tell us something very real about how the community is structured. That doesn’t mean that the probability inherent to what the null model ‘eliminated’ isn’t interesting; it just means that if we want to ask a question about determinism, we first have to eliminate probability and ‘see what’s left’. The realization of the importance of these null models—which I’ll remind you again, you inspired—has fundamentally changed the way I think about and work on these problems. In fact, I actually owe you a huge thanks, even though you probably don’t even remember this interaction. Your inspiration that drove me to develop these null models directly led me to publish a paper for which I later won the Mercer Award, as well as to put together an NCEAS group (co-led by Nate Sanders and Amy Freestone and with many other great folks) where we developed these null models in much more detail to explore patterns of beta-diversity along ecological gradients (including a paper in Science on the latitudinal gradient in beta diversity that Nathan Kraft led).
    So, I know you had a very specific point in your post, which by and large, I agree with. However, I think your characterization of the utility of null models—even those that sparked the null model wars of the 1980s—is a bit ‘throwing the baby out with the bathwater’ (and, that saying has a lot of meaning to me right now, as a 2 month old baby is sleeping on my chest at this very moment). Clearly, there are lots of problems with null and neutral models, but much of the problem is that people are using these tools without necessarily understanding deeply what they’re doing—the same can be said for all of the other fancy stats that people love to spew nowadays. Null models have to be used intelligently, but they can be very useful for testing hypotheses. Think of the chi-square test (which is really just a null model based on random expectations). The more I dive into the influence of probability, the more I realize how essential null models are for most questions we address in community ecology. The values of many of the metrics that we measure, and ascribe meaning to, are in fact quite dependent on probability, and null models are needed to peel apart what’s really going on. Rarefaction for species richness estimates is an obvious example, but there are many others: measures of beta-diversity, trait and phylogenetic community assembly, network structure in food webs and other interaction webs, and many others all depend critically on how many species are in the mix. Comparing these metrics without appropriate null models can lead to nonsensical interpretations of differences among them—just as I had done in my ESA talk many years ago.

    • Hi Jon,

      Thanks very much for the very kind and thoughtful comments. I actually do remember the interaction you mention, though I remember it slightly differently. I recall Jon Shurin as the one who was pushing you on Jaccard dissimilarity coefficients being confounded by local diversity, whereas I think I was questioning whether niche and neutral models really made the predictions you claimed that they did. But probably your memory is better than mine, since I didn’t think about that conversation any further after we had it, whereas you clearly spent a long time thinking about it.

      I actually don’t think you’re disagreeing with me here, at least not too much. If it seems like you are, it’s probably because I wrote the post badly (wouldn’t be the first time…). The problem you were trying to deal with strikes me as basically a sampling error problem, or at least very much like one. The fact that Jaccard coefficients are confounded with local diversity is a pure nuisance for your purposes. It’s of no scientific interest. It’s just a mathematical artifact that gets in the way of studying what you’re trying to study. So you did the right thing and figured out how the Jaccard index behaves in a situation where stress only affects local diversity, not beta diversity. Much like how a statistician might have to derive the sampling distribution of some novel estimator, right?
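      For concreteness, here’s a toy version of that kind of null model (mine, not your published method, and with invented numbers): hold each site’s observed richness fixed, assemble communities at random from a hypothetical common species pool, and see how much Jaccard dissimilarity chance alone produces.

      ```python
      # Toy null model: expected Jaccard dissimilarity when stress affects
      # ONLY local richness. Pool size and richness values are invented.
      import numpy as np

      rng = np.random.default_rng(1)
      pool = np.arange(100)      # hypothetical regional species pool
      rich_a, rich_b = 30, 8     # observed richness at an unstressed site
                                 # and a stressed site (made-up values)

      def jaccard_dissimilarity(a, b):
          a, b = set(a), set(b)
          return 1 - len(a & b) / len(a | b)

      # Assemble both communities at random, holding richness fixed.
      null = [jaccard_dissimilarity(rng.choice(pool, rich_a, replace=False),
                                    rng.choice(pool, rich_b, replace=False))
              for _ in range(5000)]
      print(f"null expectation: {np.mean(null):.2f} +/- {np.std(null):.2f}")
      # An observed dissimilarity far outside this distribution suggests
      # deterministic structuring beyond the effect of stress on richness.
      ```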

      I’m all in favor of ruling out or otherwise controlling for sampling error. Using rarefaction to check whether you’ve seriously undersampled the community, and to estimate how many species truly are present, is a great example.
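      (Rarefaction is easy to sketch too, with a made-up abundance vector: repeatedly subsample individuals and watch whether expected richness has leveled off by your actual sample size.)

      ```python
      # Toy individual-based rarefaction; the abundance vector is invented.
      import numpy as np

      rng = np.random.default_rng(7)
      abundances = np.array([120, 60, 30, 15, 8, 4, 2, 1, 1, 1])
      individuals = np.repeat(np.arange(len(abundances)), abundances)

      for n in (25, 50, 100, 200):
          richness = np.mean([len(np.unique(rng.choice(individuals, n, replace=False)))
                              for _ in range(1000)])
          print(f"expected species in a sample of {n:>3} individuals: {richness:.1f}")
      ```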

      More broadly, I’m totally in favor of knowing how your response variable would be expected to behave under a wide range of ecological scenarios. Your work on the behavior of different measures of beta diversity is a great example. As another example, and just so I don’t come off as totally picking on neutral models, I think it’s very important to know that both neutral and non-neutral models can reproduce, say, realistic-looking species-abundance distributions. If you don’t know that, you’re going to make seriously incorrect inferences. In general, you always want to know “what range of models, making what range of assumptions, could have generated my data?” I think ecologists too often are wrong about how they should expect their data to look if some ecological hypothesis were true. I’ve talked about this in the past, for instance in this old post pointing out that coexistence theory does not actually predict how closely related or phenotypically similar coexisting species will be. Or this old post where I note that you don’t actually expect competition to lead to saturating local-regional richness relationships, or the lack of competition to lead to linear relationships.

      What I object to are two things. One is “null” or “neutral” models that don’t correctly capture the effects they purport to capture. That’s one of the nice things about neutral models in population genetics–they’re process-based, so they really do correctly describe the dynamics of systems in which selection is not among the processes at work. And that’s one of the bad things about null models based on constrained randomization of observed data–they’re not process-based, and so the randomization doesn’t actually eliminate all and only the effects of processes that the investigator is trying to eliminate. The second thing I object to is when someone knows that various different ecological models could’ve generated the data, and sets up one of those candidate models (usually the “simplest” one) as the “null”, which is assumed to be the correct model unless it can be rejected. That’s a very ineffective way to learn which of the candidate models actually generated the data.

      • Coming from molecular evolution, I was going to write in defense of null models, but this comment has clarified your point for me.
        In molecular evolution and population genetics, null models are very useful in my opinion insofar as they model processes which we know for sure are occurring. There is no such thing as genomes evolving without mutation and drift; this can be demonstrated. So any other parameters must explain the observations better than mutation + drift alone.
        But if in ecology you have “null” models which are not process-based, I see your point. There is a very similar situation in Evo-Devo in my opinion, where we don’t know what is a reasonable null expectation.
        Finally, a pet peeve of mine is that many people use “null” and “neutral” interchangeably, but the two are not synonyms. The null hypothesis is the neutral hypothesis only in the case where you are testing the impact of selection. You can have a null selective hypothesis or an alternative neutral hypothesis.

      • Thanks Marc.

        I think the other difference in evolution is that, for many sorts of data, different evolutionary processes generate different signals in genetic data. Which lets you develop tests like the HKA test and its derivatives for detecting effects of selection. When different processes generate different signals in the data, AND you can develop a well-justified, process-based model of all the processes that generated the data, you have a fighting chance of making defensible inferences of process from pattern. I have an old post on this. In ecology, that’s often not the case.

  3. Do spatial/area hypotheses for the latitudinal gradient in species diversity (namely the mid-domain effect) fall under the category of legitimate null models in your opinion? Learning that some researchers do not first account for these effects before accounting for others was profoundly disappointing to me as an undergraduate learning about this stuff. (Hopefully I have not just given you fodder for another zombies post/paper!)

    • Have another look at the post. As I mention briefly in passing, I think the “mid-domain effect” is a really strange and poor null model. Basically, it’s a “Narcissus effect”, to use an old term of (ironically!) Bob Colwell’s. A Narcissus effect is an artifact of the fact that constrained randomization of your observed data typically doesn’t remove all and only the effects of the process you’re trying to remove. Here, randomizing the observed locations of species’ geographic ranges while retaining their observed sizes does not eliminate all and only the effects of environmental gradients, leaving only effects of “hard boundaries”. That’s because environmental gradients affect the size of species’ ranges, not just the locations of their centroids. If you want to model how species would be distributed in the absence of any environmental gradients, but with hard boundaries that they can’t cross, you need a model like that of Connelly et al. Am Nat. They explicitly modeled the movement, births, and deaths of individuals in a homogeneous area with hard boundaries. What they found (not surprisingly) is that there’s only a really, really weak mid-domain effect (FAR weaker than Bob Colwell’s null models find, and far too weak to be worth worrying about in practice). This really, really weak mid-domain effect occurs because moving organisms “bounce off” the hard boundaries and so have a very, very weak tendency to be found towards the center rather than the edges of the available space.
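      A toy version of that constrained randomization (with invented range sizes) shows exactly where the hump comes from: keep each range’s observed size, scatter midpoints at random between hard boundaries, and large ranges are forced toward the middle.

      ```python
      # Toy "mid-domain" randomization: retain range sizes, randomize range
      # locations within a bounded domain, count overlaps. Sizes are invented.
      import numpy as np

      rng = np.random.default_rng(0)
      lo, hi = 0.0, 1.0                                # hard domain boundaries
      range_sizes = rng.uniform(0.05, 0.5, size=200)   # hypothetical sizes

      grid = np.linspace(lo, hi, 101)
      richness = np.zeros_like(grid)
      for size in range_sizes:
          # midpoint constrained so the whole range fits inside the domain
          mid = rng.uniform(lo + size / 2, hi - size / 2)
          richness += (grid >= mid - size / 2) & (grid <= mid + size / 2)

      # Richness peaks at the center simply because big ranges can only sit
      # near the middle: the Narcissus effect, built into the randomization.
      print(f"richness at edges: {richness[0]:.0f} and {richness[-1]:.0f}; "
            f"at center: {richness[50]:.0f}")
      ```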

      So I hope you find this reassuring: you had no reason to be disappointed as an undergrad! Because there basically is no such thing as a mid-domain effect that researchers need to account for! It’s an artifact of a really bad, incorrectly formulated “null” model.

  4. G’day Jeremy:

    Firstly, I think you mean “Connolly 2005”, unless you are thinking of a different paper 😉

    Secondly, I think I have to buy you a beer if I ever make it back to an ESA again. That’s the second time you’ve mentioned this paper in a blog post. I think you’re one of about 10 people in the universe who have ever read it.

    Thirdly, I agree with you that many models we think of as null or neutral don’t necessarily deserve some sort of default status where they have to be disproved before we can draw inferences that invoke other mechanisms. However, I don’t really think that about null hypotheses either. Our conclusion in such a test (e.g., the response variable Y is a decreasing function of the explanatory variable X) could well depend upon how we package this with other assumptions – are we estimating the slope of a straight line, or the rate parameter for an exponential decline? Are we assuming normal errors, or lognormal, or gamma?

    If you take the mid-domain effect, in that 2005 paper I was basically trying to capture formally (albeit imperfectly) a verbal expression of how Colwell and Hurtt (or maybe I’m thinking of the Colwell et al. TREE paper) were conceptualizing the biology behind their hypothesis – the environment varies, and species respond in different ways to it, but some environmental conditions don’t intrinsically promote species richness more than others (e.g., warm environments don’t confer higher speciation, lower extinction, greater niche diversity, etc). Alternatively, though, you could pick two other models in different papers (a book chapter in Marine Macroecology, or a GEB paper led by a postdoc working with me, Sally Keith), which also make the “no environmental gradients” assumption, but package it with different ancillary assumptions. If we want to ask the question: “What kinds of species richness gradients might we get in the absence of environmental gradients?”, we are stuck reasoning inductively from the whole constellation of models analyzed so far that share this core assumption, unless we can unambiguously identify some of those sets of ancillary assumptions as manifestly more realistic than others. From what I’ve found and also seen of others’ work on this particular topic, I’d venture to say that, when species ranges can be highly non-contiguous (species ranges can persist to either side of habitat that is unsuitable or otherwise can’t be occupied), gradients in alpha diversity tend to get eroded very fast, although gradients as estimated from overlap of geographic ranges can still be pretty pronounced.

    The upshot of this though is that I don’t think we could have concluded all this a priori. We had to put the models together and go and look, and we only did because of the conceptual model of Colwell and Hurtt. And to the extent that this whole argument has gotten people thinking about different ways of modeling the effects of environmental gradients, or their absence, on species distributions and species richness gradients (see for instance Rangel et al.’s “niche conservatism” paper in Am Nat from around 2005 or so, on which Colwell is a co-author), I think that’s a good thing. Though I concede that a lot of ink probably has been spilt unproductively in the process of getting where we are.

    Sorry for the rambling. I checked this at 12:00 “just before” going to bed, so am not at my most lucid.

    Cheers
    Sean

    • Hey, *the* Sean Connolly! Thanks for stopping by Sean!

      Sincere apologies for the misspelling, I did indeed mean Connolly 2005.

      And thanks for the great comment. It’s actually a great illustration of something I advocated for in the post, but probably not clearly enough. Instead of just building *a* simple model and treating it as a “null” or “neutral” model that must be rejected before we do anything else, build a bunch of different models making different assumptions (both ancillary assumptions, and “core” assumptions), and compare and contrast their predictions.

      Re: what you and others have learned not being obvious ex ante, sure. Although one thing that I do think was (or should’ve been) obvious from the get-go is that Colwell’s original MDE “null” model was just a Narcissus effect. At least, that was my reaction when I first read it! So that when Connolly 2005 came out, I was like “Finally, somebody actually went to the trouble of proving mathematically what’s been obvious to me all along!” 😉 But I suppose one could argue that I was wrong at the beginning to think it obvious that the original MDE model was a Narcissus effect, even if subsequent work did end up confirming my original intuition.
