Katia Koelle delivered the opening talk in the Ignite session on “theory vs. empiricism” at the ESA meeting.* I thought she raised several interesting issues that weren’t really touched on in the rest of the session. I was struck by one remark in particular: that theory in ecology is dying, or at least going out of fashion, and is being replaced by modeling.

Theory here means trying to discover or derive general principles or laws–the fundamental simplicity underlying and unifying the apparent polyglot complexity of nature. Think of evolution by natural selection, the laws of thermodynamics, general relativity, MaxEnt, and statistical “attractors” like the central limit theorem and extreme value theory.

In contrast, modeling here means building a mathematical description of some specific system, in order to explain or predict some aspect of that system. The model need not include every detail about that specific system, but it is tailored to that system. So there’s no hope or expectation that it will explain or predict any other system (though importantly, there could still be commonalities or analogies with other systems). Think of global climate change models, or models of various cycling species, or Meg’s award winning work on host-parasite dynamics in *Daphnia*.**

If you want to personify the contrast: John Harte is a theoretician. Tony Ives is a modeler.

Hopefully it goes without saying that both theory and models are hugely valuable in science (indeed, both John and Tony note this in the links above). But there’s much more that can be said about the distinction (and I do think it’s a real distinction, or at least two ends of a continuum). Here are my thoughts (strap in, there’s a lot of them!):

- I think Katia and Karen are right that modeling is the hot thing right now in ecology, while theory’s not, except for a small number of theories that are hot because it’s possible to treat them like models in the sense that you can fit them to data (e.g., MaxEnt). I think John Harte gets at one big reason why in the interview linked to above: advances in software and computing power mean that it’s now easier than ever to simulate complicated, analytically-intractable models, and to fit those models to data using computationally-intensive statistical approaches. Water flows downhill, following the path of least resistance, and so does science. If X becomes easier to do than it used to be, people are going to do more of X. Which is a good thing, at least up to a point. I mean, if there was something we wanted to do more of, but couldn’t because it wasn’t technically feasible, then surely we ought to do more of it once it becomes technically feasible! The danger, of course, is that people start doing X just because it’s easy (never mind if it’s the right thing to do), or because it’s what everyone else is doing (a bandwagon). There’s a thin line between hammering nails because you’ve just been given a hammer, and thinking everything is a nail because you’ve just been given a hammer (or thinking that, because you’ve just been given a hammer, the only thing worth doing is hammering nails). There’s an analogy here to adaptive evolution. The direction in which a population evolves under natural selection depends both on the direction of selection, and on the genetic variance-covariance matrix. The “direction of selection” in science is what we’d do if we were unconstrained by technology, time, money, or effort. The “genetic variance-covariance matrix” is the constraints that define the paths of least resistance, and the intractable dead ends. The art of doing science well is figuring out the optimal “direction of evolution”, balancing what we’d like to do and what’s easiest to do.
- I think the trend away from theory and towards modeling in ecology is a long-term trend. See for instance this essay from the early ’90s from Jim Brown, arguing for the continuing value of theory (well, maybe; more on that in a second), and the response from Peter Kareiva, arguing that ecologists need to get away from general theories and move towards system-specific modeling. I think Kareiva’s point of view is winning. As evidence for this, recall that in recent decades, the most cited papers in ecology have not been theory papers, in contrast to earlier decades.
- That Kareiva essay gets at another reason why I think modeling is ascendant over theory in ecology: theory often is hard to test. It’s not merely that lots of different theories tend to predict the same patterns, so that those patterns don’t really provide a severe test of any of the theories, although that’s often part of it. It’s also that, because theories aren’t system-specific, they’re often hard to link to data from any specific system (and all data come from *some* specific system or systems). How do you tell the difference between a theory that “captures the essence” of what’s going on but yet doesn’t match the data well because it omits “inessential” details, and a theory that’s just wrong? The link between theory and data (as opposed to model and data) often involves a lot of hand-waving. And while I do think there’s such a thing as good hand-waving, so that “good hand-wavers” are better at testing theory than bad hand-wavers, I admit I can’t really characterize “good hand-waving” except to say that I think I know it when I see it.
- If the previous two bullets are right, then that means ecologists are getting over Robert MacArthur. That is, they’re getting away from doing the sort of theory MacArthur did, and trying to test theory in the way that MacArthur did (e.g., by looking for a fuzzy match between qualitative theoretical predictions and noisy observational data). On balance, and with no disrespect at all to MacArthur (a giant who helped invent ecology as a professional science), I think that’s progress. But I’m not sure. Maybe it’s progress in some respects, but retrogression in other respects, with the net result being difficult or impossible to calculate? Brian for one seems to have mixed feelings. On the one hand, he has called for mathematical descriptions of nature to start “earning their keep” more than they have (e.g., by making bold, quantitative predictions that are testable with data). Which would seem to be a call for more models and less theory. But on the other hand, he’s also lamented that ecologists seem to be running out of big theoretical ideas. And Morgan Ernest has expressed mixed feelings about how we’re becoming more rigorous but less creative, better at answering questions but less good at identifying questions worth answering.
- As Tony Ives notes in the interview linked to above, being a modeler as opposed to a theoretician doesn’t mean just becoming a mathematical stamp collector and giving up on the search for generalities. Because there often are analogies and similarities between apparently-different systems. One way to model a specific system is to recognize the ways in which that system is analogous to other systems. See this old post for further discussion, and this excellent piece for a discussion in a related context.
- It’s tempting to think that the divide between theory and models might have cultural roots, much as the divide between theory and empiricism ultimately is cultural. Perhaps it reflects a cultural divide among mathematicians between theory builders and problem solvers.*** Maybe theoreticians in ecology are really mathematicians or physicists at heart, while modelers are biologists or engineers at heart. Maybe theoreticians care about simplicity and elegance, while modelers revel in complexity. Maybe theoreticians care about fundamental questions while modelers care about practical applications. But I’m not sure. For instance, in that interview linked to above, theoretician John Harte talks about the value of theory (as opposed to models) for conservation, and for getting policy makers to take ecologists seriously. He also talks about how important it is to him to do field work and to get out in nature. Conversely, Ben Bolker is a modeler rather than a theoretician, but in describing his own motivations he talks about loving the ideas of physics and mathematics and being only loosely anchored in the natural history of particular systems. So I’m not sure that the divide here is a cultural one; it might be more of a personal, different-strokes-for-different-folks thing. And in any case I hope it’s not cultural, since cultural divides are pretty intractable and tend to give rise to mutual misunderstanding and incomprehension.
- That linked piece from the previous bullet on the two cultures of mathematicians suggests that there are areas of mathematics where you need theory to get anywhere, and others where you need modeling to get anywhere. That’s a fascinating suggestion to me–do you think the same is true in ecology? For instance, to use John Harte and Tony Ives as examples again, maybe you *need* theory to make headway in macroecology, as John Harte has been doing in his MaxEnt work? While maybe you *need* modeling to make headway on population dynamics, as Tony Ives has been doing?
- The difference between theories and models isn’t always clear. For instance, is the “metabolic theory of ecology” a theory? I’m honestly not sure. The core of it–West et al. 1997–looks like a model to me. For instance, it’s got a pretty large number of parameters, and it’s got different simplifying assumptions tailored to circulatory systems that have, or lack, pulsatile flow. Ecologists refer to the “theory” of island biogeography–but isn’t that really just a very simplified model of colonization and extinction on islands? The same way the Lotka-Volterra predator-prey “model” is a very simplified model of predator-prey dynamics? Maybe theory and models are more like two ends of a continuum? The more simplifying assumptions you make, and the less tailored your assumptions are to any particular system, the closer you are to the theory end of the continuum?
- One can talk about subtypes of theory and models too. For instance, Levins (1966) famously suggested a three-way trade-off between realism, precision, and generality in modeling. Models that sacrifice generality for precision and realism are what I’m calling “models”. While models that sacrifice precision for realism and generality, and models that sacrifice realism for precision and generality, are different subtypes of what I’m calling “theory”.
- Some applications of mathematics in ecology kind of fall outside the theory-model dichotomy (or theory-model continuum). I’m thinking for instance of partitions like the Price equation, or Peter Chesson’s approach to coexistence theory. They aren’t models or theories themselves. Rather, they tell you something about the properties that *any* model or theory will have (e.g., any model or theory of stable coexistence will operate via equalizing mechanisms and stabilizing mechanisms).
- I’m curious how aware empirically-oriented ecologists are of the theory-model distinction. And how their awareness of it, or lack thereof, affects their attitudes towards mathematical approaches generally.
- As a grad student, I got into microcosms because that seemed like a system in which theories *were* models, or at least close to being models. That is, the drastic simplifying assumptions of the theories in which I was interested (“community modules”, as Bob Holt calls them) were closer to being met in microcosms than in most other systems. So that theories could be tested in a rigorous way, much as system-specific models are tested. But I’ve found myself increasingly getting away from that, and wanting to build models for microcosms. And more broadly, I’ve found myself becoming more excited about the Tony Ives approach of using models tightly linked to data to solve system-specific puzzles. I think that many of the most impressive successes in ecology over the last couple of decades have come from that approach. Even if you’re interested in general theories (and I still am), increasingly I feel like bringing data to bear on those theories is best done by bringing data to bear on models that incorporate theoretical ideas.
- After I wrote this post, I was alerted to a new paper on theory in ecology that covers much of the same ground. It’s very interesting, looks like good fodder for a future post.

*On behalf of Karen Abbott, who couldn’t make it. UPDATE: Marm Kilpatrick and Kevin Gross also contributed a lot to the intro talk.

**Yes, I know others have defined “theory” and “model” differently. Which is why I defined my own usage for purposes of this post.

***A theory builder being someone like David Hilbert, as opposed to a problem solver like Paul Erdös.

Lots to think about, Jeremy!

First thought is that your model/theory distinction comes, I think, very close to May’s distinction between tactical and strategic models. That puts the dichotomy in ecology back at least 40 years. And I think he raised it (in his 1974 book introduction, if I recall correctly) to defend the strategic (what you call theory) approach even back then. I think theory has always been challenged to justify its existence in ecology – something Kingsland traces rather strongly in her history of ecology.

I definitely think theory can make bold predictions too. By a long thread of conversation starting with Star Trek, I ended up discussing with my son the idea that particles gain mass as they get closer to the speed of light. Now there is a bold prediction from a very general theory! And it turned out to be true.
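For the record, the prediction in symbols (my gloss, using the relativistic-momentum form):

```latex
% Relativistic momentum of a particle of rest mass m moving at speed v:
p = \gamma m v, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% gamma diverges as v approaches c, so the particle's inertia grows
% without bound -- the "gaining mass" of the older relativistic-mass
% convention.
```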

One push to the modelling side right now you don’t mention is the “big data” flood of data available to ecologists now in ways that are qualitatively different than the past.

You spotted the most obvious omission from my post! I was trying and failing to remember who originated the strategic vs. tactical modeling distinction, which is why I didn’t mention it in the post. Thanks for jogging my memory.

Re: theory being able to make bold predictions that turn out to be true, yes. Much more that could be said here re: different sorts of theory and the ways they can be tested. Perhaps a topic for a future post.

Good point re: the advent of “big data”. Though do you think that’s something that’s mostly still on the (near) horizon for (most of) ecology, or something that’s already here?

Oh, big data is definitely already here which is not to say it won’t continue getting bigger for some time to come.

Via Twitter, Jim Heffernan (@BioGeoJim) makes a good point. One way theory and models can work together is that theory can tell you what sort of general phenomena to look for and where/when to look for them, but you’ll probably need models to actually test whether those phenomena are occurring where/when you expected. He gives the example of multiple stable states in shallow lakes. The possibility of multiple stable states is one we can identify from theory, and theory might also give some guidance as to the general circumstances under which we’d expect to observe multiple stable states. But to actually test for multiple stable states in any particular system, general theory isn’t going to cut it, you’re probably going to need some system-specific modeling. This is a good example of a Tony Ives-type approach, I think (not that the approach is unique to Tony, of course–I just keep bringing him up because he’s such a handy example).

I wonder how much of this is a stages-of-the-field thing? It makes a lot of sense to me that population ecology is very model focused these days. Major theories were developed as far back as the 1920s. And certainly very extensive theoretical development in the 1960s and 1970s. It’s not to say that you can’t do new theory in population ecology, but it doesn’t seem surprising the balance has tipped.

Macroecology is I think a much younger field and perhaps in conjunction with that there is much more modelling. (Which makes the theory approaches of Harte and Hubbell stand out).

Wait, is the suggestion that theory comes first, models come later? That seems to be what you’re saying with the population ecology example. But then the macroecology example suggests the opposite? So I’m intrigued, but you lost me a little–what are the stages? What comes first?

You’re completely right – that’s what I get for dashing off a comment in transit. The population example suggests we move from theory to modelling (per your definitions). The macroecology example almost seems to be going in the reverse direction: lots of models (specifically very statistical models, or if you prefer, patterns), with some theory now starting to emerge.

I guess you could argue the sequence is: statistical models (which I think are different from what you are calling models, which incorporate many processes), then theory, then your type of model? I.e., I was confusing myself by linking statistical models/pattern finding with tactical modelling.

Or maybe I’m just full of it 🙂 But I do think it might be reasonable to expect disciplines at different stages to place a different emphasis on different proportions of theory and models. And I think you could argue population ecology has gone through this whole sequence.

I agree with Brian that what you describe as theory vs modeling really sounds more like May’s strategic vs tactical models, or, as you already cited, Levins’ “The Strategy of Model Building in Population Biology”. I think a lot of the tensions we see today were already well reflected when he was writing in 1966, with similar cultural differences that reflected the strengths of the different models.

You focus on a trend from strategic to more tactical modeling, but also on a trend towards model fitting to data. I wonder to what extent this is a shift from ‘mathematical modeling’ towards ‘statistical modeling’, where we are concerned more with quantitative than qualitative comparisons; or even just a shift towards greater integration between theoretical and empirical work. You don’t seem to develop the relative roles of data in the different approaches you consider here.

To me, “pure theory” is the realm of mathematical proofs, showing what are the logical consequences of certain assumptions. I’m rather surprised that you highlight Chesson’s work or the Price equation as a kind of grey area — I think they are clearly examples of (this kind of) theory.

How general such theorems are is largely a question of empiricism, e.g. how often are the assumptions met — I’ve never understood why generality should be a postulate in order to do theory, as your construction makes it out, rather than an empirical question to test after the fact.

(Of course pure theory can play a role in that generalization too — such as relaxing assumptions about white noise in demonstrating coexistence or existence of bet-hedging strategies).

To me, this kind of theory includes both the analytically tractable equations we can manipulate in chalk (e.g. Levins), but also the really complex, rich individual-based simulations such as we see from Ian Couzin’s work. Both also make assumptions (as Tony Ives emphasizes, it is usually the assumptions that we are really testing — testing a model’s predictions is just a means to that end) that we can confront with empirical study.

To me, this interaction of doing good theoretical work that is independent of data (regardless of whether it is expressed in the language of math, numerical simulations, or pictures and stories), and then figuring out how to empirically test these assumptions is more interesting than the kind of straw-man dichotomy between whether we need “general laws” and or “a different black box predictive algorithm for every system”.

Just a quick aside Carl (sorry, short on time just now), but yeah, I definitely see the Price equation or Chesson’s work as quite different. They’re not really making assumptions about how the world works. At most, there are only assumptions about the domain of applicability–assumptions that define the question or problem. That’s especially so for the Price equation. You aren’t testing its assumptions when you use it–it either applies, or it doesn’t. It’s a quite different beast.

As to whether there’s a trend towards statistical modeling, as distinct from mathematical modeling, interesting question, I’m not sure. One big advance in modern statistical methods is that we can now fit our mathematical models to data (think of Aaron King’s pomp package for R). Or think of Simon Wood’s partially specified modeling approach, where the model is a hybrid of a mathematical model and a nonparametric statistical model. So the distinction between statistical and mathematical modeling may be breaking down.
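To make “fitting mathematical models to data” concrete, here is a minimal sketch in Python – plain numpy/scipy rather than pomp, and the logistic-growth example and all parameter values are mine, purely for illustration:

```python
# Toy version of fitting a mechanistic model to data: simulate noisy
# logistic growth, then recover the parameters by least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, K, N0):
    """Solution of dN/dt = r*N*(1 - N/K) with N(0) = N0."""
    return K / (1 + ((K - N0) / N0) * np.exp(-r * t))

rng = np.random.default_rng(42)
t = np.linspace(0, 20, 50)
true_r, true_K, true_N0 = 0.5, 100.0, 5.0
data = logistic(t, true_r, true_K, true_N0) + rng.normal(0, 2, t.size)

# Fit the mechanistic model directly to the observations.
(r_hat, K_hat, N0_hat), _ = curve_fit(logistic, t, data, p0=[0.1, 50.0, 1.0])
print(f"r = {r_hat:.2f}, K = {K_hat:.1f}, N0 = {N0_hat:.1f}")
```

pomp-style approaches go much further (hidden states, measurement error, likelihood-based and simulation-based inference), but the core move – confronting a mechanistic model directly with data – is the same.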

I think any model fitting is inherently statistical in the sense that you need a probabilistic model (implicitly or explicitly). Contrast that to a qualitative prediction from a mathematical model such as: ‘species can coexist on the same resource given the appropriate temporally fluctuating environment’, or ‘linking two sink habitats can allow a species to persist’.

I think what you refer to as ‘domains of applicability’ is exactly what I mean by assumptions. They aren’t sweeping statements about ‘the way the world works’, they just define when the situation applies. But surely it is still an empirical question, even if a rather trivial one to answer, of where it applies. Do you see those theorems as different from any other theorems we have in ecology? Or just different from other things we might call theory?

@cboettig:

“Do you see those theorems as different from any other theorems we have in ecology? Or just different from other things we might call theory?”

Chesson’s stuff is pretty different from the usual sort of ecological theory, though perhaps not extremely so. You could say that he’s just deriving the consequences of the assumptions that define a broad class of models (roughly, any multispecies model with stationary dynamics and a few other properties).

But the Price equation is just different. I mean, yes, I suppose you could say that “assumptions” define its domain of applicability. But as someone who works with the Price equation, I find that way of thinking about the Price equation unhelpful. The “assumptions” that define the Price equation just seem to me to be totally different in character from the usual sort of assumptions one makes when doing the usual sort of theory or modeling. The Price equation is really more of an abstract mathematical relationship that holds by definition (but that nevertheless remains incredibly useful, rather than being trivially obvious). Or you could say it’s just good notation–not so much a model or a theory as good bookkeeping. Have a look at Steven Frank’s J Evol Biol paper on the Price equation from last year (or earlier this year? Can’t recall…)
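To make “holds by definition” concrete, here is the identity itself (standard notation):

```latex
% The Price equation: an identity, not a model. For any trait z and
% fitness w defined over a set of entities, the change in the mean of z
% across one "generation" decomposes exactly as
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}(w_i\,\Delta z_i)
% The first term is the selection component, the second the
% transmission component. Nothing about how the world works is
% assumed; the decomposition holds by the definitions of Cov and E.
```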

Hi Jeremy and all, I have to confess that this distinction between models and theory seems to be so fuzzy that I’m not clear exactly what the distinction is. For this to work it seems to me that there has to be some general agreement about where the line is drawn …and I just can’t see where that is. Jeremy, even where you have defined the difference it’s never completely clear to me what you’re calling theory – here’s the definition that I think you gave,

“Models that sacrifice generality for precision and realism are what I’m calling ‘models’. While models that sacrifice precision for realism and generality, and models that sacrifice realism for precision and generality, are different subtypes of what I’m calling ‘theory’.”

When I contrast the first two it sounds like part of what separates a theory from a model is that it makes poor predictions – that can’t be where we are heading, is it? And I’m not clear about the third – do you have an example of a theory that sacrifices realism for precision and generality? It sounds like an odd combination but I’m probably not clear on your intent. And from there the distinction becomes less and less clear, at least for me. You ask,

“How do you tell the difference between a theory that “captures the essence” of what’s going on but yet doesn’t match the data well because it omits “inessential” details, and a theory that’s just wrong?”

Indeed. My position on this is that you can’t tell the difference because there’s no evidence that there is a difference. How can details be ‘inessential’ if the result is the data not matching the theory? This statement captures one of our biggest problems in ecology – the widespread conviction that theory can capture all the essential pieces of a process even if it doesn’t predict the data well. So, what do we use for evidence that it has captured all the essential pieces if not how well it predicts the data? I am never completely clear about why so many ecologists want to deny the centrality of prediction in assessing what we know. In a field like machine learning it is well-accepted that the way you measure learning is the ability to predict out-of-sample data. Ecologists seem to believe there are other characteristics that can be used to assess what we have learned but I haven’t been able to sort out exactly what they are.

Further, “The more simplifying assumptions you make, and the less tailored your assumptions are to any particular system, the closer you are to the theory end of the continuum?”

Jeremy, I absolutely appreciate that you are just spit-balling and trying to come to some kind of solid ground where we can comfortably distinguish theories from models, but any definition that is going to rest on some arbitrary number of simplifying assumptions combined with some estimate of how tailored the assumptions are to specific systems sounds doomed to failure (if even a fragile consensus is one objective).

Using Ives and Harte as bookends to this discussion makes some sense but even there I think we invoke characteristics of their work that aren’t very dissimilar – they are both accomplished quantitatively and build mathematical models. Both Ives and Harte talk about theory and both talk about models. The difference is in the scope of their theories – Harte searches for theories that are generally true (i.e. true all over the planet, true at different times in history, perhaps true at multiple scales) while Ives searches for theories that are specifically true (i.e. true in a particular lake, true at a particular time, true at a particular scale). In my opinion, the only difference between the two is that John’s search for general theory adds an additional hurdle – he has to demonstrate not only that his model/theory predicts well for a single system or a few systems but that it predicts well for most or all systems. Tony has achieved his goal if his model/theory predicts well for the particular system he’s studying at the time he studied it. I think that John’s approach is much more ambitious and I’m much more sympathetic to his approach than Tony’s but, in fact, which is correct is an empirical question…if there are no general laws then it doesn’t matter how hard we look for them, we won’t find them. That said, if there are many general laws in ecology we run the risk of addressing relatively trivial questions if we don’t search for the general laws. My guess is that most ecologists who don’t search for general laws have made that decision because they are very skeptical of the existence of general laws and/or our ability to extract general laws from the noise of history and complex relationships.
And with all respect (I wish there was an emoticon that could represent the respect I have for Tony’s work), his comment about stamp collecting is a bit of a copout – the extent to which it is not stamp collecting is the same extent to which what happens in his model system also happens in other systems…the degree to which the explanation is generalizable. And the extent to which it is generalizable can only be assessed by how well the model/theory from the specific system helps us predict characteristics of other systems. Why would you make the claim that there are analogies or similarities between systems if not because the model of your system helps you make better predictions in another system than you were making with random guesses?

All of the further discussion heads in the same direction: theory (general) versus models (system-specific). So, I don’t see this as a theory versus models discussion unless we are calling models ‘system-specific’ and theories ‘general’. And that certainly redefines ‘scientific model’. I couldn’t find a definition of model that mentioned system-specificity and few that talked about mathematics. They all talked about ‘representations’ of nature or reality. The fact is that all theories are models or sets of models. I would be willing to state the opposite is probably true as well (i.e. that all models are theories) although that may be a bit more controversial. But would anybody argue that a theory doesn’t meet most definitions of a ‘scientific model’?

I think we potentially head down the wrong path when we draw a hard line between models and theory when what we are really intending is to draw a line between general theories and specific theories. And missing the point that both general and specific theories should be held to the same standard – how well they predict out-of-sample data. What I want to know about both John’s “theories” and Tony’s “models” is how well they predict characteristics of the systems they purport to describe. And this may sound like I’m implying that they don’t do a good job and that isn’t my intent – my fault completely, but I haven’t looked closely enough at their work to sort out how well they predict.

In my mind, Tony captured the real dichotomy with his question at ESA – this isn’t about theories versus models – it’s about general theories versus specific theories. This may seem like semantics but I’m loath to so narrowly define scientific models (i.e. mathematical equations that refer to a specific system) when they have historically had a much broader definition.

Best, Jeff H.

Hi Jeff,

Thanks for the lengthy comments. Not sure I’ll be able to do justice to them briefly (and unfortunately I only have time to be brief right now).

It seems like some of what bothers you might just be semantics? If you prefer to talk about general vs. specific theories, rather than theories vs. models, that might just be different words for the same distinction.

Don’t really have anything to say that I haven’t said before about prediction vs. other desiderata, so I’ll leave that one to other commenters.

Re: Levins’ 3-way schemata, you’re best off reading his essay (it’s quite short and accessible). But if I recall correctly, his example of sacrificing realism for precision and generality is graphical models presented in terms of isoclines. The idea is that, by looking at isoclines, you have a lot of generality. You haven’t specified specific equations for the isoclines–the isoclines effectively specify a class of models rather than one specific model. But you’ve still got precision, at least in some respects, because you’ve still got a more precise model specification, and more precise predictions, than would be possible with a verbal model. But the model’s not tailored to any particular system, and probably makes a lot of simplifying assumptions, so it’s not a realistic description of any particular system.
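A sketch of the isocline idea in symbols (my notation, purely illustrative, not Levins’ own):

```latex
% A generic two-species interaction, with the interaction functions
% left unspecified:
\frac{dN}{dt} = N\,f(N, P), \qquad \frac{dP}{dt} = P\,g(N, P)
% The isoclines are the curves f(N, P) = 0 and g(N, P) = 0. Plotting
% their shapes and intersections lets you reason about equilibria and
% qualitative dynamics for the whole class of models sharing those
% isocline shapes, without committing to specific functional forms
% for f and g -- generality and (qualitative) precision, at the cost
% of realism for any particular system.
```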

Hi Jeremy, I think a fair chunk of it is semantics and I am often in the camp that would refer to that as ‘just semantics’ but here I think we are redefining what ‘model’ has historically meant and that seems likely to cause confusion. I think it does here because the theoretician versus modeler dichotomy suggests that John Harte and Tony Ives do something fundamentally different and I don’t think they do – it’s just the intended scope of their theories/models that is different.

Best, Jeff H.

Hi Jeff,

Well, all I can say is that both John and Tony describe themselves as doing something quite different. Whether they’d call it “fundamentally” different, I don’t know. But it’s different enough that they both remark on the difference in the linked interviews. And others have picked out the same difference as being enough of a difference to be worth commenting on (e.g., May’s “strategic” vs. “tactical” models, and see the forthcoming Bioscience paper linked to at the end of my post). So if the difference is a difference in intended scope of theories/models, well, apparently that’s a pretty significant difference in the eyes of many modelers/theoreticians.

Re: the suggestion that some problems call for theory, others for modeling: those contrasting Brown and Kareiva essays seem to bear this out. Jim’s essay is all about the importance of static patterns and the need for general theory to explain them. Kareiva’s essay is all about the importance of dynamics and the need for models to explain them. So perhaps that’s what the theory/model divide really comes down to? Do you care about patterns, or do you care about dynamics?

If that’s right, it immediately raises various questions. Like, why can’t you have a *theory* of dynamics? Maybe you can and we just haven’t thought of it? Because there certainly are patterns in dynamics that suggest we ought to be able to theorize about them. I’m thinking for instance of Murdoch et al.’s famous result that it’s only specialist consumers that exhibit predator-prey cycles; generalist consumers that cycle all exhibit stage-structured cycles (“generation” cycles).

Hi Jeremy,

I usually think of the theory/modelling divide in the following way: theory is primarily concerned with conceptual development of the field, while modelling is about articulating and testing those developments. In that sense, I think the divisions outlined in Scheiner & Willig’s 2008 paper between theories and models would help in this debate to clarify what is meant by theories versus models.

Hi, I had similar thoughts as Jeff above and probably agree with Justin as well. The difference between theory and modeling is for me that theory has the aspiration to explain causation, while modeling may also be purely predictive / descriptive.

In that sense, I simply don’t think that modeling and theory are two things that work well as natural opposites. Theory needs modeling for testing, but not all models are theory driven, models can also be purely data-driven.

I also have some reservations about placing Harte’s MaxEnt on the pedestal of an “ideal theory” – my understanding is that MaxEnt essentially identifies (or assumes) some statistical symmetries that are then used to predict diversity patterns under constraints. It may be a good theory, but if so, I would say it is not because it’s general (so is a regression), but because it explains why these symmetries exist. So far I had the feeling there is still a bit of work to do on the explaining part in MaxEnt, but maybe that’s just my lack of understanding of this field.
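For readers unfamiliar with the machinery, the generic MaxEnt recipe (my summary of the general principle, not of Harte’s specific derivations) is:

```latex
% Maximize Shannon entropy subject to normalization and to constraints
% on the means of known quantities f_k (e.g., total abundance, total
% metabolic rate):
\max_{p} \; -\sum_i p_i \log p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i f_k(i) = \bar{f}_k
% The solution is an exponential-family ("Gibbs") distribution,
%   p_i \propto \exp\!\big(-\textstyle\sum_k \lambda_k f_k(i)\big),
% with the Lagrange multipliers \lambda_k set by the constraints. The
% predicted patterns follow from the constraints alone, so the
% explanatory burden falls on where the constraints come from.
```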

thanks for the interesting post and the insightful discussion!

there is a tiny typo in the text: the initial West et al MTE paper was published in 1997 not 1994 … (the link is correct, however)

typo fixed, thanks!
