Simplifying a complex, overdetermined world

Ecology is complicated. Anything we might want to measure is affected by lots of different factors. As a researcher, how do you deal with that?

One way to deal with it is to try to focus on the most important factors. Try to “capture the essence” of what’s going on. Focus on developing an understanding of the “big picture” that’s “robust” to secondary details (meaning that the big picture would basically look and behave the same way, no matter what the secondary details turn out to be). This is how I once would have justified my own interest in, say, simple theoretical food web models (e.g., Leibold 1996). Sure, they’re a caricature of any real-world system. But a caricature is a recognizable—indeed, hyper-recognizable—portrait. The whole point of a caricature is to emphasize the most important or distinctive features of the subject. A caricature that’s not recognizable, that’s not basically correct, is a very poor caricature.

But here’s a problem I’ve wondered about on and off for a long time: what’s the difference between a simplification that “captures the essence” of a more complex reality, and one that only appears to do so, but actually just gives the right answer for the wrong reasons? After all, as ecologists we aren’t in the position of an artist drawing a caricature. We don’t know for sure what our subject actually looks like, though of course we have some idea. So it’s not obvious that our caricatures are instantly recognizable likenesses of whatever bit of nature we’re trying to caricature.

Now, one possible response to this concern is to deny that getting the right answer for the wrong reasons is even a possibility. If we develop a simplified picture of how the world works, then any omitted details which don’t change the predictions are surely unimportant, right? If our model makes basically the right predictions, then it’s basically right, at least as far as we can tell? Right?

I’m not so sure. The reason why I worry about this is what philosophers call “overdetermination”. Overdetermination is when some event or state of affairs has multiple causes, any one of which might be sufficient on its own to bring about that event or state of affairs, and perhaps none of which is necessary. Philosophers, at least the few I’ve read, are fond of silly examples like Sherlock Holmes shooting Moriarty at the exact same instant as Moriarty is struck by lightning, leaving it unclear what caused Moriarty’s death. But non-silly examples abound in ecology. Here’s one from theoretical ecology (I could easily have picked an empirical example). The Rosenzweig-MacArthur predator-prey model predicts predator-prey cycles for some parameter values. Imagine adding into this model a time lag between predator consumption of prey and predator reproduction, one which is sufficient on its own to cause predator-prey cycles. Now here’s the question: is the original Rosenzweig-MacArthur model a good approximation that “captures the essence” of why predator-prey cycles occur when there’s also a time lag? Put another way, is the original Rosenzweig-MacArthur model “robust” to violation of its assumption of no time lags? Or in this more complex situation, is the Rosenzweig-MacArthur model misleading, a bad caricature rather than a good one?
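To make this concrete, here’s a minimal simulation sketch. It’s mine, not anything from the literature: the particular way I’ve bolted the lag onto the model, and every parameter value, are assumptions chosen purely for illustration.

```python
# A minimal sketch (not from any published analysis) of the Rosenzweig-
# MacArthur model with an optional reproductive time lag bolted on.
# All parameter values are invented for illustration; with these numbers
# the lag-free model has a stable equilibrium.
import numpy as np

r, K = 1.0, 3.0    # prey intrinsic growth rate and carrying capacity
a, h = 1.0, 0.5    # predator attack rate and handling time (type II response)
e, m = 0.5, 0.3    # conversion efficiency and predator mortality rate

def simulate(lag=0.0, tmax=300.0, dt=0.01, N0=5.0, P0=2.0):
    """Forward-Euler integration. `lag` (in model time units) delays the
    predator's numerical response; lag=0 recovers the original model."""
    steps, lag_steps = int(tmax / dt), int(lag / dt)
    N = np.full(steps + 1, N0)
    P = np.full(steps + 1, P0)
    for t in range(steps):
        s = max(t - lag_steps, 0)  # index into the lagged state (constant history)
        f_now = a * N[t] / (1 + a * h * N[t])   # type II functional response
        f_lag = a * N[s] / (1 + a * h * N[s])   # same, evaluated `lag` time units ago
        N[t + 1] = max(N[t] + dt * (r * N[t] * (1 - N[t] / K) - f_now * P[t]), 0.0)
        P[t + 1] = max(P[t] + dt * (e * f_lag * P[s] - m * P[t]), 0.0)
    return N, P

# Compare late-time prey fluctuations with and without the lag. Whether a
# given lag is long enough to induce cycles depends on the other parameters.
for lag in (0.0, 3.0):
    N, _ = simulate(lag=lag)
    tail = N[-10000:]  # last 100 time units
    print(f"lag={lag}: prey range over last 100 time units = "
          f"{tail.min():.3f}..{tail.max():.3f}")
```

With these (made-up) numbers the lag-free model settles toward equilibrium, so any sustained cycling in the lagged run is attributable to the lag itself; the overdetermined case in the text corresponds to picking parameters under which either mechanism alone would produce cycles.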

The same questions arise when different causal factors generate opposing effects rather than the same effect, and so cancel one another out. Consider a predator-prey model that has a stable equilibrium because of density-dependent prey growth. Now add in both predator interference and a time-lagged predator numerical response, with the net effect that the system still has a stable equilibrium, because the stabilizing predator density-dependence due to interference cancels out the destabilizing time lag. Does the original model “capture the essence” of the more complex situation? Is it “robust” to those added complications? Or is it just giving the right answer for the wrong reasons?
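Here’s the same sketch extended to this scenario. Same caveats apply: the Beddington-DeAngelis-style interference term is my choice of formulation, all numbers are invented, and whether the two complications actually cancel depends entirely on the values you pick.

```python
# Extends the sketch above: `c` adds predator interference via a
# Beddington-DeAngelis-style functional response aN/(1 + ahN + cP).
# All parameter values are illustrative assumptions.
import numpy as np

r, K, a, h, e, m = 1.0, 3.0, 1.0, 0.5, 0.5, 0.3

def simulate(c=0.0, lag=0.0, tmax=300.0, dt=0.01, N0=5.0, P0=2.0):
    """c = interference strength (stabilizing); lag = delay in the
    predator's numerical response (destabilizing). c=0, lag=0 recovers
    the plain Rosenzweig-MacArthur model."""
    steps, lag_steps = int(tmax / dt), int(lag / dt)
    N, P = np.full(steps + 1, N0), np.full(steps + 1, P0)
    for t in range(steps):
        s = max(t - lag_steps, 0)
        f_now = a * N[t] / (1 + a * h * N[t] + c * P[t])
        f_lag = a * N[s] / (1 + a * h * N[s] + c * P[s])
        N[t + 1] = max(N[t] + dt * (r * N[t] * (1 - N[t] / K) - f_now * P[t]), 0.0)
        P[t + 1] = max(P[t] + dt * (e * f_lag * P[s] - m * P[t]), 0.0)
    return N

# Base model, each complication alone, and both together:
for c, lag in [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0), (1.0, 3.0)]:
    tail = simulate(c=c, lag=lag)[-10000:]
    print(f"c={c}, lag={lag}: prey range = {tail.min():.3f}..{tail.max():.3f}")
```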

I think the answer to all these questions is “no”. That is, in cases of overdetermination, I’d deny that a model that omits some causal factors is “capturing the essence”, or is “robust”, or is accurately “caricaturing” what’s really going on, no matter whether its predictions are accurate. But I’d also deny that, in cases of overdetermination, a model that omits some causal factors is misleading or wrong. That is, I think the alternative possibilities I set up at the beginning—our simplified picture is either “basically right” or “basically wrong”—aren’t the only possibilities. There’s at least one other possibility—our simplified picture can be right in some respects but wrong in others.

Further, I think this third possibility, though it might seem rather obvious, actually has some interesting implications. For one thing, a lot of work in ecology really does aim to “capture the essence” of some complicated situation. It’s not just theoreticians who try to do this—empirical ecologists (community ecologists especially) are always on the lookout for tools and approaches that will summarize or “capture the essence” of some complex phenomenon. Which assumes that there is an essence to be captured. Conversely, a lot of criticism of such work argues not only that ecology is too complicated to have an essence to be captured, but that all details are essential, so that omitting any detail is a misleading distortion. I’m suggesting that, at least in an overdetermined world (which our world surely is), both points of view are somewhat misplaced.

For another thing, it’s important to recognize how simplified pictures that are right in some respects but wrong in others can help us build up to more complicated and correct pictures of how our complex, overdetermined world works. Recall my examples of predator-prey models. How is it that we know that, say, density-dependence is stabilizing, while a type II predator functional response and a time-lagged numerical response are destabilizing? Basically, it’s by doing “controlled experiments”. If you compare the behavior of a model lacking, say, density-dependence to that of an otherwise-identical model with density-dependence, you’ll find that the latter model is more stable. In general, you build up an understanding of a complicated situation by studying what happens in simpler, “control” situations (often called “limiting cases” by theoreticians). The same approach even works, though it is admittedly more difficult to apply, if the effects of a given factor are context dependent (this just means your “controlled experiments” are going to give you “interaction terms” as well as “main effects”). So when I see it argued (as I have, more than once) that complex, overdetermined systems can’t be understood via such a “reductionist” approach, I admit I get confused. How else are you supposed to figure out how an overdetermined system works? How else are you supposed to figure out not only what causal factors are at work, but what effect each of them has, except by doing these sorts of “controlled experiments”? I suppose you can black-box the entire system and just describe its behavior purely statistically/phenomenologically. For some purposes that will be totally fine, even essential (see this old post for discussion), but for other purposes it’s tantamount to just throwing up your hands and giving up.
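Here’s what that “controlled experiment” logic looks like in miniature, as a sketch with invented parameters (the comparison, not the particular numbers, is the point): two otherwise-identical models, differing only in whether prey growth is density-dependent.

```python
# The "controlled experiment" logic in miniature: toggle one mechanism
# (here, density-dependent vs. exponential prey growth) while holding
# everything else fixed. All parameter values are invented.
import numpy as np

r, K, a, h, e, m = 1.0, 3.0, 1.0, 0.5, 0.5, 0.3

def simulate(density_dependent=True, tmax=150.0, dt=0.01, N0=2.0, P0=1.5):
    steps = int(tmax / dt)
    N, P = np.full(steps + 1, N0), np.full(steps + 1, P0)
    for t in range(steps):
        # Logistic growth when density dependence is "on", exponential when "off":
        growth = r * N[t] * (1 - N[t] / K) if density_dependent else r * N[t]
        f = a * N[t] / (1 + a * h * N[t])  # type II functional response
        N[t + 1] = max(N[t] + dt * (growth - f * P[t]), 0.0)
        P[t + 1] = max(P[t] + dt * (e * f * P[t] - m * P[t]), 0.0)
    return N

# Damped vs. growing oscillations: compare early and late prey fluctuations.
for dd in (True, False):
    N = simulate(density_dependent=dd)
    early, late = N[1000:4000], N[-3000:]
    print(f"density dependence={dd}: early amplitude = "
          f"{early.max() - early.min():.3f}, late amplitude = "
          f"{late.max() - late.min():.3f}")
```

The design choice is the whole lesson: you learn what density dependence does by comparing its presence and absence with everything else held constant, exactly as in an empirical controlled experiment.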

Deliberately simplifying by omitting relevant causal factors is useful even when doing so doesn’t “capture the essence”, and even when there is no “essence” to capture. These sorts of simplifications aren’t caricatures so much as steps on a stairway. In a world without escalators and elevators, the only way to get from the ground floor to the penthouse is by going up the stairs, one step at a time.


18 thoughts on “Simplifying a complex, overdetermined world”

  1. Can’t our understanding of gravity shed some light on this issue? First, we had an understanding that masses attract one another according to Newton’s formulation, which gives an accurate picture at the scale of our solar system. Then we saw a problem with that model, as I understand it, because of some aspects of relativity and quantum mechanics (NOT an expert!), and we have been trying to resolve them with new mathematical and cognitive models, as discussed in your post on ordinary versus scientific language. I really like this philosophical direction in the blog, as I’ve said before, so thanks!

    • I think Newtonian gravity vs. Einsteinian relativity is a different sort of case. They’re related to each other, but not in the sense that Newtonian gravity omits causes that Einsteinian relativity includes.

    • I think the answer to your question is that dyed-in-the-wool empiricists don’t see themselves as having or using models of any sort, except purely descriptive statistical ones. They see themselves as doing model-free science, just “letting the data do the talking” and taking a purely inductive approach. Whether they are correct to see themselves in this way is another question, and whether science actually can progress in this way is still another question.

      • I might be using the term model too freely, anyway. So an empiricist would see science as some sort of evolving statistical process? But then (I guess this was your point), wouldn’t they still have to choose what to measure and what to leave out and doesn’t this imply some idea of importance?

      • “…wouldn’t they still have to choose what to measure and what to leave out and doesn’t this imply some idea of importance?”

        Yup.

        The ecologist who most famously advocated pure inductive empiricism (also called “instrumentalism”) is the late limnologist Robert Peters, in his polemic A critique for ecology. Instrumentalists like Peters argue that there’s either no such thing as causality, or if there is it’s not the business of science to investigate it (or any other “non-observable” or “hypothetical”). Rather, the only legitimate goal of science is prediction of as-yet-uncollected data about “observable” entities, and the way to achieve that goal is with purely phenomenological models such as descriptive statistical models. Explanation of why things happen is worthless and/or impossible; the only thing that matters is predicting what will happen. That’s a very rough gloss, but it gives you a bit of the flavor of instrumentalism.

      • Peters also appears to draw an unjustified distinction between interpolation and extrapolation in his Critique. I got the impression, when reading his book, that he viewed the correct role of new empirical work in ecology as filling in the gaps along a regression line of existing data, as if predicting beyond the limits of our data were somehow philosophically wrong. I disagree. I also recall a slightly troubling chapter about evolution somewhere in there, but it’s been a while, so it may not really have been that bad or even in there.

        But I also thought immediately of physics and gravity as I was reading through the post. The caricature approach applies to pretty much all major scientific fields (the ideal gas law in chemistry; the four humors in medicine). And I agree with your conclusion, Jeremy. I’ll even add that we can sometimes learn more from being wrong than we can from being right!

      • Peters’ book is indeed problematic in many ways. Yes, there is a chapter in which he dismisses evolution by natural selection as an empty “tautology”, a stance that’s way off base for reasons that have been widely noted.

    • Don’t know Deutsch, but many people (including philosophers, scientists, and sociologists of science) have made a wide range of claims to the effect that the content of science is not dictated solely or even mainly by how nature actually is. These claims are so disparate as to defy easy classification, but yes, they do include various versions of the thesis that scientists somehow impose an explanatory order on nature rather than discovering one that’s there independent of us.

      I don’t pretend to have read more than a tiny fraction of the massive amount of stuff (much of it very good, much of it absolutely abysmal) that’s been written on this, so any reading recommendations I could make will be pretty random. If you’re a Terry Pratchett fan, The Science of Discworld series is a lot of fun. It’s a collaborative effort between comic fantasy author Pratchett and a couple of British scientists. Each book alternates chapters from a Pratchett novella with commentaries on the scientific issues raised in the chapters. One of the overarching themes is “narrative”, the human need (including the scientific need) to impose an order on nature. In a more serious vein, there’s Peter Dear’s The Intelligibility of Nature, a historical/philosophical study of how what sort of thing counts as a scientific explanation has changed over time. For instance, Newton’s laws of motion weren’t universally acclaimed initially–many scientists thought they didn’t explain anything, and rather were just phenomenological patterns in need of explanation themselves. I’m certainly not going to recommend any hardcore social constructionist garbage, but if you want something with a bit of that flavor that’s actually worth taking seriously, try philosopher of science Ian Hacking, especially Representing and Intervening and The Social Construction of What?. The former is an introductory text (although with a lot of Hacking’s own views worked in), but both are reasonably accessible to non-philosophers.

      • Thanks for the recommendations! Maybe I should give Pratchett another try – I thought Good Omens was too cutesy and didn’t pick up anything else by him.

      • I actually quite liked Good Omens, so perhaps Pratchett’s just not your thing. He’s quite a consistent writer, so if you didn’t like that one, you probably won’t like his other books either.

        If you’re just looking to dip into Pratchett’s Discworld series and aren’t sure where to begin, the recent AV Club article on Pratchett is a good overview that will help you decide. They correctly note that starting at the beginning with The Colour of Magic isn’t the best idea, because Pratchett was just finding his feet as an author then and that first book is basically just silly slapstick. Note that the AV Club does recommend Good Omens as one possible entry point, so again, maybe Pratchett’s just not your thing. The AV Club article is also correct that the TV movie productions of Pratchett’s work (which you can probably find on YouTube) are poor, so don’t bother with them.

        I have a number of favorites (warning: what follows comes from a big-time Pratchett fan, and so may well be WAY too much information. Seriously). As literature, Small Gods is probably Pratchett’s best book, and it’s kind of a standalone, so that’s one possible starting point. Many of the other Discworld books fall into one of several informal sub-series (see here for a handy chart). I like the later “witches” novels, especially Lords and Ladies and Maskerade, and his first three “young adult” witches novels: The Wee Free Men, A Hat Full of Sky (my favorite book of his), and Wintersmith. The young adult novels are less jokey than most of his other stuff, so if you find silliness off-putting, those might be another good entry point. The “witches” books feature two of the best characters (Granny Weatherwax, and in the young adult ones, Tiffany Aching). And you might find them attractive because Pratchett plays around with the idea of “stories” that we tell ourselves to make sense of the world. I’m not as big into the “Death” novels as many people are (and I’m unusual in finding Hogfather one of Pratchett’s weakest books), except for Thief of Time, which is only sort of a “Death” novel and is one of his best books. The “City Watch” novels feature another of Pratchett’s best characters, Sam Vimes, and some very good supporting characters. Of these, I think Guards! Guards!, Men at Arms, and Night Watch are the best. The “Industrial Revolution” novels (satires about various technological advances, like the printing press) are probably my least favorite sub-series.

  2. Hmmm.

    I can only say that I’m having a very hard time wrapping my mind around the philosophers’ definition of overdetermined, which, at the very least, is a whole lot different from the statistical/mathematical use of the term. Indeed, I think their definition is largely nonsensical. The idea that a number of different things simultaneously, or any one of them alone, caused some event to occur… say what?

    • Maybe you’re getting hung up on simultaneity, Jim, which can make overdetermination seem like an incredible coincidence (as in the Moriarty-gets-shot-at-the-same-time-as-being-struck-by-lightning example). Here’s another standard philosophical example: firing squads. The victim is shot by many bullets at once, any one of which would’ve been sufficient to kill him. Or think of the many ecological examples where different sources of mortality are substitutable–if one thing doesn’t get you, the other will. Does that make more sense?

  3. Pingback: Instrumentalism, Dali, My Dad’s Photographs, and Me | aquaticbiology

  4. Pingback: “Null” and “neutral” models are overrated | Dynamic Ecology

  5. Pingback: There are ecology blogs, but no ecology blogosphere | Dynamic Ecology
