Brian’s been bugging me to respond to his post on scaling up, so here it is. Don’t know that I’m living up to Brian’s very popular original (sorry Brian). It’s just a bunch of thoughts, not a single coherent argument, but I hope there are some interesting thoughts in there somewhere. They’re not exhaustive—the commenters on Brian’s post were great, and I’m mostly not going to repeat what they said. My comments won’t make sense if you haven’t read Brian’s post, so go read it if you haven’t already.
1. I agree with Brian that scaling up often is difficult or impossible. I also agree with him that, when you can do it, it’s really useful (I’d say essential). Here’s an old post I did which uses an analogy to the ideal gas law in physics (an analogy often used by macroecologists themselves) to argue for the importance of scaling up. Brian and I might disagree a bit on how often it’s possible to scale up, but it’s not like either of us really has any idea how often it’s possible. Neither of us has tried to somehow count the number of instances of “successful scaling up” in ecology, and I have no clue how you’d do that.
2. I may disagree with Brian a bit about what we should do when scaling up is impossible. Brian says he “refuses to accept” that macroscale studies might be intractable and so shouldn’t be pursued. To which I’d respond, that’s like refusing to accept gravity. If a question is intractable, it’s intractable. Now, in fairness to Brian, I’m sure he didn’t actually mean to set himself up as a macroecological King Cnut, trying to make the tides bend to his will. I certainly agree with him that there’s lots we can learn about macroecology, even when we can’t scale up. And further, often the only way to find out if something is intractable is to try studying it. Just so long as we don’t cross the line from trying to learn as much as possible at the macroscale to trying to learn more than is possible. Sometimes, doing good science is like spelling “banana”: the hard part is knowing when to stop. “B-a-n-a-n-a-n-a-n-a…oops! Went too far!” 😉
3. Brian says that, when we can’t scale up, we just have to “muddle through” in our macroscale work. Can’t really say if I agree or not with this, because I’d need to know what “muddling through” actually involves. There are surely ineffective as well as effective ways to learn about the macroscale. Saying that we need to “muddle through” surely doesn’t mean that anything goes in terms of our research approach. That may seem like an obvious point, but as I’ve noted in a recent post, I do think ecologists often are rather too quick to use the difficulty of studying their chosen question in their chosen system as an excuse for using inferior or problematic research approaches. I’d be interested to hear from Brian what he thinks are some really good macroecological examples of “muddling through”, since he knows the literature far better than me. I think this would be a good way to move the discussion beyond generalities, and analogies to things like astronomy and the ideal gas law, and ground it more in the details of actual ecological practice.
4. In general, Brian and I may appear to disagree more than we do because some points he tends to emphasize, I tend to note only briefly, and vice-versa. Brian gets annoyed when people tell him you’re not doing real science unless you can scale up, and so he writes a lot about that. I’d probably find that annoying too if I were Brian, and rightly so, but I’m not Brian, so I don’t write much about that. Conversely, I get annoyed when people try to “scale down”—that is, when they try to use macroscale data (usually in combination with dubious implicit assumptions) to infer something about microscale processes. Especially when people who do this try to justify it (as they often do) by first pointing out that it’s impossible to scale up! Um, no; if you can’t scale up, you can’t scale down either, as this old post of mine points out (and often you can’t scale down even if you can scale up, because many different microscale models might lead to the same macroscale behavior). This is the sort of thing I’m talking about when I talk about people trying to make too much of macroscale data. Peter Adler, commenting on Brian’s post, made the same point. Brian recognizes this point, but I write about it more than he does, because I’m the one who gets annoyed by it. To each his own pet peeves. 😉
5. Just because you can’t scale up doesn’t mean microscale data and models can’t inform your interpretation of macroscale data. For instance, in this old post I point out how microscale data can inform our interpretation of the local-regional richness relationship, even though we don’t actually know how to quantitatively scale from the local to the regional. We may not know how to scale from the local to the regional, but we do know that species interactions matter a lot in every locality, so our interpretation of the regional had better be consistent with that knowledge. This is another point I believe Brian agrees with, even if he doesn’t emphasize it. So one way to “muddle through” when it comes to doing macroecology is “draw on microscale data and models, even if you can’t scale up”.
6. In the comments on his post, Brian suggests that, when we can’t scale up, the way to go is “models at the macroscale”. By which I take it he means models defined in terms of macroscale parameters and variables that don’t make any explicit reference to the microscale, and aren’t explicitly derived by scaling up from the microscale. Assuming I’m understanding him correctly (and if not, the fault is mine), I agree that that approach is quite promising, as long as it’s properly understood. Some remarks:
Brian’s “models at the macroscale” are analogous to old school macroeconomic models of the sort favored by Paul Krugman, like the IS-LM model. There’s been a lot of debate recently in the econ blogosphere about “microfoundations” in macroeconomics, which is exactly the same issue as “scaling up” in macroecology (see this and this from Noahpinion for discussion, and links to posts from other economists). As an interested bystander to this debate, my sympathies are very much with old school, non-microfounded macroeconomics. Which puzzled me for a while, since in my own field of ecology I’m all about the importance of “microfoundations” whenever we can develop them, and rather skeptical of how much we can learn in their absence. Does that make me inconsistent? I’ve been thinking about this for a while, which is why I didn’t respond to Brian’s post until now—I wanted to make sure I had my head straight. I’ve decided that I’m not being inconsistent. Old school macroeconomics can get away without being rigorously derived from (i.e. scaled up from) explicit microfoundations because it’s based on parameters and functions that summarize the macroscopic effects of the relevant microscale processes. Sometimes, these parameters and functions are justified purely on empirical grounds. For instance, many macroeconomic models without microfoundations assume “downward nominal wage rigidity”, which is jargon for assuming that, while workers certainly can lose their jobs during a recession, the salaries of the workers who retain their jobs don’t get reduced, at least not far enough or fast enough to matter. The grounds for this assumption are purely empirical—it seems to be true, even though it’s surprisingly difficult to specify a tractable microscale model of the behavior of workers and firms in which it would be true.
There are plenty of analogous cases in ecology. Broadly speaking, the sort of “models at the macroscale” that Brian wants to see already exist in many areas of ecology—indeed, they’re ubiquitous! Most any parameter or function in any ecological model can be thought of as summarizing the effects of some underlying, often unspecified biology. And further, the justification for these parameters and functions often is purely empirical. Think for instance of the conversion efficiency parameter in a predator-prey model, the parameter that tells you how many units of predator biomass are produced from each unit of prey consumed. This parameter is usually a constant with a value less than 1—which is just a way of summarizing all the massively complicated underlying physiological, biochemical, and gut-microbiological processes involved in digestion and assimilation. Extensive empirical data justify summarizing all this complicated underlying biology in a single, constant parameter less than 1 in many, though not all, circumstances. Or think of the concept of density dependence. There are all sorts of mechanistic reasons why the per-capita growth rate of a population would depend on its own density—including reasons that actually have to do with interactions with other species. It’s often possible to summarize interspecific density dependence via an appropriate model of intraspecific density dependence, thereby allowing you to model the dynamics of a focal species embedded in a larger community without actually having to model that community. For instance, here’s a 2009 Peter Abrams paper on this, deriving the “macroscale” models of consumer intraspecific density dependence arising from different underlying “microscale” models of consumer-resource interactions. Or think of group selection and the associated concept of “group heritability” (“parental groups” passing on their “group traits” to “offspring groups”), which might be summarized by a heritability parameter. 
You can use the Price equation to show how group-level heritability reflects lower-level processes, including individual-level selection among the individuals comprising the parental groups (Okasha 2006). And there are a bazillion other examples that could be given. So when Brian says that macroecology should be based on “models at the macroscale”, he’s really just asking for the sorts of models that already form the basis of population ecology, community ecology, evolutionary biology, and other fields. All of which is a very long-winded way of saying that I’m actually optimistic that Brian’s “models at the macroscale” approach is feasible and worthwhile. After all, the analogous approach has proven very feasible and worthwhile in other areas of ecology.
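To make the conversion-efficiency example concrete, here’s a minimal sketch. It’s just the textbook Lotka-Volterra predator-prey model with made-up, round-number parameter values (nothing here comes from Brian’s post or any real system):

```python
def lv_equilibrium(r, a, m, e):
    """Interior equilibrium of the Lotka-Volterra predator-prey model
        dR/dt = r*R - a*R*P,   dP/dt = e*a*R*P - m*P,
    where e is the conversion efficiency: the single constant (< 1)
    summarizing all the digestive physiology, gut microbiology, and
    biochemistry that turns consumed prey into new predator biomass."""
    R_star = m / (e * a)  # equilibrium prey density
    P_star = r / a        # equilibrium predator density
    return R_star, P_star

# Hypothetical parameter values, chosen so the arithmetic is exact.
print(lv_equilibrium(r=2.0, a=0.5, m=1.0, e=0.5))   # → (4.0, 4.0)
# Halve the conversion efficiency (say, a shift in the predator's gut
# microbiome) and equilibrium prey density doubles:
print(lv_equilibrium(r=2.0, a=0.5, m=1.0, e=0.25))  # → (8.0, 4.0)
```

The point of the sketch: e buries a mountain of microscale biology in one number, yet the model’s macroscale behavior still depends on that number, so the buried biology hasn’t stopped mattering.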
Now, before we all start holding hands and singing kumbaya, let me offer a couple of cautionary notes.
First, the ubiquitous strategy of summarizing lower-level processes with higher-level parameters or functions is not risk-free. Every such summary, even a purely empirical one like “downward nominal wage rigidity”, necessarily involves some sort of assumptions about the lower-level processes. Those assumptions might be false in some circumstances, or might be true now but become false in the future, and so on. Insofar as those assumptions are false, your macroscale model will be in trouble. That may seem trivially obvious, but there are some subtle manifestations of this point. For instance, standard population genetics models like the Wright-Fisher model don’t explicitly specify any ecology, but instead summarize the evolutionarily-relevant effects of the underlying ecology in terms of population size parameters and selection coefficients. That is, ecology matters for evolution in these models only insofar as it affects population sizes and selection coefficients. The trouble is, structuring your “macroscale” evolutionary model in this way makes it really natural to think of population sizes and selection coefficients as independent of one another. After all, they’re totally different parameters, right? Now, you could of course assume that they’re correlated in some way, but in the context of a population genetics model that feels like a very ad hoc and artificial assumption. Almost like you’re “rigging” the model to behave how you want it to behave. But here’s the thing: if you actually specify some underlying ecology, and let the population sizes and selection coefficients emerge from that “microscale” ecological model, you’ll find that it’s hard to avoid generating some sort of correlation between selection coefficients and population sizes! I’ve made this same point in the past in a different context.
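Here’s a toy illustration of that last point (every name and parameter value below is invented for the example; it’s a sketch, not a model of any real system). Two genotypes compete under Beverton-Holt-style density-dependent fitness; the “macroscale” quantities a population geneticist would record (total population size N and the selection coefficient s) both emerge from the same underlying ecology, and so come out correlated rather than independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Microscale ecology: two genotypes with Beverton-Holt density-dependent
# fitness w_i = r_i / (1 + a_i * N). Genotype 2 grows faster when rare;
# genotype 1 tolerates crowding better. All values are hypothetical.
r = np.array([2.0, 2.4])      # low-density growth rates
a = np.array([0.001, 0.002])  # per-capita sensitivity to crowding

n = np.array([500.0, 500.0])  # genotype abundances
N_series, s_series = [], []
for _ in range(200):
    N = n.sum()
    w = r / (1.0 + a * N)  # fitnesses emerge from the ecology
    # The "macroscale" parameters a population geneticist would record:
    N_series.append(N)
    s_series.append(w[0] / w[1] - 1.0)  # selection coefficient of genotype 1
    # Poisson reproduction adds demographic stochasticity
    n = np.maximum(rng.poisson(n * w).astype(float), 1.0)

# Because s and N are both functions of the same underlying ecology,
# they come out strongly positively correlated, not independent.
print(np.corrcoef(N_series, s_series)[0, 1])
```

In a bare Wright-Fisher model you’d have to impose that correlation between N and s by hand, and it would feel like rigging; specify even a cartoonish ecology underneath and it appears on its own.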
The moral of the story is that, when you’re working at the macroscale, it’s not only impossible to avoid making implicit assumptions about the microscale, it’s really hard to keep those implicit assumptions from biasing your macroscale work. I don’t have much idea what to do about this. After all, the whole point of your macroscale model is to bury the microscale, by summarizing it or implicitly accounting for it. But to keep from being misled in the way I’ve just described, you basically have to keep the microscale out in the open, in the forefront of your mind. Maybe the only thing you can do is build microscale models and use them to alert you to potential hidden biases in your macroscale work. The point here is not to build realistic microscale models (who knows if they’re realistic or not?). The point is just to use some microscale model to alert you to macroscale possibilities that you’d be unlikely to recognize just from studying the macroscale model. Peter Abrams used to do a lot of this sort of thing.
Here’s my second cautionary note. Thinking of things like predator-prey models as “macroscale” models offers a good illustration of a point made in #4 above: you can’t “scale down” and use macroscale data and models to make inferences about the microscale. Nobody would ever use the fact that predator conversion efficiency often is roughly constant and less than 1 to infer anything about the biological details of predator physiology, gut microbiology, or biochemistry. That would be very strange, and very silly. Nor would anyone ever use the fact that we can write down a “macroscopic” predator-prey model without explicitly specifying the underlying “microscale” physiology, gut microbiology, etc. as evidence that that underlying biology is just “small-scale” stuff that doesn’t matter at the macroscale. That would also be very silly; just because you can summarize or implicitly account for the effects of various microscale processes in a single macroscale parameter or function doesn’t mean that those processes don’t exist or don’t matter! After all, the behavior of your macroscale model would change if you changed those parameters and functions—which means that the behavior of your macroscale model would change if the underlying microscale processes changed. Which is why I get really annoyed when macroecologists say or imply things about the microscale, based solely on macroscale data and models. Because that’s what they’re doing whenever they claim that, on “large” spatial and temporal scales, the effects of “small scale” and “short term” processes are unimportant, or cancel out, or are just a minor source of “noise” superimposed on the “signal” created by “large-scale” processes. Wrong. Such claims apparently aren’t as obviously silly as, say, inferring the irrelevance of predator physiology and gut microbiology from the structure of a predator-prey model. But they’re equally false, and for the same reason. 
Sorry macroecologists, but it’s population and community ecology all the way down. You certainly can do macroecology without explicitly accounting for this fact—but you can’t use macroecology to do away with this fact. Not that Brian would ever try—I hope!