# Scaling up is hard to do: a response

Brian’s been bugging me to respond to his post on scaling up, so here it is. Don’t know that I’m living up to Brian’s very popular original (sorry Brian). It’s just a bunch of thoughts, not a single coherent argument, but I hope there are some interesting thoughts in there somewhere. They’re not exhaustive—the commenters on Brian’s post were great, and I’m mostly not going to repeat what they said. My comments won’t make sense if you haven’t read Brian’s post, so go read it if you haven’t already.

1. I agree with Brian that scaling up often is difficult or impossible. I also agree with him that, when you can do it, it’s really useful (I’d say essential). Here’s an old post I did which uses an analogy to the ideal gas law in physics (an analogy often used by macroecologists themselves) to argue for the importance of scaling up. Brian and I might disagree a bit on how often it’s possible to scale up, but it’s not like either of us really has any idea how often it’s possible. Neither of us has tried to somehow count the number of instances of “successful scaling up” in ecology, and I have no clue how you’d do that.

2. I may disagree with Brian a bit about what we should do when scaling up is impossible. Brian says he “refuses to accept” that macroscale studies might be intractable, and so shouldn’t be pursued. To which I’d respond, that’s like refusing to accept gravity. If a question is intractable, it’s intractable. Now, in fairness to Brian, I’m sure he didn’t actually mean to set himself up as a macroecological King Cnut, trying to make the tides bend to his will. I certainly agree with him that there’s lots we can learn about macroecology, even when we can’t scale up. And further, often the only way to find out if something is intractable is to try studying it. Just so long as we don’t cross the line from trying to learn as much as possible at the macroscale to trying to learn more than is possible. Sometimes, doing good science is like spelling “banana”: the hard part is knowing when to stop. “B-a-n-a-n-a-n-a-n-a…oops! Went too far!” 😉

3. Brian says that, when we can’t scale up, we just have to “muddle through” in our macroscale work. Can’t really say if I agree or not with this, because I’d need to know what “muddling through” actually involves. There are surely ineffective as well as effective ways to learn about the macroscale. Saying that we need to “muddle through” surely doesn’t mean that anything goes in terms of our research approach. That may seem like an obvious point, but as I’ve noted in a recent post, I do think ecologists often are rather too quick to use the difficulty of studying their chosen question in their chosen system as an excuse for using inferior or problematic research approaches. I’d be interested to hear from Brian what he thinks are some really good macroecological examples of “muddling through”, since he knows the literature far better than me. I think this would be a good way to move the discussion beyond generalities, and analogies to things like astronomy and the ideal gas law, and ground it more in the details of actual ecological practice.

4. In general, Brian and I may appear to disagree more than we do because some points he tends to emphasize, I tend to note only briefly, and vice-versa. Brian gets annoyed when people tell him you’re not doing real science unless you can scale up, and so he writes a lot about that. I’d probably find that annoying too if I were Brian, and rightly so, but I’m not Brian, so I don’t write much about that. Conversely, I get annoyed when people try to “scale down”—that is, when they try to use macroscale data (usually in combination with dubious implicit assumptions) to infer something about microscale processes. Especially when people who do this try to justify it (as they often do) by first pointing out that it’s impossible to scale up! Um, no; if you can’t scale up, you can’t scale down either, as this old post of mine points out (and often you can’t scale down even if you can scale up, because many different microscale models might lead to the same macroscale behavior). This is the sort of thing I’m talking about when I talk about people trying to make too much of macroscale data. Peter Adler, commenting on Brian’s post, made the same point. Brian recognizes this point, but I write about it more than he does, because I’m the one who gets annoyed by it. To each his own pet peeves. 😉

5. Just because you can’t scale up doesn’t mean microscale data and models can’t inform your interpretation of macroscale data. For instance, in this old post I point out how microscale data can inform our interpretation of the local-regional richness relationship, even though we don’t actually know how to quantitatively scale from the local to the regional. We may not know how to scale from the local to the regional, but we do know that species interactions matter a lot in every locality, so our interpretation of the regional had better be consistent with that knowledge. This is another point I believe Brian agrees with, even if he doesn’t emphasize it. So one way to “muddle through” when it comes to doing macroecology is “draw on microscale data and models, even if you can’t scale up”.

6. In the comments on his post, Brian suggests that, when we can’t scale up, the way to go is “models at the macroscale”. By which I take it he means models defined in terms of macroscale parameters and variables, that don’t make any explicit reference to the microscale, and aren’t explicitly derived by scaling up from the microscale. Assuming I’m understanding him correctly (and if not, the fault is mine), I agree that that approach is quite promising, as long as it’s properly understood. Some remarks:

Brian’s “models at the macroscale” are analogous to old school macroeconomic models of the sort favored by Paul Krugman, like the IS-LM model. There’s been a lot of debate recently in the econ blogosphere about “microfoundations” in macroeconomics, which is exactly the same issue as “scaling up” in macroecology (see this and this from Noahpinion for discussion, and links to posts from other economists). As an interested bystander to this debate, my sympathies are very much with old school, non-microfounded macroeconomics. Which puzzled me for a while, since in my own field of ecology I’m all about the importance of “microfoundations” whenever we can develop them, and rather skeptical of how much we can learn in their absence. Does that make me inconsistent? I’ve been thinking about this for a while, which is why I didn’t respond to Brian’s post until now—I wanted to make sure I had my head straight. I’ve decided that I’m not being inconsistent. Old school macroeconomics can get away without being rigorously derived from (i.e. scaled up from) explicit microfoundations because it’s based on parameters and functions that summarize the macroscopic effects of the relevant microscale processes. Sometimes, these parameters and functions are justified purely on empirical grounds. For instance, many macroeconomic models without microfoundations assume “downward nominal wage rigidity”, which is jargon for assuming that, while workers certainly can lose their jobs during a recession, the salaries of the workers who retain their jobs don’t get reduced, at least not far enough or fast enough to matter. The grounds for this assumption are purely empirical—it seems to be true, even though it’s surprisingly difficult to specify a tractable microscale model of the behavior of workers and firms in which it would be true.

There are plenty of analogous cases in ecology. Broadly speaking, the sort of “models at the macroscale” that Brian wants to see already exist in many areas of ecology—indeed, they’re ubiquitous! Most any parameter or function in any ecological model can be thought of as summarizing the effects of some underlying, often unspecified biology. And further, the justification for these parameters and functions often is purely empirical. Think for instance of the conversion efficiency parameter in a predator-prey model, the parameter that tells you how many units of predator biomass are produced from each unit of prey consumed. This parameter is usually a constant with a value less than 1—which is just a way of summarizing all the massively complicated underlying physiological, biochemical, and gut-microbiological processes involved in digestion and assimilation. Extensive empirical data justify summarizing all this complicated underlying biology in a single, constant parameter less than 1 in many, though not all, circumstances. Or think of the concept of density dependence. There are all sorts of mechanistic reasons why the per-capita growth rate of a population would depend on its own density—including reasons that actually have to do with interactions with other species. It’s often possible to summarize interspecific density dependence via an appropriate model of intraspecific density dependence, thereby allowing you to model the dynamics of a focal species embedded in a larger community without actually having to model that community. For instance, here’s a 2009 Peter Abrams paper on this, deriving the “macroscale” models of consumer intraspecific density dependence arising from different underlying “microscale” models of consumer-resource interactions. Or think of group selection and the associated concept of “group heritability” (“parental groups” passing on their “group traits” to “offspring groups”), which might be summarized by a heritability parameter. 
You can use the Price equation to show how group-level heritability reflects lower-level processes, including individual-level selection among the individuals comprising the parental groups (Okasha 2006). And there are a bazillion other examples that could be given. So when Brian says that macroecology should be based on “models at the macroscale”, he’s really just asking for the sorts of models that already form the basis of population ecology, community ecology, evolutionary biology, and other fields. All of which is a very long-winded way of saying that I’m actually optimistic that Brian’s “models at the macroscale” approach is feasible and worthwhile. After all, the analogous approach has proven very feasible and worthwhile in other areas of ecology.
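
To make the conversion-efficiency example concrete, here’s a minimal numerical sketch of a Lotka-Volterra predator-prey model (the parameter values are made up for illustration). The single constant `e` stands in for all the unspecified digestive and assimilative biology:

```python
# Minimal Lotka-Volterra predator-prey sketch (illustrative, made-up parameters).
# The conversion efficiency e < 1 is a single "macroscale" constant summarizing
# all the underlying physiology, biochemistry, and gut microbiology involved in
# turning consumed prey into new predator biomass.

def simulate(prey0, pred0, r=1.0, a=0.1, e=0.3, m=0.4, dt=0.001, steps=50_000):
    """Euler integration of dN/dt = r*N - a*N*P, dP/dt = e*a*N*P - m*P."""
    n, p = prey0, pred0
    for _ in range(steps):
        dn = (r * n - a * n * p) * dt
        dp = (e * a * n * p - m * p) * dt
        n, p = n + dn, p + dp
    return n, p

n, p = simulate(prey0=10.0, pred0=5.0)
# The model cycles around the equilibrium N* = m/(e*a), P* = r/a. Changing e
# moves that equilibrium, so the "hidden" microscale biology summarized by e
# still matters at the macroscale even though it is never modeled explicitly.
```

Note that nothing in the model says where `e` comes from; its value, like Brian’s macroscale parameters, is justified empirically.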

Now, before we all start holding hands and singing kumbaya, let me offer a couple of cautionary notes.

Here’s my second cautionary note. Thinking of things like predator-prey models as “macroscale” models offers a good illustration of a point made in #4 above: you can’t “scale down” and use macroscale data and models to make inferences about the microscale. Nobody would ever use the fact that predator conversion efficiency often is roughly constant and less than 1 to infer anything about the biological details of predator physiology, gut microbiology, or biochemistry. That would be very strange, and very silly. Nor would anyone ever use the fact that we can write down a “macroscopic” predator-prey model without explicitly specifying the underlying “microscale” physiology, gut microbiology, etc. as evidence that that underlying biology is just “small-scale” stuff that doesn’t matter at the macroscale. That would also be very silly; just because you can summarize or implicitly account for the effects of various microscale processes in a single macroscale parameter or function doesn’t mean that those processes don’t exist or don’t matter! After all, the behavior of your macroscale model would change if you changed those parameters and functions—which means that the behavior of your macroscale model would change if the underlying microscale processes changed. Which is why I get really annoyed when macroecologists say or imply things about the microscale, based solely on macroscale data and models. Because that’s what they’re doing whenever they claim that, on “large” spatial and temporal scales, the effects of “small scale” and “short term” processes are unimportant, or cancel out, or are just a minor source of “noise” superimposed on the “signal” created by “large-scale” processes. Wrong. Such claims apparently aren’t as obviously silly as, say, inferring the irrelevance of predator physiology and gut microbiology from the structure of a predator-prey model. But they’re equally false, and for the same reason.
Sorry macroecologists, but it’s population and community ecology all the way down. You certainly can do macroecology without explicitly accounting for this fact—but you can’t use macroecology to do away with this fact. Not that Brian would ever try—I hope!

This entry was posted in New ideas by Jeremy Fox.

I'm an ecologist at the University of Calgary. I study population and community dynamics, using mathematical models and experiments.

## 12 thoughts on “Scaling up is hard to do: a response”

1. I think you nailed something really key two paragraphs after #6 where you wrote: “Most any parameter or function in any ecological model can be thought of as summarizing the effects of some underlying, often unspecified biology… So when Brian says that macroecology should be based on “models at the macroscale”, he’s really just asking for the sorts of models that already form the basis of population ecology, community ecology, evolutionary biology, and other fields. All of which is a very long-winded way of saying that I’m actually optimistic that Brian’s “models at the macroscale” approach is feasible and worthwhile.”

I think part of the reason scaling up is hard is that sometimes the processes driving the patterns we see at different scales are different (or are weighted differently) at those scales. That means that in moving from one scale to the other we might have to throw out some information that is otherwise essential to think about at one level, and likewise we often need to reconsider what other processes might come into play at our new scale.

For example, I recently did some work on an immune-pathogen model to understand variation in disease progression, and then wanted to scale up to address what was going on with virulence evolution at the population level. A LOT of other processes come into play here (e.g. something as simple as how infected hosts die requires thinking about weather and interactions with predators, etc.), so even if I had a perfect model of the host-pathogen interaction (microscale), I still couldn’t scale up without bringing in these new processes. Likewise, to reiterate what’s mentioned above, there are probably a number of different immune-pathogen models that could get me to more or less the same macroscale patterns, since not all of the information at the microscale feeds into the processes that matter at the macroscale.

PS: I had missed Brian’s earlier post, and just now read it and only skimmed the comments there. Good stuff!

• Thanks Paul–not just for your very smart comments, but for reading my post first! See that, Brian? You’re not the only one who can draw traffic around here! 😉

2. Thanks for taking the time to do a very thoughtful and detailed post (4.5 pages!). I have been eager to hear your response because I do feel like you come from a very different place than I do, but are also fair and open-minded, unlike some coming from that perspective (I won’t regale you with stories I’ve repeated elsewhere, but the “that’s not good science” comment gets directed at macroecologists in public settings not infrequently).

As you note there are many things we agree on. Your #1 and #4 obviously. But also on the problems of scaling down. I completely agree. I have published repeatedly in support of this idea. For example that species abundance distributions (at least as generic hollow curve shapes) are not a good mechanism to differentiate which population processes are at work. We have already seen at least four dozen micro-scale (population process) models that produce this macro-scale (hollow curve) effect. This happened most famously when people were impressed at how well neutral theory produced species abundance distributions and species-area curves and took it as evidence that neutral theory must be true (despite literally dozens of other models that do equally well).

On #2 and #3, about what to do about the difficulty of scaling up: I am pretty sure you will never let me live down the use of the words “muddling through” and “refuse to accept”. But in the end I agree pretty strongly with Lakatos, or in less philosophical terms with the commenter Don S on a recent post who called science a marketplace of ideas. If I’m willing to go work on hard macroecological problems and be judged by my successes or failures, why do so many people feel the need to tell me a priori that I cannot (not should not but cannot) go there (and I’m not saying you said this Jeremy, because you didn’t)? To date macroecologists have been rather successful at producing science that seems to be of interest to a broad audience (at least judged by the flawed measure of papers in Science and Nature). And conversely, it’s not like working at the population level guarantees you are not going to go too far and step on a “B-a-n-a-n-a…”. I don’t want to start a distracting thread by naming fields in population/community ecology that have done this, but we all know they’re there (you’ve identified several zombies!). So in the end I just have to say, with regards to #2 and #3, the proof is in the pudding, and nobody can predict how science is going to turn out before doing it (as repeatedly underlined by Nobel prize winners).

As for how to go forward, my main approach (I called it equation number five in my “scaling up is hard” post) is to just go ahead and model macro-scale parameters of interest (e.g. species richness, global abundance of a species, etc.) at the macro-scale. As Paul noted in his earlier comment, the processes do change drastically with scale. I always have to laugh a little when population dynamics people tell me you can’t model within the level of interest (i.e. the macro-scale in my case). Every single parameter in population models is a population parameter (e.g. birth rate or intrinsic rate of increase cannot even be measured on anything but a population) (as you noted Jeremy). There have been a few nice attempts to derive population parameters from physiology, but they have been very limited and often narrow in scope; in short, scaling up is hard to do from individual to population too. So as just one example of modelling at the macro-scale, we have learned a lot about how species richness is a function of climate. We’re struggling at the moment to define mechanisms.

I like your analogies with micro- and macro-evolution, and am actually preparing a post on that (evolution is at least closer to ecology than the ideal gas law).

On your concern #1 on macro-level modelling (there are gotchas hiding in ignoring micro-level processes, such as interactions between macro-level parameters), I would simply have to say yes. And there are gotchas hiding in population-level parameters interacting with each other that get ignored. And gotchas of other sorts hiding in being too reductionist (e.g. ignoring the fact that communities were not closed systems in a homogeneous environment for like 50 years). There are gotchas everywhere in science. Cleverness to avoid them is a required trait of all scientists. Nothing unique here about macro-scale modelling.

On your concern #2 on macro-level modelling (that macro-processes are really a bunch of micro-processes that shouldn’t be ignored, or in your memorable phrase from an old post that you linked to above, “it is community ecology all the way down”): Yes it is. It is also physiology all the way down. It is also chemistry all the way down. It is also quantum mechanics all the way down. Where does this reductionism stop? I suggest in practice it stops when it is no longer useful/practical. Worrying about quantum tunnelling to help understand community ecology strikes most everybody as ridiculous. And that was really the main point of my “scaling up is hard” post: because of real mathematical challenges, going any further down the reductionist chain than the level of the phenomena one is interested in is very often not practical/useful.

In short with respect to your two main concerns, I see nothing unique about scaling from population to macroecology. I suspect physiologists are saying there are the same pitfalls and problems in working at the population level. And they are right and wrong for exactly the same reasons.

Thanks for your time and thoughts Jeremy!

• Re: “muddling through”, I wonder if it’s possible to put together for macroecology something like Wimsatt’s (1987) list of all the uses of false models. Box’s famous line about how all models are false, but some are useful, always bugged me, because it leaves you hanging. HOW are false models useful? Wimsatt 1987 (which I’ve linked to in other posts; sorry, too lazy to go look up the link again) is an answer to that question. His list includes things like “A pair of false models that describe two different limiting cases can give us a sense of how a more realistic, but intractable, intermediate case behaves” and “A false model that is false by virtue of omitting some process or factor can be used as a baseline; the difference between the model’s predictions and the data is an estimate of the effect of the omitted factor.” Not that the listed techniques are infallible–far from it. But having the list is a starting point for talking about the strengths and weaknesses of different uses of false models.

My reaction to your “muddling through” line is much the same: Ok, sure, we’ll just have to muddle through–but HOW do we do that? I think between the two of us we’ve already come up with a few “techniques for macroecological muddling”. Just off the top of my head:

1. Use microscale data and models to constrain or limit the macroecological inferences you try to draw. Whatever macroecological conclusions you draw shouldn’t conflict in known ways with microscale information.

2. Use macroecological data to narrow the space of viable hypotheses. As in your example of using paleo data to show that the latitudinal richness gradient is at least 100 million years old. That rules out the “it’s just a post-Ice Age transient” hypothesis.

3. Combine microscale modeling and macroscale data to identify what sorts of data you need to distinguish among microscale hypotheses. As with your example of the species-abundance distribution. It’s hard *not* to get a “hollow curved” species-abundance distribution, so we should avoid trying to use “hollow curved” species-abundance distributions to test alternative microscale models.

4. Build models at the macroscale. In terms of specific examples of this, maybe things like attempts to use phylogenetic and paleo data to estimate whether net diversification rates are diversity-dependent (analogous to estimating density-dependence from time series data in population ecology).

I’m sure this list could be greatly extended by someone who knows the literature (nudge). 😉

EDIT: And yes, however we pursue macroecology (or anything), the “marketplace of ideas” will sort things out. But I wouldn’t want to see us rely solely on the marketplace, for two reasons. First, as discussed in other posts on bandwagons, zombie ideas, etc., the marketplace of scientific ideas is not an efficient market (whether it could be made substantially more efficient than it is, I don’t know). Second, science progresses faster, the better the “variants” the marketplace of ideas has to sort among. The “adaptedness” of the “population of ideas” is not improved when someone proposes a “deleterious mutant” of an idea, even if that idea is quickly “selected against”. Unlike evolving organisms, we scientists have the power to bias our “mutations” with respect to their “fitness effects”: we can help science progress by doing whatever we can to make sure our ideas and approaches are *promising* ones. So yes, the “marketplace of ideas” will sort things out–but that’s no reason to avoid discussion of how to make sure that we’re putting “high fitness” ideas out into the marketplace.

3. I had one more thing that I wanted to say on Brian’s post too:

If I remember rightly,
1) $\theta_i$ are the parameters for all the different field plots; if the micro-to-macro theory is built by considering all the different values of $\theta_i$, this is said to be computationally intractable.

On the other hand,
2) if building a theory from micro to macro where the mean value of $\theta$ is used to predict the mean value of the macroscopic quantity, this approach is bound to fail by way of Jensen’s inequality, given sufficient variation in the distribution of $\theta$.

However, I see 1) and 2) as two ends of a spectrum: either the full distribution of $\theta$ or only its mean. But if you’re doing a proper job of building the theory, and $\theta$ is known to be poorly approximated by just its mean, then shouldn’t this be built into the theory, with $\theta$ approximated by the appropriate number of higher-order moments (without necessarily requiring that all the moments be brought into it)?

For example, the theory could involve two parameters: the mean and the variance of $\theta$ and predict the mean and the variance of the macroscopic property. Or the mean, variance and skewness, and so forth: just stop when you get to BANANA.

Maybe I’m over-interpreting what Brian had initially said, but I thought the argument was that 1) and 2) implied that scaling up is impossible. I think what Brian showed is that this is true if the micro-to-macro theory is restricted to only considering the means of an inherently variable system – but the ‘if’ in that sentence should be emphasized. I don’t think it shows that all types of micro-to-macro theory are doomed to be poor approximations or computationally intractable.
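
The mean-field failure in 2) is easy to check numerically. Here’s a minimal sketch (the convex response function and the distribution of plot-level parameters are made up for illustration):

```python
import random

# Jensen's inequality in action: for a nonlinear (here convex) macroscale
# response f, plugging in the mean of theta is not the same as averaging
# f over the distribution of theta across plots.

random.seed(42)
f = lambda theta: theta ** 2                  # made-up convex response
thetas = [random.gauss(10.0, 3.0) for _ in range(100_000)]  # plot parameters

mean_theta = sum(thetas) / len(thetas)
true_macro = sum(f(t) for t in thetas) / len(thetas)  # "measure every plot"
mean_field = f(mean_theta)                            # scale up via the mean

gap = true_macro - mean_field
# For f(x) = x**2 the gap is exactly the variance of theta, so here it is
# approximately 9: the mean-field estimate misses the true macroscale value
# by an amount set entirely by the variation among plots.
```

Note that the gap depends only on the variance here, which is exactly why approximating $\theta$ by a few higher-order moments, as suggested above, can rescue the scaling-up exercise.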

• Hi Amy, Brian, Jeremy, et al.,

I’m obviously very late to this post (!), but I have been thinking about this topic recently and doing projects that address the problem empirically and so I thought I’d write a comment despite my tardiness.

I agree Amy. One of your solutions, Brian, is to measure the dynamics in all plots but this isn’t feasible. The other micro-to-macro alternative you suggest is to use a mean-field approach, which won’t work because of nonlinearities combined with variance between plots. Your conclusion, therefore, is to jump straight to the macro level, which I think results in a lot of great science with sometimes impressive predictive capacity – so long as one doesn’t then ‘drill-down’ to infer mechanisms operating at the micro-scale (i.e. scaling down is also hard to do – which Jeremy has pointed out before).

But can’t we scale up by getting an estimate of the mean and variance in the dynamics across a sample of the plots (i.e. not the full area)? When we know the nature of the nonlinearity, and we have an estimate of the mean and variance between plots then don’t we have the information required to scale up?

Obviously, no need to reply… it has been 2.5 years since you wrote the post after all!

Simon.

• Hi Simon – Since I wrote my preceding post, I’ve become increasingly curious about the viability of what I (along with economists) called the delta method. It has also been called moment closure, among other things, in ecology. As linked to in the comments, there are certainly a dozen or so papers I know of that try to do this (Ben Bolker’s spatial moment closure paper and Peter Chesson’s scaling papers being the most prominent popping into my mind this second). But these techniques are difficult, have not had great uptake, and their actual predictive accuracy in real-world situations is largely unknown (to be precise, they give you the means to approximate a scaling up – how good the approximations are in real-world situations is, I think, the big question). Economists like it, but it hasn’t had much of a role in weather prediction. I think it will be an interesting question to see whether this line really pans out or not. I’m in the mildly skeptical corner but would be really happy to be wrong.
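
For what it’s worth, the second-order version of the delta method is simply $E[f(\theta)] \approx f(\mu) + \tfrac{1}{2} f''(\mu)\sigma^2$. A quick numerical sketch (the response function and the variation in $\theta$ are made-up illustrations) shows the variance term recovering most of the Jensen gap:

```python
import math
import random

# Second-order delta method: E[f(theta)] ~ f(mu) + 0.5 * f''(mu) * var(theta).
# For f = exp, f'' = exp, and the exact answer is known: exp(mu + sigma**2 / 2).

random.seed(1)
mu, sigma = 1.0, 0.3
thetas = [random.gauss(mu, sigma) for _ in range(200_000)]

true_mean = sum(math.exp(t) for t in thetas) / len(thetas)  # brute force
naive = math.exp(mu)                                        # mean-field
corrected = naive + 0.5 * math.exp(mu) * sigma ** 2         # delta method

# naive ~ 2.718, corrected ~ 2.841, true mean ~ 2.843: the variance
# correction closes nearly all of the gap left by the mean-field estimate.
```

How well the truncation works in practice depends on the higher moments and the nonlinearity, which is presumably where the real-world accuracy question above bites.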

• @Brian:

Your comments on moment closure take me back to my Silwood days, when my officemate Dave Murrell and his PhD advisor Richard Law were very into this approach. My friend Jon Norberg has used it too. And of course Ben, and Steve Pacala, and others…really, looking at that list of great people who used the approach, it’s hard to believe it didn’t take over the world! 🙂

I think there’ll always be a lot of use for approximation methods–Taylor series etc. But I agree that the moment closure approximation method never really took off. Not sure why. The math was tough. I think the choice of closure turned out to matter a lot in many cases, and it turned out to be difficult to guide that choice on theoretical grounds or interpret why one closure worked better than another in any particular case. And there might be other reasons. Now I’m curious enough that I’ll need to ask Ben at some point…
