Here’s a cartoon sketch of how I think a lot of empirical research in ecology proceeds:
- Many ecologists get interested in some phenomenon that occurs or might occur in many different systems. Interspecific competition. Trophic cascades. Keystone predation. Ecosystem engineering. Curvilinear local-regional richness relationships. Whatever.
- Nobody really has much theoretical idea of how common or important the phenomenon might be, or what factors might influence its occurrence or strength. Maybe we have some vague verbal hypotheses, such as that the phenomenon might be stronger in the tropics or something.
- Many ecologists go out and do field studies to test for the phenomenon or its effects. They do lots of competitor removal experiments, or document lots of local-regional richness relationships, or whatever. The hope is that once we have some data to go on, empirical patterns will emerge and those patterns can then guide future theoretical and empirical work. Give theory a target to shoot at, as it were.
- Then somebody does a meta-analysis or other quantitative summary of all those studies, looking at both the overall average strength of the phenomenon, and at covariates that might be associated with variability around the overall average. Usually, those covariates are readily-available, “coarse” variables like the biome in which the study was conducted, the broad taxonomic group of the key species involved in the study, the latitude at which the study was conducted, etc. Usually, the headline result of that meta-analysis is that the phenomenon is common and strong on average. But the studied covariates either have no significant effect on the occurrence or strength of the phenomenon, or explain little of the variation in its occurrence or strength.
- And…that’s it. The headline result gets added to ecologists’ body of empirical knowledge. The effects of the covariates mostly get ignored, since after all they’re weak/noisy/non-existent. But beyond that the meta-analysis doesn’t inspire or guide any new theoretical modeling, or lead to any big new insights (even though the meta-analysis’ authors often say or hope that it will). Maybe the meta-analysis doesn’t even lead to any new work at all. It functions not as a jumping-off point for further work on the topic, but as an endpoint. This topic’s played out. We’ve got the answer, ecologists seem to say. Or at least, as much of an answer as we can expect to get easily. The low-hanging fruit’s been picked; time to move on to the next thing.
Here are some questions for you about this little cartoon sketch:
- Have I got it about right? If not, where am I way off base?
- Assuming that I’ve got this about right, could ecologists do better? If so, how?
- Following on from the previous question: what are the biggest and best exceptions to my little sketch? I’m particularly interested in exceptions to step 5: examples of ecologists going out and studying some phenomenon in a range of systems, with the resulting meta-analysis or other statistical summary inspiring productive theoretical modeling (that ideally then suggests or guides further empirical work). Rather than the meta-analysis ending up serving as an endpoint for the research program. The biggest and best example that comes to my mind is 1/4 power allometric scaling. Can you think of others?
- Lurking under the surface here is a bigger, broader question: what are the necessary ingredients for “pattern first” empirical research to lead to progress that goes beyond a statistical summary of the pattern? Not that the statistical summary doesn’t itself represent progress—it does! But what does it take to go further? We all say that good science ideally consists of ongoing, productive feedback between theory and data (don’t we?). Are there any broadly-applicable “rules of thumb” for what sorts of data provide the best starting point for productive data-theory feedback? Can anything be said in general about what sort of data give theory good “targets to shoot at”?*
*In economics, there’s a literature on this. Empirical data that are particularly good fuel for theory are known as “stylized facts”. A stylized fact is a simplified summary of an empirical finding, such as “education substantially increases lifetime income”. The term is originally due to Kaldor (1961), who identified several now-canonical stylized economic facts. Another classic reference on “stylized facts” and why they’re useful for fueling further theoretical (and thus empirical) work is Summers (1991). Notably, Summers accused empirical economists of the time of spending too much effort using sophisticated statistical techniques to precisely estimate model parameters and tease out non-obvious causal relationships in empirical data, activities which Summers argued were much less useful than producing stylized facts. Abad and Khalifa (2015) provide a recent philosophical review and analysis of the notion of “stylized facts”. Here’s one randomly-googled example of a theoretician proposing a model to explain some stylized economic facts; the model in turn predicts other stylized facts for which empiricists could look. I think ecology has stylized facts, but offhand I’m not sure it has enough of them (you can’t have too many!), or that it’s good enough at producing them. The cartoon sketch above is a proposed stylized fact about ecology’s paucity of stylized facts.**
**You can never be too meta!