In an old post I talked about how the falsehood of our models often is a feature, not a bug. One of the many potential uses of false models is as a baseline. You compare the observed data to data predicted by a baseline model that incorporates some factors, processes, or effects known or thought to be important. Any differences imply that there’s something going on in the observed data that isn’t included in your baseline model.* You can then set out to explain those differences.
Ecologists often recommend this approach to one another. For instance (and this is just the first example that occurred to me off the top of my head), one of the arguments for metabolic theory (Brown et al. 2004) is that it provides a baseline model of how metabolic rates and other key parameters scale with body size:
The residual variation can then be measured as departures from these predictions, and the magnitude and direction of these deviations may provide clues to their causes.
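The quoted workflow is simple to sketch in code. Here's a minimal Python toy (not from the metabolic theory literature; the data points and the decision to fix the exponent at 3/4 are illustrative assumptions): fit only the intercept of a 3/4-power scaling baseline, then inspect the residuals as the "departures from these predictions."

```python
import math

# Hypothetical (body mass, metabolic rate) observations; illustrative numbers only.
data = [(0.01, 0.035), (0.1, 0.21), (1.0, 1.1), (10.0, 5.4), (100.0, 33.0)]

# Baseline: rate scales as mass^(3/4), i.e. log(rate) = log(a) + 0.75 * log(mass).
# Fit only the intercept log(a), holding the 3/4 exponent fixed.
log_a = sum(math.log(r) - 0.75 * math.log(m) for m, r in data) / len(data)

# Residual variation: departures of the observed data from the baseline prediction.
residuals = [math.log(r) - (log_a + 0.75 * math.log(m)) for m, r in data]
for (m, _), res in zip(data, residuals):
    print(f"mass {m:>6}: residual {res:+.3f}")
```

The magnitude and direction of those residuals are then the things left for ecology to explain, which is precisely the step the rest of this post asks about.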
Other examples abound. One of the original arguments for mid-domain effect models was as a baseline model of the distribution of species richness within bounded domains. Only patterns of species richness that differ from those predicted by a mid-domain effect “null” model require any ecological explanation in terms of environmental gradients, or so it was argued. The same argument has been made for neutral theory: we should use neutral theory predictions as a baseline, and focus on explaining any observed deviations from those predictions. Same for MaxEnt. I’m sure many other examples could be given (please share yours in the comments!).
This approach often gets proposed as a sophisticated improvement on treating baseline models like statistical null hypotheses that the data will either reject or fail to reject. Don’t just set out to reject the null hypothesis, it’s said. Instead, use the “null” model as a baseline and explain deviations of the observed data from that baseline.
Which sounds great in theory. But here’s my question: how often do ecologists actually do this in practice? Not merely document deviations of observed data from the predictions of some baseline model (many ecologists have done that), but then go on to explain them? Put another way, when have deviations of observed data from a baseline model ever served as a useful basis for further theoretical and empirical work in ecology? When have they ever given future theoreticians and empiricists a useful “target to shoot at”?
Different fields and subfields of science have different methodological traditions: standard approaches that remain standard in part because each new cohort of students learns them.
Which to some extent is inevitable. Fields of inquiry wouldn’t exist if they had to continuously reinvent themselves from scratch. You can’t literally question everything. Further, tradition is a good thing to the extent that it propagates good practices. But it’s a bad thing to the extent that it propagates bad practices.
Of course, it’s rare that any widespread practice is just flat-out bad. Practices don’t generally become widespread unless there’s some good reason for adopting them. But even widespread practices have “occupational hazards”. Which presumably are difficult to recognize precisely because the practice is widespread. Widespread practices tend to lack critics. Criticisms of widespread practices tend to be ignored or downplayed on the understandable grounds of “nothing’s perfect” and “better the devil you know”.
Here’s one way to help you recognize when a widespread practice within your own field may be ripe for rethinking: look at whether the practice is used in other fields, and if not, what practices those other fields use instead to address the same problem. Knowing how things are done in other fields helps you look at your own field with fresh eyes.
I use R. I like it. I especially like the versatility and convenience it gets from add-on packages. I use R packages to do some fairly nonstandard things like fit vector generalized additive models, and simulate ordinary differential equations and fit them to data.
Scientific ideas can have various virtues. Most obviously, they can be correct. But they can also be clever, surprising, elegant, etc.
One important but difficult-to-pin-down virtue is fruitfulness. A scientific idea is fruitful if it leads to a lot of further research, especially if that research retains long-term value (it wasn’t just a trendy bandwagon or whatever). Fruitfulness overlaps a lot with influence.
Fruitfulness or influence covaries positively with correctness, but not perfectly. It would be nice if the covariance were perfect. It’s unfortunate when an influential idea turns out to be wrong, because the work that grew out of that idea often loses at least some of its value, and because there’s an unavoidable opportunity cost to building on ideas that turn out to be wrong. Andrew Hendry has a compilation of ecological and evolutionary ideas that inspired a lot of research despite being (in Andrew’s view) wrong, or at least not all that important.
In this post I’m interested in the flip side of incorrect-but-influential ideas: ideas that were correct but not influential. Somebody said something true–but nobody else cared. Correct but non-influential ideas are the proverbial tree falling in a forest that doesn’t make a sound.
What are your favorite examples of correct-but-uninfluential ideas in ecology? In all of science?
One way, among many others, for a theoretician to develop a mathematical model of one scenario is by analogy with some other scenario that we already know how to model.
The effectiveness of this approach depends in part on how loose the analogy is. At the risk of shameless self-promotion, I’ll highlight a physical analogy that my own work draws on (the analogy isn’t originally mine): dispersal synchronizes spatially-separated predator-prey cycles for the same reason that physical coupling synchronizes physical oscillators. Here’s a standard, and very cool, demonstration involving metronomes sitting on a rolling platform. The analogy between the ecological system and the physical system is actually fairly close, though for reasons that might not be immediately apparent (how come coupling via dispersal works like coupling via a rolling platform?). The closeness of the analogy is why it works so well (Vasseur and Fox 2009, Fox et al. 2011, Noble et al. 2015, and see Strogatz and Stewart 1993 for a non-technical review of coupled oscillators in physics, chemistry, and biology).
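If you want to see the synchronization mechanism in miniature, here is a minimal Python sketch. To be clear about what's assumed: this is a standard two-oscillator, Kuramoto-style toy with invented parameters, a stand-in for coupling in general, not the predator-prey models of the papers cited above.

```python
import math

# Two phase oscillators with slightly mismatched natural frequencies,
# coupled with strength K. A toy stand-in for dispersal coupling;
# all parameter values here are invented for illustration.
def simulate(K, steps=20000, dt=0.001):
    theta1, theta2 = 0.0, 2.0      # start well out of phase
    w1, w2 = 1.0, 1.1              # slightly different natural frequencies
    for _ in range(steps):
        d1 = w1 + K * math.sin(theta2 - theta1)
        d2 = w2 + K * math.sin(theta1 - theta2)
        theta1 += dt * d1
        theta2 += dt * d2
    # return the phase difference, wrapped to (-pi, pi]
    return math.atan2(math.sin(theta1 - theta2), math.cos(theta1 - theta2))

print(abs(simulate(K=0.0)))  # uncoupled: the phases drift apart
print(abs(simulate(K=1.0)))  # coupled: phase difference locks near asin(0.05)
```

With zero coupling the phase difference just grows with the frequency mismatch; with strong enough coupling the oscillators phase-lock, which is the qualitative behavior the metronome demonstration (and, by analogy, dispersal-coupled predator-prey cycles) exhibits.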
But it’s more common for physical analogies in ecology to be quite loose, justified only by verbal argument. Hence my question (and this is an honest question, not a rhetorical one): can you think of any examples in ecology in which models based on loose physical analogies have worked, for any purpose? Sharpening of intuition, quantitative prediction, generation of hypotheses that are useful to test empirically, etc.? Because I can’t.
Scientists—and indeed scholars in any field—often have to choose how wide a net to cast when attempting to define a concept, estimate some quantity of interest, or evaluate some hypothesis. Is it useful to define “ecosystem engineering” broadly so as to include any and all effects of living organisms on their physical environments, or does that amount to comparing apples and oranges?* Should your meta-analysis of [ecological topic] include or exclude studies of human-impacted sites? Can microcosms and mesocosms be compared to natural systems (e.g., Smith et al. 2005), or are they too artificial? As a non-ecological example that I and probably many of you are worrying about these days, are there any good historical precedents for Donald Trump outside the US or in US history, or is he sui generis? In all these cases and others, there’s no clear-cut, obvious division between relevant information and irrelevant information, things that should be lumped together and things that shouldn’t be. Rather, there’s a fuzzy line, or a continuum. What do you do about that? Are there any general rules of thumb?
I have some scattered thoughts on this, inspired by the concept of “shrinkage” estimates in statistics:
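To make “shrinkage” concrete for readers who haven't met it, here is a minimal sketch. The data and the weighting scheme are invented for illustration; the point is only the qualitative behavior: noisy group means get pulled toward the grand mean, and smaller (noisier) groups get pulled harder.

```python
# Minimal illustration of shrinkage estimation: pull per-group means toward
# the grand mean, shrinking small groups more. Data are made up.
groups = {"A": [4.0, 6.0], "B": [10.0], "C": [1.0, 2.0, 3.0]}

all_values = [v for vals in groups.values() for v in vals]
grand_mean = sum(all_values) / len(all_values)

def shrunk_mean(values, weight_per_obs=1.0, prior_weight=2.0):
    """Weighted compromise between the group mean and the grand mean.

    More observations -> more weight on the group's own mean;
    prior_weight controls how strongly we default to the grand mean.
    """
    n = len(values)
    group_mean = sum(values) / n
    w = n * weight_per_obs / (n * weight_per_obs + prior_weight)
    return w * group_mean + (1 - w) * grand_mean

for name, vals in groups.items():
    print(name, sum(vals) / len(vals), "->", round(shrunk_mean(vals), 2))
```

The relevance to casting a wide vs narrow net: a shrinkage estimate is a principled compromise between treating every case as sui generis (no pooling) and lumping everything together (complete pooling).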
I am attending a Festschrift this week for Michael Rosenzweig. Make no mistake, he is still actively doing science, but after 50+ years of scientific career, it seems like a good time to reflect on what an impressive career he has had. Full disclosure upfront: he was my PhD adviser, so I’m hardly the most unbiased reporter, but of course that also gives me a close perspective.
Mike was awarded the Ecological Society of America’s Eminent Ecologist award in 2008, and he has well over 100 papers, many massively cited, and three books, so I imagine many readers are familiar with his published work; it would take too much space to summarize it anyway. Instead, I want to offer several more reflective, and in some cases more personal, thoughts. Take them, as you wish, as a reflection of my respect and appreciation for Mike, or as musings on the ingredients of a good scientific career.
One of the most important conceptual advances in community ecology over the last couple of decades has been the development of modern coexistence theory: a quantitative, rigorous theoretical framework that exhaustively defines, and quantifies the strength of, the classes of mechanisms by which species coexist (e.g., stabilizing vs. equalizing mechanisms). Chesson (2000) is the most accessible summary of this theoretical framework. Adler et al. (2007) is an even more accessible overview of some of the key ideas. Folks like Jon Levine, Peter Adler, Janneke Hille Ris Lambers, Steve Ellner, and their colleagues are now applying modern coexistence theory to real data, showing that it leads to practical real-world insights.
But most ecologists only care about coexistence mechanisms as a means to the end of understanding species diversity. And as various folks have noted (including me here on the blog), a theory of coexistence isn’t necessarily the same thing as a theory of species diversity. The question is, how are those two things related?
I’ve been thinking about that question, have chatted about it with various people, and have seen various people mention it in talks. I’ve been struck by the divergence of opinion as to what the answer is. But obviously, my anecdotal experience probably isn’t representative of the broad views of ecologists. Hence my little poll below: do you think more species-rich communities are those with stronger coexistence mechanisms? Choose the answer that best matches your views.
I may decide to do my ESA talk on this topic if the early poll responses are all over the map or if the modal answer is one I seriously disagree with. So please vote! 🙂
In the comments, I encourage you to explain your vote.
“Operationalization” is the term for taking a concept that’s vague or abstract and making it more precise and concrete, so that it can be put to practical use. Like many scientific and social scientific fields that aren’t physics or chemistry, ecology has many concepts that are only vaguely defined, or at least were only vaguely defined when they were first proposed. “Niche” is an infamous example. Or think of how one response to my critique of the intermediate disturbance hypothesis was to question whether the ideas I was critiquing were “really” part of the intermediate disturbance hypothesis, properly defined. Few big ideas are born fully formed, so most new ideas have to go through some refinement and elaboration to make them operational.
Sometimes, the process of operationalization is successful, meaning that eventually everyone agrees on the definition of the concept and can go out and apply it. For instance, everybody agrees what “gross primary productivity” is. There might be practical obstacles to measuring it in any particular case, and different ways of measuring it might be prone to different sorts of errors. But those are practical obstacles, not conceptual ones.
But sometimes, the process of operationalization fails.
If you’re a very avid reader of this blog (in which case, you need to get a life), you will know that I’m writing a book about ecology. It’s for University of Chicago Press. The working title is “Ecology At Work”, though that’s only one of several candidate titles. Other candidate titles include “Ecology Master Class”, “Re-engineering Ecology”, and the joke titles that I and others tweeted recently.
Anyway, I’m very excited by this new challenge I’ve set myself, and also very nervous about whether I can pull it off. Which is where you come in. Below the fold is a draft introduction to my book. Please tear it apart.
Ok, don’t just tear it apart; any and all feedback is most welcome. But critical feedback and suggestions for improvement are particularly welcome. If you think the style sucks, or that the book sounds boring, or whatever, you are not doing me any favors unless you tell me that!
Feel free as well to ask me questions about the book, suggest things I should read, etc.
I’ll of course be getting feedback from more traditional sources as well. But every little helps.
Since many readers prefer not to comment, at the end there’s a little poll for you to tell me what you thought.
UPDATE: The comments have already given me some good feedback: it’s not as clear as it should be up front what the book is about and who the target audience is. And for some readers it’s still not totally clear even by the end. So: the book will comprise comparative case studies of what works and what doesn’t in ecological research. It’s not an introductory ecology textbook, it’s not a methods handbook, and it’s not an “ecology grad student skills” manual like How To Do Ecology. If you think of it as “kind of like A Critique For Ecology, but with lots of positive bits to go along with the critical bits and without a single narrow prescription for how to do ecology properly”, you won’t be too far off. The target audience is ecologists and ecology grad students interested in fundamental research.