A while back we invited you to ask us anything. Here’s the next question, from Margaret Kosmala, who clearly knows Brian and me. The question has been paraphrased; click through for the original.
In light of this opinion piece from a physicist, should ecologists be trying to estimate the values of universal (or even conditional) constants, thereby allowing more severe tests of ecological hypotheses?
Brian: I’ve thought about this topic a lot. See my paper on Strong and Weak Tests in Macroecological Theory. And in Empirical Tests of Neutral Theory I was actually so bold as to lay out a rank order of strength of tests. I’m not sure I would be so bold today. But I would say that in general:
- Tests of predictions > null-hypothesis/fail-to-reject tests (especially the ANOVA type with “no change occurred” as the null hypothesis)
- Predictions of quantitative values > predictions of qualitative values
- Multiple predictions (and tests) from a single theory > single predictions
- Comparisons of multiple theories/models > tests of a single theory/model (sensu Chamberlin)
Putting that all together, I am all in favor of predicting numbers (and making multiple predictions and testing multiple competing theories). But I don’t think that is the same as having a universal constant. I don’t think ecology has universal constants. Some of the best attempts (e.g. 3/4 scaling of metabolism with body mass, life history invariants) are right at most 80-90% of the time and/or accurate to within +/- 10-15%. That is impressive in ecology and I consider those fantastic results. But it’s hardly the physics pursuit of the universal constant of gravity or Planck’s constant or the mass of a proton to the umpteenth decimal place.
It’s an interesting question to ask why ecology doesn’t have fundamental constants. But it clearly has something to do with the fact that we study millions of kinds of things that are large and complex and evolved, whereas physicists study as few types of objects as possible, objects that are in some sense atomic or indivisible from the point of view of the theory under consideration. The best physics occurs when you have only one class of object that can be described by a single property, like the gravitational attraction of bodies, which basically boils down to one type of body described by one number (its mass). And that’s over and on top of the challenge of multicausality. Thus I find myself strangely on the fence. Would ecology be better off if the average ecologist tried to be a bit more like physicists (e.g. quantitatively predictive)? Yes. But could/should we actually try to be exactly like physicists (e.g. find universal constants)? No!
Jeremy: Man, this is right in my wheelhouse, and Brian’s! It’s like you’re an audience plant. 🙂 And the post you linked to is really good.
Short answer: what Brian said. But I’ll add some emphasis/elaboration/bullshit.
Brian’s first bullet is important. Every time you say your “hypothesis” is “Y will vary with X” (i.e. the statistical null hypothesis will be rejected), baby R. A. Fisher cries.
Like Brian said, I don’t think ecologists can be like physicists if “be like physicists” means “predict and estimate the precise quantitative values of universal constants”, because ecology doesn’t have any. The closest we have are probably the two examples Brian lists. But I do think we can be more like physicists in the sense of “subject our ideas to severe tests”. Severe tests are tests that correct ideas would pass with high probability and that incorrect ideas would fail with high probability. We have plenty of examples of severe tests in ecology (though we can debate whether they’re also examples of “strong inference”, which is one particular way of doing severe testing). That linked post includes some speculation on why we don’t do as much severe testing as I think we could, even though papers that do severe testing often win awards. I think those award-winning examples are the ideal for which we should all be aiming, and they give me hope. Contrary to popular belief, I don’t think that all ecologists are always Doing It Wrong!
As the linked opinion piece you provided suggests, severity is about removing wiggle room and setting the bar high. Those are the common threads running through all four of Brian’s bullets. For instance, testing several predictions cuts down your wiggle room; you can’t easily explain away several incorrect predictions post hoc. Testing several predictions also raises the bar. It’s easy for a false hypothesis to get lucky and make one correct prediction, harder for it to get lucky and make several correct predictions. (Aside: there are tricky conceptual issues here about how to count separate predictions and about interdependence of different predictions…) And this is why that piece you linked to is right to complain about vague theory. If there’s room for argument about what exactly a theory predicts or what counts as a test of it, that’s not a testable theory, full stop. There’s too much wiggle room. That’s why the textbook version of the hump-backed model isn’t testable.
I also think we can be more like physicists in the sense of learning from error. A good hypothesis test doesn’t just reject a hypothesis as false–it’s also informative about why and how the hypothesis is false, and so points you in the direction of the truth. That’s another common thread linking all four of Brian’s bullets. Testing multiple predictions is better than testing one because which predictions fail should tell you something about why your hypothesis is false. Testing multiple theories simultaneously is better than testing just one because the pattern of successful and failed predictions across all theories should help you triangulate the truth. Etc.
I also think that testing assumptions is complementary to testing predictions, and is an underused way to increase severity in ecology (see also). Testing assumptions often tells you why your prediction held or didn’t hold. That matters a lot in ecology because when you’re only testing one or a couple of non-quantitative predictions, it’s very easy for the prediction(s) to “get lucky” and hold for the wrong reasons. Testing assumptions also helps you avoid weak or logically-invalid predictions by forcing you to pay attention to the basis of those predictions. A laser-like focus on testing predictions/hypotheses sometimes leads ecologists to be too easily satisfied with any ol’ hypothesis, I think. As if the important thing was having something to test, never mind if it’s worth testing. Seriously, if all you care about is having predictions/hypotheses, you might as well get them from a Ouija board. Finally, testing assumptions often obliges you to consider many different lines of evidence or types of information. One important source of non-severe tests in ecology is when people focus too narrowly on just one line of evidence. One great thing about the recent debate on limits to continental-scale species richness is that both sides considered relevant evidence from small-scale studies. That’s useful–even essential, I’d say–because alternative hypotheses about the limits to continental-scale richness make assumptions and predictions about small-scale phenomena.
One way to get more quantitative predictions in ecology, and thus more severe tests, is to do more system-specific case studies. Note that this wouldn’t mean giving up on any hope of generality in ecology, though it might mean broadening or redefining what we mean by “generality”. I think one important cause of non-severe tests in ecology is ecologists overreaching for generality. An over-keen desire for generality sometimes causes us to frame our hypotheses/predictions more vaguely and qualitatively than we otherwise would.
Finally, a big reason to care about lack of severe tests in ecology is because we want the field to move forward. We don’t want zombie ideas wandering around, continuing to eat people’s brains. But people naturally get invested in their ideas, and so it’s worth thinking about ways to ensure that the field as a whole moves forward even if (some) individuals don’t. For instance, maybe sometimes trying to slay a zombie idea–trying to get everyone to reject it–just has the effect of reviving or prolonging interest in the idea. Frontal assaults on zombie ideas attract attention to those ideas, handing their proponents a ready-made argument for continuing to pursue them (“The intermediate disturbance hypothesis remains the subject of ongoing debate…”), and may come off as personal attacks that just cause proponents to dig in. Maybe sometimes the way to slay a zombie idea is to move past it, letting it fade into a ghost.
p.s. Just so there are no misunderstandings, no, I don’t think that ecological ideas can be neatly divided into true and false ones, because ideas can be correct in some respects and incorrect in other respects. Yes, I recognize the importance of approximations. Yes, I’m well aware of contingency–that X might be the case at one place or time but not another. Etc. Indeed, I think that falsehood of our ideas often is a feature, not a bug. In order to learn from error, you have to have errors! Finally, no, I don’t think truth and falsehood are the only important properties of ideas. Ideas can also be creative, fruitful, etc.–though I also think it’s difficult to ascribe those other properties to ideas, as opposed to the scientists pursuing those ideas.
You chaps have put a lot of thought into this, and your remarks are well worth paying attention to.
For the sake of argument/discussion I will suggest at least one real universal constant, and present a little history. ‘Every’ population (models do it too!) I know of has the ‘birth’ sex ratio at 1/2; ok, 98% of them. To some folks this is just a fact; to chaps like me [and Darwin, Fisher, Bill Hamilton, Bob Trivers] it was a big puzzle. Not only is it 1/2, but this is independent of the underlying genetic sex determination mechanism [many different kinds] and, more interestingly for ecology, independent of sex-differential mortality [or most other life history differences]. Why?
This fact of 1/2 was the spark that led some of us to ask about its deviations [nature’s patterns] and its generalization. By understanding ‘why’ 1/2 was so often expected, by understanding the evolutionary principles behind 1/2, we were then in a position to move forward to new systems [local mate competition, environmental sex determination, hermaphroditism, sex change, etc.]. I have long argued, oops, discussed with Jeremy via email the need for ecology to focus on principles behind STUFF. My experience with sex ratio is the basis for that belief: guess the correct principles, and move forward.
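The bare-bones, textbook version of Fisher’s equal-investment argument (my paraphrase, not any one paper’s notation) fits in a couple of lines. Every offspring in the next generation has one father and one mother, so if there are M breeding males and F breeding females producing T offspring in total, the expected return on a son relative to a daughter is

$$ \frac{T/M}{T/F} \;=\; \frac{F}{M}. $$

Whenever one sex is rarer, parents biased toward that sex do better, so the only uninvadable strategy is equal investment in the two sexes; with equal costs per son and daughter, that means a 1/2 ratio at the end of parental care, whatever the sex-determining mechanism and whatever mortality happens after the period of parental investment.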
eric
Great example, albeit one I tend to think of as evolutionary rather than ecological. Then again, the examples given in the post are evolutionary too. Which raises the question of whether there are any decent examples of purely ecological “universal constants”. The slopes of continental species-area curves being about 0.25 on log-log plots?
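(To pin down what the candidate “constant” would be there: the species-area relationship is usually written as a power law,

$$ S = cA^{z}, \qquad \log S = \log c + z\,\log A, $$

and it’s the exponent z, often cited as being somewhere around 0.25 for continental-scale curves, that’s the candidate; the intercept c varies wildly from system to system.)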
If the sex ratio were often not 1/2, ecological theory [pop dynamics, etc.] would be much more complicated and variable, since 1/2 is embedded in lots of theory. The ESS sex ratio being independent of many ecological details is a blessing for ecology. There are other evolutionary ecology examples…how about anisogamy generating basic sex differences?
There are numbers used by fishery scientists, the original life-history invariants actually, that date from about 1960 and have been estimated at basically the same value for 50+ years.
Yes, evolutionary constants have ecological implications. But they’re still, well, evolutionary. Are there ecological constants that arise for reasons having nothing to do with evolution? How’s the energetic equivalence rule looking these days? The 3/2 self-thinning rule? (Yes I know some proposed explanations for those involve evolution; trying to cast a wide net…)
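(For concreteness, the textbook forms of those two rules: energetic equivalence says that if individual metabolic rate scales as $B \propto M^{3/4}$ while population density scales as $N \propto M^{-3/4}$, then population energy use $NB \propto M^{0}$ is roughly independent of body size. The self-thinning rule says that as crowded plant stands thin, mean plant mass scales with density as $\bar{w} \propto N^{-3/2}$, equivalently total stand biomass $\propto N^{-1/2}$.)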
How about Ro ~ 1, in the medium run, for all pops?
Hmm…but is that interesting? Imagine if it weren’t the case. There wouldn’t be any species left, except for the ones that had infinite abundance!
I think the fact that most populations mostly break even demographically (on average) has important implications. But is the fact itself all that surprising?
Ro ~ 1 is not surprising, but it has quite surprising implications for life history evolution [where dRo/dstuff = 0 at the ESS], and leads to all sorts of power function rules at the ESS. A sex ratio of 1/2 is also not an empirical surprise; the surprises come from thinking more deeply about it.
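To unpack my shorthand, the medium-run bookkeeping is roughly

$$ R_0 = \int_0^{\infty} l(x)\,m(x)\,dx \;\approx\; 1, $$

with l(x) survivorship to age x and m(x) fecundity at age x, and the ESS condition for an evolving trait θ (“stuff”) is dR0/dθ = 0 at the equilibrium. Demography pins R0 near one; selection shapes the life history subject to that constraint, and that is where the power-function rules come from.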
Jeremy
As a practicing macroecologist, proud of our accomplishments, I still wouldn’t hold 1/4 power species-area or 3/2 thinning or 3/4 energetic equivalence as anything more than the most vague approximations (+/- 50%). I guess tastes could differ, but to me they’re not exactly analogous to physical universal constants.
Brian: even if these numbers are ‘noisy’, do you think their value(s), the patterns they summarize, point towards some deeper, more basic truth(s)/principles? That would make it more like physics, maybe not classical physics, but more modern complexity stuff.
I think they could.
I have a paper where I argue the 1/4 power of species-area comes from the distribution of range sizes. The 3/2 thinning rule is under much debate over whether it is 3/2, which would be a volume/surface-area argument, or 4/3, which would really be a 3/4 scaling argument. Which I think is one problem: when you have so much variability in your ‘constant’, it is hard to use your constant to test theory. In my experience the 3/4 abundance/body size prediction is really an upper bound that is only occasionally met, with most real-world allometric constants between 0 and 3/4 (e.g. varying around 0.2 or 0.3 or 0.4). I think that leads to some scientific elucidation: metabolic energy limits set an upper bound, but most communities are more limited by other factors (or alternatively, big organisms get more than “their fair share”). Guess I would say it’s a bit of a mixed bag as I walk through it.
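To spell out where the two candidate exponents come from (simplified versions of both arguments): the geometric argument treats a plant of linear size L as occupying ground area $\propto L^{2}$ with mass $w \propto L^{3}$, so at canopy closure $N \propto L^{-2}$ and

$$ \bar{w} \;\propto\; N^{-3/2}; $$

the metabolic argument instead has each plant drawing resources at a rate $\propto w^{3/4}$ against a fixed supply per unit area, so $N \propto w^{-3/4}$ and

$$ \bar{w} \;\propto\; N^{-4/3}. $$

With data as variable as what we actually have, telling -3/2 from -4/3 apart is hard, which is exactly the problem with using such a ‘constant’ to test theory.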
Biology as a whole doesn’t have laws, only rules, principles, and generalizations: http://joelvelasco.net/teaching/130/brandon97-biologylaws.pdf
I agree, though it depends what you mean by “law”. Different people use the term in different ways. And that’s even before you get into philosophical debates about the nature and interpretation of the paradigmatic example, laws of physics. Think for instance of Nancy Cartwright’s work.
Much ink to spill on this topic. Entire sentences even!
One point is that there is an important difference between what Margaret asked and the title of the piece.
Relative to the title, I have no problem answering “No, not particularly”. You have to give up something to get something, and although physical scientists have made much progress in modeling relatively simple systems with deterministic equations, the limitations of that approach show up when they have to address more complex systems involving many variables with non-constant parameter values, systems that require more sophisticated, multi-causal thinking and general statistical savvy. I pay a fair amount of attention to various aspects of climate science as currently practiced, and without launching into a dissertation, I can see this issue manifesting itself, often. As just one crude and quick example, you’d think those folks had never heard of any multivariate technique other than principal components analysis. Mention “correspondence analysis” and prepare for a blank response, and they sure do like assigning fixed values to “grid boxes” imposed on continuous, and sporadically measured, climate variables. Other examples could be given, and not trivial ones either, but life is short and getting shorter.
As to Margaret’s actual question the main points have been made already and I mainly agree. Quibbles could be found or created of course. It goes without saying that the search for “universal constants” is a fairly dumb thing to do if you don’t have evidence of some probability of success on the matter.
And “triangulate the truth”, oh I do like that–should maybe engrave it on my mental (or moral!) compass.
Glad you liked the triangulation metaphor, though I doubt I’m the first to use it.
” It goes without saying that the search for “universal constants” is a fairly dumb thing to do if you don’t have evidence of some probability of success on the matter.”
Yes, though that raises the question of what non-dumb thing to do instead if you don’t have good reason to think that there’s a universal constant out there waiting to be discovered or explained. This gets back to that old post of mine on different roads to generality.
Yes it very much does, and for those of us interested in advancing a very tight and direct relationship between scientific research and various resource management concerns (of which there are very many), this is a fairly big sore point. If some cool generalization emerges from exacting studies/data compiled from a number of different systems…wonderful, cake icing. If not….oh well, still got the cake, keep on trucking.
We don’t search for numeric constants; we search for understanding, and sometimes the surprising result is that something general is found, maybe numeric, maybe qualitative. Many distinguished ecologists, like my postdoc advisor C.S. Holling, believe[d] that only by first assembling a series of believable empirical-math models for various big systems would we be able to look for generalizations. It’s an interesting question whether we already know enough about the pieces [like predation, etc.] and what we lack is knowledge about the dynamical outcome of the pieces interacting… knowledge that we can only gain by doing the assembly case-by-case for a while. Some of the finest work on trophic interactions lies with top economic entomologists who have devoted their careers to using tritrophic models [cultivar plant…pest herbivore…predator/parasitoid designed for control] to DESIGN predation outcomes that we label biological control. I would suggest Andy Gutierrez at Berkeley {https://ourenvironment.berkeley.edu/people/andrew-paul-gutierrez} as the type specimen [disclosure: only met Andy once, but worked with his buddies at UBC for one summer on the topic].
In my brief historical mention of sex ratio theory I omitted one of the great heroes, Richard Shaw, who with J.D. Mohler wrote a 1953 Am Nat paper that invented the ESS technique and gave the first really understandable derivation of 1/2 as the equilibrium; they did all this as graduate students. Shaw’s sex ratio selection experiments did not work…so he did theory. A later paper by Shaw (1958, also from his thesis?) did population genetic simulations…real genes determining the sex, or the ratio, and showed that virtually any autosomal system gave a population equilibrium of 1/2. This was before computers made such dynamical simulation easy; he did it by desk calculator. He also showed that non-autosomal control often did not go to 1/2. Amazing and pioneering work.
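For anyone who wants the guts of it, the Shaw-Mohler logic in its modern textbook dress (not their original notation) comes down to a fitness measure for a rare mutant producing a fraction s of sons when the population produces a fraction s*:

$$ w(s, s^{*}) \;\propto\; \frac{s}{s^{*}} + \frac{1-s}{1-s^{*}}. $$

If s* < 1/2 the first term dominates and son-biased mutants invade; if s* > 1/2 daughter-biased mutants invade; only s* = 1/2 cannot be beaten. Generalizations of this expression are what later made local mate competition, sex change, and the rest tractable.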
eric
I forgot to mention that Robert MacArthur made one seminal contribution to sex ratio theory, a 1965 paper in an edited volume where he derived a generalized version of the Shaw-Mohler ESS argument. While it seems to have had little influence for a decade, GENERALIZED versions of his rule have proven quite useful. It’s great to be able to add sex ratio to MacArthur’s other long-lasting contributions, island biogeography and optimal foraging theory. Even if his approach to communities is out of favor, these 3 are a damn good batting average.
“It’s great to be able to add sex ratio to MacArthur’s other long-lasting contributions, island biogeography and optimal foraging theory. Even if his approach to communities is out of favor, these 3 are a damn good batting average.”
It’s interesting to ask if there’s a reason why MacArthur’s approach worked for sex ratios, island biogeography, and optimal foraging, but failed for community ecology.
As you note, evolutionary and behavioral ecology tend to be where we find ‘numeric rules’, or at least believable generalizations. I would add physiological ecology. The UNM Metabolic Ecology Mafia [UNMMEM…just made this up, kind of catchy label, like zombie] proposed many sorts of numeric universals beyond the 3/4 body size scaling; my almost-favorite is the exponential exponent underlying temperature responses of things like development rate, pop growth rate, metabolic rate, etc. While our views have certainly matured/broadened/etc. since the 2001-2004 period, a lot of interesting research resulted from treating it as a constant [sic] and also looking for ecological deviations away from it: see http://www.pnas.org/cgi/doi/10.1073/pnas.1015178108 for an example.
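For concreteness, the ‘constant’ I mean is the activation energy in the Boltzmann-Arrhenius factor of the metabolic theory papers of that era, which wrote whole-organism metabolic rate roughly as

$$ B \;\propto\; M^{3/4}\, e^{-E/kT}, $$

with M body mass, T absolute temperature, k Boltzmann’s constant, and E an activation energy typically reported in the neighborhood of 0.6-0.7 eV; development rate, population growth rate, and the rest were predicted to share roughly the same exponential temperature dependence.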