A while back we invited you to ask us anything. Here’s the next question, from Margaret Kosmala, who clearly knows Brian and me. The question has been paraphrased; click through for the original.
In light of this opinion piece from a physicist, should ecologists be trying to estimate the values of universal (or even conditional) constants, thereby allowing more severe tests of ecological hypotheses?
Brian: I’ve thought about this topic a lot. See my paper on Strong and Weak Tests in Macroecological Theory. And in Empirical Tests of Neutral Theory I was actually so bold as to lay out a rank order of strength of tests. I’m not sure I would be so bold today. But I would say that in general:
- Tests of predictions > null-hypothesis/fail-to-reject tests (especially the ANOVA type that take “no change occurred” as the null hypothesis)
- Predictions of quantitative values > predictions of qualitative values
- Multiple predictions (and tests) from a single theory > single predictions
- Comparisons of multiple theories/models > tests of a single theory/model (sensu Chamberlain)
Putting that all together, I am all in favor of predicting numbers (and making multiple predictions and testing multiple competing theories). But I don’t think that is the same as having a universal constant. I don’t think ecology has universal constants. Some of the best attempts (e.g. 3/4 scaling of metabolism with body mass, life history invariants) are right at most 80-90% of the time and/or accurate only to within ±10-15%. This is impressive in ecology and I consider those fantastic results. But it’s hardly the physics pursuit of the universal constant of gravity or Planck’s constant or the mass of a proton to the umpteenth decimal place.
It’s an interesting question to ask why ecology doesn’t have fundamental constants. But it clearly has something to do with the fact that we study millions of things that are large and complex and evolved, whereas physicists study as few types of objects as possible, objects that are in some sense atomic or indivisible from the point of view of the theory under consideration. The best physics occurs when you have only one class of object that can be described by a single property. Like the gravitational attraction of bodies, which basically boils down to one type of body described by one number (its mass). That’s on top of the challenge of multicausality. Thus I find myself strangely on the fence. Would ecology be better off if the average ecologist tried to be a bit more like physicists (e.g. quantitatively predictive)? Yes. But could/should we actually try to be exactly like physicists (e.g. find universal constants)? No!
Jeremy: Man, this is right in my wheelhouse, and Brian’s! It’s like you’re an audience plant.🙂 And the post you linked to is really good.
Short answer: what Brian said. But I’ll add some emphasis/elaboration/bullshit.
Brian’s first bullet is important. Every time you say your “hypothesis” is “Y will vary with X” (i.e. the statistical null hypothesis will be rejected), baby R. A. Fisher cries.
Like Brian said, I don’t think ecologists can be like physicists if “be like physicists” means “predict and estimate the precise quantitative values of universal constants”, because ecology doesn’t have any. The closest we have are probably the two Brian listed. But I do think we can be more like physicists in the sense of “subject our ideas to severe tests”. Severe tests are tests that correct ideas would pass with high probability and that incorrect ideas would fail with high probability. We have plenty of examples of severe tests in ecology (though we can debate whether they’re also examples of “strong inference”, which is one particular way of doing severe testing). That linked post includes some speculation on why we don’t do as much severe testing as I think we could, even though papers that do severe testing often win awards. I think those award-winning examples are the ideal for which we should all be aiming, and they give me hope. Contrary to popular belief, I don’t think that all ecologists are always Doing It Wrong!
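To make that definition of severity concrete, here’s a toy sketch (all the numbers are hypothetical, invented for illustration, not taken from any ecological study). The point is that severity has two sides: how often a correct idea passes, and how often an incorrect idea fails.

```python
# Toy illustration of severe vs. weak tests. All probabilities below are
# made-up numbers chosen purely for illustration.

def severity_summary(p_pass_if_true, p_pass_if_false):
    """Summarize a test by how each kind of idea fares under it."""
    return {
        "correct idea passes": p_pass_if_true,
        "incorrect idea fails": 1 - p_pass_if_false,
    }

# A vague qualitative test ("Y varies with X") is easy for even an
# incorrect idea to pass by luck:
weak = severity_summary(p_pass_if_true=0.95, p_pass_if_false=0.60)

# A precise quantitative prediction is hard to pass by luck, so an
# incorrect idea usually fails:
severe = severity_summary(p_pass_if_true=0.90, p_pass_if_false=0.05)

print("weak test:  ", weak)
print("severe test:", severe)
```

Both tests let correct ideas through at similar rates; what makes the second one severe is that incorrect ideas rarely sneak through.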
As the linked opinion piece you provided suggests, severity is about removing wiggle room and setting the bar high. Those are the common threads running through all four of Brian’s bullets. For instance, testing several predictions cuts down your wiggle room; you can’t easily explain away several incorrect predictions post hoc. Testing several predictions also raises the bar. It’s easy for a false hypothesis to get lucky and make one correct prediction, harder for it to get lucky and make several correct predictions. (Aside: there are tricky conceptual issues here about how to count separate predictions and about interdependence of different predictions…) And this is why that piece you linked to is right to complain about vague theory. If there’s room for argument about what exactly a theory predicts or what counts as a test of it, that’s not a testable theory, full stop. There’s too much wiggle room. That’s why the textbook version of the hump-backed model isn’t testable.
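The “harder to get lucky” point is just arithmetic. As a toy example (assuming, hypothetically, a 50% chance of luck on any one qualitative prediction and independence between predictions, the tricky issues in the aside notwithstanding):

```python
# Toy arithmetic: probability that a false hypothesis "gets lucky" and
# passes k independent qualitative predictions, assuming (hypothetically)
# a 50% chance of passing any single one by luck.
p_lucky_once = 0.5

for k in range(1, 6):
    p_lucky_all = p_lucky_once ** k
    print(f"passes all {k} prediction(s) by luck: {p_lucky_all:.1%}")
```

With one prediction a false hypothesis gets lucky half the time; with four independent predictions, only about 6% of the time. The bar rises quickly.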
I also think we can be more like physicists in the sense of learning from error. A good hypothesis test doesn’t just reject a hypothesis as false–it’s also informative about why and how the hypothesis is false, and so points you in the direction of the truth. That’s another common thread linking all four of Brian’s bullets. Testing multiple predictions is better than testing one because which predictions fail should tell you something about why your hypothesis is false. Testing multiple theories simultaneously is better than testing just one because the pattern of successful and failed predictions across all theories should help you triangulate the truth. Etc.
I also think that testing assumptions is complementary to testing predictions, and is an underused way to increase severity in ecology (see also). Testing assumptions often tells you why your prediction held or didn’t hold. That matters a lot in ecology because when you’re only testing one or a couple of non-quantitative predictions, it’s very easy for the prediction(s) to “get lucky” and hold for the wrong reasons. Testing assumptions also helps you avoid weak or logically-invalid predictions by forcing you to pay attention to the basis of those predictions. A laser-like focus on testing predictions/hypotheses sometimes leads ecologists to be too easily satisfied with any ol’ hypothesis, I think. As if the important thing was having something to test, never mind if it’s worth testing. Seriously, if all you care about is having predictions/hypotheses, you might as well get them from a Ouija board. Finally, testing assumptions often obliges you to consider many different lines of evidence or types of information. One important source of non-severe tests in ecology is when people focus too narrowly on just one line of evidence. One great thing about the recent debate on limits to continental-scale species richness is that both sides considered relevant evidence from small-scale studies. That’s useful–even essential, I’d say–because alternative hypotheses about the limits to continental-scale richness make assumptions and predictions about small-scale phenomena.
One way to get more quantitative predictions in ecology, and thus more severe tests, is to do more system-specific case studies. Note that this wouldn’t mean giving up on any hope of generality in ecology, though it might mean broadening or redefining what we mean by “generality”. I think one important cause of non-severe tests in ecology is ecologists overreaching for generality. An over-keen desire for generality sometimes causes us to frame our hypotheses/predictions more vaguely and qualitatively than we otherwise would.
Finally, a big reason to care about lack of severe tests in ecology is because we want the field to move forward. We don’t want zombie ideas wandering around, continuing to eat people’s brains. But people naturally get invested in their ideas, and so it’s worth thinking about ways to ensure that the field as a whole moves forward even if (some) individuals don’t. For instance, maybe sometimes trying to slay a zombie idea–trying to get everyone to reject it–just has the effect of reviving or prolonging interest in the idea. Frontal assaults on zombie ideas attract attention to those ideas, handing their proponents a ready-made argument for continuing to pursue them (“The intermediate disturbance hypothesis remains the subject of ongoing debate…”), and may come off as personal attacks that just cause proponents to dig in. Maybe sometimes the way to slay a zombie idea is to move past it, letting it fade into a ghost.
p.s. Just so there are no misunderstandings: no, I don’t think that ecological ideas can be neatly divided into true and false ones, because ideas can be correct in some respects and incorrect in others. Yes, I recognize the importance of approximations. Yes, I’m well aware of contingency–that X might be the case at one place or time but not another. Etc. Indeed, I think that the falsehood of our ideas is often a feature, not a bug. In order to learn from error, you have to have errors! Finally, no, I don’t think truth and falsehood are the only important properties of ideas. Ideas can also be creative, fruitful, etc.–though I also think it’s difficult to ascribe those other properties to ideas, as opposed to the scientists pursuing them.