Platt (1964 Science) is a classic practical statement of philosophy of science by a scientist. Briefly, Platt argues that some fields of science progress faster than others, and that this is neither an accident nor attributable to variation in intrinsic skill or brilliance among scientists in different fields. Rather, he says that workers in some fields routinely use a method that more or less guarantees progress: strong inference. They line up a bunch of competing alternative hypotheses, and then conduct the decisive observations and experiments to distinguish among those hypotheses. In the end, they either identify the correct hypothesis, or know they have to go back to the drawing board and come up with some new hypotheses.
At least according to one small sample, ecologists mostly don’t do strong inference.* Why not?
tl;dr: I think Brian’s wrong and that the usual excuse for not doing strong inference in ecology is mostly bogus. I think ecologists could, and should, do a lot more strong inference than they do.
The usual excuse for not doing strong inference in ecology is that ecology is too noisy and complicated. Platt's exemplar was molecular biology, whose practitioners work with highly-controlled, low-noise systems in the lab, making decisive experiments feasible. And the alternative hypotheses of interest in molecular biology at the time were mutually exclusive, and either right or wrong. Ecologists have to deal with many sources of variation, including but not limited to massive sampling error. Our alternative hypotheses often aren't mutually exclusive and often are matters of degree. And experiments are infeasible at some spatial and temporal scales. That's why Quinn and Dunham (1983), and Brian, say you can't do strong inference in ecology, except perhaps in some unusual special cases.
That all sounds very plausible. It’s also wrong. As evidenced by the many, many ecologists who’ve done strong inference. Way too many for them to all be unusual special cases with no lessons to offer the rest of the field. Just off the top of my head, supplemented with one casual Web of Science search:
- McCauley et al. 1999 is the culmination of a series of experiments by Ed McCauley and colleagues to nail down the explanation for why Daphnia in nature don't exhibit the paradox of enrichment. As Meg will be happy to confirm to you if you doubt it, Daphnia dynamics in nature are noisy. And the five explanations for the absence of the paradox of enrichment in natural Daphnia dynamics aren't mutually exclusive. And yet, it was still perfectly possible to do a decisive series of experiments testing each of them in turn.
- Kendall et al. 1999 show how to do strong inference about the causes of population cycles, at least in well-studied species.
- Schluter and McPhail 1992 developed a now-standard checklist of assumptions and predictions that need to be tested in order to demonstrate character displacement and rule out alternative possibilities such as divergence in allopatry followed by secondary contact. Several field systems have now been shown to tick every box on the list. The Schluter and McPhail checklist is a particularly telling counter-example to the claim that you can’t do strong inference in ecology because ecology is “noisy”. Ruling out “chance” or “noise” is the first item on Schluter & McPhail’s checklist.
- Simons 2011 develops a checklist for what you need to show, and what alternative hypotheses you need to rule out, to demonstrate bet hedging. As with character displacement, a few studies tick every box on the list, though most don’t. And if you say that it’s only possible to develop checklists and get others to use them after a long history of failed attempts to test the ideas in question, well, so what? Saying that “we can do strong inference in ecology, it just takes a while to figure out how to do it in any particular case, and then get people to actually do it” is totally different from saying “we can’t do strong inference in ecology”. I’m fine with the fact that, when ecologists first start exploring some big question, they may not be in a position to do strong inference–just so long as we work our way up to doing strong inference eventually.
- Ford and Goheen 2015 call for studies of trophic cascades involving large carnivores to be based on strong inference, and explain in detail how this could be done (Adam Ford’s done it himself in his own work). Even though large carnivores are infamously difficult to study in controlled replicated experiments at the relevant spatial and temporal scales, and even though the alternative hypotheses aren’t mutually exclusive.
- Downes 2010 does the same for studies of effects of stressors in streams. Notably, she uses the noisiness and complexity of stream systems as an argument for strong inference, not an argument against it.
- O'Connor et al. 2015 reviewed studies of climate change impacts and found that only a minority stated prior expectations and tested alternative hypotheses about drivers of change. But the fact that some did shows that strong inference about climate change impacts is possible. The failure of many climate impact studies to use strong inference has nothing to do with the inherent noisiness or complexity of the problem.
- Similarly, Dochtermann and Jenkins 2011 note that behavioral ecologists have often tested multiple alternative hypotheses, and suggest ways for them to do so more often and more effectively.
- Many recent Mercer Award winning papers test the hypothesis of interest from all angles, powerfully combining different research approaches and lines of evidence to show that the hypothesis passes a severe test (meaning: a test that a correct hypothesis would pass with high probability, and an incorrect one would fail with high probability). Which seems to me to be close enough to strong inference for government work.
- Similarly, in a guest post here, Britt Koskella talked about how she combined microcosm experiments and other lines of observational and experimental evidence to cut through the complexity of nature and test alternative hypotheses about parasite-host coevolution.
- Krebs’ informal survey, linked to above, found that 22% of papers in the first two issues of Journal of Animal Ecology in 2015 had explicit hypotheses and alternatives. Okay, you’d have to look more closely at those papers to see if they actually were successful implementations of strong inference. But surely 22% is too high a fraction to just dismiss strong inference as obviously impossible in ecology.
- Other examples that I’m sure must exist.🙂
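To make the "severe test" standard mentioned above more concrete, here's a toy calculation (the error rates and the function name below are my own hypothetical choices, not from Platt or any of the cited papers). It uses Bayes' rule to show how much a single pass of a severe test, versus a weak one, shifts your belief in a hypothesis:

```python
# Toy illustration of a "severe test": one a correct hypothesis would pass
# with high probability, and an incorrect one would fail with high probability.
# All numbers below are hypothetical, chosen only for illustration.

def posterior_after_pass(prior, p_pass_if_true=0.95, p_pass_if_false=0.05):
    """Bayes' rule: probability the hypothesis is true, given that it passed."""
    num = p_pass_if_true * prior
    return num / (num + p_pass_if_false * (1 - prior))

# Starting from even odds, one severe test (pass rates 0.95 vs. 0.05)
# is nearly decisive:
severe = posterior_after_pass(0.5)              # 0.95

# A weak test, which an incorrect hypothesis would also usually pass
# (0.95 vs. 0.80), barely moves your belief:
weak = posterior_after_pass(0.5, 0.95, 0.80)    # ~0.54
```

The point of the sketch: a test that an incorrect hypothesis would probably pass anyway tells you almost nothing, no matter how reliably correct hypotheses pass it. Severity comes from the gap between the two pass rates.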
My point isn’t that strong inference is the infallible recipe for progress Platt made it out to be–it’s not. But the notion that ecology is somehow not amenable to strong inference is just false.
So if ecologists could do more strong inference, why don’t they? I don’t know, but here are some hypotheses:
- Strong inference can be hard. Simons 2011 notes that the studies of bet hedging that only tick some of the boxes almost invariably tick the easiest boxes. The same is true for studies of character displacement.
- Inertia. If there’s an established way of studying X, it’s quite likely that lots of people will keep following that approach, even if it doesn’t involve strong inference. It’s that sort of inertia that those review/perspectives pieces arguing for strong inference are pushing back against.
- Self-fulfilling prophecy. If you think that the world is really noisy and complicated, so that you don’t expect strong inference to work, maybe you never try it, or expect anyone else to try it.
- The oft-remarked-on tendency of many scientists to conflate statistical null hypothesis tests with tests of substantive scientific hypotheses. Some people think they already have alternative hypotheses–the statistical null hypothesis and the statistical alternative hypothesis.
- An understandable but overly-narrow interpretation of strong inference as nothing but replicated, controlled, manipulative experiments. Ok, that was Platt’s original formulation. But I think it’s more useful to define strong inference more broadly, as I have in this post. The point is to check your hypotheses from every angle (both assumptions and predictions) so as to subject them to severe tests.
- Conversely, some of it is failure to recognize cases in which experiments are perfectly possible and would be really useful. Too many ecologists mistakenly think that, because the phenomena they're interested in predicting or explaining occur on spatial or temporal scales at which experimentation is impossible, relevant experiments are impossible. For instance, regarding the causes of local-regional richness relationships. Just because you're interested in some "large scale" or "long term" phenomenon does not mean that small scale, short term experiments are uninformative! On the contrary, they're often hugely informative, since many large-scale, long-term phenomena represent the cumulative outcome of small-scale, short-term events and processes that happen everywhere, all the time. Always remember: macroecology is community ecology all the way down. Yes, the large-scale, long-term implications of small-scale, short-term processes can be difficult to suss out–but you're voluntarily discarding evidence if you don't try. (Aside: in researching this post, I was struck by how many review/perspectives pieces on strong inference in various areas of ecology make this same point: that ecologists are missing opportunities to do strong inference by not drawing on research approaches and lines of evidence they could be drawing on.)
- Unwillingness to eliminate seemingly-intractable complexity by asking higher-level questions. Nobody in evolutionary biology complains that the stuff they study is intractably noisy and complicated. Not because evolution is inherently any “simpler” than ecology, but because evolutionary biologists ask “higher level” questions than ecologists typically do.
Entertaining refutation from Brian in 3…2…🙂
p.s. Just to be clear: I think ecologists could do more strong inference than they do, but I don’t think 100% of ecology could or should be strong inference. There’s more to science than inference about hypotheses.
*It would be very interesting to enlarge this sample and write a paper about what you found. That paper would definitely have a shot in a top journal. Grad students: instead of participating in a conventional reading group next term, rope in some of your friends and do this instead as a side project! You’ll still be reading a lot, but you’ll also get a paper out of it.