Here’s a recent blog post at Backreaction, pushing back against the arguments in high energy physics for a new, extremely expensive particle collider. I was most interested in the point that several of the purported scientific goals of a new collider could be achieved both more effectively and more cheaply with other sorts of equipment. For instance, if your goal is to look for proton decay, you can do that more effectively and cheaply by monitoring large tanks of water.
I’m not a particle physicist, so obviously I’m totally unqualified to evaluate these arguments. But I find them interesting to read anyway, because you don’t see these sorts of arguments very often in ecology. Indeed, they’re so rare that when they happen they become famous historical markers that get remembered for decades. Think for instance of the famous November 1983 issue of American Naturalist debating different approaches for studying interspecific competition. In ecology, it’s common for authors to argue for their own preferred approach to studying X. But I feel like it’s rare for authors to compare the pros and cons of different approaches. It’s commonly done for technical statistical debates such as data transformation vs. generalized linear models. And it’s sometimes done in review papers on non-statistical topics. But even there, it seems like reviews in ecology focus more on reconciling the results from different research approaches than on evaluating the relative merits of those approaches.
Years ago, I complained about this in the context of using local-regional richness relationships to infer whether local species interactions limit local community membership. If that’s what you want to infer, well, in many systems you can just test it directly by experimentally adding species not currently present, perhaps also crossing the addition with some manipulation that reduces interspecific competition from resident species. See, e.g., Shurin (2000) for lake zooplankton, or the many, many terrestrial plant seed addition experiments and transplant experiments. Why use an indirect inference that depends on a lot of dubious and difficult-to-check background assumptions when you can just do a direct, straightforward experiment?* That’s just the first example that came to mind. It seems like there are many contexts in ecology in which we ought to be debating the relative merits of alternative research approaches, but aren’t. Does anyone else feel that way? Or is it just me? (wouldn’t be the first time it’s just me…)
*And no, experiments aren’t always the best approach. My chosen example is just that–one example. The point of this post is not to argue that experiments are always and everywhere the Best Way to do ecology, because that’s not true and I don’t think that. I do think that, in general, scientists should prefer more direct approaches over less direct ones. As philosopher of science William Wimsatt points out, the more background assumptions and intermediate logical steps on which a conclusion depends, the less reliable it is, all else being equal.