Are there any examples of a single question/method/approach taking over an entire scholarly field, to the field’s detriment?

Recently, I was amused to read opponents of randomized experiments in development economics complaining that randomized experiments are “crowding out” other approaches. That accusation turns out to be simply false if you tally up what sorts of papers journals actually publish.* The fraction of development economics papers reporting the results of randomized experiments is growing, but it remains a modest minority of all papers.

I found this amusing because the same groundless argument gets used in ecology all the time. There are people who think (falsely) that meta-analyses are crowding out other sorts of ecology papers. There are people who think (falsely) that quantitative ecology faculty positions are crowding out other sorts of ecology faculty positions. Etc. Somehow, it’s comforting to learn that other fields have the same silly fights ecology does. I’m half-tempted to generalize from these examples, and propose Jeremy’s Law of Complaining About New Things: everybody who doesn’t like [new thing] just reflexively complains that [new thing] is crowding out [old thing].**

Ok, snark aside, here’s a serious and I think interesting question: are there any examples of a particular question/approach/method/etc. completely taking over an entire scholarly field (or reasonably large subfield)? To the point where you can’t expect to have a career in that field, or publish in that field, unless you work on that question, use that approach, etc.? And in the cases where this has happened, are there any in which it later became clear that the takeover was a bad thing? That it would’ve been better, in retrospect, for the field to maintain a greater diversity of questions/approaches/methods/whatever?

I ask because I suspect that such takeovers are fairly rare, and that when they do happen they usually happen for good reasons. So whenever somebody claims that “[thing] is taking over my field, crowding out [other things], and that’s bad”, you should have a strong prior that both of those claims are false. My claim here is not that every little “pendulum swing” in a field’s questions/methods/approaches is always an improvement (it’s not!). I’m merely claiming that (i) it’s rare for pendulum swings to go so far that the pendulum never swings back, and (ii) the rare pendulum swings that are never reversed are mostly good things.***

For instance, a long time ago it used to be the case that you could have a career in ecology, and publish ecology papers, without knowing or using any statistics whatsoever. That’s more or less impossible now, at least in the countries with which I’m familiar. So “statistics” is a set of methods that has more or less completely taken over ecology. But without wanting to claim that ecologists’ collective statistical practices are perfect (nothing’s perfect!), I’d say the statistics takeover was a good thing for ecology on balance.

There are many examples of inarguable methodological advances taking over entire fields. That’s why you can no longer, say, manually sequence DNA. But it’s more interesting to think about other sorts of takeovers.

Years ago, Lee Smolin argued that string theory had taken over fundamental physics, to the detriment of progress in that subfield, because other equally promising theories, and the people pursuing them, were crowded out. I don’t know enough to judge whether Smolin was correct.

I’ve read that “deconstruction” took over many US English departments back in the ’80s and ’90s, to the ultimate detriment of that field. But I don’t know enough to judge whether that’s true or whether, e.g., it was never a complete takeover or only a takeover of certain prominent departments.

What do you think? Do you agree with me that most complaints that a field is being “taken over” by a particular question/method/approach can be dismissed out of hand unless supported by very strong evidence and arguments? Or am I overgeneralizing from the examples that happen to come to my mind? Looking forward to learning from your comments.

*And if you say that any increase in the frequency of any question/method/approach implies that other questions/methods/approaches are being “crowded out” to at least some small extent, you’re saying that some question/method/approach or other is always being “crowded out”. Well, unless the field remains exactly as it is now, forever! Personally, I don’t think “crowding out” should be treated as a synonym of “any and all change in a field’s questions/methods/approaches.”

**Not actually a law, there are too many exceptions.

***I suppose you could argue that the only reason most pendulum swings eventually swing back is because of people complaining about getting crowded out. But that argument vastly overrates the influence of complaints, especially complaints about popular things. For instance, does anyone think that Lindenmayer & Likens’ complaints about meta-analysis in ecology have had any effect whatsoever on the popularity of meta-analysis?

27 thoughts on “Are there any examples of a single question/method/approach taking over an entire scholarly field, to the field’s detriment?”

  1. I understand from a colleague that, on Twitter, Jacquelyn Gill suggested as an example the research repeatedly debunking the vaccine-autism link. I agree that it would’ve been better for science if money and effort hadn’t been wasted studying something that was already settled and that there was no scientific reason to revisit. But I don’t know enough to say whether the entire field of vaccine research got sidetracked debunking the vaccine-autism link.

    Jacquelyn’s proposed example is also different from the sort of thing I had in mind when I wrote the post, in that it was driven from outside science. Thanks to one crank (Andrew Wakefield), a segment of the general public got worried for no good reason, and in response a bunch of scientific research effort was misdirected. When I wrote the post, I was wondering about cases driven from within science itself.

  2. Via Twitter, someone pointed to this modeling paper, “The natural selection of bad science”:
    https://royalsocietypublishing.org/doi/full/10.1098/rsos.160384
    To which, yes, that’s the sort of thing I’m thinking of. But the paper is very speculative. The proposed model of science is a toy model that (to my eyes) doesn’t really describe the incentives scientists face in most fields, even approximately. And the empirical evidence offered doesn’t address the model’s assumptions at all. There are lots of reasons why statistical power of behavioral studies hasn’t increased over time, besides the speculative hypothesis proposed in the paper.

      • Going to disagree with you on that one, Peter (if I’m understanding you correctly; apologies if I’m not). Modelling simplified worlds definitely hasn’t taken over ecology!

      • Modelling certainly hasn’t taken over the whole of ecology (yet?), but it has established a solid presence in certain areas of it: population estimates for large vertebrates, for example. Validating methods by running simulations rather than against real-world data is a symptom.

      • “Modelling certainly hasn’t taken over the whole of ecology (yet?)”

        Are you suggesting it might do so eventually?

        “Validating methods by running simulations rather than against real-world data is a symptom.”

        Well, you can’t check whether a method recovers the correct answer unless you know for sure what the correct answer is. There are a few contexts where you can validate methods with real-world data from very well-studied systems. Think of Allison Barner’s excellent recent work showing that various observation-based methods fail to correctly infer species interactions (https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1002/ecy.2133). Or further back, think of the work of Bruce Kendall and colleagues, validating methods for inferring the causes of population cycles using data from experimental laboratory populations for which the causes of the cycles were well understood. But I think those sorts of contexts are fairly rare in ecology. So yes, validating a method on simulations isn’t perfect: real-world data might be importantly different from simulated data in some way. But validation on simulated data is better than no validation at all, and except in rare cases, “no validation at all” is the only alternative to validation on simulated data. Often, the best we can do in ecology is to draw tentative conclusions from a method that’s been validated in some limited set of contexts.
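
        To make that logic concrete, here is a minimal sketch of what validating an estimator on simulated data looks like. This is my own illustration, not anything from the thread; the growth rate, noise level, and estimator are all made-up assumptions. The idea: simulate populations with a known growth rate, apply the estimator, and measure its bias against the truth you built in.

        ```python
        # Hypothetical validation run: simulate population trajectories with a
        # KNOWN per-capita growth rate, estimate it back, and measure the bias.
        import numpy as np

        rng = np.random.default_rng(42)
        true_r = 0.3                 # the built-in "correct answer"
        n_reps, n_years = 1000, 20

        estimates = []
        for _ in range(n_reps):
            # Exponential growth on the log scale, with lognormal process noise.
            increments = true_r + rng.normal(0.0, 0.2, n_years - 1)
            log_n = np.cumsum(np.concatenate(([np.log(10.0)], increments)))
            # Estimator under test: slope of log abundance vs. time.
            estimates.append(np.polyfit(np.arange(n_years), log_n, 1)[0])

        print(f"true r = {true_r}, mean estimate = {np.mean(estimates):.3f}, "
              f"bias = {np.mean(estimates) - true_r:+.3f}")
        ```

        With real field data there is no `true_r` to compare the estimates against, which is exactly why, outside the rare well-studied systems mentioned above, simulation is usually the only validation option available.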

      • Modelling will never take over the whole of ecology, but it will become a larger and larger part of it. It is attractive to managers and funders because it is zero risk and doesn’t involve any expensive hardware or field trips. It is attractive to a new generation of tech-savvy students who grew up in cities, and it is attractive to policy makers because with the appropriate inputs it will provide the desired outputs.

  3. The analytic tradition in philosophy is more or less completely dominant in British/American/Australian philosophy, with a few notable holdouts (e.g. Notre Dame). People like Russell, Popper, Quine, Searle are in this tradition, and hold a narrow view of what philosophy should be about (working out problems in language, being a “handmaid to science” etc.) vs. people in the continental tradition who think philosophy is about what it means to live a good life. Some people think that the dominance of analytic philosophy has greatly impoverished the field, while others think that this change has made philosophy relevant after centuries of circular arguments and debates about meaningless concepts. Critics of the analytic tradition have definitely been pushed out into sociology, comparative lit, religious studies departments, or have moved to the European continent, but they’re still around.

    I think the move to cognitive behavioral therapy (away from Freud) has been somewhat complete in psychotherapy, but I don’t know enough to say if this is detrimental. But there are still psychoanalysts out there.

    I think it will be hard to come up with examples of fields that became completely dominated by an approach, to its detriment. There are probably always refugia of heterodox ideas (crusty old full professors, people in intersecting disciplines, grad students not yet wedded to an approach) that will swoop in and claim the mantle of a discipline when the dominant approach reaches a dead end… The department names endure, but the subject they teach may be completely different. From my (limited) view, this is what happened with lit crit circa 2000.

    • Ooh, analytical philosophy in the Anglosphere is an interesting potential example!

      The move to cognitive behavioral therapy and away from Freud strikes me as a takeover that should be celebrated rather than lamented, but I don’t really know anything about it.

      “I think it will be hard to come up with examples of fields that became completely dominated by an approach, to its detriment. There are probably always refugia of heterodox ideas.”

      Good point. Another example of this might be Keynesian macroeconomics, which got pushed out of most leading academic economics departments in the US for many years but lived on in central bank research departments.

  4. As a correspondent just reminded me, someone’s likely to suggest expensive, purportedly undesirable infrastructure projects as an example. Like NEON in the US, about which I believe it was Bob Paine who complained, “$454 million and no hypotheses.” Or further back, the IBP, or in the UK the Ecotron (see Lawton 1996 on the Ecotron: https://esajournals.onlinelibrary.wiley.com/doi/10.2307/2265488).

    The counter-argument to complaints about these projects “crowding out” other sorts of ecological research is that these projects received “new” money that would not have gone to ecology in the absence of a proposal for a big, bold infrastructure project. Based on what I know (and confessing I’m not an expert), I think that counter-argument is pretty strong in the case of both the Ecotron and NEON. And at least in the Ecotron case, there’s the additional counter-argument that the Ecotron produced a lot of excellent science. So even if it did “crowd out” other sorts of research, well, that looks like a good trade. Unless you’re willing to assume (implausibly) that whatever research didn’t get funded because the Ecotron was funded would’ve been even *better* than the Ecotron work.

    Ok, at some level there’s a zero-sum game between funding NEON (or the Ecotron, or whatever) and funding *something* else. Military equipment or bridge repair or a tax cut or whatever. But I think it’s kind of pointless to complain that government spending on X is crowding out some other, unspecified and perhaps totally unrelated thing that the money could’ve been spent on instead. Yes, absolutely, opportunity costs are ever-present. But complaining that we spent money on X instead of on some unspecified not-X, or instead of on some totally unrelated thing Y, is just a way of saying “I don’t like X” or “I like Y better than X”.

    • Those infrastructure examples could apply to fields that are very dependent on a relatively small number of expensive, shared projects (high-energy physics and astronomy are the two most obvious examples). As with your other examples, I have no idea what competing methods/questions/projects were “crowded out” by the winning projects or whether those were ultimately good decisions.

      More generally, I think “Jeremy’s Law” is a manifestation of long-held grievances that sub-disciplines hold against one another over real or perceived insinuations that one is inferior to the other. The divide between quantitative and qualitative methodologists in many social sciences seems to produce this type of dialogue.

      • Yes, to my outsider’s eyes, qualitative vs. quantitative methods in the social sciences seems like a situation where both sides are always complaining about being crowded out!

        Which, in seriousness, is a recipe for toxicity. If there are two sides in a debate that both perceive as zero-sum, and both feel like they’re losing, you get a vicious cycle. I feel like there are aspects of US politics that are becoming like that. Both sides are increasingly desperate because both sides feel like they’re losing. (I say that as someone who’s from the US, who is on one of the two sides, and who often feels like his side is losing.)

  5. Hi Jeremy,
    thanks for pointing out the ROS paper. Even though the model may not reflect reality, I must say that I enjoy the approach. A while ago you discussed generalisations and how we can hold different views on them. The ROS paper is an attempt in this direction, instead of just hand-waving. By making the assumptions explicit in the model, it also becomes possible to criticize them. That is not possible when someone uses only hand-waving arguments (which is why I often prefer mathematical models over verbal ones).

    On the main topic, you asked for cases where one approach (or whatever) takes over a field. Having been around the field for a while, I have certainly seen bandwagons come and go. Often something interesting comes out at the end, but one may at times wonder whether the field was also diverted in a less productive direction. One example that springs to my mind is the Chitty hypothesis, where at times it was difficult (or even impossible) to publish alternative views. But perhaps that affected the later discussion about interactions between ecological and evolutionary processes. Another case could be BDEF (biodiversity-ecosystem function) experiments, which dominated ecology for a while. There were some interesting outcomes, but one also wondered whether it was mainly a small-scale problem. I suppose we cannot see the true value of an approach until afterwards, but I have wondered whether one of your favourites, globally distributed experiments, could be one of those methods that we may reevaluate in the future (though they are certainly good for careers at the moment).

    • Thanks Peter, I agree with your comments.

      Absolutely, there are trends and bandwagons in ecology (and presumably every field). And some of them we’ll all end up regretting in the long run. I just don’t feel like most of those trends ever become so big that anyone who doesn’t participate in them would be justified in complaining about being “crowded out”. For instance, I don’t think the popularity of BDEF work has made it appreciably more difficult for any ecologist who doesn’t work on BDEF to get a job, have a career, or publish papers.

  6. Pre-statistical ubiquity, what were the most commonly employed approaches in ecology? Observational studies or other kinds of qualitative things? Could it be said that these sorts of approaches are not really taught or studied much anymore, and that for at least some questions they may be more appropriate than the contemporary methods taught to new researchers in the field? I’m really just taking a shot in the dark here, but I can see graduate education being the primary way that something “crowds out” other things. I know there are many universities where the majority of the theoretical physicists are string theorists, and hence any student at such a university will be far more likely to also specialize in that area, to the detriment of studying other kinds of theoretical physics.

  7. Not exactly the type of situation you describe (perhaps the early stage of it?), but there used to be a time when one could describe species without revising the previous material. Today (although still possible) it isn’t seen as a cautious practice, and it sometimes leads to debate within the scientific community.

    Another example that I think describes this phenomenon with higher precision is the emergence of Cladistics. There used to be some debate about whether it was better than the phenetic approach, but nowadays it rules over every other approach.

    • “Another example that I think describes this phenomenon with higher precision is the emergence of Cladistics. ”

      Heh, a draft version of this post used that exact example, as an example of a takeover that was a good thing on balance! 🙂

  8. There are a few very fundamental ideas about how we do research (rather than what we research) that I think have taken hold to the detriment of ecology (and perhaps other disciplines).

    1. Statistical significance at 0.05.
    2. Followed closely by the opposite claim: that p-values tell us nothing at all. The debate sends us to our corners in a way that is counter-productive.
    3. The emphasis on novelty rather than repeating experiments or studies.
    4. The idea that ‘searching’ for general laws is a methodological issue. The generalizability of ecological models, theories, or hypotheses is an empirical question, not a methodological one.
    5. That hypothesis testing (null and otherwise) is how science must be done.

  9. It actually goes back to Tony Ives’ question at ESA a few years ago: “Should ecology be about general laws?” I have never really understood how this is a question that people could or should reasonably vote on. Ecology should be about general laws to the extent that they exist, and not about general laws to the extent that they don’t. This has always struck me as a strictly empirical question, and suggesting that one could or should choose a path on it strikes me as odd.
