Novelty and conceptual fragmentation in ecology: A self-reinforcing spiral?

Note from Jeremy: this is a guest post from Peter Adler.

*************************

In my view, the current culture of ecology suffers from an unhealthy obsession with novelty and a problem with conceptual fragmentation. I suspect these are related problems linked in a positive feedback cycle.

First, to illustrate what I consider our unhealthy obsession with novelty, here is an excerpt from a recent review of one of my manuscripts: “…the only negative thing I can say about this paper is that it comes across as rather confirmatory in nature. It is absolutely solid science and highly interesting and important but the writing does not punch you with novelty.” Our paper was eventually accepted after we made a stronger case for its novelty, so this isn’t sour grapes. I’m quoting the review because I think it perfectly captures the current priorities of our field: to get a paper into a top-tier journal, or a grant funded at NSF, it’s not enough for the research to be solid, interesting and important. It also has to be novel.

To give a more specific example, in one of my research areas, coexistence, it feels easier to publish theoretical “what if?” papers about intransitive loops and higher-order interactions than empirical papers quantifying the strength of the pairwise interactions we’ve thought about for decades, but still struggle to rigorously measure. Similarly, I would bet that a proposal about a sexy new concept/interaction/process that has a small chance of being an important driver of dynamics has a better chance of being funded than a proposal to do the tedious work of carefully measuring effects of the long-recognized drivers we know are important (e.g., drought in water-limited ecosystems). As my colleague Steve Ellner likes to say, “NSF is interested in questions not answers.”

The conceptual fragmentation problem was the motivation for Mark Vellend’s recent book, and leads to papers listing the 100 most important research questions in ecology. Can we really expect our poorly funded discipline to make progress on so many questions? Having too many priorities is effectively the same as having no priorities.

The Long-Term Ecological Research network provides another case in point. Despite being a coordinated, national program with a fairly applied mission statement (“to provide the scientific community, policy makers, and society with the knowledge and predictive understanding necessary to conserve, protect, and manage the nation’s ecosystems, their biodiversity, and the services they provide”), every LTER site comes up with its own conceptual framework, and collectively they span much of the bewildering variety of ecological research. Contrast that with NSF’s Critical Zone Observatories, which have an even broader mission (to “discover how Earth’s living skin is structured, evolves, and provides critical functions that sustain life”), but, in my very limited experience, all seem to focus on similar questions about carbon and water cycling and don’t have a problem spending big money to get a better estimate of one important flux.

Why do I think novelty and conceptual fragmentation are related? Pressure to demonstrate novelty creates a perverse incentive to minimize connections with previous work. In Vellend’s language, the incentive is to emphasize the uniqueness of low-level concepts and individual case studies, rather than the unifying commonalities of high-level concepts. This is a recipe for a discipline that cannot agree on its priorities. With everyone working to promote their own pet idea, rallying around a few grand challenges becomes impossible.

At the same time, if we can’t agree on what is important, then by default novelty becomes the key criterion for evaluating papers and proposals. At least we can agree on what is new, right? It’s much easier to recommend rejection because a paper isn’t novel than to make a difficult and seemingly more value-laden argument about why it is not important.

The irony of this predicament is that collective action itself becomes a novelty. I’m thinking of the Nutrient Network and similar distributed research efforts. When dozens of ecologists agree to pursue a few core questions using consistent protocols, you get novel (and important!) results that get published in top journals. Choosing and agreeing on those core questions is the hard part; agreeing on consistent methodology is secondary.

So why do I care? After all, I’ve done fine under the status quo. But I am increasingly concerned about future funding for basic research. I’m becoming more convinced that we can’t expect society to support ecological research for knowledge’s sake alone. In the long run, our best hope of maintaining funding for basic research may be to show that it can sometimes help solve problems important to society. But an over-emphasis on novelty and our conceptual fragmentation act as barriers to collective action and make it hard to solve big, complex problems. I’m not suggesting we ignore novelty altogether–we absolutely need to keep incentives for innovation. But I do think we need to reconsider and adjust our priorities. We should publish solid, interesting and important papers in our top journals, even if they don’t punch you with novelty. The challenge in making this shift is reaching some agreement about what is “important”: what are the highest priority problems our discipline should try to solve?

26 thoughts on “Novelty and conceptual fragmentation in ecology: A self-reinforcing spiral?”

  1. Do entirely agree. Let me point also to a couple of other manifestations of “innovation syndrome”.

    1. Innovation tends to be overstated. In any given issue of most ecology journals, many of the papers will claim in Intro or Discussion to be overturning or at least questioning previous knowledge, even though to my eye the results seem more or less what might be expected. (And all the better for that — more confirmation and less overturning shows that ecology is actually making progress.)

    2. Postgrads and others just setting out on research careers must feel really disoriented — must feel as though we hardly know anything with reasonable reliability.

    Cheers

    • Mark, as an agronomist delving into ecology to see if agroecology has much of value to offer, I often “feel as though we hardly know anything with reasonable reliability.”

  2. One of the next blog posts in my queue is about my experience of how hard it is to publish conceptual ‘what if’ papers! Maybe it differs between sub-disciplines?
    I think ‘novelty’ and ‘conceptual’ are very broad terms that are interpreted differently by disciplines and individuals and are often conflated.

    • Your comment helps correct my laziness in not really defining novelty. I was definitely focused more on “conceptual novelty” than other kinds. Inventing a new way to measure an important quantity could help unify, not fragment, the field–see Brian’s comment below about measuring theoretical constants in physics.

  3. 100% agree – thanks for sharing. The fractured research landscape raises the question of reliability for me. Consider cell biology/physiology, where it takes many, many person-hours across many labs, all probing a tight question in a complex system, to get something resembling reliable knowledge. And there are many dead ends and false positives in its wake.

  4. Very nice post, thanks for this Peter. Without meaning to, you’ve provided the best pushback ever against my old post in which I explained why I don’t care what the biggest question in ecology is: http://www.oikosjournal.org/blog/why-i-dont-care-what-biggest-question-ecology

    Some scattered thoughts:

    -I wonder if it’s possible to test your hypothesis with bibliometric data?

    -One way to read your post is as an argument for unifying conceptual frameworks like the one Mark Vellend’s book stumps for. That’s #4 on my list of “roads to generality”, and I agree it’s a road more of ecology should travel: https://dynamicecology.wordpress.com/2015/06/17/the-five-roads-to-generality-in-ecology/. Unfortunately, *good* unifying conceptual frameworks (especially ones that can be expressed mathematically) are much harder to come by than arm-wavy verbal “syntheses” that aren’t very useful.

    -Do you think ecology would be better off with more centrally-coordinated research effort? Fewer individual investigator-led research programs, more NutNets (https://dynamicecology.wordpress.com/2011/10/20/thoughts-on-nutnet/) and GEMs (https://dynamicecology.wordpress.com/2013/01/16/the-road-not-taken-for-me-and-for-ecology/)?

    -Related to your argument, Mike the Mad Biologist has an old post arguing that in neuroscience and many other areas of biology, tons of money are wasted on grants to individual investigators who conduct small sample size, underpowered studies, so that the whole field ends up chasing noise. See also the example of social psychology’s replication crisis, the solution to which appears to be massive, centrally-coordinated, pre-registered replications involving many labs. Mike argues that the US federal government should quit spreading money so thin and instead spend massive amounts on a small number of high-powered “Manhattan projects”: https://mikethemadbiologist.com/2013/04/11/how-the-dominant-funding-structure-leads-to-the-decline-effect-and-othe-stats-problems/. Do you think the same argument could be made in ecology?

    -Also related to your argument: my old post arguing that ecologists should focus more on a smaller number of model systems: https://dynamicecology.wordpress.com/2012/10/18/ecologists-should-quit-making-things-hard-for-themselves-and-focus-more-on-model-systems/

    • I’m not 100% comfortable with the idea that we need to push some top-down (or bottom-up?) focus on a few big questions. That is definitely the conclusion my post points to, but I haven’t really made peace with it yet, let alone formed an opinion on how we could actually get there from where we are now. I knew someone was going to ask, “OK, so what should our high priorities be?” I don’t have an answer. I can make a good argument about why the things I study are “important,” but I’ve never tried to argue that they are more important than other things that other ecologists study! That kind of comparative exercise would make a lot of us uncomfortable, but maybe that’s the way to advance this conversation?

      Some of these comments and commenters also make me wonder if the vision of a more focused field appeals more to senior researchers. As I hinted above, it smells a little of hierarchy. But I suspect that innovation is just as likely in a conceptually structured field as in an unstructured field, and that young scientists will always find ways to innovate, no matter the rules of the game.

  5. I agree. There are definite limits to comparing physics to ecology, but in this case I think it is revealing. In physics there are two highly successful roles (career tracks really) that we don’t really value in ecology:
    1) Testing theory – the field is broadly divided between theoreticians and empirical testers. It is not uncommon to see a Nobel Prize go to both the theoretician and the tester who validated the theory. I know many empirical papers claim to test theory, but my impression is that they often make up their own theory to justify the work rather than sincerely engaging with theory and trying to directly and decisively test theory.
    2) Measuring theoretical constants with precision – while not as prestigious as theory and testing, it is considered perfectly respectable (and even admired) to figure out the tricks to measure a theoretical constant with more precision. While ecological constants may not meaningfully have 15 decimal places (e.g., they vary across taxa), it would be great to know the landscape of constants. Your example of critical zones vs. LTER is a good one. In general ecosystem ecology seems much happier with measuring numbers than the rest of ecology.

    As one example of the latter, I wanted to develop an allometry of handling times based on predator and prey body sizes. It was shocking just how little data there was out there. Handling time is a highly useful notion that is fairly straightforward to measure, but we don’t really have much of a clue about what factors cause handling time to vary across systems because, even though it is a critical parameter in both predator-prey dynamics and optimal foraging, we’ve hardly bothered to measure it seriously and repeatedly.

    • “I know many empirical papers [in ecology] claim to test theory, but my impression is that they often make up their own theory to justify the work rather than sincerely engaging with theory and trying to directly and decisively test theory.”

      Very interesting remark. It’d be a very interesting exercise to try to quantify how often ecologists do this. And maybe compare to how often, say, evolutionary biologists do it.

      “In general ecosystem ecology seems much happier with measuring numbers than the rest of ecology.”

      To which the response of the rest of ecology (including me!) is to look at ecosystem ecology projects like the IBP and NEON and say things like “$434 million dollars and no hypotheses” (as Bob Paine said of NEON).

      So one way to summarize your comments might be “How do we merge the desire of some ecologists to test theory with the desire of other ecologists to do careful descriptive measurements?”

      • I wonder if ecology pendulums back and forth between these poles (what you call “shopkeeper science” vs big dollar coordinated projects) more dramatically than other fields? I am hoping for an alternative: a field where there is some agreement on the most important problems and priorities, but great freedom in how individual investigators or larger groups attack them.

  6. I think there is likely to be fairly broad agreement on the overemphasis on novelty. But ultimately “they” (journal editors, NSF panel members) are actually us, so we are probably the cause of our own problem. That makes me wonder what the origin of the problem is, which we ought to sort out if we want to think about what we can possibly do about it. Here’s a perspective on the origin:

    What most defines a “top-tier journal” is its degree of selectivity. If the number of “solid, interesting and important” papers produced every month greatly exceeds the number journal X is willing to publish (in order to retain top-tier status), then some other criteria need to be sought out. It is less often that one reads a paper and says “huh, never thought of it that way” or “wow, amazing they got 80 people to do the same experiment”, so that (i.e., novelty) becomes a viable criterion for being highly selective. Absent other options to achieve the same end, we go with novelty. (I think this would apply even if we agree on what’s “important”; if not, it might just paraphrase the argument in the post.)

    Unrealistic solution: Boycott highly selective journals.
    Realistic solution: ___________

    A couple aspects of the problem that concern me (in addition to those already mentioned):
    (i) Science is adopting clickbait culture. New and shiny gets clicks (Altmetric, anyone?). And most of us purport to abhor clickbait culture.
    (ii) Success in passing the selective filter depends too much on lawyer-like skills of argumentation. It’s easy to imagine a study that appears “novel” from the pen of one author, and merely “solid” from another’s. This is another disadvantage for non-native English speakers.

    • Yes, absolutely, we are entirely responsible for the current state of affairs. That also means we are responsible for changing it.
      Your point about selective journals is a refinement of my “novelty as default criterion” argument. I agree that it is easier for us to recognize novelty than importance. But maybe if we had clearer priorities it wouldn’t be so hard to recognize and reward importance?

      • Definitely agree that clearer field-wide priorities would help a lot, but I’m not holding my breath for the bottom-up emergence of a shortish list. For field-wide consensus, you need many people to acknowledge that they’re not working on one of the important big questions (not going to happen), or a slow convergence of most people’s work onto a few topics (long after we’ve retired).

        Top-tier journals are of course not just about novelty, but also importance (we shouldn’t overstate things), and so there’s tremendous jockeying among researchers or groups for their topic to be considered one of the most important. And it works – at any given point in time there are definitely some topics that come to be seen as more important than others. The optimistic view is that to some degree we do identify topics of “true” broader importance (so we’re on the path you advocate); the cynical view is that we invent self-serving fads to chase novelty.

        Just to partially absolve the current “us” from all the responsibility, “us” also includes the generations of ecologists from whom we inherited the state of affairs.

  7. I also just had a paper rejected based on novelty. Here’s the direct quote: “I think that the methods and findings in this study are sound. My rationale for rejecting the paper is simply because I do not think that the findings are novel/interesting enough to warrant publication in this journal”

  8. Sorry for the long post, but Peter and I have been discussing this before so I have a lot to say.

    I think the obvious response here might be that other fields such as medicine, robotics, and materials science make a lot out of novelty. It’s a big deal when a new vaccine or antibiotic is proven effective even if an older one works. It’s a big deal when a robot can do a new task that was thought to be beyond what robots could do (playing chess, driving, etc.). It’s a big deal when materials are engineered with novel properties, or even when old materials are made from new ingredients. From my experience skimming science news and tables of contents, ecology does not emphasize novelty any more than other fields.

    But if we think more carefully about these examples, it’s not their novelty alone which is exciting. In fact, they deal with very old, very unoriginal issues. Making vaccines, programming machines to do things, making lighter/stronger materials, and so on are all old (unoriginal) desires. What’s exciting is a new solution to an old question or problem. The underlying reason we want to make vaccines, make robots, etc. has nothing to do with a love of novelty! Same with much of fundamental physics. What’s the smallest thing? How did the universe begin? These are old questions; we may get excited about a novel answer, but the questions are old and important.

    What I think I find problematic in ecology, and I think that Peter would agree, is the proliferation of novel questions and models at the expense of refining answers to old, big, important questions. For instance, what does it matter if competition is intransitive or transitive? It could matter if intransitivity promotes coexistence, or leads to multiple stable states, but these effects are interesting as they relate to very old questions about coexistence and stability. But shouldn’t we first try measuring plain vanilla pairwise competition coefficients in many more systems and see how far we get with that?

    My pessimistic perspective here is that the interest in novelty points to a lack of powerful theories. When there is widespread interest in the potential of a theory, then reviewers will welcome tests of that theory and refinement of the measurements it involves–even if these tests are unoriginal. That is, if a theory actually makes interesting predictions and if the parameters in that theory actually matter, then that theory will be tested and retested many, many times.

    But in ecology our attention span for a new theory or model seems to last only a couple of papers, and then we are on to the next thing. It’s kind of shocking, for instance, how few studies have directly measured Tilman’s R*, given that it is enshrined in textbooks. Likewise, there are only a handful of papers that directly measure niche and fitness differences despite the fact that Chesson’s 2000 Annual Review paper has been cited over 3000 times!

    Why don’t we repeatedly measure R* or repeatedly measure niche and fitness differences? 1) It could be because we have a culture valuing novelty at the expense of refining the theories. 2) It could be that it is just really hard to measure these things and few labs have the resources to do it. Or 3) it could be that there are fundamental shortcomings in many of the theories themselves and that this makes repeated tests only weakly informative. Unfortunately, I find the argument for the third case pretty persuasive. I think this is more or less the point that Marquet et al. make in their 2014 paper “On theory in ecology”.

    • “What I think I find problematic in ecology, and I think that Peter would agree, is the proliferation of novel questions and models at the expense of refining answers to old, big, important questions.”

      Hmm. Not sure that’s quite right. I don’t think ecologists have a problem relating whatever novel thing they’re working on back to old, big, important questions. At least in some loose, arm-wavy way. For instance, lots of trendy phylogenetic community ecology and functional trait ecology gets linked back by its authors to old questions about the maintenance of diversity and species coexistence.

      It’s not that ecologists ignore big old important questions in favor of new ones, or purportedly new ones. It’s that they place too much value on new angles or spins on those old questions.

  9. Sociologist Kieran Healy notes that, for a scholarly discipline to exist at all, there has to be sufficient agreement on what questions are worth asking and what constitutes a good answer. Disciplines that don’t have that don’t really exist as coherent disciplines. Think of anthropology, which is riven by basic disagreements between “physical” and “cultural” anthropologists over fundamentals.

  10. Peter: How much is the “novelty” vs “solid science” distinction you describe confounded with concerns about generality? I am guessing your paper about intransitive loops feels like it could apply to many ecological communities, while a careful measurement of a particular interaction may feel more limited in scope.
    I spent years coming up with estimates of one important flux as part of the SBC-LTER (giant kelp NPP) and then more years improving those estimates. We have learned a ton from the work, but it is hard to know how broadly our work applies. Strictly speaking we know a ton about kelp NPP at three sites on the Santa Barbara Channel. The species is found around the world but varies quite a bit morphologically. I would argue our results apply pretty broadly in southern and (maybe) central California. But Vancouver Island? Chile? Tasmania? When I improve our estimates for the Santa Barbara Channel, how much do I add to our knowledge of more distant systems? This is different from physics – refining an estimate of a physical constant at some deep decimal place may be a small improvement, but it improves the estimate for everyone everywhere.

      • Andrew, I think you are probably right that perceptions of generality confound the trade-off I was describing. But I don’t think it has to be that way; it should be possible to set priorities in our field that wouldn’t place such value on generality. I am thinking of Tony Ives’ MacArthur award lecture. If we want to be able to DO certain things–like predict responses to disturbance, or global change, or extinctions and colonizations–then we have to have system-specific empirical models. Researchers who can show how to do these highly valued things better, even in just one system, will be rewarded. In coexistence research this is the case: everyone recognizes that it is hard enough to rigorously quantify one coexistence mechanism in one community that successful attempts to do so get published in top journals. And studies that can address multiple mechanisms, or span multiple systems, do even better.

      • I hope that my old post on “many roads to generality” speaks to this. Generality in the sense Andrew identifies–basically, a statistical, meta-analytical sense of “generality”–is only one of several senses. Generality in the sense of Tony Ives’ MacArthur lecture–fruitful analogies between different case studies–is another. Mark Vellend’s sense of “generality”–a unifying theoretical umbrella or framework that subsumes various system-specific special cases–is a third. And there are at least two or three more!

        Personally, I think ecologists overrate the value of generality in the statistical/meta-analytical sense of what’s “usually” or “typically” the case. I think ecology as a whole would be better off if we all cared less about generality in this sense. But my view on this no doubt reflects the sorts of ecological questions I tend to think most about.

  11. Via Twitter: [embedded tweet]

  12. Thanks for this great post. For papers, I agree. But I think with proposals it can be the other way around: it often seems more likely that a proposal is funded if there are strong a priori indications of the results (at best with preliminary data from essentially the same study) and if it avoids any risks and intellectual challenges. However, this might not be exactly the kind of novelty Peter was talking about.
    I guess one simple reason for this could be that any risk or challenge evokes criticism from the reviewers, which leads to rejection regardless of the potential merit.

  13. Thanks for this post,

    I will be defending novelty here.

    The way I read the post is that “overselling novelty” bugs you more than novelty per se. Novelty is the opposite of applying the same scientific recipe over and over as a means of boosting publication numbers; it provides an incentive for thinking outside the box. Novelty is riskier and takes more time, so it should be recognized as such. Of course, a study is not scientifically good just because it is novel, but novelty is not to blame here.

    Novelty is also not to blame when stating that: “Pressure to demonstrate novelty creates a perverse incentive to minimize connections with previous work. In Vellend’s language, the incentive is to emphasize the uniqueness of low-level concepts and individual case studies, rather than the unifying commonalities of high-level concepts. This is a recipe for a discipline that cannot agree on its priorities. With everyone working to promote their own pet idea, rallying around a few grand challenges becomes impossible.”

    While I agree with the conclusion, I think funding is to blame. In a system where funding depends on a race for publishing and recruiting, the hand-waving syndrome is inescapable and novelty is often the only way to set oneself apart.

    Let’s take an example. Four PIs obtain $5M to study meta-community dynamics in a climate change context, or any other “grand challenge”. They have two choices: 1) to launch a mega-experiment with field assistants, postdocs, PhDs, high-tech probes, data servers, and many field sites, or 2) to provide direct funding to 50 labs (ca. $100,000 each) around the world to work out and achieve a common scientific agenda. I think scenario 1) is more likely. Why should money from a given country be used to support research conducted elsewhere? Why should I share the money if I went through the pain of writing the proposal? How important is scientific rallying in comparison to my ability to secure future funding?

    I also note that novelty is more likely in scenario 2) because of the number and diversity of collaborators involved. Conceptual fragmentation is the sign of a competitive system under “publish or perish” pressure.

    The lack of a conceptual connection with previous work has many origins for which novelty is not to blame either, starting with i) the exponentially increasing number of published papers, ii) the rejection of phenomenological mathematical models in ecology, iii) the growth of “applied sciences” and iv) a trend towards increasing “wordiness” in the older sub-disciplines.

    Regards

    -Raphaël Proulx

