On the rise of “best practices” in ecology

I procrastinated recently by doing a bit of text mining. I feel like I’ve been seeing a lot of papers describing, or purporting to describe, “best practices” in some area of ecological research. I don’t have any opinion on whether that’s a good or bad thing. It’s just a thing I’ve noticed.

But is it actually a thing? To check my anecdotal impression, I searched JSTOR for all papers in JSTOR’s “Ecology and Evolutionary Biology” category containing the phrase “best practices”. Here’s the percentage of EEB papers containing the phrase “best practices,” broken down by decade:

1900-1980: 0.001%

1981-1990: 0.008%

1991-2000: 0.017%

2001-2010: 0.287%

2011-2020: 2.013%

2021-2024: 3.51%

Yup, my anecdotal impression was correct!

Clearly, one should extrapolate these data, and conclude that, in a few decades, the EEB literature will consist of nothing but papers on “best practices.” 🙂
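(For anyone who’d rather eyeball the trend than read the numbers, here’s a minimal Python/matplotlib sketch that simply re-plots the percentages quoted above. The log scale on the y-axis is just a display choice so the near-zero early decades stay visible.)

    import matplotlib.pyplot as plt

    # Percentages of EEB papers containing "best practices", as quoted above.
    periods = ["1900-1980", "1981-1990", "1991-2000", "2001-2010", "2011-2020", "2021-2024"]
    pct = [0.001, 0.008, 0.017, 0.287, 2.013, 3.51]

    fig, ax = plt.subplots()
    ax.plot(range(len(periods)), pct, marker="o")
    ax.set_xticks(range(len(periods)))
    ax.set_xticklabels(periods, rotation=45, ha="right")
    ax.set_yscale("log")  # log scale keeps the tiny early-decade values visible
    ax.set_ylabel('% of EEB papers containing "best practices"')
    fig.tight_layout()
    plt.show()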

Presumably, the rise of “best practices” in the EEB literature reflects some combination of (i) the rise of methods papers, and (ii) trends in phrasing/rhetoric. (Or are there other possibilities?)

Re: (i), I wonder how much of the rise of “best practices” papers in EEB is demand-driven vs. supply-driven. Demand: has there been an increase in graduate students wanting methodological templates, checklists, or instruction manuals to follow? So that they can be sure they’re “doing it right,” rather than having to figure things out for themselves? Supply: Has there been an increase in authors who want to help others out with methodological advice and instructions? Or perhaps a rise in authors who are really bothered by (what they believe to be) the methodological mistakes of others, and so want to prevent those mistakes? Has there been a rise in authors who want to promote their own preferred methods as “best practice,” perhaps as part of a broader increase in authorial self-promotion? Or what?

Re: (ii), maybe what’s changed is not the number of “best practices” papers that ecologists write, it’s what they call those papers. For instance, Connell & Sousa (1983), “On the evidence needed to judge ecological stability or persistence,” easily could have been titled “Best practices for judging ecological stability or persistence.” Except it wasn’t, because the term “best practices” wasn’t yet in wide use back then.

Re: (ii), the term “best practices” originated in business management. Google Ngrams informs me that use of the term “best practices” started taking off rapidly in 1988 and peaked around 2020ish. So sometime after 2010, some ecologists and evolutionary biologists adopted a bit of trendy management jargon. Jargon that’s now started to run its course in the wider world. Which raises the question of whether, or when, use of the phrase “best practices” will run its course in ecology and evolution.

Re: (ii), perhaps the rise of the phrase “best practices” is a rhetorical move that some authors have adopted in order to discourage discussion, pushback, or change? After all, there’s no debating best practices–they’re the best, it’s right there in the name! Best practices can’t be improved upon, and any change would make them worse. And if you wanted to encourage discussion of whether your proposed “best practices” are in fact the best, presumably you’d have called them something else. “Promising practices,” or “advice,” or etc. I actually would be surprised if any EEB authors consciously adopted the phrase “best practices” with the intention of discouraging pushback. But the rhetorical effect of any term doesn’t depend solely, or even primarily, on authorial intent. Anyway, just musing out loud. I’d be interested to be pointed to any cases in EEB of pushback against proposed “best practices.”

p.s. To anyone thinking “this post would’ve been better if Stephen Heard had written it”: I agree. But “don’t write up any post idea that another blogger could write better” is not blogging best practice. 🙂

22 thoughts on “On the rise of “best practices” in ecology”

  1. I wonder if the concomitant rise and popularity of syntheses/meta-analyses may also have some influence, or is perhaps related to an increase in best practices in some way. I like best practice guides for providing concrete ways to standardize methods, reduce research bias, and increase cross-study comparability, which is important for syntheses and meta-analyses. Although I have no evidence that best practices functionally do this, I think one goal of them is to at least try!

    • Hmm, that could be part of it. Meta-analyses took off in ecology in the early ’90s, well before the rise of the term “best practices”. So for this hypothesis to work, you might have to tell a story about how ecologists needed to get a lot of meta-analyses under their belts before they started realizing the value of standardization of methods (both for individual studies, and for meta-analyses).

      And yes, agree that “best practices” papers don’t generally manage to impose the field-wide standardization that they call for. A few do–I’m thinking for instance of how all ecological meta-analyses these days include a PRISMA diagram. They include it because of papers that recommended PRISMA diagrams as best practice (though I don’t recall if those papers actually used the term “best practice”). But more often, a paper calling for everyone to adopt “best practice” X for the sake of standardization ends up reducing standardization, as in this old xkcd cartoon: https://xkcd.com/927/

  2. Software engineering papers are full of “best practices” (i.e., the authors’ opinions) and “State of the Art” (SOTA), i.e., what they did was SOTA.

    Are ecology researchers pushing the boundaries of SOTA?

    • Good questions! Now I want to do a little comparative exercise. Use text mining to look at the rise of the phrase “best practices” in different scholarly fields.

      • It’s all marketing hype.

        What is needed is a database of claimed best practices. Perhaps I am a cynic, but I’m not expecting much overlap between the claims, and I wonder how many best practices contradict each other?

      • “I wonder how many best practices contradict each other?”

        In an early draft of this post, I asked that question too. I also want to know how often purported “best practices” change. I mean, yes, as circumstances change, presumably “best practices” need to change with them. But I bet there are plenty of cases of claimed “best practices” changing too fast for the changes to be justified by changing circumstances.

      • As a marketing tool, best practices can be expected to change with fashion, or the latest technique being promoted.

        Committees might deliberate and specify best practices for a field. Might a research group do the same?

  3. I’d be curious to know if the term ‘best practices’ is applied equally to things like the design of surveys or experiments, or how to empirically estimate X (e.g., abundance, traits), as to statistical analyses. (Any impressions, Jeremy?) With zero evidence, my impression is that it is used more often for stats than other things, and sometimes it sure feels like an attempt to forestall objections, or the flipside of that – to make an objection appear like a field-wide contract (that you somehow didn’t know about) rather than an opinion.

    I wonder if an increasing supply of methods options itself enhances the demand for a definition of best practices. With a few long-standing options, I can wrap my head around them and decide. With several papers per year proposing new options for every decision, and a project in which you have 10 decisions to make, I’d sure like for someone else to have figured out which are best. Or more realistically (given inevitable tradeoffs), evaluated the contexts in which one or another method works best. If a method requires a million dollars and 5 years to implement, it might get you the best results (most accurate, precise…), but it’s not going to be the best choice for a Ph.D. student, who can do 90% as well with a cheaper, quicker alternative method.

    I agree with the previous comment on the value of standardized methods. Might there sometimes be a tradeoff between ‘best’ in terms of performance in a specific case and ‘best’ in terms of comparability with other studies? When choosing among methods, ‘most common’ practice seems like one valid criterion to use. Plant traits seem like a good example here. I can’t help but see some marketing motivation in using the word ‘best’.

    Reminds one of ‘essential’ biodiversity variables. Essential for what exactly? Or when someone tells you that ‘everyone’ thinks so-and-so, when the definition of everyone is 12 vocal tweeters. If everyone says it’s essential and the best, well, how can I possibly argue?

    • “I’d be curious to know if the term ‘best practices’ is applied equally to things like the design of surveys or experiments, or how to empirically estimate X (e.g., abundance, traits), as to statistical analyses. (Any impressions, Jeremy?)”

      Not yet, but now that you’ve asked, I want to go back and look.

      “I wonder if an increasing supply of methods options itself enhances the demand for a definition of best practices.”

      Ooh, good point. I bet there’s something to that.

      “If a method requires a million dollars and 5 years to implement, it might get you the best results (most accurate, precise…), but it’s not going to be the best choice for a Ph.D. student, who can do 90% as well with a cheaper, quicker alternative method.”

      Nor will it be the best choice for a coordinated distributed experiment funded largely through in-kind contributions from the participants. That’s why coordinated distributed experiments like NutNet use *cheap* standardized methods.

      “I can’t help but see some marketing motivation in using the word ‘best’.”

      Me too. Though now you’ve got me wondering exactly who the target audience of the marketing is. It might vary from case to case. Sometimes it’s the field at large. But maybe sometimes it’s the journal editor and the reviewers?

    • Ok, I went back and had a look at the 25 most recent EEB papers on JSTOR containing the term “best practices.” Here are their topics:

      -6 are environmental niche modeling/species distribution modeling papers. Only one of which is a review/methods/critique paper. The other 5 are all regular ol’ environmental niche modeling papers for a specific species or taxonomic group.

      -6 are about best professional practices for ecologists. Those 6 break down as three papers about supervision (e.g., managing interpersonal conflicts; ensuring physical and emotional safety for the field crews you supervise), one paper about social justice/EDI, one paper about obtaining a faculty position at a small teaching-focused institution, and one paper about diverse forms of scientific research impact.

      -5 are ESA operations-related documents. Minutes of governing board meetings, ESA annual reports, etc.

      -5 are about sustainability, and the spatial distribution of ecosystem services, in some specific location. All either coastal or mountain locations.

      -2 are chapters from a book called Ecogames, about how the video game industry portrays and addresses ecological issues.

      -1 is a review of a scientific conference.

      • From which one can conclude…? Not sure. It’s only the first 6 that seem obviously to be about the practice of doing science (on a modeling/stats topic, for what it’s worth), rather than the practice of adjacent things. Which makes me think part of the trend is just the term ‘best practices’ being used more often now (well beyond ecology), instead of what might previously have been called ‘guidelines’ or ‘recommendations’ or something like that.

    • There have definitely been a bunch of papers coming out in the last few years advocating for best practices wrt evaluating biases arising from things like the use of ‘big data’ in ecology (e.g. selection bias and missing data) and issues of confounding (i.e. causal analysis best practices). I’d say these are ‘statistics adjacent’, but probably fall more in the category of best practices in study design, for macroecological/meta-analytic studies in the first case, and more generally for getting at mechanistic explanations and considering data generating processes in the second. Whether these papers actually use the term ‘best practices’ or not, I can’t remember off the top of my head, but I think they definitely fall under the same category of paper.

      My impression is that a lot of this is a case of ecologists interpreting and communicating insights from other fields that have been grappling with similar issues for a long time, like epidemiology and econometrics. I’m finding these kinds of papers very useful to increase my understanding of these kinds of issues. They also provide useful gateways into the literature in these other fields that can help provide partial solutions to problems that seem intractable if you’re just relying on the ecological literature (at least the literature I’m aware of…).

  4. When we noticed some (pretty big) problems with the way ecologists use meta-analysis, and that many MA papers were reporting things that were not based on sound statistics (not subtle problems, but very large ones), we thought that, given the huge influence of MA papers in ecology, it would be a good idea to examine our “practices”. But we had no new statistical breakthrough; rather, we were looking at the application of a tool well understood by statisticians that was being poorly used. Seems like a study of “best practices” would be in order, if I’m understanding the term right. But how does one get funding for that? Given the difficulty in getting funding, I looked at recently funded grants from NSF panels, and none (0) had a large visible component of reflective evaluation of how we do science (or experiments, or stats). So much for finding conceptually similar grants to point to as precedents.

    I tend to be pretty oblivious and naive, so perhaps there is a whole funding line for such efforts, but even my colleagues could not find one. This all led me to the fairly cynical view that we are happy to fund 100s of grants to push forward on whatever is deemed novel (insert current buzz topic here), but not to fund reflection on our science. I think a “best practices” panel at NSF would be worth the resources, even if drawn from other funding lines. That would not be the same as developing methodology, but rather reflective analysis of the methodologies we are using. Here’s a test of this idea: the “replication crisis” is something that Jeremy has written about. Let’s say some motivated person wants to examine it in experimental ecology: can one get a grant to do so? My apologies if the answer is “yes” and I’m just not knowledgeable in this area.

    • That seems like the kind of topic that could be funded through a synthesis center. NCEAS a decade or two ago, perhaps now through iDiv (https://www.idiv.de/en/index.html). Not sure about elsewhere, but Canada has some options for getting working groups funded, and a topic like this seems within their purview. If I recall correctly, some foundations might fund certain kinds of projects of this nature. Reflective, synthetic papers that offer a view of a path forward can have a massive influence on a field.

  5. As several others here have commented, it seems to be a marketing buzzword / fad terminology, but not one entirely without value.

    Whether we call it “Best practices in field X”, “An evidence-driven comparison of methodologies in field X”, or something else doesn’t really matter. The idea of systematically examining methodologies and coming up with a short list of good ones for different circumstances, or maybe a list of bad ones to avoid, is something we probably should be doing.

    It would be nice to standardize terminology, even if only to simplify search terms. But as the old saying goes, the great thing about standards is that there are so many of them to choose from. Given the “herding cats” tendencies of academics, I am in the “nice idea, but probably a lost cause” camp when it comes to standardizing terminology.

  6. Just to add to an already full discussion: I have also seen “best practice” and “best practise(s)” used in EEB papers, so you’d need to add those variants to your search to get a full picture of the trend (a small search-pattern sketch follows at the end of this comment). But I agree, it’s a more recent phenomenon.

    @Mark Vellend – “Essential” in EBVs doesn’t refer to the variable being “absolutely necessary”, it’s used in the sense of being a fundamental element.
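    For anyone who wants to apply that broader search to downloaded full text rather than through JSTOR’s search box, one regular expression can cover all the spelling variants in a single pass. A minimal Python sketch; the pattern and function name are just illustrative, and JSTOR’s own query syntax may handle variants differently:

        import re

        # Matches "best practice", "best practices", "best practise", "best practises"
        # (case-insensitive, tolerating extra whitespace or a line break between the words).
        BEST_PRACTICE_RE = re.compile(r"\bbest\s+practi[cs]es?\b", re.IGNORECASE)

        def mentions_best_practices(text: str) -> bool:
            """Return True if any spelling variant of the phrase appears in the text."""
            return bool(BEST_PRACTICE_RE.search(text))

        # Quick check on the four variants:
        for phrase in ["Best Practices", "best practice", "best practise", "best practises"]:
            assert mentions_best_practices(f"...following {phrase} for sampling...")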

  7. > if you wanted to encourage discussion of whether your proposed “best practices” are in fact the best, presumably you’d have called them something else. “Promising practices,” or “advice,” or etc.

    For more or less this purpose, I’ve been hearing people consciously replace “best practices” with “leading practices”, to emphasize that they’re being descriptive of current standards and not prescriptive of future ones.

    A bit of wild speculation, but I wonder if improved technology has meant more remote collaborations, and more exposure to ways of doing things beyond one’s own institution. I could see that leading to more disagreements with collaborators about the “best practices” for a given task, which could lead the team to write a paper to settle the issue — or lead one person to think that there’s an audience who needs to hear about their Correct way to do things.
