Don’t be so quick to recommend “best practices” in science or academia

I recently polled readers on what turned out to be a somewhat controversial topic: whether someone who has posted data to a public repository is entitled to co-authorship of any subsequent paper using those data. In response, a commenter stated that “Best practice for sharing data through any data repository is to provide a data use, authorship and acknowledgement policy with the data.” As other commenters and I noted in response, that is definitely not agreed “best practice”; we argued for quite different practices. The same thread also featured disagreement over whether “best practice” is for data users to contact the data providers for advice about how the data were collected and how they should be interpreted, or whether it’s “best practice” for data providers to supply metadata detailed enough that data users don’t have to ask.

Which got me wondering about a broader question: when exactly do we need codified “best practices” in science or academia? Not just “best practices” regarding data sharing–“best practices” regarding anything. Because it’s my admittedly anecdotal impression that people often recommend “best practices” in situations in which best practices don’t exist or aren’t clear. There are even cases in which recommending “best practices” can produce confusion rather than clarity as to what our practices should be. So here are my current thoughts as to when we do–or don’t–need agreed-upon “best practices”.*

First, what are “best practices”, anyway? For purposes of this post, “best practice” means “instructions for the agreed right way to do X, where ‘right’ means ‘better than other ways of doing X’.” To count as “best practices”, those instructions need to be detailed enough that others can follow them, and be seen to have followed them. It also needs to be the case that not following those instructions would lead to bad, or at least suboptimal, outcomes.** And the practices need to be widely agreed upon, though they needn’t be universally followed.

Here are the circumstances in which I think it’s useful to have codified “best practices”:

  • When one way of doing X really is demonstrably superior to others on every dimension, but it’s not obvious or widely known that that’s the case. For instance, Chialvo et al. (2020 Ecol Evol) described a new protocol for extracting cyclopeptides from mushrooms. It extracts substantially more cyclopeptides than the previous standard protocol, in less time, and with no downsides. That new protocol seems to me like a paradigmatic example of a “best practice”: a way of doing X that’s as good as or better than every other known method in every respect.
  • To give people advice on how to deal with some unusual or challenging situation that most people will rarely or never encounter, and that they couldn’t easily figure out how to handle on their first encounter. For instance, COPE’s recommended workflow for how journal editors should deal with suspected peer review manipulation.
  • When there are big negative externalities to everyone doing their own thing, or big benefits to everyone agreeing to do things the same way. This is why it’s best practice for everyone to drive on the same side of the road as everyone else. If even one person deviates, it causes mass chaos. Notice that it doesn’t matter which side of the road we all agree to drive on, merely that we all agree on one side or the other. A common ecological example is when authors recommend development of “best practices” regarding collection of a specific sort of data, in order to improve comparability of different studies. Authors of review papers often complain about the difficulty of comparing results of different studies that used different data collection protocols, and recommend development of agreed “best practice” protocols. That is, the authors of those review papers don’t think that one data collection protocol is better than another. They just think that everyone ought to agree on the same protocol, so that future studies can be compared easily without the confounding factor of methodological differences. Note, however, that if this is your reason for wanting to see “best practices” agreed upon, you’re not going to get agreement just by writing a paper calling for agreement. You can’t get everybody to agree to do things in one way just by saying “Hey everybody, please agree to do things in one way!” That’s why, when Elizabeth Borer and some of her colleagues were lamenting all the methodological differences among studies of nutrient enrichment in grasslands, they went out and started NutNet–a huge globally distributed experiment in which all participants agreed to run exactly the same experiment using exactly the same methods.
  • To train people in what the agreed right thing to do is, and force them to do the right thing. Think of medical and legal licensing boards defining and enforcing best practices for doctors and lawyers–another paradigmatic example of “best practices” to my mind. Note that, in these cases, we agree on and enforce best practices because the consequences of failing to follow best practices are very serious. Bad doctors can kill people. Bad lawyers can cost people a lot of money and get them imprisoned. There are of course other cases in which licensing schemes force people to learn and use purported “best practices”, even though the consequences of failing to follow best practice aren’t very serious. Hairdressing is one example in the US. In cases in which the consequences of failing to follow “best practice” aren’t very serious, the enforcement of “best practices” via licensing laws can be criticized as a way for incumbents to form a cartel and limit entry into the profession.
  • When people need to follow the law, or more broadly to avoid blame when something goes wrong. Basic fairness demands that the laws be clear and specific. Otherwise people don’t know what the laws are and can’t follow them. So there are cases besides occupational licensing in which the law codifies “best practices”. As with the previous bullet, we only want to codify “best practices” into law when the consequences of suboptimal practices can be sufficiently serious. More broadly, there are various contexts in which people want best practices codified, because they want to be able to say “It wasn’t my fault, I followed best practices” in case something goes wrong.
  • When people want to publicly demonstrate their values. Saying “I follow best practices regarding X” can be a way of saying or demonstrating to others “I care a lot about X.”
  • When previously-agreed best practice needs to be updated in light of new evidence or goals. Note that, if you need to update “best practices” too often, then that’s probably a sign that there really isn’t such a thing as “best practice”. Because after all, evidence and goals don’t change that fast in most cases.

And conversely, here are the circumstances in which I don’t think it’s useful to have “best practices”:

  • When it’s completely obvious what “best practice” is. For instance, best practice for scientists is “don’t fake your data.” But it’s totally obvious that faking data is bad, so we don’t need to codify “don’t fake your data” as “best practice”. Ok, maybe we need to codify best practices for data management, data sharing, and so on, so as to improve our ability to detect fake data. But it would be silly to write a paper saying “don’t fake data; that’s not best practice”.
  • When there’s a reasonable range of more-or-less acceptable choices, or there are trade-offs between different desiderata, and there are no huge negative externalities from different people making different choices. For instance, Ives (2015) argued that, when doing null hypothesis testing on partial regression coefficients, there’s no clear-cut, universal best choice between general linear models of log-transformed count data and generalized linear models of untransformed data (a minimal sketch of those two options follows this list). And Brian’s series of posts on “statistical machismo” (starts here) argues that there are trade-offs involved in adopting “sophisticated” or “rigorous” statistical methods. Navigating those trade-offs requires careful thought and consideration of case-specific details, so blanket recommendations of “best practice” are inappropriate. In general, the whole point of telling people to follow “best practices” is to keep them from having to think for themselves on a case-by-case basis. Which is undesirable if in fact they ought to be thinking for themselves on a case-by-case basis!
  • When following purported “best practice” would require sacrificing other desiderata. This is really a rephrasing of the previous bullet. For instance, what’s widely regarded as pedagogical “best practice” is a lot of work for instructors to implement, even if they have some TA support. Following purported “best practice” might well oblige the instructor to work unreasonably long hours. And if all instructors at an institution implement “best practice”, the net result might be an unreasonably high aggregate workload for students. Purported “best practice” for a single course can be suboptimal if implemented in all courses: a pedagogical “tragedy of the commons”. As a third example, here’s Morgan (2020) listing purported “best practices” for implementing online learning in K-12 schools during a pandemic. It was published in late April 2020, as schools around the world were moving online. Those “best practices” are mostly…a list of the “best practices” for using technology in K-12 education that a technology-in-education association came up with several years before the Covid-19 pandemic. I’m sorry, but whatever the best practices for technology use in K-12 education were before April 2020, they were completely useless to schoolteachers, administrators, and parents responding to emergency lockdown orders in April 2020! It’s a basic, familiar point of optimization theory that you can’t solve an optimization problem unless you know all the constraints on the solution space. But many papers I read that propose “best practices” in pedagogy seem to forget that. (See also this old post on what ecologists should learn less of, and this old post on how often and why scientists use one particular pedagogical method that’s purportedly not “best practice”.)
  • When there’s substantive disagreement about what “best practice” consists of. There’s often substantive disagreement as to what “best practices” are, even when everyone broadly agrees on what we’re trying to optimize, and what constraints we’re operating under. For instance, Agroustei et al. 2019 Oecologia highlight disagreement among researchers as to best practice in accounting for “lipid bias” when reconstructing carnivore diets using δ13C methods. Or see the examples with which I began the post, regarding disagreement about best practices in data sharing. In such contexts, I think it’s fine for people to recommend what they view as “best practices”, even in strong terms. And I think it’s fine for individuals or groups to act according to what they see as best practice, even if other individuals or groups disagree (assuming that there aren’t any big negative externalities caused by “different strokes for different folks”). But I confess I’m not a fan of asserting that X is “best practice” in cases where there’s substantive disagreement as to what best practice is. I’m also not a fan of stating or implying that anyone who disagrees with you as to what “best practice” is must be uninformed and ought to educate themselves. Unilaterally declaring what “best practice” is seems like an attempt to achieve victory by claiming victory. It seems like an attempt to substitute rhetoric for substantive argument, and to impose your preferences on others without having to go to the trouble of convincing them first. (Note: there are of course plenty of cases in which people are deliberately trolling in an attempt to create the impression of substantive disagreement where none actually exists, and are deliberately ignoring or twisting relevant evidence. Throughout this post, I’m setting such cases aside to focus on cases in which any disagreement about “best practices” is honest, substantive, informed disagreement among people of good will.)
  • When there are already many proposed “best practices”, so that you’d just be adding to the confusion by proposing another one, even if your purported best practice somehow builds on or synthesizes all previous proposals. For instance, the “Bari Manifesto” (Hardisty et al. 2019) claims to identify 10 “best practices” for collection/reporting/storage of data on “essential biodiversity variables”. One of the 10 best practices is that there should be a single unified ontology for biodiversity data that draws on and synthesizes…a bunch of previously proposed ontologies that aren’t all identical! The reason all those ontologies exist is that people keep proposing new ones, surely? So how will proposing yet another new ontology lead to fewer ontologies? (See this xkcd cartoon for an amusing illustration.)
  • When purported “best practice” comprises broad principles, vague statements, or aspirational goals. It’s not “best practice” unless it’s straightforward for people to actually follow it, and to figure out if others have followed it. The US Constitution’s Bill of Rights is not a list of “best practices” for the US government. As evidenced by the fact that the Supreme Court often has to decide substantive controversies as to whether the US government’s actions violated the Bill of Rights. Following the Hippocratic Oath is not “best practice” for doctors–it’s a broad statement of ethical principle, not a step-by-step protocol. “Climb high, climb far/Your goal the sky, your aim the star” is not “best practice” for Williams College students and alumni. And no, I’m not knocking down a straw man here, because there certainly are purported “best practices” in science and education that fall into this category. For instance, “put your heart and soul into the class” is not a “best practice” for teaching quantitative ecology online (contra Acevedo 2020, with respect).
  • When what constitutes “best practice” is highly context-dependent, case-specific, or nuanced. Again, the whole point of trying to impose “best practices” is to substitute broadly-applicable, clear-cut rules and procedures for individualized, case-specific judgment calls. “Highly case-specific, nuanced best practice” is an oxymoron. That’s why I was puzzled recently to read a paper calling for others to follow the “best practice” of optimizing protocol X for their own study species. If “best practice” is “do whatever works best for your own particular study species”, that’s not really best practice!
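To make the Ives (2015) example above concrete, here’s a minimal sketch of the two analyses in question. This is my own illustration, not code from Ives (2015) or from any paper cited in this post; the variable names and simulated data are hypothetical, and it assumes Python with numpy and statsmodels installed. Both are defensible ways to test whether the count response depends on the predictor; which performs better depends on case-specific details, which is exactly why neither is a universal “best practice”.

import numpy as np
import statsmodels.api as sm

# Hypothetical predictor and simulated count response (Poisson, log-linear in x)
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = rng.poisson(np.exp(0.5 + 1.2 * x))
X = sm.add_constant(x)

# Option 1: general linear model of log-transformed counts (adding 1 to handle zeros)
lm = sm.OLS(np.log(y + 1), X).fit()

# Option 2: generalized linear model of the untransformed counts, with a log link
glm = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# The two slope estimates are on different scales and assume different error models;
# neither approach dominates the other in every respect.
print(lm.params)
print(glm.params)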

My overall view is that “best practices” are useful only in a fairly narrow range of contexts. The best thing to do can’t be too obvious (otherwise we’d all be doing it already), nor can it be too unclear (otherwise we won’t be able to agree on what it is). And the consequences of failing to follow “best practice” have to be sufficiently bad; otherwise, why bother codifying “best practices” at all?

But what do you think? Looking forward to your comments.

*No, this post is not about “best practices for recommending ‘best practices'”! I do think there are good reasons for others to adopt the views expressed in this post. But “when to recommend best practices” is not itself a task for which it is possible to codify “best practices”, for reasons outlined in the post.

11 thoughts on “Don’t be so quick to recommend “best practices” in science or academia”

  1. That’s a very comprehensive and well-argued set of criteria, Jeremy! The only thing that I’d query is “When people want to publicly demonstrate their values”, because “values” clearly are very subjective. There’s a concrete example which pops up on Twitter every now and again, in which a scientist says something like:

    “I do not use my title [Dr/Prof.] because I don’t want to be elitist”

    Or conversely:

    “I make a point of using my title [Dr/Prof.] because it’s only privileged group X that has the luxury of not using their titles”

    Both of these are clearly value judgements, both have their merits, and both could be considered “best practice for demonstrating values”. But they are obviously diametrically opposed. It may not be the most profound or important example of a value, but it demonstrates my point, and it’s one that causes a lot of heated debate on Twitter. But then maybe that says more about Twitter…

    • It’s true that attempts to publicly demonstrate one’s values can be misconstrued. The messages we try to send, and the examples we try to set, aren’t always interpreted as we would wish. But still, I do think it’s true that some people who make a point of asserting or following purported “best practices about X” do so as a way of publicly demonstrating how much they care about X.

  2. My experience, based on software-related papers, is that best practices is a marketing term attached to whatever practices the author thinks are best.

    It is rare to encounter an experimental comparison of so-called best practices, and I have never seen survey results on what others consider best practices.

    • “My experience, based on software-related papers, is that best practices is a marketing term attached to whatever practices the author thinks are best.”

      That is the tl;dr version of the post. 🙂

      “It is rare to encounter an experimental comparison of so-called best practices”

      Skimming recent ecology papers with “best practice*” in the title or abstract, I did find a few of these. The paper on mushroom biochemistry linked to in the post is one example.

  3. Great essay! The term “best practices” has always bugged me, because the practices so labeled usually clearly aren’t the best: there are plenty of arguments for different approaches, and the label just seems like a way to exert pressure for conformity. To me the term is inherently unscientific, because it encourages uncritical application of a method or procedure. Similar to papers arguing for a particular narrow definition of a word or term that people use more loosely.

    • “Similar to papers arguing for a particular narrow definition of a word or term that people use more loosely.”

      Yes, there are definite similarities between papers that try to get others to adopt terminological uniformity, and papers that try to get others to adopt purported “best practices”.

  4. Jeremy, I’m curious to hear your take on the BES Guides to Better Science: https://www.britishecologicalsociety.org/publications/guides-to/

    Should these be interpreted as best-practice guidelines for ecologists? Or are they simply helpful tips?

    I wonder if the actual issue is mistaking ‘convention’ for ‘best practice’? There are many conventions in ecology (and science in general) that are sub-optimal, but we follow them anyway because that is what the broader community expects. For example, the way we structure papers (intro, methods, results, discussion) is a convention, but not necessarily best practice (e.g. Nature, Science, and PNAS obviously prefer an alternative structure).

    I think the frustration comes when people try to justify conventions by rationalising that they are actually best-practice.

    • Good question re: the BES guides. I haven’t read them, so I’m not in a great position to answer. The fact that they’re official, typeset publications of a scientific society certainly does make them look like “best practices” rather than “some tips that might work for you”.

      Interesting remarks about the relationship between “best practices” and “conventions”. I’ll need to think more about that. Just offhand, none of the recent ecology papers I’ve skimmed that had “best practice*” in the title or abstract were seeking to justify existing conventions.

      Maybe, if we all agree that X is “best practice”, and we all do X for long enough, X just becomes a convention and everyone forgets why we decided that X is “best practice” in the first place? Or maybe there’s feedback in both directions? As you say, if enough people do X for long enough, then probably people are going to find post-hoc rationalizations for why X is a good idea.

      There are also cases where X is best practice *because* it’s a convention. Sometimes, if everyone expects you to do X, it’s a bad idea to do not-X for that reason alone. For instance, maybe it’s a bad idea to deviate from the conventional intro-methods-results-discussion structure for papers, precisely because that *is* the convention. It’s what readers expect, and if you violate reader expectations maybe they’ll be confused or put off.

      Speaking generally, I think ecologists tend to slightly overrate the risks of violating scientific and academic conventions, and underrate the benefits. But that’s just an anecdotal impression on my part.

  5. Wonderful post, Jeremy.
    I’ve seen problems with the concept of ‘best practice’ in engineering. These best-practice guidelines for stormwater management (https://www.publish.csiro.au/book/2190/), published in 1999, were great at the time but are now causing problems. Authorities insist on their use because they describe ‘best practice’, but they are 20+ years old. There have been attempts to produce an updated version, but people can’t agree, and a project like that requires special funding. Ideas about good stormwater management have moved on, but codified ‘best practice’ has not.

    • Yes, good point. The post notes that you don’t want to try to codify “best practice” if “best practice” is constantly changing. But as you note, there are downsides to codifying “best practice” even if “best practice” doesn’t change all that often. Because if everyone’s been following the same “best practices” for decades, those practices will be hard to change even if they really need changing.

      It’s a tricky empirical issue, isn’t it? When does codifying “best practices” lead to better outcomes on balance over the long run? The answer depends on many variables: how closely will everyone hew to best practices once they’re codified? What will people do if best practices aren’t codified? And how well will codified “best practices” align with *actual* “best practices”, now or in future?

  6. An asymmetry I’ve been thinking about: if you think X is best practice, you can often publish a paper saying so. Then you can say “X is best practice” and cite your paper. But nobody ever writes papers saying “X is not best practice” or “there is no single best practice regarding X”. So it’s only people who want to claim that there is an established best practice who get to cite supporting papers.

    A related asymmetry: I’ve seen several recent ecology papers lamenting that no agreed best practice for X exists, and using that lack of agreed best practice as motivation for the work reported in the paper. The work reported in the paper is supposed to contribute to the development of an agreed best practice. But nobody ever writes a paper saying “there’s no agreed best practice for X, and there shouldn’t be” or “people should stop trying to develop an agreed best practice for X”.

    The only exception I can think of is Stewart-Oaten 1995, “Rules and judgement in statistics: three examples” (https://esajournals.onlinelibrary.wiley.com/doi/10.2307/1940736)
