I recently polled readers on their views on what turned out to be a somewhat controversial topic: whether someone who’s posted data to a public repository is entitled to co-authorship of any subsequent paper using that data. In response, a commenter stated that “Best practice for sharing data through any data repository is to provide a data use, authorship and acknowledgement policy with the data.” Which, as other commenters and I noted, is definitely not agreed “best practice”–we argued for quite different practices. The same thread also featured disagreement over whether “best practice” is for data users to contact the data providers for advice about how the data were collected and how they should be interpreted, or whether it’s “best practice” for data providers to provide detailed metadata so that data users don’t have to contact them to find out how the data were collected and how they should be interpreted.
Which got me wondering about a broader question: when exactly do we need codified “best practices” in science or academia? Not just “best practices” regarding data sharing–“best practices” regarding anything. Because it’s my admittedly anecdotal impression that people often recommend “best practices” in situations in which best practices don’t exist or aren’t clear. There are even cases in which recommending “best practices” can produce confusion rather than clarity as to what our practices should be. So here are my current thoughts as to when we do–or don’t–need agreed-upon “best practices”.*
First, what are “best practices”, anyway? For purposes of this post, “best practice” means “instructions for the agreed right way to do X, where ‘right’ means ‘better than other ways of doing X’.” To count as “best practices”, those instructions need to be sufficiently detailed that others can follow them, and be seen to have followed them. It also needs to be the case that not following those instructions would lead to bad outcomes, or at least suboptimal outcomes.** And it needs to be the case that the best practices are widely agreed upon, if they’re to count as “best practices”. Though they needn’t be universally followed in order to count as “best practices”.
Here are the circumstances in which I think it’s useful to have codified “best practices”:
- When one way of doing X really is just demonstrably superior to others on every dimension, but it’s not obvious or widely-known that that’s the case. For instance, Chialvo et al. (2020 Ecol Evol) described a new protocol for extracting cyclopeptides from mushrooms. Their new protocol extracts substantially more cyclopeptides than the previous standard protocol, in less time, with literally no downsides compared to the previous standard protocol. That new protocol seems to me like a paradigmatic example of a “best practice”. It’s a way of doing X that’s as good or better than every other known method in every respect.
- To give people advice on how to deal with some unusual or challenging situation that most people will rarely or never encounter, and that they couldn’t easily figure out how to handle on their first encounter. For instance, COPE’s recommended workflow for how journal editors should deal with suspected peer review manipulation.
- When there are big negative externalities to everyone doing their own thing, or big benefits to everyone agreeing to do things the same way. This is why it’s best practice for everyone to drive on the same side of the road as everyone else. If even one person deviates, it causes mass chaos. Notice that it doesn’t matter which side of the road we all agree to drive on, merely that we all agree on one side or the other. A common ecological example is when authors recommend development of “best practices” regarding collection of a specific sort of data, in order to improve comparability of different studies. Authors of review papers often complain about the difficulty of comparing results of different studies that used different data collection protocols, and recommend development of agreed “best practice” protocols. That is, the authors of those review papers don’t think that one data collection protocol is better than another. They just think that everyone ought to agree on the same protocol, so that future studies can be compared easily without the confounding factor of methodological differences. Note, however, that if this is your reason for wanting to see “best practices” agreed upon, you’re not going to get agreement just by writing a paper calling for agreement. You can’t get everybody to agree to do things in one way just by saying “Hey everybody, please agree to do things in one way!” That’s why, when Elizabeth Borer and some of her colleagues were lamenting all the methodological differences among studies of nutrient enrichment in grasslands, they went out and started NutNet–a huge globally distributed experiment in which all participants agreed to run exactly the same experiment using exactly the same methods.
- To train people in what the agreed right thing to do is, and force them to do the right thing. Think of medical and legal licensing boards defining and enforcing best practices for doctors and lawyers–another paradigmatic example of “best practices” to my mind. Note that, in these cases, we agree on and enforce best practices because the consequences of failing to follow best practices are very serious. Bad doctors can kill people. Bad lawyers can cost people a lot of money and get them imprisoned. There are of course other cases in which licensing schemes force people to learn and use purported “best practices”, even though the consequences of failing to follow best practice aren’t very serious. Hairdressing is one example in the US. In cases in which the consequences of failing to follow “best practice” aren’t very serious, the enforcement of “best practices” via licensing laws can be criticized as a way for incumbents to form a cartel and limit entry into the profession.
- When people need to follow the law, or more broadly to avoid blame when something goes wrong. Basic fairness demands that the laws be clear and specific. Otherwise people don’t know what the laws are and can’t follow them. So there are cases besides occupational licensing in which the law codifies “best practices”. As with the previous bullet, we only want to codify “best practices” into law when the consequences of sub-optimal practices can be sufficiently serious. More broadly, there are various contexts in which people want best practices codified, because they want to be able to say “It wasn’t my fault, I followed best practices” in case something goes wrong.
- When people want to publicly demonstrate their values. Saying “I follow best practices regarding X” can be a way of saying or demonstrating to others “I care a lot about X.”
- When previously-agreed best practice needs to be updated in light of new evidence or goals. Note that, if you need to update “best practices” too often, then that’s probably a sign that there really isn’t such a thing as “best practice”. Because after all, evidence and goals don’t change that fast in most cases.
And conversely, here are the circumstances in which I don’t think it’s useful to have “best practices”:
- When it’s completely obvious what “best practice” is. For instance, best practice for scientists is “don’t fake your data.” But it’s totally obvious that faking data is bad, so we don’t need to codify “don’t fake your data” as “best practice”. Ok, maybe we need to codify best practices for data management, data sharing, and so on, so as to improve our ability to detect fake data. But it would be silly to write a paper saying “don’t fake data; that’s not best practice”.
- When there’s a reasonable range of more-or-less acceptable choices, or there are trade-offs between different desiderata, and there are no huge negative externalities from different people making different choices. For instance, Ives (2015) argued that, when doing null hypothesis testing on partial regression coefficients, there’s no clear-cut, universal best choice between general linear models of log-transformed count data, vs. generalized linear models of untransformed data. And Brian’s series of posts on “statistical machismo” (starts here) argues that there are trade-offs involved in adopting “sophisticated” or “rigorous” statistical methods. Navigating those trade-offs requires careful thought and consideration of case-specific details, so that blanket recommendations of “best practice” are inappropriate. In general, the whole point of telling people to follow “best practices” is to keep them from having to think for themselves on a case-by-case basis. Which is undesirable if in fact they ought to be thinking for themselves on a case-by-case basis!
- When following purported “best practice” would require sacrificing other desiderata. This is really a rephrasing of the previous bullet. For instance, implementing what’s widely regarded as pedagogical “best practice” is a lot of work for instructors, even if they have some TA support. Following purported “best practice” might well oblige the instructor to work unreasonably long hours. And if all instructors at an institution implement “best practice”, the net result might be an unreasonably high aggregate workload for students. Purported “best practice” for a single course can be suboptimal if implemented in all courses–a pedagogical “tragedy of the commons”. As a third example, here’s Morgan (2020) listing purported “best practices” for implementing online learning in K-12 schools during a pandemic. It was published in late April 2020, as schools around the world were moving online. Those “best practices” are mostly…a list of the “best practices” for using technology in K-12 education that a technology-in-education association came up with several years before the Covid-19 pandemic. To which, I’m sorry, but whatever the best practices in technology use in K-12 education were before April 2020, they were completely useless to schoolteachers, administrators, and parents responding to emergency lockdown orders in April 2020! It’s a basic, familiar point of optimization theory that you can’t solve an optimization problem unless you know all the constraints on the solution space. But many papers I read that propose “best practices” in pedagogy seem to forget that. (See also this old post on what ecologists should learn less of, and this old post on how often and why scientists use one particular pedagogical method that’s purportedly not “best practice”.)
- When there’s substantive disagreement about what “best practice” consists of. There’s often substantive disagreement as to what “best practices” are, even when everyone broadly agrees on what we’re trying to optimize, and what constraints we’re operating under. For instance, Agroustei et al. 2019 Oecologia highlight disagreement among researchers as to best practice in accounting for “lipid bias” when reconstructing carnivore diets using δ13C methods. Or see the examples with which I began the post, regarding disagreement about best practices in data sharing. In such contexts, I think it’s fine for people to recommend what they view as “best practices”, even in strong terms. And I think it’s fine for individuals or groups to act according to what they see as best practice, even if other individuals or groups disagree (assuming that there aren’t any big negative externalities caused by “different strokes for different folks”). But I confess I’m not a fan of asserting that X is “best practice” in cases where there’s substantive disagreement as to what best practice is. I’m also not a fan of stating or implying that anyone who disagrees with you as to what “best practice” is must be uninformed and ought to educate themselves. Unilaterally declaring what “best practice” is seems like an attempt to achieve victory by claiming victory. It seems like an attempt to substitute rhetoric for substantive argument, and impose your preferences on others without having to go to the trouble of convincing them first. (Note: there are of course plenty of cases in which people are deliberately trolling in an attempt to create the impression of substantive disagreement where none actually exists, and are deliberately ignoring or twisting relevant evidence. Throughout this post, I’m setting such cases aside to focus on cases in which any disagreement about “best practices” is honest, substantive, informed disagreement, among people of good will.)
- When there are already many proposed “best practices”, so that you’d just be adding to the confusion by proposing another one–even if your purported best practice somehow builds on or synthesizes all previous proposals. For instance, the “Bari Manifesto” (Hardisty et al. 2019) claims to identify 10 “best practices” for collection/reporting/storage of data on “essential biodiversity variables”. One of the 10 best practices is that there should be a single unified ontology for biodiversity data that draws on and synthesizes…a bunch of previously proposed ontologies that aren’t all identical! The whole reason all those ontologies exist is that people keep proposing new ones, surely? So how will proposing yet another new ontology lead to fewer ontologies? (See this xkcd cartoon for an amusing illustration.)
- When purported “best practice” comprises broad principles, vague statements, or aspirational goals. It’s not “best practice” unless it’s straightforward for people to actually follow it, and to figure out if others have followed it. The US Constitution’s Bill of Rights is not a list of “best practices” for the US government. As evidenced by the fact that the Supreme Court often has to decide substantive controversies as to whether the US government’s actions violated the Bill of Rights. Following the Hippocratic Oath is not “best practice” for doctors–it’s a broad statement of ethical principle, not a step-by-step protocol. “Climb high, climb far/Your goal the sky, your aim the star” is not “best practice” for Williams College students and alumni. And no, I’m not knocking down a straw man here, because there certainly are purported “best practices” in science and education that fall into this category. For instance, “put your heart and soul into the class” is not a “best practice” for teaching quantitative ecology online (contra Acevedo 2020, with respect).
- When what constitutes “best practice” is highly context-dependent, case-specific, or nuanced. Again, the whole point of trying to impose “best practices” is to substitute broadly-applicable, clear-cut rules and procedures for individualized, case-specific judgment calls. “Highly case-specific, nuanced best practice” is an oxymoron. That’s why I was puzzled recently to read a paper calling for others to follow the “best practice” of optimizing protocol X for their own study species. If “best practice” is “do whatever works best for your own particular study species”, that’s not really best practice!
My overall view is that “best practices” are useful only in a fairly narrow range of contexts. The best thing to do can’t be too obvious (otherwise we’d all be doing it already), but nor can it be too unclear (otherwise we won’t be able to agree on what it is). And the consequences of failing to follow “best practice” have to be sufficiently bad, since otherwise why bother codifying “best practices” at all?
But what do you think? Looking forward to your comments.
*No, this post is not about “best practices for recommending ‘best practices'”! I do think there are good reasons for others to adopt the views expressed in this post. But “when to recommend best practices” is not itself a task for which it is possible to codify “best practices”, for reasons outlined in the post.