Rethinking Best Practices

Best practices aren’t universal, and using the term without deeper consideration can be problematic. They’re straightforward, simplistic answers to difficult questions. Quick answers can at times work in our favor, a way to avoid cognitive overhead and set a clear path to a solution. In fact, we regularly need shortcuts in our day-to-day work just to function, trading thoroughness for efficiency. Unfortunately, the term is too often co-opted and overused, rendering it specious: a mantra of “don’t think, just do”, or hand-waving promotion of a product over guidance on a course of action. What should be the starting point of a conversation, giving way to deeper consideration and better approaches, is instead short-circuited in favor of an unassailable talking point. The concepts behind best practices may be sound, but since they are not universal, they should be up for debate. Our tendency to skip deeper investigation, to assume an answer is correct based on a label, is what makes the term “best practice” dangerous.

Akin to Root Cause

I’ve written a bit before about the failures of root cause as a mindset and how tempting it is to go searching for a silver-bullet answer to your problems. It’s a challenge to get folks to look past root cause because our brains are hardwired to spend less energy searching for answers, which isn’t necessarily a bad thing! We want to optimize how much focus a given task requires, after all. The difficulty is in making the effort to look past shallow answers that only superficially solve your problems. Labeling solutions as best practices leads us into this same trap.

Think of how excited the software industry was with the concept of immutable infrastructure when we first began discussing it a few years ago. We could avoid the pain points of drift in config management with environments we could construct and destruct programmatically from a shared common image. Its rise in popularity is indicative of just how useful it has been to orgs, leading folks to say it was a “best practice” to use it. Of course, there are cases where immutable infrastructure doesn’t work (stateful instances, such as a datastore, would be a poor fit). An argument could be made that engineers (the experts) are responsible for the systems where their expertise lies, and they should be. The problem is that when we make generalities about systems we lack expertise in, the model (a best practice) may not apply. It’s not that immutable infrastructure is inherently good or bad, but that calling it a best practice is flawed, since it can’t be a universal constant.
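
To see the pattern concretely, here’s a minimal sketch of the construct/destruct cycle. The helper names (build_image, launch, destroy) are illustrative stand-ins of mine, not any particular cloud provider’s API:

```python
# A sketch of the immutable pattern: replace the fleet, never mutate it.
# These helpers are illustrative stubs, not a real provider SDK.

def build_image(version: str) -> str:
    """Bake one shared, versioned machine image."""
    return f"image-{version}"

def launch(image: str) -> str:
    """Boot a fresh instance from the image; no post-boot changes allowed."""
    return f"instance-from-{image}"

def destroy(instance: str) -> None:
    """Tear down the old instance rather than patching it in place."""
    print(f"destroyed {instance}")

def deploy(version: str, cluster: list[str]) -> list[str]:
    """Swap every instance for one built from the new image; drift has nowhere to live."""
    image = build_image(version)
    replacements = [launch(image) for _ in cluster]
    for old in cluster:
        destroy(old)
    return replacements

print(deploy("2.0.1", ["i-abc123", "i-def456"]))
```

This is also exactly where the stateful-datastore caveat bites: you can’t casually destroy and relaunch an instance whose local disk is the source of truth.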

Similarly, we give folks who are early in their careers lists of best practices to follow to help guide them. “Write self-documenting code”, “include tests with your changes”, and “hide features behind flags” are all useful bits of advice regardless of your skill level, but notably so for folks just starting out. At some point, though, we’ve all written code that has failed to live up to one or more of these. Best practices fall into “Work-as-Imagined” or perhaps “Work-as-Prescribed”, an expectation of what could or should be done, while in reality “Work-as-Done” can’t continuously meet these well-intentioned ideals¹. Why write a test for an experiment we’ll take down in a week, or give an exhaustively long but accurate name to a variable localized to a tight loop? We ignore best practices all the time because our experience guides us better than a cursory generalization of a problem.
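
As one concrete illustration of the last piece of advice, hiding a feature behind a flag can be as small as a conditional around the new code path. A minimal sketch, where the flag name and its environment-variable source are mine, not any real feature-flag service:

```python
import os

# Minimal feature flag: the new path ships dark until the flag is flipped.
# The flag name and env-var source here are illustrative only.
NEW_CHECKOUT = os.environ.get("ENABLE_NEW_CHECKOUT") == "1"

def checkout_total(cart: list[float]) -> float:
    if NEW_CHECKOUT:
        # New behavior, e.g. experimental rounding rules, stays behind the flag.
        return round(sum(cart), 2)
    # Existing behavior remains the default for everyone else.
    return sum(cart)

print(checkout_total([19.99, 5.001]))
```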

When do we decide what’s best?

Another example, a perennial hot-button topic, is asking “Should deploys go out on Fridays?”. We endlessly debate this in conference talks, blog posts, and tweets, but depending on who you talk to, it is a best practice either to deploy whenever and wherever (in other words, get good at doing hard things) or to never deploy unless your team is at full strength (caution is the better part of valor). Each side claims a best practice mentality, so who’s right?

We call them best practices because we assume, by the name, that they’re optimal and, in a word, best. They’re typically labeled as such by experts, but even experts can disagree on how to approach problems, because the subtle nuances of our day-to-day work require it. The expertise of experienced folks is the fine tuning in the socio-technical system that keeps it running.

Take password management as a further example of an attempt to codify what’s best. We can all agree that setting a password to a loved one’s birthday or your pet’s name is poor, an antipattern. The best practice might be: don’t use those to secure a login. While this establishes the negative, it still doesn’t tell us an optimal password schema to use. Should it be 6 characters in length, 7, maybe 8? Why not 32 or 64? Do we mix uppercase and lowercase with special characters, and which words should be avoided as too obvious? Do we know whether our user has a password manager, or will they have to type the password manually on a mobile device, making repetitive input a challenge? We can’t all use “correct horse battery staple” as a universal password.
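
Part of why this resists a single answer is that even the arithmetic only gets you so far. A back-of-the-envelope entropy comparison (a sketch, assuming uniformly random choices, which real users rarely make):

```python
import math

def entropy_bits(pool_size: int, picks: int) -> float:
    """Bits of entropy for `picks` independent, uniform choices from a pool."""
    return picks * math.log2(pool_size)

# 8 random characters drawn from ~95 printable ASCII characters
print(f"8-char password:   {entropy_bits(95, 8):.1f} bits")    # ~52.6

# 4 random words from a 2048-word list (“correct horse battery staple” style)
print(f"4-word passphrase: {entropy_bits(2048, 4):.1f} bits")  # ~44.0
```

Even with the numbers in hand, “best” still depends on context: 44 bits someone can comfortably type on a phone may serve them better than 52 bits that ends up on a sticky note.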

This is because the term “best practices” is a misnomer. It’s not trying to optimize for some perfect solution, be it for a given period or across environments. I think proponents of best practices would agree with that, and I could be taken to task for being too literal here. Instead, they’re about avoiding the most problematic case(s) in order to focus on better alternatives. The trouble with a phrase like best practice is that we’ve become so enamored with the idea of a “one true way” that it leaves little room for flexibility.

Best Practices lack flexibility

Best practices by their very nature have a level of brittle robustness baked in, a predetermined model in mind, but they lack the subtle nuances that an expert in their own system would know. Resilience, that application of adaptability as we adjust our mental model, is what allows us to make the leap from knowledge sharing of successful ideas to real-world execution of getting things done in differing environments. This can be summarized more succinctly as follows:

Best practices are good until expertise is developed (ie when you’re training a new person). At that point (once expertise is developed), over reliance on them and an expectation of following them can become problematic. This is why runbooks can often have adverse impacts

Nora Jones, @nora_js

Several years ago, Nora passed a paper my way that challenges many of our conceptions of what works as a best practice, “Can We Trust Best Practices? Six Cognitive Challenges of Evidence-Based Approaches” (Klein et al., 2016). I could spend another blog post entirely on this paper, but this line in particular feels central to the discussion:

We should regard best practices as provisional, not optimal, as a floor rather than a ceiling.

Many folks, some proponents of best practices included, will agree that they should be considered guidelines rather than strict rules demanding adherence. This isn’t always the case, though. Frequently enough, we’ll be forced into situations where best practices directly conflict with our better judgment.

When a failed deploy goes out and we’re looking to revert, should we wait for tests to complete, or push through to restore stability to end users more quickly? This can easily be weaponized by bad actors, putting folks into an unwinnable situation of competing best practices: restoring functionality quickly vs. waiting for successful tests. Being too adherent to an idea that is considered “best” means an inability to adapt.

Keeping your software up to date is considered a best practice, but it can require considerable thought before attempting. We’ve all experienced the pain of an upgrade that goes south, or of bumping an OS version only to be stuck behind an install screen for two hours waiting for it to complete. Wisdom tells us that critical updates may be needed, but they’re most often successful when there’s a plan to execute them in due time: robustness coupled with resilience. Best practices on their own don’t demand this, and likewise they are not enough for success without additional guidance on execution.

Best Practice: Don’t use “Best Practice”…?

Well, maybe. There is an aspect to how we share them that does exhibit a lot of the learning we want to impart to one another. Best practices can be a storytelling device, a way of “coping with variability” that can be more useful than a set of data points laid out without interpretation. For folks who are just setting out and lack the expertise to see deeper nuance, best practices can be a reasonable set of principles.

It’s also jargon. Jargon can be a powerful tool to cut through to what you want. It’s the reason why, despite having written an entire blog post on it, I don’t correct people in the midst of an incident if they say “ok, so what do we think the root cause is here?”. Yes, we want to know more about our context than a single problem that masks what is really a host of issues, but the point isn’t to be pedantic in the moment; it’s to answer a concern. If someone is describing best practices or asking for a best practice with a given technology, it’s an opportunity to ask ourselves, “What are they really asking for here?”

The pathological cases are why the term can garner such strong pushback, including inspiring this post. There’s nothing stopping a company from splashing “best practices” across their sales decks and homepages without providing any concrete evidence of value to the customer, which dilutes the term and wears it thin. Lawyers, without malice, will ask that legal documents promise efforts toward best practices as an attempt at guaranteeing success, their enforceability in question. Software engineers are not entirely innocent in this context either. Think of how many conference talks we’ve attended claiming a best practice, when under the hood we all know there’s duct tape and glue holding up the shiny veneer of our working systems. Don’t question this idea too much, it’s a best practice, just follow it to a T. This is all to say that the term gives the appearance of quality, or even rightness, without the need to prove it. It claims objectivity but is subjective in practice.

It’s unlikely we’ll be giving up the term any time soon. There isn’t much consensus on a replacement, but I have heard several alternatives, including:

  • “Standard Operating Procedure (SOP)”
  • “Standard Industry Practice”
  • “Best Current Practice (BCP)”
  • “Recommended Practice”
  • “Leading Practice”
  • “Common Practice”
  • “Generally Accepted Software Practices”

I tend not to use the term “best practices” when describing a methodology, and I would encourage others to do the same, while hopefully avoiding being dogmatic about it. Subtleties in language can trip us up, so when we challenge a term and find it lacking, we should be flexible enough to seek out alternatives that better reflect the concept. For those who still hold some skepticism and find value in the concept, I’d ask you to consider: what are you trying to express when you refer to a concept, a strategy, or your day-to-day work as exhibiting best practices? It’s critical to be mindful of our words where we’re able, and to understand the gaps where colloquialisms fail to convey nuance. Using the term with some restraint, after considering the situation, can be effective as well.

1. See: The Varieties of Human Work, Shorrock 2017

Thanks to Dr. Laura Maguire, Vanessa Huerta Granda, and Nora Jones for their feedback in putting this together.

Photo: https://flickr.com/photos/bensonkua/3078316785/
