
Definition-of-Done – Are We There Yet? Part-1

Introduction

There are several terms used for it within agile contexts. Sometimes you hear:

  • Done
  • Definition-of-Done or DoD
  • Done-Ness Criteria
  • Acceptance Criteria
  • Release Criteria

Sometimes you even hear it repeated, as in: This story isn’t complete until it’s—“Done…Done…Done”.

Apparently the more “dones” you have, the more emphasis there is on completeness, although I don’t think I’ve heard more than four used in a row.

Done-Ness

Consider done-ness to be the criteria that constrain the team’s work. If the team were building a bridge, then it would be the engineering rules, practices, inspection steps, local regulations, and completion requirements that would permeate everything the construction team would do. In many ways, the Definition-of-Done should seep into every aspect of agile team collaboration. If agile were a game, then the DoD would be the “rules” of the game…and the rules would be read, understood, and consistently applied across the team.

I’ve always been a strong proponent of a 4-layer view of done-ness. In this worldview, the layers build upon one another, moving from individual-based work, to story-based work, to sprint-level work, and ultimately to a release. I’ll often use the term “guardrails” to indicate the guideline nature of the criteria in guiding the team’s efforts. Now let’s review each of the four levels in turn.

Work Product

This is the layer of the individual contributor. For example, your front-end developers should define rules, conventions, and standards for how they design, develop, test, and deliver their UI code. Adherence to these standards should then be captured explicitly as done-ness criteria. This same logic applies to each functional role within your agile teams. Everyone should define a set of criteria that surrounds professional delivery of their work products.

  • Who contributes these? Usually there are two sources. Probably the most powerful is each team defining its own engineering rules, so there is a strong sense of uniqueness as you move from team to team. The other source is more organizational. Say, for instance, you’re working at a large web design shop where there need to be consistent UI coding conventions and standards across the teams. I would expect “someone” in the organization to define these—and then for each team to adhere to these broader done-ness criteria in addition to their own.
  • Some examples: I literally gave one above, in that you might have UI development standards. Another example could be coding standards for your primary language or technology stack. Still another could be process based, for example, an “agreement” at a team level to try to “pair” as much as possible on each user story OR to have a “pair review” prior to checking in each story.

Story Level

This is the work delivery level. If you are using user stories, then done-ness surrounds defining a rich and meaningful set of acceptance tests per story and then holding yourself accountable to delivering to those functional constraints. Remember, acceptance tests are incredibly useful as design aids and test aids when the team is sorting out the nuance of each story. I consider that the most important part of the acceptance tests—the business logic design hints.
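To make this concrete, here is a minimal sketch (in Python, using pytest) of what acceptance tests for a single story might look like. The story, the Account class, and the specific amounts are hypothetical and invented purely for illustration; the point is that each test captures one condition of acceptance agreed with the Product Owner before the work begins, so it serves as both a design aid and a regression check.

import pytest

# Hypothetical story: "As an account holder, I can deposit funds and see my balance."
class Account:
    """Minimal stand-in for the real domain object (illustrative only)."""
    def __init__(self, balance=0.0):
        self._balance = balance

    def deposit(self, amount):
        # Acceptance rule: only positive deposit amounts are allowed.
        if amount <= 0:
            raise ValueError("Deposit amount must be positive")
        self._balance += amount

    @property
    def balance(self):
        return self._balance

# Each test below encodes one agreed condition of acceptance for the story.
def test_new_account_starts_with_zero_balance():
    assert Account().balance == 0.0

def test_balance_reflects_all_prior_deposits():
    account = Account()
    account.deposit(100.00)
    account.deposit(25.50)
    assert account.balance == pytest.approx(125.50)

def test_non_positive_deposits_are_rejected():
    with pytest.raises(ValueError):
        Account().deposit(-10.00)

When the story can only be declared done once all of these pass (and no new bugs remain open), the acceptance tests become the executable portion of your story-level done-ness.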

You should also develop clear quality goals at this level. It may sound prescriptive, but I like the criterion that all bugs found during the development of a story be fixed prior to declaring that story complete. These aren’t legacy bugs, but bugs created and found during the development of the story. I can’t tell you how many times teams run into trouble at the end of a sprint trying to deliver a completed story with no known bugs.

  • Who contributes these? The Product Owner is ultimately responsible for defining the functional conditions of acceptance for each story. However, there are also inputs from the organizational side. For example, agreeing that each story will receive a solid pair-based code review, or that a complete set of automated unit tests (TDD) will be developed and running before checking in and “declaring victory”, might be decided as overall quality criteria that impact every team.
  • Some examples: I gave several above. Clearly the story has to meet the Product Owner’s established acceptance criteria. I also like the notion of the Product Owner signing off on each story. That is, they review it, demo it, and determine that it is indeed—done. Then they physically sign off on the story. Usually story done-ness also covers design integrity, the process steps used to develop and test the story, and known bugs.

Sprint-Level Criteria or Sprint Goal(s)

This level is focused on how well the team has met all of the goals and criteria they set forth when they planned their sprint. A large part of these criteria are typically driven by a successful sprint review or demo. I like the notion of “connecting the dots” between the sprint goal and the sprint review, so the team should think about the goal as a cohesive demonstration of business value from the customers’ point of view.

In my classes I often get asked: can a sprint have multiple goals, i.e. deliver on multiple focused activities? The answer is probably yes, but what the question is really looking for is the ability to say:

The goal of this sprint is to deliver 524 hours of work towards these specific 12 User Stories that are sized at 27 Story Points.

I think this is an incredibly poor goal because of its tactical, work-effort focus. For example, there is no “customer” and no “demo description” in this goal. I’d much prefer goals with a clear connection to the customer, the value being delivered, and the customer’s challenges. Having 2-3 separate goals articulated in this way seems fine too.

  • Who contributes these? Truly it’s the responsibility of the Product Owner to define or establish a goal for each sprint. I usually encourage them to bring a tentative sprint goal into sprint planning and then align that with the team and the agreed sprint work as part of the sprint-planning meeting. It then becomes a shared and achievable goal.
  • Some examples: If, for example, a team were working on an ATM project, then a few related sprint goals might include: Complete single customer sign-on and account interrogation to include balance and transaction lists for the past month. Another one might be: Complete and demonstrate all deposit-based account transactions (single/multi/business) with receipt printing; only check deposits will be supported. I hope you see the connection to real-world customer usage scenarios. I’ve found these goals, which leave the functional details open-ended, best inspire the team to “solve a problem” versus “deliver a set of stories”.

Release-Level Criteria or Release Goal(s)

If you’ve ever been part of a team that delivered software in a more waterfall environment, you’ve probably seen the common practice of creating release criteria. These are project-level constraints or requirements that are usually established at the beginning of, or early on in, a project. Often they are consistent from project to project or release to release, because they quantify organizationally important criteria. For example, quantifying whether you could release with specific levels of bugs (both in priority and count) or quantifying how much testing (coverage) needed to occur prior to a release.

One of the unfortunate parts of many agile adoptions is that these sorts of criteria have been dropped. I think they’re incredibly valuable in defining meta-requirements or key constraints for the teams to adhere to within each release. Typically they exist anyway within the organization, but calling them out creates a focus on them in planning, execution, and delivery. They’re particularly important in at-scale delivery—so that multiple teams are maintaining a consistent focus on their overall release goals.

  • Who contributes these? Usually they’re defined outside of the team proper, either by the Product Ownership team or by a Chief Product Owner as part of establishing the definition of a release. As I mentioned, they often carry over from release to release. They are typically “not optional”, so the organization needs to be willing to block a release or drop a feature if it does not meet the release goals.
  • Some examples: I’ve already mentioned allowable defects in the release and test coverage as solid examples. Global performance targets or usability constraints are often mentioned in applicable projects—so there is often a focus on non-functional requirements. Process constraints or commitments might also be mentioned, for example, the requirement that each user story needs a minimum of 70% automated test coverage before being considered a candidate for your release train (see the sketch just below).
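As a hedged illustration of that last example, here is a small sketch of how a 70% coverage gate might be wired into a build. It assumes Python tooling—pytest with the pytest-cov plugin—and a hypothetical package name; your own release criteria and tooling will certainly differ.

import subprocess
import sys

MIN_COVERAGE = 70  # the release-level criterion from the example above

def main():
    # Run the test suite under coverage; pytest-cov's --cov-fail-under flag
    # makes the run fail when total coverage drops below the threshold.
    # "your_package" is a placeholder for the code under test.
    result = subprocess.run(
        ["pytest", "--cov=your_package", f"--cov-fail-under={MIN_COVERAGE}"]
    )
    if result.returncode != 0:
        print(f"Release criterion not met: tests failed or coverage is below {MIN_COVERAGE}%.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())

The exact mechanism matters far less than the fact that the criterion is explicit, automated where possible, and applied consistently before a story is allowed onto the release train.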

Getting Done-Ness into your DNA

Creating a list of your Done-Ness Criteria is only the first step. Just because you have created and communicated the criteria doesn’t mean that everyone supports them. The next step is establishing a culture where everyone aligns with and personally supports the criteria, not just when things are going smoothly, but when the going gets tough as well.

You know that your done-ness has seeped into your culture when the team sees no other recourse but to do things the right way. I’ll share this example from my time leading teams at iContact that illustrates this cultural transformation:

We were a SaaS e-mail marketing software product and our customers used us 7×24. In fact, our weekends were often quite busy as SMB owners worked on their next week’s email campaigns. There was one weekend where a nasty mail-sending component bug cropped up. It brought down our ability to send email, which clearly affected all of our clients. Not only that, when this happened, we would queue the mail. So this started creating an ever-growing pool of queued mail that would cause delays even after we fixed the bug.
So the pressure was on.
Our teams would normally assign a “support engineer” for weekend support. The engineer in this case was notified of the problem, looked into it, and devised a repair. As part of our DoD, we’d agreed that no fix or repair could be checked-in without a paired code review. Now keep in mind—this was a holiday weekend, so people were on vacation. The support engineer determined that he needed a review with two others who were experienced in this area of the mail-sending stack.
He found them via text messages and phone calls and they all committed to a distributed/remote code review session on Saturday afternoon. They discussed a few issues and changes related to the repair, and then he completed those adjustments and released the repair.
When I came in on Monday morning I was amazed at how committed the team was to doing a proper review. It would have been the easiest thing in the world to have either checked-in an un-reviewed repair OR waited until everyone was back on Monday. But the support engineer and their teammates were committed to our customers and to their Definition-of-Done. It was in their DNA.

Why Done?

So after all this discussion, you might be asking yourself – why all of this focus on done? Why does it matter?

It matters from several perspectives:

  • It helps with your estimates. Even before agile methods, I used done-ness-like criteria in my teams because I felt that if we didn’t have clarity around what went into completing our work, how could we estimate it? That’s the very point we’re focusing on here: a clear understanding of what is expected in completing our work.
  • It helps with your quality. It provides guidance to the team surrounding what makes each step or deliverable complete. It focuses on the quality of those steps. And it amplifies consistency – so that every check-in or deliverable meets a consistent level of completeness.
  • It helps your Product Owner and customer gain confidence as the team delivers. And it’s not just confidence in the individual stories. It’s confidence in the overall plan and the team’s ability to meet their commitments with consistent quality.

Is it a panacea for any of the above? Of course not! But it is a key driver for some of the core behaviors that agile tries to amplify in teams. That’s why you hear so many agile coaches “harping” on it.

Wrapping Up

I often emphasize Done-Ness as a place for the organization’s leadership team to influence the behavior, focus, and results of their agile teams. I encourage them to get engaged with establishing a deep, broad, and relevant set of criteria for their teams. I ask them to align their DoD to the business, customer, and constraints. I sometimes call them “guardrails” because of my view that they can keep the team “safely on the road” in their delivery of business value and results.

Since I see so many sparsely defined DoDs in the real world, I usually ask organizations and teams to over-define them – risking a bit of prescriptiveness. I’d rather have more clarity than less guiding the teams.

Hopefully this article has helped clarify what done looks like in agile teams. Now go take a look at your own Definition-of-Done and see whether you need to make any adjustments.

Stay agile my friends,
Bob.

