
I Just Want to be “Accepted”

Have you heard of the “3 C’s” of User Stories? It’s a heuristic to remind story writers of the three key aspects of a User Story:

  • Card
  • Confirmation
  • Conversation

There’s quite a bit of debate as to what the most important ‘C’ is. Often in my classes I talk about “conversation” or collaboration being the most critical ‘C’. But to be honest, I have a hard time making a priority distinction between the three components of a user story.

In this article I want to explore an area that is often overlooked. It’s the confirmation-C. I sometimes refer to it as:

  • Acceptance Criteria;
  • Acceptance Tests;
  • Mini-UAT for each story;
  • Or Confirmation Tests.

“Acceptance tests” seems to be the most commonly used term; it leads, for example, to the notion of ATDD, or Acceptance Test Driven Development, which can be a powerful side effect of how you approach writing your stories.

So let’s start with an introductory example.

Here’s what I would call an Epic with several related stories derived from it. We don’t have any acceptance tests yet, but we’re starting to develop a related set of epic-level stories.

  1. As a writer, I want to allow for text font changes; 20-30 different font types, colors, so that I can highlight different levels of interaction with my readers
  2. …Allow for various attributes: underline, bolding, sub/super script, italicize, etc…
  3. Allow for a form of headings; 3 primary levels
  4. Allow for indenting of text
  5. Allow for lists (numbered and bulleted); single level first, then move to multi-level
  6. Allow for alignment – right/left justified, centered, variable
  7. Allow for do/un-do to include ongoing text activities
  8. Establish a paragraph model (or a variety of models)
  9. Show/hide ‘hidden’ formatting marks
  10. Establish the notion of a “style set” that can be used to establish a collection of favorites

Let’s expand upon the second Epic:

As a Writer, I want to allow for various attributes: underline, bolding, sub/super script, italicize, etc. so that I can highlight different levels of interaction with my readers

We’ll start writing acceptance tests for this story. I have a preference for using “Verify that…” phrasing when writing my acceptance tests.

  1. Verify that underline works
  2. Verify that bold toggles for all font / color types
  3. Verify that all attributes can be applied in combination with one another
  4. Verify that font size changes do not impact attributes
  5. Verify that paragraph boundaries are not affected by attribute changes
  6. Verify that attributes carry over into pre-text and post-text; for example, if we bold a numbered-list item’s text, the number should be bolded as well

You’ll notice in this case that the acceptance criteria are all functionally focused. I don’t think that’s necessarily bad, but it would be nice to include some significant error cases as well. For example, let’s say that sub/superscript are not allowed in headers and footers for some reason. Then I’d expect the following acceptance criterion to be added to the list:

  7. Verify that super- and subscript are not allowed in Header or Footer areas and that an error message is displayed in-line and on the error console
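
Since these criteria are phrased as verifications, they also translate fairly directly into ATDD-style automated checks. Here’s a minimal sketch in Python with pytest; the texteditor module, the Editor class and its methods, and the FormattingError exception are hypothetical stand-ins for whatever editing API your team actually builds, not a real library.

# Minimal ATDD-style sketch. The "texteditor" module, Editor class, and
# FormattingError exception are hypothetical placeholders -- substitute
# whatever API your team actually builds.
import pytest
from texteditor import Editor, FormattingError  # hypothetical module


def test_bold_toggles_for_all_font_types():
    # Acceptance test 2: bold toggles for all font/color types.
    editor = Editor()
    editor.insert_text("hello")
    for font in editor.available_fonts():
        editor.set_font(font)
        editor.apply("bold")
        assert editor.has_attribute("bold")
        editor.apply("bold")  # applying again toggles it off
        assert not editor.has_attribute("bold")


def test_superscript_rejected_in_header():
    # Acceptance test 7: sub/superscript not allowed in headers or footers,
    # with an in-line error and an entry on the error console.
    editor = Editor()
    editor.move_to("header")
    editor.insert_text("page title")
    with pytest.raises(FormattingError):
        editor.apply("superscript")
    assert "superscript" in editor.error_console.last_message

The point isn’t the particular API; it’s that each “Verify that…” statement hands the team a concrete check it can automate.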

I hope you see the clarity and value that solid acceptance tests can add to your story writing. I always describe them as helping the 3 Amigos who are collaborating around story writing:

  • From a Development perspective: they should share design hints with the developer(s), exploring what’s important and the business logic behind each feature. They should capture non-functional requirements as well, for example performance requirements.
  • From a Testing perspective: they should share some of the ‘How’ and ‘Why’ behind the customer’s usage and intentions. The tester(s) should use this information to construct a series of tests that exercise the most important bits surrounding customer value.
  • From a Product Owner perspective: they are a rich communication landscape to augment the ‘C’ard of the user story. Typically the PO writes them in a grooming session with their team, so they are collaboratively explored and defined. They also serve as an acceptance checklist when the team delivers a ‘Done’ story for Product Owner sign-off.

This combination of roles (perspectives) surrounding the acceptance criteria helps to ensure the customer deliverable meets the need AND that you have a rich set of “tests” to confirm it.

Readiness Criteria

I often recommend establishing story readiness criteria that all of your stories must meet before they are “ready” to enter a sprint. From an acceptance test perspective, I’m looking for something like the following:

  • At least 3-5 acceptance tests per story, and no more than 10
  • Of those, 1-3 focused on functional behavior
  • Of those, 1-2 focused on non-functional behavior
  • Include both positive and negative tests, for example error conditions
  • They should be as quantitative as possible
  • Each test should be independent
  • As many as 10 acceptance tests for a story is workable, but more than that starts looking like a set of functional test cases
  • They should avoid being “process” oriented, for example listing Product Owner sign-off as one of the acceptance tests

Now I’ve been exhaustive here in defining readiness criteria for the article; in real life, only a few of these would be used. I’ve found that establishing readiness criteria can be incredibly helpful in improving the quality of your sprint execution (see the references for a link to an article exploring them in more detail).
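
To make that checklist a bit more concrete, here’s a small illustrative sketch in Python of the kind of readiness check a team could run over its backlog before sprint planning. The Story and AcceptanceTest shapes are hypothetical, and the thresholds simply mirror the criteria listed above.

# Illustrative readiness check. The Story/AcceptanceTest shapes are
# hypothetical -- adapt them to however your backlog tool exports stories.
from dataclasses import dataclass, field


@dataclass
class AcceptanceTest:
    text: str
    functional: bool  # functional vs. non-functional behavior
    negative: bool    # negative/error-condition test?


@dataclass
class Story:
    title: str
    tests: list[AcceptanceTest] = field(default_factory=list)


def readiness_problems(story: Story) -> list[str]:
    """Return a list of readiness gaps; an empty list means 'ready'."""
    problems = []
    if not 3 <= len(story.tests) <= 10:
        problems.append("expect roughly 3-10 acceptance tests")
    if not any(t.functional for t in story.tests):
        problems.append("need at least one functional test")
    if not any(not t.functional for t in story.tests):
        problems.append("need at least one non-functional test")
    if not any(t.negative for t in story.tests):
        problems.append("need at least one negative/error-condition test")
    return problems

A story with anything on the problems list goes back to refinement rather than into the sprint; the more qualitative criteria (independence, avoiding process-oriented tests) still need a human eye.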

How do Acceptance Tests help?

First of all, I don’t think you can accurately estimate a story without identifying its acceptance criteria. Epics, in my experience, often don’t have them and yet get estimated anyway, but we’re not committing to those estimates; the Epic will be re-estimated as we break it down and refine the subsequent child stories. And those child stories, of course, will have acceptance tests.

So they help immensely in determining the true scope of a story and getting more valid or accurate estimates.

They also help in story decomposition. I often find that a fully defined Epic, with acceptance tests in place, will be easier to decompose. Often the acceptance tests will give hints or boundaries where you can break up the story.

Going back to our example story: if the story were too big, we might start breaking it apart along the boundaries identified by the acceptance tests. For example, core attributes versus handling them in paragraphs and in headers & footers.

Product Ownership

But ultimately they are “for” the PO. They are the mechanism to communicate priority-based business value. And they are the measure of a story being complete. I personally like the approach where a team brings stories to the Product Owner whenever they’re complete; the PO checks out the story, including running all of the acceptance tests, and then signs off on the story being done. So from that point of view, the acceptance criteria are conditions of done-ness for each and every story.

Meta Acceptance

I have heard of a notion of Meta-Acceptance Criteria that cross-cut all of the stories an organization will be writing. These are typically domain requirements. For example, I worked at EMC in the past, and there was a lightly documented meta-requirement that no function (use case, story, feature) should corrupt data. Since EMC produced data storage devices, this made incredibly good sense.

So, did we need to mention this explicitly on each and every user story? We decided that the answer was no: we would document it as a meta-acceptance criterion, and teams would, when it applied, consider it as part of the story’s acceptance testing.

I often find organizations capturing these sorts of meta-requirements as non-functionally oriented acceptance criteria, particularly in the areas of security and performance.

Technical User Stories

Another area where a focus on acceptance tests will really help your story writing is with technically focused stories. These could be stories focused on refactoring, infrastructure, tooling, bug fixes, testing infrastructure, virtually anything that is of technical value to the team but isn’t directly part of a customer-facing feature.

With these stories, the criteria are focused on expanding the design understanding of the story. Here’s an example technical story:

As a user requesting authentication, 
I need to be able to login via the web app,
so that I can manage my account details via the web

Let’s spend a little time writing the acceptance tests; here are a few ideas:

  1. Verify that all web-based requests get through the service layers and receive a reply within 2 seconds
  2. Verify that HTTP, Radius SecureID, and LDAP authentication protocols are supported
  3. Verify that the authentication timeout triggers at 25 seconds
  4. Verify that 2-phase questions (3 in total) are presented every 3-5 login attempts
  5. Verify that 2-phase questions are applied after a 3x password entry failure
  6. Verify that password entry retry limit is set at 5x

I hope you can see how useful the acceptance tests are for this technical story, and that the example gives you an idea of the distinction between the two types of stories.
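
As an aside, several of these criteria are quantitative enough to automate directly, which is part of what makes them useful. Here’s a hedged sketch, again in Python with pytest; the auth_client fixture and its login() API are hypothetical placeholders for your actual authentication service.

# Sketch of automating two of the quantitative criteria above. The
# auth_client fixture and its login() API are hypothetical placeholders.
import time


def test_login_reply_within_two_seconds(auth_client):
    # Acceptance test 1: web-based requests receive a reply within 2 seconds.
    start = time.monotonic()
    response = auth_client.login("alice", "correct-password")
    assert response.ok
    assert time.monotonic() - start < 2.0


def test_lockout_after_five_failed_attempts(auth_client):
    # Acceptance test 6: password entry retry limit is set at 5.
    for _ in range(5):
        auth_client.login("alice", "wrong-password")
    response = auth_client.login("alice", "correct-password")
    assert response.locked_out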

Wrapping Up

I was inspired to write this post by a colleague at Velocity Partners, Martin Acosta. He wrote me a note asking for references or help that would emphasize the role of acceptance tests. As I reflected on my writing, I realized that this was a gap I hadn’t previously addressed.

Martin, I hope you find some value in this post. And anyone “out there”, if you have examples of user stories and acceptance tests, please add them as comments. I’d love to see more real world examples.

And don’t forget, the primary purpose of the confirmation tests is to inspire, drive, initiate: CONVERSATIONS!

Stay agile my friends,
Bob.


References

• Gojko Adzic’s book, Specification by Example, is a great place to go for a deeper and broader treatment
• The user story used in the first example was borrowed from this blog post
• Jeff Langr and Tim Ottinger talk about acceptance test characteristics that they’ve identified on their Pragmatic Programmer reference cards
• The technical user story in the second example was borrowed from this blog post
• Here’s a blog post on Readiness Criteria
• And finally, here’s a blog post on the 3 Amigos