
Author: Robert Galen

Sprint Reviews – Learnings from the Movies

My wife and I saw two movies over the holidays. One was The Hobbit: The Desolation of Smaug and the second was The Hunger Games: Catching Fire. Both were the second episode in a three-part series, and I suspect both were shot at the same time as their concluding episodes.

No, this is not a movie review, although both were “reasonable” follow-ups to their opening episodes. But both also shared a similarity—one that bugged my wife and me very much.

Both of them left you (the audience, the customer) hanging at the end. And I’m not talking about a subtle ending that hinted at a future plot direction. I’m talking about an abrupt, no-warning-at-all cut that existed only to set up the next (and hopefully final) installment.

It was so abrupt that it was painful, and it tainted our impression of both movies overall. In fact, that’s all we’ve talked about since: not so much the positives or the things we enjoyed, but how rude the ending was, and how blatantly it disregarded the experience of the audience in order to tack on another episode.

But enough about that. Yes, we were unhappy, but that’s not the point of this article. It struck me that many sprint reviews are like these movies, and I want to explore some things to consider for your sprint reviews or demos. Call it lessons learned from (or inspired by) these two movies, and movies in general.

Here are 5 things to consider when you’re executing your sprints and planning your demos:

Tell a Story

Some of the most challenging demos are those where the team insists on showing everything they’ve done in the sprint AND where the work is disparate and not well connected. I’d much rather the team tell a cohesive story with their efforts for the sprint and tie it to their sprint goal. Another part of storytelling is sharing the “behind the scenes” challenges and efforts. How did you plan? What got in your way? How did the team swarm around the work, and how was adversity met? These are all questions I might have. I’m often as interested in the teamwork and trade-offs as I am in the results themselves.

Practice

I’m sure that in most movies the actors don’t just “dive in” and make up their scenes and dialogue as they go. There are storyboards, planning, and of course, the script. All go into the practice and preparation that make the story unfold in a smooth fashion. I’m not suggesting that an agile team prepare needlessly for the sprint demo, but they should be ready. Have a “script” and do a dry run if you need to. Make sure your audience experiences a strong and thoughtful demo rather than your unpreparedness.

Focus on What’s Important

I’ve often heard folks complain that a movie didn’t exactly follow the book: that much of the story had been modified or cut out. I often wonder what a movie would look like if the script exactly followed the book. How long and how boring would it be? Movie adaptations of books must focus on the important threads of the story. Themes matter, priority matters, and you have to be comfortable with trimming things down to their essence via “just enough” and “just in time” thinking.

Connect the Dots – Coming Attractions

One of my “pet peeves” for sprint demos is stand-alone delivery. I like it when teams connect the results of the current sprint to past AND future deliverables, aligning with whatever release plans, roadmaps, or strategy your organization has. Another part of connecting the dots is looking ahead as it relates to architecture, design, dependencies, and integrated testing. Show attendees that you’re not only delivering in the small (the sprint), but that you have the “big picture” in mind.

Endings Matter

And back to my two latest movies: please realize that endings matter. You want to leave your audience fully understanding what you’ve delivered and the “big hairy business why” behind it. And you want to tease them with the future by connecting the dots forward. BUT you want to do it with thoughtfulness and subtlety. Provide insufficient connection, and they won’t know what’s next, so they might not come to your next “performance”. But be too heavy-handed, and they might forget everything else but the hard sell at the end.

And here’s something I like to do in my sprint demos that movies can’t really do: gain immediate feedback from the audience. You might try the Fist-of-Five as a quick technique to see how everyone feels about the show 😉

Wrapping Up

I continue to be amazed by the soundness of basic agile principles, ceremonies, and tactics, and by how much variability something as simple as a Sprint Review or demo can have, from doing it poorly to doing it incredibly well.

This is where the team comes into play in collaborating around and planning high-impact sprint reviews. One of the biggest mistakes a team can make is falling into a tempo of doing the same old, by-rote sprint reviews, and then wondering why attendees are falling asleep or missing in action.

Consider each Demo to be a unique presentation or movie. Give it its due and consider the above when crafting each performance. Your customers will appreciate you for it.

Stay agile my friends,
Bob.

BTW: here are links to more thorough discussions I’ve had on Sprint Reviews and demos:

  • Slide deck
  • Webinar
  • Blog post

All are somewhat interrelated.


I Just Want to be “Accepted”

Have you heard of the “3 C’s” of User Stories? It’s a heuristic to remind story writers of the three key aspects of a User Story:

  • Card
  • Confirmation
  • Conversation

There’s quite a bit of debate as to what the most important ‘C’ is. Often in my classes I talk about “conversation” or collaboration being the most critical ‘C’. But to be honest, I have a hard time making a priority distinction between the three components of a user story.

In this article I want to explore an area that is often overlooked. It’s the confirmation-C. I sometimes refer to it as:

  • Acceptance Criteria
  • Acceptance Tests
  • Mini-UAT for each story
  • Confirmation Tests

“Acceptance tests” seems to be the most commonly used term. It leads, for example, to the notion of ATDD, or Acceptance Test Driven Development, which can be a powerful outgrowth of how you approach writing your stories.

So let’s start with an introductory example.

Here’s what I would call an Epic with several related epic-level stories derived from it. We don’t have any acceptance tests yet, but we’re starting to develop the related set of epic-level stories.

  1. As a writer, I want to allow for text font changes; 20-30 different font types, colors, so that I can highlight different levels of interaction with my readers
  2. …Allow for various attributes: underline, bolding, sub/super script, italicize, etc…
  3. Allow for a form of headings; 3 primary levels
  4. Allow for indenting of text
  5. Allow for lists (numbered and bulleted); single level first, then move to multi-level
  6. Allow for alignment – right/left justified, centered, variable
  7. Allow for do/un-do to include ongoing text activities
  8. Establish a paragraph model (or a variety of models)
  9. Show/hide ‘hidden’ formatting marks
  10. Establish the notion of a “style set” that can be used to establish a collection of favorites

Let’s expand upon the second Epic:

As a Writer, I want to allow for various attributes: underline, bolding, sub/super script, italicize, etc. so that I can highlight different levels of interaction with my readers

We’ll start writing acceptance tests for this story. I have a preference for using “Verify that…” phrasing when writing my acceptance tests.

  1. Verify that underline works
  2. Verify that bold toggles for all font / color types
  3. Verify that all combinations of all attributes can be combined
  4. Verify that font size changes do not impact attributes
  5. Verify that paragraph boundaries are not affected by attribute changes
  6. Verify that attributes carry across pre-text and post-text; for example, if we bold numbered list text, the number should be bolded as well

You’ll notice in this case that the acceptance criteria are all functionally focused. I don’t think that’s necessarily bad, but it would be nice to put in some significant error cases as well. For example, let’s say that sub/superscript is not allowed in headers and footers for some reason. Then I’d expect the following acceptance criterion to be added to the list:

  7. Verify that super- and subscript are not allowed in Header or Footer areas and that an error message is displayed in-line and on the error console
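
To make the “confirmation” concrete, here’s one way acceptance test #2 could become an executable, ATDD-style check. This is a minimal sketch in Python using pytest conventions; the TextRun model is a toy I invented for illustration, since a real suite would drive the actual editor under test:

```python
# Hypothetical sketch: acceptance test #2, "Verify that bold toggles for
# all font / color types", expressed as an executable check.
# TextRun is an invented in-memory stand-in for the real editor API.
from dataclasses import dataclass, field

@dataclass
class TextRun:
    text: str
    font: str = "Serif"
    color: str = "black"
    attributes: set = field(default_factory=set)

    def toggle(self, attribute: str) -> None:
        # Toggling adds the attribute if absent, removes it if present.
        if attribute in self.attributes:
            self.attributes.remove(attribute)
        else:
            self.attributes.add(attribute)

def test_bold_toggles_for_all_font_and_color_types():
    for font in ("Serif", "Sans", "Mono"):
        for color in ("black", "red", "blue"):
            run = TextRun("hello", font=font, color=color)
            run.toggle("bold")
            assert "bold" in run.attributes
            run.toggle("bold")
            assert "bold" not in run.attributes
```

The point isn’t the toy model; it’s that each “Verify that…” line maps naturally onto a named, automatable test.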

I hope you see the clarity and value that solid acceptance tests can add to your story writing. I always refer to them as helping the 3 Amigos who are collaborating around story writing:

  • From a Development perspective: they should share design hints with the developer(s) exploring what’s important and the business logic behind each feature. They should share non-functional requirements as well, for example performance requirements.
  • From a Testing perspective: they share some of the ‘How’ and ‘Why’ behind the customer’s usage and intentions. The tester(s) should use this information to construct a series of tests that exercise the most important bits surrounding customer value.
  • From a Product Owner perspective: they are a rich communication landscape to augment the ‘C’ard of the user story. Typically the PO writes them in a grooming session with their team—so they are collaboratively explored and defined. They also serve as an acceptance checklist when the team delivers a ‘Done’ story for Product Owner sign-off.

This combination of roles (perspectives) surrounding the acceptance criteria helps to ensure the customer deliverable meets the need AND that you have a rich set of “tests” to confirm it.

Readiness Criteria

I often recommend establishing story readiness criteria that all of your stories must meet before they are “ready” to enter a sprint. From an acceptance test perspective, I’m looking for something like the following:

  • No fewer than 3-5 acceptance tests per story; no more than 10 per story
  • Of those, 1-3 that are focused on functional behavior
  • Of those, 1-2 that are focused on non-functional behavior
  • Look for positive and negative tests; for example error conditions
  • They should be as quantitative as possible
  • Each test should be independent
  • I can see as many as 10 acceptance tests for a story; but more starts looking like functional test cases
  • They should avoid being “process” oriented, for example, Product Owner sign-off as one of the acceptance tests

Now, I’ve been exhaustive here in defining readiness criteria for the article; in real life, only a few of these would be used. I’ve found that establishing readiness criteria can be incredibly helpful in improving the quality of your sprint execution (see the references for a link to an article exploring them in more detail).
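
As a thought experiment, readiness criteria like these are simple enough to check mechanically. Here’s a hedged sketch in Python; the Story shape, the category labels, and the thresholds are all assumptions made up for illustration, not a standard:

```python
# Invented sketch: flag stories whose acceptance tests don't yet meet
# sample readiness criteria (the counts and categories are illustrative).
from dataclasses import dataclass, field

@dataclass
class AcceptanceTest:
    text: str
    kind: str  # assumed labels: "functional", "non-functional", "negative"

@dataclass
class Story:
    title: str
    tests: list = field(default_factory=list)

def readiness_issues(story: Story) -> list:
    issues = []
    n = len(story.tests)
    if n < 3:
        issues.append(f"only {n} acceptance tests; aim for at least 3")
    if n > 10:
        issues.append(f"{n} tests; more than 10 starts to look like functional test cases")
    kinds = [t.kind for t in story.tests]
    if "functional" not in kinds:
        issues.append("no functional acceptance tests")
    if "non-functional" not in kinds:
        issues.append("no non-functional acceptance tests")
    if "negative" not in kinds:
        issues.append("no negative / error-case tests")
    return issues

story = Story("Attributes", [AcceptanceTest("Verify that underline works", "functional")])
print(readiness_issues(story))  # shows what's missing before sprint entry
```

In practice the value is the conversation the checklist provokes in grooming, not the automation itself.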

How do Acceptance Tests help?

First of all, I don’t think you can accurately estimate a story without identifying its acceptance criteria. Epics, in my experience, often don’t have them and are still estimated. But we’re not committing to those estimates, and the story will be re-estimated as we break it down and refine the subsequent child stories. And those, of course, will have acceptance tests.

So they help immensely in determining the true scope of a story and getting more valid or accurate estimates.

They also help in story decomposition. I often find that a fully defined Epic, with acceptance tests in place, is easier to decompose. Often the acceptance tests give hints about boundaries where you can break up the story.

Going back to our example story: if it were too big, we might start breaking it apart along the boundaries identified by the acceptance tests. For example, core attributes versus handling them in paragraphs, headers, and footers.

Product Ownership

But ultimately they are “for” the PO. They are the mechanism to communicate priority-based business value. And they are the measure of a story being complete. I personally like the approach where a team brings stories to the Product Owner whenever they’re complete; the PO checks out the story, including running all of the acceptance tests, and then signs off on the story being done. So from that point of view, the acceptance criteria are conditions of done-ness for each and every story.

Meta Acceptance

I have heard of a notion of Meta-Acceptance Criteria that crosscut all of the stories an organization will be writing. These are typically domain requirements. For example, I worked at EMC in the past, and there was a lightly documented meta-requirement that no function (use case, story, feature) should corrupt data. Since EMC produced data storage devices, this made incredibly good sense.

So, did you need to mention this physically on each and every user story? We decided that the answer was no. We would document it as a meta-acceptance criterion, and teams would, when it applied, consider it as part of the story’s acceptance testing.

I often find organizations describing these meta-requirements for more non-functionally oriented acceptance criteria, particularly in the areas of security and performance.

Technical User Stories

Another area where a focus on acceptance tests will really help your story writing is with technically focused stories. These could be stories focused on refactoring, infrastructure, tooling, bug fixes, testing infrastructure, virtually anything that is of technical value to the team but isn’t directly part of a customer-facing feature.

With these stories, the criteria are focused toward expanding the design understanding of the story. Here’s an example technical story:

As a user requesting authentication, 
I need to be able to login via the web app,
so that I can manage my account details via the web

Let’s spend a little time writing the acceptance tests. Here are a few ideas (a small runnable sketch of the first one follows the list):

  1. Verify that all web-based requests get through the service layers and receive a reply within 2 seconds
  2. Verify that HTTP, Radius SecureID, and LDAP authentication protocols are supported
  3. Verify that the authentication timeout performs at 25 seconds
  4. Verify that 2-phase questions (3 in total) are presented every 3-5 login attempts
  5. Verify that 2-phase questions are applied after a 3x password entry failure
  6. Verify that password entry retry limit is set at 5x
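
As a hedged illustration of test #1, here’s roughly what an automated check of the 2-second reply might look like. Everything specific here (the endpoint URL, the payload fields, the expected status code) is invented for the sketch; none of it comes from a real system:

```python
# Invented sketch of acceptance test #1: a login request should get
# through the service layers and reply within 2 seconds.
import time
import requests

LOGIN_URL = "https://example.test/api/login"  # hypothetical endpoint

def test_login_reply_within_two_seconds():
    payload = {"username": "demo", "password": "secret"}  # hypothetical fields
    start = time.monotonic()
    response = requests.post(LOGIN_URL, json=payload, timeout=5)
    elapsed = time.monotonic() - start
    assert response.status_code == 200  # assumed success code
    assert elapsed < 2.0, f"reply took {elapsed:.2f}s, expected < 2s"
```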

I hope you can see how useful the acceptance tests are for this technical story and that the example gives you an idea of the distinction between the two types.

Wrapping Up

I was inspired to write this post by a colleague at Velocity Partners, Martin Acosta. He wrote me a note asking for references or help that would emphasize the role of acceptance tests. As I reflected on my writing, I realized that this was a ‘gap’ I hadn’t previously addressed.

Martin, I hope you find some value in this post. And anyone “out there”, if you have examples of user stories and acceptance tests, please add them as comments. I’d love to see more real world examples.

And don’t forget, the primary purpose of the confirmation tests is to inspire, drive, initiate: CONVERSATIONS!

Stay agile my friends,
Bob.


References

  • Gojko Adzic’s book, Specification by Example, is a great place to go for a deeper and broader treatment
  • The user story used in the first example was borrowed from this blog post
  • Jeff Langr and Tim Ottinger talk about acceptance test characteristics they’ve identified on their Pragmatic Programmer reference cards
  • The technical user story in the second example was borrowed from this blog post
  • Here’s a blog post on Readiness Criteria
  • And finally, here’s a blog post on the 3 Amigos

Technical User Stories – What, When, and How?

It happens to me on a weekly basis. I’m teaching a class on how to write User Stories, usually as part of my Product Owner workshop. We’re happily writing stories for an iPad application simulation. Typically, halfway through the exercise, someone raises their hand because they’re struggling with the format of a purely technical story. Quite often they don’t know how to frame the “user” clause and are stuck there in their writing.

My first recommendation is often to tell them to skip it. I tell them that the “As a” and the “So that” clauses are usually quite different for technically related stories. I just ask them to quantify the need (technically), in clear English with perhaps a couple of sentences, and then move on.

In fact, I ask them to spend more time on the confirmation or acceptance part of the story, because I find that this is an area that needs “development” for more technical stories. Usually they’re still frustrated because they want to write “good” stories, but they reluctantly move on. But I’m getting ahead of myself a bit. Let’s focus first on a simple definition for Technical User Stories so that we’re all on the same page.

Technical User Stories Defined

A Technical User Story is one focused on the non-functional support of a system. For example, implementing back-end tables to support a new function, or extending an existing service layer. Sometimes they are focused on classic non-functional concerns: security, performance, or scalability.

Another type of technical story focuses more towards technical debt and refactoring. And still another might focus on performing technical analysis, design, prototyping and architectural work. All of these are focused towards underlying support for base functional behavior.

The other difference is that these stories usually need to be defined with someone who understands the technical design and implications of the product stack. Sometimes a traditional Product Owner has the skill to do it, but most often they do not. So this implies the need for team members to “step up” and take ownership of these sorts of stories—not only at the point of definition, but also if there are questions, clarifications, and for sign-off when the stories are delivered.

Functional User Story vs. Technical User Story

The basic User Story is structured toward functional descriptions of system behaviors. Most often the user drives them, i.e., they align with a usage scenario a customer would follow in leveraging the application or system. Technical stories, on the other hand, are often driven to support this upper-level behavior. I often call them infrastructural stories.

For example, suppose there were a User Story to allow for logon and authentication of a user to a web-based Credit Union application, but no infrastructure for web-based authentication existed within the Credit Union customer infrastructure. It exists for the kiosk-based ATMs, but it would be a new function for the web app. To me this exposes infrastructure that is required to support the base functionality of the Login story. One way to approach this would be to build it as a function, requirement, or dependency within the base story. While that would work, it’s really not part of the function, and we’re overloading the story.

Another way to approach it would be to write a Technical User Story. In clear English, the story might look like:

We need to extend the kiosk authentication code in our security services layer to include a new authentication mechanism for web-based (browser) applications. It needs to include 2-layer authentication: passwords and user-centric questions.

I could force a typical User Story format on the same story, but I’m not sure it helps here:

As a user requesting authentication,
I need to be able to login via the web app, 
so that I can manage my account details via the web

While this does define the functionality, it doesn’t address the underlying infrastructure. You could easily make a note of this on the story, but I personally like the clarity of the technically phrased story. In either case, I believe we need to spend our time writing the acceptance tests. If they’re important for Functional User Stories, I believe they’re doubly important for Technical User Stories. Let’s define a few (a small sketch of pinning one of them down as code follows the list):

  • Verify that all web-based requests get through the service layers and receive a reply within 2 seconds
  • Verify that HTTP, Radius SecureID, and LDAP authentication protocols are supported
  • Verify that the authentication timeout performs at 25 seconds
  • Verify that 2-phase questions (3 in total) are presented every 3-5 login attempts
  • Verify that 2-phase questions are applied after a 3x password entry failure
  • Verify that password entry retry limit is set at 5x
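
Some of these rules are precise enough to pin down with a tiny executable model before the real implementation exists. Here’s a self-contained sketch of the retry-limit rule from the last test; the LoginThrottle class is invented purely to make the rule concrete:

```python
# Invented sketch: an in-memory model of "password entry retry limit
# is set at 5x", plus a check that the account locks on the 5th failure.
class LoginThrottle:
    MAX_ATTEMPTS = 5  # limit taken from the acceptance test

    def __init__(self) -> None:
        self.failures = 0
        self.locked = False

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.MAX_ATTEMPTS:
            self.locked = True

def test_account_locks_after_five_failed_attempts():
    throttle = LoginThrottle()
    for _ in range(4):
        throttle.record_failure()
        assert not throttle.locked  # still allowed through 4 failures
    throttle.record_failure()
    assert throttle.locked  # locked on the 5th failure
```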

I hope you can see how useful the acceptance tests are for this technical story. I hope this example at least gives you an idea of the distinction between the two types.

Mining for Technical Stories

Technical User Stories are often forgotten during backlog maintenance or grooming activity. The Product Owner and the team more easily gravitate toward the functionality and defer the technical infrastructure to later.

This happens for new applications (defining base architecture and design) and ongoing maintenance efforts (extending architecture and refactoring) alike.

One of the best ways to expose technical stories is to perform a Story Brainstorming Workshop as defined by Mike Cohn. I would also include end-to-end Release Planning as another effective tactic. When you take an end-to-end or holistic view of the work to deliver the entire project, the technical stories often emerge from the discussions.

You can read more about Release Planning and User Story Brainstorming Workshops in the references at the end of the article.

Pay attention to Acceptance Criteria

Very often your acceptance criteria or tests give you hints about technical stories. For example, the acceptance criteria above:

  • Verify that 2-phase questions (3 in total) are presented every 3-5 login attempts
  • Verify that 2-phase questions are applied after a 3x password entry failure

give a solid hint about decomposing them out of the base story and perhaps creating another Technical User Story focused on a sub-service related to handling 2-phase questions. I could easily see doing this, particularly if the estimates on the base story are relatively large.

Of course it would create a dependency between the base authentication story and this one, but that might be worthwhile from an implementation and testing perspective. In the end, it’s a decision for the team.

Types of Technical User Stories

Sometimes it’s useful to identify different types of technical stories. Mostly because it gets the team thinking at different levels about all of the needs they might have to properly implement within the application.

  1. Product Infrastructure – stories that directly support requested functional stories. This could include new and/or modified infrastructure. It might also identify refactoring opportunities, but driven from the functional need.
  2. Team Infrastructure – stories that support the team and their ability to deliver software. Often these surround tooling, testing, metrics, design, and planning. It could imply the team “creating” something or “buying and installing” something.
  3. Refactoring – these are stories that identify areas that are refactoring candidates. Not only code needs refactoring, but also it can often include designs, automation, tooling, and any process documentation.
  4. Bug Fixing – clusters or packages of bugs that can be repaired together to reduce overall repair and testing time. So this is more of an efficiency play.
  5. Spikes – research stories that will result in learning, architecture & design, prototypes, and ultimately a set of stories and execution strategy that will meet the functional goal of the spike. Spikes need to err on the side of prototype code over documentation as well, although I don’t think you have to “demo” every spike.

Strategy

Joe Little talks a lot about this in his Release Planning writing—how release planning is really an effort to get the team (and the organization) to move from the tactical (sprint-level) to strategic (release-level) planning. It’s a fairly common agile anti-pattern for teams to fail to look “down the road” in their backlog grooming and backlog management in order to see where they’re going technically. This then drives up rework, unmet dependencies, confusion, and worst of all, disappointed stakeholders when their expectations are unmet.

While functional strategy is important (meaning how we will integrate and demonstrate customer-facing stories and features), I find that technical strategy is even more important. Consider this the architectural and design workflow for the development effort. Technical strategy should address functional and non-functional support, end-to-end demonstrability, technical risk, and testing concerns.

Do you DEMO Technical User Stories?

I can’t tell you how often I get “pushback” from my students and coached teams on this point. Usually teams don’t want to demo Technical User Stories. The rationales (excuses) normally fall into the following areas:

  1. There is no UI so we can’t demo it;
  2. They (the stakeholders) only care about the functional software. They don’t care about infrastructure or Technical User Stories;
  3. It’s going to be a very odd demo to show this behavior off IF we don’t have the supporting functional software completed at the same time;
  4. It’s not going to be worth the effort to demonstrate our meeting non-functional requirements (acceptance) for the story.

While I clearly acknowledge that it’s often harder to demonstrate Technical User Stories, I think the payback is worth the investment and effort. Often stakeholders trivialize technical stories, discounting them as “fluff” or low-to-no-cost work that surrounds the functionality. Team members, meanwhile, know that technical stories are often quite challenging and consume a large part of the overall project effort.

Demoing the stories and talking about the complexity, effort, and results can truly help narrow this gap for your stakeholders.

Wrapping Up

I actually don’t like separating User Stories into types. So the distinction of Functional vs. Technical to me is sort of artificial. I’d much rather teams simply evolve “all of the stories” necessary to fully deliver “on the goals of a project or release”. So simply put, some of the backlog stories will be:

  • Crucial functionality
  • Supplemental functionality
  • Technical supporting stories
  • Infrastructure – both functional and technical
  • Tooling or architecture & design stories
  • Research spikes

But in the end, it’s the Product Owner’s and the team’s job to define a robust set of stories that align with the customers’ needs and value proposition, and then deliver toward those goals. The product backlog is simply a roadmap that guides the team toward that goal, with continuous adjustments all along the way.

Stay agile my friends,
Bob.



Technical Product Ownership

I hear this challenge over and over again from Product Owners. They have little to no problem writing functional user stories, but then…

Bob, the team is placing tremendous pressure on me to write technology centric User Stories. For example, stories focused on refactoring, or architectural evolution, or even bug fixing. While I’d love to do it, every time I give them to the team and we discuss them, they nit-pick the contents and simply push back saying they can’t estimate them in their current state.

So I’m stuck in a vicious cycle of rinse & repeat until the team gets frustrated and pulls an ill-defined story into a sprint. And this normally “blows up” the sprint. What can I do?

I think the root cause of this problem is that the company views the Product Owner role as the final arbiter of user stories, meaning they need to write them all. I feel that’s an anti-pattern, but the question remains: what to do in this situation?

I’ve seen several clients apply approaches that significantly helped in handling what I refer to here as technical user stories. Let me share a couple of real-world stories (not user stories, mind you 😉) that should help you envision some alternatives.

Two Stories

Creating a Role of Technical Product Owner

Around 2008 I was working with a client who had developed a SaaS product offering on the Microsoft .NET stack. However, they had used an open source database for some of their core functionality, in addition to SQL Server for the majority. The open source database suffered from increasing performance degradation over time, and the engineering team decided it was time to replace it and normalize everything to SQL Server.

While this was a sound business and technical decision, the work definition, design and planning needed to be executed within the Scrum framework that the organization had been using for several years. On the surface that wasn’t a problem, but it was a rather large-scale infrastructural project and the product organization and teams hadn’t tackled something like that yet within Scrum.

The other problem was that this was a highly technical project, and the team’s Product Owner was not technical. They came up with the approach of creating a Technical Product Owner and selected one of the development managers to fill the role.

This role was a “strong partner” with the Functional Product Owner for the team. Over time, they began to draw a distinction between the Technical Product Owner and the Functional Product Owner in discussions and work focus.

They set up some rules for the two to collaborate on the same set of backlogs as they directed work toward their team(s):

  • The Functional Product Owner was the primary PO for the team;
  • The Technical PO was in an advisory or assistant capacity;
  • Both POs needed to understand “the other side” of the backlog so they could easily represent the overall workflow and investment decision-making, and back each other up;
  • At a release planning level, they would guide their backlogs and teams towards the agreed upon percentages of investment for functional vs. technical change;
  • They would each address questions for their stories during the releases’ sprints;
  • They would sign-off on their own stories; often the Definition of Done was different between Functional and Technical stories.

I vividly recall how wonderfully the two Product Owners collaborated on the project. I think that’s important for creating this “dual-role” and having it work. There needs to be professionalism, trust, and respect across the two. I liken it to a really strong partnership in order for the results to be balanced and so the team sees a “consistent & united front” with respect to backlog priority.

It took approximately 3-4 months for the database replacement project to complete. After that, the Technical Product Owner reverted to his old role. But as an organization, the notion continued for larger-scale, technically focused work, even if it involved only a small set of stories. They typically made functional software managers or architects into Technical Product Owners when the need arose, which generally made sense.

Including Architecture and Sound Design

Another client was focused on developing a SaaS eMail application. They had about 10 Scrum teams working in parallel across the application’s code base, which was based on the LAMP stack. Organizationally, they had a small group of UX engineers who were guiding the functional evolution of the product. In fact, at the time they were “re-facing” the product, trying to simplify and update the user experience.

Their customer base was growing quite rapidly, so they were experiencing performance issues as the architecture was stretched beyond its limits. This created tension to inject both the UX redesign efforts and foundational architectural upgrades across the teams’ Product Backlogs.

The client CTO was also the head of a small group of architects. He was struggling with how to ‘guide’ architecture across 10 teams in a consistent way while integrating with the overall company product roadmaps. He initially tried doing that through simple influence: getting involved with the teams and informally asking them to take on architectural tasks. However, he became frustrated when the tasks were inconsistently delivered and deployed. Most often there was a lack of cohesion and integration as architectural elements were implemented across teams.

He finally struck on a recipe that seemed to work. He took on the role of Chief Technical Product Owner. He consolidated all of their technical work intentions, from software architecture, test architecture, and UX design perspectives, and placed them on a single technical backlog. He and his team members worked hard to write solid stories, break them down (with the development teams), and stage them in the right technical flow (priority order). He considered cross-team dependencies and deployment efforts as part of it as well.

Another important part of his strategy was to guide what I’ll call “look-ahead” within the teams. This was largely done by creating the right number of User Story Research Spikes and scheduling them appropriately, so that the designers and architects could work with the teams on research & prototyping. The scheduling was critical, not too early and not too late, so as not to derail the client’s functional commitments on the roadmap.

Then he met with their Chief Product Owner and her team of functional Product Owners and integrated the technical product backlog with the functional, or business-facing, product backlog. They did this at a roadmap level and also at an individual team backlog level. Over time they refined this approach, and it worked quite well. The teams received a backlog that was ‘balanced’ across the architecture, design, and functional perspectives. If they had technical questions or needed help, they would engage the architects. The architects also shared in the “acceptance” of the stories, but it was considerably less formal than in the first story I shared.

Technical Product Ownership (TPO)

Clearly, notions of technical product ownership evolved in both of these stories. In the end, it truly doesn’t matter whether the TPO is a partner or an external adviser. What’s important is that the “voice” of architecture, design, and technical flow is well represented to the team via product roadmaps and individual backlogs.

Informal TPO

There are probably two categories of this. The first is where you have a smattering of technical stories within a product backlog, and someone needs to help the Product Owner define, manage, and accept them. In these cases, I think just asking someone on the team to serve in an informal TPO role is a fair and reasonable response.

They would partner with the PO and consolidate the backlog together. The TPO would lead grooming and maturation of the technical stories and the PO would manage the rest.

Formal TPO

This is an extension of the first case. I usually find it needed when there are large-scale technical initiatives in play OR a consistent flow of architectural stories trying to make it into the product’s evolution. Usually this is the case for “older” products, where the flow is related to accrued Technical Debt. In either case, there are more technical stories flowing through the team/backlog, and they need consistent time and attention.

Chief Technical Product Owner

And for more organization-wide guidance, the second story introduced the notion of UX and/or architecture group heads taking on the role of road-mapping architecture and design areas via user stories on product backlogs. This creates, in effect, two backlogs (a functional product backlog and a technical product backlog) that then need to be strategically merged into a single, thoughtful whole.

In this case, the two product views are merged and then ‘fed’ into their respective teams. The key here is the grooming process that surfaces dependencies and research spike needs so that the integration of the two and the execution dynamics are thoughtfully planned.

It’s vitally important that the teams themselves are involved in this process as soon as possible. Usually this happens when executing the spikes and via Release Planning meetings/activity. But each leader also needs to share their high-level strategies and goals with the teams on a periodic basis as well.

Wrapping Up

One of the largest challenges associated with Technical Product Ownership isn’t really technology-driven. It’s the tension between the business wanting to get as much functionality into the product as quickly as possible and the need for technical debt reduction and technical evolution within the product. And, how do I say this politely—usually the functional side wins, which drives more and more technical debt and more pressure for improvement from that side.

So the Technical Product Owner needs to be someone who is balanced, whose recommendations are trusted by all sides of the organization, and who can communicate the WHY behind the technology evolution strategies.

They also need to be able to “partner” with their Functional Product Owner counterparts. Indeed, they must acknowledge that the functional side is always in the “driver’s seat”, as there can be ONLY ONE Product Owner per team.

I’m incredibly interested if any readers have similar experiences to share in how they’ve handled “technically heavy” work in product backlogs. Please add your stories and approaches as comments.

As always, thanks for listening,
Bob.


Pareto and You – Separating the Wheat from the Chaff

I can’t recall when I first came upon the Pareto Principle. I think it might have been when I was studying for my Six Sigma Green Belt. But I’m unsure. I know I was operating as a QA Director at the time, because most of my example uses for it surrounded testing and defects. Nonetheless, it’s probably been over 15 years.

That being said, I don’t think I hear people “considering” Pareto enough in their day-to-day activity, so I thought I’d bring it up and remind everyone of the Pareto Principle, or 80:20 Rule, and its implications for software engineering in general and agile teams in particular.

Basics

In 1906, Italian economist Vilfredo Pareto created a mathematical formula to describe the unequal distribution of wealth in his country, observing that twenty percent of the people owned eighty percent of the wealth. In the late 1940s, Dr. Joseph M. Juran inaccurately attributed the 80/20 Rule to Pareto, calling it Pareto’s Principle. While it may be misnamed, Pareto’s Principle or Pareto’s Law as it is sometimes called, can be a very effective tool to help you manage effectively.

Where It Came From

After Pareto made his observation and created his formula, many others observed similar phenomena in their own areas of expertise. Quality Management pioneer, Dr. Joseph Juran, working in the US in the 1930s and 40s recognized a universal principle he called the “vital few and trivial many” and reduced it to writing. In an early work, a lack of precision on Juran’s part made it appear that he was applying Pareto’s observations about economics to a broader body of work. The name Pareto’s Principle stuck, probably because it sounded better than Juran’s Principle.

As a result, Dr. Juran’s observation of the “vital few and trivial many”, the principle that 20 percent of something always are responsible for 80 percent of the results, became known as Pareto’s Principle or the 80/20 Rule.

–Quoted from About.com

Implications

Let me give you a couple of scenarios that illustrate “80/20 in action”:

  • If you’re testing a software application, then 80% of the bugs will surface from 20% of the application’s components.
  • If you’re counting costs, then 80% of the cost of a Toyota Prius will be contained in 20% of the component parts.
  • Continuing the Prius example, 80% of the weight will be contained in 20% of the component parts as well. And if we’re putting them in storage, there will be a warehouse-space equivalent.
  • Back to software: 80% of the technical complexity (perhaps call it risk as well) resides in 20% of an application’s components.
  • And so on…

I really like Juran’s wording around “the vital few”. The 20% turns out to be the interesting case and, once we find it, we can adjust our views to handle it much differently than the 80%.
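
If you have historical data, finding the “vital few” is a small exercise. Here’s a sketch in Python: given bug counts per component (the sample numbers are invented), it returns the smallest set of components that accounts for roughly 80% of the bugs:

```python
# Sketch of a simple Pareto analysis over invented defect data.
def vital_few(counts: dict, threshold: float = 0.80) -> list:
    """Return the components that together account for >= threshold of bugs."""
    total = sum(counts.values())
    running, selected = 0, []
    for component, bugs in sorted(counts.items(), key=lambda kv: -kv[1]):
        selected.append(component)
        running += bugs
        if running / total >= threshold:
            break
    return selected

bug_counts = {"parser": 120, "auth": 90, "search": 70, "ui": 15,
              "reports": 10, "export": 8, "settings": 5, "help": 2}
print(vital_few(bug_counts))  # ['parser', 'auth', 'search'] - 3 of 8 components
```

Here 3 of 8 components (about 38%, so not a literal 20%) cover roughly 88% of the bugs, which is the spirit of the rule rather than its exact arithmetic.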

Disclaimer

Now of course the numbers aren’t quite that precise and I don’t want you to build your every action around or upon Pareto. But making it a part of your analysis and thinking has served me well for years in focusing towards what truly matters.

Agile Implications

Now let’s get around to some of the implications or examples within agile teams:

Backlogs & Product Ownership

  • 20% of the User Stories probably need some sort of “research spike” in order to sort through the technical implications and ambiguity.
  • 20% of the User Stories (functional work) probably contain 80% of the customer value. So find them and do those first.
  • 20% of the User Stories (non-functional work) probably need expanded Acceptance Criteria to better guide the confirmation of completeness.
  • 20% of the User Stories need to be groomed multiple times (discussed, broken down, estimated, explored) before they become “ready” for sprint-execution.
  • 20% of the Features probably drive 80% of the customer usage.
  • 20% of the Features will contain 80% of the stakeholder & customer driven change.

Technical Risk

  • 80% of the technical complexity is in 20% of the component work a team is taking on. Find it and handle it differently: with designs and design reviews, for example, and with teamwork and pairing.
  • The estimates for 20% of the more complex User Stories will be inaccurate or contain more variance. Consider this when estimating.
  • 20% of the backlog will have strong architectural implications.
  • 20% of the backlog will have cross-team technical dependencies.
  • 20% of the application will contain 80% of the technical debt. Or will be attractive targets for refactoring.
  • 20% of the application will require 80% of the maintenance activity.

Planning

  • 20% of the Release Plan will contain 80% of the risk.
  • 20% of a Sprint Plan (backlog) will contain 80% of the value, 80% of the risk, 80% of the swarming opportunity.
  • 20% of the Sprint Plan (backlog) will contain 80% of the testing activity, testing work, testing risk, bugs/rework.
  • 20% of the overall work will take up 80% of the time; I wonder if that has anything to do with “90% Done Syndrome”?
  • 20% of the team’s work will result in 80% of the “blocking issues”.

Quality & Testing

  • 20% of the User Stories will contain 80% of the bugs.
  • 20% of the User Stories will contain 80% of the testing complexity and/or repeated testing risk.
  • 80% of the User Stories or Features need less testing than you might originally think—think risk-based testing here.
  • Your test strategies and plans ought to include the 80/20 Rule.
  • 20% of the defect repairs will contain 80% of the defect rework.
  • 20% of your tests will take 80% of the time to run; find these and automate them…then go to the beach.

These lists were not intended to be exhaustive. Rather, they are intended to get you thinking about the implications of the Pareto Principle in your daily agile journey.

Wrapping Up

Now all of that being said, there IS a challenge in using the 80/20 Rule.

It’s finding the 20%! It’s not always evident where it is.

Let’s take the bug example. It clearly aligns with my experience that 80% of the bugs “cluster” around a small percentage of the code in every application I’ve ever tested. Let’s call that 20%. So from a testing strategy and planning perspective, 80% of my effort (testing hours) should be focused there. However, finding or predicting those defect clusters isn’t that easy. If I’m presumptuous and think I can predict them all, then I will most likely have wasted some time and missed some critical areas. So blind use of Pareto isn’t in your best interest, nor is it prudent.

However, you should constantly be thinking of Pareto sweet spots in your daily work. It aligns nicely with the Agile Manifesto principles, Lean thinking, and common sense.

One final request: please add comments to this post with other “Pareto scenarios” that you can think of within agile contexts. I’d love to build on the examples I provided.

Stay agile my friends,
Bob.


Quick references:
1. http://management.about.com/cs/generalmanagement/a/Pareto081202.htm
2. http://betterexplained.com/articles/understanding-the-pareto-principle-the-8020-rule/