

Simplicity – The Newest Craze in Agile

My motivation for this article isn't to slam or denigrate any of the scaling frameworks (including those that continue to emerge). Full disclosure: I'm a SAFe SPC, and I find significant value in leveraging some framework for Agile at scale.

That being said, I have a great concern that I want to share. I think we’re losing some of the original thinking and approaches from early instances of Agile. The methods grew out of small teams doing big things.

But as we’ve gone forward and introduced agility into larger instances (enterprises, large-scale companies, distributed projects, etc.), I’m afraid that we’re losing the very essence of agility that made it attractive in the first place. Things like:

  • Small teams
  • Simple, focused solutions
  • Just enough
  • Face-to-face collaboration
  • Working code
  • High-value delivery

These seem to have lost their attractiveness. I want to share a series of stories that I think (hope) might shine a light on returning to our Agile roots. Or at the very least, might get you to rethink some things.

First Story – Remember Wordstar?

I doubt whether many of the readers will recognize this program. But at one point, pre-Microsoft Word in the early to mid-1980s, Wordstar was the premier word processor on the planet. It was supported on something called CP/M and later on DOS.

What was amazing about Wordstar, at least to me, was that it fit into 64k of RAM. In fact, it fit into less than that, because it had to share that space with the operating system. Imagine that: a simple but full-function word processor fitting into less than 64k of RAM. Clearly the designers had to make incredibly tough choices about which features were critical to the program. In today's terminology, we would call it an MMP, or Minimal Marketable Product.

Did folks complain about missing features and capabilities? You bet! But that didn't stop a whole generation of programmers from using Wordstar to get their jobs done.

I reminisce about Wordstar and similar programs because I think today we've gotten a bit lazy. Since we have hardware resources to spare, we design bloated products just to prove that we can invent 100 complex ways to do inherently simple things. It's easier than thinking through simple designs and simple solutions to complex problems. When Agile came around, it placed a renewed focus on design simplicity; heck, on simplicity everywhere. Getting to the essence of a problem and solving just that.

I think we could relearn something important by revisiting those software design goals from so long ago…

The Lesson: Wordstar had hardware limitations that forced the developers to minimize the product to an MMP feature set. Today we would do well to imagine that we had a small hardware footprint to encourage us to be creative, innovative, and frugal in our applications.


Second Story – Unix

The beginning of Unix is another one of those minimalist stories. It was developed by a handful of developers in the early 1970s at AT&T Bell Labs. Some of the early Unix contributors were considered super-programmers, but it's still another case where a small group did big things.

As Unix's popularity grew, the software got bigger, of course. However, I still remember the "erector set" nature of the basic Unix commands and how you could "pipe them together" to do more complex tasks. The architecture itself was simple yet elegant, and it largely stayed that way as the system expanded over the years.

In the early 1990s, Linus Torvalds famously released Linux, an open-source, Unix-like operating system. In the beginning, it too was incredibly small. He initially worked on it himself and then with a small group of dedicated maintainers. Again, Linux has grown well beyond its original size constraints. But both systems are clear examples of elegantly designed small systems that had a huge impact on our technology.

The Lesson: Again simplicity, but not only that, the beginning of the realization that a small group of programmers could deliver big, useful things. It didn’t take a team the size of Montana to deliver meaningful products or applications.

Third Story – Bloatware and Pareto

I’ve been using a quote for quite a long time in my classes. I can’t remember where I’ve seen it before, but it goes like this:

Less than 20% of the features of Microsoft Word are actually used by human beings on the planet earth. So how did the other 80% of features get in there if they have no (lean) value?

  • Competitive pressures
  • Marketing checklists
  • Flat out guesses
  • Gold plating
  • Feature creep
  • Youthful enthusiasm

All probably contributed. But at the end of the day, the only determinant of value should be REAL customer usage. Now if only Microsoft would agree to trim Word down to size.

Here’s a supporting quote by Kelly Waters from a 2007 www.allaboutAgile.com blog post:

Anyway, back to my point about the 80/20 rule, Microsoft’s own research found that the average user of Word uses only *8%* of the functionality. That’s 8%! And I wouldn’t mind betting at least 80% of us use the same 8% too! [assertion]. If Microsoft had developed only the important 8% of Word, maybe they could still have captured the same market share? Maybe, maybe not; sadly we will never know.
http://www.allaboutAgile.com/Agile-principle-8-enough-is-enough/

The article also alludes to leveraging the Pareto Principle, otherwise known as the 80/20 Rule, when it comes to mining for the customer features that truly make a difference. Here's an article I shared about Pareto in another context.

The Lesson: Pareto is alive and well and applicable in everything we do in software. We need to continue to look for the 20% of features that deliver 80% or more of the value to our clients. Then deliver that FIRST before we worry about the edge cases.
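
To make the lesson a bit more concrete, here is a minimal sketch of a "Pareto cut" over a backlog. The feature names and usage values are entirely invented for illustration; the idea is simply to sort by measured value and stop once roughly 80% of the total is covered, then build that slice first.

```python
# Hypothetical sketch of a Pareto cut over a backlog.
# Feature names and value numbers are invented for illustration.

features = {
    "compose and send": 400,
    "templates": 250,
    "contact import": 150,
    "basic reporting": 100,
    "a/b testing": 50,
    "custom fonts": 30,
    "emoji picker": 20,
}

def pareto_cut(value_by_feature, threshold=0.80):
    """Return the smallest set of features covering `threshold` of the total value."""
    total = sum(value_by_feature.values())
    selected, covered = [], 0
    for name, value in sorted(value_by_feature.items(), key=lambda kv: kv[1], reverse=True):
        if covered / total >= threshold:
            break
        selected.append(name)
        covered += value
    return selected, covered / total

core, coverage = pareto_cut(features)
print(f"Build first: {core} (~{coverage:.0%} of the measured value)")
```

In this toy example, 3 of the 7 features cover 80% of the value, which is exactly the kind of cut the lesson argues for delivering first.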

Fourth Story – Chrysler C3

The original Chrysler C3 (Chrysler Comprehensive Compensation) project was the inspiration behind the Extreme Programming Agile approach. I wasn't part of the project, but my understanding is that it began as a typical, large-scale, Waterfall project that failed. There were perhaps 50+ folks working on the effort for over a year.

When it was initially cancelled, the project had been a dismal failure. The story goes that Kent Beck made a case for a much smaller team to use a different approach to try and recover the project. The team was about 20% of the original team size.

As you can guess, the team recovered the project and delivered a workable payroll system. But the project never met its overall goals and was ultimately cancelled again in 2000.

This was the project where Kent, Ron Jeffries, and several other famous Agilists earned their chops. It also launched XP as a popular methodology and helped initiate the Agile movement as it exists today.

Many software projects today make use of XP practices even if they're not otherwise Agile. Those early XP experiments in things like pair programming, refactoring, and continuous integration have proven to be incredibly sound ways to build software iteratively.

The Lesson: A small group of focused technical folks can do big things IF they apply the principles and practices of the Agile Manifesto. Bigger isn't necessarily better.

Fifth Story – iContact vs. MailChimp

I worked at iContact from 2009 – 2012. iContact provided a SaaS email marketing application that competed primarily with Constant Contact and MailChimp. At the time, we mostly focused on MailChimp because of their phenomenal customer growth.

Beyond MailChimp being a direct competitor, we were both Agile shops. And I was at the time, and still am, enamored with MailChimp's nimbleness.

In 2012, iContact had nearly 400 employees. Around the same time, MailChimp was reported to have approximately 80 employees. So they were 20% of our size. I just did a LinkedIn search for MailChimp and it showed 106 employees. So they are still relatively small and nimble.

What’s my point?

Well, to put it bluntly, MailChimp cleaned our clock! They kicked our butt in head-to-head comparisons of our SMB email marketing applications. They were quicker to deliver features. They were quicker to change and pivot directions. They were more creative, bringing out a freemium offering that drove tremendous growth, a strategy that we tried to copy and failed.

I’m currently a MailChimp customer, and I’m amazed how insightful they are into the needs of their customers and how frequently they bring out new and, more importantly, useful features.

If you extrapolate from those numbers, we had about 100 people in technology. MailChimp would have had 20-30 people developing, deploying, and supporting their products.

I was at the time, and still am, envious of them. They were truly Agile in spirit and action. They had an incredibly small team that built and evolved a robust email platform that simply rocked in its market space.

The Lesson: We all seem to have a tendency to throw more people at problems, under the impression that we'll achieve our goals faster. However, that often doesn't happen. Instead, focusing a small, Agile team on thoughtful, high-impact client goals seems to be a winning strategy.

BTW: there are numerous other companies that are quite small but do incredible things. One historical example is 37signals, the makers of Basecamp. Another is Menlo Innovations, which focuses on application development.

Final Story – ChannelAdvisor

Here is one final story just to finish whetting your appetite around my article themes.

This comes from my personal experience. During 2007 – 2009 I worked as an Agile coach and leader at a company called ChannelAdvisor here in Raleigh, North Carolina. As with iContact, ChannelAdvisor was an Agile shop (Scrum), and they built a SaaS application suite for eCommerce customers.

We had a large Scrum team focused on developing web search application extensions for our platform. The company was growing, and we were in the habit of growing our Scrum teams rather large before splitting them into two smaller teams. Our search team was approximately 12 people.

A business priority change happened that caused us to split the search team in two – essentially making two Scrum teams of six members each. One continued to develop the search application, and the other was redirected to another component area.

What's interesting about the story is that the search team's velocity didn't change after we split it. They were averaging 25 points per sprint before the split and 24 points per sprint after the split.

I know what you’re thinking…that’s impossible. Well, no, this really happened. So what was going on here?

Simply put, the smaller team was more efficient in delivering software. There were fewer communication channels, fewer hand-offs, more collaboration, and better focus on their goals.
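
A quick back-of-the-envelope check makes this less surprising than it sounds. The number of possible communication channels in a team of n people is n(n-1)/2, so the original 12-person team had 66 potential channels while each 6-person team has only 15. Here is a tiny sketch of that arithmetic (the formula is the standard one; applying it to this story is my own illustration):

```python
# Possible communication channels in a team of n people: n * (n - 1) / 2.
def channels(n):
    return n * (n - 1) // 2

print(channels(12))               # 66 channels in the original 12-person team
print(channels(6))                # 15 channels within each 6-person team
print(channels(6) + channels(6))  # 30 channels total across the two smaller teams
```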

The Lesson: Try to solve your problems with as small a team as possible. Let them form and mature and achieve a steady velocity. Then “feed them” a prioritized backlog. You may just be surprised at how much they can get done IF you get out of their way.

Wrapping Up

I think that all of the scaling hoopla is just that. I don’t believe we have a distributed team problem or a scaling problem in today’s small or large-scale Agile contexts. And huge applications don’t need to be built by 10-50 Scrum teams.

We have an un-scaling problem. We have a boiling the ocean problem. We have a trying to throw too many people at it problem. We have a love of size and scope problem.

We’re looking at problems and creating the most complex solutions we can come up with. We are enamored with:

  • Distributed teams and/or large groups of teams
  • Way too complex architectures and designs
  • Solving problems with project management thinking
  • Management solving problems with “old thinking”
  • Building bloated software with features nobody uses
  • Not truly understanding our clients
  • Allowing business-facing folks to ask for everything
  • Scattershot vision hoping that we eventually hit the target

I can't tell you how often I hear someone explain why his or her systems are complex. They reference the complexity as a badge of honor and a must-have.

I would argue that there is no need for hundreds of developers to build systems. Small teams can do great things. That is, if we allow them to.

That was the essence of agility in the beginning and it still is. I hope this article has inspired you to reduce your teams, break up your products, and take a new look at how you build your software and what you build.

Remember – a handful of Agile teams can do great things. And a handful of product visionaries can guide them towards just enough, lean, and wonderful software.

Stay Agile my friends,

Bob.

Even Business Analysts Need Love

Attention all stakeholders of the solar federation. My personal life has recently tuned me in to pain in a way that makes me realize that the BA's professional life offers great pain. "NO!", you say; yet here is a list of hurts you may well have inflicted on a BA, if you think about it carefully.

Don’t do these things:

Don’t kick the BA off the project just because they made a deliverable that looks like requirements to you even though you didn’t read it (sponsor, sponsor, sponsor). Your (anyone’s) deadline allows only the beginning of organizational learning. The loss of the BA guarantees the end of learning, never mind the teaching that must follow for change management success.

Don’t hire a BA if you know better than they do. You will annoy yourself while amusing the BA, which will only annoy you more.

Don’t give requirements as if they were dictation. If a BA takes your requirements EXACTLY as you give them and does not present an “analyzed” model showing what the requirements might REALLY be, it might even be that your BA does not like you.

Don’t dislike the BA. At worst they are only messengers, and at best they can get you results that you actually want.

Don't withhold pay. Because many BA assignments are "temporary" (see above), many BAs are consultants/contractors/temps. Making them wait long periods to be paid is not good for you OR them.

Don't keep the BA in the dark. They can see how silly you look in the dark.

Don’t yell at, or curse at the BA. Yeah, really. You know who you are, and so does everybody else on the project. Recriminations are not requirements. Not.

Don’t not read the requirements.

Don’t not read the email. Yes, it is OK for an email to have more than one sentence.

Don’t gush over the diagram just because it is nice. Be specific in your gushing, as in “I really like the fact that I can see ALL the redundancies across payment types” or “The pink really highlights just how risky a full cutover is, and how it pays to set up three teams.”

Don't be impressed with nice-looking diagrams unless the content is robust. The cuter the graphics, the less likely the analyst spent time refactoring the process, and the more likely they spent time refactoring the format.

Don’t bring everyone to every meeting. The amount of work progress made in a meeting is inversely proportional to the square of the number of people. Eight people will accomplish 1/16 of the work that 2 might. Enforce the rule by accomplishing actual work at every meeting. This will help you keep meetings small.

Don’t accuse the BA of “blowing your scope”. It isn’t your scope, it belongs to the business. Besides, you got the scope wrong in the first place by rushing to solution, then broke everyone’s spirit by preventing any improvement once the team understood what the scope actually meant.

Don’t rush to solution. Plan the different approaches, from simple/cheap to complex/low chance of success. Use each simple approach to advance the solution by REALLY learning from it. REALLY.

Don't have the BA walked out the door immediately, even when you must let them go. They don't have access to sensitive system functions (are they sysadmins or BAs?), and if what they know is dangerous to you, walking them out is no way to make friends. And you are going to need friends (see the first item above).

Don’t ignore maintenance of the highest level descriptions just because “everyone knows that”. If you are engaged in enterprise systems transformations, everyone is going to go from 5 people to 10 people to 40 people over a couple of years, and then to 20,000 overnight. Be ready for everyone – there will ALWAYS be new people, and a failure to model the highest levels of business requirements and process is the number one cause of failure – the big mistakes happen at the top levels of description.

Don't insist that BAs produce meeting minutes – better they should produce models that represent business thoughts and decisions so stakeholders can see what they said. The alternative is to build based on what was said as captured in the minutes. Then you will hear stakeholders say "I can't say I see what this does for me."

DO learn along with your BA, who is learning your needs and combining them with others that you don’t have time to learn. So many voices channeled through one, neutral, zero attitude model for all to reflect upon. Bliss, you think?

Don't forget to tell me what you think. Since you read this far, your opinion is already 1st percentile. Comments below 🙂


Grooming, Maintaining, or Refining your Backlogs – Practices for Success, Part 1

In 2009 I wrote the first edition of Scrum Product Ownership as a way of helping Product Owners understand their roles and responsibilities better. Before that, it was mostly an exercise in guessing and survival. In 2013, I updated the book with a second edition. In both books I took on the topic of Backlog Grooming.

As it turns out, the term "grooming" is losing its luster in the community, and terms like maintenance and refinement are replacing it. I believe the latest copy of the Scrum Guide uses the term refinement, so I will try to use Backlog Refinement consistently throughout this article. That being said, I really, really like the implications of the term grooming.

Backlogs & Refinement

Why don't we first start with a definition of the Product Backlog? From the July 2013 Scrum Guide, I've captured the following:

The Product Backlog is an ordered list of everything that might be needed in the product and is the single source of requirements for any changes to be made to the product. The Product Owner is responsible for the Product Backlog, including its content, availability, and ordering. 


A Product Backlog is never complete. The earliest development of it only lays out the initially known and best-understood requirements. The Product Backlog evolves as the product and the environment in which it will be used evolves. The Product Backlog is dynamic; it constantly changes to identify what the product needs to be appropriate, competitive, and useful. As long as a product exists, its Product Backlog also exists. 


The Product Backlog lists all features, functions, requirements, enhancements, and fixes that constitute the changes to be made to the product in future releases. Product Backlog items have the attributes of a description, order, estimate and value.

And because this article is about Backlog Refinement, let’s see what the Scrum Guide has to say about it as well:

Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items. During Product Backlog refinement, items are reviewed and revised. The Scrum Team decides how and when refinement is done. Refinement usually consumes no more than 10% of the capacity of the Development Team. However, Product Backlog items can be updated at any time by the Product Owner or at the Product Owner’s discretion.

Higher ordered Product Backlog items are usually clearer and more detailed than lower ordered ones. More precise estimates are made based on the greater clarity and increased detail; the lower the order, the less detail. Product Backlog items that will occupy the Development Team for the upcoming Sprint are refined so that any one item can reasonably be “Done” within the Sprint time-box. Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “Ready” for selection in a Sprint Planning. Product Backlog items usually acquire this degree of transparency through the above described refining activities.

Now that we've explored what a Product Backlog and Refinement are, by process definition, let me share some of my experiences around effective refinement.

12 Tips for Effective Backlog Refinement

  1. Regularly Scheduled Refinement Meetings – I'm a big fan of creating a tempo of regularly scheduled Backlog Refinement meetings within your teams. I usually schedule 1-2 of them a week for an hour each. The entire team is invited and expected to attend. I want everyone to come to the meeting "backlog aware", that is, they've looked at likely refinement candidates before the meeting AND they have thoughts surrounding size, ordering, design, dependencies, strategy, quality, risks, and optimal flow.

    I also recommend that a team member take detailed notes that capture the valuable discussions and next-step decisions that are made. This is invaluable information, and you don't want to lose it. I normally ask members to round-robin the note-taking, which helps keep everyone engaged in the meeting and in the backlog.

  2. Rigorous Prioritization – You must truly reinforce the notion of order or priority in your backlogs. I think of the Highlander movies and the phrase "There can be only one" in this regard, so please don't overload priorities.

    From my perspective there are a variety of factors that should influence priority:

    • Customer value (right problem solved)
    • Business value (revenue generated)
    • Technical value (fosters learning, reduces risk, solid solutions, intelligent workflow)
    • Quality value (mitigated risk or improves quality)
    I look for the team to consider and balance all of these variables when setting priority. For example, it should never be the case that customer value always drives prioritization without consideration of technically sound solutions. (A small, illustrative scoring sketch follows these tips.)

  3. Examine Your Stories Frequently – I often encounter teams who only refine their stories once. In that context they write them, refine the wording, write acceptance tests, estimate them, and order them – all at the same time. I could see doing that for trivial or straightforward stories, but never for complex ones.

    I much prefer an approach where the team "samples" the stories over several refinement meetings, taking each story from concept or idea (Epic) and methodically breaking it down into refined and executable stories. I sometimes recommend to teams that a "good story" should be refined a minimum of 3-4 times during its evolutionary lifetime. And this includes sufficient space in between refinement discussions so that the team has time to think about the story in relation to all of their other work and the project and product goals.

  4. Egg Timer – I usually recommend that teams stay aware of their story refinement velocity, that is, how many stories they discuss in a 1-hour meeting. I often see velocities of 1-2-3 stories, which to me implies over-discussion. I prefer that the team have a goal of "advancing" stories in their refinement meetings and not necessarily completing them in one sitting.

    The real point of a backlog refinement meeting is not to complete stories as quickly as possible, but to advance the understanding and clarity around the stories. As long as the team makes progress, and keeps chipping away at the stories, I’m happy with their progress. You might ask, what is a fair or rough velocity goal? I’m not sure there’s a magic number, but refining a story every 5-6 minutes might be a reasonable goal—so perhaps 10-12 per 1-hour meeting.

  5. The Estimates are NOT the most important thing – We're in the middle of a refinement meeting and leveraging Planning Poker as a means of collaborative estimation. In one case, two developers have been debating for the last 30 minutes whether the story is 5 or 8 points in size. Eventually, the Scrum Master has to move on, and the team still hasn't agreed on the estimate. In another case, the testers on the team think a story is 13 points, but the developers strongly disagree, and the story ends up being estimated at 3 points. After this happens a few times, the testers disengage from estimation and simply acquiesce to the developers on all estimates.

    In both cases, the estimates (numbers) have become the team's focal point. I would strongly argue that the estimates are much less valuable than the DISCUSSION that the process of Planning Poker estimation enables.

    Who cares if it’s a 5 vs. an 8? At the end of the day, pick a reasonable, relative value and move on. BUT, have rich, deep, collaborative discussions across the team about the story. Hear everyone’s experience. Hear their concerns. Listen to what’s said and unsaid. And as a team, come to a fair and balanced relative estimate for “all the work” to move the story to meet your Definition of Done. That’s the value that the estimates drive.
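
Coming back to tip 2 for a moment: here is a small, purely hypothetical sketch of one way a team could make the "balance all four value factors" discussion explicit, by combining 1-5 ratings into a single weighted ordering score. The weights, ratings, and story names are invented for illustration; the tips above don't prescribe any scoring scheme, only that the factors be considered together.

```python
# Hypothetical sketch: combine 1-5 ratings on the four value factors from
# tip 2 into a single weighted score for ordering a backlog.
# Weights, ratings, and story names are invented for illustration only.

FACTORS = {
    "customer_value": 0.35,
    "business_value": 0.30,
    "technical_value": 0.20,
    "quality_value": 0.15,
}

def priority_score(ratings):
    """Weighted sum of the team's 1-5 ratings for each value factor."""
    return sum(weight * ratings.get(name, 0) for name, weight in FACTORS.items())

backlog = {
    "Add CSV export to reports": {"customer_value": 5, "business_value": 3,
                                  "technical_value": 2, "quality_value": 2},
    "Replace legacy auth module": {"customer_value": 2, "business_value": 2,
                                   "technical_value": 5, "quality_value": 4},
}

# Order the backlog by score, highest first ("there can be only one" top item).
for story in sorted(backlog, key=lambda s: priority_score(backlog[s]), reverse=True):
    print(f"{priority_score(backlog[story]):.2f}  {story}")
```

The point isn't the particular numbers; it's that every factor gets a voice in the ordering conversation rather than a single dimension driving everything.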

Wrapping Up

I hope I’ve established some baseline thinking on your part regarding the practice of Backlog Refinement. From my perspective, it’s much more than simply developing agile requirement lists. It’s also the planning and strategy part of agile execution and it makes all the difference in how well your sprints are delivered.

In the next post, I'll deliver tips 6-12 and wrap up this topic. I hope you "tune in" for the remaining tips, until then…

Stay agile my friends,
Bob.


Technical Product Ownership

I hear this challenge over and over again from Product Owners. They have little to no problem writing functional user stories, but then…

Bob, the team is placing tremendous pressure on me to write technology centric User Stories. For example, stories focused on refactoring, or architectural evolution, or even bug fixing. While I’d love to do it, every time I give them to the team and we discuss them, they nit-pick the contents and simply push back saying they can’t estimate them in their current state.

So I’m stuck in a vicious cycle of rinse & repeat until the team gets frustrated and pulls an ill-defined story into a sprint. And this normally “blows up” the sprint. What can I do?

I think the root cause of this problem is that the company views the Product Owner role as the final arbiter of user stories, meaning they need to write them all. I feel that's an anti-pattern, but the question remains: what to do in this situation?

I've seen several clients apply approaches that significantly helped in handling what I refer to here as technical user stories. Let me share a couple of real-world stories (not user stories, mind you 😉) that should help you envision some alternatives.

Two Stories

Creating a Role of Technical Product Owner

Around 2008 I was working with a client who had developed a SaaS product offering on the Microsoft .NET stack. However, they had used an open source database for some of their core functionality, in addition to SQL Server for the majority. The open source database started suffering from increasing performance degradation over time, and the engineering team decided it was time to replace it and normalize everything onto SQL Server.

While this was a sound business and technical decision, the work definition, design and planning needed to be executed within the Scrum framework that the organization had been using for several years. On the surface that wasn’t a problem, but it was a rather large-scale infrastructural project and the product organization and teams hadn’t tackled something like that yet within Scrum.

The other problem was that this was a highly technical project and the team's Product Owner was not technical. They came up with the approach of creating a Technical Product Owner and selected one of the development managers to fill the role.

This role was a “strong partner” with the Functional Product Owner for the team. Over time, they began to draw a distinction between the Technical Product Owner and the Functional Product Owner in discussions and work focus.

They setup some rules for the two to collaborate on the same set of Backlogs as they directed work towards their team(s):

  • The Functional Product Owner was the primary PO for the team;
  • The Technical PO was in an advisory or assistant capacity;
  • Both PO’s needed to understand “the other side” of the backlog so they could easily represent the overall workflow, investment decision-making, and back each other up;
  • At a release planning level, they would guide their backlogs and teams towards the agreed upon percentages of investment for functional vs. technical change;
  • They would each address questions for their stories during the releases’ sprints;
  • They would sign-off on their own stories; often the Definition of Done was different between Functional and Technical stories.

I vividly recall how wonderfully the two Product Owners collaborated on the project. I think that's essential for creating this "dual role" and having it work. There needs to be professionalism, trust, and respect across the two. It takes a really strong partnership for the results to be balanced and for the team to see a "consistent & united front" with respect to backlog priority.

It took approximately 3-4 months for the database replacement project to complete. After that, the Technical Product Owner reverted to his old role. But as an organization, this notion continued for larger-scale, technically focused work, even if it was only a small set of stories. They typically made software functional managers or architects into Technical Product Owners when the need arose, which seemed to generally make sense.

Including Architecture and Sound Design

Another client was focused on developing a SaaS email application. They had about 10 Scrum teams working in parallel across the application's code base, which was built on the LAMP stack. Organizationally, they had a small group of UX engineers who were guiding the functional evolution of the product. In fact, at the time they were "re-facing" the product and trying to simplify and update the user experience.

Their customer base was growing quite rapidly, so they were experiencing performance issues as the architecture was stretched beyond its limits. This created tension to inject both the UX redesign efforts and foundational architectural upgrades across the teams' Product Backlogs.

The client CTO was also the head of a small group of architects. He was struggling with how to 'guide' architecture across 10 teams in a consistent way, while integrating with the overall company product roadmaps. He initially tried doing that through simple influence: getting involved with the teams and informally asking them to take on architectural tasks. However, he became frustrated when the tasks were inconsistently delivered and deployed. Most often there was a lack of cohesion and integration as architectural elements were implemented across teams.

He finally struck on a recipe that seemed to work. He took on the role of Chief Technical Product Owner. He consolidated all of their technical work intentions, from the software architecture, test architecture, and UX design perspectives, and placed them on a single technical backlog. He and his team members worked hard to write solid stories, break them down (with the development teams), and stage them in the right technical flow (priority order). He considered cross-team dependencies and deployment efforts as part of it as well.

Another important part of his strategy was to guide what I'll call "look ahead" within the teams. This was largely done by creating the right number of User Story Research Spikes and scheduling them appropriately so that the designers and architects could work with the teams on research & prototyping. The scheduling of this was critical, not too early and not too late, so as not to derail the client's functional commitments on the roadmap.

Then he met with our Chief Product Owner and her team of functional Product Owners, and integrated the technical product backlog with the functional or business facing product backlog. They did this at a roadmap level and also at an individual team backlog level. Over time they refined this approach and it worked quite well. The teams received a backlog that was ‘balanced’ across the architecture, design, and functional perspectives. If they had technical questions or needed help, they would engage the architects. The architects also shared in the “acceptance” of the stories, but it was considerably less formal than the first story I shared.

Technical Product Ownership (TPO)

Clearly, in these two stories, notions of technical product ownership evolved. In the end, it truly doesn't matter whether the TPO is a partner or an external adviser. What's important is that the "voice" of architecture, design, and technical flow is well represented to the team via product roadmaps and individual backlogs.

Informal TPO

There are probably two categories of this. The first is when you have a smattering of technical stories within a product backlog and someone needs to help the Product Owner define, manage, and accept them. In these cases, I think just asking someone on the team to serve in an informal TPO role is a fair and reasonable response.

They would partner with the PO and consolidate the backlog together. The TPO would lead grooming and maturation of the technical stories and the PO would manage the rest.

Formal TPO

This is an extension of the first case. I usually find it needed when there are large-scale technical initiatives in play OR a consistent flow of architectural stories trying to make it into the product's evolution. Usually this is for "older" products where the flow is related to accrued Technical Debt. In either case, there are more technical stories flowing through the team/backlog, and they need consistent time and attention.

Chief Technical Product Owner

And for more organization-wide guidance, the second story introduced the notion of UX and/or architecture group heads taking on the role of road-mapping architecture and design areas via user stories on product backlogs. This effectively creates two backlogs, a functional product backlog and a technical product backlog, that then need to be strategically merged into a single, thoughtful whole.

In this case, the two product views are merged and then ‘fed’ into their respective teams. The key here is the grooming process that surfaces dependencies and research spike needs so that the integration of the two and the execution dynamics are thoughtfully planned.

It’s vitally important that the teams themselves are involved in this process as soon as possible. Usually this happens when executing the spikes and via Release Planning meetings/activity. But each leader also needs to share their high-level strategies and goals with the teams on a periodic basis as well.

Wrapping Up

One of the largest challenges associated with Technical Product Ownership isn't really technology-driven. It's the tension between the business wanting to get as much functionality into the product as quickly as possible and the need for technical debt reduction and technical evolution within the product. And, how do I say this politely, usually the functional side wins, which drives more and more technical debt and more pressure for improvement from that side.

So the Technical Product Owner needs to be someone who is balanced, whose recommendations are trusted by all sides of the organization, and who can communicate the WHY behind the technology evolution strategies.

They also need to be able to "partner" with their Functional Product Owner counterparts. Indeed, they must acknowledge that the functional side is always in the "driver's seat", as there can be ONLY ONE Product Owner per team.

I’m incredibly interested if any readers have similar experiences to share in how they’ve handled “technically heavy” work in product backlogs. Please add your stories and approaches as comments.

As always, thanks for listening,
Bob.


10 Indicators That You Don’t Understand Agile Requirements

I presented at a local professional group the other evening. I was discussing Acceptance Test-Driven Development (ATDD), but started the session with an overview of User Stories. From my perspective, the notion of User Stories was introduced with Extreme Programming in the late 1990s, so they've been in use for well over a decade. Mike Cohn wrote his wonderful book, User Stories Applied, in 2004. So again, we're approaching 10 years of solid information on the User Story as an agile requirement artifact.

My assumption is that most folks nowadays understand User Stories, particularly in agile contexts. But what I found at my meeting is that folks are still struggling with the essence of a User Story. In fact, some of the questions and the level of understanding shocked me. But when I thought about it, most if not all of the misunderstanding came from using user stories while treating them like traditional requirements. That experience inspired me to write this article.

In a small way, I hope it advances the state of understanding surrounding the proper use of User Stories. So here are ten indicators that you might not be looking at your stories the right way:

#1) Writing a complete story all at once – trying to estimate and get it “right” the first time

Stories evolve as they move closer to sprint execution and as they are decomposed and further refined. I consider it a good heuristic that a team should "visit" each story (and its offspring) about 3-4 times before it makes it into a Sprint or iteration. This would include grooming meetings and also discussions across the team.

I really want teams to take ownership of their backlogs in real-time—thinking about upcoming stories, themes, their interrelationships, and design & testing strategies. Having this occur on a daily basis creates the emergent nature of agile requirements, design, and planning.

#2) If the Product Owner asks for something not articulated on the story, you split the story and make them wait for the “additional scope”

The expectation here is that once the story enters a Sprint, there is no "scope creep" allowed; everything needed up-front definition. If the Product Owner forgot something or wants to react to some implemented functionality, tough. They get exactly what they asked for, and changes get deferred to a new story in the next Sprint.

I disagree. User Stories are intentionally ambiguous; intentionally incomplete. I use the heuristic that stories should enter the Sprint at 70% clarity and exit the Sprint at 100% clarity. The point of the 70% is that there should be ambiguity – so things are discussed, clarified and determined during the Sprint. The requirement emerges based on conversations. This should not be penalized or deferred; instead it’s the “way of things” in agile requirements.

#3) Allowing stories with very minimal or trivial Acceptance Tests to enter your Sprints

One of the more important parts of the User Story is actually on the back of the card—the Acceptance Tests or Acceptance Criteria. Establishing these conditions for acceptance early on helps focus the team on the purpose and the important bits of a story.

They help the developers design for what the customer values. They help the testers ensure the story's functionality meets expectations. They help the Product Owner verify acceptance and Done-ness and then move on. They speak to functional and non-functional requirements.

A bad example of a set of acceptance tests would be a single one that says: "the story is accepted when the Product Owner signs off on the story". You can't imagine how often I see variations of this singular acceptance criterion. I have a heuristic that says a story should have a minimum of 5 acceptance criteria, and at least one of them should be non-functional. If it's a technical story, I look for most of the acceptance tests to be non-functional.
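
To make that heuristic concrete, here is a small, hypothetical sketch: a "story lint" check that flags stories with fewer than five acceptance criteria or with no non-functional criterion. The story, its criteria, and the check itself are invented for illustration; they're not from any real backlog or tool.

```python
# Hypothetical sketch: encode the "at least 5 acceptance criteria, at least
# one of them non-functional" heuristic as a simple backlog check.
# The story and its criteria are invented for illustration.

story = {
    "title": "Password reset via email",
    "acceptance_criteria": [
        ("functional", "A reset link is emailed to the registered address"),
        ("functional", "The link expires after 60 minutes"),
        ("functional", "An expired link shows a clear error and a retry option"),
        ("functional", "The old password no longer works after a reset"),
        ("non-functional", "The reset email is sent within 30 seconds under normal load"),
    ],
}

def check_story(story):
    """Return a list of warnings when a story misses the acceptance-criteria heuristic."""
    criteria = story["acceptance_criteria"]
    warnings = []
    if len(criteria) < 5:
        warnings.append(f"only {len(criteria)} acceptance criteria (aim for 5 or more)")
    if not any(kind == "non-functional" for kind, _ in criteria):
        warnings.append("no non-functional acceptance criterion")
    return warnings

print(check_story(story) or "story passes the heuristic")
```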

#4) Writing ALL stories with the [As a – I want – So that] format, even if it takes an hour or more to do so

Anything that you do prescriptively, "just because we're supposed to", is probably a bad idea. Context matters in software teams—especially in agile teams. The standard format for writing User Stories is incredibly useful. In fact, I normally want to see teams leveraging the outer clauses of persona and business "why", simply because they add so much value and nuance to the story (and the discussions).

However, I’ve seen teams often succumb to the tyranny of the story and feel compelled to write all of their stories in this format, even when they don’t truly fit. For example, I don’t know if technical stories truly fit this format. Sometimes all I want for a technical story is a definition and then strong emphasis on the acceptance tests (or how we’ll know when the story is complete). And that seems to be “good enough”.

At the end of the day, the story needs to have “words” that have meaning for the team to interpret, estimate, and execute towards delivering what the customer needs. It’s as simple as that!

#5) Believing User Stories are only for functional work (features, customer facing functionality) and not for anything else

There's an illness in User Story writing that I want to cure. It goes by many names, but most popularly it's called feature-itis. It's an insidious disease that creeps up on you. The most significant symptom is backlogs that contain only feature-centric User Stories. There are no others to be found. No stories focused on infrastructure, technical debt, bug fixing, test automation, refactoring, design, research spikes, nothing.

While the customers LOVE this disease, it ultimately does them a disservice. It creates narrowly focused products that are typically very fragile. The investment in quality in all dimensions simply isn't there. Here's a heuristic for your backlog: all work should be crafted as stories, and the mix should always be balanced beyond just features. If you lack discipline, perhaps use the 80:20 rule to help: 80 percent features and 20 percent internal investment.

#6) Having a goal to get as many stories done as possible within each Sprint; so heads-down get to work

What should be the goal of each Sprint? Story count, points produced, meeting the Sprint Goal? All of these are commonly treated as the primary targets. At my workshop, a gentleman used this example. He asked, what if:

The product owner changed their mind during the sprint. They thought they wanted a blue background, but then asked for green. Or they asked for green when they hadn't specified a color in the first place?

His suggested response was to split the story: deliver what was clearly defined, and then rework the additional scope in a later story. To him, delivering the story as planned was the primary goal, i.e., meeting the plan or gaining the points.

I had to disagree with him. I didn’t look at this event as a ‘fault’. I looked at it as the normal way of things as the story requirements emerged. The answer should be: listen to the customer, change it to green, and deliver the story. From my perspective, the customer “drives” in agile teams. And we should primarily measure ourselves by delivering customer value.

#7) Feeling that stories have to be complete (100% understood, written, explained) before they enter a Sprint for execution

Teams are afraid to say “I don’t know” and “We need to do some additional research, prototyping, and experimentation to more fully understand this story and how to decompose it”. Instead, they write based on assumptions. The focus is on filling in the template and going through the motions to get a complete story. Very often this takes an incredible amount of time and the team “drops into” design discussions.

What's insidious about this is that it gives the team a false sense of security. As in: they now know everything about the story, and the rest is simply "implementation details". But that was never true in Waterfall requirements, and it's certainly not true here.

Remember the 70% clarity heuristic from #2.

#8) Teams “hold onto” stories until the very end of a Sprint, demo them, and receive feedback on the acceptance of the stories

This reinforces that view of a complete story that is developed – then tested – then demonstrated – then accepted serially through the sprint, which is probably not the best strategy. I much prefer to be demoing and interacting around stories all along their evolution within the Sprint.

In fact, there is the notion of a “Triad” in User Story collaboration; the three players being the Developer(s), Tester(s), and the Product Owner / Customer(s) all collaborating around the story. Questions are raised, clarifications are made, and the stories evolve. Only upon exit of the Sprint is the story at 100% clarity.

#9) The Product Owners must write the stories until the team “accepts” them as well-defined

I've seen this pattern over and over within many less experienced agile teams. Since the Product Owner 'owns' the backlog, it's their responsibility and theirs alone to write all of the stories. And beyond that, their work isn't "done" until the team accepts the story as meeting their vision of completeness and clarity.

I say hogwash to this! Yes, the Product Owner is the final arbiter of the backlog. However, the WHOLE TEAM needs to contribute to the backlog. And the team is not some gating factor for “perfect stories”. That’s simply a Waterfall requirements mindset seeping back into the team’s behavior. Instead, everyone is responsible for getting their stories ready for execution and delivery.

#10) Thinking that estimates are only good for planning purposes

One of the best ways to move forward in understanding and decomposing a User Story is to throw out an estimate via planning poker. Do it as soon as you can. Then have that wonderful discussion across the team about what they're thinking behind the estimates. Teams too often debate the nuances of a story that's too large or complex for far too long. I like to estimate as quickly and as often as possible. It usually leads to insights into what to do with the story: break it down, run a research spike, leave it alone, have some offline discussions, etc.

Always remember, the most important ‘C’ in the 3-‘C’s of User Stories is the “conversation”. That’s the same goal for planning poker; less about the estimates and much more about the conversation(s).

Wrapping Up

I think the primary point that's misunderstood about User Stories is that they evolve towards clarity; you have the freedom to NOT define everything in advance, but to explore the requirements. The requirements, understanding, design, coding, testing, integration, defect repairs, and maturity of each story EMERGE over time. In other words, we're lean in our thinking, leading to just enough, just-in-time definition and delivery.

Now if we can only leave those ingrained Waterfall requirement behaviors behind us!

As always, thanks for listening,
Bob.