
Author: Robert Galen

Grooming your Agile Backlogs for Success

In this blog post I want to share some guidelines for creating sound Product Backlogs for your agile teams. While this is certainly the realm of Product Managers or Product Owners, it often falls to the BA to assist with or ‘own’ the Backlog details within agile teams.

In general, I find that teams spend too little time grooming. This leads to problems like:

  • Painfully long sprint planning meetings
  • Little thought given to designs
  • Poor planning and execution
  • Lack of creativity when solving business problems
  • Poor forecasting

Think of backlog grooming as something beyond simple requirement definition. It’s inclusive of project planning, design & architecture, and strategy development. So spending some time here is a great investment—both in short-term sprint execution and in longer-term release strategy development.

Product Backlog & Grooming ‘Quality’ Checklist

  1. The Scrum Master facilitates the grooming meeting, not the Product Owner driving everything. The core Scrum roles come nicely into play:
    1. Scrum Master as facilitator, process guide, and coach
    2. Product Owner as business needs and value driver, defining the what and NOT the how (design) or how long (time)
    3. Team as a partner with the Product Owner in the real-time evolution of the Product Backlog
  2. Backlog Length – the backlog should contain sufficient detailed items to entertain your team for a release, using team velocity and your release tempo as a multiplier, and sufficient epic items to accommodate 2 releases (however you define ‘release’ in your organizational context).
    1. Another rule of thumb is to target / limit the Backlog to somewhere between 50 – 100 User Stories.
    2. Every backlog item should have a distinct, thoughtful priority order.
  3. Grooming – run grooming meetings 2x a week. Remember the 10% investment guidance provided by Schwaber – so 4 hours per team member (individually and in meetings) per sprint week.
    1. Another guideline is to have every User Story iteratively explored with the team a minimum of 4x before sprint execution:
      1. As a high-level epic, at least a release before execution
      2. As a story or set of stories 3-4 sprints away from execution
      3. As a story 1-2 sprints away from execution
      4. As a story right before execution
  4. For more complex work, new functionality, hard refactoring, etc., ensure that sub-teams are identified to do off-line, collaborative grooming of these stories, focusing on:
    1. Early design discussion
    2. Identifying story workflow & breakdown
    3. Capturing challenges & risks

Always leave sufficient work notes behind, either in the story or on a wiki. Usually I capture this as a SPIKE. Remember two things about spikes:

    1. You should judiciously use Spike Stories as a way to engage your team in refining complex epics into meaningful stories and workflow. I’d say about 20% of User Stories are candidates for Spikes. Don’t give them short shrift!
    2. Spikes should be run in the sprint before the sprint where you target their execution. This encourages some healthy look-ahead on the part of the team.
  5. The Product Backlog should be initially set to priority order and re-ordered by the Product Owner as business needs evolve. Order can change, but it should also be stable—representing the market/customer awareness of the Product Owner in meeting the business needs and leading towards team confidence. So don’t incessantly churn the Backlog, as it lowers team confidence.
  6. Grooming meetings focus on 3 levels of the Backlog:
    1. Epic Stories: breaking down larger-scale Epics into parts; estimating their size and complexity and determining priority / workflow; probably no more than two releases in advance.
    2. Mid-term Stories: grooming stories that are 2-3 sprints away from execution; having the team begin to think about design and execution efficiency.
    3. Short-term Stories: grooming stories “right before” their targeted sprint.
  7. Run end-to-end Blitz Planning as a means of gaining team feedback on workflow. I think it’s healthy to run Blitz Planning often; for example, several times before a release is firmly planned. You can also do it mid-way through one release for early guess-timation of the content possibilities in the next release. Don’t be afraid to perform these end-to-end views often with your team!
  8. Allocate time for bug fixing per sprint! Even if you set these up as “stretch stories” for the team, allocate the time as User Stories with an intent and acceptance criteria.
  9. Allocate time for hardening and testing periods per sprint. These ARE User Stories and are particularly useful when Blitz Planning. They should have acceptance criteria focused towards guiding the team towards the level of effort (LOE) required to achieve Done-ness.
  10. Speaking of Done-ness, there should be a clear definition of Done-ness for the entire organization, and uniquely for teams where appropriate. All story estimates and acceptance criteria should be created with this Done-ness level in mind, including the effort to professionally and responsibly complete each story.
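The sizing and time-budget rules of thumb above reduce to simple arithmetic. Here’s a minimal sketch; the team figures (a velocity of 20 stories per sprint, 4 sprints per release, a 7-person team on 2-week sprints) are invented examples for illustration, not recommendations, and velocity is counted in stories rather than points purely to keep the arithmetic simple:

```python
def backlog_depth_in_stories(velocity, sprints_per_release):
    """Detailed items needed to cover one release: velocity x release tempo."""
    return velocity * sprints_per_release

def grooming_hours_per_sprint(team_size, sprint_weeks,
                              hours_per_week=40, share=0.10):
    """Schwaber's ~10% guidance: 10% of a 40-hour week is 4 hours
    per person per sprint week, summed across the whole team."""
    return team_size * sprint_weeks * hours_per_week * share

# Hypothetical team: velocity of 20 stories/sprint, 4 sprints per release
print(backlog_depth_in_stories(20, 4))   # 80 detailed stories (within 50-100)
# 7-person team on 2-week sprints: 7 x 2 x 4 hours = 56 grooming hours/sprint
print(grooming_hours_per_sprint(7, 2))
```

The point isn’t precision; it’s that the grooming investment is a budgeted, recurring cost per sprint, not something squeezed in when convenient.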


What are some of the ‘smells’ of a well-groomed backlog?

  1. Sprint Planning is incredibly crisp, short and easy, usually taking 2 hours or less for a 2-week sprint. There are NO architectural or design discussions within the meeting—the relevant parts of these discussions having occurred earlier.
  2. Team members are talking about Epics and Stories targeted for execution 2-3-4 sprints in the future—nearly daily during each sprint and quite naturally aligning with the Product Owner’s vision.
  3. The team easily contributes new stories to the Backlog to represent non-feature-based work; for example: testing artifacts, non-functional testing work, refactoring, automation development, performance tuning, SPIKEs, etc. They view it as a shared responsibility.
  4. The team has a feel for where the product is going long term and maps effort, designs, theme suggestions, and trade-offs towards that vision.
  5. Each Sprint’s Goal is easily derived from the Backlog; i.e., there is a sense of thoughtful and meaningful themes that easily surface from within the Backlog. I sometimes think of these as “packages”.
  6. The Product Owner includes team feedback (bugs, refactoring, improvement, testing, etc.) in EVERY sprint—in some percentage of focus. They clearly show the team faith in their feedback, judgment, and technical opinions.
  7. The Product Owner rarely changes the priority of elements because of size estimates. (This doesn’t include breaking them into Now & Later bits.) This illustrates that priority is mostly driven from external business needs that are translated into stories.
  8. Blitz Planning is done every 2-3 weeks, not only as a planning tool but also as a risk / adjustment tool. For XP folks, consider Release Planning a similar exercise. The point is that end-to-end planning towards the next release milestone occurs quite frequently.
  9. Teams are hitting stretch items and pulling in more work per sprint. There is an enthusiasm to deliver more towards the goals by trading off story sub-elements creatively.
  10. The Backlog is mapped to the team’s skills and capabilities, stretching them – yes, but not asking them to do things that they are not capable of doing, either by skill or experience.
  11. Every sprint the Product Owner is making micro-adjustments to scope based on interactions with the team, always looking for that Minimal Marketable Feature set!
  12. The team is never surprised in Sprint Planning. Not even by a single Story. I know, change is supposed to happen, but surprising the team with last-minute changes is not! Rather, wait till the next sprint.
  13. The team feels they have a say in what’s on the backlog and in the distribution of features vs. improvement items. But they can’t be solely parochial in their views; they need to make a business case, from the customer’s POV, for all non-feature work introduced into the Backlog. And they do this willingly and well!

I hope you find the guidance helpful…

Bob.

Don’t forget to leave your comments below.

Faultless Facilitation – Leveraging De Bono’s Six Thinking Hats

BAs guiding challenging, soft-and-squishy, crucial, and team-based conversations


De Bono’s Six Thinking Hats

In my last post on facilitation, I shared some general principles for facilitating technical outcomes in software teams. Much of the focus was on creating just the right atmosphere so that team members shared their opinions and made thoughtful, team-based decisions.

In this post, I want to share Edward De Bono’s Six Thinking Hats model and how it can help you in facilitating much richer discussions surrounding your technical decision-making. But first, I want to share another facilitative model with you, though it’s more of a dynamic that occurs between individuals and teams who are trying to create new products, solve new problems, add features, or generally compete in their respective markets with creative solutions that drive customer value.

Divergent vs. Convergent Thinking


Divergent – many possible answers


Convergent – one answer

One way to think about any team-based discussion that needs to lead towards a decision is that it occurs in two phases. The first phase is focused on divergent thinking. In divergent thinking, you want to get ideas on the table, so consider this team-level process as equivalent to brainstorming.

When brainstorming, you want to get ideas and options on the table. You don’t want to judge, prioritize or analyze them; you simply want to generate as many as possible. If your divergent timeframe is too limited, team members will feel as if their ideas haven’t had a fair airing or vetting.

As a facilitator, letting a team ramble on or throw around crazy ideas is a perfectly sound thing to do. In fact, if you don’t allow it, you’ll get less buy-in and less durable decisions. Quite often, you’ll want to leave half of your meeting time-box for divergent conversation in order to foster engagement across the entire team.

Then, mid-way through the meeting, you need to turn the discussions around and focus more on convergent thinking. That is, you want the team to start winnowing down ideas and converging on a single decision or at least a very small subset of options to decide upon later.

If you recall, I presented some decision-making tools in my last post. For example, as part of closing down or converging on an option, you could do some of the following:

  • Vote on the options on the table; define loose, strong or unanimous majority before voting.
  • Prioritize the options, keeping the top 2-3 in play for a second round of discussion.
  • List pros/cons for each option, leading towards prioritization and elimination.
  • SWOT (Strength, Weakness, Opportunities, Threats) analysis for the top 2-3 options, then discuss and converge on a selection.
  • Select a decision-leader to lead the convergence discussion; if the resulting discussion exceeds your time-box, the decision-leader decides.

Usually, the above are led by the facilitator who is at the front of the room, actively collecting data from the team at a whiteboard or flip chart and converging towards a decision.

It’s this oscillation between divergent and convergent thinking that is the hallmark of good facilitation when making important or groundbreaking decisions. Striking the balance between over/under discussion and fostering a whole-team view is crucial. I usually allocate set times for divergent, interim, convergent, and decision-making close as part of this exercise, even allowing these themes to cross over multiple meetings.

Now that I’ve set the stage for this process, let’s move back to the thinking hats…


Back to the Thinking Hats

So what does this notion by De Bono have to do with team facilitation, and in particular, technical team-based decision-making?

Depth to our decisions

First, it gives us a breadth of perspectives to examine our options. For example, the White Hat is the data, facts and figures hat. If we were making a decision on database architecture to support our performance needs, it would be incredibly important for someone on the team to adopt the White Hat and mine the team (and customers, stakeholders and analysts) for performance-specific targets that are required for the architecture. In this case, hand-waving and conjecture isn’t helpful. What’s necessary is hard numbers and targets, something the White Hat perspective would foster and drive towards specifics.

Breadth or perspective to our decisions

As we’re reviewing and discussing something as a team, quite often we focus on a single hat or a small set of them. For example, the Green and Yellow Hats often begin the discussion of new ideas or approaches for a specific design or architectural element, with Green driving the creative ideation and Yellow the logic and positive side of the approach. If these two are the only hats that drive a decision, then the team has missed four other perspectives in making their choice.

While this might not lead to a bad choice, the result will be narrowly conceived and narrowly considered. If someone on the team plays the part of Devil’s advocate, they put on the Black Hat to consider some of the negative possibilities that could result from the decision. They criticize major and even minor points, trying to get everyone to consider this perspective and then respond with alterations or alternatives to the original design.

I like to think of each hat as testing the steel of an approach—tempering it and making it stronger. The more hats you use in your facilitation (divergent discussion, convergent discussion and decision-making), the broader your view is towards the problem and the broader your solutions.

Team-based influence

Quite often it’s difficult for some team members to raise issues in public. This is particularly evident in teams with strong personalities. Often, if a strong team member brings up an idea, say a Green Hat idea, many on the team will feel too intimidated to play Devil’s advocate or bring up any criticism because of the Green Hat member’s position in the team, their authority or their strength of personality.

The hats can actually dilute the personal nature of the commentary in these situations, in that you’re not attacking the individual or their idea; you’re simply putting on a hat and trying to fulfill its purpose. It abstracts individuals from personal engagement, and quite often it strengthens their ability to bring out alternative positions that they would normally not be willing to raise.

In practice, you see team members engaging the hats in this way. For example, if someone wants to bring up a criticism, they’d say, “as a Black Hat, I think the design lacks fault tolerance in the back end because of the limitations of the message-broker we’ve chosen…,” which challenges the idea and clearly not the individual.


The hats themselves

The hats often lend themselves to a flow of discussion. I’ve laid that flow out below in the order in which I present the hats themselves. Not that you have to precisely follow this flow, but it does have a sense of symmetry to it as you’ll see after going through it.

While I do recommend that you try and engage all of the hats, there are no Six Thinking Hat police that will come into your meeting, declare a violation and haul you away. Trust your team and use your best judgment in determining which hats are appropriate for a specific decision.

White Hat 

Objectives; Facts; Requirements

Quite often, laying out the facts will help drive team decisions. In software, this often takes the form of requirements, use cases or user stories. White Hat discussion usually occurs in the beginning and rarely needs to be initiated. However, part of the hat is going back to the facts and refining them. In the case of functional requirements, perhaps look for tradeoffs and phasing to meet customers’ key goals.

Green Hat

Creative; Ideas

This is truly the brainstorming hat. In agile teams, this is the predominant hat, as the team works to be inclusive and respectful and attempts to get all viable ideas on the table. It’s the essence of planning poker and value poker as well, where you get the team’s opinions out in the open to drive discussion. This is also the creative hat, where different ideas aggregate into more creative solutions—the more the merrier.

Yellow Hat

Positive; Benefits

To use a common use case term, this is the “happy path” case for the requirement or the design that reflects the easiest implementation or has the greatest potential on the surface. It’s a great hat to explore early to gain momentum in your divergent thinking, as many options will often surface for more analysis. The other side of this hat is exploring the benefit or potential of the idea—its business case or ROI.

Black Hat

Negative; Criticism

As I mentioned earlier in the post, the Black Hat or Devil’s advocate perspective can be incredibly useful in honing your designs and solutions. It usually refines the Yellow Hat perspective in the edge cases, creating a more robust, and error- and fault-tolerant solution. It’s also one of the most familiar of the hats for the team to leverage.

Red Hat

Emotional; Reactions

Quite often in software it’s not the logical flow that attracts customers or gets them excited. Usually, it’s something emotional or unexpected. The Red Hat is the one that reminds you to do customer focus groups and to do research within your U/X team so that you focus on those “delighters” that excite and gain an emotional reaction from your customers. It also exemplifies your gut feelings when making decisions.

Blue Hat

Rational; Conclusions

This is the decision-process hat, so the facilitator often wears this hat by default. It also focuses on documentation, maintaining roles and responsibilities, and driving the team towards conclusions. In the case of roles, this hat keeps the team clear on who is responsible for architectural decisions, perhaps an architect or team leader, and who can weigh in with alternatives.

Wrapping Up

I hope you found this post and its predecessor useful. I also hope they inspire you to work on your facilitation skills—particularly if you’re part of an agile team. Why? Because many teams spin and spin around discussions and desperately need quality facilitation. I hope you can broaden your role to help fill this need.

I’ve found that Edward De Bono’s Six Thinking Hats model can truly help your facilitation within your teams by fostering depth and breadth in your decision-making. For now, I sincerely hope your decisions improve in their quality!


Faultless Facilitation – The Art of Team-Based Decisions


Business Analysts guiding challenging, soft-and-squishy, crucial, and team-based conversations

Nowadays, I spend 100% of my time in agile teams either engaged in direct coaching, teaching, or participating directly within the team. One of the core tenets of agile teams is self-direction. This is a state that is much easier to say than it is to achieve. One of the more critical activities that fosters self-direction is effective facilitation and the role of facilitator.

Leveraging Scrum, then, this ‘art’ largely falls within the realm of the Scrum Master. A large part of that role is directed towards focusing the team’s energy on effective discussion, debate, and decision-making, trying to create an environment where the team experiences what Jim Surowiecki calls The Wisdom of Crowds. The key point is that the collective wisdom of a team, group or crowd is quite often greater and more valuable than that of any singular domain expert.

These team-based innovative solutions surround architectural & design choices, surfacing and analyzing critical customer requirements, and crafting the simplest yet most powerful feature sets in response. There are often a myriad of directions or choices a team can make and getting the path right isn’t always easy. Effective facilitation can be one of the differentiators for teams hovering between average and outstanding delivery. 

5 Dysfunctions – The Passionate Debate…

As it turns out, technologists seem to debate everything. Or at least that is my experience from over 30 years of software development. They’ll be just as passionate about naming conventions for a particular nondescript configuration file as they are about designing a high-performance database for a new large-scale CRM application.

I think it might have something to do with personality type, or perhaps just a fondness for debate. Regardless, just as we have a tendency to be overly optimistic with respect to estimates, we have a tendency to beat the dead horse on nearly all technical topics.

I recently read The Five Dysfunctions of a Team by Patrick Lencioni and received some excellent training surrounding that material. One of the key points the instructor made focused towards encouraging teams to have Passionate Debate…but about the Things That Truly Matter. It’s the last part that we often forget as technologists.

A good facilitator will try to focus the discussion away from the myriad of minor topics and towards the things that truly matter. It’s a prioritization game that aligns incredibly nicely with the agile methods.


So what does this have to do with BAs?

If you’ve read any of my posts about agile and BAs, you’ve seen me continuously reframing your role—for example, from an early requirements provider to a whole-project oracle for requirements and their evolution, or towards establishing an ongoing and intimate partnership with your customers.

In this post, I’m trying to influence your facilitation skills. I think most BAs have a wonderful capacity to facilitate team-based discussions surrounding requirements, but I feel you can extend that towards general facilitation surrounding all aspects of an agile team attacking a project. It’s this extension that I hope you entertain.

A Quick List of Facilitation Tools & Techniques

So, if you’re a BA who wants to improve your facilitation skills, I thought I’d provide a list of some techniques that I’ve found helpful in guiding teams towards successful agile execution. While these tools and techniques can be helpful in all contexts, I feel they’re particularly helpful in agile contexts. Enjoy!

Ask why; ask why five times

There is something quite powerful about asking why. Why are we doing this? Why is this complex design the only way to solve this problem? Why are we taking on so much scope in delivering this feature?

In lean circles a common approach is to ask why five times. The tactic is to peel the onion and drill through peripheral points into the core of an issue or requirement. And when you ask why, don’t be afraid to wait for an answer. Allow time for folks to think about the question and respond. Sometimes that silent pause can be most helpful in getting to the essential core of a discussion.

Ask silly questions

One of my favorite approaches to foster expanded discussion is to ask silly or frivolous questions, sort of putting myself out there as clueless so that others will seize the moment to correct me and explain the options, the true nature of each, and why we’ve chosen the direction we’re taking.

The other side effect is that teams will also re-examine their drivers for a decision and often look for simpler approaches. It sort of shocks their nervous systems into reconsideration. But clearly you need a thick skin and self-confidence to take this approach.

Make controversial statements – see who responds and how

A variation on the silly-question approach is to make absolute or otherwise controversial statements. Let me give you an example: in your business domain, designing and testing for high security is an important criterion.

So in a requirements planning session you exaggerate how little security testing you’ve seen in the requirements—knowing that there is a reasonable level. You’re looking for team members to respond with the facts. You’re also looking for realization across the team of any security testing ‘gaps’ that might still exist.

Put on a different hat (Development, Sales, Marketing, QA, Architecture, Regulations, PMO, Management)

One of the more powerful actions you can take is changing your point-of-view or perspective. That’s why personas are so powerful when developing User Stories. They help you to clarify the ‘User’ in the “As a _____” clause.

But you don’t need formally defined personas to put on different perspectives. Simply ask the team to consider the requirements, design, or problem from various lenses. I think the facilitative art is in selecting the perspectives based on the problem at-hand and not simply going down a by-rote list.

Devil’s Advocate

I sometimes struggle when someone adopts the Devil’s Advocate position. I’ve seen it misused as a stalling or blocking tactic by those who aren’t truly interested in specific directions or decisions. In these cases, it’s an unhealthy ploy.

However, in the healthy case, it’s a wonderful perspective. It focuses the team’s energy on the opposite case, causing them to think about decision alternatives and how to defend & strengthen their case. Often it drives slight alternative approaches that might not have otherwise surfaced.

Recognize / thank folks who weigh in with candor

Actively recognizing folks who are weighing in with valuable feedback is another way of encouraging it. First, acknowledge those who are engaging and thank them for their contributions. If someone takes a risky position or challenges an incumbent in a courageous manner, I like to point this out as well.

In fact, the more candor I see being driven into the debate, the more I visibly appreciate and recognize it. Now you have to walk carefully here as a facilitator so you’re not perceived as picking favorites.

Exaggerate – small or large

This is one of my personal favorites, and I probably overuse it a bit. It’s related to the controversial-statement option above. However, in this case, you minimize or maximize the point being made. It serves to get the team’s attention and focus them on the “shades of gray” related to any discussion.

For example, if I detect that the team is minimizing the testability aspects of a design discussion, I might ask them how they’d propose testing it if they had to do it themselves. What if they didn’t have any ‘testers’ at all? In this case, that exaggeration might pull the team’s consideration towards the importance of building in efficient, up-front testability.

Ask quiet folks to weigh in OR ask loud folks to weigh in last

Team dynamics often seem to include quiet and loud characters in their fabric. Part of the role of a facilitator is to equalize these voices – in an effort to create an environment where all voices (opinions, options, thoughts) are heard.

One technique for loud voices is to privately ask them whether they’re aware of how influential they are on the team’s decisions, and to ask them to weigh in more carefully and after others have had their opportunity. For quiet members, often simply asking them directly will get them to engage.

You can also assign them a position that you think they’ll struggle with, in order to drive them from their comfort zone. Another approach is to set up ground rules that expect everyone to contribute fairly to decision-making.

Facilitative Tools

As a means of wrapping up this post, I thought I’d share some traditional facilitative tools:

  • Clearly rank options as a team; then converge on the best option based on team discussion – removing outliers first.
  • Discussion, then team voting; re-vote as required. Use a technique where you surface supportability of a decision vs. agreement with the decision.
  • Time-boxed discussion; then a pre-declared decision-leader decides if the team can’t come to a decision within the time-box.
  • List pros / cons and vote as a team – a consensus, majority, or decision-leader-led decision.
  • Explore the overall cost of doing it vs. the opportunity cost – not doing it. Keep the discussion focused on value & cost—making it a mathematical decision of sorts.
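Several of the convergence tools above (rank the options as a team, keep the top 2-3 in play, then converge or let a decision-leader pick) boil down to simple ballot aggregation. Here’s a minimal sketch using a Borda-style count; the design options and ballots are invented purely for illustration:

```python
from collections import defaultdict

def aggregate_rankings(ballots):
    """Borda-style count: each member ranks every option, best first.
    An option scores (n - position) points per ballot; highest total first."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Four team members rank three hypothetical design options
ballots = [
    ["REST API", "message bus", "shared DB"],
    ["message bus", "REST API", "shared DB"],
    ["REST API", "shared DB", "message bus"],
    ["REST API", "message bus", "shared DB"],
]
ranked = aggregate_rankings(ballots)
top_two = [option for option, _ in ranked[:2]]  # keep for a second round of discussion
print(ranked)
```

If the second-round discussion still exceeds the time-box, the pre-declared decision-leader picks from the remaining options, per the time-boxed tool above.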

Quite often it’s useful to write down or specifically quantify your discussions – making lists, ranking items, and generally clarifying the discussion in words at a whiteboard or on a flip chart. This has a tendency to bring the team back towards reality and help them to converge on a direction.

A large part of this is driving towards a decision—often the hardest part of facilitation. So having a variety of decision-making models can help.

Wrapping Up

I hope you found this post useful. I also hope it inspired you to work on your facilitation skills—particularly if you’re part of an agile team. Why? Because many teams spin-and-spin around discussions and desperately need quality facilitation. I hope you can broaden your role to help fill this need.

In my next post I’ll be sharing another tool for facilitation – Edward De Bono’s Six Thinking Hats model and how it can also help your facilitation. Till then, happy decisions… 


Kupe – Agile is NOT a Fad!

Pet Rock – Clearly a Fad.

My esteemed blogging colleague Jonathan “Kupe” Kupersmith took a stance in his last blog post that the term agile had become something of a fad. He quoted an article with four key points and then a general comment/quote from Scott Ambler that mentioned driving repeatable results. He put those two together to make a case that agile vs. waterfall isn’t the point; instead, the point is applying practices that work in your projects and driving to valuable results.


So when I first read the post, I was entertained, and I couldn’t agree more with the spirit of his message. Kupe had a wonderfully catchy title, a few references, and a short and sweet treatise on agile. I only wish I could be as succinct as Kupe was in this case (as you can see by the length of this post). However, it also struck me that the post misrepresented some of what “agile is”, and I found that regrettable. It’s not that Kupe was wrong…but perhaps just a bit too succinct in his selection of references.

So I thought I’d respond to the post, not exhaustively explaining agility, but highlighting a few critical points to get a different perspective out there.

Furby – a fad (say that three times quickly)

Fad?

I really don’t like the use of the term Fad. Here are two definitions:

  • Google Dictionary – An intense and widely shared enthusiasm for something, esp. one that is short-lived and without basis in the object’s qualities; a craze;
  • Dictionary.com – A temporary fashion, notion, manner of conduct, etc., especially one followed enthusiastically by a group.

After reading those, I don’t think the agile methods, practices and approaches are a fad. The agile manifesto just celebrated its tenth anniversary. Scrum was first used and shared in 1993. XP was formed in 1999. Lean principles were established well before those time-frames.

So the practices, while potentially being practiced with youthful enthusiasm, are not temporary. They’ve also crossed over into the mainstream. And they are certainly not a craze. For example, the esteemed PMI is now introducing an agile certification, and while that’s quite scary to many agilists including myself, I don’t think they’d do that for a mere fad.

Cabbage Patch Kids – a fad

Oversimplifying the Practices

The article Kupe referenced identified four agile practices: collaborate, be lean, iterate and visualize. He mentions that he’s worked in teams that did all four of these practices and were therefore agile.



I’m not stuck on the word agile, but I am disappointed that Kupe chose an article with such a short list of practices as illustrative of agile practices. The original author left off a few things; for example: the emphasis on the whole team, trust of your team, a relentless focus on building in quality, the dynamics of transparency, the power of and need for customer engagement and the notion of customer acceptance are just a few of the missed concepts.

I also think each one of the referenced practices can be expanded. Take visualization, for example. User Stories, burndown charts, story mapping, release planning in various forms, story brainstorming workshops, and attending sprint planning and daily stand-ups are just a few of the ways for teams and stakeholders to visualize the various states of agile projects. The terse term does the depth, breadth, and usefulness of the technique a disservice.

And each one of those techniques requires specific skill levels to do well — so not every team that is practicing agile is really applying the tools and techniques consistently or well.

Tickle Me Elmo – Fad

Agile vs. Waterfall

I will take a hit for the agilists here: we do fall into the trap of seemingly always badmouthing waterfall teams and approaches. We seem to look at these sorts of projects as anathema to agile, holding a position that waterfall is somehow beneath us now that we’ve “gone agile.”

At least speaking for myself, my history in waterfall-esque projects is fraught with Death Marches and project failures. Only lightly interspersed with the occasional project success. So yes, there are a few scars. But that shouldn’t give me a license to diminish traditional project approaches. Even if I do personally find waterfall-esque thinking to be inappropriate for most of today’s software projects.

I interpret one of Kupe’s points to be that we need to stop getting stuck on the name of the methodology or process, but simply dive into our projects, applying the practices that make the most sense, whether they derive from agile or waterfall approaches. I would buy this if the agile practices could be individually and easily parsed into equally useful tools, sort of like a Swiss army knife. But I don’t think they can.

My experience tells me that agile practices are fundamentally diluted when you pull them apart and consider individual practices as options. For example, I can’t tell you how many teams I encounter that consider having a daily stand-up, a backlog, a time-boxed iteration and an iteration review to be an effective agile implementation. They don’t even consider self-directed and cross-functional teams, pairing, full transparency, customer inclusion, quality practices, teamwork & collaboration, and retrospectives as important or necessary.

They iterate for a short while — then fail.

Immediately they blame it on “Agile,” implying that the approach doesn’t work. Nothing could be further from the truth, but they simply don’t understand that. Here’s the point: agility is not simply a set of practices that can be individually applied. Instead, it’s a holistic set of related practices that work together to reinforce the core tenets of agile teams.

Can there be some flexibility?

Sure. In fact, the methods foster the notion of “Inspect & Adapt” as a core principle. However, you also need to be mindful of the experience of the team and the relational nature of the practices. Many agile coaches recommend that new agile teams adopt all of Scrum or all of XP’s practices and learn them well before they try composing practices on their own. I think that’s my key issue with the reference that Kupe used and his list — he should have mentioned something about it being a subset and that the methods should be leveraged as ‘packages’ for best performance.

If you want to get a better sense for the depth and breadth of agility, please read the following:

  1. The Agile Manifesto
  2. The Principles ‘behind’ the Manifesto
  3. The Declaration of Interdependence
  4. The Craftsmanship Manifesto
  5. The Context-Driven School of Testing
  6. Lean Software Principles

I guarantee you’ll come away with a renewed appreciation for the depth and breadth of the agile methods. You still may not “buy them,” but you’ll at least understand their scope.

Wrapping Up

I certainly hope Kupe isn’t offended by this post. I’m one of his biggest fans and we need a diverse set of views to drive discussion and evolution in our profession. I also think he values and viscerally ‘gets’ the benefit of agility from a holistic perspective.

That being said, we all need to be careful about how we characterize methods. If we’re going to do it, we need to err on the side of completeness. Particularly with something as important as a set of practices (methods) that have proven to be so successful in changing the way we attack and deliver value for technically challenging software projects.

And finally, let’s not call it a fad.

Don’t forget to leave your comments below.

Product Backlogs: Boulders, Rocks, and Pebbles…Oh My!

One of the hardest things to do in agile requirements is to break things down into constituent parts for execution. I use the term backlog grooming to represent this activity. There are a couple of key factors in it. First is the notion that you can’t (or shouldn’t) break everything down at once into small components of work.

Truly it’s not that you can’t. In traditional projects we almost always break things down into small, more granular pieces. That’s not that hard. What’s hard is doing this without any knowledge of the real-world aspects of the work, since you haven’t done any yet.

In agile projects you work from a Product Backlog: a list of features, to-dos, and activities all focused on completing a project. The list is in priority order and is executed in that order. Not all elements of the list are defined at the same level of clarity or granularity. Why? Because that can often be a waste of time, and you revisit the backlog often as you complete bits and pieces of the work.

My analogy is that a solid Backlog consists of different sizes of User Stories: there are boulders, rocks, and pebbles. Pebbles are finely grained stories that are ready for the very next sprint, i.e. high-priority items. As priority decreases, the size of the stories increases, from Rocks to Boulders. In this post I want to explore the dynamics of these three levels of size for handling Product Backlogs represented by User Stories.
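To make the size/priority relationship concrete, here’s a minimal Python sketch of my own (not from any particular tool): a backlog whose stories carry both a priority and a Boulder/Rock/Pebble size, where only Pebbles are considered ready to pull into a sprint. The `Story` and `Backlog` names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    size: str       # "pebble", "rock", or "boulder"
    priority: int   # 1 = highest priority

@dataclass
class Backlog:
    stories: list = field(default_factory=list)

    def ordered(self):
        # The backlog is a strictly priority-ordered list.
        return sorted(self.stories, key=lambda s: s.priority)

    def ready_for_next_sprint(self):
        # Only pebbles are groomed finely enough to pull into a sprint.
        return [s for s in self.ordered() if s.size == "pebble"]

backlog = Backlog([
    Story("Underline attribute", "pebble", 1),
    Story("Various text attributes", "rock", 5),
    Story("Format text in a wide variety of fashions", "boulder", 20),
])

print([s.title for s in backlog.ready_for_next_sprint()])
# -> ['Underline attribute']
```

Note the invariant the metaphor implies: as you walk down the ordered list, sizes should generally grow from pebbles to boulders.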

Boulders

Let’s start out with an example of a Boulder. The project in this case is a Word Processor, something we’re all generally familiar with. Call it Microsoft Word or Google Docs.

The Boulder-level story is:

As a writer, I want to format text in a wide variety of fashions, so that I can highlight different levels of interaction with my readers.

That’s the description of the story itself. As you know, solid stories contain acceptance tests as well. Would you even create acceptance tests at this level? I suspect not.

Now if this was in a list of other Boulders, we wouldn’t work on it very often, particularly if it was a lower priority. But what if this was a relatively high priority and the Product Owner wanted us to get some traction on it? What would be the next step?

Rocks


I’d say break it down into Rocks. In this case, these would still be rather large; they would be ambiguous, and their priority would be fluid. So what’s the value in breaking them down?

It’s so we can start to visualize the various parts of the Boulder and decide which Rocks to tackle first. Here are some Rocks pulled from the Boulder:

  1. As a writer, I want to allow for text font changes; 20-30 different font types, colors, so that I can highlight different levels of interaction with my readers
  2. Allow for various attributes: underline, bolding, sub/super script, italicize, etc…
  3. Allow for a form of headings; 3 primary levels
  4. Allow for indenting of text
  5. Allow for lists (numbered and bulleted); single level first, then move to multi-level
  6. Allow for alignment – right/left justified, centered, variable
  7. Allow for do/un-do to include ongoing text activities
  8. Establish a paragraph model (or a variety of models)
  9. Show/hide ‘hidden’ formatting marks
  10. Establish the notion of a “style set” that can be used to establish a collection of favorites

I’ll stop now because I’ve run out of energy, but we could clearly go on and on. I think this is a nice set of Rock-level stories for initial discussion. These are essentially ready for Backlog Grooming with the team. Clearly some of them border on Boulders in their own right and most if not all are quite large stories.

There are two activities that can help us break them down into Pebble-level stories. The first is having the team estimate them. The estimates will clearly tell you whether the Rock will fit into your sprint length. If it doesn’t, then you are truly forced to break it down further.

Even then, you don’t want to fill your sprints with Rocks that need the entire sprint to complete. That sort of end-to-end execution dynamic would be a very risky proposition and would potentially defer done-ness and delivery too late in the game. Point being – you want a variety of Pebble-level sizes to effectively load-balance and fill each of your sprints.

But estimation helps drive the discussion and decomposition.
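As a rough sketch of that fit check, here is a hypothetical Python fragment. The point values, the sprint capacity, and the 50% threshold are all assumptions for illustration, not numbers from the post:

```python
SPRINT_CAPACITY = 20  # assumed: story points the team averages per sprint

def needs_decomposition(estimate, capacity=SPRINT_CAPACITY, max_fraction=0.5):
    """A story that would consume more than half the sprint is a risky,
    end-to-end proposition; flag it for further breakdown."""
    return estimate > capacity * max_fraction

# Hypothetical estimates for some of the Rocks above
rocks = {
    "Various text attributes": 13,
    "Headings; 3 primary levels": 8,
    "Indenting of text": 3,
}

for title, points in rocks.items():
    if needs_decomposition(points):
        print(f"Break down: {title} ({points} pts)")
# -> Break down: Various text attributes (13 pts)
```

The threshold encodes the load-balancing point above: even a Rock that technically fits the sprint is risky if it consumes most of it.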

The other activity that helps in decomposition is writing solid acceptance tests for your Rock stories. Let’s do one by example – using this Rock-level story:

Allow for various attributes: underline, bolding, sub/super script, italicize, etc.

We’ll start writing acceptance tests for it.

  • Verify that underline works
  • Verify that bold toggles for all font / color types
  • Verify that all combinations of all attributes can be combined
  • Verify that font size changes do not impact attributes
  • Verify that paragraph boundaries are not affected by attribute changes
  • Verify that attributes continue across pre-text and post-text; for example, if we bold numbered list text, the number should be bolded

Now imagine this Rock story with and without the acceptance tests. I see it getting quite a bit larger as we peruse the tests and start considering all of the nuance and complexity of the Rock.

Without them I would probably have underestimated its size. I would also have fewer ideas around how to decompose it further, if that was required. With them I can start fully considering the size and breadth of the story. They drive done-ness checks as well, so they would be an inherent part of our testing.
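Here’s a hypothetical sketch of what a few of those acceptance tests might look like as executable checks. The `TextRun` toy model is entirely my own assumption for illustration, not a design from the post:

```python
class TextRun:
    """Toy model of a run of formatted text (an assumption for illustration)."""

    def __init__(self, text):
        self.text = text
        self.attrs = set()
        self.font_size = 12

    def toggle(self, attr):
        # e.g. "bold", "underline", "italic", "subscript"
        self.attrs.symmetric_difference_update({attr})

    def set_font_size(self, size):
        self.font_size = size  # must not disturb attributes

# "Verify that bold toggles"
run = TextRun("hello")
run.toggle("bold")
assert "bold" in run.attrs
run.toggle("bold")
assert "bold" not in run.attrs

# "Verify that all combinations of attributes can be combined"
for attr in ("bold", "underline", "italic", "subscript"):
    run.toggle(attr)
assert run.attrs == {"bold", "underline", "italic", "subscript"}

# "Verify that font size changes do not impact attributes"
run.set_font_size(24)
assert run.attrs == {"bold", "underline", "italic", "subscript"}
print("acceptance checks pass")
```

Even a sketch like this surfaces the nuance: the moment you try to make a test executable, you’re forced to decide what “attributes” and “combinations” actually mean.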

Pebbles


So, what would be good Pebble-level stories derived from the above Rock? Let’s attack some of the attribute characteristics individually and see if that helps:

  1. As the editor, allow for underline attributes, so that users can embellish their text…
  2. As the editor, allow for bold attributes, so that users…
  3. As the editor, allow for italics attributes, so that…
  4. As the editor, allow for sub-script attributes, so that

Since all of these Pebble Stories are sort of related, I could ostensibly bundle them into a theme. That would make sense in building them and in demonstrating their behavior. It might also make the testing a bit simpler.
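One way to picture the bundle is as a theme record whose acceptance tests are shared by every member story. This is a hypothetical sketch; the structure and names are illustrative assumptions:

```python
# A theme: related Pebble stories sharing one set of acceptance tests.
theme = {
    "name": "Text attributes",
    "stories": [
        "Allow for underline attributes",
        "Allow for bold attributes",
        "Allow for italics attributes",
        "Allow for sub-script attributes",
    ],
    "shared_acceptance_tests": [
        "Font size changes do not impact attributes",
        "Paragraph boundaries are not affected by attribute changes",
        "Attributes continue across pre-text and post-text",
    ],
}

# Each story inherits the theme's shared tests instead of repeating them.
for story in theme["stories"]:
    print(f"{story}: {len(theme['shared_acceptance_tests'])} shared tests")
```

The payoff is exactly the one described above: the shared tests are written once, and building, demonstrating, and testing the stories together becomes simpler.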

I also suspect that I could bundle the acceptance tests for this theme together, simply to reduce the writing I have to do. I’ll copy a few of the above acceptance tests that might apply to the theme or collection:

  • Verify that font size changes do not impact attributes
  • Verify that paragraph boundaries are not affected by attribute changes
  • Verify that attributes continue across pre-text and post-text; for example, if we bold numbered list text, the number should be bolded

These are examples of the bundled acceptance tests.

Wrapping Up

One of the hardest challenges in adopting agile methods is understanding the nuance associated with the “simple list” called a Product Backlog. It’s an organic list that the team needs to revisit time and again, breaking stories down and drilling into their details. The goal is not to understand everything, but to gain sufficient understanding to (1) effectively size and plan a story for a specific sprint and (2) have some design & construction idea so that the story isn’t a surprise in execution.

The other factor is that stories often (always) beget more stories. For example, as part of the above story we might write a story related to writing a tool to change font size and attributes dynamically across the spectrum of supported fonts and attributes. This would allow us to automate the process of testing this particular behavior.

So the Boulder-Rock-Pebble metaphor is intended to remind you of the requisite pattern of continuously breaking your backlogs down: pebbles at the top, rocks in the middle, and boulders toward the bottom.

So, start breaking down those Boulders…

Suggested Terminology from this post:

  • Product Backlog – a prioritized list of work for the team to complete; aligned with a project or application release goal or goals
  • Backlog Grooming – periodic visits to the product backlog; refining stories towards execution and delivery; identifying gaps in knowledge and how to fill them
  • User Stories – from the Extreme Programming space; a small use case on a card/post-it note
  • Themes – collections of User Stories that make sense to implement together; usually driven by demo or testing considerations
  • Story size language:
    o Boulders – very large stories; synonymous with Epics
    o Rocks – medium to large stories; small Epics or larger stories
    o Pebbles – small stories; ready for execution

Don’t forget to leave your comments below.