

Five Requirements Prioritization Methods

When customer expectations are high and timelines are short you need to make sure your project team delivers the most valuable functionality as early as possible.

Prioritization is the only way to deal with competing demands for limited resources. 

Stakeholders on a small project often can agree on requirement priorities informally. Large or contentious projects with many stakeholders demand a more structured approach that removes some of the emotion, politics, and guesswork from the process. This article discusses several techniques teams can use for prioritizing requirements and some traps to watch out for.

Two Big Traps

Be sure to watch out for “decibel prioritization,” in which the loudest voice heard gets top priority, and “threat prioritization,” in which stakeholders holding the most political power always get what they demand. These traps can skew the process away from addressing your true business objectives.

In or Out

The simplest method is for a group of stakeholders to work down a list of requirements and decide for each if it’s in or it’s out. Refer to the project’s business objectives to make this judgment, paring the list down to the bare minimum needed for the first iteration or release. When that iteration is underway, you can go back to the previously “out” requirements and repeat the process for the next cycle. This is a simple approach to managing an agile backlog of user stories, provided the list of pending requirements isn’t too enormous.

Pairwise Comparison and Rank Ordering

People sometimes try to assign a unique priority sequence number to each requirement. Rank ordering a list of requirements involves making pairwise comparisons among all of them so you can judge which member of each pair has higher priority. This becomes unwieldy for more than a few dozen requirements. It could work at the granularity level of features, but not for all the functional requirements for a good-sized system as a whole.

Rank ordering all requirements by priority is overkill, as you won’t be releasing them all individually. You’ll group them together by release or development iteration. Grouping requirements into features, or into small sets of requirements with similar priority or that otherwise must be implemented together, is sufficient.
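The unwieldiness is easy to quantify: fully rank ordering n requirements by pairwise comparison takes n(n-1)/2 judgments. A quick illustrative sketch in Python:

```python
def pairwise_comparisons(n: int) -> int:
    """Judgments needed to fully rank-order n requirements pairwise."""
    return n * (n - 1) // 2

# A dozen features is tolerable; a full requirements list is not.
for n in (12, 50, 200):
    print(f"{n} requirements -> {pairwise_comparisons(n)} comparisons")
```

At 200 functional requirements you would be making nearly 20,000 comparisons, which is why this technique only works at the feature level of granularity.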

Three-Level Scale

A common approach groups requirements into three priority categories. No matter how you label them, if you’re using three categories they boil down to high, medium, and low priority. Such prioritization scales usually are subjective and imprecise. To make the scale useful, the stakeholders must agree on what each level in their scale means.

I like to consider the two dimensions of importance and urgency. Every requirement can be considered as being either important to achieving business objectives or not so important, and as being either urgent or not so urgent. This is a relative assessment among a set of requirements, not an absolute binary distinction. These alternatives yield four possible combinations (Figure 1), which you can use to define a priority scale:

* High priority requirements are important because customers need the capability and urgent because they need it in the next release. Alternatively, there might be compelling business reasons to implement a requirement promptly, or contractual or compliance obligations might dictate early release. If a release is shippable without a particular requirement, then it is not high priority per this definition. That’s a hard-and-fast rule.

* Medium priority requirements are important (customers need the capability) but not urgent (they can wait for a later release).

* Low priority requirements are neither important (customers can live without the capability if necessary) nor urgent (customers can wait, perhaps forever).

Watch out for requirements in the fourth quadrant. They appear to be urgent to some stakeholder, perhaps for political reasons, but they really aren’t important to achieving your business objectives. Don’t waste your time implementing these—they don’t add sufficient value to the product. If they aren’t important, either set them to low priority or scrub them entirely.

Figure 1. Requirements prioritization based on importance and urgency.
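The four combinations in Figure 1 reduce to a tiny lookup. A minimal sketch (Python, illustrative only; the labels mirror the definitions above):

```python
def priority(important: bool, urgent: bool) -> str:
    """Map the importance/urgency quadrants of Figure 1 to a priority label."""
    if important and urgent:
        return "high"          # needed, and needed in the next release
    if important:
        return "medium"        # needed, but it can wait for a later release
    if urgent:
        return "low or scrub"  # fourth quadrant: urgent but unimportant
    return "low"               # customers can wait, perhaps forever
```

For example, `priority(False, True)` flags a politically urgent but unimportant requirement as a candidate for downgrading or scrubbing.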

On a large project you might want to perform prioritization iteratively. Have the team rate requirements as high, medium, or low priority. If the number of high-priority requirements is excessive and you can’t fit them all into the next release, perform a second-level partitioning of the high-priority ones into three groups. You could call them high, higher, and highest if you like, so people don’t lose sight of the fact that they were originally designated as high.



Those requirements rated “highest” become your new group of top-priority requirements. Then, group the “high” and “higher” requirements in with your original medium-priority group (Figure 2). Taking a hard line on the criterion of “must be in the next release or that release is not shippable” helps keep the team focused on the truly high-priority capabilities.

Figure 2. Multipass prioritization keeps the focus on a manageable set of top-priority requirements.

Watch for requirement dependencies when prioritizing with the three-level scale. You’ll run into problems if a high-priority requirement depends on another that’s planned for later implementation.

MoSCoW

The four capitalized letters in the MoSCoW prioritization scheme stand for four possible priority classifications:

Must: The requirement must be satisfied for the solution to be considered a success.

Should: The requirement is important and should be included in the solution if possible, but it’s not mandatory to success.

Could: It’s a desirable capability, but one that could be deferred or eliminated. Implement it only if time and resources permit.

Won’t: This indicates a requirement that will not be implemented at this time but could be included in a future release.

The MoSCoW scheme changes the three-level scale of high, medium, and low into a four-level scale. It doesn’t offer any rationale for deciding how to rate the priority of a given requirement compared to others. MoSCoW is also ambiguous as to timing, particularly when it comes to the “Won’t” rating: does it mean “not in the next release” or “not ever”? The three-level scale avoids this ambiguity by considering importance and urgency and focusing specifically on the forthcoming release or iteration.

Agile projects often use the MoSCoW method, but I’m not a big fan of it. Here’s how one consultant described the way a client company actually practiced it:

All the action centers around getting an “M” for almost every feature or requirement that is captured. If something is not an “M” it will almost certainly not get built. Although the original intent may have been to prioritize, users have long since figured out to never submit something that does not have an “M” associated with it.

Do they understand the nuanced differences between S, C, and W? I have no idea. But they have figured out the implications of these rankings. They treat them all the same and understand their meaning to be “not happening any time soon.”

$100

One way to make prioritization more tangible is to cast it in terms of an actual resource: money. In this case, it’s just play money, but it’s money nonetheless.

Give the prioritization team 100 imaginary dollars to work with. Team members allocate these dollars to “buy” items they’d like to have implemented from the set of candidate requirements. Allocating more dollars weights the higher-priority requirements more heavily. If one requirement is three times as important to a stakeholder as another, she might assign nine dollars to the first requirement and three dollars to the second.

But 100 dollars is all the prioritizers get—when they’re out of money, nothing else can be implemented, at least not in the release they’re currently focusing on. You could have different participants in the prioritization process perform their own dollar allocations, and then add up the total number of dollars assigned to each requirement. That will show which ones collectively come out as having the highest priority.

Watch out for participants who game the process to skew the results. If you really, REALLY want a particular requirement, you might give it all 100 of your dollars to try to float it to the top of the list. In reality, you’d never accept a system that possessed just that single requirement.

Nor does this scheme take into account any concern about the relative amount of effort needed to implement each of those requirements. If you could get three requirements each valued at $10 for the same effort as one valued at $15, you’re likely better off with the three. The scheme is based solely on the perceived value of certain requirements to a particular set of stakeholders, which is a limitation of many prioritization techniques.
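Tallying the allocations, and optionally normalizing by effort as noted above, is simple arithmetic. A hypothetical sketch (requirement IDs, dollar figures, and effort numbers are all invented for illustration):

```python
from collections import Counter

# Each stakeholder distributes exactly 100 imaginary dollars across
# requirement IDs (all names and figures here are invented).
allocations = [
    {"R1": 50, "R2": 30, "R3": 20},
    {"R1": 20, "R2": 60, "R3": 20},
    {"R1": 40, "R3": 60},
]

totals = Counter()
for person in allocations:
    assert sum(person.values()) == 100, "each voter gets exactly $100"
    totals.update(person)

# Optional refinement: divide by a rough effort estimate so that several
# cheap requirements can beat one expensive one.
effort = {"R1": 5, "R2": 2, "R3": 8}
value_per_effort = {r: totals[r] / effort[r] for r in totals}

print(totals.most_common())  # R1 leads on raw dollars...
print(max(value_per_effort, key=value_per_effort.get))  # ...but R2 wins per unit of effort
```

The effort-normalized ranking addresses exactly the limitation described above: a requirement with fewer total dollars can still come out on top if it is cheap to build.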

It’s Not All Going to Fit

Sometimes customers don’t like to prioritize requirements. They’re afraid they won’t ever get the ones that are low priority. Maybe they won’t. But if you can’t deliver everything, make sure you do deliver those capabilities that are most important to achieving your business objectives. Prioritize to focus the team on delivering maximum value as quickly as possible.

 

Developing a Roadmap for a New Product

If you’re a Business Analyst or a Product Manager, chances are you have been responsible for developing a business/product roadmap at some point in your career.

In some organizations, particularly those that either (1) have growing or mature products or (2) are predominantly waterfall, product roadmaps are developed with crystal-clear vision and high precision for 2–3 years down the road. There’s also little room for change (i.e. scope creep!), primarily because the roadmap is developed keeping in mind market and innovation trends, competition, and profitability, and hundreds of resources are already actively working on stuff.

However, what do you do when you’re managing a new product, particularly in a large and established (process-heavy) organization, where everyone’s after you for a crystal-clear product vision, and you’re barely managing to keep the only customer you have happy?

There are several challenges to developing new products, particularly in the enterprise space, where your first few customers are key to the future of your product and much of the product is shaped by the requirements and needs they give you. As a business analyst or product lead, you’re often caught in the battle of building features for that first customer, even if you don’t particularly think they will help you scale. Moreover, everyone on the team is looking to you to figure out what to build, both in the near term and the long term. It gets particularly challenging when the customer asks for a huge feature or integration that requires a lot of ground-work upfront — and wants it yesterday!

Nevertheless, as a business analyst, it always helps to plan your roadmap to the best of your knowledge in order to keep your product vision and strategy aligned at all times while giving the rest of your team & organization a preview of what’s likely to come. Here are some roadmapping methods I’ve learned & executed as a PM through my experience in developing new products and solutions while trying to put some method to the madness!

1. Reduce as much ambiguity on the ‘now’, while setting some context on the future.

A roadmap does not necessarily have to span multiple years or even a full year. And it does not necessarily have to be time-based. Particularly for new products that are yet to see the light of day, or are just beginning to, a now-next-later framework works great — the key is to create something everyone can understand that gives a sense of your product vision and why you’re building something.

Source: https://www.prodpad.com/blog/how-to-build-a-product-roadmap-everyone-understands/

I have in all-honesty built roadmaps with my team that look something like this, mostly because of varying release cycles and business priorities:

Source: Self-illustrated



2. Take your stakeholders along the ride.

I’ve been in many roadmap planning sessions where the product & design team have put down all of the things we think we want to build on sticky notes, brought the key stakeholders into one room (usually KOLs representing the sales, marketing, business development & engineering teams), and had them group things either by product category or by ‘must-haves’ and ‘nice-to-haves’ and dot-vote on the top 3 things they would like built. This technique works particularly well when a product is still in stealth mode and you as a PM are figuring out what the MVP version should have.

Source: http://dotmocracy.org/dot-voting/

An even more methodical and quantitative approach to roadmap planning & development is to define evaluation metrics and assign a rating & score to each feature under consideration. You could either come up with the ratings based on your own knowledge of the space or let your team determine a rating for each category. As a next step, you could self-score all of the items in the list and review with your stakeholders for feedback and changes (which can be more time-efficient), or you could have each stakeholder score the items individually or together as a group. At the end of the process, you should have your prioritized roadmap based on the weighted totals.

Source: Self-Illustrated
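The weighted-scoring approach can be sketched in a few lines. The metrics, weights, and ratings below are hypothetical; substitute whatever criteria matter in your context:

```python
# Hypothetical evaluation metrics, weights, and 1-5 ratings.
weights = {"customer_value": 0.4, "strategic_fit": 0.3, "effort_saved": 0.3}

ratings = {
    "Feature A": {"customer_value": 5, "strategic_fit": 3, "effort_saved": 2},
    "Feature B": {"customer_value": 3, "strategic_fit": 4, "effort_saved": 5},
}

def weighted_total(scores: dict) -> float:
    """Weighted sum of a feature's ratings across all metrics."""
    return sum(weights[metric] * scores[metric] for metric in weights)

# Highest weighted total first: your prioritized roadmap order.
roadmap = sorted(ratings, key=lambda f: weighted_total(ratings[f]), reverse=True)
print(roadmap)
```

Changing the weights is a cheap way to stress-test the roadmap: if a small shift in a weight reorders the list, that pair of features deserves a stakeholder discussion.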

3. Resource-based roadmapping

Another way to plan your product roadmap is from the bottom up — i.e. resource-based roadmapping. Perhaps you work in a large organization and there are offshore resources who are available and allocated to work on your product (because it’s going to be the next big thing!). When this happens, you can use your team’s help to figure out (1) how big a feature is (large, small, one sprint, etc.), (2) which components it involves (client, server, UX, etc.), and (3) what value it could bring to your product & customer. This approach is great for those “big ticket items” your customer would like in your product — like an app or service integration or a new algorithm — that need more time & resources to develop and could potentially be done in parallel to your “now & next” items.

Obviously, there are pros and cons to all of these approaches, and a lot of factors go into planning new products besides just customer needs — like competition, growth, revenue generation, etc. — and sometimes PMs are simply “told” to develop products as a response to the market or a competitor. In any case, roadmaps are a great way to visualize your product journey while enabling the rest of your team to better plan and organize their work.

The Focused Analyst

Have you ever:

  • Started what you thought was a simple analysis task, only to find yourself unravelling a web of mystery?
  • Revised work that was completed early because things changed in the interim?
  • Suffered from Analysis Paralysis – continually re-analysed a situation instead of moving forward?

If you answered yes to the above, you’re probably a business analyst. So what steps can you take to promote the delivery of analysis that is relevant, has a clear purpose, and is available at the point-of-need? Perhaps there are some lessons to be learnt from agile delivery.

A distinction is often made between agile and “other” business analysis techniques. However, many of the techniques that are used to organise and prioritise work in agile projects can be employed more generally. This article summarises agile principles and techniques that can be used to plan work, manage stakeholder expectations, and focus on the delivery of real value – regardless of the delivery environment… agile, waterfall, other or undefined.

1. Definition of Done

The Definition of Done is a list of criteria which must be met before a work item (usually a user story) is considered “done.” In agile delivery, the criteria are agreed by the agile team prior to commencing work. Once a work item meets the criteria, it is considered done – and “done” means done!

The idea of “done” can be expanded to work performed outside of agile projects. Agreeing a Definition of Done with relevant stakeholders is a good way to set and manage expectations and identify priorities, prior to commencing work. This can promote more focussed analysis by providing a clear end-point, reducing the risk of both over-analysis and scope creep.

2. Relative Estimation

In agile delivery, effort is a relative measure. Effort is measured by comparing tasks and assigning points. Each task is assigned a single point value (usually a number from the Fibonacci sequence) individually by each agile team member to denote how much effort they believe the task requires. Tasks that are considered more onerous compared to others are assigned a higher number of points, while tasks that are considered comparable are assigned the same number of points.  Where individual point values differ, the team discuss the reasons why to reach a consensus value. As tasks are completed over the course of an initiative, the actual effort-to-point ratio becomes apparent leading to more accurate estimates.

Relative estimation is useful as it is generally easier for individuals to predict whether a task will take more, less, or the same effort compared to another, as opposed to accurately measuring the time and resources required to complete a single task without comparison. Receiving estimates from multiple stakeholders/individuals is likely to result in more accurate estimates as additional variables may be considered. While it may take time to estimate enough tasks to accurately understand how points translate into actual effort, the result should ultimately be better, more reliable estimates for a range of work activities.
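The mechanics described above are easy to sketch. Here the vote values, hours, and the median-then-snap consensus rule are illustrative assumptions, not a prescribed procedure — real teams converge through discussion:

```python
import statistics

FIBONACCI = (1, 2, 3, 5, 8, 13, 21)

def nearest_fib(value: float) -> int:
    """Snap a discussed consensus to the nearest Fibonacci point value."""
    return min(FIBONACCI, key=lambda f: abs(f - value))

# Individual estimates for one task; divergent votes trigger discussion,
# after which the team converges on a single value (median used here
# purely as an illustration of that convergence).
votes = [3, 5, 5, 8]
consensus = nearest_fib(statistics.median(votes))

# As tasks complete, the actual effort-to-point ratio emerges and can be
# used to calibrate future forecasts.
completed = [(3, 11.0), (5, 19.5), (2, 7.5)]  # (points, actual hours)
hours_per_point = sum(h for _, h in completed) / sum(p for p, _ in completed)

print(consensus, round(hours_per_point, 2))
```

The `hours_per_point` figure is what turns relative points into calendar forecasts once enough tasks have been completed.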



3. Just Enough, Just In-Time

As a rule, analysis and design activities in agile are performed on a just enough, just in time basis – just enough to not waste time conducting analysis and/or completing documentation that doesn’t add value, and just in-time to ensure analysis is current and available at the point-of-need.

Analysis is rarely done for its own sake – it is an input for further analysis, design and/or decision-making activities. Ideally, analysis work should be completed as-and-when it is required to ensure it is relevant. Focusing on why and when analysis is required provides a foundation for delivering just enough value-adding analysis, just in time.

4. Regular Reflection

In agile delivery, a retrospective is held at the end of every iteration. Agile team members reflect on the previous iteration as a way of learning lessons and implementing changes to improve the delivery process. Open and honest feedback is encouraged, and elicitation techniques such as Stop-Start-Keep, Affinity Mapping and Brainstorming are used to promote full participation. Because reflection happens regularly, the changes required to address concerns are usually small and/or incremental, making them easier for the team to implement and monitor.

While most project methodologies promote the logging and reviewing of lessons at fixed points in time, this often happens too late to be applied to current initiatives. The regular reflection and change promoted in agile has the benefit of allowing learnings to be applied early when they may be of most benefit to the initiative. Including regular retrospectives in normal work practices can be a good way of promoting and implementing continuous improvement.

5. Focus on Progress – Not Perfection

Agile focuses on making progress – not obtaining perfection. Agile delivery sets up a system that can respond to new information through iterative feedback and refinement. Work items are constantly reviewed and revised as new information comes to light. As work items are completed, they are made available to stakeholders who are encouraged to provide feedback. It is generally accepted that initial versions will not be good enough, but that open and regular reviews will provide the information required to make them acceptable.

There are some simple steps for promoting the principle of progress over perfection in everyday business analysis. Set expectations early – never promise perfection. Engage stakeholders regularly – multiple short, sharp engagements can be more productive than longer one-off meetings. Focus on analysing what is important. Produce deliverables in a format that supports regular updating. Where possible, create “living documents” that are updated as-and-when information becomes available. Don’t be afraid to present work that may be incomplete or incorrect – showing the wrong answer can be an effective route to the right answer.

Conclusion

Many of the principles and techniques associated with agile delivery can be employed in business analysis more generally. Pigeonholing techniques into “agile” and “non-agile” may mean suitable analysis techniques are overlooked.  The principles and techniques that help agile teams organise and focus on delivering can assist in the delivery of timely, value-added analysis – regardless of the delivery approach.

Resources:

The Agile Samurai: How Agile Masters Deliver Great Software, Rasmusson, Jonathan, 2010.

Agile Alliance – Definition of Done, https://www.agilealliance.org/glossary/definition-of-done, accessed October 2020.

Minimum Viable Product (MVP) Released: Move on or Resume?

What’s next? If you are like me, then you are pondering the next item on your to-do list and you can relate to this question.

I end up planning for the next task while my current one is still in progress. Typically, a multi-year project is broken into phases. Prior to the completion of the first phase, discussions are already under way for the next phase. As humans, it is natural to get excited about the new features in an application and want to continually improve on those features. Yet, it is worthwhile to take a pause from “What’s next” or “What’s new”, so that the team can reflect on parking lot items and lessons learned to help define product value.

Here are a few action items that a Business Analyst (BA) or any team member can resume post MVP release:

  • Revisit the user’s wish list: I have worked on initiatives where we got so focused on delivering the MVP that the immediate next step was to continue improving the released MVP. In the process, the end users’ wish list, the “nice to have” requirements that were tabled, was permanently left off.

How can I help? Record, revisit, and re-evaluate wish list items on the backlog once the dust settles after the MVP release. Follow-up discussions relating to these items act as a reminder and help discover new backlog items.

  • Address edge cases: Recall the exceptional scenarios that came up in previous meetings? Often, these exceptions do not occur regularly and end up on the back burner because the initial goal is simply to roll out the MVP.

How can I help? Schedule a discussion, create a project plan, and address the one-off scenarios. Assess the risk of these scenarios and prioritize them to determine which ones will be in the next MVP release and which ones may never need attention.



  • Reiterate “new functionality vs. value”: I was shopping online once, and the website met all the primary needs of an online shopper. However, it was only after I had entered all the payment information that I was notified an item was not in stock. As an online customer, I see more value in receiving live inventory updates for an item than in the fancy features offered on the website. From the vendor’s perspective, the MVP release could be “complete,” but did anyone analyze and evaluate what is truly valuable to an online customer?

How can I help? Perform value analysis. Collaborate with the business partners to define “value”. Value may look very different from the lens of a manager vs the lens of an end user.

  • Update glossary: I have attended meetings where participants call out unfamiliar terms and abbreviations. When it is a global project, there is an extra layer of chaos, since a myriad of words and languages are thrown around. A lack of standard global terminology is an ongoing problem.

How can I help? Volunteer to author this list and get the definitions reviewed by business partners. Maintain a glossary as a living document in a central repository where everyone can review it after every MVP iteration.

  • Gather feedback: Are the users overwhelmed by the new application or functionality? Are they forced to adopt the new application? Do they feel it is the same way of doing business, just in a new application this time?

How can I help? A survey is a great option to capture the true sentiments of a user. It gives them an opportunity to vote on their likes and helps the team determine value for the next MVP release.

Have you resumed tasks that were placed on hold due to the MVP release? What are the action items you would like to pick up from where you left off? Think about it! You do not want to keep improving the last MVP endlessly and overlook the features that never made it into the MVP release. The end goal is not to get so absorbed with the MVP that the tasks or action items post MVP release slip through the cracks.

“Strive Not to Be A Success, But Rather to Be of Value” – Albert Einstein

Defining the Minimum Viable Product

When Eric Ries helped popularise the concept of the Minimum Viable Product (MVP) in his book ‘The Lean Startup’, he defined it as ‘that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort’. This supports products with a clear external customer and an anticipated revenue stream, allowing the team to focus on specific user-sought enhancements and features while minimising the risk of wasted time and effort developing features that are not wanted. On the other hand, what if the product is being developed for internal company use, where the functionality required is fully scoped and defined? Is developing an MVP still a worthwhile activity?

In this case there are a number of benefits that can be realised by defining a Minimum Viable Product; for example, and most importantly, it enables the Product Owner to:

  • better optimise the Product Backlog and
  • descope user stories with more precision when required.

So why is defining the MVP rarely done for internal projects? The reasons vary from project to project. I have been involved in a number of projects where the MVP has not been used, for a variety of reasons:

  • The project is not truly agile; the Product Owner, working on behalf of the client, has defined every user story as a ‘Must’.
  • Difficulty getting client buy-in; while they may support user story/requirements prioritisation, they refuse to relinquish features which are essentially enhancements to an MVP.
  • The product backlog is never truly completed; if the MVP is constantly evolving, it is felt that it is not worth the effort of defining one in the first place.

However, this does not have to be the case; defining the MVP for a product should not be a burdensome task. To help us develop the MVP for internal products, or any product whose required functionality is fully known, we must first understand the purpose of the MVP and define it.

The purpose of the MVP is to deliver a working product that fulfils the goal of the project. It does not preclude the delivery of features that add to or enhance the MVP, but it allows the team to focus on the core functionality when user stories need to be descoped or new user stories are added to the backlog following testing and sprint reviews. It is important to remember that delivering an MVP allows for user feedback that can greatly enhance the final delivered product. The MVP is not the delivery of a product with minimal functionality.



Marek Hasa identified the key elements of an MVP as follows:

  • Functionality – a bundle of features which are all intertwined and can be used to reach a goal within the product and receive specific value
  • Design – needs to meet the commercial quality standards so that the feedback of your target users is not biased by amateurish execution
  • Reliability – needs to be thoroughly tested and fully functional
  • Usability – users are able to complete target actions and gain specific benefits from using the product

The issue is identifying what the core functionality is.

So, what is the best way to identify the core functionality and define the MVP? Traditionally, this has been done through prioritisation of user stories in the product backlog or using a user journey map. Neither approach, because of its discrete nature, allows the analyst/product owner to fully understand how the user stories deliver the project goal. These approaches can also omit system features from the MVP.

To identify the core functionality of a product it is necessary to understand the relationship between the features being delivered. This relationship is frequently more than just a user journey map and needs to capture the actions required to deliver the project goal, the data requirements, and the non-functional requirements. The latter two support the implementation of the design and reliability needs of the MVP. Modelling this relationship allows the Business Analyst to ascertain the core functionality: the key end-to-end flows (though ideally this is only a single flow) that deliver the project goal. By mapping user stories to this flow, a Business Analyst can easily identify the functionality/user stories that are not directly relevant to these flows. They can also identify functionality that can be simplified or delivered via a workaround and enhanced in a later iteration. Examples of functionality that may not be considered part of the MVP include:

  • Automatic upload of data via an FTP site – an interim solution may be users uploading data from an email attachment
  • Calculations that handle exceptions to the norm – an interim solution may support the calculations being done outside the product and the results entered manually
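Mapping user stories to the core flow can be as simple as a set intersection. A sketch with hypothetical story IDs and flow steps (the names are invented for illustration):

```python
# Hypothetical story IDs and flow steps. The core flow is the minimal
# end-to-end path that delivers the project goal.
core_flow = {"upload-data", "validate", "calculate", "report"}

user_stories = {
    "US-01": {"upload-data"},
    "US-02": {"validate"},
    "US-03": {"calculate"},
    "US-04": {"report"},
    "US-05": {"ftp-auto-upload"},   # interim workaround: manual upload
    "US-06": {"exception-calcs"},   # interim workaround: calculate offline
}

# A story belongs in the MVP if it touches any step of the core flow.
mvp = {story for story, steps in user_stories.items() if steps & core_flow}
later = set(user_stories) - mvp

print(sorted(mvp))    # delivers the project goal
print(sorted(later))  # defer, simplify, or deliver via workaround
```

Because the mapping is explicit, adding or removing a story from the MVP definition is a one-line change whose impact on the core flow is immediately visible.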

Once the MVP has been identified, user stories can be prioritised accordingly. The description of this relationship should not be detailed; it is a high-level understanding that enables analysis of what is required to deliver the project goal. The detail will be expanded upon as part of user story refinement ahead of inclusion in a sprint. This approach enables user stories to be easily added to and removed from the definition of the MVP, and the impact of those changes to be assessed.

We have looked at the role of the MVP in an agile project and how it can be utilised in projects delivering software for internal use. We have also shown how an MVP can be identified in such an environment.

Should an MVP be the cornerstone of any agile project? To better deliver a product using agile it needs to be. The benefits of identifying an MVP are:

  • better user story prioritisation;
  • user story descoping without losing core functionality; and
  • user feedback on the product, so specific user-sought enhancements and features can be delivered.