
Why Visualize Requirements?

How many times have you been in a meeting discussing a set of requirements, a methodology, or a project plan, and someone has gotten up from their chair and said, “Where’s the whiteboard? Let me draw what I mean”?

I can tell you that, for me, it has been plenty!

Whilst requirements specifications are a great way to document the detailed information related to a new or existing product’s functionality, we live in a time-poor society. Few of us have the time to trawl through large documents, extract the information we need, and then start the seemingly endless e-mail threads about the individual use cases associated with each requirement – threads full of messages like “What did you mean by X?” and “I meant X and Y, but I think you thought I meant Z!” Instead, why don’t we adhere to the adage that a picture is worth a thousand words and, instead of page after page of documents, create a visual representation of those requirements – hopefully communicating a thousand words in a single picture?

However, what we must remember is that visualization of requirements can vary in its meaning. For example, some people view requirements visualization in the same context as simulation, whilst others interpret it to mean simple use case diagrams or business process flows, typically created in an MS Visio-type tool. For me, all of these usage contexts can represent visualization, so instead of trying to classify visualization into one genre I think it is best to view it on a scale, with simple flows at one end and high-end simulations at the other – the user selects whichever method is most appropriate at any given time. For example, if you are trying to show how a user will move through an application to make a purchase, then using MS Visio to define process flows may be enough. However, if you are trying to envisage how a new UI (User Interface) may look, then mockups and richer visualizations will serve you better. Whichever method is selected, there are a number of benefits that come from visualization. These include:

  • Flexibility and Usability – flow diagrams can be easier to navigate, helping readers find content
  • Mistakes can be easier to identify in a visualization
  • Easier to identify potential parallelisms between requirements and business processes
  • Easier to spot missing Use Cases in a business process
  • Increase understanding of the requirements themselves
  • Increase understanding of the dependencies between requirements
  • Visualization of business flows can provide a first bridge to Business Process Models or SOA repositories

Now that we have explored some of the benefits of visualization, the question becomes: when should it be used? Should we visualize every requirement we write or just some, and if we are going to be selective, which requirements should we choose?

In my opinion there are a number of questions we can ask ourselves which can help to determine when to and when not to visualize. These include (and there are many more):

  • Type of development method – do requirements visualizations fit in with the need for more agile and rapid requirements definition, or will they add more time to the development process?
  • Complexity of the requirement – if a requirement has too many sub-requirements, will this create a “spider’s web” diagram which may overcomplicate the definition of the requirement?
  • Type of requirement – should we visualize the user story only and define the functional requirements associated with this user story as text or do we want to visualize all requirements?
  • Risk level of the requirements – should only high priority or high risk requirements be candidates for visualization?

It is important to note that I am not saying requirements visualization is a “panacea” for enabling effective business and IT communication, but it does act as a good facilitator, helping to initiate a better degree of communication and understanding between the two parties.

So now the decision is yours. Why not try visualizing requirements and feed back to the group how things go.


Genefa Murphy works and blogs with Hewlett-Packard where she is Product Manager for Requirements Management. This article first appeared in HP Communities.

© Copyright 2009 Hewlett-Packard Development Company, L.P.

Requirements Definition for Outsourced Teams

In today’s economic environment, business organizations are demanding focused attention to fiscal discipline. IT organizations are finding themselves asked to support in-production applications on flat budgets, and new development is largely being approved only by the rule of efficiencies. Software applications are the focal point of improving efficiencies, as consolidation and integration projects can both reduce support costs of multiple siloed applications and streamline business processes for end users.

In this effort to do more with less, IT software groups are turning to outsourcing in record numbers in 2009. According to IT World, the economic collapse of 2009 has accelerated the use of outsourcers for software projects to record levels [1].

With CIOs turning to outsourcing as a strategic imperative to increase efficiencies for software projects, new challenges are being introduced that threaten the same efficiencies CIOs are moving to achieve.

By definition, outsourcing introduces third-party goods and services to augment capacity and capabilities. Since IT software has mission-critical implications, such third-party influence places a new burden on the business to ensure that these outsourced teams are properly goal-oriented, properly instructed, and properly managed to ensure productivity.

While there are many areas that can be influenced to ensure outsourcer success, study after study indicates that the true control point for IT software projects is application requirements definition.

What follows explores industry, analyst, and customer recommendations on how to focus on requirements to ensure application development accuracy and to control risk, so that the IT organization can turn those efficiencies into increased horsepower and lower operational costs.

Requirements Communication: A Challenge for IT Project Teams

The quality of requirements communication is a significant challenge for IT project teams, whether they are co-located or distributed. In the software development lifecycle, the time dedicated to requirements definition has largely been consumed at the early stage of the lifecycle, and it has involved dozens of subject matter experts who typically carry the title of business analyst or business systems analyst.

However, recent studies indicate that while business analysts do consume up to 10% of the project budget documenting requirements specifications [2], the result of their effort is typically in the form of difficult-to-understand paper-based documents. These paper-based documents are largely consumed by IT project teams, who must work to understand the intent of the author and translate the business need into detailed specification documentation.

Even in IT projects which largely consist of in-house development teams (i.e., not outsourced), the resulting rework and waste has been measurable. IAG reports that the waste and rework caused by poor requirements typically consumes upwards of 40% of the project budget [3].

As IT organizations move to embrace outsourced teams as an extension of IT software project teams, the challenge of communicating requirements is exacerbated. MetaGroup tells us that over 50% of organizations that leverage outsourced teams have critical business-application knowledge locked in the minds of in-house developers who have been disenfranchised by the outsourced labor pool [4]. As a result of this loss of subject matter expertise, the outsourced provider becomes increasingly dependent on the customer to produce highly precise and specific requirements documentation. MetaGroup also tells us that turnover rates at outsourced service providers run at an average of 15-20%, making it likely that the specific talent assigned to your project will turn over during the project cycle. This reinforces the need for easily referenceable and consumable requirements direction.

Experienced outsourcing customers and industry analysts have identified the appropriate focus areas to ensure IT project teams’ success when deploying outsourcing. While there are many areas that can impact the success of an IT team that has moved to leverage outsourced teams, a select few dramatically improve success. IDC’s recent report on control points for outsourcing success helps draw focus to the most important areas.

IDC articulates that the control points which ensure outsourcing is an opportunity for efficiency, and not a threat to efficiency, are the strategic touch-points with the outsourced team. IDC documents these touch-points as requirements definition, quality assurance, and in-flight project collaboration [5]. Other analysts such as Gartner, voke, and Forrester have all offered supporting research pointing to these same control points.

Controlling the Control Points

As mentioned earlier, there is a generally accepted principle of the importance of requirements, quality assurance, and collaboration when aligning outsourced teams. However, when IT arranges the control points in relation to one another, the logical focus priority of requirements is revealed. As you can see in Figure A, the quality assurance and project collaboration control points are directly impacted by the depth, quality, and understanding of the project’s Requirements Definition phase. In fact, rarely is a quality assurance test scenario not directly based on (and traced back to) a functional or non-functional requirement. A modern quality assurance trend is the move toward test-based development, a trend that is accelerating with outsourced teams. Test-based development builds a one-for-one relationship between test cases and requirements, where a test case can literally function as a requirement asset. In addition, collaboration with development is often directly linked to the implementation of a business requirement, or to how the software influences that requirement.

Figure A: Relationship of Requirements to QA and Collaboration

By directing focus on improving requirements definition, IT project teams that leverage outsourcing groups can better manage all control points, and thus improve the impact and focus of quality assurance and in-flight collaboration efforts.
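
The one-for-one relationship between test cases and requirements described above can also be checked mechanically. The following is a minimal sketch of such a traceability check; the requirement IDs, test case names, and data layout are hypothetical illustrations, not any particular tool's format.

```python
# Minimal sketch of a requirements-to-test-case traceability check.
# The requirement IDs and test case names below are hypothetical examples.

requirements = {
    "REQ-101": "User can search flights by date and destination",
    "REQ-102": "User can book a selected flight",
    "REQ-103": "Booking confirmation is emailed to the user",
}

# Each test case records the requirement(s) it validates.
test_cases = {
    "TC-001": {"covers": ["REQ-101"]},
    "TC-002": {"covers": ["REQ-102"]},
    "TC-003": {"covers": ["REQ-102", "REQ-103"]},
}

def uncovered_requirements(reqs, tests):
    """Return requirement IDs that no test case traces back to."""
    covered = {req_id for tc in tests.values() for req_id in tc["covers"]}
    return sorted(set(reqs) - covered)

if __name__ == "__main__":
    missing = uncovered_requirements(requirements, test_cases)
    if missing:
        print("Requirements without a traced test case:", ", ".join(missing))
    else:
        print("Every requirement is covered by at least one test case.")
```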

Problems with Requirements Communication

As we discussed in the previous section, working with outsourcers brings the obvious challenge of aligning distributed, third-party resources around project goals. Location challenges alone can introduce time-zone and collaboration barriers that tax productivity and efficiency. With third-party organizations, IT groups can also face additional challenges involving processes, tools, training, context, domain expertise, and incentives.

Requirements communication sits squarely at the center of this challenge. As we discussed, traditional methods of communicating requirements – enumerated lists of features, functional and non-functional requirements, business process diagrams, data rules, and so on – are generally documented in large word-processing or spreadsheet documents. When applied to an outsourced team, this method of communicating creates significant waste and opportunities for failure, as the barrier to understanding is too large to overcome.

Incorrect interpretation and the lack of requirements validation can create artificial (or false) goals which consume valuable outsourcing resources. Due to the nature of software development, these false goals usually manifest themselves as incorrectly implemented code, resulting in costly waste and rework. Outsource providers often treat such rework as “changes” and bill these “changes” back to the customer. This continues to erode the efficiency that IT organizations strive to achieve when adopting outsourcing in the first place.

Models, Validation and the Requirements Contract

To significantly reduce the probability of ineffective requirements communication through natural language documentation, IT organizations are transitioning to more precise vehicles to communicate requirements.

One of these vehicles is the adoption of a model-based approach to communicate requirements in a highly visual way. Requirements models provide detailed context capture through highly precise data structures. Complete models use universally accepted formats as structural guides, interlinking them to create a holistic representation of the future system. The formats used in these holistic representations include use cases for role- (or actor-) based flows, user-interface screen mockups, data lists, and the linkage of decision points to business process definitions. These structures augment enumerated lists of functional and non-functional requirements.

The benefits of models include the use of simulation to ensure requirements understanding. Simulation is a communication mechanism that walks requirements stakeholders through process, data, and UI flows in linear order to represent how the system should function. Stakeholders can witness the functionality in rich detail, consuming the information in a structured way that eliminates miscommunication.

Models and simulation also provide context for validation. Validation is the process in which stakeholders review each and every requirement in the appropriate sequence, make appropriate comments, and then sign off to confirm the requirements are accurate, clear, understood, and feasible to implement. Requirements validation can be considered one of the most cost-effective quality control cycles that can be implemented for an outsourcing initiative.

Since requirements are the “blueprint” of the system, outsourced stakeholders can make use of requirements models and simulation during implementation to gain an understanding of the goals of the project. Simulation eliminates ambiguity by providing a visual representation of goals, which in turn eliminates interpretation.

Rich requirements documentation is often a specified deliverable for IT projects, for reasons that include regulatory compliance (Sarbanes-Oxley, HIPAA, etc.), internal procedural specifications, and other internal review cycles.

This documentation also serves as the contract between the customer and the outsourced provider. Models can serve as the basis of this documentation, and next-generation requirements workbench solutions (such as Blueprint Requirements Center) can transform models into rich, custom Microsoft Word documentation. Since these documents are auto-generated, the effort required to build and maintain them is minimal.

Abstract vs. Detailed: Outsourcer Involvement in Requirements Definition

Outsourcing providers have learned a tremendous amount about how to improve the efficiencies of requirements communication. Many providers are shifting to a much heavier involvement in the process of Requirements Definition. Others continue to operate in a more traditional model, which abstracts them from the requirements definition process, leaving this on the shoulders of the customer.

Western outsourcers have heavily pioneered and practiced an approach that includes efforts to work with customers to articulate, document, and communicate requirements. Part of the value proposition of this approach is that the outsourced provider mitigates the risk of misunderstanding, and ensures that members of the outsourced team gain a clearer understanding of the project goals and deliverables. This approach is often referred to as a detailed approach for requirements definition in outsourced projects.

Indian and European outsourcers largely continue a practice which abstracts the outsourced provider from the definition of requirements. Such abstraction means that the customer takes on the responsibility to clearly document and articulate requirements to the outsourced provider. This requires the customer to produce extremely accurate and precise specifications of the project requirements, knowing that cultural, time-zone, process, and alignment barriers exist in the interpretation of those requirements. This approach is often referred to as an abstract approach for requirements definition in outsourced projects.

It is important for an IT organization to understand which of these two approaches is being taken, as the choice dramatically changes the methodologies and practices required to ensure a clear understanding of project requirements.

The Solution: A Case Study

The principles described in this paper should be considered and applied at the earliest stages of the project, both to set the stage for the work that follows and because the earlier errors are discovered and resolved, the less expensive their impacts will be. Just as the cost of errors increases exponentially the later they are found in the lifecycle, the corollary is also true: finding and dealing with them early can result in exponential savings. In the case of outsourced development this should happen even before the outsourced vendor is chosen, during development of the Request for Proposal (RFP).

An example of such a case is provided by Knowsys, a Blueprint partner. Knowsys staff were contracted to come in late to an RFP cycle that had gone awry at a major North American financial institution. The company needed to re-architect the entire e-commerce platform for its Wealth Management business. Significant investment had already been made in the RFP process, and when Knowsys arrived, the client was just beginning a series of vendor presentations, with senior executives of the client in attendance, in which each vendor summarized its bid for the outsourcing contract. As the presentations continued, the “elephant in the room” kept getting bigger and bigger: it had become clear that something had gone terribly wrong.

The requirements specified in the RFP to the vendors were incomplete. There was conflicting information and inconsistent levels of detail. Upon further analysis, it was discovered that whole business areas had been neglected. Compounding this was the fact that subject matter experts had made incorrect assumptions about how certain areas of the business functioned. These inaccuracies were further compounded by the vendors who bid (seven in all), each layering on their own assumptions to fill in the gaps. The result was a series of vendor presentations that were wildly different, almost as if they were addressing seven different problems, none of them the customer’s. In addition to uncovering these major flaws in the requirements of the RFP, this event also made it obvious that the process for inviting vendors was less than perfect. Some had clearly invested huge amounts of time and effort in their proposals, while others had invested much less. None had sufficient familiarity with the customer’s business or situation to be able to point out the obvious flaws.

In effect, the reset button was hit. The executives directed the group, with Knowsys now involved, to redevelop the RFP. This time executive sponsorship was front and center, and all aspects of the business were directed to be accessible and to support the initiative. All relevant aspects of the business were thoroughly analyzed and their needs amalgamated into a unified representation of the requirements. Validation was performed to ensure coverage, depth, and clarity. Much more rigor was applied to the process of selecting vendors to bid (in contrast to the open invitation used in the first cycle): a smaller group of more focused vendors who knew the client’s business was invited. The Knowsys team also made the vendor relationship far more collaborative, while respecting the impartiality required of a bidding process. They ensured there were multiple points of client-vendor contact and also put measures in place to ensure that any and all assumptions were validated. Finally, emphasis was also placed on quality assurance and testing aspects (as opposed to a sole focus on the requirements of what was to be built) to produce a much more rounded picture of the bidders’ proposals.

The net result of these initiatives was tremendous. The second set of presentations, by a much smaller group of bidders, was like night and day compared to the previous round. It was clear that each vendor had a very accurate grasp of the client’s problem and goals. Each proposal was compelling and had unique and interesting variations on their proposed solutions.

A vendor was selected and the project got underway, later than hoped due to the failed initial RFP cycle. Everyone felt much more confident entering such an important development initiative with the specifications they now had, the vendor they had selected, and the proposed solution. That confidence was validated when the project, even in the face of unexpected business changes during the project, was delivered on time and on budget with all success criteria being met. Had this financial institution selected a vendor in the first round, and proceeded with development on that basis, the results would undoubtedly have been quite different.

Conclusion

The steady rise in the outsourcing of software development has accelerated in the recent economic climate as companies desperately seek ways to reduce IT costs. The promise of cost savings is realizable, but only for those who focus on three vital requirements control points in the outsourcing arrangement: requirements communication, requirements reference, and requirements validation. Gaining mastery of these through appropriate processes, practices, and automation will dramatically improve the probability of success of the outsourced engagement, delivering the needed cost savings.

Footnotes

  [1] “Five Trends That Challenge Technology Offshoring in 2009,” IT World.
  [2] Karl E. Wiegers, Thorny Requirements Issues Handbook, Process Impact, 2005.
  [3] IAG Requirements Survey, 2007.
  [4] MetaGroup, “Top 10 Risks of the Offshore World,” SearchCIO.
  [5] Melinda Ballou (IDC), “Offshore Your Way to ALM,” RedmondMag.


Matthew Morgan is a marketing and product professional with 15 years of experience and a rich legacy of successfully driving multi-million-dollar marketing, product, and geographic business expansion efforts. He currently holds the executive position of SVP, Chief Marketing Officer for Blueprint, the global leader in Requirements Lifecycle Acceleration solutions. In this role, he is responsible for strategic marketing, partner relationships, and product management. His past tenure includes almost a decade at Mercury Interactive (acquired by HP Software for $4.5B), where he was Director of Product Marketing for a $740 million product category including Mercury’s Quality Management and Performance Management products. He holds a Bachelor of Science degree in Computer Science from the University of South Alabama.

Tools of the Trade Part II: Implementing a Requirements Management Tool

Part one in this series described how to prepare, plan, and select a requirements management tool. Selecting the tool is usually the easy part; implementing the tool without causing mass chaos is the greater challenge. Now that a tool has been selected, what is the best way to gain acceptance and adoption of the tool within your organization? Change rarely comes without some resistance. This article will address how to maneuver through the resistance in order to successfully implement a requirements management tool by recruiting early adopters, marketing the tool, and communicating the change early and often. Finally, I will address some lessons learned while implementing a tool at several organizations.

Implement a Tool

Production is not the application of tools to materials, but logic to work.
~ Peter F. Drucker

Form a Team

Once the tool is purchased, implementation will take some planning, training, and mentoring in order to effectively roll out the tool. If you haven’t already, start by forming an implementation team. This team will represent the tool and its benefits to the greater IT department. The team will also help plan, create guidelines and best practices, and mentor analysts in their given departments.

Treat this implementation just like you would any other IT project. Start with a project plan, determine implementation tasks, and assign resources. Then execute on the plan.

Once the project plan is in place, get the team members completely trained and comfortable with the tool. At one organization we brought in the tool vendor to train the team members and a few key QA folks. We gave the vendor some samples of use cases from our own projects and utilized these as examples during the training. Team members then began using the tool on their own projects. As we met together we learned from our own experiences and utilized these experiences to draft best practice guidelines for the organization. Best practices included how to structure requirements within the tool, creating templates for different types of projects, naming conventions, and tips and tricks for some of the tool’s quirks.

Recruit Early Adopters

Once the team has established some guidelines and tested the tool out on their own projects, it is time to branch out. Find a few experienced analysts who are willing to be early adopters of the tool. Have team members train and mentor the early adopters on how to use the tool. Early adopters should then go through a complete project lifecycle while using the tool. Periodically touch base with early adopters to apply what they learned from their experience to the best practice guidelines. Also, gather feedback from the developers and QA team members on their perceptions of the tool. Since these groups typically consume the output of requirements gathering, they will need to accept the tool and perhaps adapt their work habits to accommodate a new method for managing requirements. Don’t underestimate this change!

Communicate Early and Often

As with any change, there will always be naysayers and skeptics. Implementing a requirements management tool will be no different. In fact, writing requirements in a tool rather than a Microsoft Word document requires a change in mindset. This change is easy for some to make and difficult for others. The implementation team can smooth the transition through communication. Hold forums where the tool is demonstrated, the benefits and limitations are discussed, and early adopters’ experiences are shared. Hold these forums on a regular basis so that teams are kept informed of the progress and reminded of the tool’s benefits.

Word-of-mouth advertising will go a long way to help encourage other analysts to adopt the tool. Have the early adopters talk about their experiences and spread the good news throughout their teams. After trying the tool on a few development projects, one early adopter expressed his enthusiasm for the tool stating “I want to write all of my requirements in this tool.” By trying the tool out on a few simpler projects, he became comfortable with the tool, its limitations, and saw the benefits gained from utilizing the tool. We harnessed his enthusiasm to help sell the tool during an analyst open forum. He also spread the word to his immediate team members and more people signed up to use the tool on their next project.

An Excuse to Celebrate

Finally, use the tool as an excuse for a party! At one organization, to gather excitement for the event, we hosted online trivia questions on our SharePoint site. We posted daily questions and the top five winners received gift certificates to the event establishment. At the event we re-iterated the benefits of the tool, provided links to training simulations, demonstrated examples of successful projects, and distributed the best practice guidelines. Once the formalities were complete, we broke out the entertainment and used the opportunity to socialize with our peers.

When it was all said and done, the tool implementation was really a non-event. There was no loud outcry, no grumbling amongst peers, no mass chaos, and no wasted money. We methodically went about our task of implementing the selected tool, sought help from our peers, and repeatedly delivered the same message throughout the organization which resulted in an easy transition to tool adoption. The complete process took about a year. We steadily increased user adoption during that year and by the time we held our event, most people had already begun using the tool. Compared with the previous implementation of a tool mentioned in part one, where little thought went into the needs of the user community, this tool implementation went off without a hitch.

Final Thoughts

Wisdom too often never comes, and so one ought not to reject it merely because it comes late.
~Felix Frankfurter

Despite the claims of many vendors, no tool is perfect. It is better to discover the limitations of the requirements management tool early in the process. Once you know the limitations, devise a plan for working around them and minimizing the impact to your organization.

Test the performance of the application prior to purchasing. One of the biggest frustrations of a tool I have personally used is its inability to perform when multiple users are accessing the repository. Loading some projects will take 10 minutes or longer, while working in other projects completely freezes the tool. If at all possible, learn this before you buy! It will save you great frustration and keep analysts productive.

Never underestimate an employee’s need to resist change. It’s only natural. We all do it. Plan for it, accept it, and continue to communicate the benefits of the tool even when met with organizational resistance. Eventually people come around and the use of the tool will become a part of daily life within the organization.

Finally, learn from your experiences. Have an open mind and listen to the experiences of early adopters and implementation team members. Tout your successes and learn from your failures. Success will, undoubtedly, follow.


Renee Saint-Louis is a Senior Systems Analyst with a subsidiary of The Schwan Food Company where she established and led an Analyst Center of Excellence. Prior to joining Schwan, Renee served as the Requirements Elicitation Practice Lead at a large insurance company. Renee has been a practicing analyst for more than 10 years.

Tools of the Trade Part I: Selecting a Requirements Management Tool

Have you ever experienced this? Management attends a trade show and discovers the greatest requirements tool since the bread slicer. It will solve all your requirements issues and produce happy, satisfied business customers – or so the vendor claims. The manager purchases the tool, and suddenly it’s your job to implement it throughout the organization. “Go forth and do great things,” your manager mandates. You walk away dumbfounded, wondering, “Where do I go from here?” Experience tells you there is more to it than just purchasing the software; some analysis is necessary in order to successfully launch a new tool at your organization.

This two-part series describes a tried and true approach to selecting and implementing a new requirements management tool within an organization. Based on experience implementing tools at several organizations, this first article describes a method to successfully prepare, plan, and select a new requirements tool. Part two will describe how to successfully implement the selected tool as well as some lessons learned along the way.

Assess Tool Readiness

“Assessing is theory, facts, and art form.”
~ Jim Murphy

Through my years of experience I’ve seen the obsession some organizations have with implementing a tool in order to enforce a loose, poorly followed requirements gathering process – believing that a tool will solve all of the requirements management issues in an organization. However, if poorly implemented, a tool will only exacerbate a poor or non-existent process and wreak havoc on the organization.

As an example, one organization had little to no requirements processes in place. Management was determined to implement a tool in order to enforce a process across the organization. Elaborate rules and regulations were devised by teams of people who were not requirements practitioners. These rules did little but enforce the nomenclature used within the tool. Little thought went into the intent of capturing requirements, determining stakeholder goals and needs, and managing requirements over the development lifecycle. What ensued was mass chaos throughout the analyst organization. Analysts were angry at having to capture requirements in the tool, saw little value in the process, and outright refused to use it. The result: the use of the tool was discontinued. Thousands of dollars were wasted on license fees, training, and implementation. Analyst productivity declined and there was a feeling of ill will throughout the organization!

Don’t let this happen to you! Before embarking upon a tool implementation, begin by understanding what is expected from the tool and then assess the organizational readiness for it. This can be as formal or informal as needed for your organization. The point is to actually look at your organization, determine why a tool is necessary, and assess the readiness for adopting one.

Know the Theory

First, have a clear understanding of what the expectations are of the tool and what problems your organization is looking to solve by purchasing a tool. It is a good idea to attack this project with the same analysis skills you would a typical IT project. Begin by writing the vision statement and problem statements. These will give you a clear direction for selecting a tool and guide the team when rolling out the tool across the organization. Some points to consider:

  1. Determine what pain points you are trying to eliminate by implementing a tool. For example:
    • Problem: Requirements are captured in disparate Word documents and continually change without stakeholders being alerted.
      Solution: Providing a repository to house all requirements
    • Problem: Once a project is launched, requirements are difficult to find and rationales for requirements are often forgotten.
      Solution: Providing a tool which captures discussion, rationale, and decisions for project requirements.
    • Problem: Projects are routinely launched with major re-work after the implementation due to poorly defined requirements.
      Solution: Providing a tool which illustrates requirements in a manner meaningful to stakeholders – through models, screen mock-ups, use cases etc.
  2. Determine what goal the organization is trying to achieve by implementing a tool. For example: 
    • Reduce the amount of money spent on re-work after a project is installed by generating models and written use cases from screen mock-ups to aid in eliciting higher quality requirements from stakeholders.
    • Increase speed to market by creating prototypes and generating requirements based on the prototype.
  3. Document what the vendor has promised (if one has been selected for you).

Get the Facts

The next step is to assess the readiness of your organization. Consider not only how it will impact the analysts, but also the other stakeholders involved. Some points to consider when assessing your organization:

  1. Is a tool necessary?
  2. What impact will implementing a tool have on the organization?
  3. What development methodology is utilized by analyst groups? Is it the same across the organization or do some follow waterfall while others follow agile development processes?
  4. How mature is the requirements process across all analyst groups?
    1. Are some stronger than others?
    2. Are processes similar across groups?
    3. How disparate are processes across groups?
  5. Are there additional organizational changes occurring that may impact a successful tool implementation?
    1. If so, what is the time frame for implementation? Will it interfere with a tool implementation?
  6. What format are requirements typically gathered in? (e.g. Word document, Excel, e-mail, napkins, hallway conversations, dreams)
  7. What is the maturity of the PM, Development, and QA organizations?
  8. How will the stakeholders perceive the change? Are they open to receiving requirements in a new format? Will the process be seamless to them?

During this exercise, capture the risks and issues that may occur if a tool is implemented. Record the response to the risks and issues so that they may be resolved during the tool rollout.

Remember that a tool is exactly that – just a tool. Its implementation will have significant impact on the organization and any tool that is selected needs to be thoughtfully considered and carefully planned.

Based on your assessment, it may be necessary to recommend deferring the purchase of a tool until any process issues are shored up. It is fundamentally necessary to have solid requirements practices in place before implementing a tool. Get your facts in order and make recommendations on how to improve your processes before a tool is implemented. For example, if documenting requirements is sketchy (at best) in your organization, recommend to management that all of IT, not just analysts, be trained in best practices for correctly documenting requirements within your software methodology. Training all of IT sets expectations for what acceptable requirements deliverables look like and holds analysts accountable for producing quality requirements.

Create a plan for addressing the issues encountered during the assessment and recommend a time frame for improving the requirements practice, including an appropriate time for implementing a tool. Generally, management will appreciate a well thought out plan, rather than a haphazard approach to implementation.

Select a Tool

A fool with a tool is still a fool.
~Anonymous

Hopefully you find yourself in the situation where you and your team have the ability to evaluate and select a tool, rather than management dictating which tool you must use (or, worse yet, one they have already purchased). If you are able to select a tool, then you are in luck. If not, know that most tools on the market offer enough of an improvement over capturing requirements in an ad hoc manner that some benefit will certainly follow from implementing the tool.

When beginning the selection process, start with the problem statements that were created during the assessment phase. Any tool that is selected will need to address the pain points of your organization. This will help ground you as you begin your search and keep you from being distracted by the many bells and whistles that companies offer. When it comes to selecting a tool, the most important criterion is that it addresses the pain points you are experiencing. Bells and whistles will only encumber your process rather than enhance it. At its most basic, a tool should enhance your existing process, not detract from it.

As part of your selection process, determine the criteria which are most important and evaluate each tool against them. A simple spreadsheet that scores the criteria for each candidate vendor will help in objectively evaluating a tool. Have each team member rank each feature/need deemed necessary for the tool to succeed at your organization, using a scale from 1 to 10 where 1 is least favorable and 10 is most favorable. Do the same for the priority column, then multiply the two to calculate the total value for each feature and, in turn, a total for the vendor’s product. Here is a sample (a small scripted version of the same calculation follows the table):

Feature/Need                         | Product Ranking | Priority | Total Value
Integration with MS Word             |        3        |    7     |     21
Visual depiction of requirements     |        1        |    9     |      9
Traceability                         |        4        |    8     |     32
Integration with development tools   |        3        |    2     |      6
Integration with test suite          |        4        |    3     |     12
Ability to define custom properties  |        5        |    1     |      5
Vendor total:                        |                 |          |     85
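
If you would rather script the arithmetic than maintain it by hand in the spreadsheet, the following minimal sketch reproduces the calculation above (total value = product ranking × priority, summed per vendor). The feature names and scores are taken from the sample; everything else is an illustrative assumption, not part of any particular tool.

```python
# Minimal sketch of the weighted vendor-scoring spreadsheet shown above.
# Each entry is (feature/need, product ranking 1-10, priority 1-10).
criteria = [
    ("Integration with MS Word",            3, 7),
    ("Visual depiction of requirements",    1, 9),
    ("Traceability",                        4, 8),
    ("Integration with development tools",  3, 2),
    ("Integration with test suite",         4, 3),
    ("Ability to define custom properties", 5, 1),
]

def score_vendor(rows):
    """Total value = sum of (product ranking x priority) across all criteria."""
    return sum(ranking * priority for _, ranking, priority in rows)

for feature, ranking, priority in criteria:
    print(f"{feature:38s} {ranking:>2} x {priority:>2} = {ranking * priority:>3}")
print(f"Vendor total: {score_vendor(criteria)}")  # 85 for the sample above
```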

Once you have narrowed the field down to a few choice vendors, evaluate the products thoroughly. The best way to do this is to try before you buy. If at all possible, negotiate a trial period with each vendor where you can put the tool to the test on a real project. The only true test to tell if the tool will work in your organization is to try it out in real life. No demo projects, no smoke and mirrors, just good old fashioned writing your own requirements in the tool in order to learn its quirks and limitations. This may add some extra time and produce some redundancy in your project documentation as you will want to continue to store whatever you produce within your own directories, but the small pain up front will alleviate pain on the back end once the tool is implemented. Complete the ranking process again once you have completed the trial run. This should produce a clear winner.

Next up: Implementing a tool without causing mass chaos in your organization.


Renee Saint-Louis is a Senior Systems Analyst with a subsidiary of The Schwan Food Company where she established and led an Analyst Center of Excellence. Prior to joining Schwan, Renee served as the Requirements Elicitation Practice Lead at a large insurance company. Renee has been a practicing analyst for more than 10 years.

Authoring Requirements in an Agile World

Equipping and Empowering the Modern BA

The principles of agile development were proven before agile – as a defined approach – came into vogue. Agile principles were being practiced to varying degrees in most organizations as a natural reaction to the issues surrounding a rigid waterfall approach.

Every individual task on a software project carries some degree of risk. There are many sources of these risks – some third-party component may not function as advertised, some module your work depends on may not be available on time, inputs to the task were ambiguous and you made an incorrect assumption when ‘filling in the blanks’ – the list is endless. All these risks in the individual tasks contribute to the overall project risk and, quite simply, risk isn’t retired until the task is done. Putting it another way, the risk associated with building something is really only retired when the thing is built. That’s the principle behind iterative and incremental development.

So instead of leaving the first unveiling of a new or enhanced application until toward the end of the project, break it down so that pieces (functional pieces) can be built incrementally.   While this doesn’t always mean the portions are complete and shippable, stakeholders can see them working and try them out.  This offers several advantages: it retires risk, as mentioned before.  It often also exposes other hidden risks.  These risks will surface at some point, and the earlier the better.  Exposing hidden risk and retiring risk early makes the estimating process more accurate.  This increases the probability of delivering applications of value that address the business needs in terms of capability as well as budget and schedule needs.

While agile is becoming mainstream on small and mid-sized projects, challenges exist elsewhere, such as how to manage this approach on larger projects and in distributed development. Another challenge for many is how to apply a requirements lifecycle to agile projects. Many agile principles – from “just enough and no more”, to “begin development before all requirements are complete”, to “test first” – can be counter-intuitive. Also, what about non-functional requirements? What about testing independence? How can we cost something if we don’t know what the requirements are?

This article attempts to describe some ways to handle these challenges.  It is based on an example of real, ongoing, and very successful product development that uses agile with a globally distributed team.  It describes one set of techniques that is known to work.

Process Overview

There are many “flavors” of agile, but at a high level they all essentially adhere to the following basic principles:

PRINCIPLE         | DESCRIPTION
Iterative         | Both requirements and software are developed in small iterations.
Evolutionary      | Incremental evolution of requirements and software; just enough “requirements details”.
Time-Boxed        | Fixed duration for requirements and software build iterations.
Customer Driven   | We actively engage our customers in feature prioritization; we embrace change to requirements/software as we build them.
Adaptive Planning | We expect planning to be wrong; requirements details, as well as macro-level scope, are expected to change as we progress through the release.

For the purposes of this article, our example uses a Scrum-based Agile process1.  In this approach the iterations (sprints) are two weeks in duration.  A sprint is a complete development cycle where the goal is to have built some demonstrable portion of the end application.  It should be noted that while the initial sprints do involve building a portion of the application, often this is infrastructure-level software that’s needed to support features to be developed in later sprints. This means there may not be much for stakeholders to “see”. 

Each organization needs to determine the sprint duration that’s optimal for them.  It needs to be long enough to actually build something, but not long enough for people to become defocused and go off track (thereby wasting time and effort).  We found two weeks to be optimal for our environment.

Key during the first part of the process is to determine and agree on project scope, also known as the “release backlog”. Determining the release backlog could itself take a series of sprints, in which the product owner and other stakeholders iterate through the relative priority or value of features, along with high-level costing of those features, to arrive at the release backlog.

At the other end of the project, development of new code doesn’t extend to the last day of the last sprint. We typically reserve the last few sprints, depending on the magnitude of the release, for stabilization. In other words, in the last few sprints only bug fixing is done, and no net new features are developed. This goes against the agile purists’ approach to software development, as each sprint should, in theory, produce production-ready code. However, truly achieving that requires significant testing and build automation that most organizations don’t have in place. It is a good goal to strive towards, but don’t expect to achieve it right away.

Requirements Definition in this Process

There are several ways you could perform requirements definition in an agile process, but again our goal is to introduce an example that’s been tried and is known to work.   This example begins with the assumption that you already have a good sense of the “business need”, either inherently in the case of a small and cohesive group, or by having modeled the business processes and communicating in a way that all understand.  So we begin at the level of defining requirements for the application.

Requirements at the Beginning

Begin with a high-level list of features. Each feature is prioritized by Product Management or Business Analysts (depending on your organization). These features are typically then decomposed to elaborate and provide detail, and they are organized by folder groupings and by type. If needed to communicate better, create low-fidelity mockups or sketches of user interfaces (or portions of them), and even high-level use cases or abstract scenarios to express user goals. We sometimes do these and sometimes not, depending on the nature of what’s being expressed and the audience we’re communicating with. For example, if we’re communicating a fairly simple concept (how to select a flight) and our audience is familiar with the problem space (they’ve built flight reservation applications before), then clear textual statements may be “just enough” to meet our goals at this stage. These goals are to establish rough estimates (variance of 50-100%) and, based on these and the priorities, to agree on the initial scope of the release (which features are in and which are out).

Once reviewed, this list of features becomes the release backlog.  The features are then assigned to the first several sprints based on priority and development considerations.

Figure: Example High Level Features with Properties
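
As a rough illustration of how prioritized features might be laid into the first several sprints, here is a minimal sketch. The feature names, point estimates, and per-sprint capacity are hypothetical, and the greedy assignment ignores the development considerations (dependencies, infrastructure work) that a real team would also weigh.

```python
# Hypothetical sketch: assign a prioritized release backlog to two-week sprints.
# Features are (name, priority, rough estimate in points); lower priority = more important.
release_backlog = [
    ("Select a flight",        1, 8),
    ("Book a car",             2, 5),
    ("Checkout and payment",   1, 13),
    ("Email confirmation",     3, 3),
    ("Booking history report", 4, 8),
]

SPRINT_CAPACITY = 15  # assumed points a team can deliver in one two-week sprint

def plan_sprints(backlog, capacity):
    """Greedily fill sprints in priority order; returns a list of sprints (feature names)."""
    sprints, current, used = [], [], 0
    for name, _, estimate in sorted(backlog, key=lambda f: f[1]):
        if used + estimate > capacity and current:
            sprints.append(current)
            current, used = [], 0
        current.append(name)
        used += estimate
    if current:
        sprints.append(current)
    return sprints

for i, sprint in enumerate(plan_sprints(release_backlog, SPRINT_CAPACITY), start=1):
    print(f"Sprint {i}: {', '.join(sprint)}")
```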

Requirements During the Sprint

With respect to the requirements, the principle of “just enough” is paramount. If the need has been expressed well enough to communicate it and have it built as intended, then you’re done. Going further provides no additional value. This means you’ll have a varying level of detail in the requirements across the breadth of the product. For some parts of the product a high-to-medium level of detail may be “just enough”, while for other, more complex areas a very fine level of detail may be needed.

In each sprint there are tasks of every discipline taking place.  Requirements, design, coding, integration, and testing tasks are all being performed concurrently for different aspects of the system.   The requirements being defined in one sprint will drive design and coding in a subsequent sprint.  The net effect is that all these disciplines are active in every sprint of the lifecycle, but the relative emphasis changes depending on what stage you’re at in the project.  For example, requirements tasks are being performed in all sprints, but they are emphasized more in the earlier sprints.

In each sprint, the high-level features are elaborated into greater levels of detail. This more detailed expression of the requirements usually begins with usage scenarios/stories and/or visuals, and it is expressed in the form of a model. The models can emphasize user interface, use cases, scenarios, business rules, and combinations of these, depending upon the nature of what is being expressed. Sometimes these are created collaboratively, but more often, in our experience, one person creates an initial version and then holds a review with others for feedback. In our case it is typically the product managers and/or business analysts who create these models, and usually between one and three reviews are held with the developers, testers, and other stakeholders. The review serves multiple purposes, including:

  • To facilitate knowledge transfer to all stakeholders including architects, UE designers, developers, testers, and executive sponsors on what is needed
  • To allow the architects, UE Designers and developers to assess feasibility
  • To determine if there is sufficient detail in the requirements to allow development to proceed

With appropriate technology, tests can be automatically generated from the requirements, producing tests that are 100% consistent with the requirements and enabling the immediate testing of code developed during sprints.

Continuous and Adaptive Planning

With this approach planning is continuous and adaptive throughout the lifecycle, allowing resources to be redirected depending on new discoveries that come to light during each sprint. This ability to course-correct in mid-flight is what gives projects their “agility”. At the end of each sprint we take stock of what was achieved during the sprint and record progress actuals. The work of the next sprint is adjusted as necessary based on this, but also based on testing results, feedback from reviews of that sprint’s build, any new risks or issues that surfaced (or others that were retired), and any external changes in business conditions. Estimates and priorities are adjusted accordingly, and any changes to release scope and sprint plans are made. In general we try not to make major mid-flight corrections during a sprint, which is one of the reasons why we like two-week sprints. If sprints were, say, four weeks, then we would lose agility. A two-week sprint is also easier and more accurate to estimate than a four-week one.
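
A minimal sketch of the end-of-sprint “take stock and adjust” step might look like the following, using average observed velocity to project whether the remaining backlog fits in the remaining sprints. The numbers and the simple forecasting rule are illustrative assumptions, not the team’s actual tooling.

```python
# Hypothetical sketch of end-of-sprint re-planning based on observed velocity.
import math

completed_points_per_sprint = [12, 15, 11]   # actuals recorded at the end of each sprint
remaining_backlog_points = 60                # estimate for features still in scope
sprints_left_before_stabilization = 4        # remaining sprints, excluding stabilization

velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
sprints_needed = math.ceil(remaining_backlog_points / velocity)

print(f"Average velocity: {velocity:.1f} points/sprint")
print(f"Sprints needed for remaining backlog: {sprints_needed}")
if sprints_needed > sprints_left_before_stabilization:
    print("Forecast exceeds remaining sprints: revisit scope or priorities with the product owner.")
else:
    print("Remaining scope fits in the remaining sprints.")
```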

Figure: Example Story with Tasks and Estimates

With respect to requirements, for the features assigned to the sprint (along with any high-level requirements models), development creates high-level goals for each feature and estimates them. The goals express what aspects of the feature the team will attempt to build during that sprint, recognizing that one sprint is often not enough time to implement an entire feature. The feature and its high-level goals become the content of the “story”. Once the story is defined, the developer details and estimates the tasks to be done for that story over the next two weeks (the sprint) and proceeds with development, tracking daily progress against these tasks in an agile project management tool and covering issues in the daily scrum.
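
To make the feature-story-task breakdown concrete, here is a minimal sketch of the kind of record an agile project management tool might keep for one story. The class names, fields, and sample values are hypothetical illustrations only.

```python
# Hypothetical sketch of a story broken into estimated tasks for one sprint.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    estimate_hours: float
    remaining_hours: float  # updated daily and discussed in the daily scrum

@dataclass
class Story:
    feature: str
    sprint_goal: str             # the aspect of the feature attempted this sprint
    tasks: list[Task] = field(default_factory=list)

    def remaining_work(self) -> float:
        return sum(t.remaining_hours for t in self.tasks)

story = Story(
    feature="Book a flight",
    sprint_goal="Search and display available flights (no seat selection yet)",
    tasks=[
        Task("Design search form", 6, 2),
        Task("Implement availability query", 12, 12),
        Task("Unit and functional tests", 8, 8),
    ],
)

print(f"{story.feature}: {story.remaining_work()} hours remaining this sprint")
```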

What about the Non-functional Requirements?

The various agile approaches have evolved several techniques to express system functionality.  These are things like user stories, use cases, or usage scenarios, that represent “observable system behaviors and artifacts that deliver value to users” like screens, reports, rules, etc.   These functional requirements express “what” the system is to do.  Examples of this could be things like “generate a stock-level report”, “make a car reservation”, “register for a webinar”, or “withdraw cash”.

Associated with the functionality of a system are its “qualities”.  These express “how well” the system is supposed to do what it does – how fast, how reliably, how usable, and so on.  Sometimes these non-functional requirements are associated with certain functional requirements and other times they apply to whole portions of the system or the entire system.   So how do these very important requirements get accounted for in our agile process?

They are expressed at a high level in the form of textual statements. For example: “Any booking transaction shall be able to be completed by a user (as defined in section a) in less than three minutes, 95% of the time”.

As functional requirements are decomposed and derived, any associated non-functional requirements should similarly be decomposed, derived, and associated with the lower levels. For example, the above performance requirement is associated with all the “booking transaction” functional requirements (book a car, book a flight, book a hotel). If those functional requirements are decomposed into the lower-level requirements “select a car”, “choose rental options”, and “check-out”, then the non-functional requirement may similarly be decomposed into requirements for selecting a car in less than 30 seconds, choosing options in less than one minute, and checking out in less than 1.5 minutes.
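
One simple way to keep such a decomposition honest is to check that the derived budgets still satisfy the parent requirement. The sketch below uses the booking example from the text; the data layout is an illustration, not a prescribed format.

```python
# Sketch: verify that derived performance budgets still satisfy the parent requirement.
# Parent: "Any booking transaction shall complete in under three minutes, 95% of the time."
PARENT_BUDGET_SECONDS = 180

derived_budgets = {
    "Select a car": 30,
    "Choose rental options": 60,
    "Check-out": 90,
}

total = sum(derived_budgets.values())
assert total <= PARENT_BUDGET_SECONDS, (
    f"Derived budgets ({total}s) exceed the parent budget ({PARENT_BUDGET_SECONDS}s)"
)
print(f"Derived budgets total {total}s of the {PARENT_BUDGET_SECONDS}s parent budget.")
```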

During review sessions the functional requirements take center stage.  However, during these reviews any non-functional requirements that are relevant need to be presented and reviewed as well.  Traceability is usually relied on to identify them. Any non-functional requirements relevant to the functional requirements of the story need to be expressed as part of the story, so the developer can take these into account during development. 

QA needs to create tests to validate all requirements, including the non-functional requirements.  Sometimes before a feature has been completely implemented, non-functional requirements can be partially tested or tested for trending, but usually cannot be completely tested until the feature is completely implemented (which can take several sprints).

What about Testing?

The high degree of concurrency in agile processes means that testing is performed in every sprint of the lifecycle.  This can be a considerable change from traditional approaches and offers several benefits.  First, it tends to foster a much more collaborative environment as the testers are involved early.  It also, of course, means that items which wouldn’t have been caught until later in the lifecycle are caught early when they can be fixed much more cheaply.

In agile, Product Owners play a very big role in the testing process, and they do so throughout the development lifecycle. Whereas many traditional approaches rely on requirements specifications as “proxies” for the product owners, agile places much more reliance directly on the product owner, effectively bypassing many issues that can arise from imperfect specifications. In addition to the product owners, developers also test. Test-driven development is a prominent technique used by developers in agile approaches, where tests are written up-front and serve to guide the application coding as well as provide automated testing, which helps with code stability. To augment test-driven development, which is primarily focused on code-level testing done by developers, new technologies that can auto-generate functional tests from functional requirements enable a QA team to conduct functional testing based on test cases that are never “out of sync” with the requirements specification. This enables the QA team to test on a continuous basis, since executable code and test cases are available throughout the lifecycle. In our example, all three are employed – product owner, development staff, and independent QA – on a continuous basis.
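
As a reminder of what test-first development looks like at the code level, here is a minimal, hypothetical example: the test class is written before the function exists and drives what the implementation must do. The booking calculation shown is invented purely for illustration.

```python
# Minimal, hypothetical test-first example: the tests below are conceptually written
# before the implementation and define what the code must do to pass.
import unittest

def booking_total(nights: int, nightly_rate: float, tax_rate: float = 0.10) -> float:
    """Implementation written after the tests, just enough to make them pass."""
    return round(nights * nightly_rate * (1 + tax_rate), 2)

class BookingTotalTest(unittest.TestCase):
    def test_three_nights_with_default_tax(self):
        self.assertEqual(booking_total(3, 100.0), 330.0)

    def test_zero_nights_costs_nothing(self):
        self.assertEqual(booking_total(0, 100.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```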

The requirements that we develop in our example are a decomposition of high-level text statements, augmented by more detailed requirements models that give a rich expression of what is to be built.  Requirements reviews are based on simulations of these models, and those simulations are valuable for a number of reasons.  First, just as agile provides huge benefit by producing working software each sprint that stakeholders can see and interact with, simulation lets stakeholders see and interact with ‘virtual’ working software even more frequently.  Second, people often create prototypes to do much the same thing.  The difference is that simulation engines built for requirements definition are based on use cases or scenarios and therefore guide stakeholders through how the future application will actually be used, providing structure and purpose to the requirements review sessions.  Prototypes, including simple interactive user-interface mock-ups, are simply partial systems that ‘exist’ and provide no guidance as to how they are intended to be used; stakeholders have to discover this on their own and never know whether they are correct or whether something has been missed.

It is important to note that the principle of “just enough” still applies when producing these models; we rely on the requirements review sessions held with designers/developers to determine when it is “enough.”  This approach produces very high quality requirements, and it is from these requirements that the tests are automatically generated.  In fact, such thorough testing at such a rapid pace would likely not be possible without automatic test generation.
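
The article does not describe how such generation works internally, but as a rough, assumed illustration of the principle – deriving executable test cases directly from scenario steps so they cannot drift out of sync with the requirements – consider this sketch (the scenario data and output format are invented):

```python
# Rough illustration of deriving functional test cases from use-case
# scenarios. Scenario data is hypothetical; real tools work from much
# richer requirements models.

scenarios = {
    "Book a car": ["select a car", "choose rental options", "check-out"],
    "Book a flight": ["search flights", "select a fare", "check-out"],
}


def generate_test_cases(scenarios):
    """Turn each scenario into an ordered list of test steps."""
    test_cases = []
    for name, steps in scenarios.items():
        test_cases.append({
            "name": f"Verify user can {name.lower()}",
            "steps": [f"Step {i}: user performs '{step}'"
                      for i, step in enumerate(steps, start=1)],
            "expected": f"'{name}' completes successfully",
        })
    return test_cases


for case in generate_test_cases(scenarios):
    print(case["name"])
    for step in case["steps"]:
        print("  " + step)
    print("  Expected:", case["expected"])
```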

Although we strive to have shippable code at the end of each sprint, this goal is not always achieved, and we may need to use the last sprint or two to stabilize the code.  Since testing has been taking place continuously before these final sprints, the application is already of considerably high quality when it enters the stabilization phase, so the risk is manageable and the ship date is rarely missed.

What about Estimating?

Remember that in agile it is typically ‘time’ that is fixed in the triad of time, features, and quality.  In our approach, remember also that with continuous testing and the final sprints reserved for stabilization, quality tends to be fairly well known as well.  This leaves features as the variable, so what we’re actually estimating is the feature set that will be shipped.

As always, the accuracy of estimates is a function of several factors, but I’m going to focus on just three:

  • The quality of the information you have to base estimates on,
  • The inherent risk in what you’re estimating, and
  • The availability of representative historical examples that you can draw from.

In our approach, estimates are made throughout the development cycle, beginning in the initial scoping sprints.  As mentioned earlier, once the list of candidate features is known and expressed at a high (scoping) level, they are estimated.  Naturally, at this point the estimates are going to be at their most “inaccurate” for the project lifecycle, since the requirements have not yet been decomposed to a detailed level (quality of information).  This means there is significant risk remaining in the work to be done.  Similar projects done in the past may help mitigate some of this risk and increase the accuracy of the estimates (e.g. we’ve done ten projects just like this and they were fairly consistent in their results).

The initial estimates are key inputs to the scoping and sprint-planning processes.  As the project proceeds, risks are exposed and dealt with in each sprint, requirements are decomposed to finer levels of detail, and estimates naturally become more accurate.  As you might guess, estimation is done toward the end of each sprint and is used in the planning of future sprints.
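
A simple way to picture this tightening is a velocity-based projection.  The sketch below is illustrative only (the feature names, story points, and velocities are invented) and is not a prescribed estimation formula:

```python
# Illustrative sketch of how feature-set estimates tighten as sprints
# complete: observed velocity narrows the range of what will fit before
# the fixed ship date. All numbers are hypothetical.
from statistics import mean, pstdev

completed_velocities = [18, 22, 20, 24]   # story points finished per sprint so far
sprints_remaining = 4                      # before the stabilization sprints
backlog = [                                # candidate features, highest value first
    ("Book a car", 30),
    ("Book a hotel", 25),
    ("Book a flight", 20),
    ("Loyalty points", 15),
    ("Trip sharing", 10),
]

avg = mean(completed_velocities)
spread = pstdev(completed_velocities)

# Pessimistic and optimistic capacity for the remaining sprints.
low_capacity = (avg - spread) * sprints_remaining
high_capacity = (avg + spread) * sprints_remaining


def features_that_fit(capacity):
    """Walk the backlog in priority order, keeping each feature that still fits."""
    chosen, used = [], 0
    for name, points in backlog:
        if used + points <= capacity:
            chosen.append(name)
            used += points
    return chosen


print("Likely to ship:  ", features_that_fit(low_capacity))
print("Possible to ship:", features_that_fit(high_capacity))
```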

What about Distributed Teams?

Today distributed development is the norm.  For reasons of efficiency, cost reduction, skills augmentation, or capacity relief, distributed development and outsourcing are a fact of life.  There’s no free lunch, however – there are costs associated with this approach, and much of that cost is borne in the requirements lifecycle.  Chief among these costs is “communication”.  There are practices and technologies that can mitigate this issue so that the promised benefits of distributed development can be realized.  The approach we’ve looked at here, for example, has been very successful using the following:

  • Concerted effort for more frequent communication (daily scrums, and other scheduled daily calls)
  • Liberal use of requirements simulation via web-meeting technology
  • Direct access to shared requirements models via a central server
  • Automated generation of tests, reviewed in concert with the requirements to provide another perspective on what the product needs to deliver.

Conclusion

“Have you heard that England is changing their traffic system to drive on the right-hand side of the road?  But to ease the transition they’ve decided to phase it in –  they’ll start with trucks”.

A common mistake of development organizations making the shift from waterfall to agile is mandating that teams still produce their big, heavy set of documents and have them reviewed at the same milestones, clinging to these familiar assets like security blankets.  It doesn’t work.  As scary as it might seem, all significant aspects of the approach, documentation included, need to change in unison if the shift is to be successful, and requirements are one of those significant aspects.

However, if you still want that security blanket while gaining some of the benefit of agile, at least generate your requirements specification in an agile manner (iterative, evolutionary, time-boxed, customer-driven, adaptive planning) that includes simulations integrated with and driven by use cases traced to features.  This is one way to reap some agile benefits without making the leap all at once.

Risk is the ‘enemy’ on software projects.  High risk profiles on projects drive uncertainty, render estimates inaccurate, and can upset the best of plans.   One of the great things about agile is that its highly iterative nature continually ‘turns over the rocks’ to expose risk early and often so it can be dealt with.   

On the other hand, one of the great challenges for larger and distributed teams is keeping everyone aligned as course-corrections happen sprint by sprint.   A big part of this is the time and effort it takes to produce and update assets and the delays caused by imperfect and strained communication. The good news is that tools and technologies now exist to produce many of the assets automatically, and to also dramatically improve communication effectiveness, allowing agile to scale.

With the right approach, techniques and technology, distributed agile can be done.  We’ve done it.  So can you.


Tony Higgins is Vice-President  of Product Marketing for Blueprint, the leading provider of requirements definition solutions for the business analyst. Named a “Cool Vendor” in Application Development by leading analyst firm Gartner, and the winner of the Jolt Excellence Award in Design and Modeling, Blueprint aligns business and IT teams by delivering the industry’s leading requirements suite designed specifically for the business analyst. Tony can be reached at [email protected].
