
Author: Tony Higgins

Optimized Use Cases

Use cases have been around for some time now, bursting into the public spotlight at the 1987 OOPSLA conference in Orlando. Since then they have been used on countless projects. In some situations use cases have excelled, producing remarkable results, while in many others they have failed miserably. A seemingly endless stream of textbooks and articles has been written about them, which, especially to those new to the subject, can be overwhelming and confusing.

One missing piece of this equation is that requirements definition as a professional discipline has, until recently, been underserved by the industry and by software tool vendors. Communities, professional organizations, and modern software toolsets are now finally available to requirements authors and to support the requirements lifecycle. Some of these tools bring new and innovative capabilities not imagined before, and as they are applied to existing approaches like use cases, new sets of best practices emerge. This article examines some use case best practices learned from using Blueprint Requirements Center on real projects.

Fundamentals

Less is More

One of the most unfortunate habits of requirements authors is the belief that providing greater quantities of more detailed information is always a good thing. The goal with requirements is to communicate precisely what is needed, and for that information to be completely and accurately understood by its consumers. Unfortunately, many authors feel that the more information they can stuff into the document and the more detail they provide, the better job they are doing.

Example: Pretend for a moment that you just got the job of running the county. Tourists arrive with the goal of discovering the area: its culture, character, history, attractions, and so on. Your job is to help them achieve this goal. In your enthusiasm you start creating brochures detailing the various aspects of the county, and you keep on making brochures, so many that you end up with hundreds covering every mundane little thing in the area. After a month of operation you discover there is no pattern to the brochures taken: one from here, one from there. Something like 90% of the brochures haven't even been touched. Surveys of tourists show no consistency in their perceptions of the county; it is almost as if different tourists had visited entirely different counties. To summarize the results:

  • The goal of communicating the unique culture and character of the area wasn’t met;
  • Visitors all left with very different impressions and understandings of the county;
  • You spent a lot of time, effort, and money providing information and 90% of it wasn’t even used.

This example mirrors the requirements situation in many projects and organizations. "Information overload" is a huge problem. Too often the answer to every issue and misinterpretation in a requirements document is to add more content in order to 'elaborate' and 'clarify'. Simply adding more content is generally counter-productive, often doing more harm than good by introducing conflicting information, inconsistencies, and redundancies. When authoring requirements, always put yourself in the position of the consumer. Strive to communicate what is needed using the smallest volume of content possible. Since even that can be considerable in size, also strive to make the content navigable: structure it so that readers know where to start and how to traverse it in order to best understand what is being communicated. While this takes skill and effort on the part of the requirements author, the positive effects on the software project can be dramatic.

Know Your Boundaries

If I had to pick one aspect of use case models that people should ensure they get right, it would be having a shared understanding among all stakeholders of what the system boundary is. For some applications it is obvious and apparent; for others it can be quite the opposite. Since a use case documents a dialog that spans this boundary, not understanding it well can severely reduce the clarity of your requirements, and the work products of those who rely on the use case model, such as designers and testers, will suffer similarly.

Fantasy vs. Reality: Try Bottom-Up

For those new to the use case approach, it is easy to get lost in unfamiliar terms such as includes, extends, actors, associations, and models. With so much focus on learning to navigate this new and imaginary world, it is easy to lose sight of the real world it is supposed to represent. Developing work habits that regularly move you back and forth between the two helps keep your modeling grounded in reality.

Use cases do of course provide many benefits, not the least of which is to clearly identify a higher purpose for a collection of steps and decisions, answering the question "why are we doing these actions?" So, where a group of steps and decisions together fulfill some goal of an external user, we group them and that grouping becomes a use case. Where a flow crosses from one group to another, the crossing becomes a relationship: if the flow leaves from a decision it is an "extend" relationship; otherwise it is an "include" relationship.
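To make the grouping idea concrete, here is a minimal sketch in Python (not any particular tool's API; the names and the flight-booking example are invented for illustration) of collecting real-world steps into use cases and classifying a cross-group flow as "extend" or "include" depending on whether it leaves from a decision.

```python
# A minimal sketch of the bottom-up idea: group concrete steps and decisions
# into use cases, then classify relationships where flow crosses group boundaries.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    steps: list = field(default_factory=list)   # real-world actions and decisions

@dataclass
class CrossGroupFlow:
    source: UseCase
    target: UseCase
    from_decision: bool                          # did the flow leave via a decision point?

def classify(flow: CrossGroupFlow) -> str:
    # Flow leaving a decision point becomes an "extend" relationship;
    # an unconditional hand-off becomes an "include" relationship.
    return "extend" if flow.from_decision else "include"

book = UseCase("Book a flight", ["select flight", "enter passenger details", "pay"])
waitlist = UseCase("Join waiting list", ["capture preferences", "confirm"])

flow = CrossGroupFlow(source=book, target=waitlist, from_decision=True)
print(classify(flow))   # -> "extend"
```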

There is typically little debate over what the actions and decisions are, since these are simply the real-world work that needs to get done. There is, however, often debate over what the use cases are, since these are abstract groupings of the real-world steps and decisions, and one person may group them differently than the next. When multiple people independently group the steps and decisions and arrive at comparable sets of use cases, that is a good indication of a well-understood problem space.

It’s important when modeling to not lose sight of the real world which is the subject of your model.

Modeling Style

There are many different styles of modeling. Some styles, for example, make use of decomposition, where abstract actions or steps are decomposed into finer detail contained in included use cases. There is much debate over whether decomposition with use cases is a good idea; it is simply one approach. One advantage it offers is to let readers adjust the level of detail at which they work: stay at a high level, or follow the path of decomposition to drill into the details of a specific action. One objection to its use is that it can steer the developers who work from the use cases toward implementations that are not object-oriented in nature.

Think Like a Tester

Tests are very similar to requirements. Both are really just descriptions of "what the system is supposed to do," so if they describe the same piece of software they need to be completely consistent. Some, in fact, hold the view that tests are simply a more detailed form of the requirements. Looking at the requirements from the perspective of a tester can therefore be very valuable for detecting issues early.

In particular, I've found the tester's perspective helps a great deal in defining the system boundary. Testing is all about providing stimulus to some "thing" and then observing that thing's behavior in response, so the borders of where the thing begins and ends need to be clear, and testers are great at thinking this way. A second area where testers are a great help in requirements definition is thinking of exception cases. Stakeholders and business analysts tend to be very good at identifying what the system should do when things go right, but experienced testers excel at thinking of all the ways things can go wrong. Having this knowledge up-front means it can be accounted for and influence the requirements while they are being defined, rather than being an afterthought late in the cycle.

In addition to the tester as an individual, modern requirements toolsets that can automatically generate tests provide tremendous value as well. When reviewing requirements, and even when authoring them, these tools allow the corresponding tests to be generated instantly and used as another litmus test of requirements quality. Often inconsistencies and errors can be spotted in the generated tests that were missed when reviewing the requirements alone.
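As a rough illustration of what derived tests can look like, the sketch below (a hypothetical structure with ATM-style sample data, not the output of any specific tool) produces one test scenario for the main flow of a use case plus one for each exception flow branching from a step.

```python
# A minimal sketch of deriving test scenarios from a use case: one scenario for
# the main flow, plus one per exception flow branching from a given step.
main_flow = ["enter card", "enter PIN", "choose amount", "dispense cash"]
exception_flows = {
    "enter PIN": ["PIN invalid", "prompt retry"],
    "choose amount": ["insufficient funds", "show error"],
}

def generate_scenarios(main, exceptions):
    scenarios = [("main success scenario", list(main))]
    for i, step in enumerate(main):
        if step in exceptions:
            # Follow the main flow up to the step, then take the exception branch.
            scenarios.append((f"exception at '{step}'", main[: i + 1] + exceptions[step]))
    return scenarios

for name, steps in generate_scenarios(main_flow, exception_flows):
    print(name, "->", " / ".join(steps))
```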

Be Expressive

Use cases are our tool for communicating what is to be built, so you need to be as expressive as possible with them. No matter how good you are with words, written text can only go so far. One of the easiest yet most effective techniques is to mock up potential user interfaces for the steps of the use case; in other words, render the more significant aspects of the user interface as it evolves through the use case. Most often these will be simple sketches, but where appropriate they can be higher-fidelity visualizations. Together this set of mockups forms a storyboard for the various scenarios. If the project is enhancing an existing application, screenshots of the existing application can serve as the starting point, annotated, marked up, or extended with new screen controls as needed. Shifting the focus of reviews onto storyboards, as opposed to the text of the use case flows, can make them significantly more effective. As with the test generation mentioned earlier, there are now tools that support this more visual approach to defining use cases.

Another way to be more expressive is with data. Where the use case affects or transforms data in the application, or where data influences the behavior of the use case, modern requirements tools allow you to encode these calculations and updates instead of describing them in text. During a simulation session, not only are the visualizations shown, but samples of real data can be entered, displayed, and calculated much as in the real application. Together this brings the story to life, as opposed to forcing reviewers to imagine it from textual use cases.
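For example, a calculation that a use-case step depends on can be captured as an executable rule rather than a sentence. The sketch below is a minimal, hypothetical illustration (the booking rule and all figures are invented); during a simulated walkthrough, reviewers could enter sample values and see the computed result instead of imagining it.

```python
# A minimal sketch of encoding a calculation behind a use-case step so a
# simulation can compute with sample data instead of describing the rule in prose.
def quote_total(nights: int, nightly_rate: float, weekend_nights: int) -> float:
    # Illustrative rule: weekend nights get a 10% discount.
    weekday_cost = (nights - weekend_nights) * nightly_rate
    weekend_cost = weekend_nights * nightly_rate * 0.9
    return round(weekday_cost + weekend_cost, 2)

# During a simulated walkthrough of the "review booking" step, reviewers enter
# sample values and see the result the real application would show.
print(quote_total(nights=5, nightly_rate=120.0, weekend_nights=2))  # 576.0
```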

Another example of increasing the expressiveness of use cases is again provided by modern automation. A typical software project has a large amount of reference material related to the requirements: standards, guidelines, regulations, vision documents, operational documents, and more. Tools today can automatically open this material, navigate to the relevant content, and highlight it when the references are called upon. This brings relevant content directly into the discussion, instead of leaving it buried off to the side where important aspects could be missed.

The Bigger Picture

Where Do They Come From?

Let's say I came up to you and said, "Hey, please make me three or four detailed use cases. I'll be back after lunch to pick them up, and I expect them to be correct!" Your chances of delivering what I need are pretty much zero. You need to find out what my goals are for the application, what it is to be used for, what major decisions it must support, whether it enhances something that exists or is entirely new, whether there are constraints, and whether there are special needs such as security or safety. In other words, you have work to do before you get to the use cases. Typically this work results in textual, hierarchical lists of goals, rules, and other categorizations of need, and often sketches of the business processes in which the application is to play a part.

So I've Made Some Use Cases. Now What?

After creating use cases, you need them to be reviewed with the client for whom the application is being built, and also with the people who will build and test it. Reviews of requirements are one of the most crucial control points in the software lifecycle: they are an opportunity to point the project in the right direction, and to do so early. Errors missed in reviews are simply errors whose discovery has been delayed; they will eventually be found, just later, when they are more expensive to fix.

The effectiveness of a review depends on how well the requirements can be communicated, and the more expressive the requirements, the more clearly they will be communicated. Another major way to be more expressive is to use simulation during reviews. Modern requirements definition toolsets support simulation, where the requirements are "brought to life" to give an impression of how the future application, if built to these requirements, will look, feel, and behave. After all, most people reading requirements are in fact performing a simulation in their minds, trying to visualize the future application in order to decide whether the requirements document is acceptable. The problem is that in a mental simulation things are missed, not all the interactions can be accounted for, and, perhaps worst of all, everyone ends up with a somewhat different vision depending on how they interpret the written text. Automated simulation, projected for all to see, has none of these issues and provides all the benefits: literally a common vision.

Not only is communication vital during reviews to get the requirements right, it is also vital for those who will build and test the application to understand what they need to do. Any miscommunication here means people will go off in the wrong direction with their work, and simulation is a very good tool for preventing this as well. Once they do understand, however, they still need access to the use cases and associated requirements information to do their work, since their tasks depend on it. These are areas where modern tools can really make a difference, in a number of ways.

First, tools today can automatically generate tests directly from use cases. This is a huge time-saver: not only is the work done automatically, but the resulting tests are consistent with the requirements. More advanced tools even let you filter the generated tests to focus only on those that are high-risk or most business-critical.

Second, requirements definition tools can auto-populate the tools used by designers and testers with the requirements and tests produced in the requirements definition tool. This avoids the transcription errors and oversights that often occur when a document is delivered and practitioners must manually enter the relevant information into their own toolsets.

Third, requirements definition tools can automatically generate the documentation you need, whether because your process calls for it or to comply with corporate standards. Template-driven document generation lets you define the format and content of the documents entirely. More advanced tools can even produce redlined documents showing changes since a previous version, such as the last review session.

Conclusion

There have been significant gains in requirements definition tools in recent years. This perhaps shouldn't be surprising, given that this area, arguably one of the most crucial in determining the success of software projects, was neglected by software tool vendors for decades. These advancements, coupled with the best practices learned by applying this new technology on real and complex projects, have the potential to clear the log-jam of software project failures that has plagued the industry for years.



Tony Higgins is Vice-President of Product Marketing for Blueprint, the leading provider of requirements definition solutions for the business analyst. Named a “Cool Vendor” in Application Development by leading analyst firm Gartner, and the winner of the Jolt Excellence Award in Design and Modeling, Blueprint aligns business and IT teams by delivering the industry’s leading requirements suite designed specifically for the business analyst. Tony can be reached at [email protected].

Authoring Requirements in an Agile World

Equipping and Empowering the Modern BA

The principles of agile development were proven before agile, as a defined approach, came into vogue. Agile principles were already being practiced to varying degrees in most organizations as a natural reaction to the issues surrounding a rigid waterfall approach.

Every individual task on a software project carries some degree of risk, and these risks have many sources: a third-party component may not function as advertised, a module your work depends on may not be available on time, the inputs to the task may be ambiguous so that you make an incorrect assumption when 'filling in the blanks', and so on. All the risks in the individual tasks contribute to the overall project risk and, quite simply, a task's risk isn't retired until the task is done. Put another way, the risk associated with building something is really only retired when the thing is built. That is the principle behind iterative and incremental development.

So instead of leaving the first unveiling of a new or enhanced application until near the end of the project, break the work down so that functional pieces can be built incrementally. While this doesn't always mean the portions are complete and shippable, stakeholders can see them working and try them out. This offers several advantages: it retires risk, as mentioned before, and it often exposes other hidden risks, which will surface at some point anyway, and the earlier the better. Exposing hidden risk and retiring risk early makes the estimating process more accurate, which increases the probability of delivering applications of value that address the business need in terms of capability as well as budget and schedule.

While agile is becoming mainstream on small and mid-sized projects, challenges remain elsewhere, such as how to manage this approach on larger projects and in distributed development. Another challenge for many is how to apply a requirements lifecycle to agile projects. Many agile principles, from "just enough and no more" to "begin development before all requirements are complete" to "test first", can seem counter-intuitive. And what about non-functional requirements? What about testing independence? How can we cost something if we don't know what the requirements are?

This article describes some ways to handle these challenges. It is based on an example of real, ongoing, and very successful product development that uses agile with a globally distributed team, and it describes one set of techniques that is known to work.

Process Overview

There are many “flavors” of agile, but at a high level they all essentially adhere to the following basic principles:

  • Iterative: Both requirements and software are developed in small iterations.
  • Evolutionary: Incremental evolution of requirements and software; just enough "requirements detail".
  • Time-Boxed: Fixed duration for requirements and software build iterations.
  • Customer Driven: We actively engage our customers in feature prioritization, and we embrace change to requirements and software as we build them.
  • Adaptive Planning: We expect planning to be wrong; requirements details as well as macro-level scope are expected to change as we progress through the release.

For the purposes of this article, our example uses a Scrum-based agile process [1]. In this approach the iterations (sprints) are two weeks in duration. A sprint is a complete development cycle whose goal is to have built some demonstrable portion of the end application. Note that while the initial sprints do involve building a portion of the application, this is often infrastructure-level software needed to support features developed in later sprints, which means there may not be much for stakeholders to "see".

Each organization needs to determine the sprint duration that is optimal for it. A sprint needs to be long enough to actually build something, but not so long that people become defocused and go off track (thereby wasting time and effort). We found two weeks to be optimal for our environment.

Key during the first part of the process is to determine and agree on project scope, also known as the "release backlog". Determining the release backlog can itself be a series of sprints in which the product owner and other stakeholders iterate through the relative priority or value of features, along with high-level costing of those features, to arrive at the release backlog.

At the other end of the project, development of new code doesn't extend to the last day of the last sprint. We typically reserve the last few sprints, depending on the magnitude of the release, for stabilization; in other words, in the last few sprints only bug fixing is done and no net new features are developed. This goes against the agile purist's approach, in which each sprint should, in theory, deliver production-ready code. Truly achieving that, however, requires significant testing and build automation that most organizations don't have in place. It is a good goal to strive toward, but don't expect to achieve it right away.

Requirements Definition in this Process

There are several ways you could perform requirements definition in an agile process, but again our goal is to present an example that has been tried and is known to work. This example begins with the assumption that you already have a good sense of the "business need", either inherently in the case of a small and cohesive group, or by having modeled the business processes and communicated them in a way that all understand. So we begin at the level of defining requirements for the application.

Requirements at the Beginning

Begin with a high-level list of features, each prioritized by product management or business analysts (depending on your organization). These features are then typically decomposed to further elaborate and provide detail, and are organized by folder groupings and by type. Where needed to communicate better, create low-fidelity mockups or sketches of user interfaces (or portions of them), and even high-level use cases or abstract scenarios to express user goals. We sometimes do this and sometimes not, depending on the nature of what is being expressed and the audience we are communicating with. For example, if we are communicating a fairly simple concept (how to select a flight) and our audience is familiar with the problem space (they have built flight reservation applications before), then clear textual statements may be "just enough" to meet our goals at this stage. Those goals are to establish rough estimates (variance of 50-100%) and, based on these and the priorities, to agree on the initial scope of the release (which features are in and which are out).

Once reviewed, this list of features becomes the release backlog.  The features are then assigned to the first several sprints based on priority and development considerations.
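A rough sketch of what that release backlog might look like in data form follows; the feature names, priorities, estimates, and sprint capacity are all invented for illustration, and a real assignment would also weigh the development considerations mentioned above.

```python
# A minimal sketch of a release backlog: prioritized features with rough
# estimates, assigned to the first sprints in priority order.
features = [
    {"name": "Search flights",  "priority": 1, "estimate_days": 8},
    {"name": "Book a flight",   "priority": 2, "estimate_days": 9},
    {"name": "Manage profile",  "priority": 3, "estimate_days": 5},
    {"name": "Email itinerary", "priority": 4, "estimate_days": 3},
]

sprint_capacity_days = 10   # assumed capacity of a two-week sprint

def assign_to_sprints(backlog, capacity):
    sprints, current, used = [], [], 0
    for feature in sorted(backlog, key=lambda f: f["priority"]):
        if used + feature["estimate_days"] > capacity and current:
            sprints.append(current)          # close the current sprint
            current, used = [], 0
        current.append(feature["name"])
        used += feature["estimate_days"]
    if current:
        sprints.append(current)
    return sprints

for n, names in enumerate(assign_to_sprints(features, sprint_capacity_days), start=1):
    print(f"Sprint {n}: {names}")
```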

[Figure: Example High-Level Features with Properties]

Requirements During the Sprint

With respect to the requirements, the principle of "just enough" is paramount. If the need has been expressed well enough to communicate it and have it built as intended, then you are done; going further provides no additional value. This means the level of detail will vary across the breadth of the product: for some parts a high-to-medium level of detail may be "just enough", while for other, more complex areas a very fine level of detail may be needed.

In each sprint, tasks of every discipline are taking place: requirements, design, coding, integration, and testing tasks are all performed concurrently for different aspects of the system. The requirements defined in one sprint drive design and coding in a subsequent sprint. The net effect is that all of these disciplines are active in every sprint of the lifecycle, but the relative emphasis changes depending on the stage of the project; requirements tasks, for example, are performed in all sprints but are emphasized more in the earlier ones.

In each sprint, the high-level features are elaborated into greater levels of detail. This more detailed expression of the requirements usually begins with usage scenarios/stories and/or visuals, and it is expressed in the form of a model. The models can emphasize user interface, use cases, scenarios, business rules, or combinations of these, depending on the nature of what is being expressed. Sometimes they are created collaboratively, but more often in our experience one person creates an initial version and then holds a review with others for feedback. In our case it is typically the product managers and/or business analysts who create these models, and usually one to three reviews are held with the developers, testers, and other stakeholders. The review serves multiple purposes, including:

  • To facilitate knowledge transfer to all stakeholders including architects, UE designers, developers, testers, and executive sponsors on what is needed
  • To allow the architects, UE Designers and developers to assess feasibility
  • To determine if there is sufficient detail in the requirements to allow development to proceed

With appropriate technology, tests are automatically generated from the requirements, producing tests that are 100% consistent with the requirements and enabling immediate testing of the code developed during sprints.

Continuous and Adaptive Planning

With this approach, planning is continuous and adaptive throughout the lifecycle, allowing resources to be redirected based on discoveries that come to light during each sprint. This ability to course-correct in mid-flight is what gives projects their "agility". At the end of each sprint we take stock of what was achieved and record progress actuals. The work of the next sprint is adjusted accordingly, based on this as well as on testing results, feedback from reviews of that sprint's build, any new risks or issues that surfaced (or old ones that were retired), and any external changes in business conditions. Estimates and priorities are adjusted, and any changes to release scope and sprint plans are made. In general we try not to make major mid-flight corrections during a sprint, which is one of the reasons we like two-week sprints: if sprints were, say, four weeks, we would lose agility, and a two-week sprint is also easier and more accurate to estimate than a four-week one.

[Figure: Example Story with Tasks and Estimates]

With respect to the requirements, for the features assigned to the sprint, along with any high-level requirements models, development creates high-level goals for each feature and estimates them. The goals express which aspects of the feature the team will attempt to build during that sprint, recognizing that one sprint is often not enough time to implement an entire feature. The feature and its high-level goals become the content of the "story". Once the story is defined, the developer details and estimates the tasks to be done for that story over the next two weeks (the sprint) and proceeds with development, tracking daily progress against these tasks in an agile project management tool and covering issues in the daily scrum.

What about the Non-functional Requirements?

The various agile approaches have evolved several techniques to express system functionality: user stories, use cases, or usage scenarios that represent "observable system behaviors and artifacts that deliver value to users", such as screens, reports, and rules. These functional requirements express "what" the system is to do; examples include "generate a stock-level report", "make a car reservation", "register for a webinar", or "withdraw cash".

Associated with the functionality of a system are its "qualities". These express "how well" the system is supposed to do what it does: how fast, how reliably, how usable, and so on. Sometimes these non-functional requirements are associated with particular functional requirements; other times they apply to whole portions of the system or to the entire system. So how do these very important requirements get accounted for in our agile process?

They are expressed at a high level in the form of textual statements, for example: "Any booking transaction shall be able to be completed by a user (as defined in section a) in less than three minutes, 95% of the time".

As functional requirements are decomposed and derived, any associated non-functional requirements should similarly be decomposed, derived, and associated with the lower levels. For example, the performance requirement above is associated with all the "booking transaction" functional requirements (book a car, book a flight, book a hotel). If those functional requirements are decomposed into the lower-level requirements "select a car", "choose rental options", and "check out", then the non-functional requirement may similarly be decomposed into requirements for selecting a car in less than 30 seconds, choosing options in less than one minute, and checking out in less than 1.5 minutes.
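As a quick sanity check on such a decomposition, the child time budgets should roll up to no more than the parent budget. Here is that arithmetic, using the figures above, as a small Python snippet:

```python
# A small worked check of the decomposition above: the child time budgets
# should roll up to (or under) the parent three-minute booking requirement.
parent_budget_seconds = 3 * 60          # "complete a booking in under three minutes"

child_budgets_seconds = {
    "select a car": 30,
    "choose rental options": 60,
    "check-out": 90,                     # 1.5 minutes
}

total = sum(child_budgets_seconds.values())
print(total, "seconds allocated of", parent_budget_seconds)   # 180 of 180
assert total <= parent_budget_seconds, "decomposed budgets exceed the parent requirement"
```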

During review sessions the functional requirements take center stage; however, any relevant non-functional requirements need to be presented and reviewed as well, and traceability is usually relied on to identify them. Any non-functional requirements relevant to the functional requirements of a story need to be expressed as part of the story, so the developer can take them into account during development.

QA needs to create tests to validate all requirements, including the non-functional ones. Sometimes, before a feature has been completely implemented, non-functional requirements can be partially tested or tested for trending, but they usually cannot be fully tested until the feature is completely implemented (which can take several sprints).

What about Testing?

The high degree of concurrency in agile processes means that testing is performed in every sprint of the lifecycle. This can be a considerable change from traditional approaches and offers several benefits. First, it tends to foster a much more collaborative environment, since the testers are involved early. It also means that defects which would not have been caught until later in the lifecycle are caught early, when they can be fixed much more cheaply.

In agile, product owners play a very big role in the testing process, and they do so throughout the development lifecycle. Whereas many traditional approaches rely on requirements specifications as "proxies" for the product owners, agile places much more reliance directly on the product owner, effectively bypassing many of the issues that arise from imperfect specifications. In addition to the product owners, developers also test. Test-driven development is a prominent technique in agile approaches, where tests are written up-front and serve to guide the application coding as well as to perform automated testing, which helps with code stability. To augment test-driven development, which is primarily focused on the code-level testing done by developers, new technologies that can auto-generate functional tests from functional requirements enable a QA team to conduct functional testing based on test cases that are never "out of sync" with the requirements specification. This lets the QA team test on a continuous basis, since executable code and test cases are available throughout the lifecycle. In our example, all three are employed on a continuous basis: the product owner, the development staff, and independent QA.
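As a minimal illustration of the "test first" idea (the transfer function and its rules are invented for this example, not drawn from the project described here), the tests below would be written before the code and would fail until an implementation satisfies them:

```python
# A minimal "test first" sketch: the tests are written before the code they
# exercise and fail until transfer() is implemented to satisfy them.
def test_transfer_reduces_balance():
    assert transfer(100.0, 30.0) == 70.0

def test_transfer_rejects_overdraft():
    try:
        transfer(50.0, 80.0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for overdraft")

# Implementation written afterwards, driven by the tests above.
def transfer(balance: float, amount: float) -> float:
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

if __name__ == "__main__":
    test_transfer_reduces_balance()
    test_transfer_rejects_overdraft()
    print("all tests pass")
```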

The requirements we develop in our example are a decomposition of high-level text statements, augmented by more detailed requirements models that give a rich expression of what is to be built. Requirements reviews are based on simulations of these models, and they are incredibly valuable for a number of reasons. First, just as agile provides huge benefit by producing working software each sprint that stakeholders can see and interact with, simulation lets stakeholders see and interact with 'virtual' working software even more frequently. Second, people often build prototypes today to accomplish much the same thing. The difference is that simulation engines built for requirements definition are based on use cases or scenarios and therefore guide stakeholders through how the future application will actually be used, providing structure and purpose to the review sessions. Prototypes, including simple interactive user-interface mockups, are simply partial systems that 'exist' and provide no guidance as to how they are intended to be used; stakeholders have to discover this on their own and never know whether they are correct or whether something has been missed.

It is important to note that the principle of "just enough" still applies when producing these models; we rely on the requirements review sessions held with designers and developers to determine when it is "enough". This approach produces very high-quality requirements, and it is from these requirements that the tests are automatically generated. In fact, such thorough testing at such a rapid pace would likely not be possible without automatic test generation.

Although we strive to have shippable code at the end of each sprint, this goal is not always achieved, and we may need to use the last sprint or two to stabilize the code. Since testing has been taking place continuously before these final sprints, the application is already of considerably high quality when it enters the stabilization phase, meaning the risk is manageable and the ship date is rarely missed.

What about Estimating?

Remember that in agile it is typically 'time' that is fixed in the triad of time, features, and quality. In our approach, with continuous testing and the final sprints reserved for stabilization, quality tends to be fairly well known as well. This leaves the features as the variable, so what we are actually estimating is the feature set that will be shipped.

As always, the accuracy of estimates is a function of several factors, but I'm going to focus on just three:

  • The quality of the information you have to base estimates on,
  • The inherent risk in what you’re estimating, and
  • The availability of representative historical examples that you can draw from.

In our approach, estimates are made throughout the development cycle, beginning in the initial scoping sprints. As mentioned earlier, once the list of candidate features is known and expressed at a high (scoping) level, the features are estimated. Naturally, at this point the estimates are at their most inaccurate for the project lifecycle, since the requirements have not been decomposed to a detailed level (quality of information), which means significant risk remains in the work to be done. Similar projects done in the past may mitigate some of this risk and help increase the accuracy of the estimates (e.g., we've done ten projects just like this and they were fairly consistent in their results).

The initial estimates are key inputs to the scoping and sprint-planning processes. As the project proceeds, risks are exposed and dealt with in each sprint, requirements are decomposed to finer levels of detail, and estimates naturally become more accurate. As you might guess, estimation is done toward the end of each sprint and feeds the planning of future sprints.
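One common way (a generic sketch, not necessarily how the project described here does its planning) to fold sprint actuals back into the forecast is to compute an observed velocity and compare it against the remaining, prioritized estimates:

```python
# A minimal sketch of the re-estimation loop: observed velocity (completed
# estimate-units per sprint) is used to forecast how many of the remaining
# prioritized features fit in the sprints that are left. Numbers are invented.
completed_per_sprint = [11, 9, 13]                 # actuals recorded at sprint end
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

remaining_estimates = [8, 5, 13, 3, 8]             # remaining backlog, in priority order
sprints_left = 3
capacity = velocity * sprints_left

shipped, used = [], 0.0
for estimate in remaining_estimates:
    if used + estimate > capacity:
        break                                       # everything below this line is at risk
    shipped.append(estimate)
    used += estimate

print(f"velocity ~{velocity:.1f}/sprint, capacity {capacity:.1f}: "
      f"{len(shipped)} of {len(remaining_estimates)} remaining features fit")
```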

What about Distributed Teams?

Today distributed development is the norm. For reasons of efficiency, cost reduction, skills augmentation, or capacity relief, distributed development and outsourcing are a fact of life. There is no free lunch, however: this approach carries costs, much of which is borne in the requirements lifecycle, and chief among them is "communication". There are practices and technologies that can mitigate this issue so that the promised benefits of distributed development can be realized. The approach we've looked at here, for example, uses the following and has been very successful:

  • Concerted effort for more frequent communication (daily scrums, and other scheduled daily calls)
  • Liberal use of requirements simulation via web-meeting technology
  • Direct access to shared requirements models via a central server
  • Automated generation of tests, which are reviewed in concert with the requirements to provide another perspective on what the product needs to deliver.

Conclusion

“Have you heard that England is changing their traffic system to drive on the right-hand side of the road?  But to ease the transition they’ve decided to phase it in –  they’ll start with trucks”.

A common mistake of development organizations making the shift from waterfall to agile is to mandate that they still produce their big, heavy set of documents and have them reviewed at the same milestones, clinging to these familiar assets like security blankets. It doesn't work. As scary as it might seem, all significant aspects of the approach, documentation included, need to change in unison if the shift is to be successful, and requirements are one of those significant aspects.

However, if you still want that security blanket while gaining some of the benefit of agile, at least generate your requirements specification in an agile manner (iterative, evolutionary, time-boxed, customer-driven, with adaptive planning) that includes simulations integrated with and driven by use cases traced to features. This is one way to reap some agile benefits without making the leap all at once.

Risk is the 'enemy' on software projects. A high risk profile drives uncertainty, renders estimates inaccurate, and can upset the best of plans. One of the great things about agile is that its highly iterative nature continually 'turns over the rocks' to expose risk early and often, so it can be dealt with.

On the other hand, one of the great challenges for larger and distributed teams is keeping everyone aligned as course corrections happen sprint by sprint. A big part of this is the time and effort it takes to produce and update assets, and the delays caused by imperfect and strained communication. The good news is that tools and technologies now exist to produce many of these assets automatically and to dramatically improve communication effectiveness, allowing agile to scale.

With the right approach, techniques and technology, distributed agile can be done.  We’ve done it.  So can you.


Tony Higgins is Vice-President of Product Marketing for Blueprint, the leading provider of requirements definition solutions for the business analyst. Named a "Cool Vendor" in Application Development by leading analyst firm Gartner, and the winner of the Jolt Excellence Award in Design and Modeling, Blueprint aligns business and IT teams by delivering the industry's leading requirements suite designed specifically for the business analyst. Tony can be reached at [email protected].

References

1. The Scrum Alliance, http://www.scrumalliance.org/
2. Martin Crisp, "Approaches to Defining Requirements within Agile Teams", Search Software Quality, retrieved 21 Feb 2009, http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1310960,00.html
3. Scott Ambler, "Beyond Functional Requirements on Agile Projects", Dr. Dobb's Portal, retrieved 22 Feb 2009, www.ddj.com/architect/196603549
4. Scott Ambler, "Agile Requirements Modeling", Agile Modeling, retrieved 22 Feb 2009, http://www.agilemodeling.com/essays/agileRequirements.htm
5. "10 Key Principles of Agile Software Development", All About Agile, retrieved 22 Feb 2009, http://www.agile-software-development.com/2007/02/10-things-you-need-to-know-about-agile.html
6. Dean Leffingwell, "Agile Requirements Methods", The Rational Edge, July 2002
7. Francisco A.C. Pinheiro, "Requirements Honesty", Universidade de Brasilia
8. Kirstin Kohler & Barbara Paech, "Requirements Documents that Win the Race", Fraunhofer IESE, Germany
9. James E. Tomayko, "Engineering of Unstable Requirements using Agile Methods", Carnegie Mellon University
10. Paul Grunbacher & Christian Hofer, "Complementing XP with Requirements Negotiation", Johannes Kepler University, Austria
11. Michael Lee, "Just in Time Requirements Analysis - The Engine that Drives the Planning Game", Kuvera Enterprise Solutions Inc., Boulder, CO
