
Be Expressive – Improve the Clarity of Your Requirements


Make Sure Everyone’s on the Same Page

If you were somehow able to hold a requirement in your hand and ask, “Why do I need this? What is its purpose?” I would offer that you’re holding an agreement. A requirement is how two (or more) people have chosen to express their agreement regarding what is needed from a software application. The assumption is that all parties have the same understanding or interpretation of these written words. Quite often this assumption proves false – different stakeholders in fact have different ideas in their minds regarding these same words, and confidently think that others share their vision as they warmly smile and shake hands over the agreement. If only we could have a “Vulcan mind-meld” à la Star Trek, a lot of our requirements issues would be gone.

These issues are possible with each and every written requirement. When you multiply this by hundreds, or sometimes thousands, of requirements and then consider how the requirements must combine to paint a picture of the whole solution, it’s quite understandable why there can be so many surprises in the latter parts of a software project.

The fundamental problem is that traditional legal-type textual requirements are simply not very expressive, which tends to leave a lot open to interpretation. Very often, requirements authors attempt to compensate for this lack of clarity by adding even more legal-style text to elaborate and constrain what they’re trying to communicate. Sometimes this can lessen the range of interpretation as desired; but just as often, it can actually do the opposite, as these new words themselves may contain ambiguities, inconsistencies, or redundancies. These additional words are added to areas where misinterpretation has been detected. But what about all the hidden areas of misinterpretation that have yet to be detected? They simply remain hidden, lying in wait to rear their heads later on in the development cycle.

Stakeholders directly involved in defining the requirements through elicitation, authoring, or review, have the benefit of sometimes hours of discussion and debate that go into formulating the requirements. This rudimentary form of Vulcan Mind-Meld is really quite valuable in avoiding misinterpretation of the written requirement. But what of the requirements consumers who did not have a seat at the definition table, and didn’t have the benefit of all this undocumented communication? It is crucial that developers, testers, outsourcers, sub-contractors, and others who consume requirements understand what they are being asked to build, test and manage. The risk of requirements miscommunication among these key individuals is magnified, since they’re even more likely to misinterpret the original intent.

So in addition to serving as an agreement between parties, the requirements have another key purpose: to communicate what is needed completely, precisely, and unambiguously to others who may not have been intimately involved in their definition.

Have you ever noticed what someone does when they’re having difficulty communicating a thought or concept? They become animated. They start using their hands. They make sketches on the whiteboard. They start pointing at the computer screen and say things like “so imagine this”. In short, they reach out for other, more expressive forms of communication. As they do this, they often expose other areas of misconception or misunderstanding. One by one they tackle these new areas, again with various forms of expression. The same is true with requirements. Increasing the expressiveness of your requirements actually exposes additional areas of miscommunication you never knew were there – in other words, areas where you need to be even more expressive. This effect can be very powerful, and can help to raise the quality of requirements dramatically.

Some go so far as to replace textual statements altogether in favor of other more expressive forms of specification. Personally, I’ve found a combination of the two offers the best solution, and this is the approach I’ll describe.

Setting Context

Software requirements express what is required of the application in order to fulfill some higher-level business need. Requirements could describe how to provide customers with an online flight booking capability to fulfill the business need of increased sales for the airline, or how to withdraw cash to fulfill my business need of going to a movie. Regardless, this business need must be expressed along with the associated business process so that this world in which the software application will function is:

  1. Understood by all team members, and
  2. Used as a reference for the project to ensure it stays focused on addressing the business need.

This business context can be expressed as a combination of textual statements and a set of business process diagrams. The textual statements can be categorized according to an appropriate taxonomy (e.g. business need, business goal, business objective, business rule, business requirement, etc.). There are various taxonomies in use, and I won’t use this forum to debate their merits. Business processes are best expressed using business process diagrams. There are numerous notations in use for this, from full-blown Business Process Modeling Notation (BPMN), with its plethora of symbols, to very lightweight notations suitable for sketching processes. Since the purpose of the diagrams is simply to provide context for the software requirements – we’re not trying to re-engineer the business – I generally opt for the simpler notations. All I look for is the ability to support:

  1. Roles. In other words, who is performing the activity? This is commonly represented using swimlanes.
  2. Activities. The tasks performed by the role.
  3. Decisions. The ability to identify the decisions made by the roles and to specify associated conditions.
  4. Start & End. Clear notation of where the process begins and ends, with ability to specify any pre or post conditions.
  5. Nesting. The ability to abstract or “nest” portions of a process and denote it with a special symbol.
  6. Free Form Notes. The ability to add unrestricted notes (i.e. text of any size and position).

I’ve found with these essentials, I’m able to illustrate most processes, and in the odd case where I can’t, I simply augment with notes.

The business need and the process, as you can imagine, are related. In areas where the relationship between the two is noteworthy or illustrative, it makes sense to maintain it explicitly as a traceability link using whatever requirements tool or platform you have. Conceptually, the context may therefore appear as in Figure 1.


Figure 1 Business Context

Generally, I ensure the business needs are exhaustive in terms of breadth – meaning if there’s a need, I want it to be specified. On the other hand, I selectively apply business process diagrams in areas where the process is business-critical, high-risk, or complex. This is admittedly a fairly simple approach, but I find it adequate in a surprisingly large number of situations. Of course, it can be extended where necessary.

Defining Expressive Requirements

As with the business context, I typically begin with textual statements describing application requirements. I mostly use the categorization scheme and traceability espoused by the Rational Unified Process (RUP), based on the FURPS model that originated at HP, grouping requirements into five main categories: Functional, Usability, Reliability, Performance, and Supportability. Each of these has numerous sub-categories, which some people use and others don’t. Another popular approach is the one promoted by Karl Wiegers in his book Software Requirements. This approach arranges User Requirements, Business Rules, Quality Attributes, System Requirements, Functional Requirements, External Interfaces, and Constraints into a hierarchical traceability strategy. As with the categorization of business needs, there are numerous taxonomies for categorizing software requirements, along with many strategies for tracing the relationships among them. You need to decide on one that’s appropriate for you. Regardless, this is simply a way to organize your textual statements, and does little to improve their expressiveness.
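To make this concrete, a categorized requirement with its trace links can be thought of as a simple record. The sketch below is a hypothetical Python structure, not the format of any particular tool; the category names are the FURPS set mentioned above, and the IDs are invented for illustration:

```python
from dataclasses import dataclass, field

# FURPS-style categories from the RUP scheme (sub-categories omitted).
CATEGORIES = {"Functional", "Usability", "Reliability", "Performance", "Supportability"}

@dataclass
class Requirement:
    req_id: str
    text: str
    category: str
    traces_to: list = field(default_factory=list)  # IDs of business needs this addresses

    def __post_init__(self):
        # Reject statements that don't fit the chosen taxonomy.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

req = Requirement("FR-1", "The system shall allow the user to book a flight",
                  "Functional", traces_to=["BN-1"])
```

Whatever taxonomy you pick, the point is the same: every statement gets exactly one category and an explicit trace, which is what makes the later traceability analysis possible.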

Example legal-style text requirements list:

  • The system shall allow the user to book a flight.
  • The system shall allow the user to choose the departure and destination airports.
  • The system shall allow the user to specify multiple passengers.
  • The system shall allow the user to specify for each passenger whether they are an adult, senior, or child.
  • The system shall allow the user to select either a one-way flight or a round-trip.

One of the most popular ways to improve expressiveness is to think like a user and imagine how they’d use an application. User stories are one way to do this. Another is use cases, which I’ll discuss briefly here.

Instead of a list of capabilities or features to be provided by the application, use cases describe the various scenarios of how the application will be used to accomplish some goal. As an example, imagine an online airline application for booking flights and other common traveler functions. Consider a couple of scenarios:

  1. Book a round-trip from Dallas to Chicago for two adults
  2. Book a one-way flight from Toronto to Atlanta for one adult, and one senior

These are different scenarios and will likely entail some different requirements. For example, in the first scenario, we actually have to handle two flights instead of one, because it’s a round-trip. In the second, there may be special requirements for seniors and perhaps senior discounts involved. Furthermore, the second example is an international flight, so there are likely some special requirements around that as well. However, you can look at these two scenarios, and probably think up several more, which are all variants of Booking a Flight, the goal of the user. To describe these various scenarios, we would create a use case called Book a Flight, and within it stipulate in detail how flights get booked. We do this in a dialog style such as …

Step 1. User chooses to book a flight

Step 2. System allows user to choose one-way or round-trip

Step 3. User chooses round-trip

Step 4. System allows user to specify departure location and destination

Step 5. User chooses departure location and destination

Step 6. System allows user to specify number and type of travelers as adult, child, or senior

Step 7. Etc.

Figure 2 Sample Use Case Steps

This is quite a different style from the legal-style text list shown earlier. This dialog describes the same functionality, but from an alternative perspective and style. Just having this alternative vantage point can itself shed new light on the requirements, help communicate what is truly needed, and help expose hidden issues. I would further argue that this form is more expressive than the legal-style text list. First, it is from the perspective of the user, which will allow existing and future end-users to consume it more readily. Further, notice that each statement makes sense only in the context of those around it. If you move Step 4 to the end of the list, it will make no sense. In this style, it is quite easy to spot when things are missing or out of place, thereby ensuring completeness and integrity of the requirements.
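The order-dependence is the key structural property here. As a minimal sketch (a hypothetical structure, not any tool’s format), a use case is just an ordered dialog of actor/action steps, and rendering it reproduces the numbered style of Figure 2:

```python
# A use case as an ordered dialog of (actor, action) steps.
use_case = {
    "name": "Book a Flight",
    "steps": [
        ("User", "chooses to book a flight"),
        ("System", "allows user to choose one-way or round-trip"),
        ("User", "chooses round-trip"),
        ("System", "allows user to specify departure location and destination"),
    ],
}

def render(uc):
    """Render the steps in the numbered dialog style shown in Figure 2."""
    return [f"Step {i}. {actor} {action}"
            for i, (actor, action) in enumerate(uc["steps"], start=1)]
```

Because each step’s meaning depends on its position, reordering the list produces visible nonsense, which is exactly what makes gaps and misplaced steps easy to spot in a review.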

Take a look at the requirements for an application screen below:

  • The screen must allow users to select meals for optional purchase
  • The screen must allow the user to purchase up to three extra luggage pieces
  • The screen must allow for the display of six different meals
  • The screen must display the totals of any and all optional purchases
  • The user must be able to select and re-select any options on the screen as many times as desired before submitting
  • The screen must prominently display the standard luggage allowance
  • The screen will inform the user at all times that their credit card used for ticket purchase will be billed for optional meal purchases
  • The screen must prominently display the price for meals

These are just a few of the requirements that could be specified for a particular screen. These requirements don’t even get into the various constraints on layouts and appearance that are commonplace when specifying screens.

Notice also the use case steps shown earlier in Figure 2. The system steps describe many things that the screen will need to support.

Instead of relying exclusively on interpretation of this written text, what tends to be far more effective is to provide a screen mockup as shown below.


Figure 3 GUI Mockup

While some requirements authors dispense with the written requirement entirely, most generally favor maintaining some combination of textual statements and screen mockups.

A picture definitely is worth a thousand words, and this approach to visualizing what’s needed is indeed quite powerful. Various visual requirements tools available today support a number of very effective approaches in practice. Just a sampling:

  • Wireframes: Simple wireframe mockups of user interfaces, like the one above, are fast and easy to create. Their simplicity means that they often can be updated in real-time during a review session. Another good thing about them is that they don’t look like real screens, so people don’t get confused and think that the wireframes represent a finished product. Keeping the fidelity low, at the level of wireframe, also provides silent guidance to the requirements author to stay at the level of screen concept, and not stray into designing the actual screen.
  • High Fidelity Screens: More richly defined screens that actually resemble the end product are at times warranted. Sometimes this happens in high-risk or complex areas of the application and often in collaboration with a user interface designer.
  • Screen Markups: Very common in situations where you’re enhancing an existing application, taking a screen-shot and applying markups and annotations is an effective way to quickly communicate what’s needed, and to highlight subtle points.
  • Screen Overlays: Similarly powerful in situations where you’re enhancing an existing application, Screen Overlays take a screen shot of the current application, and add screen controls that represent the enhancement. Modern requirements tools will provide an editor allowing you to do this, and a simulation engine allowing those overlays to become live during a requirements simulation.
  • System Interfaces: In areas where there’s no user interface, there’s still room for visualizations. Event or sequence diagrams are excellent visuals for illustrating behavior of a system interface.

Integrate different forms of expression

The use cases discussed earlier provide a narrative description of how the user interacts with the system each step of the way. In addition, the visualizations mentioned above illustrate what the user will actually see along the way. It only makes sense that these be integrated. While this could be accomplished manually by perhaps creating links between your use case steps and the visuals with office-automation applications, a far more effective solution is to use a modern requirements definition tool that supports this linkage, and provides other associated benefits. The most powerful of these tools allows you to have a mixture of visualization approaches associated with the use cases, as illustrated in the picture below:

[Image: use case steps with associated visualizations]
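Conceptually, this linkage is nothing more than a mapping from step numbers to visual artifacts. A tiny sketch, with hypothetical artifact names throughout:

```python
# Which visual artifact, if any, accompanies each step of the use case.
step_visuals = {
    2: "wireframe: trip-type selection",
    4: "wireframe: airport picker",
}

def steps_with_visuals(step_count, visuals):
    """Pair each step number with its associated visual (None if there isn't one)."""
    return [(i, visuals.get(i)) for i in range(1, step_count + 1)]
```

A requirements workbench maintains this association for you; the value is that walking the steps in order also walks the user through the screens in the order they would actually appear.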

Bring Data into the Picture

Data is an inherent part of any application. In some systems, it’s not very dominant; in other data-intensive applications, data occupies most of the mind-share. By adding visuals (GUI mockups) to the steps in a use case, we were effectively illustrating how the user interface is affected by actions of the user (use case steps). Data is similarly influenced by actions of the user, and this behavior should be specified as well as part of the requirements.

With things like user screens, you can specify them entirely in text or with visualizations, as discussed earlier. This of course isn’t new, since many people make sketches or mockups outside of the requirements definition process today, using a variety of graphics programs. One of the key values in what was discussed is making these visualizations an inherent part of the requirements, and associating them with the discrete steps of the use cases at the points where they’d appear to the user.

Something very similar is possible with data. You could choose to express data elements only using textual statements or tables. Today, outside of the requirements, many create prototypes with various tools like Microsoft Excel to actually perform data operations and evaluate results. This allows requirements authors not only to validate their data requirements, but also to communicate them more effectively. As with visualizations, modern requirements toolsets allow you to bring data operations within the realm of the requirements and, like the visualizations, associate them with specific steps of the use cases where they will be performed. Below shows conceptually where data operations are being associated with use case steps, similar to how the visualizations were earlier.


Figure 4 Data Incorporated in a Use Case

By adding visuals to the steps, we’re able to illustrate how user interfaces presented by the system are influenced by user actions. Similarly, by adding data operations to steps, we’re able to show not only how data is influenced by user actions, but also how data can influence system behavior.
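As a small illustration of a data operation attached to a step, consider totaling the fare once traveler types have been specified. The fare table and function name below are hypothetical, purely to show the kind of operation a reviewer could exercise during a simulation:

```python
# Hypothetical fare table consulted by the "specify travelers" step.
FARES = {"adult": 300.0, "senior": 240.0, "child": 150.0}

def total_fare(traveler_types, fares=FARES):
    """Data operation: total the fare for the specified travelers."""
    return round(sum(fares[t] for t in traveler_types), 2)
```

Attaching even a simple calculation like this to a step lets stakeholders validate data behavior (discounts for seniors, totals for multiple passengers) instead of inferring it from prose.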

Pull it Together

So there are multiple ways you can express requirements. Of these, textual lists are typically the least expressive. As discussed earlier, I tend to use these alternate forms to augment rather than replace text lists. However, I typically don’t use them on all parts of the application. One criterion that tends to guide how I express requirements is requirements complexity; successfully communicating highly complex requirements demands these more expressive forms. Another criterion is risk. If the risk associated with getting the requirements wrong in a certain area of the application is particularly severe, I want to be as expressive as possible with the requirements, thereby reducing the possibility of miscommunication. The graph below provides an example of this:

[Image: level of requirements detail plotted against complexity and risk]

While all areas of the application are covered by textual requirements at a high-level, I go to the greatest level of detail in those areas that are complex and risky. The functionality of virtually all areas would be illustrated with use cases. Most of those would have wireframe mockups, with a very few, the highest risk and most complex areas, having higher-fidelity renderings. Also in these important areas, I would leverage data operations on the use case steps. In summary, the highest risk/complex areas are specified to the greatest level of detail and in the most expressive manner, while the lowest risk/simple areas are specified much more simply.

Even though it is possible to do this manually with spreadsheets, drawing tools, and other point-solutions, this is where an integrated requirements workbench really shines in its ability not only to support all these forms of expression, but also to maintain requirements integrity, ensure consistency across them, and manage traceability. The diagram below gives a high-level overview of a traceability strategy for this information, including traceability back to the business context. Tracing back to the business context is vital to ensure that all business needs are being addressed. Conversely, a complete traceability strategy ensures that every piece of application functionality being specified addresses an explicit business need.

[Image: traceability strategy overview, including trace back to the business context]
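The two-way completeness check described above can be made mechanical: every requirement should trace to some business need, and every need should be addressed by some requirement. A minimal sketch, with hypothetical data shapes and IDs:

```python
def trace_gaps(requirements, needs):
    """Return (orphan requirements, unmet needs) for a simple trace model.

    requirements: list of {"id": ..., "traces_to": [need ids]}
    needs: list of business need ids
    """
    traced = {n for r in requirements for n in r["traces_to"]}
    orphan_reqs = [r["id"] for r in requirements if not r["traces_to"]]
    unmet_needs = [n for n in needs if n not in traced]
    return orphan_reqs, unmet_needs

reqs = [{"id": "FR-1", "traces_to": ["BN-1"]},
        {"id": "FR-2", "traces_to": []}]     # FR-2 has no stated business need
needs = ["BN-1", "BN-2"]                      # BN-2 has no requirement addressing it
```

This is precisely the kind of report a requirements workbench produces on demand; doing it by hand across hundreds of requirements is where spreadsheets break down.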

Using Expressive Requirements

Defining expressive requirements can dramatically improve the outcome of software projects, if only by uncovering latent errors early and reducing miscommunication. Expressive requirements can release even more value if modern requirements toolsets are leveraged. The following are just a few examples of the benefits you can derive by using a modern requirements toolset.

Generate Documents. A requirements workbench will allow you to generate requirements documents automatically from the information in the requirements model. This can be a significant shift for a more traditional requirements organization in that time is spent focused on the requirements themselves, making them more expressive, analyzing them and improving their quality as opposed to being focused on the mechanical tasks of producing and maintaining a document. You still get the document, just without the effort.

Generate Simulations. A requirements workbench will allow you to transform a requirements model into an animated, live simulation. Simulations with expressive requirements make reviews far more effective and, quite frankly, enjoyable. The simulations will leverage all the content in the model, including the textual requirements, the use cases, the visualizations, and the data, integrating them into a single “vision” of the future application. Traceability is a vital part of the review process. When reviewing requirements, you must be able to relate them back to the original business need on demand. Comprehensive requirements workbenches available today expose this traceability not only during requirements authoring tasks, but also during the simulations used for analysis and review.

Comments, feedback, and discussions often result from simulation sessions. These are useful exchanges of ideas that also need to be recorded and managed so that they’re not missed, they’re accessible, and you can always refer back to them. A modern requirements workbench will record this informal input alongside the simulation itself and make it accessible throughout the workbench, so people are aware of these discussions and authors can effect change based on them.

Generate Tests. A requirements workbench will allow you to automatically generate tests from the requirements defined in the model. These tests will cover all possible usage scenarios throughout the model and express for each step in the test any relevant screens, data, or externally referenced materials.

The importance of traceability continues through to the tests as well. Each test generated has complete traceability information in its header that identifies which requirements it helps to prove. As I’m sure you can imagine, this ability to generate tests is hugely valuable to the QA professionals on a software project.
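Covering “all possible usage scenarios” amounts to enumerating every path through the use case flows. A toy sketch of that idea, assuming an acyclic flow (the node names are invented, based on the Book a Flight example):

```python
def scenarios(flow, start="start", end="end"):
    """Enumerate every start-to-end path through an acyclic use case flow."""
    paths = []
    def walk(node, path):
        if node == end:
            paths.append(path)
            return
        for nxt in flow.get(node, []):
            walk(nxt, path + [nxt])
    walk(start, [start])
    return paths

# The one-way/round-trip branch from the Book a Flight use case:
flow = {
    "start": ["choose trip type"],
    "choose trip type": ["one-way", "round-trip"],
    "one-way": ["select airports"],
    "round-trip": ["select airports"],
    "select airports": ["end"],
}
```

Each enumerated path becomes a test case, with the screens and data associated with each step carried along into the test steps.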

Integration with Development and Test Tools. Practitioners, such as developers and testers who consume the requirements, base their work products ultimately on the requirements. It’s therefore very important that the requirements information be available to their toolsets as well. Today’s requirements workbench will integrate with popular UML design toolsets as well as QA testing toolsets to provide them with high-quality, expressive requirements content.

Managing Requirements Change

With more expressive requirements, most change will occur at the beginning of the project when alterations are inexpensive. The modern requirements workbench provides several capabilities to help manage change. Some of these include:

  • Traceability: As discussed at several points earlier, the modern requirements workbench supports traceability among all requirements artifacts, provides mechanisms to create relationships that are fast and easy, makes traceability information available as you work, and provides sophisticated and filterable views into traceability for deeper analysis. Traceability is very important for analyzing impact as part of requirements change.
  • Versioning: The modern requirements workbench will version all requirements elements such that you can always go back in time to see them as they existed at points in the past.
  • History: The modern requirements workbench will provide a comprehensive historical record detailing who changed what, and when they did it.
  • Difference: The modern requirements workbench will allow you to select any two versions and instantly see the precise differences between them, for all requirements elements regardless of the form of expression.

Summary

Historically, the key problem with software requirements has been poor quality, and one of the main causes of poor quality is miscommunication, owing to a lack of requirements expressiveness. Multiple forms of expression for requirements can remove this miscommunication. At the same time, integrating these multiple forms in a single unified model keeps them manageable and scalable. The modern requirements workbench makes this approach to practical, expressive requirements possible.

Don’t forget to leave your comments below


Tony Higgins is Vice-President of Product Marketing for Blueprint, the leading provider of requirements definition solutions for the business analyst. Named a “Cool Vendor” in Application Development by leading analyst firm Gartner, and the winner of the Jolt Excellence Award in Design and Modeling, Blueprint aligns business and IT teams by delivering the industry’s leading requirements suite designed specifically for the business analyst. Tony can be reached at [email protected].

The Business Analyst’s Requirements Meeting from Hell!

Secrets to Managing the Difficult Personalities in Your Meetings

Business Analyst Sherry Martin couldn’t stop thinking about her last team meeting as she walked down the hall towards her office. She was leading a team charged with developing a requirements document for a hot new IT product, and things had not been going well. Slamming her office door behind her, she let out an exasperated scream and looked for something to punch! Her team was driving her absolutely crazy, and she channeled Scarlett O’Hara as she proclaimed, “I will never run a meeting like that again!” Her problem, in a nutshell, boiled down to three really difficult personalities that continually recurred on her team. These personalities were indeed a cancer, not just infecting the team and its results but also spreading throughout the group to individual team members as well.

Sherry needs an antidote… now!

Here’s a little help for Sherry…and for you! Let’s explore these common dysfunctional personalities and take a look at how to effectively manage them.

The Dominator

We’ve all experienced “the dominator” in one way or another. Some people tend to dominate discussion simply because they’re excited and overzealous. These people can actually be valuable members of the team if we can find appropriate approaches to harness and manage all that positive energy. Unfortunately, most of us are more familiar with the other type of dominator – the overly aggressive, bullying personality that tramples on others’ comments and may attempt to hijack the meeting completely! Sometimes these dominators are overly negative (“That’ll never work here!”), and other times they just won’t let anyone else get a word in edgewise. In either case, dominators can certainly sour not just the effectiveness of the meeting but also the morale of the team.

Techniques for Effectively Managing the Dominator

  • Thank the dominator for the feedback and ask for others’ input (e.g. “Steven, that’s an interesting idea. Let’s see if others have suggestions as well.”)
  • Reiterate the dominator’s comment, write it visibly for all to see, and then ask for other ideas to complete the list. (e.g. “Steven, it sounds like you’re recommending that we use these three vendors as our short list…is that correct? That’s a great suggestion. Let’s compile a list of several suggestions, then discuss them all. We’ll list your suggestion as “A” on the list. I’d like to get at least three other suggestions from the team. What do others think?”)
  • Instead of having the group respond to an issue verbally, ask them to take 2 minutes to jot down their ideas, issues, or recommendations on a Post-it note instead. Then ask each person to share one comment they wrote.
  • Suggest the group use the round robin technique (go around the room asking each person to share a comment) and start at the opposite end of the table from the dominator (e.g. “This is such an important issue that I want to be sure I’m getting everyone’s ideas. Let’s do a quick round robin starting with Jill…”)
  • Call on a few people you haven’t heard from (e.g. “Michael, what are your thoughts on this issue?”)
  • Take a break and solicit the dominator’s support offline (“Steven, you’ve brought up several key points. I’m hoping to get some of the other team members involved in the discussion to hear their ideas as well. Some members of the group are not as assertive, but I want to be sure we hear from them.”)
  • Break the group into pairs or triads and let them discuss an issue in those smaller groups before initiating a large group discussion
  • Gain agreement with your team to use a physical object (e.g. sponge football) to balance discussion. The person holding the football has the floor, and they must toss it to someone else once they make their point.

The Multi-Tasker

Increasingly, we’re seeing more and more multi-taskers in our meetings. Aptly named, they’re the ones whose attention constantly darts between the meeting leader and any number of other tantalizing distractions (e.g. PDA, laptop, reading material, etc.). Indeed, the multi-tasker is physically present but mentally elsewhere.

Techniques for Effectively Managing the Multi-Tasker

Bring the issue up to the group during the first few meetings and decide as a group how you want to handle technology distractions. Options may include:

  • Using a “technology drop box” at the front of the meeting room and agreeing to drop it in prior to meeting start
  • Limiting meeting time to one hour to ensure participants aren’t away for too long
  • Agreeing on 15 minute technology breaks every hour
  • Having participants bring a buddy to “cover” for them in case they have to step out for a call

Use facilitation techniques that keep participants actively engaged:

  • Round robin
  • Active questioning
  • Affinity diagramming
  • Sub team work
  • Dot voting
  • Use a circular or U-shaped room setup that allows you to easily walk around (and near) various participants
  • Agree upon a mild punishment for texting, emailing, etc. during the meeting. One group used a PDA jar and violators had to put in $5 per violation. (Money was later used for team lunches)

The Rambler

The rambler can seriously derail a meeting with their circuitous, protracted, rambling commentary. Oftentimes, the rambling strays into areas bearing little resemblance to the topic at hand. The rambler can not only significantly extend the length of a meeting but also completely alter the meeting content, thereby minimizing the team’s efficiency and effectiveness.

Techniques for Effectively Managing the Rambler

  • Have a printed agenda (on a flip chart or whiteboard) in the room. When conversation strays off topic, stand up and point to the specific agenda topic to refocus the group.
  • Include timings for each section of the agenda so you can more easily focus the group on the time allotted for each discussion point. Possibly ask someone on the team to provide a five minute warning before the scheduled end time for each section of the agenda.
  • Simply raise your hand and interrupt discussion to ask if the conversation is on topic and helping the group reach their goal for the meeting. (“Guys, allow me to step in for a moment to ask whether the vendor discussion is relevant for this particular section of the agenda?”)
  • Introduce the Parking Lot at the beginning of the meeting and announce that you’ll interrupt discussion to place any off-topic discussion points on the parking lot to help keep the group on track. (“Jill, I realize that you feel strongly about the inventory control issue, but I’m wondering if we should try to resolve that now or could we possibly place it in the parking lot?”) Review all parking lot items at the close of the meeting and assign action items for each.
  • Assign someone on the team to act as the “rambler police” (use a badge if appropriate). This person is responsible for raising their hand anytime the discussion veers off topic.
  • Consider using the ELMO technique. ELMO = “Everybody, Let’s Move On!” Whenever anyone in the group feels the group is rambling too much, they’re expected to pick up the ELMO doll in the center of the table.

Don’t forget to leave your comments below


Dana Brownlee is President of Professionalism Matters, Inc. which operates www.meetinggenie.com, an online resource for meeting facilitation tips and instructional DVDs.

Maturity is More than Looking for Grey Hair and Wrinkles

The concept of capability maturity has been around since Deming first started the quality movement in the 1950s. So what’s a “mature” business analyst? Are we really looking for grey hair and wrinkles to determine that individuals or collective organizations are consistently going to perform at a high level of quality? Is there some measure of “prune-i-ness” that you can use to pick out top performers?

I think not.

Overall, if an organization wants to change the performance level of its business analysis function, it has to focus on six underlying capabilities: processes, practices, people, technology, organization and deliverables. Sure, you can get a short-term bump in productivity by being draconian, but it is simply not possible to make material, sustained change without systematically addressing these six capability areas. The question in each of the six areas is not whether you have, for example, processes or not. The question is the degree to which an organization accepts and adopts that process as the best possible way of doing things. This means that some organizations think they have a process for requirements definition and management … but admit it’s actually quite ad hoc. At the other end of the spectrum, organizations have institutionalized the process, and can describe their efforts to continuously optimize this process so that its alignment to delivering stakeholder value is maximized. These two examples are the extremes of requirements definition and management maturity.

As an organization gets more mature (becoming less ad hoc and more institutionalized in processes, practices, skills, etc.) it radically changes its overall productivity. This productivity is a real, tangible thing you can measure. In fact, the maturity of business analysis capability dramatically reduces things like time-to-market for IT-centric services, and makes little problems like over-runs and project failures start to go away. What people tend not to realize is that maturity – and the overall level of value delivered by an analyst organization – is also measurable. Maturity of requirements definition and management within organizations is a real, tangible thing – and no… it’s not based on looking for grey hair on the leadership, or seeing especially wrinkly analysts.

A funny thing happens when organizations suddenly wake up and realize that poor requirements are killing their business (and likely careers). CIOs think that the problem of poor requirements can be solved purely by hiring smart people. I call this “attempting to adjust the overall performance of your organization purely through the ‘prune-i-ness’ of your analysts”. The results are as bad as it sounds, and the CIO eventually takes the fall for having such expensive people on the payroll. In fact, lower skilled people in high requirements definition and management maturity organizations will VASTLY outperform highly skilled people in low maturity organizations. Getting performance is not strictly about adding grey hair; it’s about addressing the underlying issues that cause poor requirements: poor processes, lack of techniques, poor organizational support, lousy technology, incomprehensible requirements deliverables, and yes, there also might be an issue with skills.



Keith Ellis is the Vice President, Marketing at IAG Consulting (www.iag.biz) where he leads the marketing and strategic alliances efforts of this global leader in business requirements discovery and management. Keith is a veteran of the technology services business and founder of the business analysis company Digital Mosaic which was sold to IAG in 2007. Keith’s former lives have included leading the consulting and services research efforts of the technology trend watcher International Data Corporation in Canada, and the marketing strategy of the global outsourcer CGI in the financial services sector. Keith is the author of IAG’s Business Analysis Benchmark – the definitive source of data on the impact of business requirements on technology projects.

Building and Managing a Requirements Center of Excellence. Part 2.

Why Adopt the Requirements Center of Excellence Model?

The Centers of Excellence (CoE) trend has gained traction recently within the IT departments of large organizations. In fact, the META Group refers to the model as “the next step in IT’s evolution” (Source: The Application Center of Excellence, META Group). This section summarizes costs, risks, and various limitations of traditional quality management practices and contrasts them with the Requirements CoE model.

Problems with Traditional Requirements Practices

Requirements Development not Rigorously or Formally Applied. Most organizations today will author requirements and then attempt to raise the quality of these requirements through manual document walkthroughs and reviews, which has proven to be an onerous, expensive and, to a large degree, ineffective approach. Any errors in the requirements will be detected at some point and have to be remedied, if not during development or testing then during operation by the user. With more rigorous application of requirements best practices and automation, the majority of these expensive errors can be detected early when they are easy and far less expensive to fix.

Traditional Methods and Tools Fail to Close the Gap between Business and IT. The traditional requirements approaches and toolsets continue to focus on managing requirements through a lifecycle with little regard to their quality or if these requirements truly address the business needs. The result is a continual procession of “Death March” projects [Yourdon].

Piecemeal Requirements Practices and Toolsets. In many, if not most, cases today, software development is distributed to some degree within an organization (across divisions, departments, or at least projects) and each area tends to be a “microcosm” of disparate tools, approaches and practices, workflows, and artefacts. There is typically little systematic learning and improvement in such an environment. The business incurs not only the obvious direct costs of this but also the indirect costs associated with staff turnover due to the frustration and dissatisfaction of working in such an environment.

Disconnected Processes. The different disciplines on the project that utilize the outputs of requirements definition often have processes that are not integrated with those of the requirements discipline. This discontinuity is especially pronounced if the projects and departments have no consistency of tools and/or process across them. Any and all attempts to leverage common services such as technical documentation, customer support and training development, not to mention consistent management reporting and oversight, result in huge inefficiencies in such an environment.

The Business Value of the Requirements CoE Model

Drive Errors out Early in the Life Cycle. A key focus of the Requirements CoE is to promote the definition of requirements whereby they evolve through successive iterations to a high level of quality. The direct result is that less rework is required during later phases of the lifecycle resulting in a higher-quality product, faster, and at lower cost.

Communicate Unambiguously with IT so they Build what Business Needs. The Requirements CoE fosters enhanced communication between Business and IT. Consistency of notation and process is a large part of it, but modern Requirements Definition practices and toolsets make this a reality.

Give the Business the Necessary Control of the Process. Historically, the business attempted to express the need, this was interpreted and documented as well as possible, and then IT would proceed largely unchecked until some version of the developed product was made available. It was only then that any misinterpretations and miscommunications became obvious. Unfortunately, by this time most of the budget and schedule had been spent, so changes translated into project over-runs or, where this wasn’t an option, the business would simply accept the inadequate product. The Requirements CoE endeavors, through process and automation, to provide the business with meaningful points of control during development to prevent and detect misinterpretations and miscommunications early.

Integrated with Other Disciplines. One of the largest gaps in software development solutions today is the lack of end-to-end integration throughout the software lifecycle. This is true of both processes and automation. While it may be possible to greatly enhance the performance of an individual discipline, one then runs into a brick wall when the results of that work cannot be passed or integrated into another discipline that needs it. The Requirements CoE not only focuses on ensuring the definition of high quality requirements, but also the seamless communication of information with other software development disciplines (testing, development, etc.).

How to Build a Requirements CoE

The flexibility of the CoE model enables companies to start small, use existing resources, and achieve tangible benefits almost immediately. This section outlines a typical process for building a Requirements CoE step by step.

Assess

The Assessment activity begins with clearly identifying and articulating the business goals that the CoE is fulfilling, or helping to fulfill. This step is critical to maintain the focus of the CoE on addressing business needs.

As an outcome of this activity, objective success criteria (KPIs) for the CoE will be established, consistent with business goals. This will allow for clear assessment of progress as the CoE is implemented and set the targets that staff need to meet. The criteria will be prioritized – often by short, medium, and long term need – and used to drive implementation planning. Prioritization is often based on the most prominent issues that need to be resolved (i.e., “pain points”) but can also be based on risk, criticality, or other drivers. Also, an agreement on how to calculate Return on Investment (ROI) is typically reached at this point, since such information is usually required following implementation to justify the existence of the CoE. The specific measures required as inputs to the ROI are determined so that their collection can be planned for.
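To make the ROI agreement concrete, here is a minimal sketch of one common way such a calculation might be set up. The formula and the dollar figures are illustrative assumptions, not something the article prescribes; the inputs would come from whatever measures the organization agreed to collect during Assessment.

```python
def coe_roi(savings: float, investment: float) -> float:
    """Return on Investment as a percentage.

    Illustrative formula: ROI = (savings - investment) / investment * 100.
    'savings' might aggregate avoided rework costs attributed to the CoE;
    'investment' is the CoE's cost over the same period. Both are assumptions
    for this sketch -- the actual measures are whatever the stakeholders agree on.
    """
    if investment <= 0:
        raise ValueError("investment must be positive")
    return (savings - investment) / investment * 100.0

# Hypothetical example: $500K in avoided rework against a $200K CoE investment
roi = coe_roi(savings=500_000, investment=200_000)
print(f"ROI: {roi:.0f}%")  # ROI: 150%
```

The point of agreeing on a formula this early is that the inputs (here, avoided rework and CoE cost) must be instrumented from day one; they cannot be reconstructed after the fact.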

In addition to the above, the organization’s “environment” in which the CoE will play a pivotal role needs to be documented to serve as input to subsequent scoping and design activities. This environment is considered from three perspectives:

People. The skill levels and proficiency of staff in the Requirements discipline, and of those who work from the requirements in adjacent disciplines (e.g. design, test, etc.). It takes into account the diversity or consistency of this across the organization.

Process. The existing process, specifically in the area of Requirements, is examined. The activities performed and the artefacts produced as a result are recorded, in addition to how these artefacts are used by adjacent disciplines. The interface points of the discipline with management processes in terms of review, tracking and reporting are also recorded. Once again the consistency of application across the organization is taken into account.

Automation. The tools currently used and planned for use in Requirements and adjacent disciplines are recorded, as well as the manner in which they are used. Any prominent issues with current usage in the context of the existing process are documented.

Finally, the set of current projects along with their status, priority, impending milestones and commitments are itemized.

This accounting of the business goals the CoE is to fulfill and how to measure them, the environment in which the CoE will play a role and the current state of projects and initiatives, along with the constraints of impending commitments provides a basis on which to plan an optimum implementation approach for the organization.

Scope

Based on the information gathered during the assessment, prudent decisions can now be made on how best to implement the CoE in an incremental fashion to ensure business objectives are met at minimal risk to existing operations and commitments. The result of the scoping activity will be the high-level approach for incremental implementation identifying:

  • Which CoE capabilities will be implemented, and in what order
  • Across what part(s) of the organization and projects
  • What business objectives will be met, and how this will be measured
  • The justification for the approach based on risk, opportunity, and other criteria

This results in a high-level Implementation Plan for the CoE.

Design

This activity will evolve the high-level Implementation Plan developed in the previous activity to the point where it can be enacted. These plans are most often based on a phased approach with the immediate phases being expressed in detail, and subsequent phases expressed in less detail as they will be updated based on lessons learned from the initial phases. The Implementation Plan will identify the activities that need to be performed, schedules, milestones, resources required, responsibilities, risks and mitigation strategies to successfully implement the CoE in an iterative fashion. Associated costs and budgets will also be determined.

It is important to note that the nature of the CoE is to consolidate and advance the level of corporate expertise in the area of Requirements. The very nature of this group is one of communication with other groups in the organization, and so the implementation of the CoE will require significant involvement of other groups and projects.

Implement

This activity results in the implementation of the CoE over time, by enacting the plan outlined during design activities above.

Continual monitoring of progress and measuring convergence on KPIs is performed throughout the implementation, along with risk monitoring to ensure objectives are attained without compromising existing projects and their goals.

A significant part of the initial implementation is education and awareness for everyone in the organization, and certainly those affected directly or peripherally so that expectations are effectively managed.

Validate

Once operational, the initial capabilities of the CoE are quickly assessed for their effectiveness and how well they meet the objectives set, and adjustments are made accordingly. Lessons learned from the implementations are used to influence detailed planning for subsequent implementation of additional CoE capability.

Validation is an essential step in managing the Requirements CoE implementation and growth based on the value-driven goals. In the validation activity, actual measures of KPIs are compared to targets set forth during Assessment.

Iterate

Following deployment of the initial Requirements CoE capabilities, high-level plans for additional iterations can be reviewed, modified, and detailed based on the additional knowledge gained and any other changes (e.g. business environment changes, project actuals, changes in risk, etc.). Subsequent iterations will either expand capabilities of the Requirements CoE, or its breadth in the organization, or both.

In many cases there are already some centralization activities in various areas of the company – for example, a CoE focused on performance validation or defect management. If this is the case, it may make sense to formalize the activities and processes between the individual centers as their individual capabilities grow.

Another approach is to start by using the Requirements CoE model to resolve one specific critical issue (e.g. excessive requirements “churn” in a critical project), then:

  • Gradually build up the resources and capabilities of the CoE to optimize Requirements processes and techniques on a project basis (pre-empt requirements issues through process consistency)
  • Extend the CoE model to other areas such as capacity planning, code optimization, etc.
  • Ramp up to standardized processes and solutions throughout the enterprise.

Answers to CIO Questions about a Requirements Center of Excellence

Q: How do I ensure quick results?
A: Use the iterative approach (approx. 2-3 months per iteration). Focus on the services that will generate immediate results, such as verification and validation of requirements.

Q: What level of investment will be required?
A: An initial investment of even one FTE can be enough to deliver value by establishing an inventory of current practices and consolidating requirements toolsets. The iterative approach is a way to offset subsequent investments with returns from previous iterations.

Q: How do I ensure adoption of the CoE concept?
A: Executive commitment and support is considered essential. Find one or more project teams who are receptive to the establishment of a CoE. Leverage this collaboration to produce even a single success story that can be marketed to the next projects.

Q: How do I prevent disruptions to the organization’s daily operation?
A: The initial focus on critical business issues will align CoE services with daily operations. At the same time, the CoE should be flexible enough to support specific project requirements and culture before it is used as the foundation for standardizing enterprise processes.

Q: How do I use the CoE model to compete with the alternative service providers?
A: Focus your energies on ensuring that you are faster and cheaper. This will overcome what are typically the biggest objections.

Q: How do I demonstrate value of the CoE ?
A: It is essential to measure the CoE’s impact on projects and the CoE’s effectiveness. KPIs can include product improvement (lower defect rate, fewer change requests) as well as CoE efficiency (projects per person, project time, cost per project, etc.).
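The KPIs named in this answer can be tracked with very little machinery. The following sketch shows one way to normalize the raw counts so that projects of different sizes are comparable; the metric names, the inputs, and the sample numbers are all hypothetical illustrations, not prescribed by the article.

```python
from dataclasses import dataclass

@dataclass
class ProjectStats:
    """Raw counts gathered per project (illustrative inputs)."""
    defects: int           # defects traced back to requirements
    change_requests: int   # post-baseline requirement changes
    requirements: int      # requirements in the baseline
    analysts: int          # analysts who worked the project

def kpis(p: ProjectStats) -> dict:
    """Normalize raw counts into size-independent KPI ratios."""
    return {
        "defects_per_requirement": p.defects / p.requirements,
        "change_requests_per_requirement": p.change_requests / p.requirements,
        "requirements_per_analyst": p.requirements / p.analysts,
    }

# Hypothetical before/after comparison for one project team
before = kpis(ProjectStats(defects=120, change_requests=45, requirements=300, analysts=4))
after = kpis(ProjectStats(defects=60, change_requests=20, requirements=320, analysts=4))
assert after["defects_per_requirement"] < before["defects_per_requirement"]
```

Normalizing per requirement (rather than reporting raw defect counts) matters because an improving CoE often takes on larger projects, which would otherwise make absolute counts look worse even as quality improves.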

Summary: 10 Tips for Building and Managing a Requirements CoE

  1. The earlier you plug into development and delivery processes, the easier it is to deliver value to your internal customers.
  2. The fastest path to success is to begin with high-value capabilities like automated verification and validation of requirements, and automated generation of tests. It is very easy to show value even on the first projects.
  3. Internal selling is essential. It is not enough to be the best technically. You need to communicate and demonstrate value to your organization.
  4. You need to define your value proposition to compete for the projects where alternative providers may be under consideration. The emphasis in early projects will be the cost and speed of your services.
  5. Your organization might be unaware of all the capabilities and value of Requirements optimization and management. Educate your organization, evangelize IT management, and create workshops for the decision makers. Blueprint can assist you in this.
  6. Robust infrastructure and automation platforms are essential to the success of the Requirements CoE. Simply having good processes will not prove that you do the job better than any internal or external competitor.
  7. Knowledge accumulation is critical for continual improvement of Requirements Development practices. You will need to ensure that every member of the CoE reports all findings in these areas.
  8. Ensure that you measure your achievements. This is important for controlling the value of the CoE and also for proving that value to the outside world.
  9. You need to provide your customers with high visibility into your progress, status, and findings. Lack of information drives customer dissatisfaction.
  10. While your goal is to standardize processes and practices to ensure the lowest cost and the highest efficiency, you need to be “easy to do business with”. This means the flexibility to adapt your approach and your capabilities to the customer’s project framework and even culture.

For Part 1 of this article click on: http://www.batimes.com/component/content/article/106-articles/500-building-and-managing-a-requirements-center-of-excellence-part-i.html



Tony Higgins is Vice-President of Product Marketing for Blueprint, the leading provider of requirements definition solutions for the business analyst. Named a “Cool Vendor” in Application Development by leading analyst firm Gartner, and the winner of the Jolt Excellence Award in Design and Modeling, Blueprint aligns business and IT teams by delivering the industry’s leading requirements suite designed specifically for the business analyst. Tony can be reached at [email protected].

Requirements Gathering in a Flat World

In his 2005 bestseller The World is Flat, New York Times columnist Thomas Friedman famously proclaimed that advances in global communications technology had essentially rendered the world flat. By this he meant that the massive investment in communications infrastructure made during the dot-com boom has, as a virtuous by-product, allowed a new generation of companies to leverage cheaper resources abroad.

And has it ever!

The ubiquity of persistent Internet access has enabled a new generation of companies to leverage resources – be they human or IT-based – wherever they might be physically located. Regardless of how you feel about the political implications of offshore outsourcing, it’s clearly no passing fad.

It wasn’t too long ago that developing software via offshore outsourcing was solely the province of large enterprise organizations. This of course is no longer the case, as even the smallest companies have come to rely on distant software development teams in Eastern Europe, former Soviet republics, and Asia. In fact, this ability to instantly tap cheaper development resources and expertise has directly contributed to a burst of innovation, as smaller companies are able to level the playing field with their larger enterprise counterparts.

Yet this opportunity also comes with some thorny strings attached – any short-term savings realized can be quickly offset by poor requirements definition and gathering practices. As business analysts well know, clearly articulated software requirements are the foundation of any successful software project. Indeed, the reason why so many software projects continue to fail is precisely because not enough attention is paid to this critical first step in the software development process.

Communicating and managing software requirements is of course challenging enough even when your whole team happens to be situated in the same location. But when working with a distributed team, many of whom don’t even share a common first language, the potential for errors grows considerably. And thus, any savings realized through cheaper resources are suddenly squandered. Worse still, time is wasted, morale suffers, and the project team is forced to shift into “fix-it” mode.

On-Demand Apps Grow Up

When Salesforce.com was founded in 1999 by former Oracle executive Marc Benioff, the notion of applications hosted “off-premise” was still something of a radical idea. At that time, there were still a number of barriers to widespread adoption: application performance, security assurances, and bandwidth constraints, to name but a few.

Just 10 years later, practically every major software category now includes a healthy ecosystem of on-demand solutions. Not only has the supporting infrastructure matured with offerings like Amazon’s EC2 cloud platform – allowing software providers to more easily configure their applications in the cloud – but the next generation of on-demand applications is pushing the envelope in terms of what can be delivered through a browser. More and more, these applications feel and act like traditional desktop software. But of course, they can do so much more because they are intrinsically connected. Now, every on-demand application has the potential to become a dynamic collaboration platform that succeeds in bringing all project constituents to the table. And, as more on-demand application developers create and publish their APIs, they can be integrated into other facets of the software development process.

But it’s this idea of connectedness that’s so powerful when it comes to keeping distributed teams marching to the same beat. Business analysts are not just responsible for defining requirements but also for ensuring that they’re properly communicated and correctly applied. While large enterprises might have the luxury of a robust requirements management tool that incorporates collaboration (and surprisingly, only a small percentage of these enterprises have adopted such solutions), the average small to medium enterprise does not. Instead, most of these companies use some combination of Word, Excel, and email to define and communicate requirements to their development teams. Because these tools are disconnected from one another, the potential for miscommunication is an endless challenge. And when those development resources are located half a world away, it’s not a question of if something will go wrong but rather when.

So what’s a budget-minded business analyst working with a distributed development team to do?

Requirements Outlook? Mostly Cloudy.

In many respects, business analysts can find inspiration in the latest hype cycle: social media. Social applications like Facebook and Twitter have reshaped our personal dialogue and communications framework. And aspects of these sites will likely transform the way we think about software requirements.

For better or worse, we are in constant communication with our own personal circle of contacts. Anyone who has been part of a software development team understands how important it is to be in regular contact with the rest of the team. From the beginning of a project (i.e., defining use cases) through the implementation phase (i.e., bug tracking), persistent communication is the new Critical Chain. Put another way, when you can’t all be present for the daily stand-up meeting, how do you ensure everyone is on track?

There are three key ways that migrating the requirements gathering process to the Cloud will help business analysts get better results from their distributed development teams:

Collaborate in Real Time, in Context. Email is a terrific communications tool. But it’s asynchronous and often detached from a larger context. This becomes especially problematic when multiple teams are working on different aspects of the same project. By unifying the requirements process in a hosted environment, distributed teams can collaborate in a meaningful way and identify potential problems before they get out of hand.

Visualize the Workflow to Mitigate Language Barriers. One of the most vexing challenges that business analysts face when it comes to working with offshore developers is communicating across multiple languages. There are the obvious spoken language barriers to overcome. But more often it’s the nuance and culture of language that presents the most significant obstacle. Because business and software requirements are technical in nature, employing a visual language to communicate requirements becomes a necessary complement. As traditional requirements gathering tools move to the Cloud, the ability to clearly represent business requirements in a visual manner (i.e., use case scenarios, parking lot diagrams, etc.) will help keep all project constituents on track.

Visibility for All, Across the Entire Requirements Spectrum. No software developer should be an island unto themselves. Even though two developers might be working on two discrete and unrelated tasks, it’s still vital that they understand the underlying business requirements that are driving the project. By standardizing the requirements management process in the Cloud, the business analyst can provide all of their team members with the “situational awareness” they need to understand how their software will be used.

In Flat We Trust

The world is indeed becoming flatter by the day. Just as water naturally finds the path of least resistance, free markets will invariably gravitate towards the most cost-efficient resources. However, even with all of Friedman’s optimism regarding the potential rewards of offshore outsourcing, he also goes on to warn us: “In this era of mounting complexity with more people, systems and products entwined in a bewildering web of global networks, explaining is an enormously valuable skill.” In a sense, that is exactly what software requirements are all about: explaining. And explaining becomes all the more important when teams are fragmented by geography, culture, and language.



Darren Levy is the founder of GatherSpace.com, a provider of on-demand requirements management solutions based in Los Angeles. Email him at [email protected].