
Writing Effective Project Requirements

Requirements are (or should be) the foundation of every project. Put simply, a requirement is a need. That need, the business problem to be solved, leads to the requirements, and everything else in the project builds on those business requirements.

Importance of Requirements
Requirements are considered by many experts to be the major non-management, non-business reason projects fail to achieve the "magic triangle" of on-time, on-budget, and high quality. Very few projects do an effective job of identifying the requirements and carrying them correctly through the project.

Various studies have shown that requirements are the biggest problem in projects: projects fail because of requirements problems, and the most defects are introduced during the requirements phase. Project teams need to do a much better job on requirements if they wish to develop quality software on time and on budget.

Furthermore, requirements errors compound as you move through a project. The earlier requirements problems are found, the less expensive they are to fix. Therefore, the best time to fix them is right when you are involved with gathering, understanding, and documenting them with your stakeholders (who should be the source of your project requirements).
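The compounding effect can be illustrated with a small sketch. The phase multipliers below are assumptions for illustration only (published cost-of-fix studies report a wide range of figures); they are not data from this article:

```python
# Illustrative sketch: relative cost to fix a requirements defect,
# depending on the phase in which it is discovered. The multipliers
# are assumed for illustration; real studies vary widely.
RELATIVE_FIX_COST = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(phase_found: str, base_cost: float = 1.0) -> float:
    """Estimated cost to fix a requirements defect found in `phase_found`."""
    return base_cost * RELATIVE_FIX_COST[phase_found]

# A defect caught while gathering requirements is far cheaper than
# one that survives into production.
print(fix_cost("requirements"))  # 1.0
print(fix_cost("production"))    # 100.0
```

Whatever the exact multipliers, the ordering is the point: the later a requirements error is found, the more work built on top of it must be redone.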

The hardest requirements problems to fix are those that are omitted. This really becomes the requirements analyst’s dilemma. The analyst often does not know what questions to ask, and the stakeholder does not know what information the analyst needs. Since the analyst doesn’t ask, the stakeholder doesn’t state requirements.

The requirements phase is also considered by many experts to be the most difficult part of any project due to the following:

  • The requirements phase is where business meets information technology (IT).
  • Business people and IT people tend to speak different “languages.”
  • Business: “It has been determined that if we convolute the thingamajig or maybe retroactive the thatamathing, our profitability may, if we are extremely lucky, rise beyond the stratospheric atomic fundermuldering.”

In other words, English is an ambiguous language, and we tend to speak and write in a manner that makes it even more ambiguous, convoluted, and unclear.

Building Valid Requirements
The requirements analyst truly is responsible for the failure or success of the requirements on a project. With that at stake, building valid requirements up front is crucial. The four steps to this goal are: elicitation, analysis, specification, and validation.

Elicitation
The term elicitation is the industry-accepted term for getting the requirements from the stakeholders. Elicitation, however, implies much more than just capturing or gathering the requirements from users.

The truth is, one of the surest ways to fail in requirements is to say to your users, “Give me your requirements,” then stand back and “catch” them.

Why doesn’t this work? The stakeholders are experts in their domains. While the analyst probably has more expertise in the IT domain, the two speak different languages. The stakeholders truly do not understand exactly what IT needs to be able to develop an effective system for them.

So the only way for a project to obtain comprehensive, correct, and complete requirements from all stakeholders is to truly elicit them. Elicit means to probe and understand the requirements, not just capture or gather them.

The reality is that the quality of the requirements depends on the quality of the analysts' elicitation of them.

Analysis
Analysis involves refining (or analyzing) the stakeholder requirements into system requirements, then software requirements. Analysis is a critical step that is too often omitted or bypassed in projects.

Analysis is the critical transition from stakeholder or business terminology to system or IT terminology. For example, stakeholders talk about the "Monthly Marketing Report," while systems talk about the file "MoMktRpt.doc."
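One lightweight way to make that transition explicit is a glossary that maps each stakeholder term to the system artifact derived from it. A minimal sketch, echoing the article's example; all other entries and names here are invented for illustration:

```python
# Sketch: mapping business-facing terminology to system-facing artifacts.
# "Monthly Marketing Report" -> "MoMktRpt.doc" comes from the article;
# the remaining entries are invented examples.
glossary = {
    "Monthly Marketing Report": "MoMktRpt.doc",
    "Customer Master List": "cust_master.db",
    "Quarterly Sales Summary": "QtrSales.xlsx",
}

def to_system_term(stakeholder_term: str) -> str:
    """Translate a stakeholder term; fail loudly if no mapping exists,
    since an unmapped term usually signals a missed analysis step."""
    if stakeholder_term not in glossary:
        raise KeyError(f"No system artifact mapped for: {stakeholder_term!r}")
    return glossary[stakeholder_term]

print(to_system_term("Monthly Marketing Report"))  # MoMktRpt.doc
```

Even a table this simple forces the analyst to confront any business term that has no system-side counterpart yet.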

Analysis involves brainwork, but it is not a magic process (nor is any other part of software engineering, for that matter). Analysis is usually done in conjunction with various modeling techniques. Modeling (creating diagrams using standard notations) allows analysts to more fully understand processes and data in a rigorous manner. This understanding allows them to convert the often loosely stated stakeholder requirements into more concise, rigorous system and software requirements.

Common modeling techniques include the following:

  • Traditional systems:
      • Dataflow diagrams to understand processes and activities
      • Entity-relationship diagrams to understand data
  • Object-oriented systems:
      • UML (Unified Modeling Language) diagrams, especially class diagrams for analysis, but also possibly collaboration diagrams

Specification
The specification sub-phase involves documenting the requirements into a well-formatted, well-organized document. Numerous resources are available to help with writing and formatting good requirements and good documents. For general writing assistance, books on technical (not general) writing should be used. A major resource is the set of IEEE Software Engineering Standards.

Validation
Once a requirements specification is completed in draft form, it must in most cases be reviewed both by peers of the author and by the project stakeholders. If detailed stakeholder requirements were written and signed off by the stakeholders, the stakeholders may not need to participate in reviews of the more technical system and software requirements. This presumes good traceability matrices are in place.
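Traceability can be checked mechanically. A minimal sketch of one such check, assuming requirements are tracked as simple IDs (all IDs and the trace table below are invented for illustration):

```python
# Sketch: verify every stakeholder requirement traces to at least one
# downstream system requirement. IDs are invented examples.
stakeholder_reqs = {"SR-1", "SR-2", "SR-3"}

# Each system requirement records which stakeholder requirement it satisfies.
trace = {
    "SYS-10": "SR-1",
    "SYS-11": "SR-1",
    "SYS-12": "SR-3",
}

def untraced(stakeholder: set, trace_table: dict) -> set:
    """Stakeholder requirements with no downstream system requirement."""
    covered = set(trace_table.values())
    return stakeholder - covered

# SR-2 has no system requirement behind it: a gap the review should catch.
print(untraced(stakeholder_reqs, trace))
```

A gap flagged this way is exactly the kind of omitted requirement the review checklist is meant to surface before validation signs off.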

The specifications are reviewed by a group of people to ensure technical correctness, completeness, and various other things. Often checklists are used to ensure all requirements in all possible categories have been elicited and documented correctly.

Validation is actually a quality assurance topic. All documents produced throughout a project should undergo validation reviews.


This information was drawn from Global Knowledge's Requirements Development and Management course developed by course director and author Jim Swanson, PMP. Prior to teaching for Global Knowledge, Jim worked 25 years at Hewlett-Packard as a business systems developer, in technical product support, and as a senior project manager. Before HP, Jim worked as a geologist for the US Geological Survey.

 

Copyright © Global Knowledge Training LLC. All rights reserved.

Requirements Management: Process vs. Content

In my most recent entry, I suggested that the BABOK 2.0 introduces the separation of process (planning, elicitation, documentation, analysis, verification/validation) from content (software development, etc.). And if you read that entry, you know I am of the opinion that this is a smart move on the part of the IIBA and the BABOK committee and authoring community.
Why? To put it simply: everyone does requirements management! And the process framework that will be represented by BABOK 2.0 will be valuable in many different disciplines.

For example, consider the practice of Instructional Design (ID): it too defines an approach that includes:
• Gathering (needs assessment, task analysis, workplace assessment)
• Analysis
• Documentation (Student Performance Objectives)
• Solution identification (delivery mode, material selection, etc.)
• Management of requirements through the content development process
• Verification/validation of the content (Kirkpatrick levels 2-4, Kolb learning cycle, psychometric analysis of related exams)

This is not to say that the BABOK would become the reference body for ID itself – that subject area is sufficiently covered. Viewing the ID process through the BABOK lens, however, further strengthens the fundamental notion of the separation of the requirement from the solution.
 
You may have noticed that Enterprise Analysis has not been mentioned yet – I hope you stay tuned to read my thoughts on how that fits in…

Meanwhile, I encourage you to share with me, and your fellow readers, your thoughts on this thread as it develops more fully over the next few entries.

If Agile Were to Go Mainstream

If agile methods are to go mainstream, it might be when their popularity and legitimacy reach a tipping point. One sign that this could be happening is a recent NY Times article called "Google Gets Ready to Rumble With Microsoft" (16 December 2007), which my colleague Ken Orr wrote about in the Cutter Trends Advisor newsletter entry "Velocity Matters: Google, Microsoft, and Hyper-Agility, Part 1" (20 December 2007).

The articles are about Google going after Microsoft’s customer base, using something called its “Cloud” computing framework. But Ken Orr’s interpretation of the Google-Microsoft confrontation emphasizes the time-to-market advantages that Google’s software development lifecycle has over Microsoft’s. Google is apparently practicing a more agile, iterative-style approach (sometimes quarterly) to releasing software, while Microsoft is more tied to the big bang, multi-year cycle for its products.

Might the public start perceiving companies like Google as “agile and adaptive,” while tagging Microsoft as “heavy and slow?” Agile methods may have found their version of Malcolm Gladwell’s “sticky message.” Most agree that it began in earnest with the infamous Agile Manifesto — elegant in its simplicity. It emphasized the value of “individuals and interactions” in creating “working software via customer collaboration while being responsive to change.”

Simple and Elegant.

But some people felt the word “manifesto” carried an interesting connotation because of its perceived arrogance. One manifesto, by a crazed lunatic called the Unabomber, made headlines years ago by decrying the evils of an industrialized society and railing against the establishment. The agilists (who were NOT crazed lunatics) were also railing against an establishment; in this case, the Carnegie Mellon Software Engineering Institute (SEI). The agilists’ message was that they were “lean and fast,” while anyone who followed the SEI was “heavy and slow.” Put simply, it was the younger generation calling their village elders (or at least their software processes) FAT.

The defiance had gotten personal. They were mad about project overrun statistics and sick and tired of being blamed for them. All those Ed Yourdon Death March projects had taken their toll. They were not lunatics, but they were irreverent for lots of reasons, and it was understandable.

Manifestos and name-calling seemed to help the Agile message to stick. Moreover, if Agile rides a Google wave, it will make a lot of software development organizations consider following Google’s lead.

Meanwhile, there’s an interesting quote by a long-ago congressman named Willard Duncan Vandiver. In an 1899 speech, he said, “I come from a country that raises corn and cotton, cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I’m from Missouri, and you have got to show me.” Some people say that the speech is the reason why Missouri is famously nicknamed, “The Show Me State.”

Westerners at the time used the phrase to suggest that Missourians were slow and not very bright. (There’s that name-calling thing again.) Missourians, however, turned the definition around and claimed that “show me” meant that they were shrewd and not easily fooled. (It turns out that the phrase was current before Vandiver, so the thinking is that his speech may have merely popularized it.)

Now here's where it gets interesting. Manifestos and name-calling might have some frothy eloquence to them, but they neither convince nor satisfy one important constituency that many agilists badly need so they can practice their agile craft. That constituency happens to be the people who sign the checks: senior management. Senior management has to buy into the idea and take risks on a "manifesto" concept that can affect the company that employs them all. They have been around long enough to see fads come and go, and they can be cynical at times. Management also doesn't like the processes it has invested in to be called fat.

The agilists come to the elders and ask for some money. They want a new thing called Agile Methods. The elders respond, "You have got to show me some productivity metrics." No metrics, no money. The agilists cringe, because they see metrics guys as process-heavy people spouting function points.

But you don’t have to be process-heavy or say “function points” all the time to be someone who knows a little bit about IT metrics. I am neither of these and have been collecting essential core metrics on hundreds of projects over the years. Many of my clients in the last 12 months are companies running Agile projects, mostly XP and Scrum, and they want to know how they compare against waterfall projects.

We have plenty of data in a worldwide database of over 7,400 projects — agile, waterfall, package implementation, new development, legacy, military, commercial, engineering — you name it. The numbers are so very interesting that I can’t fit them all into this article. Suffice to say I’ve been on the lecture circuit recently on this subject and conducting webinars for people who want me to show them.

So what have I found? Here are some of the highlights:
• Agile teams have metrics. The perception might be that Agile teams are a bunch of undisciplined programmers slinging code and not documenting anything. Not true. They know their schedules (time), keep records of team members working on their projects (effort), they count requirements and things called iterations and stories (a size metric), and they keep track of bugs (defects).
• We easily draw this out along with their velocity charts on a whiteboard sketch. This profile is all we need to capture the measures and load them into a computer database.
• Agile trends can be plotted on a chart where a picture says a thousand words (or in this case, metrics). Smaller releases, medium-sized releases, and large releases are charted from left to right. Vertically, we can chart the schedules, team size, and defects found and fixed.
• As a group, the projects were mostly faster than average. About 80% were below the industry average line. Note that some took longer, for several reasons (too long to explain here). Some companies developed software in two-thirds or even half the time.
• They used larger than average teams. Even though many of the smaller releases used small teams, some — apparently in response to deadline pressure — used large numbers of people. One company applied seven parallel Scrum teams totaling 95 people, where the norm was about 40 people.
• On waterfall projects, the “laws of software physics” showed a predictable outcome of large teams trying to create fast schedules — high defect rates (sometimes 4x-6x). On Agile projects, we saw a shockingly low number of defects — in some of the companies. The best performer had high-maturity XP teams. These project teams showed defects that were 30% to 50% lower than average. Other less-mature Agile teams had defect rates that were more like waterfall projects.
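The "below the industry average line" comparison in the schedule bullet can be sketched as a simple computation. All figures below are invented for illustration (the underlying QSM dataset is not public); the point is the mechanics of the comparison:

```python
# Sketch: fraction of projects whose schedule beats an industry-average
# line for their release size. All numbers are invented examples.
industry_avg_months = {"small": 4.0, "medium": 8.0, "large": 14.0}

# (release size, actual schedule in months)
projects = [
    ("small", 2.5), ("small", 3.0), ("medium", 5.5),
    ("medium", 9.0), ("large", 10.0),
]

faster = sum(
    1 for size, months in projects if months < industry_avg_months[size]
)
print(f"{faster / len(projects):.0%} below the industry average line")
```

With this toy data, 4 of 5 projects beat their size-class average; the real comparison works the same way, release size on one axis and schedule, staffing, or defects on the other.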

The initial results from these companies were fascinating. One thing that stood out was that there was in fact a learning curve. The sample had a range of Agile experience from one to four years. You can easily see that the highest performing teams achieved a level of performance that the others didn’t match. Agile was not a cure-all for all of the companies, but it will be interesting to see how the others fare as time progresses.

Another interesting factor was that all of the companies were being challenged, from the top down, by the outsourcing/India option. Some adopted Agile methods as a better, faster — and, yes, cheaper — alternative while saving their jobs in North America.

It will also be interesting to see more patterns emerge as more data comes in. Soon enough, we'll have sufficient statistics for a series of Agile industry trend lines against which we can make direct Agile-to-Agile project comparisons. And the Agilists will have something they surely have longed for all along: Agile metrics. And the village elders just might buy into the idea.


Michael Mah is a Senior Consultant with Cutter Consortium, and also Managing Partner of QSM Associates. He welcomes reader comments and insights at [email protected].

Road to the Perfect Project

Introduction

Ever since software development projects have been around, people have been coming up with ways to help ensure they come out well. Unfortunately, history shows us that there is no process, methodology, or toolset that can guarantee project success. But there are some practical techniques, many of them non-technical, that can dramatically improve a project’s chances.

In this and the next couple of postings of Business Analyst Times, I will lay out techniques, based on observations from my own experience, that I know can reliably produce excellent results. In combination, they can result in what is experienced as a "perfect project". I present from more of a project-management perspective than a pure BA one, because my experience has been that management activities and decisions tend to impact projects, for good or ill, more than technical ones. The ideas that follow are in approximate order of importance. They are necessarily quite brief; each one will be expanded in future postings.

1. Keep the project as small as possible. A project's size is inversely proportional to its chance of success. As people are added to a project, the number of potential interactions grows roughly with the square of the team size, and things fall through the cracks, often catastrophically.

2. Carefully select a leadership team comprised of a project manager and key business and systems analysts. They must be highly qualified technically and must work together well as a team.

3. Create a partnership relationship with the customer. To improve the project’s chances of going smoothly, these two organizations need to work in concert, where mutual trust, joint responsibility, transparency, and good will are operative.

4. Limit formal processes and documentation to a minimum. "Big Process," as exemplified by CMM/CMMI and the IEEE standards, can get in the way of a small project staffed with high-quality, motivated people.

5. Work intuitively. Operate at least partially in the mode exemplified by agile methods. By being highly flexible and willing to change as users are exposed to the emerging system, you greatly increase the probability of system acceptance.

Anyone who has been around a while knows that the conditions for achieving the perfect project are not commonly present. But, if they do exist where you now are, or you can influence things to cause them to exist, then enjoy your perfect project.

Keep Project Size Small

My experience has been that project size, in terms of the number of people working on that project, is inversely proportional to its chance of success. I have been around several projects that had 100+ staff. While none of them outright failed, none of them came out particularly well. In one instance, a literal handful of people (myself included) redid one of those large systems in a year of intense work, and turned a mediocre system with poor performance characteristics into a roaring success.

Why do large projects have problems? In a nutshell, the active principle is project communications. As people are added to a project, the number of potential communication channels grows roughly with the square of the team size. The result is that important communications don't always reach all necessary recipients, and things fall through the cracks. Mistakes are made, and they can prove disastrous. People feel isolated, morale tends to be poor, and turnover is high. Big projects try to mitigate this by requiring huge amounts of formal documentation and by holding many, many meetings. The sheer mass of material makes it nearly impossible for individuals to read everything they need to stay up to speed on everything that might affect them. It also becomes almost impossible for any one individual to comprehend the full documentation set as a whole, meaning that there may be no one person who understands the entire project.
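The arithmetic behind this is simple: with n people there are n(n-1)/2 potential pairwise communication channels, so channel count grows quadratically with team size. A quick sketch:

```python
# Potential pairwise communication channels on a team of n people:
# n * (n - 1) / 2, which grows quadratically with team size.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 25, 100):
    print(n, "people ->", channels(n), "channels")
```

A conference-table team of 10 has 45 channels, manageable face-to-face; a 100-person project has 4,950, which is why formal documentation and endless meetings become the (leaky) substitute.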

Any project that has more people than can fit around a large conference table is going to experience degraded performance due to communications issues. Conversely, with a small team, any communications issues can be addressed face-to-face and be observed by the entire project team. Much can be done on an informal basis that would otherwise require formal process and documentation. For instance, the two persons responsible for both ends of an interface can sit down at a whiteboard and work out the interface, and then, instead of creating a formal Interface Control Document, they just send an email summarizing the design. When it comes time to test the interface, both parties can be present and can troubleshoot and fix any problems on the spot. I know this works, because I have done it.
 
A small, closely knit team of sharp people can work miracles. I have been lucky enough to have worked a few myself. I claim that with such a team, working informally and without undue process and documentation overhead, I can build a given system faster, with better quality, and with better user acceptance than can be done by your average 100-person team, working under what is considered best practice processes.

Some projects are just too big functionally to do with a single small team. This is especially true if hardware and communications must be procured and integrated, facilities must be built, or national security data is involved. But it may be possible to cleanly partition the problem into a set of functions that can then be implemented by a set of closely cooperating small teams, with a top-level team comprised primarily of team leads. Each team then operates independently on its functional area, and the top-level team ensures that cross-team issues such as interfaces get handled efficiently and effectively.

© Copyright John L. Dean, 2008.

Look for Part 2 of Road to the Perfect Project in the next Business Analyst Times


John L. Dean
is an IT professional whose long and broad experience base gives him an unusual perspective on what does and does not work. He has 40 years' experience, mostly in government IT contracting, including systems engineering, systems analysis, requirements analysis, systems architecture and design, programming, quality assurance, testing, IV&V, and system acquisition, equally split between technical and management roles. He has worked on both the contractor and government sides of the fence, has been an employee of large companies (IBM, CSC, Booz-Allen, MITRE, ACS) as well as very small ones, and more recently has worked as an independent consultant. John has a Masters degree in Operations Research from Clemson University. He can be contacted at

[email protected].

A BA by Any Other Name?

Quick – what are the job titles of the people who attended the panel discussion Defining the Various Roles and Responsibilities of the BA Professional at the Project World / BA Summit in Palo Alto? If you answered Business Systems Analyst, Data Warehouse Analyst, IT Business Analyst, Systems Analyst, Process Analyst, Product Manager, Program Manager, Process Manager, Business Architect, Web Analyst, Requirements Analyst, Solution Architect, Business Business Analyst (really!), Application Architect, Operations Engineer, Operational Analyst, Information Architect and Business Analyst, you are correct!

And the one element common to those jobs, unanimously agreed by the attendees, is requirements management. Interesting. Not business analysis, but requirements management. For as the titles suggest (and as confirmed by several hours of job description investigation at monster.com), many of these jobs are defined within specific domains (business process, Web apps, data warehouse, etc.) and are connected to the domain of enterprise strategy by virtue of their contribution to the value chain.

Now some points to consider: 

  • Given the above, it seems safe to say that not everyone who does requirements management is a business analyst. 
  • The IIBA, the BABOK, IIBA chapters, and BA-related media and events are very interesting to anyone who does requirements management. 
  • Excepting (perhaps) the Enterprise Analysis section, the BABOK presents a useful framework for any job involving requirements management.

The IIBA’s plans for the BABOK 2.0 (see the subsection “What changes are planned for version 2.0?” here) represent significant benefit to BAs as well as requirements management practices in general. The two changes that I think are vital in terms of their direction-setting nature are: 

  • Requirements management tasks reframed as applicable toward iterative, agile, and maintenance activities 
  • Applicability to business process based solutions as well as software solutions

I interpret these changes as a separation of content (business analysis, process analysis, data warehouse analysis, etc.) from process (plan/manage requirements, elicit, analyze, document, validate).

If the BABOK is broadened to include a general treatment of requirements management, does it strengthen or weaken the IIBA's ability to professionalize the BA role? I say it strengthens it in a significant way. And I hope you come back in a couple of weeks to learn why.


Terry Longo

has more than 25 years of IT experience, including software development, system and network administration, and instructing, as well as responsibility for the requirements, project management, and delivery aspects of complex training solutions. He currently holds the ITIL Service Manager certification and is responsible for HP Education's ITIL/ITSM, Project Management, and Business Analysis curricula in the US (see http://www.hp.com/education/sections/pm_bus_analy.html). Terry can be reached via email at [email protected]