

Road to the Perfect Project – Part 2

Project Leadership is Critical

The most important person on a project is the project manager. A good project manager with a supportive management style can make a project, and a bad one with an aggressive management style can single-handedly cause it to fail. I have seen both happen. Ideally, the project manager will have personal technical experience that matches the kind of work being performed on the project. For instance, if the project involves requirements elicitation, he or she should have personally done that kind of work. In a cradle-to-grave development effort, the more technical breadth that individual has, the better. The project manager absolutely must have excellent writing and presentation skills, and must be able to interface with both the customer and his or her own line management. The project manager also has to have management experience, some of it at the level required for the project.

After that, it is critical to select a high-quality set of senior analysts for the project, including both business analysts and systems analysts. These people must have the technical skills and experience to match the project's needs. For example, a project might need senior analysts with skills in requirements management, database management, system architecture and design, and system testing. At a minimum, they must have successfully done the type of work to be performed, and if specific tool sets are to be used, ideally they will have worked with those tools. As with the project manager, communication skills are a must, and some supervisory experience is needed. Ideally there will be some overlap in experience: a person with both requirements management and system testing skills, for instance, is more valuable than someone with only requirements experience, because they can cover for a missing system tester.

I will call this set of people the Leadership Team. Its members must be individually flexible and able to work together as a team; that is, they must be team players. Positive interpersonal dynamics among this set of key people must exist, and achieving consensus should matter more to them than individual heroics. If the team clicks and works together synergistically, the whole does indeed become more than the sum of its parts. In my experience, it is impossible to staff a project entirely with superstars: they are simply not available when needed, and the cost is prohibitive. But given a synergistic Leadership Team, it is quite workable to use journeyman-level, and even some junior-level, staff for all remaining project functions, provided that the Leadership Team monitors and mentors them.

The other side of the staffing coin is to ruthlessly cull out poor performers and malcontents at any project level or role. These people can directly cause technical problems, and they will indirectly damage project morale. If they are allowed to linger on the project, they can and will make costly mistakes, and they can and will drag down the performance of other team members. It can be really painful to dump a malcontent, especially one who does not want to go, and HR departments do not like wrongful termination suits no matter what the merits. Regardless, suck it up and do what it takes to get rid of them.

Partner with Your Customer

To bring a system into existence, there is generally a project organization that is doing the work and a customer organization that needs the system and is financing the project. In my personal experience the customer has been a government agency, which provides the experience base for this discussion. Unfortunately, as many have experienced for themselves, there is a tendency for the customer to try to dominate the customer-project relationship. All too often the customer tries to rule by intimidation and, all too often, forces non-optimal technical decisions. This creates a mutually adversarial relationship, which in turn causes poor project morale, lackadaisical effort, and high project turnover. Results are typically poor. In short, you can have a great project team and still fail because the customer did not hold up their end.

To improve the project’s chances of going smoothly, those two organizations need to work in concert, where mutual trust, joint responsibility, transparency, and good will are operative. When real partnering does occur, and both sides hold up their end of the bargain, results tend to be excellent. Technical people enjoy their project; they work harder, and they stay around. The customer tends to get a more user-friendly system, which they are happy with. It is a win-win situation.

Most of you will have only worked the project side of this relationship, and if you are lucky you may know what partnering looks like from the project side.  I have personally worked both sides of the fence. Here is what partnering looks like from the customer perspective.  It starts with the RFP. That document is well written, and expresses exactly what the customer expects the project to do and deliver. If a set of system requirements is provided with the RFP, it is professionally produced and clearly and fully expresses what the system is to do without forcing major design decisions. This will allow the project to correctly estimate project costs and make proper staffing decisions. On startup, the customer provides a briefing to the project staff, at which time any applicable existing documentation is turned over to the project. The project is invited to do its own briefing on how they plan to do the work. If on-site, adequate space, workstations, and connectivity are provided on day one. Even if off-site, some dedicated space, including a small conference room is provided. Technical questions and issues posed by the project are promptly and fully addressed. Access is provided to potential users, either for requirements elicitation (the ideal), or requirements validation and expansion.  Customer people attend all project briefings and project peer review sessions to which they are invited. Deliverables are promptly and thoroughly reviewed, and well-written formal comments are provided to the project. If needed, meetings are held to resolve conflicts. If the project is big enough, joint configuration control and risk review boards are formed.

On the project side, transparency is critical. Nothing is hidden from the customer. Any problems encountered are promptly conveyed to the customer for joint consideration. Schedules are rigorously met; documents produced are of high quality, including lack of grammatical or spelling errors; emerging user interfaces are offered for user review, and appropriate adjustments made; staff of the appropriate quality are placed on the project and vacancies are promptly filled.

Look for Part 3 of this series in the next Business Analyst Times online in mid-February.


John L. Dean is a seasoned IT professional with 40 years of varied experience, including systems engineering, systems analysis, requirements analysis, systems architecture and design, programming, quality assurance, testing, IV&V, and system acquisition, equally split between technical and management. He has worked both the contractor and government sides of the fence, has been an employee of large companies (IBM, CSC, Booz-Allen, MITRE, ACS) as well as very small ones, and is currently an independent consultant. He has a Masters degree in Operations Research from Clemson University. John is the "Project Whisperer". His depth and breadth of experience gives him an unusual perspective on what does and does not work. If your project is running you, he can use his experience and intuition to diagnose why and apply calm, assertive energy to start corrective action, so you can regain control! He can be contacted at [email protected].

 © John L. Dean, 2008

Writing Effective Project Requirements

Requirements are (or should be) the foundation for every project. Put most simply, a requirement is a need. A business problem or need gives rise to the requirements, and everything else in the project builds on those business requirements.

Importance of Requirements
Requirements are considered by many experts to be the major non-management, non-business reason projects do not achieve the "magic triangle" of on-time, on-budget, and high quality. Very few projects do an effective job of identifying all the requirements correctly and carrying them through the project.

Various studies have shown that requirements are the biggest problem in projects: projects fail because of requirements problems, and the most defects occur during the requirements phase. Project teams need to do a much better job on requirements if they wish to develop quality software on time and on budget.

Furthermore, requirements errors compound as you move through a project. The earlier requirements problems are found, the less expensive they are to fix. Therefore, the best time to fix them is right when you are involved with gathering, understanding, and documenting them with your stakeholders (who should be the source of your project requirements).

The hardest requirements problems to fix are those that are omitted. This really becomes the requirements analyst’s dilemma. The analyst often does not know what questions to ask, and the stakeholder does not know what information the analyst needs. Since the analyst doesn’t ask, the stakeholder doesn’t state requirements.

The requirements phase is also considered by many experts to be the most difficult part of any project due to the following:

  • The requirements phase is where business meets information technology (IT).
  • Business people and IT people tend to speak different “languages.”
  • Business: “It has been determined that if we convolute the thingamajig or maybe retroactive the thatamathing, our profitability may, if we are extremely lucky, rise beyond the stratospheric atomic fundermuldering.”

In other words, English is an ambiguous language, and we tend to speak and write in a manner that makes it even more ambiguous, convoluted, and unclear.

Building Valid Requirements
The requirements analyst truly is responsible for the success or failure of the requirements on a project. With that at stake, building valid requirements up front is crucial. The four steps to this goal are elicitation, analysis, specification, and validation.

Elicitation
The term elicitation is the industry-accepted term for getting the requirements from the stakeholders. Elicitation, however, implies much more than just capturing or gathering the requirements from users.

The truth is, one of the surest ways to fail in requirements is to say to your users, “Give me your requirements,” then stand back and “catch” them.

Why doesn’t this work? The stakeholders are experts in their domains. While the analyst probably has more expertise in the IT domain, the two speak different languages. The stakeholders truly do not understand exactly what IT needs to be able to develop an effective system for them.

So the only way for a project to obtain comprehensive, correct, and complete requirements from all stakeholders is to truly elicit them. Elicit means to probe and understand the requirements, not just capture or gather them.

The reality is that the quality of the requirements depends on the quality of the analysts' elicitation of them.

Analysis
Analysis involves refining (or analyzing) the stakeholder requirements into system requirements and then software requirements. Analysis is a critical step that is too often omitted or bypassed in projects.

Analysis is the critical transition from stakeholder or business terminology, to system or IT terminology. For example, stakeholders talk about “Monthly Marketing Report,” while systems talk about file “MoMktRpt.doc.”

Analysis involves brainwork, but it is not a magic process (nor is any other part of software engineering, for that matter). Analysis is usually done in conjunction with various modeling techniques. Modeling, the creation of diagrams using standard notations, allows analysts to more fully understand processes and data in a rigorous manner. This understanding allows them to convert the often non-rigorous stakeholder requirements into more concise, rigorous system and software requirements.

Common modeling techniques include the following:

  • Traditional systems:
    - Dataflow diagrams to understand processes and activities
    - Entity-relationship diagrams to understand data
  • Object-oriented systems:
    - UML (Unified Modeling Language) diagrams, especially class diagrams for analysis, but also possibly collaboration diagrams
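
To make the analysis step a little more concrete, here is a minimal sketch in Python (purely illustrative; the entity names, attributes, and derived total are assumptions invented for this example and are not part of the course material) of how the stakeholders' "Monthly Marketing Report" mentioned earlier might be captured as a simple class model before any design decisions are made. The point is not the code itself but that a vague business phrase has been decomposed into named entities, attributes, and relationships that can later be traced into system requirements.

    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass
    class Campaign:
        # An entity identified during analysis; attributes come from stakeholder conversations
        campaign_id: str
        name: str
        spend: float
        leads_generated: int

    @dataclass
    class MonthlyMarketingReport:
        # Analysis-level model of the stakeholders' "Monthly Marketing Report"
        reporting_month: date
        campaigns: List[Campaign]

        def total_spend(self) -> float:
            # A derived figure the stakeholders expect to see on the report
            return sum(c.spend for c in self.campaigns)

    report = MonthlyMarketingReport(date(2008, 1, 1), [Campaign("C-1", "Winter promo", 12000.0, 340)])
    print(report.total_spend())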

Specification
The specification sub-phase involves documenting the requirements in a well-formatted, well-organized document. Numerous resources are available to help with writing and formatting good requirements and good documents. For general writing assistance, books on technical (not general) writing should be used. A major resource is the set of IEEE Software Engineering Standards.

Validation
Once a requirements specification is completed in draft form, it must be reviewed both by peers of the author and, in most cases, by the project stakeholders. If detailed stakeholder requirements were written and signed off by the stakeholders, the stakeholders may not need to participate in reviews of the more technical system and software requirements. This presumes good traceability matrices are in place.
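
To illustrate why those traceability matrices matter, here is a minimal sketch in Python (illustrative only; the requirement IDs are invented) of the kind of gap check a validation review relies on: every stakeholder requirement should trace to at least one system requirement.

    # Hypothetical traceability matrix: stakeholder requirement ID -> system requirement IDs
    traceability = {
        "STK-001": ["SYS-010", "SYS-011"],
        "STK-002": ["SYS-012"],
        "STK-003": [],  # nothing traces to this one yet; a gap the review should flag
    }

    untraced = [req for req, targets in traceability.items() if not targets]
    if untraced:
        print("Stakeholder requirements with no system requirement:", untraced)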

The specifications are reviewed by a group of people to ensure technical correctness, completeness, and various other things. Often checklists are used to ensure all requirements in all possible categories have been elicited and documented correctly.

Validation is actually a quality assurance topic. All documents produced throughout a project should undergo validation reviews.


This information was drawn from Global Knowledge's Requirements Development and Management course developed by course director and author Jim Swanson, PMP. Prior to teaching for Global Knowledge, Jim worked for 25 years at Hewlett-Packard as a business systems developer, in technical product support, and as a senior project manager. Before HP, Jim worked as a geologist for the US Geological Survey.

 

Copyright © Global Knowledge Training LLC. All rights reserved.

If Agile Were to Go Mainstream

If agile methods are to go mainstream, it might be when their popularity and legitimacy reach a tipping point. One sign that this could be happening is a recent New York Times article, "Google Gets Ready to Rumble With Microsoft" (16 December 2007), which my colleague Ken Orr wrote about in the Cutter Trends Advisor newsletter under the title "Velocity Matters: Google, Microsoft, and Hyper-Agility, Part 1" (20 December 2007).

The articles are about Google going after Microsoft’s customer base, using something called its “Cloud” computing framework. But Ken Orr’s interpretation of the Google-Microsoft confrontation emphasizes the time-to-market advantages that Google’s software development lifecycle has over Microsoft’s. Google is apparently practicing a more agile, iterative-style approach (sometimes quarterly) to releasing software, while Microsoft is more tied to the big bang, multi-year cycle for its products.

Might the public start perceiving companies like Google as “agile and adaptive,” while tagging Microsoft as “heavy and slow?” Agile methods may have found their version of Malcolm Gladwell’s “sticky message.” Most agree that it began in earnest with the infamous Agile Manifesto — elegant in its simplicity. It emphasized the value of “individuals and interactions” in creating “working software via customer collaboration while being responsive to change.”

Simple and Elegant.

But some people felt the word “manifesto” carried an interesting connotation because of its perceived arrogance. One manifesto, by a crazed lunatic called the Unabomber, made headlines years ago by decrying the evils of an industrialized society and railing against the establishment. The agilists (who were NOT crazed lunatics) were also railing against an establishment; in this case, the Carnegie Mellon Software Engineering Institute (SEI). The agilists’ message was that they were “lean and fast,” while anyone who followed the SEI was “heavy and slow.” Put simply, it was the younger generation calling their village elders (or at least their software processes) FAT.

The defiance had gotten personal. They were mad about project overrun statistics and sick and tired of being blamed for them. All those Ed Yourdon Death March projects had taken their toll. They were not lunatics, but they were irreverent for lots of reasons, and it was understandable.

Manifestos and name-calling seemed to help the Agile message to stick. Moreover, if Agile rides a Google wave, it will make a lot of software development organizations consider following Google’s lead.

Meanwhile, there’s an interesting quote by a long-ago congressman named Willard Duncan Vandiver. In an 1899 speech, he said, “I come from a country that raises corn and cotton, cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I’m from Missouri, and you have got to show me.” Some people say that the speech is the reason why Missouri is famously nicknamed, “The Show Me State.”

Westerners at the time used the phrase to suggest that Missourians were slow and not very bright. (There’s that name-calling thing again.) Missourians, however, turned the definition around and claimed that “show me” meant that they were shrewd and not easily fooled. (It turns out that the phrase was current before Vandiver, so the thinking is that his speech may have merely popularized it.)

Now here’s where it gets interesting. Manifestos and name-calling might have some frothy eloquence to them, but they neither convince nor satisfy one important constituency that many agilists need badly so they can practice their agile craftmaking. This constituency happens to be the people who sign the checks: senior management. Senior management has to buy into the idea and take risks with a “manifesto” concept that can impact the company that employs all of them. They’ve been around long enough to see fads come and go and can be cynical at times. Management also doesn’t like the processes that they’ve invested in to be called fat.

The agilists come to the elders and ask for some money. They want a new thing called Agile Methods. The elders respond with, "You have got to show me some productivity metrics." No metrics, no money. The agilists cringe, because they see metrics guys as process-heavy people spouting function points.

But you don't have to be process-heavy or say "function points" all the time to be someone who knows a little bit about IT metrics. I am neither of these, and I have been collecting essential core metrics on hundreds of projects over the years. Many of my clients over the last 12 months have been companies running Agile projects, mostly XP and Scrum, and they want to know how they compare against waterfall projects.

We have plenty of data in a worldwide database of over 7,400 projects — agile, waterfall, package implementation, new development, legacy, military, commercial, engineering — you name it. The numbers are so interesting that I can't fit them all into this article. Suffice it to say I've been on the lecture circuit recently on this subject and conducting webinars for people who want me to show them.

So what have I found? Here are some of the highlights:
• Agile teams have metrics. The perception might be that Agile teams are a bunch of undisciplined programmers slinging code and not documenting anything. Not true. They know their schedules (time), they keep records of the team members working on their projects (effort), they count requirements, iterations, and stories (a size metric), and they keep track of bugs (defects).
• We can easily draw this out, along with their velocity charts, in a whiteboard sketch. This profile is all we need to capture the measures and load them into a computer database (a minimal illustration of such a record appears after this list).
• Agile trends can be plotted on a chart where a picture says a thousand words (or in this case, metrics). Smaller releases, medium-sized releases, and large releases are charted from left to right. Vertically, we can chart the schedules, team size, and defects found and fixed.
• As a group, the projects were mostly faster than average. About 80% were below the industry average line. Note that some took longer, for several reasons (too long to explain here). Some companies developed software in two-thirds or even half the time.
• They used larger than average teams. Even though many of the smaller releases used small teams, some — apparently in response to deadline pressure — used large numbers of people. One company applied seven parallel Scrum teams totaling 95 people, where the norm was about 40 people.
• On waterfall projects, the “laws of software physics” showed a predictable outcome of large teams trying to create fast schedules — high defect rates (sometimes 4x-6x). On Agile projects, we saw a shockingly low number of defects — in some of the companies. The best performer had high-maturity XP teams. These project teams showed defects that were 30% to 50% lower than average. Other less-mature Agile teams had defect rates that were more like waterfall projects.
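
To show how little is needed to capture that core profile, here is a minimal sketch in Python (illustrative only; the field names and numbers are invented and are not QSM's actual schema) of a per-release metrics record and one derived figure.

    from dataclasses import dataclass

    @dataclass
    class ReleaseMetrics:
        schedule_months: float  # time
        team_size: int          # effort
        stories: int            # size
        defects: int            # defects found and fixed

        def defects_per_story(self) -> float:
            # A simple derived metric for comparing releases of different sizes
            return self.defects / self.stories if self.stories else 0.0

    release = ReleaseMetrics(schedule_months=3.0, team_size=8, stories=120, defects=18)
    print(f"Defects per story: {release.defects_per_story():.2f}")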

The initial results from these companies were fascinating. One thing that stood out was that there was in fact a learning curve. The sample had a range of Agile experience from one to four years. You can easily see that the highest performing teams achieved a level of performance that the others didn’t match. Agile was not a cure-all for all of the companies, but it will be interesting to see how the others fare as time progresses.

Another interesting factor was that all of the companies were being challenged from the top down by the outsourcing/India option. Some adopted Agile methods as a better, faster — and, yes, cheaper — alternative while saving their jobs in North America.

It will also be interesting to see more patterns emerge as more data comes in. Soon enough, we'll have sufficient statistics for a series of Agile industry trend lines against which we can make direct Agile-to-Agile project comparisons. And the Agilists will have something they surely have longed for all along: Agile metrics. And the village elders just might buy into the idea.


Michael Mah is a Senior Consultant with Cutter Consortium, and also Managing Partner of QSM Associates. He welcomes reader comments and insights at [email protected].

Road to the Perfect Project

Introduction

Ever since software development projects have been around, people have been coming up with ways to help ensure they come out well. Unfortunately, history shows us that there is no process, methodology, or toolset that can guarantee project success. But there are some practical techniques, many of them non-technical, that can dramatically improve a project’s chances.

In this and the next couple of postings of Business Analyst Times, I will lay out techniques, based on observations from my own personal experience, that I know can reliably produce excellent results. In combination, they can result in what is experienced as a "perfect project". I present them from more of a project management perspective than a pure BA one, because my experience has been that management activities and decisions tend to impact projects, for good or ill, more than technical ones. The ideas that follow are in approximate order of importance. They are necessarily quite brief here, and each one will be expanded in future postings.

1. Keep the project as small as possible. A project's size is inversely proportional to its chance of success. As people are added to a project, the number of potential interactions grows far faster than the headcount, and things fall through the cracks, often catastrophically.

2. Carefully select a leadership team comprised of a project manager and key business and systems analysts. They must be highly qualified technically and must work together well as a team.

3. Create a partnership relationship with the customer. To improve the project’s chances of going smoothly, these two organizations need to work in concert, where mutual trust, joint responsibility, transparency, and good will are operative.

4. Limit formal processes and documentation to a minimum. "Big Process", as exemplified by CMM/CMMI and the IEEE standards, can get in the way of a small project with high-quality, motivated people.

5. Work intuitively. Operate at least partially in the mode exemplified by the Agile methodologies. By being highly flexible and willing to change as users are exposed to the emerging system, you greatly increase the probability of system acceptance.

Anyone who has been around a while knows that the conditions for achieving the perfect project are not commonly present. But, if they do exist where you now are, or you can influence things to cause them to exist, then enjoy your perfect project.

Keep Project Size Small

My experience has been that project size, in terms of the number of people working on that project, is inversely proportional to its chance of success. I have been around several projects that had 100+ staff. While none of them outright failed, none of them came out particularly well. In one instance, a literal handful of people (myself included) redid one of those large systems in a year of intense work, and turned a mediocre system with poor performance characteristics into a roaring success.

Why do large projects have problems? In a nutshell, the active principle is project communications. As people are added to a project, the number of potential interactions grows roughly with the square of the team size (n people have n(n-1)/2 potential pairwise communication channels). The result is that important communications don't always get to all necessary recipients, and things fall through the cracks. Mistakes are made, and they can prove to be disastrous. People feel isolated, morale tends to be poor, and turnover is high. Big projects try to mitigate this by requiring huge amounts of formal documentation and by holding many, many meetings. The sheer mass of material makes it nearly impossible for individuals to read everything they need to read to stay up to speed on everything that might affect them. It also becomes almost impossible for any one individual to comprehend the full documentation set as a whole, meaning that there may be no one person who understands the entire project.
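
For a concrete feel for that growth, the short sketch below (Python, purely illustrative) prints the n(n-1)/2 channel count for a few team sizes.

    # Potential pairwise communication channels for n people: n * (n - 1) / 2
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (5, 10, 25, 100):
        print(f"{team_size:>3} people -> {channels(team_size):>4} potential channels")

Five people share 10 potential channels; 100 people share 4,950.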

Any project that has more people than can fit around a large conference table is going to experience degraded performance due to communications issues. Conversely, with a small team, any communications issues can be addressed face-to-face and be observed by the entire project team. Much can be done on an informal basis that would otherwise require formal process and documentation. For instance, the two persons responsible for both ends of an interface can sit down at a whiteboard and work out the interface, and then, instead of creating a formal Interface Control Document, they just send an email summarizing the design. When it comes time to test the interface, both parties can be present and can troubleshoot and fix any problems on the spot. I know this works, because I have done it.
 
A small, closely knit team of sharp people can work miracles. I have been lucky enough to have worked a few myself. I claim that with such a team, working informally and without undue process and documentation overhead, I can build a given system faster, with better quality, and with better user acceptance than can be done by your average 100-person team, working under what is considered best practice processes.

Some projects are just too big functionally to do with a single small team. This is especially true if hardware and communications must be procured and integrated, facilities must be built, or national security data is involved. But it may be possible to cleanly partition the problem into a set of functions that can then be implemented by a set of closely cooperating small teams, with a top-level team made up primarily of the team leads. Each team then operates independently on its functional area, and the top-level team ensures that cross-team issues such as interfaces get handled efficiently and effectively.

© Copyright John L. Dean, 2008.

Look for Part 2 of Road to the Perfect Project in the next Business Analyst Times


John L. Dean is an IT professional whose long and broad experience base gives him an unusual perspective on what does and does not work. He has 40 years experience, mostly in government IT contracting, including systems engineering, systems analysis, requirements analysis, systems architecture and design, programming, quality assurance, testing, IV&V, and system acquisition, equally split between technical and management. He has worked on both the contractor and government sides of the fence, has been an employee of large (IBM, CSC, Booz-Allen, MITRE, ACS) as well as very small companies, and more recently has worked as an independent consultant. John has a Masters degree in Operations Research from Clemson University. He can be contacted at [email protected].

Defining Value to Prioritize Features

Building The Right Thing

One problem with software development methods these days is that there are a lot of different ideas on the right way to build things, but not much guidance on how to make sure we are building the right things. There is plenty of advice on the best techniques to develop quality software, but all of this guidance is based on the assumption that the team already knows what they are supposed to be building. When it comes to how to find that out, just about every methodology has the customer/stakeholder/product owner prioritize the features/use cases/user stories.

Great, but do our stakeholders always have a surefire way to determine the most important aspects of a product or system? Do they always have an objective understanding of whether the product or service should be created or updated in the first place? All too often, stakeholders don't have a clear idea of how to determine if they are asking for the right thing, and the project team doesn't necessarily help them.

So what does it mean to build the “right thing”? Well, one way to look at it is to understand:

  • What projects should the organization undertake? 
  • What features should be provided for those products or services? 
  • What order should features be delivered?

So how do project teams “do the right thing”? It comes down to the project team, including the stakeholders, working together to understand the organization’s definition of value, applying that definition to determine the value delivered by their projects, determining how the features contribute to the project’s value delivery, and prioritizing the features accordingly.

We'll discuss each of those steps in turn. One thing to note is that whenever I say project team, I include stakeholders in that grouping, and I use stakeholders interchangeably with customers, product owners, users, goal donors, and gold donors.

The Organization’s Definition of Value

Value is one of those words that, while easy to define, often means different things to different people, especially when used in the context of software development projects. The most appropriate dictionary definition is "relative worth, utility, or importance" (http://www.merriam-webster.com/dictionary/values). When you consider this definition, it is easy to see why the definition of value will vary between organizations. It just so happens that an organization's strategy, as expressed by its strategic decision filters, provides a definition of value. Strategic decision filters provide simple rules to determine which activities create and support sustainable competitive advantage, and which activities detract from the organization's ability to maintain that advantage. Anything that creates and supports the organization's sustainable competitive advantage is inherently valuable to the organization.
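
To make the idea of strategic decision filters a bit more concrete, here is a minimal sketch in Python (illustrative only; the filter questions are invented examples, not ones the author prescribes) of filters expressed as simple yes/no checks that a proposed project either passes or fails.

    # Hypothetical strategic decision filters, answered for one proposed project
    filter_answers = {
        "Serves our core customer segment": True,
        "Strengthens a capability competitors cannot easily copy": True,
        "Can be delivered with platforms we already own": False,
    }

    def passes_filters(answers: dict) -> bool:
        # Simple rule: every filter must be satisfied for the work to count as valuable
        return all(answers.values())

    print("Proposed project passes the filters:", passes_filters(filter_answers))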

The Value Delivered by the Project

Once the project team understands what is valuable to an organization, they can establish a value model for the project, which provides the project team with a tool to repeatedly assess the value delivered by a project.

The value model has four key inputs:

  • Purpose. A statement about what problem the project is trying to solve, or for those optimists out there, the job that the project is trying to get done.
  • Considerations. All those aspects of the project that could impact the value produced by the project, but are either too difficult to quantify, or haven’t happened yet, such as risks, issues, assumptions, and constraints.
  • Cost. The financial resources required to undertake the project, including such things as the people working on the project, hardware, and software.
  • Benefits. The positive impact the project has on the organization. Benefits can be financial in nature, such as increased revenue or reduced costs, or non-financial, such as support for other initiatives or improved customer satisfaction.

It is a good practice for the project team to collaborate on the purpose of the project first and run that purpose through the organization's strategic decision filters. If the project fits within those filters, the project team can then continue to understand the remaining inputs. If, however, the purpose of the project does not pass the strategic decision filters, then work on the project should stop, as it will most likely not deliver any value to the organization. Of course, if the effort to evaluate the project purpose against the strategic decision filters results in the question, "what strategic decision filters?", then the organization has some more fundamental work to do.

If the project is a strategic fit, the project team then collaboratively identifies the considerations, costs, and benefits of the project and crafts a model to determine the value delivered by a project. This model should be built in such a way that it can be quickly updated and analyzed throughout the life of the project as considerations change, costs are better understood, and the benefits produced become clearer. The advantage of this approach over the typical practice of developing a business case to justify the project and then never reviewing it again, is that the project team is able to determine if the project is on track to deliver the expected value, or if conditions have changed such that the project no longer makes sense to continue.
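
Here is a minimal sketch in Python of what such a value model might look like (illustrative only; the four fields mirror the inputs listed above, but the example figures and the simple benefits-minus-cost calculation are assumptions for illustration rather than a prescribed formula). Because it is so lightweight, it can be re-run whenever considerations, costs, or benefits change.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ValueModel:
        purpose: str
        considerations: List[str] = field(default_factory=list)  # risks, issues, assumptions, constraints
        cost: float = 0.0      # estimated cost of undertaking the project
        benefits: float = 0.0  # quantified benefits, financial or scored

        def net_value(self) -> float:
            # Deliberately simple; the point is to revisit the inputs as the project evolves
            return self.benefits - self.cost

    model = ValueModel(
        purpose="Reduce order-processing time for repeat customers",
        considerations=["Integration risk with legacy billing", "Assumes current staffing holds"],
        cost=250000,
        benefits=400000,
    )
    print(f"Estimated net value: {model.net_value():,.0f}")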

Feature Contribution to Value

Once the project team has a grasp on how the project adds value to the organization, they can start analyzing how the various features under consideration contribute to the overall project value delivery. This can be difficult because the size of feature that delivers discernible project value is often different from the level at which the project team is comfortable fully defining features. A solution to that problem is to group finely grained features into the smallest grouping that delivers value to the stakeholder. Mark Denne and Dr. Jane Cleland-Huang introduced this concept, calling such groupings Minimal Marketable Features (MMFs), in their book Software By Numbers (http://www.softwarebynumbers.org/default.htm).

Project teams can organize their features into MMFs and then, through an understanding of the relative value of those MMFs to the overall project, prioritize the features in terms of the relative value they contribute to the project. This approach is especially effective when the project team is following an iterative approach to development, in which subsets of the overall feature set are delivered on a regular basis. Ideally, project teams will find themselves in a situation where they are able to identify a financial value delivered by a feature, but this is the exception rather than the rule. More often than not, project teams have to express the value in relative terms compared to other features included in the project. Regardless of the measure used, whether it be hard dollars or a unitless relative weighting, the key is for project teams to have some understanding, objective or subjective, of the value each feature contributes to the overall project and to use that value as the basis for prioritization decisions.
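
As a small illustration of that kind of relative prioritization, the sketch below (Python, illustrative only; the MMFs and scores are invented) ranks minimal marketable features by relative value per unit of relative effort, one straightforward way to turn subjective weightings into a delivery order.

    # Hypothetical MMFs scored on simple 1-10 relative scales
    mmfs = [
        {"name": "Self-service password reset", "value": 8, "effort": 3},
        {"name": "Monthly usage dashboard", "value": 6, "effort": 5},
        {"name": "Bulk data export", "value": 4, "effort": 4},
    ]

    # Rank by relative value delivered per unit of relative effort, highest first
    for mmf in sorted(mmfs, key=lambda m: m["value"] / m["effort"], reverse=True):
        print(f"{mmf['name']}: value/effort = {mmf['value'] / mmf['effort']:.2f}")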

Really Analyzing the Business

What I have just described is business analysis at its finest. It goes beyond simply “gathering” requirements to truly developing an understanding of the business, the problem that needs to be solved, and the benefits of the solution to the overall organization. This approach requires a great deal of collaboration between all involved in a project, especially business analysts, who should have the appropriate skill sets to facilitate the creation and revision of the value model, and the application of that definition of value to the features included in the project. A business analyst who has these skill sets is truly setting themselves up as an invaluable person to their organization.


Kent McDonald is a Business Systems Coach with Knowledge Bridge Partners and a Partner in Accelinnova, http://www.knowledgebridgepartners.com. He has more than a decade of experience guiding successful projects and designing business solutions in a variety of industries including financial services, health insurance, performance marketing, human services, nonprofit, and automotive. His background includes delivering data-intensive and web-enabled application development projects that provide outstanding business value. He has coached client staff to help teams reach project goals more productively and effectively. Kent is a speaker, writer, and coach on project leadership, business analysis, and delivering business value through projects. He can be reached at [email protected]