
Author: Curtis Reed

Agile Mistakes Part 2: A Failed Conversion to Agile

Having provided a high-level comparison of waterfall and agile methodologies (or frameworks) in my previous article, I will now begin to analyze core areas where misconceptions arise and create problems in an environment that has been declared agile.

A failed conversion from waterfall to agile

The term “requirements” means many things to many people, and I’ve often found that even in a waterfall environment there can be confusion. I once worked at a company with a robust business development team whose job was to analyze trends, get ahead of the industry direction, and discover new business opportunities. This team flew around the country to meet with current customers and prospects alike, talking to them about their needs, frustrations, and visions, and then capturing those in a series of business documents such as business cases, RFPs, and SOWs. Once the project was approved, the PMO converted these documents into project documentation.

It was common to hear the business team discuss the “requirements” for the new product. And, in effect, they were requirements. But they were such high-level descriptions of a business need that there was a massive gulf between the theoretical need described and the nitty-gritty details that had to be elicited in order to translate the idea into actual code.

At a certain point, business analysts were brought in to dig deeper. They translated the 20,000-foot view of the business requirements into progressively detailed statements that described how the system should behave. A statement such as “We need our customers to be able to order satellite television service online” was broken into thousands of detailed requirements phrased as: “The application shall allow the online customer to register an account.” Subordinate details then flowed from this, describing every aspect of page layout, button sizes and labels, font types, colors, sizes, field validation rules, etc. This type of description was generally referred to as “software requirements,” sometimes “system requirements.” But one relatively constant limitation was placed on these requirements: they should describe what was being built and what the stimulus/response behavior would look like, not how the system should be built.

That level of detail was left up to the architects and developers, captured in design artifacts that flowed from the technical design process after developers had fully analyzed and approved the “software requirements.” Confusingly, these design artifacts might also be called “system requirements.” We then input the business requirements, the software requirements, and the system (design) requirements into a requirements-management application and carefully performed traceability across thousands of requirements.
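
To make the layering concrete, here is a minimal sketch in Python of what such a traceability chain boils down to: each business requirement traces down to software requirements, and each of those traces down to design details. The requirement IDs and descriptions are hypothetical, and real tooling manages thousands of these links with gap and orphan reporting; this shows only the parent-child structure.

```python
# A minimal model of a three-tier requirements traceability chain.
# All IDs, levels, and descriptions are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    level: str                 # "business", "software", or "design"
    text: str
    children: list["Requirement"] = field(default_factory=list)

    def trace(self, indent: int = 0) -> None:
        """Print this requirement and everything that traces down from it."""
        print("  " * indent + f"[{self.req_id}] ({self.level}) {self.text}")
        for child in self.children:
            child.trace(indent + 1)

biz = Requirement("BR-001", "business",
                  "Customers can order satellite TV service online.")
sw = Requirement("SR-014", "software",
                 "The application shall allow the online customer "
                 "to register an account.")
design = Requirement("DR-102", "design",
                     "Registration form layout, field order, and "
                     "email validation rules.")
biz.children.append(sw)
sw.children.append(design)

biz.trace()  # walks the chain: BR-001 -> SR-014 -> DR-102
```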

This process often took many months to complete before actual development ever began. And during the entire life cycle, the term “requirements” was bandied about loosely, as if it meant the same thing to everyone, which it did not.

As one might imagine, this laborious, tedious, time-consuming process often resulted in many months of expensive work being performed, only to have the customer decide to cancel the project before the first lines of code were developed or, even worse, just as code was about to be released for the first time.

And of course, the company suffered horrible embarrassments when customers first saw the product of many months of work only to find it did not in any way resemble what they thought they were getting. They held emergency meetings with the client to reassure them and figure out what had to be changed. In some cases, service-level agreements had been signed that included hefty penalties for failing to meet expectations and delivery dates, sometimes fining the company tens of thousands of dollars per day. In short, fingers pointed, heads rolled and morale was utterly destroyed.

Executive managers, always looking for a “new way” to improve efficiency, heard about this new “whiz-bang” development process called agile and thought it sounded nifty. With little more than a cursory examination of the process, they concluded that the analysis phase of software development could be abbreviated, cutting costs and shortening time to market.

They had heard that code should be delivered more frequently, so they decided to perform “agile iterations” that lasted only three to six months before delivering some code. Requirements would no longer need to be so detailed; instead, many of the previously stated requirements that fit snugly into the realm of “application standards” would be assumed, not captured. Even aspects such as page layout and the design of buttons and other features would be left to the teams to determine. The “brilliant” architects and developers would have more freedom to shape the product.

They made an announcement about the changes and essentially told the teams to go figure out how it would work, implement the changes, and watch the miracles happen.

But miracles didn’t happen.

Customers were still not happy. Delivery dates slipped. The product still did not meet expectations, and new problems arose: common features in one part of the product no longer looked or functioned like the same features in another part of the product. Customer requirements were either not documented at all, improperly captured, or—to great embarrassment—documented but then lost entirely!

What went wrong?

  1. Senior executives had decided to “go agile” without selecting a framework, nor had they contracted coaches to implement one. They failed to recognize the complexity of the proposed cultural change and how to manage it in a way that would result in success.

  2. Executives read briefly about agile methodologies without gaining an in-depth understanding of how to implement one successfully. They did not understand what constituted a good “user story” and why, nor how to prioritize stories properly (see the sketch after this list for what a well-formed, prioritized story might look like). They also failed to implement anything resembling backlog grooming, burndown charts, daily standup meetings, retrospectives, or any of the other standard agile practices and ceremonies that are crucial to a framework.

  3. Although members of the team had warned that they were uncomfortable with the quality of the requirements and the way the projects were being handled, they were ignored; executives, after all, know more than the grunts.

  4. Certain technical resources heard that they would have more creative leeway with implementation, and so they unilaterally changed described functionality without prior approval. Sometimes they added features that were not requested (scope creep). Other times they altered functionality without validating that it met client needs (missing requirements). And quite often, they directed teams to focus on functionality that they thought was important, but which was a much lower priority for the customer.

  5. Executives thought that if they had previously delivered code once every six months, then doing so once every two to three months would make them agile. But they failed to hold frequent demonstrations of the functionality along the way, and as a result were surprised to find that the solution had strayed from what was desired, discovering it only when it was very difficult, and quite expensive, to fix.
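
For contrast, here is a minimal sketch of the artifact the teams never learned to produce: a user story with an explicit narrative, acceptance criteria, a complexity size, and a customer-assigned priority, sitting in an ordered backlog. The stories, point values, and priorities below are hypothetical, and a real backlog would live in a tool such as Jira or Rally rather than in code; the point is only to show the structure.

```python
# A minimal backlog sketch: user stories with acceptance criteria,
# story points, and customer priority. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    narrative: str                 # "As a <role>, I want <goal>, so that <benefit>"
    acceptance_criteria: list[str]
    points: int                    # relative complexity, not hours
    priority: int                  # lower number = more important to the customer

backlog = [
    UserStory(
        title="Register an account",
        narrative="As an online customer, I want to register an account "
                  "so that I can order service without calling support.",
        acceptance_criteria=[
            "A valid email and password create an account",
            "Duplicate emails are rejected with a clear message",
        ],
        points=3,
        priority=1,
    ),
    UserStory(
        title="Save a draft order",
        narrative="As a returning customer, I want my in-progress order "
                  "saved so that I can finish it later.",
        acceptance_criteria=["A draft persists across sessions"],
        points=5,
        priority=2,
    ),
]

# Grooming keeps the backlog ordered by customer priority, not by what
# developers happen to find interesting (compare failure #4 above).
backlog.sort(key=lambda story: story.priority)
```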

Thus, the effort went awry from the start. Only high-level business requirements were gathered, and details were left for the business analysts, architects, and developers to decide based upon their understanding of what the customer wanted. Requirements were no longer captured in any methodical manner and were scattered across a ticketing system instead of stored in a centralized location. No application designed to assist with managing agile projects, such as Rally or Jira, was used. More shockingly, the teams did not even know how to manage story boards, so even the most basic tools for achieving success were overlooked.

Clearly, management fundamentally failed to understand that agile is a process, not a lack of process.

They failed to understand that functionality still needed to be documented: just because software requirements specification (SRS) documents were no longer part of the process did not mean that requirement elicitation and management could largely be ignored. Nor should the early warnings from experienced team members have fallen on deaf ears.

Senior managers who are considering a transition to an agile framework have a responsibility to fully understand everything that goes into the change and to manage it appropriately. But that is just the beginning. They must also comprehensively examine the cultural changes that will be required and plan carefully how to make the transition. And finally, they must provide their teams with the training and tools needed to operate within an agile environment. Had the executives in this story done so, their projects would very likely have been moneymakers instead of bleeding away profits.

Book-learning and reading articles are no substitute for direct expert advice; it would be wise to hire a consultant to assist with the process.

Don’t forget to leave your comments below.

Agile Misconceptions: What Agile is Not

Introduction

Over the course of my career in software development, I have had the fortune of working in a wide variety of companies employing radically different approaches to the software development life cycle (SDLC). Some stuck strictly to traditional waterfall. Others called themselves “agile” but never actually bothered to adopt any agile framework, and were thus a blend of waterfall and agile, and all too often not a successful blend. Another strictly adopted a true agile methodology and decided that there was no need for a PMO since, as they put it, “we’re SCRUM.” Other companies used the same Scrum methodology but saw the value of having a PMO as well.

As I have closely followed discussions on Project Times (and other blog sites), I have noticed arguments about project management methodologies that are fine in theory but often seem divorced from real-world application. These arguments are often prefaced by statements such as “good project managers do X,” or “the PMBOK methods require Y,” or even “any experienced Scrum Master knows…”

After working in so many different environments, the most important lesson I learned was that a dash of humility goes a long way. In other words, no matter what framework is employed, most companies will adapt the theoretical suggestions of the framework to make it work for their needs. Project Managers who suffer from what I call “theoretical myopia” may fail to see the value in the adaptations the company has chosen. A strict enforcement of a theoretical framework, without regard to results, is sometimes just as harmful as not enforcing any framework or process at all.

Target audience

There are several types of readers for whom this article is intended. Some may currently work in waterfall environments and have no experience with agile. Others may work in companies that claim to be agile but really are not. And a few may work in true agile environments that strictly follow the guidelines provided by whoever trained them in that framework, not realizing that there are “many ways to skin a cat.”

Let’s answer the question: “What is agile?”

The simplest explanation is that agile is an approach based on iterative and incremental development. It is characterized by adaptive planning, evolutionary development and delivery, and time-boxed iterations that each result in an incremental release of functional product, all in the service of a rapid and flexible response to change and discovery.

What exactly does this mean in the real world? One must remember that in the traditional waterfall approach, the project flows through a series of steps or gates and does not progress to the next step until the current one is complete. As an example, requirements gathering only begins once the project charter and statement of work (SOW) have been approved and signed off. Development begins only once the full set of requirements for the entire solution is gathered and approved by all stakeholders. A testing phase begins once the entire platform is developed. Because the development phase can last for many months, even years, before code is released, project managers may struggle to differentiate accurately between perceived progress (often reported as developers’ guesses of “percent complete”) and actual progress. This, in turn, results in unpleasant surprises when the delivery date looms two weeks away and the developers unexpectedly report that they still have eight weeks of work to perform!

What is the unsurprising response from senior management and the client? “Why didn’t we know we were behind schedule earlier?”

As a concrete example, let’s look at a real-world application, first under a waterfall approach.

Think about developing a web page, especially one designed to handle flight reservations. In a waterfall approach, many weeks would be spent gathering comprehensive requirements for the entire page and every field on it. This would include not just a list of fields but all the validation rules for every data point. Development would not begin until this comprehensive document was completed and agreed upon by stakeholders.

Once the requirements were gathered and approved, development would begin. The entire page (or even the entire website with all its functionality) would be created and tested, and then released to the customer. This same process might be performed not only for the initial home page, but for every subsequent page in the site.

In many cases, the very first time the customer saw the direction the team had gone was when the page, or even the entire site, was unveiled for the first time in a customer demo. This is when many customers balked in frustration because the released product did not accurately represent what they thought they had described. A crisis would ensue because the client was very unhappy with the unsatisfactory functionality. From the development company’s perspective, changes to the design and implementation at this stage were quite difficult and very expensive.

I will make the comparison by examining the agile framework I know best, which is Scrum. In agile development, high-level “requirements” are gathered during the project estimation phase, often when the project charter (or SOW) is being written. This information is captured in “user stories” (or an equivalent) that describe how the major components are expected to behave in terms of user experience: “As a customer, I want to be able to book a flight from one destination to another, for a range of dates.” Additional functionality may be captured in other stories, such as “Flight cancellation,” “Reservation changes,” or “Choosing seating.”

Experienced teams are able to size the anticipated work by a measure of complexity, rather than guessing how long it will take as a measure of time. Once work begins, requirements and behavior are refined with direct client involvement while development is underway, in an iterative process.

The Scrum approach breaks the whole page into discrete areas of functionality and plans to work on each piece. The “Flight Reservation” page is broken into “Select an origination airport,” “Select a destination airport,” “Select a range of origination dates,” “Indicate seating preferences,” etc. These become the user stories, which are given acceptance criteria. The stories are then sized by complexity, tasked (with estimated hours for each small task), and planned out across short iterations. The stories and iteration plans are reviewed with direct client input. As development progresses, the customer sees the result every couple of weeks as each iteration (or “sprint”) is completed. The customer then approves the output or suggests changes that, being found early in the process, require minor “tweaks” instead of large architectural changes.
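
As a rough illustration of the planning step, here is a minimal sketch in Python that packs the sized stories above into sprints using a team’s historical velocity. The point values and the velocity of 10 points per sprint are hypothetical assumptions; real teams negotiate sprint scope rather than filling it mechanically.

```python
# Pack prioritized, sized stories into sprints using a fixed velocity.
# Story names, point values, and the velocity are hypothetical.
stories = [
    ("Select an origination airport", 3),
    ("Select a destination airport", 3),
    ("Select a range of origination dates", 5),
    ("Indicate seating preferences", 8),
]

VELOCITY = 10  # points the team has historically completed per sprint

sprints: list[list[tuple[str, int]]] = [[]]
remaining = VELOCITY
for story in stories:              # backlog is already in priority order
    _, points = story
    if points > remaining:         # story does not fit: open the next sprint
        sprints.append([])
        remaining = VELOCITY
    sprints[-1].append(story)
    remaining -= points

for i, sprint in enumerate(sprints, start=1):
    total = sum(points for _, points in sprint)
    names = [name for name, _ in sprint]
    print(f"Sprint {i} ({total} pts): {names}")
```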

The frequency of these “demos” helps the customer participate in constant revision and modification, so each delivered piece of product is much more likely to satisfy the customer’s needs. Rework and its associated cost are thus reduced, and client satisfaction is increased.

The benefits of agile are quite simple to comprehend. The agile development cycle is designed to deliver valuable functionality that accurately represents customer wishes in frequent releases, in contrast with the waterfall approach, which often leaves customers waiting a long time before they see the first benefit from their investment.

What agile is not

An agile team is flexible and adaptable. It proactively elicits and documents the desired functionality with direct and frequent client involvement, and it frequently reviews the developed product with the client to verify that functionality meets client wishes. The talented development team is encouraged to meet those needs creatively. The project manager (Scrum Master) vigilantly monitors a daily burndown chart, which allows the team to change direction swiftly to get development back on track when circumstances have caused deviation from the plan, or to react quickly to new information.
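
For readers who have never seen one, here is a minimal sketch of the arithmetic behind a burndown chart: story points remaining each day, compared against an ideal linear burn toward zero. All of the numbers are hypothetical.

```python
# A minimal sprint burndown: points remaining per day versus the
# ideal linear burn. All numbers are hypothetical.
SPRINT_DAYS = 10
TOTAL_POINTS = 30

# Points actually completed on each day of the sprint so far.
completed_per_day = [0, 3, 3, 0, 5, 2]

remaining = TOTAL_POINTS
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = TOTAL_POINTS * (1 - day / SPRINT_DAYS)
    status = "BEHIND" if remaining > ideal else "on track"
    print(f"Day {day:2d}: remaining={remaining:2d}  ideal={ideal:4.1f}  {status}")
```

When the remaining line sits above the ideal line for several days running, the team knows immediately, not months later, that scope or approach has to change.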

Having laid out this basic understanding of what agile is, and contrasted it with waterfall, it is important to reiterate that agile is not a lack of process or a “watering down” of documentation! In my experience, this misunderstanding has been propagated by managers who believe that agile frameworks translate to nothing more than a loosening of control and a reduction of analysis and planning. Improper implementation of an ill-defined notion of “agile” can give rise to even graver problems; in those cases, it has been my experience that the cure was worse than the disease.

I will dive into specific problematic areas in subsequent articles.

Don’t forget to leave your comments below.