
Tackling Updates to Legacy Applications

After 40 years of serious business software development, the industry finds the issues involved in dealing with legacy systems almost commonplace. Legacy applications present an awkward problem that most companies approach with care, both because the systems themselves can be something of a black box and because any update can have unintended consequences. Often laced with obsolete code that can be decades old, legacy applications nonetheless form the backbone of many newer applications that are critical to a business’s success. As a result, many companies opt for incremental updates to legacy applications rather than discarding an application and starting anew.

And yet, for businesses to remain efficient and responsive to customers and developing markets, updates to these systems must be timely, predictable, reliable, low-risk and done at a reasonable cost.

Recent studies on the topic show that the most common initiatives when dealing with legacy systems today are to add new functionality, redesign the user interface, or replace the system outright. Since delving into the unknown is always risky, most IT professionals attempt to do this work as non-invasively as possible through a process called “wrapping” the application: the unknown is kept as contained as possible, and all interaction with it passes through a minimal, well-defined, and (hopefully) well-tested layer of software.
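
As a rough sketch of the idea (the class and method names below are invented for illustration, not drawn from any real system), wrapping means new code never calls the legacy entry points directly; everything funnels through one small, testable adapter:

    # Minimal sketch of "wrapping": LegacyPayroll stands in for the black
    # box, and PayrollWrapper is the thin, well-defined layer around it.

    class LegacyPayroll:
        """The legacy black box; we dare not modify it."""
        def RUNCALC(self, emp_id, flags):            # cryptic legacy entry point
            return {"EMPNO": emp_id, "NETPAY": 2512.40}

    class PayrollWrapper:
        """New code talks only to this class, never to LegacyPayroll."""
        def __init__(self, legacy):
            self._legacy = legacy

        def net_pay(self, employee_id: str) -> float:
            raw = self._legacy.RUNCALC(employee_id, flags=0)
            return float(raw["NETPAY"])              # translate legacy output once, here

    payroll = PayrollWrapper(LegacyPayroll())
    print(payroll.net_pay("E1001"))                  # 2512.4

Because the wrapper is small and well tested, later enhancements can target it instead of the opaque internals.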

In all cases, the more a company understands about the application – or at least the portions that are going to be operated on – the less risky the operation becomes. This means not only unraveling how the application was first implemented (design), but also what it was supposed to do (features). This is essential if support for these features is to continue, or if they are to be extended or adjusted.

Updating Legacy Applications: A Formidable Task

What characterizes legacy applications is that the information relating to implementation and features isn’t complete, accurate, current, or in one place. Often it is missing altogether. Worse still, the documentation that does exist is often riddled with information from previous versions of the application that is no longer relevant and therefore misleading.

Other problems can plague legacy development: the original designers often aren’t around; many of the changes made over the years haven’t been adequately documented; and the application is based on older technologies – languages, middleware, interfaces, and so on – for which the needed skill sets are no longer available.

Nonetheless, it is possible to minimize the risk of revising legacy applications by applying a methodical approach. Here are some steps to successful legacy updating:

 

Gather accurate information. The skills of a forensic detective are required to gain an understanding of a legacy application’s implementation and its purpose. This understanding is essential to reducing risk and to making development feasible. Understanding is achieved by identifying the possible sources of information, prioritizing them, filtering the relevant from the irrelevant, and piecing together a jigsaw puzzle that lays out the evolution of the application as it has grown and changed over time. This understanding then provides the basis for moving forward with the needed development.

In addition to the application and its source code, there are usually many other sources for background information, including user documentation and training materials, the users, regression test sets, execution traces, models or prototypes created for past development, old requirements specifications, contracts, and personal notes.

Certain sources are better at providing certain types of information. For example, observing users of the system can be good for identifying the core functions but poor at finding infrequently used functions and the back-end data processing being performed. Conversely, studying the source code is a good way to understand the data processing and algorithms in use. Together, these two techniques can help piece together the system’s features and what they are intended to accomplish; the downside is that they remain weak at identifying non-user-oriented functions.

Most tools designed to help with legacy application development focus on a single source of information. Source code analyzers parse and analyze the source code and data stores in order to produce metrics and graphically depict the application’s structure from different views. Another group of tools focuses on monitoring transactions at interfaces in order to deduce the application’s behavior.
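
To get a feel for what the first group does, here is a minimal sketch using Python’s standard ast module: it parses source code and reports a crude structural-complexity signal (a count of branch points) per function. The sample function is invented for the example.

    # Toy source-code analyzer: parse a module and count branch points
    # per function as a rough complexity metric.
    import ast

    SOURCE = """
    def approve(order):
        if order.total > 1000:
            for line in order.lines:
                if line.backordered:
                    return False
        return True
    """

    BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            count = sum(isinstance(n, BRANCHES) for n in ast.walk(node))
            print(f"{node.name}: ~{count + 1} independent paths")   # approve: ~4

Real analyzers go much further (call graphs, data-flow views, dead-code detection), but the principle is the same.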

Adopt the appropriate mindset. While this information is useful, it usually provides only a small portion of the information needed to significantly reduce the risk associated with legacy application development. A key pitfall of many development projects is failing to recognize that there are two main “domains” in software development efforts: the “Problem Domain” and the “Solution Domain.”

Business clients and end users tend to think and talk in the Problem Domain where the focus is on features, while IT professionals tend to think and talk in the Solution Domain where the focus is on the products of development. Source code analysis and transaction monitoring tools focus only on the Solution Domain. In other words, they’re focused more on understanding how the legacy system was built rather than what it is intended to accomplish and why.

More recent and innovative solutions can handle the wide variety of sources required to develop a useful understanding and can extend this understanding from the Solution Domain up into the Problem Domain. This helps users understand a product’s features and allows them to map these features to business needs. It is like reconstructing an aircraft from its pieces following a crash in order to understand what happened.

Pull the puzzle together. The most advanced tools allow companies to create a model of the legacy application from the various pieces of information that the user has been able to gather. The model, or even portions of it, can be simulated to let the user and others analyze and validate that the legacy application has been represented correctly. This model then provides a basis for moving forward with enhancements or replacement.

The models created by these modern tools represent (usually a portion of) the legacy application. In essence, the knowledge that was “trapped” in the legacy application has been extracted and captured in a model that can be manipulated to demonstrate the proposed changes to the application. The models also allow validation that any new development to the legacy application will support the new business need before an organization commits time and money to development.

Once the decision is made to proceed, many tools can generate the artifacts needed to build and test the application. Tools exist today that can generate complete workflow diagrams, simulations/prototypes, requirements, activity diagrams, documentation, and a complete set of well-formed tests automatically from the information gathered above.
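
To make this concrete, here is a minimal sketch under invented assumptions (the states, events, and transition table below are illustrative, not from any real tool): a recovered fragment of legacy behavior is captured as a state-transition table, which can be simulated so stakeholders can validate it, and from which a basic transition-coverage test set can be enumerated mechanically.

    # Hypothetical fragment of recovered legacy behavior as a state machine.
    TRANSITIONS = {
        ("Draft", "submit"):            "PendingApproval",
        ("PendingApproval", "approve"): "Approved",
        ("PendingApproval", "reject"):  "Draft",
        ("Approved", "ship"):           "Shipped",
    }

    def simulate(start, events):
        """Replay an event sequence so users can confirm the model's behavior."""
        state = start
        for event in events:
            state = TRANSITIONS.get((state, event), state)   # ignore invalid events
            print(f"{event:>8} -> {state}")
        return state

    simulate("Draft", ["submit", "reject", "submit", "approve", "ship"])

    # One generated test case per modeled transition = transition coverage.
    for (state, event), target in TRANSITIONS.items():
        print(f"TEST: given {state!r}, event {event!r} must yield {target!r}")

Commercial tools operate at a far richer level of detail, but the mechanics are the same: once the knowledge lives in a model, simulation and test generation fall out of it.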

Legacy Applications: Will They Become a Thing of the Past?

Current trends toward new software delivery models also show promise in alleviating many of the current problems with legacy applications. Traditional software delivery models required customers to purchase perpetual licenses and host the software in-house. Upgrades were significant events, with expensive logistics required to “certify” new releases, upgrade all user installations, convert datasets to the new version, and train users on all the new and changed features. As a result, upgrades did not happen very often, maybe once a year at most.

Software delivery models are evolving, however. Popular in some markets, like customer relationship management (CRM), Software as a Service (SaaS) allows users to subscribe to a service that is delivered online. The customer does not deal with issues of certification, installation and data conversion. In this model, the provider upgrades the application almost on a continual basis, often without the users even realizing it. The application seemingly just evolves in sync with the business and, hopefully, the issue of legacy applications will become a curious chapter in the history of computing. 


Tony Higgins is Vice-President of products at Blueprint. He can be reached at [email protected].

Can I Have My Requirements and Test Them Too?

A study by James Martin, An Information Systems Manifesto (ISBN 0134647696), concluded that 56% of all errors are introduced in the requirements phase, attributable primarily to poorly written, ambiguous, unclear or missed requirements. Requirements-Based Testing (RBT) addresses this issue by validating requirements to clear up ambiguity and identify gaps. Essentially, under this methodology you initiate test case development before any design or implementation begins.

Requirements-based testing is not a new concept in software engineering – in fact, you may know it as requirements-driven testing or some other term entirely – and it has been incorporated into several software engineering methodologies and quality management frameworks. In its basic form, it means starting testing activities early in the life cycle, beginning with the requirements and design phases, and then integrating them all the way through implementation. The process of bringing together business users, domain experts, requirements authors and testers, and obtaining commitments on validated requirements, forms the baseline for all development activities.

The review of test cases by requirements authors and, in some cases, by end users ensures that you are not only building the right system (validation) but also building the system right (verification). As development moves along the software development life cycle, the testing activities are integrated into the design phase. Since a test case restates a requirement in terms of cause and effect, it can be used to validate the design and its capability to meet the requirements. This means any change in requirements, design or test cases must be carefully integrated into the software life cycle.

So what does this mean for your own software development life cycle or overarching methodology? Does it mean that you have to throw out your Software Development Life Cycle (SDLC) process and adopt RBT? The answer is no. RBT is not an SDLC methodology but simply a best practice that can be embedded in any methodology. Whether requirements are captured as use cases, as in the Unified Process, or as scenarios/user stories, as in Agile development models, the practice of integrating requirements with testing early on helps create requirement artifacts that are clear, unambiguous and testable. This benefits not only the testing organization but the entire project team. That said, the implementation of RBT is much cleaner in formal waterfall-based or waterfall-derived approaches and can be more challenging in less formal ones such as Agile or iterative models. Even in the most extreme of the Agile approaches, such as XP, constant validation of requirements is mandated in the form of a ‘customer’ or ‘voice of the customer’ sitting side-by-side with the developers.

To illustrate this, let us take the case of an iterative development approach where the requirements are sliced and prioritized for implementation across multiple iterations. High-risk requirements, such as non-functional or architectural requirements, are typically slated for the initial iterations. Iterations are like sub-projects within the context of a complete software development project. To obtain validated test cases, a team consisting of requirements authors, domain experts and testers cycles through the following three sets of activities.

  • Validate business objectives, perform ambiguity analysis, and map requirements to test cases (a minimal sketch of such a mapping appears after this list).
  • Define and formalize requirements and test cases.
  • Review test cases with requirements authors and domain experts.
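
The requirement–test mapping can be pictured like this (the IDs and wording are invented for illustration): each test restates a requirement as a cause and an effect, and any requirement without a covering test is flagged for its authors to revisit.

    # Toy requirement-to-test-case traceability check.
    REQUIREMENTS = {
        "REQ-7": "A locked account must reject logins until an admin unlocks it.",
        "REQ-8": "The system must be user-friendly.",   # ambiguous on purpose
    }

    TEST_CASES = {
        "TC-7a": {"covers": "REQ-7",
                  "cause":  "login attempt on a locked account",
                  "effect": "login rejected"},
        "TC-7b": {"covers": "REQ-7",
                  "cause":  "admin unlock, then login attempt",
                  "effect": "login accepted"},
    }

    covered = {tc["covers"] for tc in TEST_CASES.values()}
    for req_id in REQUIREMENTS:
        status = "covered" if req_id in covered else "NO TEST - revisit the wording"
        print(f"{req_id}: {status}")

REQ-8 deliberately has no testable cause and effect; that is exactly the kind of gap ambiguity analysis is meant to surface.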

Any feedback or changes are quickly incorporated and requirements are corrected. This process is followed until all requirements and test cases are fully validated.

Simply incorporating core RBT principles into your methodology does not mean that fewer errors will be introduced in the requirements phase; what it will do is catch more errors early in the development process. You have to supplement any RBT exercise by ensuring you have the means to build integrated, version-controlled requirements and test management repositories. You must also be able to detect, automate and report changes to highly interdependent engineering artifacts. This means proper configuration and change management practices that facilitate timely sharing of this information across teams. For example, if the design changes, both the requirements authors and the test teams must be notified of the impact so that the appropriate artifacts are changed and re-validated.
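
One way to picture that last example (the artifact names are invented): keep explicit dependency links between artifacts, and when one changes, walk the links to find everything downstream that must be re-validated and whose owners must be notified.

    # Toy impact analysis over linked engineering artifacts.
    DEPENDS_ON = {                  # artifact -> artifacts it is derived from
        "DESIGN-3": ["REQ-7"],
        "TC-7a":    ["REQ-7", "DESIGN-3"],
        "TC-9":     ["REQ-9"],
    }

    def impacted_by(changed, graph):
        """Return every artifact that transitively depends on `changed`."""
        stale, frontier = set(), {changed}
        while frontier:
            frontier = {a for a, deps in graph.items()
                        if set(deps) & frontier and a not in stale}
            stale |= frontier
        return stale

    print(impacted_by("DESIGN-3", DEPENDS_ON))   # {'TC-7a'}: notify the test team
    print(impacted_by("REQ-7", DEPENDS_ON))      # {'DESIGN-3', 'TC-7a'}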

Automating key aspects of RBT also provides the foundation for mining metrics around code and requirements coverage, which can be a leading indicator of the quality of your requirements and test cases. True benefit from RBT requires a certain level of organizational maturity and automation. The business benefits are increased software quality and predictable project delivery timelines. Thus, by integrating testing with your requirements and design activities, you can reduce your overall development time and greatly reduce project risk.


Sammy Wahab is an ALM and Process consultant at MKS Inc. helping clients evaluate, automate and optimize application delivery using MKS Integrity. Mr. Wahab has helped organizations with SDLC and ITSM processes and methodologies supporting quality frameworks such as CMMI and ITIL. He has presented Software Process Automation at several industry events including Microsoft Tech-Ed, Java One, PMI, CA-World, SPIN, CIPS, SSTC (DoD). Mr. Wahab has spent over 20 years in technical, consulting and management roles from software developer to Chief Technology Officer with companies including Tenrox, Osellus, American Express, Parsons, Isopia Compro and Iciniti. Mr. Wahab holds a Masters in Business Administration from the University of Western Ontario.

How to Complete a Software Development Project on Time, on Budget

Recent industry studies show that modern software projects on average spend 40 percent of their effort on rework, and as a result, over 80 percent of software projects overrun budgets, miss schedules and substantially reduce delivered functionality.

It’s a software development business analyst’s nightmare – that doesn’t seem to end.

The potential for error is further heightened because, unlike mechanical or civil engineering where the results of your efforts are tangible, the product of software development is largely conceptual.  When a manager is directing a complex project with several teams, the potential for mistakes or misdirection is especially magnified. Unlike a bridge being built from two sides of a river, significant discrepancies can creep in without an obvious reality check.

To avoid costly errors and delays, business analysts should consider seven key steps in tackling software development projects.

  1. Define the project’s scope explicitly. To manage software projects effectively, business analysts need a clear demarcation of what is in and what is out, what is essential and what would be nice to have, and what needs to be delivered at the end of the process. All major stakeholders and team members need to have a common understanding of the project goal. Ambiguities at this step can lead to major problems later that can be resolved only through a significant waste of time and money on rework.
  2. Develop concepts into clear requirements. Once stakeholders agree on a common goal, they need to refine their understanding into precise requirements, understandable to all. While it is common for requirements to evolve, starting from a specific requirements baseline provides a foundation to ensure the development process doesn’t drift. By ensuring that stakeholders are deeply involved in defining requirements, business analysts have a solid, universal understanding of the project’s path and scope.
  3. If the project is complex, use models that can be updated as the project evolves. Models represent the product in varying levels of detail and from various perspectives.  Sometimes, there is resistance to building models due to the effort required to maintain them, as new and different elements are incorporated. It is precisely because software development is so complicated that models are needed. With so many conceptual layers being tied together, it can be difficult to keep track of each and every element and their interrelationships. You wouldn’t consider building a bridge without a model. Why would you consider developing software, which is every bit as complicated, without one?
  4. Manage expectations throughout the project. As software development proceeds, stakeholders often suggest that more functionality be added to the project beyond its original intent. It’s necessary to rely on more than the legal contract to keep projects focused. As more people become involved in the project, regular get-togethers become even more important to ensure that all stakeholders stay aligned.
  5. Keep the model up to date. Feedback loops are an essential part of most successful projects, and software development is no different. While it might seem time-consuming, keeping the model current provides a touchstone for all stakeholders as the project progresses. It helps maintain focus and exposes when any aspect of the development drifts from its original, or modified, intent. As much as possible, design the model so that it can be updated automatically.
  6. Decompose the model. The model should be designed in such a way that its constituent parts align with work tasks of the team.  In this way, the model parts can be delegated to individual teams to develop or maintain, and later reassembled as needed, to ensure overall integrity at regular milestones.  The process should be managed so that teams, including subcontractors, can come back to the model every so often for a reality check. By so doing, the business analyst keeps the potential for significant rework or outright failure at a minimum.
  7. Reintegrate regularly. The process should be built so that all aspects of the model, including those that have progressed, are pulled together regularly to ensure that everything still fits and that modules under development are still proceeding toward the ultimate goal.

But Won’t it Cost More?

Using a management structure that relies on a series of reality checks requires a project budget that allocates time and money for periodic review. The result, however, is that this marginal investment yields far more payback in terms of reduced rework.   An accurate and representative model is a catalyst for more valuable and more frequent feedback. Feedback loops are designed precisely to reduce risk and are found in nearly all engineering disciplines.

With software development projects spending on average 40 percent of their effort on rework, it is worthwhile to use an effective model to ensure your project achieves success.

Consider the alternative: A project that the client rejects, one that has to be reworked hastily, held together by shunts and duct tape. Not only are project funds wasted unnecessarily, but the delivered product’s quality suffers.   Status quo is the expensive path.


Tony Higgins is Vice-President of products at Blueprint Systems. He can be reached at [email protected].

Reducing the Cost of Change through Software Testing

In business environments that include software development, business analysts must be able to identify key areas for improvement and justify the cost of changing software development practices. IT departments may not write automated unit tests, and QC departments may manually click through the application; automated testing may not be part of the software development life cycle at all. Yet automated testing directly affects the cost of software by reducing the Cost of Change.

Definition: Cost of Change = Cost of Changing Code + Cost of Changing Process

A software project with a high cost of change typically exhibits some of the following bad smells:

  • Quality delivered to customer is unacceptable
  • Delivering new features to customer takes too long
  • Software is too expensive to build (product development and implementation services)
  • “Hardening” phase needed at end of release cycle
  • Integration is infrequent (Manual Testing Cycle)

Cost of Changing Code

Cost of Changing Code = C × S² / T

where C is the complexity of the code base, S is the slack or “inflexibility” of the design, and T is the test coverage. In other words, the cost of making a programming change grows with code complexity, grows quadratically with design inflexibility, and shrinks as test coverage rises.

Automated testing reduces this risk by:

  • Providing a way for programmers to know when they’re done.
  • Enforcing a quality standard.

What if unit tests are only written some of the time and the QC department manually “clicks-through” the application for every acceptance test?  What is the value of purchasing new tools and retraining staff?  Automated testing reduces the cost of maintaining code.
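
To make the formula concrete, here is a worked example with purely hypothetical numbers (the units are relative effort, not dollars):

    # Cost of Changing Code = C * S^2 / T, with invented numbers.
    def change_cost(complexity, slack, coverage):
        return complexity * slack ** 2 / coverage

    before = change_cost(complexity=100, slack=3, coverage=0.2)   # 4500.0
    after  = change_cost(complexity=100, slack=3, coverage=0.8)   # 1125.0
    print(f"Raising coverage from 20% to 80% cuts change cost {before / after:.0f}x")

Holding complexity and design slack fixed, quadrupling test coverage quarters the cost of each change in this model.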

Cost of Changing Process

Cost of Changing Process = (Reduction in Cost of Changing Code) × (Number of changes implemented each year) / (Investment in training and tools)

The cost of any new investment in testing tools and training will be realized as a return on reduction in the cost of changing code.
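
Plugging hypothetical numbers into this formula (continuing the example above, where better coverage saved 3,375 relative cost units per change):

    # Return on the process change, with invented numbers.
    reduction_per_change = 3375      # cost saved per code change (from above)
    changes_per_year     = 40
    investment           = 50_000    # tools plus training

    payback = reduction_per_change * changes_per_year / investment
    print(f"payback ratio: {payback:.1f}x per year")   # 2.7x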

The Bottom Line

  • Initial investment in automated testing tools and training for developers and QC.
  • Negligible change in ongoing training costs: training staff on automated testing costs about the same as today’s training on manual testing.
  • Increased licensing expense for better tools.
  • Decreased cost of implementing new features.
  • Increased quality to market.
  • Increased flexibility – the ability to adapt to market conditions more quickly.
  • Reduced time to market.

Let me know what you think!  If there is sufficient interest in this topic I will return to it in my next blog and delve into the details of a test plan that adheres to this standard.


Jonathan Malkin is a Business Analyst at Plateau Systems.  Jonathan provides configuration, integration, documentation, and deployment support services for a leader in Talent Management Systems.  Jonathan’s areas of support include 21 CFR Part 11 Validation and customizations to COTS software for which he has won multiple awards.  His experience includes work in the federal government, telecommunications, mortgage and banking, and custom software development industries.  Plateau Systems is a leading global provider of adaptable, unified web-based talent management software, content and services to onboard, develop, manage and reward talent.

Jonathan may be reached by email at [email protected] or by visiting his LinkedIn page at http://www.linkedin.com/in/jmalkin

If Agile Were to Go Mainstream

If agile methods are to go mainstream, it might be when their popularity and legitimacy reach a tipping point. One sign that this could be happening is a recent NY Times article called “Google Gets Ready to Rumble With Microsoft” (16 December 2007), which my colleague Ken Orr wrote about in the Cutter Trends Advisor newsletter under the title “Velocity Matters: Google, Microsoft, and Hyper-Agility, Part 1” (20 December 2007).

The articles are about Google going after Microsoft’s customer base, using something called its “Cloud” computing framework. But Ken Orr’s interpretation of the Google-Microsoft confrontation emphasizes the time-to-market advantages that Google’s software development lifecycle has over Microsoft’s. Google is apparently practicing a more agile, iterative-style approach (sometimes quarterly) to releasing software, while Microsoft is more tied to the big bang, multi-year cycle for its products.

Might the public start perceiving companies like Google as “agile and adaptive” while tagging Microsoft as “heavy and slow”? Agile methods may have found their version of Malcolm Gladwell’s “sticky message.” Most agree that it began in earnest with the infamous Agile Manifesto, elegant in its simplicity. It emphasized valuing “individuals and interactions,” “working software,” “customer collaboration,” and “responding to change.”

Simple and Elegant.

But some people felt the word “manifesto” carried an interesting connotation because of its perceived arrogance. One manifesto, by a crazed lunatic called the Unabomber, made headlines years ago by decrying the evils of an industrialized society and railing against the establishment. The agilists (who were NOT crazed lunatics) were also railing against an establishment; in this case, the Carnegie Mellon Software Engineering Institute (SEI). The agilists’ message was that they were “lean and fast,” while anyone who followed the SEI was “heavy and slow.” Put simply, it was the younger generation calling their village elders (or at least their software processes) FAT.

The defiance had gotten personal. They were mad about project overrun statistics and sick and tired of being blamed for them. All those Ed Yourdon Death March projects had taken their toll. They were not lunatics, but they were irreverent for lots of reasons, and it was understandable.

Manifestos and name-calling seemed to help the Agile message to stick. Moreover, if Agile rides a Google wave, it will make a lot of software development organizations consider following Google’s lead.

Meanwhile, there’s an interesting quote by a long-ago congressman named Willard Duncan Vandiver. In an 1899 speech, he said, “I come from a country that raises corn and cotton, cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I’m from Missouri, and you have got to show me.” Some people say that the speech is the reason why Missouri is famously nicknamed, “The Show Me State.”

Westerners at the time used the phrase to suggest that Missourians were slow and not very bright. (There’s that name-calling thing again.) Missourians, however, turned the definition around and claimed that “show me” meant that they were shrewd and not easily fooled. (It turns out that the phrase was current before Vandiver, so the thinking is that his speech may have merely popularized it.)

Now here’s where it gets interesting. Manifestos and name-calling might have some frothy eloquence to them, but they neither convince nor satisfy one important constituency that agilists badly need in order to practice their agile craft. This constituency happens to be the people who sign the checks: senior management. Senior management has to buy into the idea and take risks with a “manifesto” concept that can impact the company that employs all of them. They’ve been around long enough to see fads come and go, and they can be cynical at times. Management also doesn’t like having the processes it has invested in called fat.

The agilists come to the elders and ask for some money. They want a new thing called Agile Methods. The elders respond with, “You have got to show me some productivity metrics.” No metrics, no money. The agilists cringe, because they associate metrics with process-heavy people spouting function points.

But you don’t have to be process-heavy or say “function points” all the time to know a little about IT metrics. I am neither process-heavy nor a function-point evangelist, and I have been collecting essential core metrics on hundreds of projects over the years. Many of my clients in the last 12 months have been companies running Agile projects, mostly XP and Scrum, and they want to know how they compare against waterfall projects.

We have plenty of data in a worldwide database of over 7,400 projects — agile, waterfall, package implementation, new development, legacy, military, commercial, engineering — you name it. The numbers are so very interesting that I can’t fit them all into this article. Suffice to say I’ve been on the lecture circuit recently on this subject and conducting webinars for people who want me to show them.

So what have I found? Here are some of the highlights:
• Agile teams have metrics. The perception might be that Agile teams are a bunch of undisciplined programmers slinging code and not documenting anything. Not true. They know their schedules (time), keep records of team members working on their projects (effort), count requirements and things called iterations and stories (a size metric), and keep track of bugs (defects).
• We easily draw this out, along with their velocity charts, on a whiteboard sketch. This profile is all we need to capture the measures and load them into a computer database (a toy version of such a record is sketched after this list).
• Agile trends can be plotted on a chart, where a picture is worth a thousand words (or in this case, metrics). Smaller releases, medium-sized releases, and large releases are charted from left to right. Vertically, we can chart the schedules, team size, and defects found and fixed.
• As a group, the projects were mostly faster than average: about 80% fell below the industry-average line. Some took longer, for several reasons (too long to explain here), and some companies developed software in two-thirds or even half the time.
• They used larger-than-average teams. Even though many of the smaller releases used small teams, some, apparently in response to deadline pressure, used large numbers of people. One company applied seven parallel Scrum teams totaling 95 people, where the norm was about 40 people.
• On waterfall projects, the “laws of software physics” showed the predictable outcome of large teams trying to create fast schedules: high defect rates (sometimes 4x-6x). On Agile projects, we saw a shockingly low number of defects in some of the companies. The best performer had high-maturity XP teams, whose defect counts were 30% to 50% lower than average. Other, less mature Agile teams had defect rates more like those of waterfall projects.
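
To make the whiteboard profile tangible, here is a toy version of such a record with invented numbers; the industry-average line is a stand-in for a trend curve fit to a large project database like the one described above.

    # Toy core-metrics record for one release, compared to an average line.
    project = {"size_stories": 120, "schedule_months": 6.0,
               "team_size": 9, "defects": 14}

    def industry_avg_schedule(size_stories):
        # hypothetical trend line: bigger releases take longer
        return 0.065 * size_stories

    avg = industry_avg_schedule(project["size_stories"])        # 7.8 months
    verdict = "faster" if project["schedule_months"] < avg else "slower"
    print(f"vs. industry average of {avg:.1f} months: {verdict}")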

The initial results from these companies were fascinating. One thing that stood out was that there was in fact a learning curve. The sample had a range of Agile experience from one to four years. You can easily see that the highest performing teams achieved a level of performance that the others didn’t match. Agile was not a cure-all for all of the companies, but it will be interesting to see how the others fare as time progresses.

Another interesting factor was that all of the companies were being challenged from the top down by the outsourcing/India option. Some adopted Agile methods as a better, faster and, yes, cheaper alternative, saving their jobs in North America in the process.

It will also be interesting to see more patterns emerge as more data comes in. Soon enough, we’ll have sufficient statistics for a series of Agile industry trend lines against which we can make direct Agile-to-Agile project comparisons. And the Agilists will have something they surely have longed for all along: Agile metrics. And the village elders just might buy into the idea.


Michael Mah is a Senior Consultant with Cutter Consortium, and also Managing Partner of QSM Associates. He welcomes reader comments and insights at [email protected].