

Tackling Updates to Legacy Applications

After 40 years of serious business software development, the issues involved in dealing with legacy systems have become almost commonplace. Legacy applications present an awkward problem that most companies approach with care, both because the systems themselves can be something of a black box, and because any updates can have unintended consequences. Often laced with obsolete code that can be decades old, legacy applications nonetheless form the backbone of many newer applications that are critical to a business’s success. As a result, many companies opt to continue with incremental updates to legacy applications, rather than discarding the application and starting anew.

And yet, for businesses to remain efficient and responsive to customers and developing markets, updates to these systems must be timely, predictable, reliable, low-risk and done at a reasonable cost.

Recent studies on the topic show that the most common initiatives when dealing with legacy systems today are to add new functionality, redesign the user interface, or replace the system outright. Since delving into the unknown is always risky, most IT professionals attempt to do this work in as non-invasive a manner as possible through a process called “wrapping” the application – an approach that keeps the unknown as contained as possible by interacting with it only through a minimal, well-defined, and (hopefully) well-tested layer of software.
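
To make the idea of a wrapping layer concrete, here is a minimal sketch in Python. The legacy routine, its cryptic parameters, and the wrapper name are hypothetical stand-ins; the point is simply that new code talks only to the small, well-defined (and testable) wrapper, never to the legacy internals directly.

    # A hypothetical legacy routine, treated as a black box.
    def legacy_calc_discount(cust_type, amt, flag):
        return amt * (0.10 if cust_type == "P" and flag == 1 else 0.02)

    class DiscountService:
        """The wrapper: the only surface new code is allowed to touch."""

        def discount_for(self, customer_is_preferred: bool, order_total: float) -> float:
            # Translate intention-revealing parameters into the legacy call.
            cust_type = "P" if customer_is_preferred else "S"
            return legacy_calc_discount(cust_type, order_total, 1)

    print(DiscountService().discount_for(True, 200.0))   # 20.0

Because callers depend only on the wrapper, the legacy routine can later be re-implemented or replaced without touching them.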

In all cases, the more a company understands about the application – or at least the portions that are going to be operated on – the less risky the operation becomes. This means not only unraveling how the application was first implemented (design), but also what it was supposed to do (features). This is essential if support for these features is to continue, or if they are to be extended or adjusted.

Updating Legacy Applications: A Formidable Task

What characterizes legacy applications is that the information relating to implementation and features isn’t complete, accurate, current, or in one place. Often it is missing altogether. Worse still, the documentation that does exist is often riddled with information from previous versions of the application that is no longer relevant and therefore misleading.

Other problems plague legacy development as well: the original designers often aren’t around; many of the changes made over the years haven’t been adequately documented; and the application is based on older technologies – languages, middleware, interfaces, and so on – for which the necessary skill sets are no longer available.

Nonetheless, it is possible to minimize the risk of revising legacy applications by applying a methodical approach. Here are some steps to successful legacy updating:

 

Gather accurate information. The skills of a forensic detective are required to gain an understanding of a legacy application’s implementation and its purpose. This understanding is essential to reducing risk and to making development feasible. Understanding is achieved by identifying the possible sources of information, prioritizing them, filtering the relevant from the irrelevant, and piecing together a jigsaw puzzle that lays out the evolution of the application as it has grown and changed over time. This understanding then provides the basis for moving forward with the needed development.

In addition to the application and its source code, there are usually many other sources for background information, including user documentation and training materials, the users, regression test sets, execution traces, models or prototypes created for past development, old requirements specifications, contracts, and personal notes.

Certain sources can be better resources for providing the different types of information sought. For example, observing users of the system can be good for identifying the core functions but poor at finding infrequently used functions and the back-end data processing that’s being performed. Conversely, studying the source code is a good way to understand the data processing and algorithms being used. Together, these two techniques can help piece together the system’s features and what they are intended to accomplish. The downside is that these techniques are poor at identifying non-user-oriented functions.

The majority of tools whose purpose is to help with legacy application development have tended to focus on one source. Source code analyzers parse and analyze the source code and data stores in order to produce metrics and graphically depict the application’s structure from different views. Another group of tools focuses on monitoring transactions at interfaces in order to deduce the application’s behavior.

Adopt the appropriate mindset. While this information is useful, it usually provides a small portion of the information needed to significantly reduce the risk associated with legacy application development. A key pitfall of many development projects is not recognizing that there are two main “domains” in software development efforts: the “Problem Domain” and the “Solution Domain.”

Business clients and end users tend to think and talk in the Problem Domain where the focus is on features, while IT professionals tend to think and talk in the Solution Domain where the focus is on the products of development. Source code analysis and transaction monitoring tools focus only on the Solution Domain. In other words, they’re focused more on understanding how the legacy system was built rather than what it is intended to accomplish and why.

More recent and innovative solutions can handle the wide variety of sources required to develop a useful understanding and can extend this understanding from the Solution Domain up into the Problem Domain. This helps users understand a product’s features and allows them to map these features to business needs. It is like reconstructing an aircraft from its pieces following a crash in order to understand what happened.

Pull the puzzle together. The most advanced tools allow companies to create a model of the legacy application from the various pieces of information that the user has been able to gather. The model, or even portions of it, can be simulated to let the user and others analyze and validate that the legacy application has been represented correctly. This model then provides a basis for moving forward with enhancements or replacement.

The models created by these modern tools are a representation of (usually a portion of) the legacy application. In essence, the knowledge that was “trapped” in the legacy application has been extracted and represented in a model that can be manipulated to demonstrate the proposed changes to the application. The models also allow for validation that any new development on the legacy application will support the new business need before an organization commits money and time to development.

Once the decision is made to proceed, many tools can generate the artifacts needed to build and test the application. Tools exist today that can generate complete workflow diagrams, simulations/prototypes, requirements, activity diagrams, documentation, and a complete set of well-formed tests automatically from the information gathered above.
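
As a rough illustration of generating tests from a recovered model – not a depiction of any particular vendor’s tool – consider the toy Python sketch below. The workflow states, events, and wording are hypothetical; real modeling tools capture far richer behavior, but the principle of deriving one test case per modeled transition is the same.

    # A toy behavioral model of a recovered legacy workflow: state -> {event: next state}.
    WORKFLOW = {
        "Draft":    {"submit": "Pending"},
        "Pending":  {"approve": "Approved", "reject": "Draft"},
        "Approved": {"ship": "Closed"},
    }

    def generate_tests(model):
        """Derive one cause-and-effect test case per transition in the model."""
        for state, events in model.items():
            for event, target in events.items():
                yield f"GIVEN state '{state}' WHEN '{event}' THEN state becomes '{target}'"

    for case in generate_tests(WORKFLOW):
        print(case)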

Legacy Applications: Will They Become a Thing of the Past?

Current trends toward new software delivery models also show promise in alleviating many of the current problems with legacy applications. Traditional software delivery models require customers to purchase perpetual licenses and host the software in-house. Upgrades are significant events with expensive logistics required to “certify” new releases, upgrade all user installations, convert datasets to the new version, and train users on all the new and changed features. As a result, upgrades do not happen very often – maybe once a year at most.

Software delivery models are evolving, however. Popular in some markets, like customer relationship management (CRM), Software as a Service (SaaS) allows users to subscribe to a service that is delivered online. The customer does not deal with issues of certification, installation and data conversion. In this model, the provider upgrades the application almost on a continual basis, often without the users even realizing it. The application seemingly just evolves in sync with the business and, hopefully, the issue of legacy applications will become a curious chapter in the history of computing. 


Tony Higgins is Vice-President of products at Blueprint. He can be reached at [email protected].

Can I Have My Requirements and Test Them Too?

A study by James Martin, An Information Systems Manifesto (ISBN 0134647696), concluded that 56% of all errors are introduced in the requirements phase, attributable primarily to poorly written, ambiguous, unclear, or missed requirements. Requirements-Based Testing (RBT) addresses this issue by validating requirements to clear up ambiguity and identify gaps. Essentially, under this methodology you initiate test case development before any design or implementation begins.

Requirements-based testing is not a new concept in software engineering – in fact, you may know it as requirements-driven testing or some other term entirely – and it has been incorporated into several software engineering methodologies and quality management frameworks. In its basic form, it means starting testing activities early in the life cycle, beginning with the requirements and design phases, and integrating them all the way through implementation. The process of bringing together business users, domain experts, requirements authors, and testers, and obtaining commitment on validated requirements, forms the baseline for all development activities.

The review of test cases by requirements authors and, in some cases, by end users ensures that you are not only building the right system (validation) but also building the system right (verification). As the development process moves along the software development life cycle, testing activities are integrated into the design phase. Since a test case restates a requirement in terms of cause and effect, it can be used to validate the design and its ability to meet the requirements. This means any change in requirements, design, or test cases must be carefully integrated into the software life cycle.
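
As a small, hedged illustration of restating a requirement in terms of cause and effect, consider the following Python sketch. The requirement (“orders over $100 ship free”) and the shipping_fee function are invented for the example; the tests are written from the requirement before any real implementation exists.

    def shipping_fee(order_total: float) -> float:
        # Placeholder so the example runs; the real implementation comes later.
        return 0.0 if order_total > 100.0 else 7.50

    def test_orders_over_100_ship_free():
        # Cause: total exceeds $100 -> effect: no shipping fee.
        assert shipping_fee(150.00) == 0.0

    def test_orders_at_or_below_100_pay_standard_rate():
        # Boundary case: forces the author to say whether exactly $100 qualifies.
        assert shipping_fee(100.00) == 7.50

    test_orders_over_100_ship_free()
    test_orders_at_or_below_100_pay_standard_rate()
    print("requirement-derived tests pass")

Writing the boundary test is where ambiguity analysis pays off: the requirements author must decide whether an order of exactly $100 qualifies before design begins.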

So what does this mean in terms of your own software development lifecycle or the overarching methodology? Does it mean that you have to throw out your Software Development Life Cycle (SDLC) process and adopt RBT? The answer is no. RBT is not an SDLC methodology but simply a best practice that can be embedded in any methodology. Whether the requirements are captured as use cases, as in the Unified Process, or as scenarios/user stories, as in Agile development models, the practice of integrating requirements with testing early on helps in creating requirement artifacts that are clear, unambiguous, and testable. This benefits not only the testing organization but the entire project team. However, the implementation of RBT is much cleaner in formal waterfall-based or waterfall-derived approaches and can be more challenging in less formal ones such as Agile or iterative models. Even in the most extreme of the Agile approaches, such as XP, constant validation of requirements is mandated in the form of the ‘customer’ or ‘voice of the customer’ sitting side-by-side with the developers.

To illustrate this, let us take the case of an iterative development approach where the requirements are sliced and prioritized for implementation in multiple iterations. The high-risk requirements, such as non-functional or architectural requirements, are typically slated for the initial iterations. Iterations are like sub-projects within the context of a complete software development project. In order to obtain validated test cases, the team, consisting of requirements authors, domain experts, and testers, cycles through the following three sets of activities:

  • Validate business objectives, perform ambiguity analysis, and map requirements to test cases.
  • Define and formalize requirements and test cases.
  • Review test cases with requirements authors and domain experts.
[Image: canihave1.png]

 

Any feedback or changes are quickly incorporated and requirements are corrected. This process is followed until all requirements and test cases are fully validated.

Simply incorporating core RBT principles into your methodology does not mean that fewer errors will be introduced in the requirements phase. What it will do is catch more errors early in the development process. You have to supplement any RBT exercise by ensuring you have the means to build integrated, version-controlled requirements and test management repositories. You must also have capabilities to detect, automate, and report changes to highly interdependent engineering artifacts. This means proper configuration and change management practices to facilitate timely sharing of this information across teams. For example, if the design changes, both the requirements authors and the test teams must be notified of the impact so that the appropriate artifacts are changed and re-validated.
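
The following minimal sketch shows one way such change impact could be computed from a traceability mapping; the artifact identifiers (REQ-*, DES-*, TC-*) are hypothetical, and a real repository would of course hold far more metadata.

    # Hypothetical traceability links: (requirement, design element, test case).
    TRACE = [
        ("REQ-12", "DES-3", "TC-101"),
        ("REQ-12", "DES-3", "TC-102"),
        ("REQ-14", "DES-7", "TC-110"),
    ]

    def impacted_by_design_change(design_id):
        """Find the requirements and test cases tied to a changed design element."""
        reqs = sorted({req for req, des, _ in TRACE if des == design_id})
        tests = sorted({tc for _, des, tc in TRACE if des == design_id})
        return reqs, tests

    reqs, tests = impacted_by_design_change("DES-3")
    print("notify requirements authors about:", reqs)   # ['REQ-12']
    print("re-validate test cases:", tests)             # ['TC-101', 'TC-102']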

Automating key aspects of RBT also provides the foundation for mining metrics around code and requirements coverage, and can be a leading indicator of the quality of your requirements and test cases. True benefit from RBT requires a certain level of organizational maturity and automation. The business benefits from increased software quality and predictable project delivery timelines. Thus, by integrating testing with your requirements and design activities, you can reduce your overall development time and greatly reduce project risk.


Sammy Wahab is an ALM and Process consultant at MKS Inc. helping clients evaluate, automate and optimize application delivery using MKS Integrity. Mr. Wahab has helped organizations with SDLC and ITSM processes and methodologies supporting quality frameworks such as CMMI and ITIL. He has presented Software Process Automation at several industry events including Microsoft Tech-Ed, Java One, PMI, CA-World, SPIN, CIPS, SSTC (DoD). Mr. Wahab has spent over 20 years in technical, consulting and management roles from software developer to Chief Technology Officer with companies including Tenrox, Osellus, American Express, Parsons, Isopia Compro and Iciniti. Mr. Wahab holds a Masters in Business Administration from the University of Western Ontario.

Trends in Business Analysis and Project Management to watch for in 2009

The close of the year tends to make one reflect on the past and ponder the future. Here we examine some trends in the business analysis and project management fields for 2009. We invite you to read these trends and consider for yourself our views about what project professionals can do about them.

  1. Convergence of PM and BA Roles. As the economy tightens, organizations will decrease their project budgets. But they still need projects done, so look for organizations to try to combine the roles of the BA and PM on projects. A recent survey on BA Times finds that an equal number of “project professionals” (our term to encompass both project managers and business analysts) feel that the PM and BA roles will be combined on many projects in 2009. Project managers will be asked to do more requirements elicitation and analysis. Business analysts will be required to manage more projects. Oh, and by the way – you will need to do that in addition to your normal roles!

    What Project Professionals can do about it: If you are a project manager, sharpen your requirements elicitation and analysis skills. If you are a BA, learn how to plan and execute projects better, and to manage risks. The other advice we can give is “Concentrate on the work, not the worker.” No matter what your job title, make sure you know the tasks and outputs expected of you to help achieve project and business success.

  2. Greater Emphasis on Requirements in Project Management. The upcoming 4th edition of the PMBOK® (Project Management Body of Knowledge) is due to be released in 2009. The Project Scope Management Knowledge Area contains a new section 5.1 called “Collect Requirements” that was largely written by us (Elizabeth and Rich). It describes a number of elicitation techniques that project managers should be able to use to gather requirements for projects. They are a subset of the techniques described in the Business Analysis Body of Knowledge (BABOK®), so BAs also need to be familiar with them. The section places an emphasis on the Requirements Management Plan and the use of the Traceability Matrix for managing requirements and product scope.

    What Project Professionals can do about it: When the new PMBOK® Guide becomes available, make sure you obtain it and read the section on Collect Requirements. It’s not just because we wrote much of it (though, yes, we are proud of it!). Both PMs and BAs should be aware of what this widely used and referenced guide says about requirements. The PMBOK® has heavily influenced the practice of project management over the past several years, and the new edition promises to do the same.

  3. Change in Requirements Approaches. We see several trends in business analysis techniques continuing into 2009. Here are some to consider:
    1. Slightly less reliance on use cases and a movement towards user stories and scenario-based requirements. Use cases will still be used, especially for complex requirements with intricate interfaces or tricky infrastructure considerations.
    2. Less emphasis on requirements specifications, more emphasis on modeling, prototypes and diagrams. For many reasons, there is a trend away from only formal written requirements specifications. That doesn’t mean written requirements have no place, but it does mean the industry is using additional methods of documenting requirements. 
    3. More requirements management. To control scope and fulfill business needs, there will be continued increase in business analysis and requirements planning in 2009. We see more and more organizations using traceability to control and manage product scope. Both the upcoming PMBOK® and current BABOK® feature this technique and emphasize the use of a traceability matrix.
    What Project Professionals can do about it: Keep using use cases, but bear in mind there are other good requirements analysis techniques. Supplement your requirements specifications with models to document and help you better analyze requirements. Learn about other methods, such as user stories, and use the technique most appropriate for the type of requirement you are analyzing. For example, use data modeling to refine your data requirements.

  4. Increased Use of Agile Approaches and Techniques. Integrating Agile methods into project management and business analysis is a trend that will continue in 2009. Currently, the industry has wide, varied, and inconsistent use of Agile techniques. This is likely to continue as organizations adopt Agile techniques and the industry settles on commonly accepted practices. Agile itself is evolving to meet the needs of the industry. For example, the need for more planning has been recognized, and the concept of a “Scrum of Scrums” to coordinate Agile teams has surfaced. Another trend we’ve noticed is Agile teams incorporating traditional techniques like requirements workshops and more documentation.

    What Project Professionals can do about it: Like any new approach, make sure you learn the generally accepted practices, not just the way a consultant or a single “expert” advises. There are many self-proclaimed experts out there, and some shortcuts on planning and requirements are being taken and justified by being called “agile.” 

  5. BABOK® Continuing to Have an Impact. The practice of business analysis is being positively influenced by the Business Analysis Body of Knowledge (BABOK®). The BABOK® Knowledge Area of Enterprise Analysis is beginning to gel in organizations, as is the need to do requirements planning. We’re seeing more formality and standardization in the methods, say, of developing business cases or using traceability to manage requirements.

    Also, the various elicitation techniques in the BABOK® are being more widely adopted. Interviews and requirements workshops are common forms of elicitation, but we feel the BABOK® is influencing BAs to use additional techniques such as prototyping and interface analysis and to include them in their requirements planning.

    What Project Professionals can do about it: Download the BABOK® from the IIBA and start reviewing it. Use it as an input to recommending business analysis standards in your organization. As some of the first CBAPs (Certified Business Analysis Professionals), we believe in and urge others to pursue certification in business analysis. It helps promote the profession of business analysis in general, and it helps you solidify and integrate the tools and techniques in the BABOK® and “personalize” them to your organization.

  6. Business Intelligence Continues to Grow. This area of information systems has been growing steadily, and 2009 promises no letup. As BI tools and techniques improve and solid benefits are realized, organizations will invest more and more in this area. Since BI relies heavily on tools such as Business Objects or Cognos, the underlying business requirements can easily be overlooked in favor of what the tools can produce.

    What Project Professionals can do about it: Learn how to identify how BI can help your business perform better. BI applications should be actionable and project professionals should focus on true business requirements instead of particular tools. Learning to ask the right questions is key, and anticipating how clients will use their data, although challenging, is well worth the effort. 

  7. “The Economy, Stupid,” as a past political slogan said. A slumping economy tends to affect travel and training budgets, and this one is no exception. That translates into fewer trips to national conferences or travel to out-of-state training classes.

    What Project Professionals can do about it: Attend local conferences that you can drive to.  Many local chapters of PMI and now IIBA are launching Professional Development Days or PDDs. Watch for announcements to these and plan to attend. If you have a conference such as Project Summit/Business Analyst World in your town, take advantage of the opportunity and you will find excellent speakers and workshops there. Have you noticed the big increase in webinars as a way of exchanging information and interacting virtually without travelling? Watch for more of the same in 2009. We plan to offer regular webinars throughout 2009.

    Interestingly, national conferences like the PMI Global Congress North America attracted many foreign workers this year, from expanding economies such as Brazil and Russia. These growing countries will have larger travel budgets than some of their US counterparts. We also see continued rising international interest in PMI and IIBA.


Elizabeth Larson, CBAP, PMP and Richard Larson, CBAP, PMP are Principals, Watermark Learning, Inc. Watermark Learning helps improve project success with outstanding project management and business analysis training and mentoring. We foster results through our unique blend of industry best practices, a practical approach, and an engaging delivery. We convey retainable real-world skills, to motivate and enhance staff performance, adding up to enduring results. With our academic partner, Auburn University, Watermark Learning provides Masters Certificate Programs to help organizations be more productive, and assist individuals in their professional growth. Watermark is a PMI Global Registered Education Provider, and an IIBA Endorsed Education Provider. Our CBAP Certification Preparation class has helped several people already pass the CBAP exam. For more information, contact us at 800-646-9362, or visit us at http://www.watermarklearning.com/.

The Uncertainties of Integration Projects

Integration is burdened by a lot of misconceptions. The uncertainties of an integration project can run deep enough to evoke hundreds of questions specific to a company’s back-end systems. This article focuses on five common questions that we have seen Integration sales reps dance around, possibly because of shortcomings in their products or a lack of knowledge about how Integration really is implemented.

  1. Building Integrations isn’t really easy, right?
  2. How much will I need to spend on professional services, and what particular expertise will that entail?
  3. What is the true time to develop integrations?
  4. What are the real costs/options of the software?
  5. With all of the Standards out there, isn’t Integration pretty straightforward?

Mid-sized organizations may undertake either internal or external Integrations, and projects can cross data formats and system boundaries. One good example is integrating order data from an on-line store (typically in XML or an intermediary database back-end) into an ERP system. It’s what many think of as a singular or monolithic integration: it connects two points, crossing data formats and system boundaries.

However, that’s only part of the picture. You may also want to move the information into a CRM application as a subsequent transaction. This can be thought of as a complex (or multi-step) Integration. Other examples of Integrations involve legacy data movement, EDI, and XML. Interestingly, we are seeing an increase in spreadsheet-to-application integration, as spreadsheet formats emerge as a lower-end data-exchange standard du jour and, as such, represent a tempting staging area for incorporating data into other applications.
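
A much-simplified sketch of the two-step example above is shown below in Python. The XML layout and the ERP/CRM record shapes are hypothetical, and a real project would hand the results to the systems’ own adapters rather than print them; the point is the shape of a multi-step integration: map once into the ERP, then derive the subsequent CRM transaction from the result.

    import xml.etree.ElementTree as ET

    ORDER_XML = """
    <order id="1001">
      <customer>Acme Corp</customer>
      <line sku="WID-1" qty="3" price="9.99"/>
      <line sku="WID-2" qty="1" price="24.50"/>
    </order>
    """

    def to_erp_record(order_xml: str) -> dict:
        """Step 1: map the web-store XML order into an ERP-style sales order."""
        root = ET.fromstring(order_xml.strip())
        lines = [
            {"sku": l.get("sku"), "qty": int(l.get("qty")), "price": float(l.get("price"))}
            for l in root.findall("line")
        ]
        return {
            "order_no": root.get("id"),
            "customer": root.findtext("customer"),
            "total": round(sum(l["qty"] * l["price"] for l in lines), 2),
            "lines": lines,
        }

    def to_crm_activity(erp_record: dict) -> dict:
        """Step 2: the subsequent transaction - post a purchase activity to the CRM."""
        return {"account": erp_record["customer"], "activity": "purchase", "value": erp_record["total"]}

    erp = to_erp_record(ORDER_XML)
    print(erp["total"])            # 54.47
    print(to_crm_activity(erp))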

Why integration matters: islands need to communicate within a company.

Most mid-sized companies have IT departments that are stretched very thin, for any of several reasons:

  • Limited staff and resources;
  • Lack of knowledge and difficulty in finding impartial advice;
  • The cost of solutions;
  • Lack of time to devote to implementation and maintenance;
  • Short-range management perspectives;
  • A lack of understanding of the benefits that IT can provide, and how to measure those benefits;
  • Lack of formal planning or control procedures.

Without these limitations, many if not all of these questions would be irrelevant. But the limitations do exist, and they lead to inevitable questions.

 

[Image: uncertainties1.png]

Q: Building integration isn’t really easy, right?

A: This is the ugly truth about integration projects. Typically, they’re tough going. There are tools that can help, but you still need to be prepared to get in there and roll up your sleeves.

There are several considerations, not the least of which are the data formats to be used (how well do you understand the data formats and processes?), the processes to define, and the communications methods. If you are doing an external Integration, how cooperative is your partner (are there well-defined specs)? What information have they provided? And what skill sets are available in-house to achieve this understanding?

Integration does not need to be difficult if customers have an understanding of the following “whats” or variables:

  • The ability to work with data definitions and to create definitions that can be re-used;
  • An easy-to-use mapping tool (graphical, drag-and-drop, connecting source and target information without writing code) – see the sketch after this list;
  • Business process management, something that is often overlooked. We find that companies are conscientious about the data but often forget about the processes, and how they can improve them or accomplish more with what they have;
  • A flexible way to use communications adapters. Without them, a company’s infrastructure can really be over-burdened.
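
Here is the sketch referred to above: a minimal, hypothetical illustration of what a reusable, declarative field mapping amounts to underneath a graphical mapping tool. The source and target field names and the transforms are invented for the example.

    # One reusable mapping definition: (source field, target field, transform).
    ORDER_TO_ERP_MAP = [
        ("cust_name",  "CUSTOMER", str.upper),
        ("order_date", "ORDER_DT", lambda v: v.replace("/", "-")),
        ("total",      "AMOUNT",   float),
    ]

    def apply_map(source_record: dict, mapping) -> dict:
        """Apply a mapping definition to one source record."""
        return {target: transform(source_record[source])
                for source, target, transform in mapping}

    web_order = {"cust_name": "Acme Corp", "order_date": "2008/11/03", "total": "54.47"}
    print(apply_map(web_order, ORDER_TO_ERP_MAP))
    # {'CUSTOMER': 'ACME CORP', 'ORDER_DT': '2008-11-03', 'AMOUNT': 54.47}

Because the mapping itself is just data, the same definition can be re-used across integrations and maintained without touching the code that applies it.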

So, the key to making the “how” easier is understanding what the tools can do for you. There is a direct link between the capabilities of the tools and the ease of building the integration.

Q: How much and what kind of professional services will I really need?

A: It depends on a few key factors. The first, most logically, is the size of the integration project; then, the capabilities of the tools used; and especially the in-house skill sets.

For large integration projects, “point solutions,” especially where manual programming is involved, can run from 4x to 10x the cost of the software. The sophistication or complexity of the tool, versus in-house skill sets, tends to define the need for consulting. The in-house capabilities need to be aligned with the capabilities of the tool. This may not be an obvious point, but it’s important to keep in mind. You don’t give an 8-cylinder roadster to a new driver, nor send a jet pilot to operate a back-hoe; match the tool to the skills and experience of the user.

Users can, in fact, leverage a pilot project as a mentoring stage, something we see pretty routinely. The real key, where I have seen the best ROI, is to define a small pilot-project and engage a professional services group to mentor your team. This allows a skill-set transfer, won’t consume a large amount of resources, and produces a usable Integration, rather than a throw-away.

In other words, the pilot project doesn’t go away after implementation, it actually goes into production. The concept has been proven, and the results are immediate. Because of the capabilities of contemporary Integration tools, these pilot projects only last a few weeks. They can be an effective way to measure the rate of knowledge transfer and still end up with a usable result.

Q: What is the true time to develop integrations?

A: Several factors tie in to the first question (Integrations aren’t really easy!). I’ve worked with companies where the IT folks know the new systems inside and out. They were able to grasp the tools and run at full speed. We also have the converse situation, where a smaller company has a savvy business analyst who is thrown into the mix as the Integration person. They have “some” knowledge of the processes, a passing familiarity with the data formats, and a limited background in the notions of mapping and data conversion.

Then, we have “the rest of us.” So, this is too broad a question to answer generically. There are usually metrics available – for example, to map an EDI 4010 Purchase Order into a back-end database. Even though there are tons of DBs out there, you are still doing similar things (header/detail records, etc.). Professional services teams usually have a good handle on these types of metrics.

The answer lies within the capabilities of the tool and how easy it is to understand and start leveraging it. A good example is a tool that can take a flat-file document and allow you to easily “discover” the structure of the document. That would save you from having to build the data definition by hand.  Web services have been designed from the ground up to aid this process by providing a schematic of the services and request/response data formats within a descriptor document called a WSDL.  This self-describing nature is also inherent in XML documents, but requires a dictionary to leverage the business relationships of the data.
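
As a toy illustration of such structure “discovery” – not a substitute for a real profiling tool – the Python sketch below infers field names and rough types from a delimited flat file with a header row. The sample records are made up; real tools also handle fixed-width layouts, copybooks, and files with no header at all.

    import csv, io

    SAMPLE = ("PO_NUM|ORDER_DATE|QTY|UNIT_PRICE\n"
              "4501|2008-11-03|12|4.75\n"
              "4502|2008-11-04|3|19.00\n")

    def discover_definition(flat_file: str, delimiter: str = "|") -> dict:
        """Infer field names and rough types from the header row plus sample data."""
        rows = list(csv.reader(io.StringIO(flat_file), delimiter=delimiter))
        header, samples = rows[0], rows[1:]

        def infer(values):
            if all(v.replace(".", "", 1).isdigit() for v in values):
                return "decimal" if any("." in v for v in values) else "integer"
            return "text"

        return {name: infer([row[i] for row in samples]) for i, name in enumerate(header)}

    print(discover_definition(SAMPLE))
    # {'PO_NUM': 'integer', 'ORDER_DATE': 'text', 'QTY': 'integer', 'UNIT_PRICE': 'decimal'}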

Understanding your business goes a long way towards developing integrations. That is, besides understanding the company’s own business processes, it helps to also have an understanding of the data formats – awareness, from the general to the specific.

There are no absolute answers, just a range of times that have been observed across projects. However, your first pilot project should definitely be scoped to take only a few weeks to a month. You want to reach completion without burdening the team with a large knowledge transfer upfront.

Integration tools can help accelerate the process, and a few key principles come into play:

  • Easy-to-use visual tools reduce learning curve
  • Reusable objects improve development efficiency
  • Integration tools can reduce complexity and therefore reduce dependency on consultants

Testing the Integration is a key area that gets lost in the time estimates, so always keep that in mind.

Q: What are the real costs/options of the software?

A:  By their very nature, most sales people want the prospect to buy the whole enchilada. They want the prospect to think of integration architecture and enterprise-wide solutions together. The fact of the matter is that there are packages designed for the company to buy just what they need at the present moment, and allow them to grow into the full solution later.

Oftentimes, a sales rep has the option to sell you less than the full-blown system, but may not let you know that until you ask. Depending on your corporate setting, you may be addressing a mandate such as an EDI standard from a large trading partner. In that case, you are looking at a tactical solution with some interest in moving into Integration at a later point.

Given the complexity of enterprise-systems today, it can also be overwhelming to think, from the start, about all the touch-points where Integration would be a benefit. Being able to buy a “slice” of a full Integration solution is a valuable way to solve your immediate need, and introduce you to the technology at a hopefully lower price-point.

Finally, be cautious of hidden costs, such as training time and ramp-up.

Q: With all of the Standards out there, isn’t Integration pretty straightforward?

A: Yes – one interesting thing about standards is that there are a lot of them (see graphic for standards relevant to Integration), and they all want you to do things their way. That’s why they call them “standards.” But there are no “formal” integration standards, just technology standards. There are ways of thinking about Integrations, templates or patterns to follow, but no hard-and-fast rules on how to do Integration. It’s not just about connecting the dots.

Quite often, standards are simply guidelines. Understanding your data and processes is a fundamental requirement when thinking about Integration.

Let’s talk about one of the real reasons why Integration exists; it’s what I call the “Standards Oxymoron” – the notion of standards regarding data. There are standards for what the data looks like, how it is transmitted, and so on. There is one great thing about these standards: they are so popular that you have a lot to choose from. Every flavor of data has someone attaching a standard to it, from XML (RosettaNet, large-customer proprietary formats) to EDI (X12 and EDIFACT). That is why we are talking about Integration; if we just had one common layout for files and databases, we wouldn’t have a lot of these issues.

 

[Image: uncertainties2.png]

 
Another aspect is the agility to deal with new data formats as they emerge. Spreadsheet formatting is at the forefront of this movement. Who would have predicted spreadsheets as a common data foundation for Integration?

Being equipped with these questions will better prepare a company to deal with the inevitable promises and uncertainties of an integration initiative. Considering a few points will be especially helpful:

  • Start with a pilot project and mentoring
  • Think strategically: where is your business going?
  • Ask yourself, “Do I need the whole picture or just the key pieces first?”
  • Your existing team, with their existing skills, can be successful.

Vendors ranging from software companies to integrators to some-of-each will do their best to sell their approach. Knowing the questions will better equip you to sift through the answers.


Mark Denchy is Director of Product Management at EXTOL International. With more than 20 years’ experience in the software field, he has a wide-ranging background in application development and platform integration. He works with several clients on leveraging new technologies that enable integration.

This article originally appeared in DM Review

ITIL for BAs. Part IV: The Service Catalog

The Service Catalog is one of the primary artifacts of an ITIL-based organization and its orientation toward the business as an IT Service Provider.  The Service Catalog is the source of information on all IT Services in operation or being prepared for operation, and it identifies status, interfaces, dependencies, delivery levels, and other attributes.  And in the spirit of encapsulation, the Service Catalog is written in the language of IT’s customers, free of the details about how the service is delivered from an infrastructure point of view.

Think of the Service Catalog as the IT “menu” and a Service Level Agreement as a particular customer’s order from that menu.  (Service Level Agreements and the Service Level Management process will be the topics of a future article.)

For the BA, the Service Catalog is in one sense an inventory of IT building blocks that contribute to the creation of business solutions.  The BA then is interested in, and dependent upon, the catalog in terms of its completeness, accuracy, and its suitability as the basis for specifying the requirements of an IT Service to meet business requirements.
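
As a minimal sketch – with invented services and values, and attribute names borrowed from the list above (status, interfaces, dependencies, delivery levels) – catalog entries can be pictured as structured data that a BA might query when scoping a change:

    CATALOG = {
        "Customer Order Processing": {
            "status": "operational",
            "interfaces": ["web storefront", "warehouse EDI"],
            "dependencies": ["Payment Processing", "Customer Data Management"],
            "availability_target": "99.5% during business hours",
        },
        "Payment Processing": {
            "status": "operational",
            "interfaces": ["card gateway"],
            "dependencies": [],
            "availability_target": "99.9%",
        },
    }

    def services_depending_on(service_name: str) -> list:
        """The kind of question a BA asks when a change to one service is proposed."""
        return [name for name, entry in CATALOG.items()
                if service_name in entry["dependencies"]]

    print(services_depending_on("Payment Processing"))   # ['Customer Order Processing']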

[Image: itil1.png]

Because of its importance in expressing IT’s purpose and value, the Service Catalog must be maintained through the Service Catalog Management process.  In looking at the key triggers, inputs, activities, and outputs of Service Catalog Management, it is clear that the BA has much to contribute toward the content and maintenance of a high-quality catalog:

Service Catalog Management Activities

  • Agreeing on and documenting the definition of an IT Service – that is, “What do we mean by the term ‘service’?”  Services must be defined to a level of granularity and detail that supports their use in business cases in terms of functionality, risks, costs, and other attributes of interest to the customer, so the definitions decided on by IT need to be useful to the BA.
  • Interfacing with the business to identify and document customers’ dependencies on the IT Services and those customers’ needs relative to business continuity (i.e., recoverability).  When IT is faced with the need to prioritize (and when is it not?!), decisions need to be based on the relative impact on the quality and availability of IT Services – guidance that comes from those knowledgeable about the business cases for having those Services in operation.
  • Interfacing with the business to ensure that the information in the catalog is aligned to the business.  The Service Catalog needs to be accessible by various business stakeholders, and just as with business solutions in general, the BA would represent to IT any business requirements regarding the operation of the catalog.

Key inputs to the Service Catalog Management process are clearly in the BA’s domain and include:

  • Information on business strategy, financial plans, and current and future requirements
  • Business Impact Analysis – information on the risk, impact, and priority of each service

Much of the BA’s involvement in this process is indirect, addressed as part of the normal set of activities throughout the business solution life cycle.  In fact, many IT organizations have survived without any Service Catalog, so its role and value can be elusive.  Let’s conclude with a few relevant points:

  • The goal of ITIL is to encapsulate all of IT in terms of IT Services, expressed in the language of IT’s customers – in other words, ITIL helps an IT organization separate the “what we do” from the “how we do it” – lifting from the Customers’ shoulders the significant burden of having to understand the IT infrastructure and its commensurate complexities, costs and risks.
  • The IT organization’s mission should be to deliver IT Services with a “service culture”: every IT contributor should be able to see his or her contribution to service quality and availability rather than working solely within his or her particular silo.
  • BAs themselves are also very driven to make a distinction between the what (business requirements) and the how (the solution to satisfy those requirements).

Back to the “menu” analogy we started with – imagine a restaurant where you can order a meal only after you understand the details about how the kitchen operates, what ingredients it has in stock, which kitchen tools are available, etc.

What about your IT organization?  What is the maturity of the IT Service Catalog?  Have you participated in the design and operation of such a catalog?