
Preventing Disasters: How to Use Data to Your Advantage

The late Lew Platt, former CEO of Hewlett-Packard, once stated, “If only HP knew what HP knows, we would be three times more productive.” This is a typical situation in large organizations, where far too often disasters arise from a lack of awareness. Critical information is available in the organization, but goes undetected, is not communicated or is blatantly ignored.

Take the recent mortgage meltdown, for instance. The banking industry has a wealth of data on consumers, robust credit risk models, as well as lessons learned from the past. Their analytics told them which loans were too risky according to traditional models. Yet, they decided to relax their standards, ignore the data…and the rest is history. Or, take the recent PR debacle around Southwest Airlines’ plane inspections. The FAA had inspection logs that could have told them that the planes were passing with flying colors at unprecedented rates, yet no one suggested conducting a site visit to see if the airline was actually performing those inspections. And when low-level employees reported issues to their managers, that information was not passed on. Fortunately, in that case, a tragedy was avoided.

If there is a question we should be asking in the current economic and regulatory environment, it is “Why does accountability so often fail, and what role does analytics play in preventing these disasters?” Organizations need to understand why they fail to detect early warning signs, how to filter and monitor available data to create actionable information, and how correctly applying analytics can turn data into knowledge. That knowledge can then prevent disasters and increase competitive advantage.

Why Accountability Fails

The repeated disasters that stem mainly from failures in accountability arise for the following reasons:

  • Large, complex organizations (or environments) make it difficult to know what is happening “on the ground” and detect significant changes in the environment.
  • Very often, players in the organization (managers, employees, others) receive incentives only for presenting a positive picture and anchor on how things have worked in the past.
  • Organizations measure and monitor only past-focused outcome measures, which indicate a disaster only once it has already occurred.
  • Many organizations lack the skills necessary to manage data, much less apply analytical techniques to make sense of that data and keep an accurate view of the current operating reality.

The Impact of Anonymity

The lack of awareness that often brings disaster stems from the anonymity that characterizes today’s organizations. A hundred years ago, most business transactions were conducted face to face. Business owners walked the shop floor. Customers who bought eggs from the village shopkeeper knew not only the shopkeeper, but also the farmer who raised the chickens. Loans were made to people the banker knew personally, and regulations were made and enforced by local officials.

The more complex an organization becomes, the less transparency there is, and the more difficult it becomes to make good decisions. Consumers and producers don’t know one another. Decision makers and implementers don’t have direct lines of communication. By the time information reaches a decision-maker at the top, it is usually highly filtered, and often inaccurate. The information and implications have been spun so as not to upset management or cast aspersions on employees, and therefore fail to present the reality of the situation.

These conditions not only impair the organization’s ability to understand what is currently going on, but also remove any ability to detect change in the environment. In extreme cases, outside information is effectively shut out. U.S. automakers in the 1970s looked out the executive suite window into their parking lot, saw only U.S.-made cars, and concluded that Japan was not a threat. Meanwhile, dealers in California had significant early signals in their sales numbers that Japan was indeed a threat to the U.S. auto industry.

Incentives for Bad Behavior

An even more insidious problem is that disasters often arise because organizations have actually encouraged behaviors that lead to them. The filtering of information cited above is actually a very mild form of this. Employees and managers are rewarded for highlighting what they’ve done well, so why would they ever identify something that is going wrong on their watch?

We tend to blame those who bring bad news, whether they deserve it or not. Consider any major whistle-blower of the past. The amount of scrutiny, negative media attention and damage to their career is enough to dissuade most people from taking a stance. And yet those same people brought to light, and often prevented, significant disasters in the making.

So many organizations reward those who bring in good short-term results, prove out the organization’s current business model and don’t ruffle too many feathers. In return, we get exotic financial instruments in an attempt to make quarterly revenue, low standards on food or workplace safety and fudging on project and financial status reports. The contrarian voices pointing out the impending disaster go unheard and unheeded, and changes come too late to matter.

Driving While Watching the Rear View Mirror

The vast majority of the data that organizations look at represent outcomes that are past-focused. The traditional financial statements show the outcomes of business activities (revenues, expenses, assets, liabilities, etc.) while nothing in those statements measures the underlying activity that produces those outcomes. Hence, nothing gives any indication of the current health of the organization.

Kaplan and Norton sought to remedy this with their Balanced Scorecard approach. By focusing on the drivers of those outcomes, the organization should be able to monitor leading indicators to ensure the continued health of the enterprise. Relatively few organizations have fully adopted such an approach, and even those few have struggled to implement it fully. Too often, managers do not fully understand how to impact the metrics on the scorecard. And as time moves on, the scorecard can fail to keep up with changing realities, suggesting relationships between activity and outcome that no longer exist.

Numeracy?

“Numeracy” is the ability to reason with numbers. John Allen Paulos, Professor of Mathematics at Temple University, made this concept famous with his book Innumeracy, in which he bemoans how little skill our society has in dealing with mathematics, given how dependent upon it we have become. Organizations today struggle to maintain a workforce that has the skills to manage the data their operations generate. Once the data have been wrangled, the analytical reasoning skills required to make sense of that data are lacking.

Analytics provides powerful tools for dealing with massive quantities of data, and more importantly, for understanding how important relationships in our operating environment may be changing. But without a strongly numerate workforce, organizations cannot apply these techniques on their own and have a very limited ability to interpret the output of such techniques. A lack of good intuition and reasoning with numbers means that many warning signals go undetected.

What Drives Organizational Outcomes?

Organizations that want to prevent disasters and increase competitive advantage first need to define what constitutes critical information – in other words, what really matters to the organization. Prior assumptions have no place in that determination. Let’s say, for example, a company is proposing to increase its customer repeat rate by increasing satisfaction with its service. But does that relationship between customer repeat rate and satisfaction with the service really exist? And to what degree? Amazon.com, for example, does not simply assume that a person who buys a popular fiction book will want to see a list of other popular fiction books. Rather, it analyzes customer behavior. Thus, someone who is ordering Eat, Pray, Love might see an Italian cookbook, a Yoga DVD and a travel guide for Bali as recommendations because other people who bought that fiction book also bought those other items.
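The Amazon example, recommending from observed co-purchase behavior rather than assumed similarity, can be sketched as a simple item co-occurrence count. The order data and item names below are hypothetical illustration only; production recommenders are far more sophisticated:

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history: each order is the set of items one customer bought.
orders = [
    {"Eat, Pray, Love", "Italian cookbook", "Bali travel guide"},
    {"Eat, Pray, Love", "Yoga DVD"},
    {"Eat, Pray, Love", "Italian cookbook"},
    {"Yoga DVD", "Bali travel guide"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return the items most often bought together with `item`, best first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("Eat, Pray, Love"))
```

The point is that the recommendation is derived from measured behavior in the data, not from a prior assumption about what "similar" customers want.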

The steps to decide what matters are:

  1. Decide what the organization wants to accomplish.
  2. Identify the activities (customer behaviors and management techniques) that appear to produce that outcome.
  3. Test and retest those relationships, collecting data from operations to measure the link between activity and outcome.

Once an organization has identified what constitutes its key activities, how can it find the information it needs to monitor them?

  1. Find the points in the value chain where the key actions have to occur to deliver the intended outcomes.
  2. Collect critical information at, or as close as possible to, those points. The closer an organization can get to the key points of value delivery, the more accurate the information it can collect.
  3. Continuously look for the most direct and unfiltered route to obtain the richest, most consistent information on each key point of the value chain.
  4. Keep testing each assumption by asking the question, “What surprising event could I see early enough to take corrective action?”

Stop Trying to Prove Yourself Right

Several traditional ways of doing business blind organizations to warning signs of potential disasters. First among these is looking for data that confirms that all is well. Although extremely counterintuitive, it is critical to look for evidence that things are not all right. Ask the question, “if something were going to cause failure, what would it be and how can it be measured?” If it can be measured, then it can be corrected early and failure can be avoided. Rather than indicating what has gone right in the past, these measures contain warnings of what could go wrong in the future.

To see the early warning signs, follow this process:

  1. Ask what assumptions are being made in the process of executing strategy to deliver value. For example, if the goal is to increase the efficiency of inspections, is there an assumption that inspectors will become more efficient while still adhering to the same high quality standards? Or, in a call center, is there an assumption that reps can decrease call handle time and still provide superior service?
  2. These assumptions are alert points where failure might occur. Don’t wait for the final outcome, but track, measure and monitor each assumption to make sure it is playing out successfully. This process is well known to project managers. They don’t just design Work Breakdown Structures and Critical Paths and then wait around for the end date to see if the project was successful. As soon as a task begins to exceed its scope, the impact is assessed all the way down the line.
  3. Keep testing each assumption by asking the question, “What surprising event could I see early enough to take corrective action?”
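As a minimal sketch of step 2, tracking an assumption instead of waiting for the final outcome, the code below flags the call-center assumption ("handle time can fall while service quality holds") as soon as the data contradict it. The metric names, numbers and threshold are invented for illustration:

```python
# Weekly call-center metrics (hypothetical numbers). The assumption under test:
# average handle time can fall while customer satisfaction stays at or above the floor.
weekly_metrics = [
    {"week": 1, "avg_handle_time_sec": 310, "satisfaction": 4.5},
    {"week": 2, "avg_handle_time_sec": 290, "satisfaction": 4.4},
    {"week": 3, "avg_handle_time_sec": 255, "satisfaction": 4.0},  # warning sign
]

SATISFACTION_FLOOR = 4.2  # illustrative threshold, not an industry standard

def check_assumption(metrics, floor=SATISFACTION_FLOOR):
    """Return the weeks where the assumption broke down, i.e. handle time
    dropped from the prior week but satisfaction fell below the floor."""
    alerts = []
    for prev, cur in zip(metrics, metrics[1:]):
        faster = cur["avg_handle_time_sec"] < prev["avg_handle_time_sec"]
        if faster and cur["satisfaction"] < floor:
            alerts.append(cur["week"])
    return alerts

print(check_assumption(weekly_metrics))
```

An alert in week 3 gives management a chance to correct course long before the quarterly outcome numbers reveal the damage.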

Organizations that do this well are not operating with a negative, doom-and-gloom perspective. Rather, they want their positive outcomes so badly that they look for data that might be telling them something is going wrong so they can correct it before it is too late. They are willing to “Fail Fast” and “Fail Forward,” keeping the failure small to ensure large successes.

People Power the Process

Creating knowledge from data to prevent disasters depends on both technology and human skill. Computers are powerful tools that can help collect, store, aggregate, summarize and process data, but the human brain is needed to analyze the data and turn it into actionable information. It’s this human factor where the biggest gap exists in most organizations. Finding people who can perform the required analysis is becoming increasingly difficult. A spreadsheet is just a pile of data until someone applies critical thinking, adding subjective experience and industry knowledge to derive insights into what the numbers really mean.

Organizations must invest in developing these skills in their workforce. Here’s how:

  1. Provide employees with the training, job assignment, education and mentoring opportunities needed to develop their analytical skills, industry expertise and decision-making acumen.
  2. Subject decision-making to evidence-based approaches, providing feedback to improve future decisions.
  3. Ensure employees have the tools they need to manage the volumes of data they are expected to digest and act upon.

Blame Is Not an Option

In his book The Fifth Discipline, Peter Senge said that a “learning organization” depends on a blame-free culture. In other words, when a problem arises, people need to refocus from laying blame or escaping blame and start fixing the problem.

In today’s data-rich world, preventing disasters large and small requires monitoring and filtering through the large volumes of information that stream into organizations every day to find early warning signs of imminent failure. Intellectually, just about everyone will agree that it makes sense to look for what could go wrong. Emotionally, however, it’s another matter. It is both counterintuitive and intimidating to ask managers to search out constantly how the organization is failing. Establishing a blame-free culture is the final frontier to create a new awareness and encourage people to test assumptions, make better use of analytics and communicate information without fear.


Charles Caldwell is Practice Lead, Analytics, with Management Concepts. Headquartered in Vienna, VA, and founded in 1973, Management Concepts is a global provider of training, consulting and publications in leadership and management development. For further information, visit www.managementconcepts.com or call 703 790-9595.

10 Ways to Use Requirements to Melt an Executive’s Brain

So you’ve been tasked to get requirements on a strategic project, and you’re thinking to yourself, “How can I make my business requirements documents as incomprehensible as possible?” Going this route may not be just a job security thing. Making yourself indispensable as the interpreter of requirements seems to be the traditional route of delivery and getting buy-in. Just keep running the requirements process until someone gets desperate and finally signs off on the spec in the vain hope of getting something useful for their effort before the end of Q4 2014.

So, some tips and traps for those of you looking to truly perplex and bafflegab your bosses:

  1. Just give a list of “the system shall …” statements. Having a few hundred of these statements with no accompanying business process descriptions as a common point of reference to help navigate the swamp will keep the average executive bogged down for days. Their paranoia that something critical got missed might just match your paranoia when they show up at your desk to have requirement 143 explained to them.
  2. Experiment with synonyms when describing data objects. No one wants to hear about ‘customers’ repeatedly. Why not make the document more dynamic and talk about prospects, or accounts, or valued relationships, or partners, or something more interesting? You just shouldn’t be overusing simple words repeatedly … it’s boring!
  3. The use of UML sequence diagrams and class diagrams will help prove to that doubtful executive that you truly understand systems development and have a deep understanding of industry standards. Just loading up their inbox with hefty documents packed with these pretty pictures will have them panting for more.
  4. Remove all evidence of traceability between one set of requirements and the next. The idea is to produce a set of business objectives and scoping documents in one format and using one set of techniques. Then get your business requirements using something completely different. Go for something truly unique in the system specs. The idea is to show your diverse understanding of ALL the different approaches available to the business analyst. Besides, they are signing off on each document separately anyway… there is no real need to look back at what came before.
  5. Be snappy with solutions, no matter what the request. Within three minutes of the executive starting to describe the grand vision, cut him or her off and begin explaining how the solution is easily delivered by looking at the existing legacy applications. Your attention to promoting existing corporate assets will be well received.
  6. Tell your CFO that you want to use Extreme Programming on the Basel II financial compliance project. The idea of programmers eagerly jumping into the breach to resolve important issues for the business should warm her heart.
  7. Tell your outside sales force they need to be dedicated to a requirements discovery session for three weeks. Your attention to thorough detail will be appreciated by the SVP sales.
  8. Adopt PowerPoint as your primary documentation engine. A few simple screen mock-ups to show the user interface will immediately grab everyone’s attention as delightfully artistic. Besides, the workflow behind it should be entirely self-evident to any self-respecting programmer that’s been with the company for a few dozen years.
  9. Make sure the first dozen pages contradict the second dozen pages. Every executive should be presented with options. How could they not like alternatives for the business?
  10. Drop out key sections of the document and pop these into secondary or tertiary documents. Then refer generally to the existence of these other documents, but don’t put in the actual page, or a hyperlink. This is a great way to ensure they read the whole thing.

Hey guys – have fun and add your own. I wish you all great success.


Keith Ellis is the Vice President, Marketing at IAG Consulting (www.iag.biz) where he leads the marketing and strategic alliances efforts of this global leader in business requirements discovery and management. Keith is a veteran of the technology services business and founder of the business analysis company Digital Mosaic which was sold to IAG in 2007. Keith’s former lives have included leading the consulting and services research efforts of the technology trend watcher International Data Corporation in Canada, and the marketing strategy of the global outsourcer CGI in the financial services sector. Keith is the author of IAG’s Business Analysis Benchmark – the definitive source of data on the impact of business requirements on technology projects.

Why Visualize Requirements?

How many times have you been in a meeting discussing a set of requirements, a methodology, a project plan, etc., when someone has gotten up from their chair and said, “Where’s the whiteboard? Let me draw what I mean”?

I can tell you, for me it has been plenty!

Whilst requirements specifications are a great way to document the detailed information related to a new or existing product’s functionality, we all live in a time-poor society. Few of us have the time to trawl through large documents, extract the information we need, and then start the seemingly endless e-mail threads discussing the individual use cases associated with each requirement, threads full of messages like “What did you mean by X?” and “I meant X and Y, but I think you thought I meant Z!” Instead, why not adhere to the adage that a picture tells a thousand words and, instead of page after page of documents, create a visual representation of those requirements, hopefully communicating a thousand words in a single picture?

However, what we must remember is that visualization of requirements can vary in its meaning. Some people may view requirements visualization in the same context as simulation diagrams, whilst others interpret visualization to mean simple use case diagrams or business process flows, typically created in an MS Visio-type tool. For me, all of these usage contexts can represent visualization, so instead of trying to classify visualization into one genre I think it is best to view it on a scale, with simple flows at one end and high-end simulations at the other, where the user selects whichever method is most appropriate at any given time. For example, if you are trying to show how a user will move through an application to make a purchase, then using MS Visio to define process flows may be enough. However, if you are trying to envisage how a new UI (User Interface) may look, then mockups and richer content visualizations would serve you better. Whichever method is selected, a number of benefits come from visualization. These include:

  • Flexibility and usability – flow diagrams can be easier to navigate, helping readers find content
  • Mistakes can be easier to identify in a visualization
  • Easier to identify potential parallelisms between requirements and business processes
  • Easier to spot missing use cases in a business process
  • Increase understanding of the requirements themselves
  • Increase understanding of the dependencies between requirements
  • Visualization of business flows can provide a first bridge to Business Process Models or SOA repositories
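At the simple end of that scale, a flow can even be generated straight from an ordered requirements list. The sketch below emits Graphviz DOT source for a hypothetical purchase flow; the step names are invented, and any diagramming tool that reads DOT could render the result:

```python
# Hypothetical purchase-flow steps extracted from a requirements document.
steps = ["Browse catalog", "Add to cart", "Enter payment", "Confirm order"]

def to_dot(steps):
    """Render an ordered list of steps as Graphviz DOT source for a simple flow."""
    lines = ["digraph purchase_flow {", "  rankdir=LR;"]
    for a, b in zip(steps, steps[1:]):
        lines.append(f'  "{a}" -> "{b}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(steps))
```

Even a generated diagram this simple gives a meeting a shared picture to point at, which is the whole argument for visualization in the first place.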

Now that we have explored some of the benefits of visualization, the question becomes: when should it be used? Should we visualize every requirement we write, or just some? And if we are going to be selective, which requirements should we choose?

In my opinion there are a number of questions we can ask ourselves which can help to determine when to and when not to visualize. These include (and there are many more):

  • Type of development method – we need to ask ourselves whether requirements visualizations fit in with the need for more agile and rapid requirements definition, or whether they will add more time to the development process.
  • Complexity of the requirement – if a requirement has too many sub-requirements, will this create a “spider’s web” diagram that overcomplicates the definition of the requirement?
  • Type of requirement – should we visualize the user story only and define the functional requirements associated with this user story as text or do we want to visualize all requirements?
  • Risk level of the requirements – should only high priority or high risk requirements be candidates for visualization?

It is important to note that I am not saying requirements visualization is a “panacea” for enabling effective business and IT communication, but it will act as a good facilitator, helping to initiate a better degree of communication and understanding between the two parties.

So now the decision is yours. Why not try visualizing requirements and feed back to the group how things go.


Genefa Murphy works and blogs with Hewlett-Packard where she is Product Manager for Requirements Management. This article first appeared in HP Communities.

© Copyright 2009 Hewlett-Packard Development Company, L.P.

ITIL for BAs – Part XI: BAs and ITIL Service Transition Processes

In our two earlier posts we considered Service Transition from a policy point of view, and the significant extent to which those policies support the BA’s objectives in terms of keeping requirements management and solution development bound together tightly.

When looking at Service Transition from a process point of view, the relationship between ITIL and business analysis is as evident as could be.  The ITIL processes most associated with Service Transition are:

Transition Planning and Support

Plan appropriate capacity and resources to build, test, release, deploy, implement, and place into operation new or changed services, while minimizing adverse impact on services; manage release-related risks

Change Management

Use standardized methods and procedures to efficiently and promptly handle changes that deepen business/IT alignment, and in such a way as to maximize value and minimize incidents, rework, and disruption

Service Asset and Configuration Management

Identify, control, record, report, audit, and vary service assets and configuration items (IT components); protect the integrity of the assets and components as well as the information about them; provide asset and configuration item information to other processes in order to increase the efficiency and effectiveness of those processes

Release and Deployment Management

Plan the release packages and obtain agreement with stakeholders; ensure releases can be tracked, installed, tested, verified, and/or uninstalled or backed out if necessary; manage the risks associated with releases; ensure that necessary communications and knowledge transfer take place with stakeholders and users

Service Validation and Testing

Plan and implement a structured validation and test process that yields objective evidence that a new or changed service meets the functional and Quality of Service requirements of the stakeholders

Evaluation

Provide a consistent and standardized approach to determining the performance of a service change, relative to the predicted as well as agreed to performance targets, with the purpose of understanding and managing deviations

Knowledge Management

Ensure that the right information is delivered to the appropriate place or person at the right time, in the format desired, in order to contribute to effective decision making

BAs familiar with the BABOK and with typical activities around testing, quality assurance, verification and validation would certainly be at home working within the above processes.

After all, the goals of both business analysis and ITIL, certainly from an IT point of view, are the same: develop, implement, and operate IT-based solutions that meet business needs.

We’re approaching the home stretch!  There will be three more articles in this series:

  • Service Operation
  • Continual Service Improvement
  • What It All Means (or Should Mean?) to You

Until then, I hope that wherever you are, you are enjoying a pleasant month of May!

Avoiding Conflict between the PM and BA. Part 1

At a recent conference I sat next to a project manager who observed, “My organization hired a new consulting company to do business analysis work. They’ve completely taken over. Now they do a lot of the work that I used to do, such as meeting with the sponsor to uncover the business problems, determining what we’re going to do on the project…I can’t believe it! I feel like I’m being treated like a second-class citizen!”

While this complaint pointed out some organizational issues, it also got me thinking about the role of the PM and the BA in the early stages of a project. The two bodies of knowledge, the BABOK® Guide 2.0 and PMBOK® Guide – Fourth Edition, each allude to work being done at the beginning of the project, so it is not surprising that conflict between these two roles can arise.

It’s easy for me to say that spelling out roles and responsibilities helps avoid this conflict. Using a responsibility assignment matrix, such as a RACI, is helpful, but it may not be enough. Looking back it seems to me that as both a BA and a PM, I never spent a lot of time dwelling on this issue. When I was a BA I didn’t have a project manager, so in a sense I was able to avoid conflict. When I became a PM, I was extremely fortunate to work with strong BAs who took the initiative to define their own roles. Below I have listed what worked for us and why. Next month I’ll delve more into the pitfalls and some examples of less successful projects.

We worked on a project which had both business and technical complexity. We were introducing many new business processes as well as new technology. The project affected many business units within the organization, and the risk was high. Below are a few of the factors that I believe contributed to a smooth relationship between the BA and me (PM), and ultimately to a very successful project:

  • We each worked with our strengths. As a PM, mine was focusing on delivering the product (new software) when we had promised it, within the approved budget, and with frequent communication with the sponsor. As a BA, hers was an incredible ability to understand the real business need: why the project was being undertaken, what was happening currently, and what we needed to recommend to the sponsor, which was different from what the sponsor had requested. Without her, I would have accepted the solution originally requested by the sponsor, a solution which would not have solved their business problem.
  • We kept the good of the organization in front of us at all times. There simply was no grab for territory, because it wasn’t about us. It was about delivering a product that worked, on time and within budget. One of the team members observed that she felt like we were giving birth. The good news was, though, that we didn’t have to suffer through teenage years!
  • I was focused on the date and budget, so my natural tendency was to want to do the project quickly rather than correctly. Fortunately I had the good sense to listen to the BA and slow down when I needed to, which was usually at her insistence. Was this easy for me? Not at all! Am I glad I did? You betcha!
  • I completely trusted the BA. But the whole topic of trust is a topic for a different blog on another day.

Elizabeth Larson, PMP, CBAP, CEO and Co-Principal of Watermark Learning (www.watermarklearning.com) has over 25 years of experience in business, project management, requirements analysis, business analysis and leadership. She has presented workshops, seminars, and presentations since 1996 to thousands of participants on three different continents. Elizabeth’s speaking history includes, PMI North American, EMEA, and Asia-Pacific Global Congresses, various chapters of PMI, and ProjectWorld and Business Analyst World. Elizabeth was the lead contributor to the PMBOK® Guide – Fourth Edition in the new Collect Requirements Section 5.1 and to the BABOK® Guide – 2.0 Chapter on Business Analysis Planning and Monitoring. Elizabeth has co-authored the CBAP Certification Study Guide and the Practitioner’s Guide to Requirements Planning, as well as industry articles that have been published worldwide. She can be reached at [email protected]