
Author: Adrian Reed

Adrian Reed is a true advocate of the analysis profession. In his day job, he acts as Principal Consultant and Director at Blackmetric Business Solutions where he provides business analysis consultancy and training solutions to a range of clients in varying industries. He is a Past President of the UK chapter of the IIBA® and he speaks internationally on topics relating to business analysis and business change. Adrian wrote the 2016 book ‘Be a Great Problem Solver… Now’ and the 2018 book ‘Business Analyst’. You can read Adrian’s blog at http://www.adrianreed.co.uk and follow him on Twitter at http://twitter.com/UKAdrianReed

Avoid The “Saying/Doing” Gap

Elicitation is a core part of business analysis, and there is a vast array of elicitation techniques at a BA’s disposal. However, if you’re like me (and most BAs) you probably have a few favorite techniques that you gravitate towards. For me, I’ll often start with interviews and workshops, using document analysis to gain context and perhaps some observation to see how things really work.

 

While it’s completely natural to have frequently used techniques, it’s important not to forget that other elicitation techniques exist. It is very easy to fall into a rhythm of reverting to a particular set of techniques irrespective of the context, project or situation being examined, which risks overlooking techniques that would have proved more efficient or effective. In particular, it’s important to use an appropriately varied set of techniques to avoid the ‘saying/doing’ gap…

 

People Often Do Things Differently In Reality

As any experienced BA will tell you, asking someone to describe how they do their work will often give you a very different result from going and watching them do it. There are often parts of the process that are so obvious to the people undertaking the work that they don’t even think about mentioning them. Imagine if someone asked you to describe the act of driving a car… you might explain the steps of opening the car door, putting the key in the ignition, checking mirrors and accelerating away. You probably wouldn’t mention putting on a seatbelt, or closing the door… but these are things that are very important! If we want to understand an as-is process, we’ll often want to understand those ‘seatbelt’ moments too.

 

There’s also the tricky subject of exceptions and unofficial workarounds. Sometimes there will be exceptions that the designed process never really catered for, so workarounds have emerged. These workarounds might not be documented anywhere, or they might only be documented informally. Knowing about these workarounds (and the exceptions that cause them) is important too: any new process really ought to cater for these situations without a workaround being necessary.

 

All of this points towards the need for a mixture of elicitation techniques.

 


 

Use a Mixture of Elicitation Techniques

Interviews and workshops are excellent techniques for understanding stakeholders’ perspectives on a situation and asking probing questions. Yet if we are going to gain a broader understanding of the situation, it’s important to mix and match our techniques. There are also some techniques that might not traditionally be thought of as elicitation techniques that can be brought into the mix too.

 

Here are just a few examples for consideration:

 

  • Analysis of Reports, Data & MI: What does the data show you about the process? When are the peaks? How many queries go unanswered? What are the most common queries from customers? Why are those the most common queries? What are the uncommon situations that might be causing exceptions or problems? (A small sketch of this kind of analysis appears after this list.)
  • Observation, Sampling & Surveys: Observing colleagues undertaking their work can help us understand how the work really works, but requires a good level of rapport (else people might revert to the ‘official’ process rather than what they actually do). It isn’t always possible to do this, so sampling can be useful. In a call center where calls are recorded, if (with the relevant legal permissions) you can gain access to call recordings, you can potentially sample elements of a process and see how things are done. Or, you might issue a survey to the relevant customers or internal stakeholders. Sometimes giving people some time to reflect (rather than asking for an instant response in a one-on-one interview) can be useful.
  • Document analysis: A very broad technique, but looking at things like process models, procedures, exception reports and so on can prove useful. Of course, there is often a gap between how a process is written and how it is actually executed, but documentation is a good starting point.
  • Correspondence or sentiment analysis: This is really a particular type of document analysis, but if you are looking to improve a customer-facing process, why not look at some of the correspondence that customers have written about it? What do they like and dislike? Look at complaint logs: which elements are customers complaining about? After all, complaints are a potential source of innovation when they are filtered and used as input to process improvement. If your company has a social media team, perhaps they are capturing suggestions that have been submitted via those channels too.
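
To make the first of these bullets a little more concrete, here is a minimal sketch of how a data or MI extract might be explored. The file name and column names ('received_at', 'category', 'answered') are purely illustrative assumptions; real reporting data will look different.

```python
# A minimal, illustrative exploration of a contact-centre MI extract.
# 'queries.csv' and the column names are assumptions for the sake of example;
# 'answered' is assumed to be a boolean column.
import pandas as pd

queries = pd.read_csv("queries.csv", parse_dates=["received_at"])

# When are the peaks? Count queries per hour of the day.
by_hour = queries.groupby(queries["received_at"].dt.hour).size()
print("Busiest hours:\n", by_hour.sort_values(ascending=False).head())

# What are the most common queries from customers?
print("Most common categories:\n", queries["category"].value_counts().head())

# How many queries go unanswered?
unanswered = (~queries["answered"]).sum()
print(f"Unanswered queries: {unanswered} of {len(queries)}")
```

Even a rough cut like this can prompt sharper questions in interviews and workshops: why is that category so common, and what happens to the queries that go unanswered?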

 

Of course there are many other elicitation techniques too; these are just a few examples. But crucially, initiatives usually benefit from a mixture of elicitation techniques: some require synchronous stakeholder input, some require asynchronous stakeholder input (and therefore give reflection time), and some initially don’t require stakeholder input at all (e.g. document analysis).

 

With a breadth of elicitation techniques, we gain a broad understanding of the current situation (and future needs). This helps ensure we deliver a valuable solution to our stakeholders.

 

Process Redesign: Problem Prevention or Detection

I recently changed phone networks, and wanted to keep the same phone number. I’ve done this before, and it’s always been a relatively simple process. Usually, a changeover date is agreed, and on that date there’s an hour or so where the phone number is inactive, and then everything works again as usual.

 

Unfortunately, this time was different. Something went wrong with the ‘porting’ of my number, and days later I was left in a situation where I couldn’t reliably receive calls or texts. Pretty annoying for someone who relies on a phone for work.  It was doubly annoying that I kept getting told “wait another 24 hours to see if it magically corrects itself” and it was only when I made a formal complaint that things got resolved…

 

Now I’ve no idea how number porting works on UK mobile (cell) phones, but from what I read online it’s more complex than a consumer might imagine, involving lots of interfaces and interactions between the old and new companies, and sometimes problems occur.  This got me thinking about how some processes preempt problems, and others leave the customer high-and-dry…

 

Predictable Problems: Prevent or Detect

When designing processes, there will be some potential problems that can be predicted. If you were designing the payment processes for an online shop, you can predict that at some point someone will try to use a stolen card to make a transaction. You can try to prevent that by restricting the shipping address to be the same as the cardholder address, and by processing the card transaction prior to shipping the goods.

 

Prevention involves using sensible validation and “guard rails” to ensure that things go as smoothly as possible. Yet there will be some predictable problems that you can’t prevent, but you can still detect as soon as they occur.  Once detected, the process can notify relevant people or systems, so that action is taken to rectify the situation.
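
As a sketch of what the ‘prevent’ side might look like in that online-shop example, here is an illustrative guard-rail check. The order fields and the charge_card callable are hypothetical, not a real payment API; the point is simply that the validation happens before anything ships.

```python
# Illustrative 'prevent' guard rails for the online-shop payment example.
# The order fields and the charge_card callable are hypothetical assumptions.
def accept_order(order: dict, charge_card) -> str:
    # Guard rail 1: only ship to the cardholder's registered address.
    if order["shipping_address"] != order["cardholder_address"]:
        return "rejected: shipping address must match the cardholder address"

    # Guard rail 2: process the card transaction before the goods are shipped.
    if not charge_card(order["card_token"], order["amount"]):
        return "rejected: payment was declined"

    return "accepted: payment taken, ready to ship"
```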

 

An example: “Your bags didn’t make the flight”

A few years ago, I was flying from the USA to the UK, with a flight connection at Dallas Fort Worth. Unfortunately, my initial flight was delayed and I landed at the airport really late. So late that even though I ran as fast as I could, the gate for my transatlantic flight had closed.  However, luckily the plane hadn’t left and the gate staff explained they were holding the flight for passengers like me who were transferring.  Phew!  I boarded the plane, relaxed, and tried to sleep for the flight.

 

When I landed in London and turned on my phone, I got a notification. It read:

“We’re sorry, but your baggage (1 of 1) for record locator <<Reference number>>  is arriving on a later flight. For help, go to the <<Airline Company Name’s>>  baggage office”

 

It also had contact details, and a link to track my bags. Now, the fact my bags didn’t make it onto the flight wasn’t really a surprise to me… I nearly didn’t make it onto the flight! But this preemptive message meant I didn’t have to waste time at the baggage carousel. I went straight to the baggage counter, where they explained my bag was on a flight about 12 hours later and that they’d courier it to my home address.

 


 

A Process Design Pattern: Detect, Inform and Solve

A key point here is that the process the airline had set up meant that the problem (a delayed bag) was detected by them; they then provided relevant information to me, along with a practical solution. I suspect most customers accept that problems will occur. But when the customer notices the problem first, surely that indicates an opportunity to improve the process?

 

This highlights the importance of building problem detection into manual and automated processes. Going back to my earlier example of a phone number transfer that went wrong, surely there must be some way for the phone company to detect that the process failed? Wouldn’t it be far better for them to ‘notice’ this before the customer does, and either fix it straight away or at least inform the customer so the customer doesn’t have to raise a query?
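
To illustrate the ‘detect, inform and solve’ pattern with the number-porting example, here is a minimal sketch. The PortRequest structure, the 24-hour window and the notification/ticketing callbacks are all assumptions for illustration; the real porting process will be far more involved.

```python
# Illustrative 'detect, inform and solve' check for the number-porting example.
# The data structure, expected window and callbacks are assumptions, not a real system.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PortRequest:
    customer_id: str
    requested_at: datetime
    completed: bool = False

EXPECTED_WINDOW = timedelta(hours=24)

def check_port_requests(requests, notify_customer, raise_exception_ticket):
    for request in requests:
        overdue = (not request.completed
                   and datetime.now() - request.requested_at > EXPECTED_WINDOW)
        if overdue:
            # Detect + inform: tell the customer before they notice and have to chase.
            notify_customer(request.customer_id,
                            "Your number transfer is taking longer than expected. "
                            "We're investigating and will keep you updated.")
            # Solve: route the exception to the team that can actually fix it.
            raise_exception_ticket(request)
```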

 

This might sound like it will add processing cost. Yet, it might actually be cost-saving or cost-neutral. When things go wrong, customers often spend an inordinate amount of time navigating helpdesks and eventually making complaints. This is really a type of ‘failure demand’: work that is best avoided. Creating a situation where customers don’t have to raise the query in the first place reduces these incoming queries and will also likely increase customer satisfaction. In many cases, this will be a clear win/win!

 

This pattern of prevent or detect, inform and solve is one well worth remembering for us as BAs.  I hope that you find it useful in your process analysis and definition!

 

 

Be Bold: If You Disagree, Leave Them In No Doubt

Very early in my career, I was probably a bit too much of a ‘people pleaser’, which led to me shying away from conflict. It’s a common trait in BAs: we want to help people out, and we want to get them the best possible outcomes. This is a positive thing, but when over-played it can spill over into conflict avoidance, and this really isn’t a good thing.

 

For example, imagine a stakeholder requests a new feature, but there’s clearly no time or budget for it. It would be very easy to say something along the lines of:

“Ah, yes, that’s really interesting. I’ll see if that’s possible and come back to you if it is”

 

Now, on the face of it no commitment has been made. The BA might consider they’ve said “no”, but it’s a very, very weak no. The stakeholder may well have heard things differently, and might have drawn the conclusion that the feature will be delivered; after all, they didn’t hear the word ‘no’ at all! In three months’ time, when memories have faded, they may well ask you why the feature they asked for still isn’t delivered…

 

Conflict Avoidance Isn’t Friendly

Avoiding conflict might seem like a good tactic, but it really only works in the short term. When disagreement is communicated in a subtle way, it’s easy for there to be a sort of illusory agreement. Stakeholder A disagrees with Stakeholder B but they think they are agreeing. Of course, eventually they’ll find out that they didn’t agree… but by that time budget and time may have been spent unnecessarily.

 

This is an area where BAs can add huge value. Firstly, by being bold and concisely stating when something is outside of scope or can’t be delivered in a particular timeframe. This doesn’t mean it can’t be delivered… it just means that a conscious choice needs to be made. There might be a trade-off: having feature X might mean that feature Y will be delayed or discarded. Or perhaps it means that there needs to be a discussion over the budget.

 

All of these decisions are best made consciously. A little bit of discomfort now, followed by an honest and transparent conversation, is likely better than taking the easy route and saving up the consequences for later. You might be familiar with the concept of technical debt… well, this is similar: it is almost a form of decision and conversational debt. By having illusory agreement over things, the decision is never really made, and an absence of decision creates issues.

 

Cultural Dimensions

Of course, it’s important to take national culture and corporate culture into account when considering how to respond to conflict. I can only speak as someone who has spent most of their time in the UK. I certainly know that in the UK we are fairly indirect communicators at the best of times (“Oh, I’m very thirsty” can be code for “I’d like a drink please, but I can’t ask directly because that would be seen as impolite”. Equally, “Hmm… interesting” can sometimes mean “I have zero interest in what you are saying”. We are a complicated nation!).

 

There are certainly other cultures where different types of response will be more appropriate, so it is all very much down to the context. One thing that is common, though, is that conflict is best negotiated openly and honestly, and pretending it doesn’t exist is unlikely to lead to the best possible results.

 


 

Conflict Doesn’t Have To Be Negative

There is perhaps a view that conflict is inherently negative, yet that doesn’t have to be the case. Often conflict arises because different people have different backgrounds and perspectives. As BAs, we can explore those perspectives and understand the specific areas where they agree and disagree. We can work with them to navigate the conflict, and reach a situation that they are happy with.

 

In many ways, if there are conflicting views, it is usually better if they are surfaced earlier rather than later. If someone disagrees but doesn’t feel able to raise the issue, then this may indicate that they feel uncomfortable. Perhaps psychological safety is lacking. Either way, this may indicate wider issues with the organizational culture, and may mean that dissenting voices are being quashed, which can be an issue if one (or many) of those voices are right!

 

Lead by Example

This is an area where BAs can lead by example, by being bold and sometimes vulnerable, by asking ‘tricky’ questions and being open and honest when we think something isn’t right. It’s also important to be prepared to change our minds when new information presents itself. All of this relies on building good rapport with stakeholders, which is a key BA skill in itself!

Beware Proxy Measures

Organizations are usually pretty good at measuring and counting things. Whether it’s positive customer reviews, staff engagement, average call handling time or something else… chances are that someone in the organization is tracking it. They might even be creating reports or dashboards so that executives can see how different elements of the organization are performing.

From time to time a metric will drift and there will be a desire to get it back on track. Perhaps the staff engagement survey shows that people aren’t happy. Or perhaps there’s a contact center where the average call handling time has drifted from 3 minutes to 5 minutes. Either way, it’s easy to imagine that a concerned manager would want to investigate and initiate changes.

 

Yet here a danger awaits the unprepared. It would be very easy to react in a knee-jerk way based on the data alone, without considering its context. It would be even more dangerous to make decisions based on ‘proxy measures’. By ‘proxy measure’ I mean some kind of indicator or metric which approximates how well something is going, but doesn’t directly measure it.

 

That might sound abstract, so here’s an example. Like many people, I wear a smartwatch that counts my steps. Doing this has definitely changed my behavior, and I strive to get 10,000 steps each day. Yet if my overall goal is to “stay healthy by staying active” then the step counter is at best a proxy measure. Sure, it’ll indicate if I’ve suddenly slumped into a sedentary lifestyle… but it would be very easy to ‘cheat’ the system. Ten thousand slow steps around the house are probably not anywhere near as beneficial as fast-paced walking (or jogging)… and that’s before we even consider the fact that it’s possible to wave your arms around to get a few extra steps.  I’m sure I’m not the only one who has done that to get an extra few ‘steps’ in before midnight…

 

The Danger Of Unfair Comparisons

The point here is that it would be easy to equate ‘number of steps’ with ‘how active and healthy’ a person is. But that would be a dangerous equivalence to make. Ultimately, the smart watch is (I guess) “measuring the number of arm movements which are likely to indicate steps”.  That is the real metric… it can be used to approximate many other things, but that is just an approximation. You certainly wouldn’t want to rely on it for decision making.

 

A similar pattern exists within organizations where unfair comparisons are made. Let’s imagine a call center manager is measuring the ‘average length of call’, and wants each agent to achieve an average of 3 minutes or less. The manager is probably equating “effectiveness of operator” with “length of call”.  But is this truly the case?

 

Extending this example, perhaps there are two different agents: one (Agent A) has an average call handling length of 5 minutes, the other (Agent B) 2.5 minutes. Agent A is put on a performance management program, while Agent B is given a bonus. Is this fair? Or could it be that Agent A is thoroughly investigating the customers’ needs and solving root causes so they don’t have to call back again, whereas Agent B is just doing the quickest thing? Perhaps Agent B even cuts an occasional customer off to hit their target… The point here is that without further investigation it would be impossible to know.
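
A tiny sketch (with entirely invented figures) shows how adding just one more measure alongside the proxy can change the picture of the Agent A / Agent B scenario:

```python
# Invented figures to illustrate the Agent A / Agent B comparison above.
import pandas as pd

calls = pd.DataFrame({
    "agent": ["A", "A", "A", "B", "B", "B"],
    "duration_mins": [5.2, 4.8, 5.0, 2.4, 2.6, 2.5],
    "customer_called_back": [False, False, False, True, True, False],
})

summary = calls.groupby("agent").agg(
    avg_duration_mins=("duration_mins", "mean"),
    repeat_call_rate=("customer_called_back", "mean"),
)
print(summary)
# On average duration alone, Agent B looks 'better'; adding the repeat-call rate
# suggests Agent A may actually be resolving queries first time.
```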

 


 

A Key Question: “Why?”

As with so many situations, a key question to ask is “why?”.  In this case it’s important to ask why particular measurements are being taken. It can be a difficult question for stakeholders to answer, and different stakeholders might have different perspectives on the rationale for measuring and reporting on a particular metric. That’s useful to know too.

Asking this question can help us to determine potential gaps in the way that situations are being assessed. For example, imagine we asked two stakeholders why call handling time was measured. Perhaps they say:

 

“To measure efficiency of the call center agents”

“To ensure good customer service”

 

Arguably, the measure on its own doesn’t achieve either of these. It might be that other metrics, such as customer feedback and customer satisfaction scores, need to be considered alongside it to give a fuller picture. In some cases it might be useful to stop measuring or reporting on something entirely, as the very act of reporting just acts as a distraction. All of this depends entirely on the context, so further investigation of the situation is likely to be needed.

 

Questioning The Norm

As with any situation, this is an area where BAs can add value by acting with curiosity. Working backwards to understand why things are measured will help ensure that possible options for improvement are generated. It will often involve questioning the norm, but most BAs are used to that!

The Importance Of Experimentation And “Falsification”

Increasingly, teams are working to deliver change in an incremental and iterative way. As teams shift towards a product management approach, there is often a parallel shift towards experimentation and hypothesis-based product or service development. The idea is that, in an uncertain market, it’s impossible to say with any certainty what customers and stakeholders will actually find valuable. Of course, market research and customer insight are valuable… but as anyone who has been involved in research will tell you, there’s a ‘saying vs doing’ gap. I might say that I would definitely subscribe to an online streaming service that specializes in BA content for $9.99 a month… but when the offer actually comes around it’s anyone’s guess as to whether I’ll actually do it.

 

Hypothesis-based development starts when we accept that, however we dress it up, the features and functions that we deliver are our best guess at a point in time of what stakeholders and users will find useful. Until they are actually used, we won’t know how useful people find them. And crucially, until users themselves get to play with the amended product or service, they won’t be able to determine how valuable it is to them.

 

In one way, this idea of taking a hypothesis and testing it sounds intuitive. Yet, in reality, these hypotheses are often badly articulated and made without a clear understanding of how and when they will be tested.  I’ll move on to that a little later, but first (as in any sensible article), we need to talk about swans…

 

Karl Popper: Seeing Swans and “Falsification”

Influential Austrian philosopher Karl Popper used an analogy involving swans that is still extremely relevant today, and relevant in business change initiatives, which I’ll attempt to paraphrase succinctly here. Imagine an observer wanting to know what color swans exist in the world. If an observer sees a white swan, they know with certainty that white swans exist. If they see more white swans, that might be interesting, and it might help them see patterns about the prevalence of white swans, but it doesn’t actually give them much more information about the colors of swans that exist. It would be easy, after seeing 10,000 white swans, to conclude “all swans are white”. Seeing a single black swan, on the other hand, would refute the hypothesis that “all swans are white”. In that regard, and in that context, a sighting of the black swan provides significantly more information than a sighting of the 10,000th white swan.

 

This takes us to a key tenet of rigor in scientific research: falsification. To be valid, a hypothesis needs to be falsifiable. This is perhaps an idea that we ought to consider in our work as business analysts and product professionals too.

 

Imagine there is a hypothesis that a particular group of features or functions would prove useful. That hypothesis might be stated as a requirement, a user story or whatever. It might have acceptance criteria, but often these are somewhat functional in their nature. Wouldn’t it also be useful to extend these and have some additional form of measure that determines whether it has actually been deemed successful and valuable from different stakeholders’ perspectives? And crucially, wouldn’t it be useful to test against these criteria early and regularly?  If it isn’t going to fly, far better to cut our losses early, and focus on something that will be valuable.

 

This leads us to consider testing ideas with Minimum Viable Products (MVPs) and prototypes: two ideas which are often misunderstood.

 


 

MVPs Are Easily Misunderstood

One thing I’ve learned over the years is to ask the following question:

“When you say MVP, what exactly do you mean by that?”

 

I have found different stakeholders within the same organization often have different views of what MVP means. Imagine a hypothetical scenario: some people think it means “release one”, the first fully-featured release. Others think it’s a release with partial functionality. Others think it’s got most features, but on a different technology stack… a recipe for disaster!

There’s enough written elsewhere about what MVP means, but suffice to say it’s crucial that there’s a shared understanding. I would tend towards viewing it as a semi-functional test. I draw on the definition Eric Ries gave in his book The Lean Startup:

 

“The MVP is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time. The minimum viable product lacks many features that may prove essential later on. However, in some ways, creating a MVP requires extra work: we must be able to measure its impact.”

Ries makes an important point here: when we’re talking MVP, we ought to be talking measurement. How will we know if we’ve built the right thing? How will we measure success from the organization’s perspective? From the customer’s perspective? From the viewpoints of other stakeholders?

That’s a tricky question, but if we don’t ask it, we rob ourselves of the opportunity to properly test an idea.  It becomes too easy to focus on the number of things that have been delivered, rather than the value that’s been enabled.
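
One lightweight way of forcing that question is to write the hypothesis down with an explicit, falsifiable success measure, and revisit it after each Build-Measure-Learn loop. The sketch below is illustrative only; the metric name and threshold are assumptions, and in practice the measure would be agreed with stakeholders.

```python
# Illustrative sketch: a falsifiable hypothesis with an explicit success measure.
# The metric name and threshold are assumptions, not recommendations.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    metric: str
    minimum_acceptable: float  # below this, the hypothesis is considered falsified

    def evaluate(self, observed: float) -> str:
        if observed < self.minimum_acceptable:
            return (f"Falsified: {self.metric} = {observed} "
                    f"(needed at least {self.minimum_acceptable})")
        return f"Not falsified (yet): {self.metric} = {observed}"

mvp_hypothesis = Hypothesis(
    statement="Customers will adopt the new self-service journey",
    metric="weekly_active_users",
    minimum_acceptable=500,
)
print(mvp_hypothesis.evaluate(observed=320))  # measured after a Build-Measure-Learn loop
```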

 

Prototypes Play A Part Too

Even before MVPs, it is worth considering prototyping. Often, the word ‘prototype’ brings up the vision of a screen mock-up. That could be useful, but a prototype could equally be a ‘could-be’ process model or service journey. Imagine speaking to banking customers: “We’re thinking of changing the way you interact with this banking service; here’s how it might change. Would that work for you?” Of course, there will still be the saying vs doing gap that I mentioned earlier, but it’s a chance for earlier conversations and feedback.

 

So, in summary, when thinking about hypothesis-driven product or service development, remember the swans. Make the various hypotheses falsifiable, and think about how they’ll be tested. Then test them early and often!