
Author: Adrian Reed

Adrian Reed is a true advocate of the analysis profession. In his day job, he acts as Principal Consultant and Director at Blackmetric Business Solutions where he provides business analysis consultancy and training solutions to a range of clients in varying industries. He is a Past President of the UK chapter of the IIBA® and he speaks internationally on topics relating to business analysis and business change. Adrian wrote the 2016 book ‘Be a Great Problem Solver… Now’ and the 2018 book ‘Business Analyst’. You can read Adrian’s blog at http://www.adrianreed.co.uk and follow him on Twitter at http://twitter.com/UKAdrianReed

Project Reporting: The Illusion of Control

As a business analyst, I’ve been fortunate enough to work alongside some excellent project managers in my career.  I’ve always considered the BA/PM relationship to be crucial for success; there will typically be some creative tension between the two perspectives, but it is that tension that means that the BA is free to challenge, with the PM ensuring that we don’t get drawn into any unnecessary ‘analysis paralysis’.  The two roles are different and complementary, and it’s crucial that open and honest communication takes place between the two.

One thing I’ve always found challenging though is reporting, particularly on projects where (by their very nature) some hefty upfront work is required.  Imagine creating a specification for an Invitation to Tender (“ITT”) or Request for Proposal (“RFP”) process—there needs to be enough certainty about the detail upfront for the vendors to create an initial quote and proposal, which means there’s a chunk of up-front work, even if the detailed elaboration and design work is done in an agile way.

Reporting in these situations can be troublesome because it’s really hard to know how close to ‘done’ we really are. Imagine a project manager coming to you clutching a Gantt chart and saying:

“According to the plan, you should be 72.6% complete with your high-level requirements. I just wanted to check that you are, in fact, 72.6% complete?”

This is a very difficult question to answer honestly; let’s face it, with the best will in the world you might think you’re 80% done, but then a stakeholder has a flash of inspiration and realizes that some crucial element of scope has been forgotten.  It’s great to catch it early, but all of a sudden you’ve gone from 80% complete to 20% complete…

Let’s face it, progress reporting of this type gives the illusion of control.  Unless the environment is stable and predictable, and you happen to be working on a repeatable project where there’s historic data, then any estimate is really just an informed guess. And reporting progress in percentages is highly suspect—I mean, how can anyone really know if they’re precisely 72% and not 71% or indeed 68% complete?  So what happens in reality is people just say “yes”, and you get a task that appears to be on track until suddenly it really isn’t.  It becomes like a progress bar on a computer that works steadily up to 99% and then stalls, with the final 1% taking 10 times longer than the previous 99%…



Gaming the Numbers

Similar games get played on agile projects.  I remember once hearing somebody who was under pressure to improve a team’s velocity say “I can double our velocity overnight—I’ll just double our story point estimates”.  The point was semi-facetious but also deadly serious: assuming the right mixture of people with the right expertise is involved, the work is going to take the time the work takes. Asking people to “work faster” is, at best, a temporary fix and, at worst, a way of completely burning out a talented team.

This reminds me of the classic film This Is Spinal Tap, which parodies a rock group documentary.  The group famously has amplifiers that ‘go up to 11 rather than 10’, believing this makes them louder.  Of course, in reality, the amps generate exactly the same volume; there’s just an extra number on the dial… In organizations, we need to be careful that we don’t fall for a similar fallacy.

I’m not sure that there’s any easy solution to this dilemma. There are some who advocate avoiding certain types of estimation altogether (see the #noestimates movement on social media). Some things I have found personally useful when it comes to estimating analysis work are:

  • Show progress not percentages: Breaking work up into modular parts and getting artefacts reviewed (and on to their consumers) sooner makes progress easier to track. For example, an initial context diagram and problem statement might be useful artefacts on their own.
  • Estimate effort remaining: Rather than saying ‘I’m 72.6% through’, I’m much more comfortable saying “based on what I currently know, and the progress and interaction with the stakeholders, I’d estimate another five to seven days effort, spread over ten to twelve working days to allow for stakeholder availability”.
  • Estimate in ranges: You’ll notice that I included ranges in the estimate above; the wider the range, the less confident I am. The more I know about the situation, and the more stable and predictable it is, the narrower the ranges.
  • Separate effort from duration: Neither you nor I, realistically, can do ten days’ effort in ten days’ duration if we need input from others. The chances of them being available exactly when we need them are low. Plus, there are probably other things in our diaries (team meetings, training, etc.). So it’s best to be open and transparent (see the sketch after this list for a rough illustration).
  • Make it clear it’s an estimate: As is commonly (and sensibly) said in the agile world, ‘this is an estimate, not a commitment; it’s my best guess, but if the situation turns out to be a heck of a lot more complex than we thought, then we’ll need to revisit it’.
  • State assumptions and revisit: By stating assumptions, we make it clear what the estimate is based on. One thing that is often overlooked is that (on any complex piece of work) it’s useful to revisit estimates as the work progresses: what initially has a very wide range will become narrower when more is known.
  • Never estimate the work of others: If somebody needs to know how much development time something will take, then it’s time to ask a developer…
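
To make the ‘estimate in ranges’ and ‘separate effort from duration’ points a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the availability factor is an assumed figure (not a standard one), and the numbers simply echo the five-to-seven-days example above.

```python
# Minimal sketch: converting an effort range into a duration range.
# The availability factor is an assumed, illustrative value: the rough
# fraction of each working day actually available for this piece of work
# once stakeholder access, meetings, training, etc. are accounted for.

def duration_range(effort_days_low, effort_days_high, availability=0.6):
    """Return an estimated (low, high) range of elapsed working days."""
    low = round(effort_days_low / availability)
    high = round(effort_days_high / availability)
    return low, high

low, high = duration_range(5, 7)  # "five to seven days effort"
print(f"Effort: 5-7 days; duration: roughly {low}-{high} working days")
# With availability=0.6 this prints roughly 8-12 working days, in the same
# ballpark as the "ten to twelve working days" wording in the example above.
```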

Like so much, this is very much about transparency and communication. The relationship between the BA and project (or product) manager is crucial, and this open dialogue enables us to jointly learn and collect data that may be useful for future estimates.  It isn’t ever going to be easy, but by having the difficult conversations early, we avoid having to disappoint people later on!

What are your experiences with creating estimates for analysis work? I’d love to hear from you. Feel free to connect with me on LinkedIn and let’s keep the conversation going.

Keeping Customers Happy: Understanding Information Needs

One of the many perspectives that need to be balanced when conducting business analysis is that of the customer. Quite rightly, tools like personas and journey maps form a part of the BA toolkit and these (and other) tools can be deployed to gain a representative understanding of what customers or other stakeholders want. As well as understanding what they want, another angle that is worth consideration is their pain points or frustrations. Gaining an understanding of what isn’t working now can be incredibly helpful when figuring out how a particular journey should change.

Within their current frustrations, one area to probe is their information needs. Quite often an otherwise perfect service might feel frustrating just because a customer doesn’t know what is happening, or when it will happen. Imagine you ordered a product online and no indication was given of when it would be delivered. You’d probably form an expectation based on your experience with other online retailers and might expect delivery in 2 or 3 days. If the product hadn’t arrived after 6 or 7 days, you’d probably chase. This creates frustration and work for you, and it creates additional work for the company (as it has to deal with unhappy customers chasing their products). If, however, a clear expectation of delivery had been set at the outset, before you purchased the product, you’d have been able to make an informed choice. Perhaps the company might add a line to its website: “Our products are bespoke and made to order. This means they take 7-10 days to be delivered; we’ll provide you with an estimated delivery date when you order”.



This probably sounds like a trivial example, but it illustrates a wider pattern. Sometimes a journey can be improved by the selective and timely provision of information. This provides confidence to the customer (“ah, I know when it’ll be delivered”) but also cuts down on queries and chasers. These are the types of incremental change that can increase satisfaction while simultaneously reducing rework and the associated costs. Some examples are shown below:
  • Confirming: Texting confirmation of an appointment, so a customer doesn’t feel they have to ring and confirm.
  • Committing: Emailing confirmation of a key commitment which had been made verbally over the phone.
  • Preempting: Predicting common queries and providing information at an appropriate time (e.g. a hotel might email the check-in/check-out times and details about parking 24 hours before a customer is due to arrive).
  • Providing Visibility: Letting someone calling with a query know their place in the queue.
  • Allowing Scrutiny: Letting customers view all of the information you store about them, so they have confidence that everything is correct.

There are many other possibilities, of course, and the ones that are relevant for you will depend a lot on the environment and context that you’re working in.

How To Find Information Needs

The question becomes “how do we find out what information our customers value?”. Ironically, providing too much information at inappropriate times can create problems too (I’m sure we’ve all been victims of ‘information overload’!). There is no easy answer to this question, but one key thing to do is to ask them.

Of course, if you have thousands of customers, it won’t be possible to ask everyone. Yet surveys, workshops, and focus groups are ways of getting insight into what customers really want.  It may be that an internal team such as marketing has already commissioned detailed customer research and we can piggyback on that.  There may be a goldmine of information in other places too, such as:

  • Complaints logs: Some complaints may be due to a mismatch of expectations, which might indicate an information need.
  • Operational logs & statistics: If there are high volumes of a particular type of query, this may indicate an issue (see the sketch after this list). However, statistics should always be treated with caution until their validity and accuracy are known.
  • Front-line staff: People who actually speak to customers often have a very good idea of common queries and gripes. If there’s absolutely no practical way of speaking to customers, speaking to and observing front-line staff can provide a useful proxy.
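
As a rough illustration of the ‘operational logs & statistics’ point, here is a minimal, hypothetical sketch that tallies logged customer contacts by category to surface the highest-volume query types. The file name and the ‘category’ column are invented for the example; real logs will differ.

```python
# Hypothetical sketch: tally logged customer contacts by category to spot
# high-volume query types that may signal an unmet information need.
# The file name and the "category" column are invented for illustration.

import csv
from collections import Counter

def top_query_categories(log_path, top_n=5):
    """Return the top_n (category, count) pairs from a CSV contact log."""
    counts = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["category"].strip().lower()] += 1
    return counts.most_common(top_n)

for category, volume in top_query_categories("contact_centre_log.csv"):
    print(f"{category}: {volume} contacts")
# A spike in, say, "where is my order?" contacts might point back to the
# missing delivery-expectation message in the earlier example.
```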

Once potential improvements have been identified, they can then be sketched out/prototyped and feedback can be sought.  Nothing beats actual feedback from those that are affected, and this might refine other areas for improvement too!

How do you handle customer information needs? I’d love to hear your views.  Feel free to connect with me on LinkedIn and let’s keep the conversation going!

How Do Your Stakeholders Evaluate Success?

Change initiatives work best when they are outcome-focused, as opposed to focusing purely on predetermined deliverables or solutions.  Focusing on desired outcomes allows a team to tease out and understand the perceived problems with the current situation, analyze and evaluate different courses of action, and then carry out a series of experiments to determine which way forward is actually the best.  Working with the team to further understand the situation might uncover a completely different understanding of the current issues, which leads to new and innovative options for change emerging.  Techniques and approaches such as pre-project problem analysis, business cases, prototypes, proofs-of-concept, or other similar experimental approaches can be extremely useful.

While being outcome-focused is undoubtedly beneficial, rarely is the question asked “outcomes from whose perspective?”, meaning the outcomes are defined internally, often by just a few senior stakeholders. There is nothing inherently wrong with this, yet in reality, processes that are efficient and effective tend to balance different sets of needs and wants.  Put differently, to be successful a process or journey has to meet the needs of a whole range of different stakeholders, each of whom may evaluate success differently.

One place this tension can be seen is anywhere there is a backlog of queries or a queue of customers.  I’m sure that, as a customer, you’ve spent hours on the phone on hold, waiting for someone to pick up.  This feels really inconvenient—it often involves sitting with a phone against an ear while listening to the same 30 seconds of music interspersed with someone proclaiming ‘your call is important to us’… we’ve all been there!  Yet from the organization’s perspective, it might actually be seen as desirable to have a queue of customers waiting.  If they are focusing purely on “maximum efficiency”, and if they define that as “number of calls cleared per agent per day”, then it makes sense to have a queue. After all, you wouldn’t want your call center agents sitting idle… or would you…?

It’s Not That Simple: Bring On Other Perspectives

There are at least two other perspectives that would need to be considered here, that of the customer (and in reality, there would likely be different customer groups), and that of the call center workers themselves.  It’s likely that both of these groups would have quite different aspirations over what a good ‘outcome’ looks like.

In a customer’s ideal world, they wouldn’t need to wait at all. In fact, they’d probably have the direct number of a named representative who they could call on whenever they needed.  This illustrates that the customer values convenience, and getting their job done with the minimum amount of fuss.  Making a call to a call center is a distraction in their otherwise busy day.

Employees probably hate situations where there are long wait times on the phone too.  By the time customers get through, they are angry, and this creates a reinforcing ‘doom loop’: angry customers take longer and make complaints that take time to deal with, meaning that more time is taken for each call, which means the waiting time increases…

There would likely be many other perspectives beyond these; however, this illustrates the point that different stakeholders will evaluate success in different ways. Put differently, they are seeking different outcomes, in this case:

  • Organization: Wants to maximize profits, so employs fewer staff (MONEY)
  • Customer: Wants convenience and just wants to get on with their day (TIME)
  • Staff: Want an interesting job, and don’t want hundreds of angry customers shouting at them each day! (VARIETY/MORALE)


It’s All About The Balance

One reason that it’s worth gaining a sense of how different stakeholders will evaluate success is that it can help to determine different possible solution approaches.  If we know the different criteria, we can ask, “How can we improve the situation in a way that balances the need to increase efficiency (save money), whilst also being more convenient (saving time for the customer) and also improving the lives of the staff (giving them more variety)?”.  This leads to a very different set of options than if any one of them were examined individually.

When they are looked at in this balanced way, we might (for example) start by examining the reasons that customers are calling.  We might find that the automated letters and e-mails sent out are confusing, and there’s a ‘quick fix’ which reduces cost and increases convenience by reducing the need for them to call at all!  We might delve further and propose a shift to online servicing for certain transactions, allowing customers to self-serve at their own convenience.  The call center workers could then focus on the more complex cases, as well as providing support to those who don’t want to (or can’t) access the website.

Of course, these are just examples but I am sure you get the idea.  Simply projecting the organization’s outcomes as if they are paramount is dangerous.  It leads to an internal focus and robs us of the chance to understand the wider stakeholder landscape. As analysts, we are perfectly placed to ensure that stakeholders’ voices get heard.

What are your views? I’d love to hear about how you have approached balancing different stakeholders’ needs.  Feel free to connect with me on LinkedIn.

Understanding Constraints: Ask “What If…”

When working on a change initiative, it’s important to understand any underlying constraints that need to be taken into account.  There can be a variety of different types of constraints, but time and budget are two that feature heavily in many projects.  I suppose it’s theoretically possible to work in an organization where there’s too much time and money, but I suspect most of us find we work on initiatives that are squeezed into ambitious timescales with very restricted budgets. As change practitioners, we probably find ourselves trying to work as effectively and efficiently as possible within these constraints.

However, not all constraints are created equally.  Some constraints are genuinely well considered and there will be real consequences if they are broken.  I remember working on a project where there was a contractual penalty if delivery did not take place on a particular date and the client had no appetite to delay.  There was a clear logical argument to spend more in the short term to hit the date; any extra spend that was less than the contractual penalty was good value.  Yet, other constraints seem less well thought out.

I feel like I’m breaking a secret BA code by saying this out loud, but I suspect some constraints are completely arbitrary in nature.  I suspect some deadlines were dreamt up in meetings months ago, before anyone knew the level of complexity or the context of the delivery.  Yet somehow those deadlines got written down as if they were immutable truth. To question them is to act as a heretic… even though nobody can actually remember why the date was chosen or why it is so important.

A key question that can help us to seek clarity is “What are the key outcomes and benefits you’re aiming for?” These should be understood well before any heavy lifting commences, but so often they are only loosely understood.  And different stakeholders may have very different perspectives on what success looks like, so getting these out on the table early is extremely useful.  Ultimately, the aims affect the constraints.  If someone is aiming to be “first to market”, that might imply that time is everything, and getting something out the door as soon as possible (even if it’s not the finished product) would be desirable.  On the other hand, if the aim is to be the best in the market, that will lead to a focus on quality, perhaps delaying until every necessary bell and whistle is thoroughly tested.


Understanding Constraints With “What If…”

Understanding the aims, outcomes, and benefits is a start.  In addition to this, two powerful words that can help gain a shared understanding of constraints are “what if…”.  When we have rapport with our stakeholders we can ask questions out of a genuine sense of curiosity.  We can ask questions that are framed very much as ‘thought experiments’ to determine what is really important.  Here are a few examples:

  • “What if we could deliver a week or two late, but the cost was significantly lower? Would that be a good outcome for you?” If the answer is ‘yes’, then this indicates that budget is valued over time, and the deadline isn’t as fixed as it appears.
  • “What would happen if the deadline isn’t met?” There may be genuine consequences; asking this question will help us determine them.
  • “If it was a day late, would you still want it?” There are some things that are only valuable if they are delivered on a particular day. For example, delivering a livestreaming platform the day after a conference was supposed to happen is completely useless!
  • “What about if we released something earlier, with a smaller scope?” Perhaps the ultimate deadline can extend, but something gets released earlier. This can help determine the type of delivery approach that’s relevant.

These are just examples of course, but in essence, these questions seek to understand the rationale behind a constraint. If there is genuinely no rationale, then surely that is something that we should challenge?  If we understand which constraints are malleable and which are not, we can hopefully work with our stakeholders to co-create a solution that best meets their needs.

And the phrase “what if…” can help us a great deal with that!

What are your views on constraints?  Feel free to connect with me on LinkedIn and we can keep the conversation going!

The Importance Of “No”

I can vividly remember a time at school when a careers guidance counselor gave a lesson on the importance of saying “yes”. They explained that opportunities come and go; they are like items passing by on a conveyor belt. If you don’t grab them, they’ll be gone, and you might regret passing them by.

In general, this is probably good advice—it’s certainly important to deliberately consider opportunities as they arise—but like all advice, taken to an extreme it may actually cause as much harm as good. Saying “yes” to absolutely everything will probably mean that few things get finished, work days expand and become exhausting, and resentment grows (“why am I the only one working at 9pm?!?”). Perhaps you’ve been there…

As analysts, it’s important that we build rapport and good relationships with our stakeholders. One seemingly easy way of doing this is to say “yes” a lot; after all, “yes” is the path of least resistance, and it also implies agreement (which is a way of avoiding conflict). Being the person that always gives positive news is a good way to be liked… at least in the short term.  It’s also quite comforting and comfortable to say “yes”: there’s no need to argue or face hostility.

I recently heard a story from a front line worker who had been seconded into a subject matter expert role. An external consultancy was working on requirements for a major replacement system, and they were there to represent their team’s needs. Every time a requirement was mentioned, the consultant would write it down and say “yep, we can do that” and wouldn’t mention it again. When the system was actually delivered, few (if any) of these requirements had actually been met. We might speculate that there was a contractual scope that had been agreed previously, and some of the requirements that were raised were outside of this scope. Whatever the reason, the outcome was a very frustrated user base who felt they had been completely ignored.

 

Avoiding the “mindless yes” trap

Mindlessly saying “yes” like this might buy some short-term gains, but it does so at the expense of long-term pain. Saying “yes” to every story, feature or requirement without a discussion about necessity, feasibility or priority will quite understandably lead to an expectation of delivery. As the backlog gets bigger and bigger, disappointment and trust issues might emerge. It’s easy to imagine a frustrated stakeholder exclaiming “why is nothing I suggest actually being delivered?!”  The tough prioritization conversation hasn’t been avoided; it’s just been deferred. When it happens, it’ll probably be even more difficult. Or, as in the example I mentioned above, it might just lead to disappointment and disengagement. The complete opposite of what was intended!

It’s often perceived that the only alternative to saying “yes” is a cold, hard, permanent “no”.  However, this is rarely the case. As business analysts we can often say yes, but at a cost! Surely a more ethical thing to do is to make that cost visible?  Here are some possible examples:

  • “That’s certainly a possibility, but it’s outside of the objectives of the program. It’s possible that the world has changed, and we need to revisit the objective though… shall I book a call with the sponsor to revisit this?”
  • “The initial guestimate from the developers is that this is huge. It’s absolutely possible, but it’d delay the first public release by a month, and it’d impact the testing plan which increases cost. What’s your view with that in mind?”
  • “I can absolutely take that task on. I’m already at full capacity, so the impact would be to delay work on my other projects, and slip the deadlines for the BA work. Are you and the other teams OK with this?”

Here we aren’t saying no; we are providing options, and providing information that will help the decision maker choose.  Of course, each of these examples is simplified, and in reality there would be more discussion; there will also be cases where a flat-out “no” is appropriate.

In summary, whilst saying “yes” might make us popular, it may lead to long-term overcommitment. We shouldn’t be afraid of saying “no”, or even better, saying “that’s possible, and here are the consequences”.