Author: Elizabeth Larson

BA as Detective

Some of you know that I am a big fan of mystery fiction.

There are many aspects of this genre that I enjoy, but what I find most interesting is watching the detective look for clues to help identify the problem, investigate alternatives, and finally offer the solution. In past articles I’ve noted some of my favorite detectives: Louise Penny’s Armand Gamache, Philip Kerr’s Bernie Gunther, Michael Connelly’s Harry Bosch, and Tana French’s various detectives, to name just a few. They are all flawed, but amazingly talented at solving the fundamental problem of “who done it.” What is it that enables them to put seemingly disparate puzzle pieces together to solve the case? The same characteristics that are needed to succeed at business analysis work.
In my past articles I have drawn comparisons between the BA and detective. I have explored the need for both to connect the dots and solve problems creatively. I have discussed the need to rely not only on their intuition, but also their rational mind. I have discussed the importance of recognizing patterns, creating structure from chaos, and feeling comfortable with ambiguity. However, one comparison I haven’t explored is perhaps the most obvious and relevant one—the ability to ask good questions. 
As BAs we ask a lot of questions. As Penny’s Chief Inspector Gamache says, “The question that haunted every investigation was ‘why,’” also an important question for all BAs to ask in one form or another. But Gamache knew, as do BAs, that asking ‘why’ by itself is not enough. We need to ask contextual questions. Consider the exchange involving Dashiell Hammett’s famous detective Sam Spade: “Who shot him?” Spade asks a witness. The witness “scratched the back of his neck and said, someone with a gun.” Experienced BAs know that when we ask vague questions, we’ll get vague answers.
Not only do we need to ask good questions, but we need to be able to understand the answers provided. What happens when we want to ask follow-up questions but are absolutely “clueless” about what the stakeholder is saying? This can be particularly unnerving when that stakeholder uses highly technical language, such as a data scientist describing the algorithms that will be used in the latest AI effort. Like our detectives, we need to clarify. And if we don’t understand, we need to admit it. And if that technical guru asks us questions that make no sense to us (what ETLs need to be developed, for example), we need to admit that we don’t know. One of the tips Chief Inspector Gamache gives each new recruit is to get clarification when needed. He says that one of the most important things “that leads to wisdom” is saying “I don’t know.” We need to have the courage to say those words, modified, of course, for our own situation.
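A quick aside for readers meeting the term cold: ETL stands for extract, transform, load, the pipelines that move data from source systems into a target data store. The sketch below is purely illustrative (the row format, field names, and in-memory “warehouse” are all invented), but it shows the three stages a technical stakeholder has in mind when asking what ETLs need to be developed.

```python
def extract(rows):
    """Extract: read raw rows from a source (here, an in-memory stand-in for a file)."""
    return [r.split(",") for r in rows]

def transform(records):
    """Transform: type-convert the fields and filter out rows that fail conversion."""
    out = []
    for customer_id, amount in records:
        try:
            out.append({"customer_id": customer_id.strip(),
                        "amount": float(amount)})
        except ValueError:
            continue  # skip malformed rows
    return out

def load(records, warehouse):
    """Load: write the cleaned records into the target store."""
    warehouse.extend(records)
    return warehouse

# Invented sample data: one row is deliberately malformed.
warehouse = []
raw = ["C001, 19.99", "C002, not-a-number", "C003, 5.00"]
load(transform(extract(raw)), warehouse)
print(warehouse)  # two valid rows; the malformed one was dropped
```

Real ETL jobs read from files, APIs, or databases and load into a data warehouse, but the extract/transform/load shape is the same, and each stage is a place where a BA can ask business-context questions.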


Of course it gets trickier in the digital world. As BAs we cannot simply say, “I don’t know what you’re talking about,” or words to that effect. We can try the old standby, “Help me understand…,” which is great, but we run the risk of still not understanding the explanation. What do we do when we don’t have experience even vaguely related to the stakeholder’s answer? As BAs we often find ourselves asking questions about all manner of things unfamiliar to us, but the world of AI can present new and unforeseen challenges. Yes, we can, and need to, prepare questions in advance. But how can we ask good questions, not just the fundamentals like “why” and “what,” when we know nothing about the subject?
For example, let’s say I want to ask about algorithms, a subject that I know almost nothing about and therefore am terrified of. Sure, I do research, but when confronted with an answer that makes no sense to me, I might freeze. I want to ask why one type of algorithm was used instead of another. I want to ask about built-in biases. But answers like “I chose a non-parametric algorithm which uses this method for classification and regression…” might give me pause. What helps me is to go back to the basics and start asking contextual questions, which provide a business context and can set the tone for other questions and answers. Once I establish a business perspective, I can put all my further questions into a business context as well. And importantly, it helps me say “I don’t know” without actually saying it. Remember those ETLs? We can rephrase our answer into a question about what the alternatives are and how each alternative provides the business what it’s looking for.
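To make that scary answer a little more concrete, here is a minimal, hypothetical sketch of one classic non-parametric method: k-nearest neighbors, which classifies a new case by comparing it to stored examples rather than by fitting a fixed formula. The customer features and data below are invented purely for illustration.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points.

    `train` is a list of (features, label) pairs. Being non-parametric,
    the "model" is just the stored data: no fixed set of parameters is
    fitted, which is what a phrase like "non-parametric algorithm for
    classification" is getting at.
    """
    neighbors = sorted(
        train,
        key=lambda pair: math.dist(pair[0], query),  # Euclidean distance
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Invented data: (spend_last_year, visits_per_month) -> bought new product?
train = [
    ((100, 1), "no"), ((120, 2), "no"), ((900, 8), "yes"),
    ((950, 9), "yes"), ((110, 1), "no"), ((880, 7), "yes"),
]
print(knn_predict(train, (905, 8)))  # a heavy spender -> "yes"
```

This also suggests a business-friendly line of questioning: a non-parametric model like this keeps all the data around and depends heavily on a distance measure, exactly the kind of trade-off we can ask a data scientist to explain in business terms.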
We can also start out at a high-level discussion about the AI effort, what business problem it solves, and how it aligns with the organization’s strategic direction. Even if we’ve heard the answers from sponsors and other business stakeholders, we can encourage technical gurus to frame their answers in business terms. Once we have established the business context, we can move to more detailed questions about how a chosen algorithm helps the business, risks of built-in biases, and, as needed, ask about the data, its source, when it was cleansed, and so forth. Or start with a lower level of detail and work upwards. As good detectives, BAs know that solutions rarely occur when we try to investigate in a straight line. Detectives and BAs know that a question that leads to unexpected answers is a source of a myriad additional questions that take us in unexpected directions, but ultimately solve the problem more quickly. 
And that’s where some of the skills discussed in previous articles come in. These competencies, like the ability to connect the dots, help us solve problems and come up with creative solutions. These competencies allow us to follow unusual lines of questioning, even if we have no idea what the outcome will be. They allow us to prioritize our work and the questions to ask each stakeholder. They allow us to uncover implicit and hidden requirements. And they enable us to make creative yet practical recommendations. In other words, they help us find “clues” that may seem meaningless at first, but which ultimately help us solve even the most difficult business problems.

Is AI a Solution, a Technology, or a System…and Why Should I Care?

A recent article in Harvard Business Review (HBR) asks whether AI is a system or, as so many organizations think, a solution.

An interesting question, but one that I would rephrase: Is AI a solution, is it technology that supports the solution, or is it part of a larger system? I have always thought of AI as supporting the digital transformation, which includes all the organizational changes that are needed to make use of digital technologies. So I have always thought of AI more broadly than either a solution or technology. The HBR article points out that 1) 80% of organizations surveyed are developing some sort of AI applications and that 2) companies that think of AI as a system rather than a solution will see their revenues grow by as much as a third over the next 5 years[i].

To understand why this might be the case, let’s consider a few possibilities:

If we think of AI as a solution, we need to be pretty clear about what problem it solves, or business need it addresses. For example, let’s say we need to be able to predict which customers will buy our new product. Sure, this sounds like a business need, but it really is a solution. Ah, you might be thinking: predict customer patterns = predictive analysis, so the solution I need is predictive analysis. No, predictive analysis is a way we can predict who will buy our product. It supports the solution. But what is the business problem? It might have to do with loss of market share, decreased revenues, or a number of other real problems.

So instead of:

  • Problem: We need AI to remain competitive
  • Solution: AI

We can think of it as:

  • Problem: Market share has decreased by x% since this time last year with resulting revenues down by $x
  • Solution: Ability to predict which customers will buy our new products to increase our customer base and to increase revenues.
  • Technology needed to support the solution: Software to analyze the data for customer buying patterns and predict customers who will buy our product

But will technology by itself solve our problem? Probably not. What about the related end-to-end processes that will need to change, the massive amounts of data that need to be analyzed, the predictions that need to be made, the algorithms to use, the effect of AI on the organizational culture, the jobs that will be created and lost, the business decisions that will need to be made, the business rules to consider, and much, much more?


When we think of AI as the technology part of a system, a system in its broadest sense, this starts to make sense. We know that we need to understand not only the technology, but all the context and processes surrounding the technology. When we analyze whole systems, we consider such things as:

  • Problem: In this case, loss of market share to competitors
  • Solution: Ability to predict which customers will buy our new product
  • Technology needed to support the solution: Software to analyze the data for customer buying patterns and predict customers who will buy our product
  • Processes: current processes and how they will change with the implementation of the solution
  • New roles and positions to create and hire for

We also know how to make organizations aware of such consequences as:

  • Wrong staff doing the work, such as creating the models
  • Dirty data leading to shabby analysis and incorrect predictions
  • Minimal acceptance by key stakeholders
  • Wrong people making business rules and other business decisions
  • Biases built into the predictive models

That’s one of the reasons why, I believe, taking a systems approach increases the chances for organizations to see growing revenues. Thinking of the entire system, not just the technology, allows for the distasteful but essential hard work of figuring this whole thing out. If we look at only the technology, we’re apt to fall into the myriad pitfalls that so many organizations fall into, and which lower the chances of successful outcomes.

How BAs can help

  • Understand the problem. We can help explain the difference between a real problem and a solution in search of one, and why a solution in search of a problem does not necessarily help an organization achieve its goals.
  • Ensure data is trustworthy. AI depends on trustworthy data: data that is clean, that has a single source of record, and that comes from an agreed-upon source. We can also ensure that the data business rules are aligned with the organization’s goals and objectives.
  • Examine algorithms and the underlying data to see if there are built-in biases. BAs these days need to get up-to-speed on AI in its various forms (machine learning, predictive analysis, RPA, etc.). They need to educate themselves on the various algorithms that are used and the advantages and disadvantages of using one over the other from a business perspective. We need to ask really good questions to ensure the right algorithm is being used for the business need at hand. We need to ensure that the kinds of predictions and AI recommendations will not harm the organization’s ability to serve a variety of constituents. We need to look for underlying biases.
  • Help evaluate predictive tools to weed out any that intentionally or unintentionally promote biases. As BAs we can help the organization examine various measures of success and explain how subjective measures might insidiously shape a tool’s predictions over time. We can look at end-to-end processes and the input to and output from these processes to examine the data for underlying biases. And once we understand the organization’s “system,” we can work with software vendors to help ensure that the software itself is aligned with the organization’s goals and doesn’t have hidden built-in biases.

If, on the other hand, our scope is simply implementing the AI application, much of the needed business analysis could well be short-circuited, resulting in this sorry statistic: 72% of executives said their company’s digital efforts are missing revenue expectations.[ii]

Organizations may want us to help them implement AI quickly, but they need us to help them avoid the consequences of falling into the common pitfalls, as so many organizations have done. In other words, we can do our part to help achieve the revenue growth projections when viewing AI as a system.

[i] Bhaskar Ghosh, Paul R. Daugherty, H. James Wilson, and Adam Burden, “Taking a Systems Approach to Adopting AI,” Harvard Business Review.

[ii] Gartner statistic cited in “Every Organizational Function Needs to Work on Digital Transformation,” Harvard Business Review, November 27, 2018.

The Digital BA Series: How BAs Can Help Reduce AI-Related Bias in Hiring

A recent article in Harvard Business Review (HBR) raises an interesting question:

do hiring algorithms used by companies to recruit staff prevent bias or amplify it?[i] The conclusion is unclear. The article warns that the technology has to be “proactively built and tested” to remove any intentional or unintentional bias.[ii] In this article I want to make the case for why the business analyst (BA) is the organization’s best hope for ensuring that AI technology is built and tested to avoid this bias.

But first a little background related to how organizations are using AI/machine learning in various stages of the recruitment process.[iii] Companies are already using AI to help them recruit candidates. They want AI to help them:

  • Reduce recruiting budgets
  • Score resumes
  • Find candidates who will fit the job description
  • Advertise jobs in venues apt to draw the best candidates
  • Assess candidates’ qualifications
  • Add consistency to the recruiting process

However, these benefits can easily backfire. Let’s look at a couple of examples.

  • Reduce recruiting budgets. With machines taking over some of the functions formerly done by live people, organizations hope that in the long run the cost of the recruiting processes will be reduced. However, the long run is very long and the road is rife with pitfalls, so the expected cost savings may not be realized. Not only are there technical challenges, but it is likely that the organizational culture will need to change as well.
  • Score resumes. When scoring is based on historical data that contains built-in biases, the machine learning algorithms can learn those biased patterns and use them going forward. Data such as the candidate’s name (Susan vs. Sujata for example) or sports played in school (hockey vs basketball perhaps), might produce unforeseen results.
  • Find candidates who will fit the job description. Again, let’s say that historical data has shown that a certain type of candidate has traditionally been successful in the organization. It might be natural to program the algorithms to look for candidates with those same characteristics, thus replicating institutional biases.
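The resume-scoring backfire above can be shown in a few lines of code. This is a deliberately naive, invented scorer, not any real recruiting product: it “learns” only the historical hire rate for each feature value, which is enough to demonstrate how a bias baked into past decisions is reproduced in future scores.

```python
from collections import defaultdict

def train_scorer(history):
    """Learn, per feature value, the historical hire rate.

    `history` is a list of (features_dict, hired_bool) pairs. A real
    model is more sophisticated, but any model fitted to biased
    history can pick up the same pattern shown here.
    """
    counts = defaultdict(lambda: [0, 0])  # value -> [hired, total]
    for features, hired in history:
        for value in features.values():
            counts[value][0] += int(hired)
            counts[value][1] += 1
    return {v: hired / total for v, (hired, total) in counts.items()}

def score(model, features):
    """Score a candidate as the average learned rate of their feature values."""
    rates = [model.get(v, 0.5) for v in features.values()]
    return sum(rates) / len(rates)

# Invented history: past hiring happened to favor hockey players.
history = [
    ({"sport": "hockey"}, True), ({"sport": "hockey"}, True),
    ({"sport": "hockey"}, False),
    ({"sport": "basketball"}, False), ({"sport": "basketball"}, False),
    ({"sport": "basketball"}, True),
]
model = train_scorer(history)
# Otherwise identical candidates get different scores based on sport alone.
print(score(model, {"sport": "hockey"}))      # 0.666...
print(score(model, {"sport": "basketball"}))  # 0.333...
```

Nothing in the code mentions gender or ethnicity; the bias rides in on a proxy feature, which is exactly why examining the training data matters.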


In addition, AI can increase bias in unforeseen ways.

  • Predictive algorithms help advertise job openings and play the role of headhunter. That is, they can find both candidates who are actively seeking jobs and those who are not. On the surface this sounds good. But if algorithms suggest advertising the job in venues that cater to a certain class of candidates, such as men, chances are only men will apply. The organization might be able to say, “Well, we looked for a woman, but none applied.”
  • Such biases may not reflect the diversity of the company’s customers. This is particularly true for large organizations with diverse customers and/or global companies.
  • Some algorithms have been known to predict who will click on an ad rather than who’s apt to be the most successful candidate.

One way organizations avoid some of these digital pitfalls is to ensure that business analysts are included on these digital projects. A BA can help in many ways. Here are just a few examples:

  • Evaluate software options. They can help in the evaluation of AI tools and recommend only those that do not promote the kinds of biases discussed above. Helping with commercial software selection and implementation has always been something BAs do well. This assumes, of course, that the BA has done their homework and has become familiar not only with various options available, but also with how AI is being or will be used throughout the organization.
  • Examine the algorithms. This means that the BA has to actively engage with the data scientist (or person creating the algorithms) to understand the type of algorithm being used and why. The BA needs to ensure that the algorithms being used will promote the goals and objectives of the organization and that the AI effort is meeting a real business need. Part of examining the algorithms is looking at how to measure the success of potential candidates. BAs need to look at the end-to-end recruitment process and where AI is used in each part of the process in order to detect where built-in biases may occur.
  • Clean the data. It is well known that one of the aspects of AI that most people dread is cleaning the data. Yet data cleansing has to be done if the results of the machine’s predictions are to be trusted. Part of this cleansing process is examining the historical data to ensure it doesn’t contain underlying biases.
  • Test the software. BAs can help proactively test these tools with the goal of removing biases. The BA can review test cases to ensure that any biases are thoroughly tested and that anomalies are called out and removed.
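As a concrete illustration of the cleansing step, here is a minimal sketch in which the field names, rules, and data are all invented: it normalizes values, drops exact duplicates, and removes a field like the candidate’s name that can act as a bias proxy.

```python
def clean_records(records, drop_fields=("name",)):
    """Minimal cleansing pass over candidate records (all data invented).

    - normalizes string casing and whitespace,
    - drops exact duplicates (after normalization),
    - removes fields (like "name") that can act as bias proxies.
    """
    seen = set()
    cleaned = []
    for rec in records:
        norm = {
            k: v.strip().lower() if isinstance(v, str) else v
            for k, v in rec.items()
            if k not in drop_fields
        }
        key = tuple(sorted(norm.items()))  # hashable fingerprint for dedup
        if key not in seen:
            seen.add(key)
            cleaned.append(norm)
    return cleaned

raw = [
    {"name": "Susan", "degree": " BSc ", "years": 5},
    {"name": "Sujata", "degree": "bsc", "years": 5},  # duplicate once cleaned
    {"name": "Ana", "degree": "MSc", "years": 2},
]
print(clean_records(raw))
```

Even a toy pass like this makes the point: which fields to drop and which duplicates to merge are business decisions, which is why the BA belongs in the cleansing conversation.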

To summarize, there are many ways for bias to find its way into AI recruiting technology. Business analysts can add a tremendous value to organizations by helping them recognize and remove biases from these applications.


[i] Miranda Bogen, “All the Ways Hiring Algorithms Can Introduce Bias,” Harvard Business Review, May 6, 2019.

[ii] Ibid.

[iii] In this article I’m going to use the terms AI and machine learning interchangeably although there is a distinction.

Lessons Learned from Bhutan and Nepal: Part 2 – Process Thoughts about Digital Transformation

In this age of digital transformations and the digital BA, data matters.

Without data there would be no big data, no data mining, no machine learning or predictive analytics. No AI. Nothing digital to transform. So yes, we have to focus on data. But what about poor process, once the king of projects, now often relegated to an afterthought? We commonly use data to improve processes. With better data, many cumbersome processes can be automated and improved beyond recognition. But process in and of itself still matters.
The importance of good processes was highlighted for us on a recent trip to Nepal and Bhutan. Getting into Nepal was beyond difficult and frustrating. To enter Nepal from the US you need a visa. Many countries require visas, and the processes to obtain those visas vary in the degree of difficulty. But Nepal was unique. We applied for the visa online, so they had our data. But there was no process for doing anything with that data. When we arrived, the fact that we had applied was irrelevant. We waited for nearly 2 hours in the same lines as everyone else. Once we got in, we really enjoyed our visit to Nepal. But the entry process was pretty awful. Bhutan, on the other hand, was a breeze. So here are 5 process lessons learned from this trip that apply universally.

Lesson #1: Before we can improve a process, we need to understand how it works today.

As much as we’d like to jump in and make a process better, we need to understand the current way things are done. It sure would be tempting to send some business analysts to “fix” Nepal’s visa problem, but there’s way too much we don’t know about why things are done the way they are. We need to understand how the process works, as well as the workarounds, exceptions, and little tidbits that make the process better for the people doing it, if not for the customers and the entire organization. We also need to be aware of the personal, practical, and political reasons things are done as they are. We suspect the immigration officers in Nepal were as tired of the long lines as we were. We suspect that they would have loved to double the number of agents and to have better automation. We have no idea of the constraints and pressures they felt, and without that understanding, the process cannot be improved.

Lesson #2: A process map helps.

The second part of the trip was Bhutan, and one of the Bhutan activities was a hike to a monastery called the Tiger’s Nest. We posted a photo in Part 1 of this article. It sits perched on a ledge in the middle of a mountain and requires a 3,000 ft fairly vertical climb to 10,000 ft. Given the altitude and the age of everyone in our group (over 50), it caused a certain amount of anxiety, even though we were all physically fit. Some of us watched YouTube videos of the hike, but those hikers were all much younger than everyone in our group. 
The night before the hike, our Bhutanese guide called a meeting to prep us for the next day’s hike. To our delight, he brought out a flip chart and drew a graphical depiction of our hike—a kind of process map! Palee explained the different levels, where we could get tea, where we could take the most scenic photos, where there was a dirt path and where there were steps, and importantly, where the few restrooms were located. We still had some trepidation but felt much better prepared. 

Lesson #3: Process maps are usually incomplete.

But even the best of process maps doesn’t prepare us for all the exceptions. There are often unexpected forks in the path and choices that have to be made. It would be great to learn a process by reading existing documentation, but it may not be up-to-date. It probably won’t have all the exception paths. It certainly won’t have the workarounds that experienced staff know and love. If we rely on documentation alone, we might go astray. As helpful as our guide’s process map was, it did not prepare us for how we would react to altitude, for the numerous forks in the path, for how to share the path with horses, nor the hordes of hikers on the same hike.


Lesson #4: A guide makes a process easier.

The first decision point was whether to hike or to ride a horse to the first level. Since one person chose the horse, our guide was unavailable to steer the rest of us through the other exception paths. As we wrote in Part 1, three of us in our group of eight were ahead of the others when we came to our first fork in the road. Which way to go? Go with the flow, of course, and the flow of hikers in front of us chose the right-hand path. It turned out to be a big mistake: a terribly steep and difficult path. We were about a quarter of the way up when one of the hikers shouted down to a friend, “Take the right-hand path. It’s shorter. Much steeper, but shorter.” The path was so steep that we didn’t want to turn around, hike back down, and take the other path. We came across many other forks in the road, but once our guide rejoined us he was always there to show us the way, which made our hike far easier.
[Photo: the guide’s process map of the Tiger’s Nest hike]

Lesson #5: The shortest path is not necessarily the fastest.

When we finally joined the other path, the rest of our group was actually ahead of us. They had taken the longer, less difficult path and were farther along. And they were far less out of breath. There are times when shortcuts make sense. When we blindly follow processes just because “we have always done it this way,” we take a giant step toward bureaucracy. However, our shortcuts need to be well-conceived, and we need to understand the consequences of taking a shorter, less-known path. If a shortcut is tested and well understood, we should recommend that it replace the existing process.
By the way, everyone in our group made it to the top, and it took us about two hours less than our guide originally estimated. We have our wonderful guide Palee to thank: he was a great PM, BA, knowledgeable SME, and overall great guy.

Tyrion the Trusted Advisor: What Game of Thrones Teaches Us about Influencing Without Authority

I have always loved the Game of Thrones TV series.

And what has fascinated me the most is the treatment of the Trusted Advisor, beautifully portrayed by Peter Dinklage as Tyrion Lannister. Tyrion embodies important ingredients of a trusted advisor who influences decision-makers, as we’ll see below (warning: some plot spoilers ahead).

To influence without authority, we need to establish trust, be prepared, and have courage

Simply put, it’s impossible to influence anyone who doesn’t trust us. In Game of Thrones (GOT), trusted advisors are called Hands, probably because they are really the right hand of the king or queen, and it is a highly powerful position. Hands have the ear of the ruler, but if the ruler doesn’t trust the Hand, watch out! In Season 1, for example, the Hand to King Robert Baratheon is Ned Stark, who reluctantly accepts the position. Although King Robert trusts him and accepts his advice, the queen does not. When the king dies, the queen and her ruthless son behead him in a shocking warning of what happens to advisors who are not trusted.

Tyrion, on the other hand, is not initially trusted by anyone. However, throughout the series he works to establish trust by being prepared before giving any advice to his queen, Daenerys Targaryen, and by having an overabundance of courage. Early in the show Tyrion is a voracious reader who does his homework, and his advice, as Daenerys slowly realizes, is usually sound. When she follows his advice, it almost always works (I know, fans, there are instances when Tyrion gets fooled). When she doesn’t listen to him, things don’t go well for her. For example, in the penultimate episode, Tyrion advises sparing the lives of innocents, but Daenerys rejects that advice, leading to her ultimate destruction. And as Hand, Tyrion shows unimaginable courage when he provides advice knowing that it’s unwanted, but also knowing it is absolutely the right course of action.

One more example of Tyrion’s courage. In the last episode of Season 8, Tyrion understands that he can no longer support a Queen who wants power so much that she is willing to do just about anything to get it. Although he knows that he will be arrested for “treason,” Tyrion cannot support such actions. In an act of defiance that he knows will condemn him to death by dragon fire, he deliberately resigns his post, taking off his Hand badge and throwing it away.

Our projects require us to build trust, to be prepared before giving advice to decision-makers, and to be courageous. In some organizations it takes a great deal of courage to be the bearer of bad news, as when we need to provide accurate project status or when we point out risks. Although the stakes are not as dire as in GOT, it still takes courage to recommend the right thing for the organization. Not all decision-makers want to hear from us about why the organization should move in a new direction, or develop a new process, or build a long-term solution when the organization wants short-term fixes. What gives us courage, of course, is knowing what we’re talking about. It’s having the facts and the statistics to back up our recommendations. It’s being prepared. It’s also the ability to articulate and sell our recommendations. When our recommendations turn out to help our organizations, we, like Tyrion, gain credibility and build trust.


To influence without authority, we need to provide advice to the decision-makers, but not own the decisions.

In an episode a few seasons ago Tyrion gave Daenerys a piece of advice that she refused. Tyrion then says to another advisor, Lord Varys, that he, Tyrion, can give the queen his advice, but he can’t force her to take it. In Season 7 Tyrion advises against killing traitors with dragon fire. When she kills them anyway, Tyrion agonizes over what he could have done to stop her.

This is what we call the trusted advisor’s dilemma. We need to provide advice: good, sound advice backed up with facts. But we are not the decision-maker. We can point out risks and consequences, but we cannot make the decisions ourselves. We want to make our advice so sound that if we know the decision-makers are off course we can convince them of another course of action, but that is not always possible. The only thing we can do is ensure that our recommendations are in the best interest of the organization and not promoting our own personal goals, even when those goals seem to conflict with the organization’s.

The trusted advisor’s dilemma: “We need to provide advice: good, sound advice backed up with facts. But we are not the decision-maker.”

Years ago I was a manager in the unenviable position of having to eliminate an entire department. The department supervisor remained positive throughout, recommending shut-down and transfer processes. Somehow, he communicated the business need for the shut-down and his own optimism to the staff. In the end he was promoted and none of the staff lost their jobs.

Respect, authenticity, and empathy help us to influence without authority.

Throughout the 8 seasons of Game of Thrones, Tyrion experiences tremendous growth. He goes from being not much more than a selfish, heavy-drinking womanizer to a Hand who agonizes over the consequences of his advice, his conflicting loyalties, and giving advice that truly benefits the realm, rather than what’s best for him. He becomes a true friend, caring brother, and overall good guy. He shows respect for the would-be Queen, even when she makes terrible decisions. He demonstrates authenticity (we can see his pain), and empathy for his friends. By the end of the series Tyrion becomes perhaps the most influential character.

In our organizations we have greater influence when our approach is respectful, authentic, and empathetic. Expertise alone does not create credibility. Most people do not relate well to “know-it-alls,” and trying to showcase our expertise rarely builds credibility. We are most successful when we use our expertise to support the organization, rather than for personal gain or visibility.

To summarize, as trusted advisors we provide our advice, but we do not make decisions. We build trust in many ways, including establishing credibility by being prepared when we make recommendations, being respectful and empathetic when giving our advice, and by showing courage.