

The Philosophical Data Analyst (Part 2): The Problem of Induction

Most organizations understand data is an asset, providing a rich resource that can be analyzed to unearth predictive insights. Many organizations invest heavily in their data operations to ensure the ongoing completeness, integrity, and accuracy of collected data. However, regardless of how complete, correct, and/or unbiased collected data may be, there are limits as to what insights can be gleaned from a given dataset. Acting on analytical insights outside these limits introduces risk.

This article describes the problem of induction – assuming you know more than you do – in the context of predictive analytics. It describes how the problem can be exacerbated by seemingly improbable or unlikely events. Finally, it outlines how PESTLE can be used to explore the limitations of the analysis, allowing Business Data Analysts to assess and mitigate risks.


The Problem of Induction

The Problem of Induction is not new. It has been explored by philosophers since at least the 18th century (Henderson, 2018).  Simplistically, the problem is the illusion of understanding – when people think they know what is going on when the situation is more complicated (or random) than they realize (Taleb, pg. 8).

In his book The Black Swan, Nassim Nicholas Taleb uses the plight of a Thanksgiving turkey to illustrate the problem (Taleb, pgs. 40-41). Assume a given turkey is fed every morning for 1000 days. On the first day, being fed is probably a welcome surprise to the turkey. After a few days, the turkey will start to notice a pattern and become more confident that it will be fed the following day. Over the course of the 1000 days, the turkey’s level of confidence will grow to the point that it expects – indeed is quite certain – that it will be fed the following day.

However, on day 1001, the turkey is not fed and is instead slaughtered – an event that would seem completely unpredictable (a black swan) to the turkey given its experience over the prior 1000 days. Of course, if the turkey had known about the tradition of Thanksgiving, it may have been able to factor this into its predictions. Alas, this information was outside the realm of the turkey’s knowledge and experience.

Black swans are used by Taleb to describe events that seem improbable and/or unfathomable. Before the “discovery” of Australia, the idea that a swan could be anything other than white seemed preposterous to Europeans. However, a single sighting of a black swan by European settlers was enough to invalidate the understanding of an entire continent (although for the local Whadjuk Noogar people*, the idea of a swan being anything but black would have been equally preposterous – perhaps making the coming of European settlers their white swan event?)

The year 2020 provided us with an example of a black swan that no doubt exacerbated the problem of induction for many organizations. Most organizations failed to foresee the rise of COVID and its impact simply because it was outside their lived experience – a black swan. Neither its occurrence nor impact could be reliably inferred from the information they had – just as the turkey could not foresee its demise. As such, predictions for 2020 based on historical information were unlikely to be accurate. 

Note that this does not mean the pandemic and its impact could not be predicted to some degree. In the same way that the Whadjuk Noogar have always known swans could be black, health and virology experts have been publicly predicting and even planning for a pandemic for some time (see Rosling 2018, pgs. 237-238 and NHS England, 2013). In addition, available analysis from previous outbreaks of disease, such as SARS and Swine Flu, could have provided some insights into the impact of a pandemic (see Smith et al. 2009, Australian Government Treasury 2007). However, until 2020, a pandemic was not part of the lived experience of the vast majority of organizations. Therefore, it was unlikely to be factored into their analysis and planning.

Containing the Problem of Induction: Know Your Limits

In the context of predictive data analytics, the problem of induction is often exacerbated by:

  • Assumed Continuity – when analysis implicitly assumes that the conditions under which data were collected and analyzed will be sustained. An organization may believe they are thinking about the future, but “they are usually just extrapolating the present, and that’s not the same thing at all” (Lovelock, 2020).
  • Information Blind Spot – this is where information is not considered in or has been omitted from the analysis.

Relying on predictive analytics without understanding its limitations can lead to a false level of confidence in predictions. This may mean organizations continue to ‘trust’ analysis beyond the point of reliability, taking longer to respond to changes in conditions. (It worked before; it should work now!)

Techniques such as PESTLE can provide an effective frame for exploring the limits of predictive analysis. By assessing the reliability of analysis under different scenarios, Business Data Analysts can understand and communicate limitations. The table below uses PESTLE to help identify some high-level scenarios to explore.


  • Political: Change in government; Market intervention (think quantitative easing); Legislative changes; Change in political stability (think Arab Spring); Act of terrorism (think 9/11); War
  • Economic: Interest rate change; Cost-of-living changes; Global trade and/or supply chain issues; Recession or economic shock
  • Social: Health care or housing availability crisis; Social movement (think Me Too, Black Lives Matter); Mass migration (think Syrian Civil War); Epidemic/Pandemic
  • Technological: ICT security incident (think ransomware); Disruptive technology (think Uber, Airbnb); Scientific discovery/scrutiny (think smoking and cancer); Severe defect/breakdown (think Boeing 737 MAX)
  • Legal: New regulations/de-regulation; Employee malpractice; Legal scrutiny
  • Environmental: Infrastructure outage; Natural disaster; Man-made disaster

The technique can be used to identify more detailed scenarios that are applicable to a given organization. The idea is to think of a range of scenarios, from the probable to the seemingly improbable. Once identified, you can assess if analytical outputs are likely to be valid under each of the scenarios, thus identifying the ‘bounds’ within which the analysis can be applied. For example, you may deem that analytical outputs would continue to be reliable in the event of a democratic change of government, but not in the event of a military coup.
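As a minimal sketch (the scenario names and validity judgments below are illustrative, not prescriptive), the scenario-by-scenario assessment described above can be captured as simple structured data, making the ‘bounds’ of the analysis explicit and queryable:

```python
# Each scenario is tagged with its PESTLE category and a judgment on whether
# the model's outputs remain reliable if the scenario occurs.
scenarios = {
    "Democratic change of government": {"category": "Political", "model_valid": True},
    "Military coup": {"category": "Political", "model_valid": False},
    "Interest rate change": {"category": "Economic", "model_valid": True},
    "Recession or economic shock": {"category": "Economic", "model_valid": False},
    "Epidemic/pandemic": {"category": "Social", "model_valid": False},
}

def out_of_bounds(scenarios):
    """Return the scenarios that should trigger a review of the analysis."""
    return sorted(name for name, s in scenarios.items() if not s["model_valid"])

print(out_of_bounds(scenarios))
```

A register like this doubles as a communication tool: decision-makers can see at a glance which events fall outside the limits of the analysis.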

Once the limits of analysis are identified, steps can be taken to:

  • Identify additional information sources that may be used to strengthen analysis
  • Identify types of events that should trigger a review of analytical models, measures, and outputs
  • Identify and mitigate any risks posed by the scenario
  • Inform decision-makers of the limitations of the analysis so that it may be factored into decision-making.


Predictive analytics can provide useful insights to support decision-making. However, the conditions under which data is collected and analyzed naturally limit the situations under which insights should be applied. Understanding these limits can prevent analytical results from being relied on in circumstances outside of these bounds.

In the end, it’s better to have some understanding of what you don’t know than to think you know what you don’t.

*The author acknowledges the Whadjuk Noogar people, the traditional custodians of the Derbarl Yerrigan (or the Swan River), and pays her respects to Elders past, present, and emerging.


  1. Taleb, Nassim Nicholas, The Black Swan: The Impact of the Highly Improbable, Random House, 2007.
  2. Rosling H., Rosling O., Rosling Ronnlund A., Factfulness: Ten reasons we’re wrong about the world – and why things are better than you think, Flatiron Books, 2018.
  3. Henderson, Leah, The Problem of Induction, Stanford Encyclopedia of Philosophy, March 2018. (Last Accessed January 2022).
  4. Lovelock, Christina, The Power of Scenario Planning, BA Times, July 2020. (Last Accessed January 2022).
  5. Operating Framework for Managing the Response to Pandemic Influenza, NHS England, 2013. (Last Accessed January 2022).
  6. The economic impact of Severe Acute Respiratory Syndrome (SARS), The Australian Government Treasury, 2007. (Last Accessed January 2022).
  7. Smith, Keogh-Brown, Barnett, Tait, The economy-wide impact of pandemic influenza on the UK: a computable general equilibrium modelling experiment, The BMJ, 2009. (Last Accessed January 2022).

The Philosophical Data Analyst: Some Variables are from Extremistan

Most organizations understand data as an asset, providing a rich resource that can be analyzed to unearth both descriptive and predictive insights. Data-driven decision-making is promoted by many organizations. More and more, organizations are processing and analyzing data in real-time, automating operations based on results, making them reliant not only on the data but on the methods used to analyze it.

Many organizations invest heavily in their data operations to ensure the ongoing completeness, integrity, and accuracy of collected data. However, regardless of how complete, correct, and/or unbiased collected data may be, there are limits as to what insights can be gleaned from a given dataset. Acting on analytical insights outside these limits introduces risk.


This article describes risks posed by data variables that are susceptible to seemingly improbable or extreme values. It explains the characteristics of susceptible data variables and outlines a simple technique for identifying them. Once identified, Business Data Analysts can assess the impact of using these variables in analysis, allowing any risks to be quantified and mitigated and/or alternative analytical approaches explored.

Mediocristan vs. Extremistan

It is common to describe the range and frequency of variable values using statistical distribution patterns. The ability to describe data variables using a particular distribution pattern is a prerequisite for many data modeling methods. The most common statistical distribution pattern is the normal distribution, or bell curve.


For a normally distributed data variable, the frequency of values is symmetrically clustered around a mean – the further away from the mean, the less likely the data variable will take on that value. However, in some cases, data variables may appear to hold a certain statistical distribution pattern when, in fact, they may legitimately take on values that (given the distribution pattern) are deemed improbable or extreme – particularly in cases where a variable is subject to complex and/or unknown external influences.
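As an illustration of how tightly a normal distribution concentrates values around its mean, Python’s standard library can compute how rare ‘far from the mean’ values should be (the mean and standard deviation below are assumed, height-like figures):

```python
from statistics import NormalDist

# Illustrative height-like variable: mean 170 cm, standard deviation 10 cm.
heights = NormalDist(mu=170, sigma=10)

# Under a normal distribution, the further a value lies from the mean, the
# less likely it becomes. Beyond three standard deviations (here, 200 cm),
# values are expected only about 0.1% of the time.
p_beyond_3_sigma = 1 - heights.cdf(170 + 3 * 10)
print(f"P(value > 200 cm) = {p_beyond_3_sigma:.5f}")
```

If a supposedly normal variable produces such values far more often than this, the distributional assumption itself deserves scrutiny.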

In The Black Swan, Taleb introduces the idea of Mediocristan and Extremistan. Taleb defines Mediocristan as subject to the routine, obvious and predicted, while Extremistan is subject to the singular, accidental, and unseen (Taleb, pg. 35).  Applied to data analysis, data variables from Extremistan are susceptible to extreme and/or unpredictable values, while those from Mediocristan are not. The argument is that data from Extremistan cannot be accurately described using common statistical distribution patterns – and certainly cannot be described using normal distribution as value frequency is not symmetrical.

For example, take a sample of 1000 randomly selected human beings. If you were to calculate the average height of the group, what would you expect the answer to be? Now add the tallest human on earth to the sample (which will now comprise 1001 humans) and recalculate the average height of the group. How much would you expect the result to change? The answer is not a lot as there is a limit as to how tall a human can be. While adding the tallest human on earth to the sample would cause the overall average to rise, the impact would be minor. Human height is an example of a variable from Mediocristan.
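The height thought experiment can be sketched in a few lines of Python, using assumed figures (the sample is simplified to 1000 people all at a 170 cm mean; 272 cm is roughly the tallest human height ever recorded):

```python
# Simplified sample: 1000 people, all at the assumed mean height of 170 cm.
sample = [170.0] * 1000
mean_before = sum(sample) / len(sample)

sample.append(272.0)  # add the tallest human
mean_after = sum(sample) / len(sample)

# The mean barely moves: human height is bounded, so one extreme
# value cannot dominate the average.
print(round(mean_before, 2), round(mean_after, 2))
```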

Now perform a similar thought experiment, except this time use net worth instead of height. What would you expect to happen to the calculated average net worth of a randomly selected group of 1000 human beings if you were to add the richest person in the world to the group? In 2021, Forbes identified Jeff Bezos as having the highest net worth in the world, estimated to be around US$177 billion. You would expect the average from the sample that included Jeff Bezos to be dramatically higher compared to one that didn’t. Net worth is an example of a variable from Extremistan.
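The net-worth version of the experiment, again with assumed figures, makes the contrast stark:

```python
# Simplified sample: 1000 people, each with an assumed net worth of US$100,000,
# plus one individual at the US$177 billion Forbes estimate cited above.
sample = [100_000.0] * 1000
mean_before = sum(sample) / len(sample)

sample.append(177_000_000_000.0)
mean_after = sum(sample) / len(sample)

# One extreme value dominates: the mean jumps from $100k to roughly $177 million.
print(f"before: ${mean_before:,.0f}  after: ${mean_after:,.0f}")
```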

The problem is that if you were to take a random sample of 1000 humans from earth, what is the likelihood that it would include Jeff Bezos? Or Bill Gates? Or Jay-Z? Or the Queen? Or anyone else with a much higher net worth than average? And if the sample did happen to include one of these individuals, what would you do with the offending value? Would you treat it as a true representation of the entity being described? Or would you discard it as an outlier?

Extreme values can be mistaken for outliers when they are in fact indicative of the behavior of the entity they are representing. Analysis that does not account for Extremistan variables properly may prove unreliable – particularly when accurate but extreme values enter the underlying data. Extremistan variables may also be subject to extreme changes in value as a result of seemingly unlikely or improbable events (for example, a sudden stock market shock impacting the net worth of some individuals more than others).

What to do in Extremistan?

Taleb proposes a method for modeling Extremistan variables based on the work of the mathematician Mandelbrot, the pioneer of fractal geometry. However, the mathematics is complex and beyond the capabilities of most organizations. (As most Business Data Analysts would know, explaining analysis based on simple mathematics to stakeholders can be a struggle – let alone analysis that uses more complex mathematical modeling techniques.) Understanding Extremistan variables, how they contribute to the analysis, and mitigating any risks their use poses is a more realistic goal for most organizations.

Start by classifying data variables into the categories ‘Mediocristan’ and ‘Extremistan’. The table below provides some guidance on the characteristics of Mediocristan and Extremistan variables.





Mediocristan

  • The most typical member is mediocre
  • Winner gets a small segment of the pie
  • Impervious to Black Swan (seemingly improbable) events
  • Often corresponds to physical quantities with limits
  • Physical, naturally occurring phenomena are often from Mediocristan
  • Examples include height, weight, age, calorie consumption, IQ, mortality rates…

Extremistan

  • There is no ‘typical’ member
  • Winner takes all
  • Vulnerable to Black Swan (seemingly improbable) events
  • Often corresponds to numbers with no limits
  • Variables that describe social, man-made aspects of human society are often from Extremistan
  • Examples include income, house prices, number of social media followers, financial markets, book sales by author, damage caused by natural disasters…

(Adapted from Taleb, pgs. 35-36)

Once classified, Business Data Analysts can identify where and when Extremistan variables are used in the analysis, and whether they pose any risk to the accuracy/reliability of analytical outputs. In many cases, this can be done simply by identifying or estimating extreme data points (such as the Jeff Bezos example above), adding them to the underlying data, and assessing their impact on the analysis.
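A minimal sketch of that stress test, with illustrative figures: inject an estimated extreme value into the data and measure how far a summary statistic moves.

```python
def stress_test(values, extreme_value):
    """Return the relative change in the mean after injecting one extreme point."""
    baseline = sum(values) / len(values)
    stressed = (sum(values) + extreme_value) / (len(values) + 1)
    return (stressed - baseline) / baseline

# Illustrative income data: 1000 people at an assumed $60,000 each.
incomes = [60_000.0] * 1000
impact = stress_test(incomes, 177_000_000_000.0)  # inject a billionaire
print(f"mean shifts by {impact:+.0%}")
```

A large shift signals that the statistic (and any analysis built on it) is vulnerable to the Extremistan variable; a negligible shift suggests the variable behaves more like one from Mediocristan.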

Note that using Extremistan variables in analysis is not necessarily a problem – it depends on how they have been analyzed and the insights that are drawn from the analysis. Some analytical and modeling techniques will be able to deal with Extremistan variables without introducing much risk. However, be wary of analysis that assumes Extremistan variables to be normally distributed and/or simply treats legitimate extreme values as outliers.
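One simple warning sign (a rough heuristic, not a formal normality test) is a large gap between mean and median, which indicates the heavy right tail typical of Extremistan variables:

```python
from statistics import mean, median

def skew_ratio(values):
    """Mean divided by median: near 1.0 for symmetric data, far above 1.0
    when a heavy right tail drags the mean upwards."""
    return mean(values) / median(values)

symmetric = [168, 169, 170, 171, 172]            # height-like values
heavy_tail = [50_000] * 99 + [177_000_000_000]   # net-worth-like values

print(skew_ratio(symmetric))   # ~1.0 — normality may be a reasonable assumption
print(skew_ratio(heavy_tail))  # enormous — treat normality assumptions with suspicion
```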

When classifying variables, it is also important to consider the scope of the data collection and the context in which it is being analyzed. Take, for example, a sample of bakers who live in a certain region. You may want to use data collected from this sample to predict the income of other bakers in the same region. Assuming there are no issues with data quality and/or data collection, baker income is likely to be a variable from Mediocristan, as there is a limit to how much bread a baker can bake in a day, and price/demand variability for baked goods is usually low. On the other hand, take the same example and replace ‘baker’ with ‘social media influencer’. A social media influencer’s income is subject to a more complex and ephemeral range of factors, such as number of clicks, ‘fame’, the popularity of social media platforms, etc. As such, social media influencer income is more likely to be from Extremistan.


Data is an asset. Data-driven decision-making can help increase efficiency, drive innovation, and reduce bias in decision-making. However, it is important to understand that there are limits to the insights that can be drawn from a given dataset. By identifying variables that may be subject to extremes, analysts can ensure these variables are appropriately accounted for in analysis by assessing any risks, and ensuring analytical insights are considered in context.

But know this – variables from Extremistan are anything but normal!


  1. Taleb, Nassim Nicholas, The Black Swan: The Impact of the Highly Improbable, Random House, 2007.
  2. Guide to Business Data Analytics: Getting Better Insights. Guide Better Informed Decision Making. IIBA, 2020.
  3. Normal Distribution, 2021. (Last accessed Jan 2022).
  4. Dolan, Kerry A., Forbes 35th Annual World’s Billionaires List: Facts and Figures 2021, Forbes, Apr 2021. (Last accessed Jan 2022).
  5. Penn, Amanda, Extremistan: Why Improbable Events Have a Huge Impact, Nov 2019. (Blog last accessed Jan 2022).

AI and the Digital BA—What’s It All About? Part 3

This is the last of a three-part article written with answers to some of the most frequently-asked questions I get about artificial intelligence (AI).

In Part 1, I addressed some common terms and issues related to AI as it is used in a business context. In part 2, I focused on the various roles that BAs play on AI efforts. In this article I will discuss various subjects like the need for AI translators, the importance of AI governance, and the digital PM. As with Parts 1 and 2, I will use a Q/A format.

Why is the role of AI translator so important?

Recently there have been numerous articles in journals like Forbes and Harvard Business Review (HBR) about the need for an AI translator role, someone who acts as a go-between between the organization’s data scientist and strategic decision-makers. These articles don’t mention the BA specifically, but their descriptions are consistent and describe a role that BAs have routinely played—that of ensuring that business stakeholders and technical staff understand each other. I think the AI translator is a perfect role for any experienced BA. Data scientists need to understand the strategic direction of the organization, the business need for the initiative, and the related business rules that will be required on many of the AI systems. Business stakeholders need to understand the impacts of their decisions.

In the early days of AI, it was not uncommon for data scientists to guess at the business rules and make AI-related decisions themselves. This did not go well, as documented in Computer World.[i] The next phase was to have data scientists get input directly from the business. This, too, did not go well. So some organizations have introduced an intermediary role—the AI translator. They understand that they need to have someone who understands the importance of business input and who can also speak comfortably with the data scientists—a translator role. That’s where the BA comes in. We’ve always been translators. Translating the requirements into designs and back to ensure stakeholders get the functionality they ask for and really need. Yes, this is a perfect role for the BA and one that can greatly contribute to successful AI projects.

How much governance is needed on AI initiatives?

Many of the challenges on AI initiatives are no different from those on other projects. In a survey published in Information Magazine in July 2019, respondents included these factors as the major challenges:[ii]

  • 50% – Lack of leadership buy-in
  • 49% – Lack of metrics, especially surrounding data (bad data, ownership, etc.)
  • 37% – Internal conflict
  • 31% – Time required to implement (takes longer than expected)
  • 29% – Unexpected costs

What do these factors have to do with governance? Each one directly relates.

  • Executive buy-in. Among other things, no executive buy-in makes it almost impossible to reach consensus on the need for and nature of governance itself.
  • Data metrics. Governance guides such metrics as how accurate historical data needs to be.
  • Internal conflict. Governance establishes guiding principles around conflict, how it will be resolved, and by whom.
  • Time and cost overruns. Project governance will help with such things as keeping projects on track, how and when to communicate when they’re not, and even what “longer than expected” means, and so forth.


The article goes on to suggest that in order to have successful AI initiatives, organizations need to hire data stewards to manage and coordinate the organization’s data. The data steward would be a steward in the real sense of that word: someone to manage, administer, and generally take care of the data. In order to manage and administer, this role needs to help the organization determine how that governance will work and then be responsible for it. Sounds like a BA!

In a podcast, cited in Harvard Business Review (HBR) in August 2019, De Kai and Joanna Bryson join Azeem Azhar to discuss the importance of governance on AI initiatives.[iii] They define governance as coordinating resources involving both internal AI modules and humans. They suggest that there needs to be an independent oversight group with the authority to apply agreed-upon governance, and I think the seasoned BA is in a perfect position to facilitate this group.

Is there such a thing as a digital PM and if so, how does that role differ from a digital BA?

Digital BAs are similar to all BAs in that they do BA tasks, use BA techniques, and need the same BA competencies (see Part 1). Likewise, digital PMs do PM tasks, use PM techniques, and need PM competencies. They work with the sponsor to charter AI projects and help organizations implement them. Although not yet a common role or title, having someone with experience managing AI projects can be valuable to organizations. Again, they’ll still do their tasks and use their techniques appropriate to PM work, but being a PM on an AI project and coordinating all the resources entailed on such an initiative will most certainly require a healthy working knowledge of AI.

Another way to look at digital PMs is that they use AI systems and tools to manage AI projects. In an article in Forbes Magazine in July 2019, the author focuses on the use of automated AI systems and tools to help digital PMs manage their projects.[iv] He says, “AI, with its unique ability to monitor patterns, is a capable assistant to PMs.” In addition to helping with routine admin tasks, AI can provide all kinds of predictive analytics. AI tools can look at hidden complexities and all the moving parts inherent in a complex project or program and predict areas of concern, from project slippage to team members’ behavior and more.

The digital PM, then, is one who not only takes advantage of AI tools to do a better job of managing projects, but also has enough AI expertise to manage complex AI projects.

Does “digital” have to be related to “AI?”

In the past, the term “digital” was used broadly. It referred to any digital project, like development of a website, digital marketing, or developing the organization’s presence on social media. Nowadays the term is generally used to refer to “AI,” which encompasses all things related to machine learning, predictive analytics, and data mining. More recently the terms “AIs” and “AI systems” are also commonly used.

I hope you have enjoyed this three-part series. Look for more AI-related content in the future.


[i] Robert Mitchell, Computer World, July 2013.

[ii] Data Governance in the Age of AI, Gienna Shaw, Information Magazine, July 19, 2019.

[iii] Podcast with De Kai and Joanna Bryson, Harvard Business Review, August 2019.

[iv] Tom Schmelzer, Forbes, July 30, 2019.

AI and the Digital BA—What’s It All About? Part 2

This is the second of a three-part article written with answers to some of the most frequently asked questions I get about artificial intelligence (AI).

In part 1 I addressed some common terms and issues relating to AI as it is used in a business rather than technical context. In this article I will focus on the various roles the BA plays to help organizations with their AI initiatives. As with the last article, I will use a Question and Answer format.

Quick Review of Part 1

What is AI?

AI is an umbrella term that encompasses all digital technologies, like machine learning and predictive analytics, which are used to make predictions and recommendations using massive amounts of data. In short, it’s machines doing human tasks that range from simple to complex.

What is a digital business analyst (BA)?

A digital BA is a trusted advisor who helps organizations with their AI strategies. Rather than developing the strategies, they provide their advice about impacts to and value of AI initiatives.

What skills does a digital BA need?

The skills don’t change, but the subject matter is incredibly complex.

How successful are most companies with their AI efforts?

Not very. Most AI initiatives totally miss the mark and result in all kinds of issues, not the least of which is financial. A recent Forbes article details some of the resulting issues.[i]

What is digital fluency?

Digital fluency is defined as “The ability to interpret information, discover meaning, design content, construct knowledge, and communicate ideas in a digitally connected world.” [ii]

Part 2

What is the role of the BA on digital projects?

A digital BA can be involved in many aspects of an AI initiative. Some of the roles that a BA may play include one, several, or all of these:

    • Strategic BA. In this role BAs help organizations determine the value and direction of the AI effort. Some of the specific outputs can include:
      • Business case on the value of the AI initiative
      • Recommendation(s) on the best strategic approach to the AI initiative
      • High-level implementation plan
      • Pitfalls to avoid
      • First look at state of the data to be used
      • High-level governance plan


  • AI coordinator who implements the AI strategies. In this role the BA coordinates AI initiatives across project and portfolios.
  • BA on a project(s) that is part of the AI initiative. Although this role is similar to any BA role, there are some differences. The BA will need at least working knowledge of, if not expertise in, AI.
  • Business data analyst. In this capacity the BA may
    • Analyze the current data to determine how much is useable, how much needs to be cleansed, and how much needs to be collected
    • Recommend an approach to cleansing the dirty data
    • Help determine the data needed for predictive analysis and other AI functions
    • Interpret statistical analysis resulting from AI functions
    • Be an AI translator to facilitate communications between the data scientist and the business stakeholders.

What’s the difference between a data scientist, data analyst, and BA who works a lot with data?

These three roles can be confusing. At first glance we might not recognize the differences or understand why the distinctions are important, but they are. I discussed the possible roles of the BA above, so here is a brief description of the other two.

Let’s take the easy one first—the data scientist. Not that the role is easy, it’s just easier to explain why this one is different from the other two. The data scientist is the most technical and needs the most expertise. About three-fourths have master’s degrees in mathematics and statistical analysis. Over half have PhDs.

Data scientists create the predictive models. They determine what the machines need to do in order to meet the business objectives. They decide which algorithms are best given the objective of the AI initiative so that the machines can be trained to learn. Having said that, unless there is good governance and substantial input from business stakeholders and decision-makers, those algorithms have the potential to be created with built-in biases. Likewise, they may not be the best ones to solve the business problem.

The data analyst. This is really a subset of the BA role. I described some of the high-level functions above. On AI projects it’s necessary to focus on the data because it’s so integral to the success of the effort. Machines learn based on historical data. Issues like dirty and redundant data, as well as ownership of the data aren’t easy and require a strong facilitator and influencer to resolve. This data analyst role is such an important role that IIBA has created a new certification—the certification in business data analysis (CBDA).

What are some of the business and technical pitfalls that the digital BA should be aware of?

Here are some of the big ones:


  • Beginning with AI as a solution without a defined problem
  • No real AI strategy
  • Unrealistic expectations of what AI can do for the organization

Data and technology

  • Dirty data
  • Business processes don’t support the technology
  • Weak security

Organizational and communications pitfalls

  • Siloed and cumbersome business architectures
  • Inflexible organizational structures
  • The data scientists create the business rules
  • The data scientists talk directly to the business and the business does not understand
  • Confusing roles on AI projects
  • Built-in biases in the algorithms

In Part 3 of this article, we will explore other aspects of how BAs can help organizations get the most value from their AI initiatives. Some of the topics we will cover include the need for governance on AI efforts, the recognition of the importance of the AI translator role, the digital PM, and more. 



AI and the Digital BA—What’s It All About? Part 1

In this three-part article I’ll answer some of the questions that I am frequently asked about artificial intelligence (AI) and the role of the BA (business analyst) in helping guide organizations in developing and implementing their AI strategies. In Part 1 I’ll address some common terms and issues relating to AI as it is used in a business context. In Part 2 I’ll focus on various roles BAs can play on AI initiatives and detail some of the more common pitfalls. In Part 3 I’ll discuss various topics including the need for governance on AI efforts, the digital PM, and the AI translator.

What is AI?

These days the term “AI” is being used as an umbrella term that encompasses all digital technologies, such as machine learning, predictive analytics, RPA (robotics process automation), etc. In today’s common usage we think of AI at its most fundamental level—any time machines act like humans, that’s an aspect of AI.

Machine learning, another common term, is a kind of AI. When machines use predictive models and massive amounts of historical data, they learn, make predictions, and provide insights. As new data comes in, they keep learning and improving and are able to make better predictions and provide better insights

It seems like most organizations are jumping on the AI bandwagon. Why is AI so important?

During the dot-com boom in the late 90s I asked the same question of a presenter talking about ecommerce. She explained it to me by saying, “everyone’s looking for that next get rich quick scheme. A hundred years ago it was the gold rush. Today it’s ecommerce.” While I would never say that AI is a get rich quick scheme, there is a nugget of truth in the comparison (pardon the gold rush reference). But I would phrase it differently. Organizations realize that they need to adapt to their environment in order to survive. Survival of the fittest, if you will. And today’s environment requires at least some element of AI.

How successful are most companies with their AI efforts?

In a recent survey by Harvard Business Review,[i] 72% of organizations said they were not getting the value they expected from their AI projects. The article stated that 40% of the problems were caused by an ill-defined problem and/or product. In other words, what’s typically lacking is something that BAs do so well—define the business problem to be solved, recommend solutions, and then define the requirements of the solution. Neglecting these things can wreak havoc on any project, as these statistics point out. According to the same article, another 40% of the issues are due to bad data, another of the BA’s many bailiwicks. Only 20% of the problems are due to the algorithms themselves, but that’s where many companies put 80% of their resources.

BAs can help organizations avoid 80% of the pitfalls mentioned in this survey. When organizations involve BAs in recommending solid AI strategies and implementation plans, these efforts are more likely to succeed.

What is digital fluency?

Digital fluency is defined as “The ability to effectively and ethically interpret information, discover meaning, design content, construct knowledge, and communicate ideas in a digitally connected world.” [ii] There are a few points in this definition that are worth highlighting.

  • Instead of efficiency and effectiveness, the emphasis is on being effective and ethical. This is because ethics is so important in our digital world. The concept of digital trust has risen in priority, so this competency requires not just speaking the digital language, but also understanding the ethical impacts of AI on organizational decisions.
  • Digital fluency requires the ability to do what BAs have always done. We discover meaning by eliciting information in a variety of ways. We design content when we model the future state. We construct knowledge by connecting the dots and putting the disparate pieces of information together. We communicate ideas to a variety of stakeholders in a variety of ways, always translating and interpreting the technical complexity so that stakeholders can understand and make good decisions.
  • Our world is digitally connected, so we need to do what we have always done—considering that it will be done by a broader range of stakeholders anywhere, anytime, on any device.


What is a digital business analyst (BA)?

I’m fond of saying that a digital BA is a BA who helps organizations figure out the best approach to take on digital transformation projects. Digital BAs help organizations make the best use of AI. They help organizations recognize and avoid common implementation pitfalls and risks.

A recent study by IIBA in conjunction with UST Global describes the digital BA as someone who guides organizations as they develop their AI strategies. Once that strategy is created, the digital BA “validates, supports, and executes” that strategy. [iii]

In other words, a digital BA is a trusted advisor who helps organizations with their AI strategies. Importantly, they do not create the digital strategies themselves; they provide advice through expert recommendations.

What skills does a digital BA need?

We still need to do business analysis work, using business analysis techniques. We don’t need a Ph.D. in statistical analysis, since BAs do not create the predictive models. But we will need to talk to the data scientists, who do create the models, so digital fluency is important. Facilitation, conflict resolution, business and industry knowledge, the ability to influence, and the ability to analyze, think critically, and solve problems are key competencies as well.

To summarize, the role of the digital BA is becoming an essential part of organizations’ AI efforts. In this article we have answered common questions related to this important role. Look for answers to other common questions in Parts 2 and 3 in the upcoming months.


[i] Harvard Business Review, March 2019


[iii] IIBA and UST Global