
Author: Elizabeth Larson

Elizabeth Larson has been the CEO of Watermark Learning as well as a consultant and advisor for Educate 360. She has over 35 years of experience in project management and business analysis. Elizabeth has co-authored five books and chapters published in four additional books, as well as articles that appear regularly in BA Times and Project Times. Elizabeth was a lead author/expert reviewer on all editions of the BABOK® Guide, as well as several of the PMI standards. Elizabeth enjoys traveling, hiking, reading, and spending time with her six grandsons and one granddaughter.

The Courage to Try Something Old – Part 2: Scribing

In the previous article I wrote about the importance of facilitating requirements meetings and why it can take courage to do so. In this article I’ll discuss another skill that has fallen in and out of favor over the years—scribing.

Many ancient societies valued scribes. Scribes typically were at the center of all activities and highly regarded in the areas of government, law, military, and religion. Today’s scribes are not so universally regarded, particularly in our world of PMs and BAs. Effective scribes should be at the center of requirements activities, but most often they are not. We’re often in the back of the room or off to the side. We’re not always introduced in virtual meetings. Many organizations view scribes simply as passive note-takers and unfortunately that’s how many scribes view themselves. But I have found that scribes are essential to the success of the project.


What is a scribe and what does a scribe do? A scribe is the role that provides documentation, either formal or informal, and anyone can play that role. PMs, BAs, facilitators, business owners, QA analysts, programmers—it doesn’t matter what the title is. Any time we’re documenting our PM or BA work we’re scribing.  Our output can include recaps of sponsor and other stakeholder meetings, requirements (models, textual, etc.), assessments, gap analyses, and business cases to cite just a few.


What skills does a scribe need? Like every effective PM and BA, the scribe has to create structure from chaos. That’s not easy so scribes need a variety of skills, such as listening, absorbing, clarifying, and writing. But perhaps most important is critical thinking, which comprises many skills including:

  • Conceptualizing – grasping what’s being discussed because we have enough context[i]
  • Applying – taking what we know from our experience and using it in new situations.
  • Synthesizing – absorbing lots of information, processing it, and making sense of it immediately.[ii]
  • Evaluating – knowing what’s important and what’s not, what works and what doesn’t.[iii]




Why do we need scribes? Documentation is important if for no other reason than that it saves time. We cannot possibly remember all the salient topics of our project and requirements meetings. Documentation prevents us from revisiting, again and again, the important decisions already made, who should complete which action items, and by when.


How much courage does it take to scribe?  Why in the world would it take courage to scribe? Because the most common scribing pitfalls relate to courage. I am often asked questions such as these:

  1. What if the PM and/or team thinks it’s a waste of time to have a scribe?
  2. What should I do when the facilitator wants to “take notes,” but in the end, much of the meeting is lost because the notes are too sketchy?
  3. What should I do when I’ve been told to sit in the back and be silent when I take notes? Most of the time I have questions or want to clarify what’s been decided, but I’m told that asking questions will take too much time.
  4. What should I do when I’m asked to distribute the documentation in an unreasonable time frame?
  5. I know it’s important to recap the highlights of my scribing output at the end of the meeting, but we never seem to have time. Our discussions always run over.


If we are too timid to address these issues, we will be less useful not only to our project team but to the entire organization. Tackling them, though, takes courage. We need to be effective at influencing, and courage is a main component of influence. We need to ensure that everyone understands the importance of both scribing and the scribe role, and it takes courage to point these out. It takes courage to speak up about the risks of not having scribes in organizations that don’t value them, and to link an unsatisfactory product delivered to stakeholders to the lack of effective scribing. And because it takes time to be an effective scribe, we need to advise including scribing tasks in project planning.

Finally, as scribes we need to be neutral and not have a vested interest in the outcome of the meeting. As we know, the person with the pen has the power and can rewrite the project’s history. Let’s not sneak in a couple of our own or our sponsor’s favorite requirements, or conveniently forget any because it’s easier than seeking a scope change. And there’s no need to document every conversation—the key items like decisions and action items will do. When done well, scribing is a thing of beauty. When not, it might well be tossed out with other old but necessary techniques—definitely not in the interest of either the project or the organization.

[i] This often comes from past experience and is one of the reasons I’m not in favor of “neutral” scribes.
[ii] This is one of my favorite scribe skills because it is essential in a requirements workshop where there’s so much happening at the same time.
[iii] For the four basic concepts.

Critical Skills Needed for Project Success

Part 1 – Elicitation

This article is the first in a series I’ll be writing about critical skills that all project managers (PMs) and business analysts (BAs) need for success. We need these skills regardless of the type of project we’re on, the industry we’re in, the technology we use, or the methodology we follow. Each of these skills requires a combination of what are commonly called hard skills with those needed to work effectively with others.

This first article is about elicitation. It seems easy. After all, what’s so difficult about asking stakeholders questions? Elicitation, of course, is far more than the questions we ask. When all is said and done, it’s about learning. We learn what our stakeholders want, what they need, and hardest of all, what they expect by asking really good questions and listening to what they have to say with great attention. It’s tricky, though. We can’t do what I did early in my career when I tried to develop a list of requirements by introducing myself and asking what the stakeholders’ requirements were, what they really needed, and what they expected by the end of the project. Simply put, we won’t learn enough to create an end product that they’ll be happy with.[i]

What makes the elicitation process so hard? Here are several pitfalls.


Common Pitfalls

#1 – Missed expectations

Expectations are requirements, but they’ve never been stated. Therefore, we cannot get expectations simply by asking about them. Our stakeholders don’t think to mention them, and we don’t think to ask about them. I didn’t know about hidden requirements early in my career when I asked questions like those noted above. Another problem: my focus was specifically on the future-state solution. I asked for the features and functions, documented them, and got stakeholder approval. Then the development team built the final product according to the specifications, with the inevitable result—a lot of stakeholder complaints.

#2 – People fear the future state.

This major pitfall is hard to overcome for many reasons. Some stakeholders are comfortable with their current state and don’t want to learn or train on the new processes and automation. Others are concerned for their jobs. Still others have a stake in the existing ways – perhaps they were part of its development or a known expert on its use. Whatever the reasons, the fear of the future state can make elicitation difficult.

#3 – The time trap

Many of us are often under so much pressure that we don’t have time to dig deep. We gather some high-level requirements, but we don’t have time to uncover the expectations. And even if we have time, which is rare, many of our stakeholders don’t. Many are available for an initial set of sessions, but interest wanes as the difficult detailed meetings drag on.

So, what can we do? Here are 3 tips for successful elicitation.

Tips for Successful Elicitation

Tip #1 – Use a variety of elicitation techniques

The first tip for uncovering expectations is to use a variety of elicitation techniques. That’s because each technique that we use uncovers a different aspect of the requirements. Here are some examples.

  • Process modeling. This has always been one of my favorite techniques. It documents how people get their jobs done. But as with all elicitation, it’s not easy. For example, one of the most difficult aspects of process requirements is that stakeholders argue over where to begin, where to end, and how the processes fit together. Using different process models helps avoid this contention. SIPOCs (suppliers, inputs, process, outputs, customers) help narrow the scope of each model, and swim-lane diagrams help visualize how the processes fit together.
  • Data modeling. Process modeling is great, but people need information to get their work done. Data modeling helps us figure out what information supports each process step. It also provides business rules and is invaluable on our AI initiatives.
  • Use cases. These models help us understand how our stakeholders want to use the final product. They provide not only the scope, but all the functionality of the solution. And use cases, if completed thoroughly, turn into test cases.
  • Prototypes show what the final solution will look like.
  • Brainstorming yields the power of the group, while one-on-ones often reveal what stakeholders really think.

Tip #2 – Ask context questions

A context question is one that surrounds the solution that we’re building. While we do need to ask questions about the solution’s features and functions, such questions do not provide the complete picture.

I like to group context questions into four categories of questions:

  1. External questions. These relate to what’s happening outside the organization and cover areas like demographics, language, weather, technology, and compliance/regulatory requirements. These may or may not apply to the project. If they do, we need to understand their effect on our work.
  2. Organizational readiness questions. These pertain to how ready the organization is to accept the final product. The bigger the change, the more issues there usually are. We need to know, for example, which stakeholders will be on board, which will resist the change, and what needs to be done to prepare the organization for the change.
  3. Alignment questions. We need to ensure that the business problem we’re solving and the proposed solution align with the organization’s goals and objectives.
  4. Current-state questions. These context questions are usually those about how the work gets done today.

Tip #3 – Know when to use open-ended, closed-ended, and leading questions

Open-ended questions allow the respondents to expand their thoughts. We ask open-ended questions any time we want to learn more—for example, when we’re just beginning an effort, during brainstorming, and when we need to get all the issues out on the table.

Closed-ended questions are forced-choice questions. They have the answers embedded in the question itself, sometimes explicitly as in a survey question, or implicitly. I like to ask closed-ended questions when stakeholders are all over the board and we need them to focus. For example: “Given all these issues we’ve identified, if you had to choose ten, which would they be?”

Leading questions are not questions at all. They sound like questions, but they’re really our opinions stated in the form of a question. “This is a pretty cool feature, isn’t it?” My least favorite leading question is one we often hear: “Have you ever thought about…solution.” Again, it’s not a question. It’s us presenting our opinion rather than asking what our stakeholders think. What’s wrong with that? Remember we’re in the middle of elicitation, which is about learning. Presenting our solutions during elicitation cuts off exploration because we’re telling rather than learning. Later, after we’ve completed elicitation and analysis, whether it’s for the whole project or a smaller part, we can make a thoughtful recommendation.

To summarize, effective elicitation is critical to the development of a final product that our stakeholders are happy with. Elicitation is not easy. There are several pitfalls which are difficult to overcome. But if we follow the tips provided in this article, we will deliver a product that our stakeholders actually like and want to use.

[i] I use the terms solution, final product, and end product synonymously. It’s the solution to the business problem we’re solving. It’s also the product or product increment being produced at the end of the project, project phase, or iteration.

We all Communicate, So What Makes It So Difficult?

Communicating is something we do throughout our lives. Much communication is verbal, some is not.

We use different languages to communicate different needs. Babies have their language, teenagers theirs. We talk both formally and with slang, sometimes using proper grammar, sometimes shortcuts and acronyms. Sometimes we talk without communicating and sometimes we communicate without talking. Given its pervasiveness, it seems that by now we all would have learned how to do it effectively. But as we are all aware, there is an abundance of miscommunication everywhere we look.

Still, communication is a key skill for all business analysts (BAs) and project managers (PMs). It’s not possible for us to be successful without communicating effectively. Here are three common communication pitfalls and how to avoid them.

Pitfall #1 – Same words, different meaning

As BAs and PMs we often encounter what is known as having different mental models. This happens when a stakeholder uses a term or phrase and we interpret it differently, or vice versa. We use the same words, but they mean different things to each of us. One important reason is context. Although we are each using the same term, our context is different.


Recently my husband and I went through a home renovation project with an outside remodeling company. We did this all virtually. We looked at selections on Zoom and had Zoom meetings as needed to resolve issues. At one point we got a text from the PM stating that they had encountered an issue relating to a post in the center of the master bath. This issue had been uncovered during the “demo.” I wrote back to ask when the demo took place and why we, as the sponsors, were not at this demo. A series of texts and emails got us nowhere, so we set up a Zoom meeting. We soon realized that to him the demo meant demolition. I told him that to me a demo was a demonstration. Thus, the confusion. That cleared up, we proceeded to discuss the problem. My context was a Scrum demo, a review of the product with the product owner and other business stakeholders. His context was in the building industry, where demolition commonly precedes construction. The same word had entirely different meanings.

Pitfall #2 – Too much emotion or not enough emotion

Another common pitfall is to put either too much or not enough emotion into our communications. We all communicate our emotions to a greater or lesser degree. We do this either verbally or non-verbally. Non-verbal communication accounts for most of the communication taking place. So even if we never say a word, we usually communicate how we’re feeling. And there’s nothing wrong with that. But when our anger or frustration or other negative feelings are triggered and we react too quickly, we risk throwing up communication barriers that will be hard to break down once we calm down. That’s why we know that we should wait a while before sending an angry email or text or making that phone call to blow off steam.

On the other hand, when the situation calls for empathy and kindness and we show none, we also risk putting up communication barriers. When we come across as Sherlock Holmes, Conan Doyle’s famous analytical detective who was often perceived as cold and dispassionate, or Mr. Spock, Star Trek’s half-human, half-alien epitome of cool detachment, we also throw up barriers. Like everything relating to effective communications, it’s best when we temper our emotional reactions to the situation.

Pitfall #3 Asking the right questions the wrong way (or asking the wrong questions)

“We thought we had the answers—it was the questions we had wrong.” —U2, “Eleven O’Clock Tick Tock”

Speaking of Sherlock Holmes, I see many similarities between effective PMs/BAs and detectives. Both use logic and intuition to synthesize disparate pieces of information and connect the dots. This ability is important—in the case of the detective, to catch the bad guy; in the case of the BA, to understand and solve business problems. In addition, both are curious. They ask pertinent questions, listen to the responses, and keep digging until satisfied. Sometimes their questioning takes unusual and unexpected turns. This is because neither accepts the answers given them as the final answer. They probe. Sometimes they go down rabbit holes. But the good ones know when to pursue a line of questioning and when to let it go, when to ask follow-up questions and when to think further about what’s been said.

Asking pertinent questions is one of the most useful skills project professionals have. Good questions not only uncover needs and requirements but also open communications. Likewise, poorly worded questions can end conversations quickly. For example, “What do you like best and least about this solution?” can open communications. “Isn’t this the best option?” can shut it down.

Perhaps even more important is the way we ask questions. “Why…” is a great question. It uncovers almost every aspect of our work, including the current- and future-state processes, the business need for any given initiative, and questions relating to stakeholder commitment, to name just a few. However, we do not want to sound like cranky toddlers, asking “why, why, why?”

Our tone is also important and can put people at ease or on the defensive. We don’t want to sound like prosecuting attorneys, which can easily shut down communications. We are not, however, always aware of how we come across. Our intention might very well be to put people at ease, but our effect might be very different. And sometimes when communicating across cultures, tone, facial expressions, and other non-verbals can be misinterpreted.

Finally, many BAs and PMs ask the wrong questions, often in the form of leading questions. Leading questions sound like questions, but they’re really solutions. Questions like “have you ever thought about…” or “Isn’t this solution the best choice …” sound like we’re engaging our stakeholders, but in reality, we’ve just cut off communications. We’ve presented what we think rather than asking what our stakeholders think. After we’ve asked all our questions, we do want to present our recommendations. But not until we’ve asked our questions and done our analysis.

These three pitfalls represent just a few of the many that get in the way of effective communications. However, understanding the context, displaying the right emotions for the situation, and asking the right questions is a great start.

AI and the Digital BA—What’s It All About? Part 3

This is the last of a three-part article answering some of the most frequently asked questions I get about artificial intelligence (AI).

In Part 1, I addressed some common terms and issues related to AI as it is used in a business context. In Part 2, I focused on the various roles that BAs play on AI efforts. In this article I will discuss subjects like the need for AI translators, the importance of AI governance, and the digital PM. As with Parts 1 and 2, I will use a Q&A format.

Why is the role of AI translator so important?

Recently there have been numerous articles in journals like Forbes and Harvard Business Review (HBR) about the need for an AI translator role, someone who acts as a liaison between the organization’s data scientists and strategic decision-makers. These articles don’t mention the BA specifically, but they consistently describe a role that BAs have routinely played: ensuring that business stakeholders and technical staff understand each other. I think the AI translator is a perfect role for any experienced BA. Data scientists need to understand the strategic direction of the organization, the business need for the initiative, and the related business rules that will be required in many AI systems. Business stakeholders need to understand the impacts of their decisions.

In the early days of AI, it was not uncommon for data scientists to guess at the business rules and make AI-related decisions themselves. This did not go well, as documented in Computer World.[i] The next phase was to have data scientists get input directly from the business. This, too, did not go well. So some organizations have introduced an intermediary role—the AI translator. They understand that they need to have someone who understands the importance of business input and who can also speak comfortably with the data scientists—a translator role. That’s where the BA comes in. We’ve always been translators. Translating the requirements into designs and back to ensure stakeholders get the functionality they ask for and really need. Yes, this is a perfect role for the BA and one that can greatly contribute to successful AI projects.

How much governance is needed on AI initiatives?

Many of the challenges on AI initiatives are no different from those on other projects. In a survey published in Information Magazine in July 2019, respondents included these factors as the major challenges:[ii]

  • 50% – Lack of leadership buy-in
  • 49% – Lack of metrics, especially surrounding data (bad data, ownership, etc.)
  • 37% – Internal conflict
  • 31% – Time required to implement (takes longer than expected)
  • 29% – Unexpected costs

What do these factors have to do with governance? Each one directly relates.

  • Executive buy-in. Among other things, no executive buy-in makes it almost impossible to reach consensus on the need for and nature of governance itself.
  • Data metrics. Governance guides such metrics as how accurate historical data needs to be.
  • Internal conflict. Governance establishes guiding principles around conflict, how it will be resolved, and by whom.
  • Time and cost overruns. Project governance helps with such things as keeping projects on track, how and when to communicate when they’re not, and even what “longer than expected” means in the first place.


The article goes on to suggest that in order to have successful AI initiatives, organizations need to hire data stewards to manage and coordinate the organization’s data. The data steward would be a steward in the real sense of that word: someone to manage, administer, and generally take care of the data. In order to manage and administer, this role needs to help the organization determine how that governance will work and then be responsible for it. Sounds like a BA!

In a podcast cited in Harvard Business Review (HBR) in August 2019, De Kai and Joanna Bryson join Azeem Azhar to discuss the importance of governance on AI initiatives.[iii] They define governance as coordinating resources involving both internal AI modules and humans. They suggest that there needs to be an independent oversight group with the authority to apply agreed-upon governance, and I think the seasoned BA is in a perfect position to facilitate this group.

Is there such a thing as a digital PM and if so, how does that role differ from a digital BA?

Digital BAs are similar to all BAs in that they do BA tasks, use BA techniques, and need the same BA competencies (see Part 1). Likewise, digital PMs do PM tasks, use PM techniques, and need PM competencies. They work with the sponsor to charter AI projects and help organizations implement them. Although not yet a common role or title, having someone with experience managing AI projects can be valuable to organizations. Again, they’ll still do their tasks and use their techniques appropriate to PM work, but being a PM on an AI project and coordinating all the resources entailed on such an initiative will most certainly require a healthy working knowledge of AI.

Another way to look at digital PMs is that they use AI systems and tools to manage AI projects. In an article in Forbes in July 2019, the author focuses on the use of automated AI systems and tools to help digital PMs manage their projects.[iv] He says, “AI, with its unique ability to monitor patterns, is a capable assistant to PMs.” In addition to helping with routine admin tasks, AI can provide all kinds of predictive analytics. AI tools can look at hidden complexities and all the moving parts inherent in a complex project or program and predict areas of concern, from project slippage to team members’ behavior and more.

The digital PM, then, is one who not only takes advantage of AI tools to do a better job of managing projects, but also has enough AI expertise to manage complex AI projects.

Does “digital” have to be related to “AI?”

In the past, the term “digital” was used broadly. It referred to any digital project, like development of a website, digital marketing, or developing the organization’s presence on social media. Nowadays the term is generally used to refer to “AI,” which encompasses all things related to machine learning, predictive analytics, and data mining. More recently the terms “AIs” and “AI systems” are also commonly used.

I hope you have enjoyed this three-part series. Look for more AI-related content in the future.


[i] Robert Mitchell, Computer World, July 2013.

[ii] Gienna Shaw, “Data Governance in the Age of AI,” Information Magazine, July 19, 2019.

[iii] Podcast with De Kai, Joanna Bryson, and Azeem Azhar, cited in Harvard Business Review, August 2019.

[iv] Tom Schmelzer, Forbes, July 30, 2019.

AI and the Digital BA—What’s It All About? Part 2

This is the second of a three-part article written to answer some of the most frequently asked questions I get about artificial intelligence (AI).

In Part 1, I addressed some common terms and issues relating to AI as it is used in a business rather than a technical context. In this article I will focus on the various roles the BA plays to help organizations with their AI initiatives. As with the last article, I will use a question-and-answer format.

Quick Review of Part 1

What is AI?

AI is an umbrella term that encompasses all digital technologies, like machine learning and predictive analytics, which are used to make predictions and recommendations using massive amounts of data. In short, it’s machines doing human tasks that range from simple to complex.

What is a digital business analyst (BA)?

A digital BA is a trusted advisor who helps organizations with their AI strategies. Rather than developing the strategies, they provide their advice about impacts to and value of AI initiatives.

What skills does a digital BA need?

The skills don’t change, but the subject matter is incredibly complex.

How successful are most companies with their AI efforts?

Not very. Most AI initiatives totally miss the mark and result in all kinds of issues, not the least of which is financial. A recent Forbes article details some of the resulting issues.[i]

What is digital fluency?

Digital fluency is defined as “The ability to interpret information, discover meaning, design content, construct knowledge, and communicate ideas in a digitally connected world.” [ii]

Part 2

What is the role of the BA on digital projects?

A digital BA can be involved in many aspects of an AI initiative. Some of the roles that a BA may play include one, several, or all of these:

  • Strategic BA. In this role BAs help organizations determine the value and direction of the AI effort. Some of the specific outputs can include:
    • Business case on the value of the AI initiative
    • Recommendation(s) on the best strategic approach to the AI initiative
    • High-level implementation plan
    • Pitfalls to avoid
    • First look at the state of the data to be used
    • High-level governance plan
  • AI coordinator who implements the AI strategies. In this role the BA coordinates AI initiatives across projects and portfolios.
  • BA on a project that is part of the AI initiative. Although this role is similar to any BA role, there are some differences. The BA will need at least a working knowledge of, if not expertise in, AI.
  • Business data analyst. In this capacity the BA may:
    • Analyze the current data to determine how much is usable, how much needs to be cleansed, and how much needs to be collected
    • Recommend an approach to cleansing the dirty data
    • Help determine the data needed for predictive analysis and other AI functions
    • Interpret statistical analyses resulting from AI functions
    • Be an AI translator to facilitate communications between the data scientist and the business stakeholders
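As an illustration of the first of those data tasks, here is a minimal sketch of how a business data analyst might profile a data set to estimate how much is usable versus in need of cleansing. The record fields and validity rules are hypothetical, not from this article; real profiling would use dedicated tooling:

```python
from collections import Counter

def profile_records(records, required_fields):
    """Classify records as usable, needing cleansing, or duplicate.

    A record needs cleansing if any required field is missing or blank;
    a record is a duplicate if an identical record was already seen.
    """
    seen = set()
    counts = Counter()
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            counts["duplicate"] += 1
            continue
        seen.add(key)
        if any(not str(rec.get(f, "")).strip() for f in required_fields):
            counts["needs_cleansing"] += 1
        else:
            counts["usable"] += 1
    return dict(counts)

# Hypothetical customer records: one blank name, one exact duplicate.
customers = [
    {"id": "1", "name": "Acme", "region": "NE"},
    {"id": "2", "name": "  ", "region": "NW"},
    {"id": "1", "name": "Acme", "region": "NE"},
]
print(profile_records(customers, ["id", "name", "region"]))
# → {'usable': 1, 'needs_cleansing': 1, 'duplicate': 1}
```

Even a rough count like this gives the BA something concrete to bring to the cleansing-approach recommendation.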

What’s the difference between a data scientist, data analyst, and BA who works a lot with data?

These three roles can be confusing. At first glance we might not recognize the differences or understand why the distinctions are important, but they are. I discussed the possible roles of the BA above, so here is a brief description of the other two.

Let’s take the easy one first—the data scientist. Not that the role is easy; it’s just easier to explain why this one is different from the other two. The data scientist is the most technical and needs the most expertise. About three-fourths have master’s degrees in mathematics and statistical analysis, and over half have Ph.D.s.

Data scientists create the predictive models. They determine what the machines need to do in order to meet the business objectives. They decide which algorithms are best given the objective of the AI initiative so that the machines can be trained to learn. Having said that, unless there is good governance and substantial input from business stakeholders and decision-makers, those algorithms have the potential to be created with built-in biases. Likewise, they may not be the best ones to solve the business problem.
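To make the idea of machines learning from historical data concrete, here is a deliberately tiny, hypothetical sketch (nothing a data scientist would actually ship): a one-nearest-neighbor rule that predicts an outcome by copying the closest historical example.

```python
import math

# Hypothetical historical data: (monthly_orders, support_tickets) -> outcome.
history = [
    ((12, 1), "stayed"),
    ((2, 9), "churned"),
    ((10, 2), "stayed"),
]

def predict(history, features):
    """Predict by copying the label of the closest historical example."""
    _, label = min(history, key=lambda pair: math.dist(pair[0], features))
    return label

print(predict(history, (3, 8)))   # closest to (2, 9), so "churned"
```

Real predictive models are far more sophisticated, but the principle is the same: the quality and representativeness of the historical data bound the quality of the predictions, which is exactly why governance and business input matter.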

The data analyst. This is really a subset of the BA role. I described some of the high-level functions above. On AI projects it’s necessary to focus on the data because it’s so integral to the success of the effort. Machines learn based on historical data. Issues like dirty and redundant data, as well as ownership of the data, aren’t easy to resolve and require a strong facilitator and influencer. This data analyst role is so important that IIBA has created a new certification—the Certification in Business Data Analysis (CBDA).

What are some of the business and technical pitfalls that the digital BA should be aware of?

Here are some of the big ones:

Strategy

  • Beginning with AI as a solution without a defined problem
  • No real AI strategy
  • Unrealistic expectations of what AI can do for the organization

Data and technology

  • Dirty data
  • Business processes don’t support the technology
  • Weak security

Organizational and communications pitfalls

  • Siloed and cumbersome business architectures
  • Inflexible organizational structures
  • The data scientists create the business rules
  • The data scientists talk directly to the business and the business does not understand
  • Confusing roles on AI projects
  • Built-in biases in the algorithms

In Part 3 of this article, we will explore other aspects of how BAs can help organizations get the most value from their AI initiatives. Some of the topics we will cover include the need for governance on AI efforts, the recognition of the importance of the AI translator role, the digital PM, and more.