
Author: Yvonne Harrison

Requirements are a Contract

Ensuring that project requirements have been understood and agreed to by all stakeholders is one of the foundations of a Business Analyst’s work.

However, that understanding and agreement from the stakeholders doesn’t always translate to the successful delivery of the project. Even though everyone on the business side has confirmed that the BA understands their wants and needs, those requirements can wind up being misinterpreted (or outright ignored) once they reach the technical team.

No matter how careful the BA has been, things seem to slowly but surely fall apart. It seems to make no difference whether the project uses agile or waterfall, or whether the requirements are documented in Word, in Excel, in requirements management software such as IBM DOORS, SPARX Enterprise Architect, or JIRA, or written on a sticky note. The results are depressingly similar from project to project.

 

Whatever is presented in the demonstration never seems to be what anyone in the business asked for, which usually becomes the Business Analyst's problem: somehow the BA didn't do enough documentation or missed something.

Over the years, I have realized that no amount of documentation, meetings, or stand-ups will save the BA. That's because the problem is not with the requirements or the Business Analyst. The problem is with the quality of the technical team.

Before anyone gets upset, no, I am not insulting the technical team. What I'm suggesting is that most projects are a mixed bag of personalities and experience. There might be new junior developers, overworked senior developers juggling multiple projects, people who don't want to read the documentation, and sometimes, people who outright ignore the requirements because they have determined, "that's not how the business works." The project winds up behind schedule or over budget (or both), yet even then it never seems to finish.

No one seems to discuss the impact the technical team has on the requirements themselves, or the amount of stress this places on the Business Analyst.

 

Think like a lawyer

What can a BA do to reduce the pressure they’re feeling?

In my opinion, a BA may find it useful to think like a lawyer.

A lawyer researches the parties involved in a contract. What are these parties like? Are they prepared to negotiate, or do they dispute everything? As a BA, you have one advantage: communication skills and the ability to deal with different people and personalities. This allows you to figure out who you may be dealing with. What are they like? How experienced are they? Are there issues within the team? Although you could argue that this is just a RACI matrix, it goes one step further. You don't care whether they're responsible, accountable, consulted, or informed. You want to know how likely it is that the project will wind up in a mess. That is something the Project Manager is unlikely to acknowledge until it's too late (depending on the PM's experience).

 

This changes the focus: not only are you eliciting requirements from your stakeholders, but in the background you're also trying to determine what the technical team is like. The makeup of the technical team is going to help you determine your deliverables.

Alistair Cockburn states in the introduction to Writing Effective Use Cases, "A use case captures a contract between the stakeholders of a system about its behaviour."

 


 

Decide on the type of contract

Keeping with this more legalistic view of requirements, the Business Analyst can then decide whether the contract should involve a high degree of ceremony, in which requirements are meticulously explained and documented, formally signed off, and discussed in scheduled meetings, or whether a highly collaborative approach will do, relying on face-to-face chats between the team and the BA, with light-touch documentation (some user stories and a couple of whiteboard sessions).

In other words, how you construct your contract will depend on how tightly you need to bind the technical team to that contract. And the type of contract that binds the two parties together will depend on how much you trust that other party. If you trust the other party, and you've known them for a while, then a handshake agreement may be all that is needed. A handshake agreement tends more towards an agile approach, with some user stories and daily discussions over and above a standup.

If you have less trust, or the project has a lot riding on it in terms of its budget or the features being delivered, you may decide on the equivalent of a legally binding contract. It may consist of several documents, all formally signed off. Like all weighty contracts, it needs to be written in a manner that removes all ambiguity.

 

Much like a lawyer, your job is to ensure your wording is not open to misinterpretation and does not provide a way for the technical team to deliver something else entirely.

And if they do, you have your signed-off documentation to point to, and you can politely ask why the clause was ignored – and how they're going to remedy the problem. Because the requirements are a contract. One that all parties need to adhere to.

Cognitive Biases Versus Exception Flows

Despite the various methodologies and business analysis techniques that have surfaced over the years, organizations continue to either deliberately or accidentally ignore the possibility of errors or issues occurring in their software.

For decades, the IT industry has continually delivered, or canceled, projects that are inherently flawed. The failure rate has improved slightly over time, but project teams continue to be surprised by problems in their software, even though the same mistakes have been made by teams in almost every country on the planet for years.

Much of that failure, rather than being caused by a specific issue such as ‘poor requirements’, is more likely to be initiated by one or more cognitive biases at play in a project team and with stakeholders.

There are few organizational cultures aware enough to acknowledge that cognitive biases play a significant part in everyday decision-making. A cognitive bias is defined as, "(…) a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality." (Wikipedia.)

Cognitive biases aren’t deliberate or born from a place of malice, but they’re very real, and can derail any project at any point in time. Cognitive biases are inbuilt into human behavior, and for the most part appear to have evolved as an adaptation during human evolution. In general, a set of heuristics that could quickly deliver an answer served most humans in the past far better than taking too long to detect danger and wind up being injured or killed. “Within this framework, many ostensible faults in human judgment and evaluation may reflect the operation of mechanisms designed to make inexpensive, frequent errors rather than occasional disastrous ones (Haselton & Nettle, 2006; Johnson et al., 2013).”

Part of the issue is that there are so many cognitive biases in existence that the business analyst is going to end up fighting one or more while trying to do their job. For example, under Parkinson's Law of Triviality, people will spend more time discussing minor issues because they are easier to understand, while complex issues are ignored. This leads to everyone becoming involved in resolving the minor issues, because they can show their contribution to the project, while the complex issues are parked or have only one or two people trying to solve them.

In other words, a business analyst, by trying to discover possible exception flows, may also need to discuss an issue of higher-than-normal complexity. As the complexity is difficult to resolve, the exception flow as a topic morphs into the more trivial issue of personality and team dynamics. The questions around an exception flow are interpreted as the business analyst asking what is wrong with the software, and therefore the team. The developers are insulted because the questions seem to imply their coding is substandard. The project manager is unhappy because they don't like negative people on the team, and trying to think of exceptions is going to add to the timeline for delivery. For the agile team relying on a product owner, the product owner needs to be particularly adept at critiquing their own requests and ideas and accepting input from others, which, let's face it, doesn't always happen.

As Carol Tavris and Elliot Aronson point out in their book Mistakes Were Made (But Not by Me), "Because most people have a reasonably positive self-concept, believing themselves to be competent, moral and smart, their efforts at reducing dissonance will be designed to preserve their positive self-images."

The business analyst's quest to find exception flows, in an effort to protect the organization should failure occur, could be doomed because the implications of an exception occurring interfere with the project team's self-concept of being competent, moral, and smart.



Now, I assume that some people reading this article are muttering to themselves, "This is all about communication skills. I don't have any problems at all because people listen to me."

This is where another cognitive bias comes into play. If you're a beneficiary of the Halo Effect, then people may be listening to you in spite of your communication skills. The Halo Effect is a cognitive bias in which a positive impression of a person influences another person's opinion of them. In other words, if you're a well-dressed, good-looking person, you're probably going to have an easier time convincing someone that there might be a problem with their project, because your appearance is subconsciously associated with other qualities, such as being a good and/or smart person. This is in direct opposition to the Horn Effect, in which an unattractive person is subconsciously assumed to be morally inferior and not as smart as an attractive person.

Worse, cognitive biases can interact to turn seemingly trivial design decisions into software that causes a significant impact once implemented in production. For example, a business unit designs a long form with many text areas because they want a lot of data from their users. The business analyst may advise them that this may not be a good idea if their users type slowly, or if data collected via free-text entry can't be used effectively for analysis, but the business can't be convinced otherwise. The business unit's conviction that the form is okay could be a result of the Law of the Instrument (we have always used forms), Optimism Bias (there is no reason why this wouldn't work), the Planning Fallacy (a user won't spend much time on the task), the IKEA Effect (we designed this form, we're proud of this form, therefore it is a good form), and perhaps a touch of Illusory Superiority (our business unit is the best because we know our users and the business analyst does not). From there it goes to the tester, who happens to be a touch typist and can quickly enter data into the form. The form passes testing. The business unit takes an "I told you so" approach that the business analyst can't defend. No one is aware that the developers have left the timeout at a default of twenty minutes, because no one told them otherwise.

When the form goes live, it creates havoc. Many users, as predicted, are slow typists, and it takes them longer than twenty minutes to fill out the form. The users encounter timeouts. The data in the form hasn't been saved. The users are not pleased. They contact the Call Center. The Call Center scrambles for a solution that can tide the users over until the developers can make the change and/or the form is redesigned. The users are left with the impression that the organization is incompetent and simply doesn't care about them at all.
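
To make the timeout problem concrete, here is a minimal sketch in Python of how a fixed idle timeout silently discards a slow user's work, and how a periodic autosave could mitigate it. Everything in it is hypothetical and invented for illustration; the FormSession class and the twenty-minute figure mirror the scenario above rather than any real framework.

    from datetime import datetime, timedelta

    # Hypothetical illustration only: a server-side session that expires
    # after a fixed idle period. The twenty-minute figure mirrors the
    # default described above.
    SESSION_TIMEOUT = timedelta(minutes=20)

    class FormSession:
        def __init__(self, started: datetime):
            self.started = started
            self.saved_draft = None  # populated only if autosave is wired up

        def submit(self, form_data: dict, now: datetime) -> str:
            if now - self.started > SESSION_TIMEOUT:
                # Without autosave, everything the user typed is lost here.
                return "timeout: data discarded"
            return "accepted"

        def autosave(self, form_data: dict) -> None:
            # One possible mitigation: persist a draft periodically so a
            # timeout no longer destroys the user's work.
            self.saved_draft = dict(form_data)

    start = datetime(2024, 1, 1, 9, 0)
    session = FormSession(started=start)
    session.autosave({"comments": "half-finished answer"})

    # A slow typist submits twenty-five minutes after starting the form.
    late = start + timedelta(minutes=25)
    print(session.submit({"comments": "finished answer"}, now=late))  # timeout
    print(session.saved_draft)  # the draft survives the timeout

Either raising the timeout for long forms or autosaving drafts would have averted the Call Center scramble; the point is that the default persists only because no one surfaced it as a requirement.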

So, how can a business analyst work on exception flows when the stakeholders and project team aren't interested, won't acknowledge the need for them, or seem to deal only with easily understood issues? What can a business analyst do if they're working in an organization that is unaware of the influence of cognitive biases?

  • The exception flow will occur whether the organization wants to acknowledge it or not. Even if you can't get any 'buy-in' from the team or the stakeholders, make a list of possible exceptions/issues anyway. Look for solutions and document a resolution via the exception flow as best you can; a brief example follows this list. (Note that in this article an exception flow is considered to be separate from a risk register item.)
  • Watch out for cognitive biases in yourself, which is easier said than done.
  • Realize that unless you’re working in a team that acknowledges the realities of cognitive biases then those biases will not only be on full display, they will also be in full effect.
  • You can do nothing to change anyone's mind (unless you benefit from the Halo Effect). Research into cognitive biases says that it's almost impossible to change a person's mind. Yes, some people manage it in some organizations, but it's delusional to assume you can do it in all of them.
  • As above, think up possible exception flows anyway and keep them in your back pocket so you can deal with them should they occur. If they do occur, you can surface them, and highlight how the issue could be solved. At this stage you will have a higher chance of being heard, as people will be looking to keep their self-image intact. It’s a good news story for a project manager if they can go to a higher level manager and already have a solution.
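
As promised above, here is a hypothetical example of the kind of lightweight, back-pocket exception flow these points describe, written in a conventional use-case style. The scenario, trigger, and steps are all invented for illustration:

Exception flow E1: Payment service unavailable.
Trigger: at step 4 of the main flow, the payment service does not respond.
1. The system saves the order with a status of 'payment pending'.
2. The system informs the user that payment will be retried automatically.
3. The system retries the payment at a set interval; once it succeeds, the main flow resumes at step 5.

Even an unacknowledged flow like this gives you a ready-made answer when the exception eventually occurs.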

Finally, be kind to yourself. If you’ve tried your best to identify issues that might occur, and you’ve got several exception flows, and you’ve been ignored and your meetings hijacked, then it’s not because you’re the world’s worst business analyst. It is more likely that you’re battling cognitive biases, and cognitive biases don’t respond well to logic or reason.

“The secret: One day builds always take longer than a day.”

After reading Every Tool's a Hammer by Adam Savage, I was struck by how many lessons he'd learned as a maker that were applicable to the act of creating software.

It also made me reconsider the title of Business Analyst. As explained in the BABOK, it can be applied to "any person who performs business analysis tasks". Business analysis is the practice of "enabling change in an enterprise by defining needs and recommending solutions that deliver value to stakeholders. Business analysis enables an enterprise to articulate needs and the rationale for change, and to design and describe solutions that can deliver value."

The general definition of analysis is "a detailed examination of anything complex in order to understand its nature or to determine its essential features: a thorough study" (Merriam-Webster), or, as Google succinctly puts it, "detailed examination of the elements or structure of something."

These definitions seem narrow. A Business Analyst doesn't only examine something to understand it. Enabling change can be an arduous process, depending on the organization and the people involved. The Business Analyst tries to turn a customer's vision into a concrete and understandable grouping of requirements/user stories/use cases, along with models/presentations/diagrams. A BA needs to negotiate with all stakeholders and help everyone to agree and collaborate.

It’s a multi-faceted job and goes well beyond the boundary of ‘analysis’.

Perhaps a different way to view ourselves isn't as Business Analysts, but as 'makers', a term used by Adam Savage and others to describe a person who creates, makes, and produces something. That something can be anything: a wooden birdhouse, a suit of armor, a costume for a convention, a model for a film or TV series, a painting, a novel, or a new app (to name a few). Adam Savage doesn't discriminate in terms of what is being made—just that something is created out of nothing from a desire to make an idea exist in the world.

The something can be created by one person, or multiple people trying to deliver a common vision. For example, a film crew working together to deliver the vision of the director.

Switching from the perspective of Business Analyst to that of a maker also transforms the typical frustrations of BA life into common issues that all makers contend with. The trials and tribulations the project is going through are, in all likelihood, pretty much standard.



For a start, every maker acknowledges that mistakes will be made. The first version will, most probably, not be the final version. In keeping with this premise, demanding perfection early in the process means that your end product will probably have issues. As a BA / maker, it's a more relaxing process when you realize that your first draft of your use cases/diagrams/user stories will always have issues, if not be outright awful. You won't have all the information, and you won't have a clear view of the vision unless you've managed to get everyone to articulate it and agree to it in a comprehensible manner. You're going to need several attempts at this, and each piece of feedback helps you figure out a more exacting version of the overall idea and solidify how it might be delivered. Like any good maker, you realize it's an advantage to fail at the beginning, because it gives you a chance to recover and iterate. That's how making/creating works in real life, not because Gartner published an article about failing fast.

On top of that, maybe that estimate you gave about how long it takes to produce workable requirements isn't all that precise. As Adam Savage says in his book, "One day builds always take longer than a day." Makers know that the only way an estimate gets close to being right is by having built a similar thing many times over. As with making, the materials you use to create something are usually standard—but the thing you're building is not. Just because your friends built their cosplay outfits out of EVA foam and a hot glue gun does not mean that your envisioned costume is going to work because you're using the same materials. Likewise, having gathered a set of requirements, with the developers all using C++ and the system running on an Apache server, does not mean your software build is standard. Yes, everyone on the project, especially the BA, will be under pressure to deliver by a date (whether using waterfall or scrum). The trick is to acknowledge the date and work towards it, knowing full well that estimates are not absolute, due to the challenge of what you're trying to achieve. As Adam Savage writes in his book, "As a maker of any kind, with any project, you will never really know what your destination is. You know your starting point, you know roughly what your "problem to solve" is, and you can try having a whiteboard session about final goals to help figure out what you'd like your destination to be, or at least what the rough outline of it should be. (…) But it won't change the fact that nothing can quite prepare you for what it's like to set out along the path of creation only to realize you are not going to end up where you planned. (…) Put another way: How many of your projects turned out EXACTLY like you intended? How many went as smoothly as you expected? Mistake free. Distraction free. In my experience, the answer is pretty close to none."

After reading Adam Savage's book, it struck me that a BA, and in fact the entire project team, would be far less stressed moving to the mindset of a maker. Methodologies and frameworks and manifestos are not needed to acknowledge a basic fact of making: creating something from nothing is difficult work.

I’ve only briefly touched upon the premise of the book, but I would encourage you to read it and gain a different perspective on the art of business analysis—or more to the point—the art of making.

Can the Business Analyst Survive the Future?

In the early 1800s, lacemaking was a necessary source of income for women and families, with the lace fetching a reasonable price when sold.

By 1843, with the advent of lacemaking machines, prices collapsed, and factories employed women and children for a pittance. Most wages were barely enough to live on, and many families were thrown into subsistence living. (Ivy Pinchbeck, 1977.)

The Eastman Kodak Company dominated the sales of camera film, employing thousands of people through much of the twentieth century—until digital technology arrived in the 1990s. Unable to adapt fast enough or target the right products, Kodak filed for Chapter 11 in 2012.

Automotive parts manufacturing workers are now in the same situation. What was once a job that allowed men and women to support their families barely pays a living wage and has a high degree of injury associated with it. (Waldman, 2017.)

And on a personal note, my great-great-grandfather earned his living as a wheelwright: a person who builds and repairs wooden wheels, a profession that had existed for centuries until the arrival of the modern automobile. In a few short decades, the craft of the wheelwright had nearly disappeared.

It seems that about forty years (or less) is all it takes to destroy a formerly secure job. The question is, could it happen to the role of the business analyst, and when?

This isn’t a strange question when even coders are experiencing a shift in how their role is perceived.

In February of 2017, WIRED magazine published an article asking if coding was the next blue-collar job in America. The article pointed out that coding had become a two-tiered career choice: Silicon Valley employs only eight percent of the USA's coders and programmers; the rest are in other business sectors. "These sorts of coders won't have the deep knowledge to craft wild new algorithms for flash trading or neural networks. Why would they need to? That level of expertise is rarely necessary at a job. But any blue-collar coder will be plenty qualified to sling JavaScript for their local bank." (Thompson, 2017.)

In an interview with NPR, Clive Thompson, the article's author, noted that a huge number of available coding jobs don't require the high-level skills that would get a person a gig at Google. "But the truth is, you know, an awful lot of programming doesn't require or need that type of, you know, crazy pouring out of creativity. I guess it is more like maintenance or the slow stable making sure that a company is sort of moving along, that its software is working." More importantly, this sort of coding doesn't always require a college degree. There are plenty of self-taught coders who are gainfully employed, and there are more and more courses available where a person can get a junior-level position after completing the syllabus.

If you look up the definition of blue-collar on Wikipedia, the page says that a blue-collar worker is a person who performs non-agricultural manual labor that can be skilled or unskilled; examples include firefighting, manufacturing, and mining. But why would Clive Thompson compare an obviously non-physical profession to a blue-collar job? I think that, viewed in terms of the changes happening now, more and more blue-collar jobs will shift (and have shifted) away from the purely physical. Many blue-collar jobs are already there. Working in a car assembly plant in the 21st century means working with, and alongside, technology. Modern mining operations require a similar level of understanding of the machinery once the person in the entry-level position wants a promotion. However, the primary identifying characteristic of these jobs will continue to be the requirement to produce the same things over and over again. Viewed from that perspective, assembling code in the future could wind up like assembling a car: everyone tackles a piece of code that they repeatedly produce (without much variation), but they never do more than that.

This brings me back to the role of the business analyst. Viewed from the blue-collar perspective, the role of business analyst shares many of the same characteristics as 'everyday' coding. It's rare that we work on a project that's inventing something new, and it's also rare that we need a vast amount of creativity to get the job done. We perform a lot of the same actions on every project, with minimal variation on the theme. Even the BABOK states that the job can be done by many different roles. "A business analyst is any person who performs business analysis tasks described in the BABOK® Guide, no matter their job title or organizational role. (…) Other common job titles for people who perform business analysis include business architect, business systems analyst, data analyst, enterprise analyst, management consultant, process analyst, product manager, product owner, requirements engineer, and systems analyst." (BABOK v3.)

There are many people performing business analysis, under many guises, and more joining the field every day as the demand increases. We come from a variety of backgrounds (like coders) and don’t need a specialized college degree. Much like the coders Clive Thompson talks about in his article, we’re the workers, “making sure that a company is sort of moving along.”

And this creates an issue, because it means there are many factors that could disrupt the role of the business analyst in the future.

The most obvious one is supply and demand. To date, there’s always been a demand for business analysts. The market is currently in an equilibrium where supply and demand appear evenly matched, and the price the market is prepared to pay for business analysis skills means most people in the role can maintain a solidly middle-class existence.

However, the law of supply and demand also says that if the time ever arrives when demand decreases, because the economy is forcing organizations and businesses to cut back on spending, or an oversupply of business analysts means the price for the service can be lowered, then the role becomes far less attractive.

The role can also be disrupted by technology and methodologies. Agile already attempted to do this with a focus on developers and the business working directly with each other. With AI on the horizon, it’s entirely possible someone will figure out a way for a customer to answer a series of questions and the AI will synthesize the answers and produce a decent specification or set of user stories as an output.

The only thing that may save the business analyst's role in the near future is that the role also falls into the pink-collar job classification. Originally used to denote a group of service jobs predominantly performed by women in the 1970s, the term has slowly morphed into a way to classify jobs that require social skills and consist of interacting with people and customers.

“For an office worker, that could mean being able to communicate across departments. For someone in customer service, it’s interacting with another complicated human. For a care provider, it’s the empathy to help someone vulnerable and in need. These are all skills robots are really bad at—at least for now. And they have, over the last three decades, become increasingly vital in the labor market.” (Greenfield, 2016.)

But as we all know, industries change quickly, and roles and jobs that were seen as necessary suddenly become unnecessary.

So, for now, and in the near future, our jobs as business analysts might be safe. Our social skills and certifications may save us. But as plenty of people have found out the hard way over the centuries, disruption in an industry comes swiftly, and the effects are devastatingly immediate.

Bibliography
Clive Thompson, “The Next Big Blue-Collar Job is Coding”, WIRED (2017): https://www.wired.com/2017/02/programming-is-the-new-blue-collar-job/.
IIBA, BABOK v3: A Guide to the Business Analysis Body of Knowledge (International Institute of Business Analysis, Toronto, Ontario, Canada, 2015).
“‘Wired’ Declares Coding As Next Blue-Collar Job Boom,” NPR (2017): http://www.npr.org/2017/02/10/514566974/wired-declares-coding-as-next-blue-collar-job-boom.
Rebecca Greenfield, “Forget Robots—People Skills Are the Future of American Jobs. You might call it pink-collar work. Experts call it the future of the labor market”, Bloomberg (2016): https://www.bloomberg.com/news/articles/2016-12-07/forget-robots-jobs-requiring-people-skills-are-the-future-of-american-labor.
Peter Waldman, “Inside Alabama’s Auto Jobs Boom: Cheap Wages, Little Training, Crushed Limbs. The South’s manufacturing renaissance comes with a heavy price”, Bloomberg (2017): https://www.bloomberg.com/news/features/2017-03-23/inside-alabama-s-auto-jobs-boom-cheap-wages-little-training-crushed-limbs.
Ivy Pinchbeck, Women Workers and the Industrial Revolution (Frank Cass and Company, 1930; reprinted 1969, 1977).

Stuck in the Middle

As Business Analysts, we have all been there.

We held high hopes for a collegial give-and-take in a workshop, or a productive meeting where processes and requirements would be teased out. In our imagination, visions of cooperative brainstorming danced before our eyes, along with the shimmering promise of decisive and swift stakeholder agreement. The solution would be optimal and deliver a quality outcome.

Then the project starts, and the original objectives are lost. It moves from the potential of an excellent solution to a half-baked compromise. The users get only a fraction of what they wanted, and that fraction does not do anything useful. Everyone swears they did not ask for what they got, although our requirements management spreadsheet/system says otherwise (and so do the stakeholder signatures). In the world of Scrum, the product owner looks at the design and keeps saying, "Is that what I asked for?" Moreover, the design keeps changing.

The Business Analyst struggles their way through the project feeling like a failure, wondering why the exciting techniques described in the BABOK (brainstorming, collaborative games, experimenting, research!) seem so ineffective.

For example, the BABOK v3 notes that "Workshops can promote trust, mutual understanding, and strong communication among the stakeholders and produce deliverables that structure and guide future work efforts." However, the BABOK does list some limitations: "The success of the workshop is highly dependent on the expertise of the facilitator and knowledge of the participants. Workshops that involve too many participants can slow down the workshop process. Conversely, collecting input from too few participants can lead to the overlooking of needs or issues that are important to some stakeholders, or to the arrival at decisions that don't represent the needs of the majority of the stakeholders."

Research argues that the long-standing advice about collaboration in the workplace may be entirely wrong. The paper "Equality bias impairs collective decision-making across cultures" suggests that decisions made during meetings, workshops, or as part of a collaborative team will likely be not only less than optimal but possibly substandard, unless all participants are at an equal level of expertise and (just as important) awareness.

Ali Mahmoodi and his coauthors wrote, “When making decisions together, we tend to give everyone an equal chance to voice their opinion. To make the best decisions, each opinion must be scaled according to its reliability. Using behavioral experiments and computational modeling, we tested (in Denmark, Iran, and China) the extent to which people follow this latter, normative strategy. We found that people show a strong equality bias: they weight each other’s opinion equally regardless of differences in their reliability, even when this strategy was at odds with explicit feedback or monetary incentives.”
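
As a toy illustration of the normative strategy described above, the sketch below (in Python, with numbers invented purely for demonstration) contrasts equal weighting of opinions with weighting scaled by each participant's reliability:

    # Invented example: three participants estimate the probability that a
    # requirement is correct; the reliabilities reflect their track records.
    opinions = [0.9, 0.2, 0.3]
    reliabilities = [0.9, 0.4, 0.3]

    # Equality bias: every opinion counts the same.
    equal = sum(opinions) / len(opinions)

    # Normative strategy: scale each opinion by its reliability.
    weighted = sum(o * r for o, r in zip(opinions, reliabilities)) / sum(reliabilities)

    print(f"equal weighting:       {equal:.2f}")     # 0.47
    print(f"reliability weighting: {weighted:.2f}")  # 0.61

The specific numbers don't matter; the point is that the two strategies can diverge sharply when expertise is unevenly distributed, which is exactly the situation the equality bias obscures.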

The problem is compounded by the inability of most people to recognize when they are not competent. “A wealth of research suggests that people are poor judges of their own competence—not only when judged in isolation but also when judged relative to others. For example, people tend to overestimate their own performance on hard tasks; paradoxically, when given an easy task, they tend to underestimate their own performance (the hard-easy effect) (1). Relatedly, when comparing themselves to others, people with low competence tend to think they are as good as everyone else, whereas people with high competence tend to think they are as bad as everyone else (the Dunning–Kruger effect) (2). Also, when presented with expert advice, people tend to insist on their own opinion, even though they would have benefitted from following the advisor’s recommendation (egocentric advice discounting).” (Mahmoodi et al., 2015.)

This suggests that even in a facilitated workshop, the bias will not be sufficiently neutralized to get the desired outcomes. People will still insist on their own view of requirements, even when faced with differing opinions. Alternatively, they defer to the person perceived as having higher competence, even if that perception is not accurate. As suggested by Mahmoodi's paper, and contrary to received wisdom, the best strategy for arriving at a set of optimal requirements might first involve determining the participants' skill levels before deciding which requirements are more valid. The research also suggests that fewer, more knowledgeable participants in a workshop or meeting could produce a clearer set of requirements.

The danger of assumed expertise (the Dunning–Kruger effect) and the equal weighting of people's opinions were both witnessed in a real-world project example from New Zealand. Novopay was an infamous education-sector payroll project, and the government ran an inquiry to identify the issues that led to its failure. The inquiry specifically called out the SMEs (Subject Matter Experts) in the report: "The Ministry had difficulty providing sufficient SMEs with adequate knowledge, and there were many examples of SME input being incomplete, inaccurate or contradictory." (Jack and Wevers, 2013.) The Ministry did not have the expertise to realize their SMEs were not providing competent information, and the SMEs thought they had sufficient expertise to provide advice on a software development project. As the SMEs and the Ministry agreed with each other, and at the same time deferred to each other, it is no surprise that the project had major issues.

This behavior is not unique, and anecdotal evidence suggests many projects fall into the same trap. David Dunning (one of the researchers who identified what is now called the Dunning-Kruger effect) points out that our minds can be, “(…) filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers.” (Dunning, 2014.)

The problem of cognitive biases may also help explain some of the frustrations evident in the Agile world. Scrum tries to deal with the issue of identifying who can guide the product vision by assigning one role to the task—the product owner. Scrum aims to reduce the hazards associated with everyone making decisions by making one person responsible for the product. Of course, the pitfall is that the product owner needs to have real expertise, as opposed to thinking they have expertise. Although the Agile approach seems instinctively better (collaboration, sustainable pace, self-organizing teams, business people and developers working together), Agile remains as susceptible to failure as the waterfall model. Perhaps it comes down to the simple fact that the Agile Manifesto was conceived by experts who may have assumed that everyone else was highly skilled. In the ordinary world, there are plenty of people trying to use Agile who are precisely none of those things.

So, what’s a business analyst to do in the face of the knowledge that we are all affected by cognitive biases and metacognition errors?

Luckily, business analysts have the only role on the project that stands a chance of seeing past the biases. We are tasked with collecting information, reviewing it as best we can, and producing what we hope is an optimal solution. We are forced to keep an open mind and arrive at our conclusions by weighing up options. Working on a project-by-project basis, there are many occasions when we have little or no knowledge of the business area or organization we are working for. This makes it impossible to maintain an illusion of competence, because it is obvious that we are not competent. Therefore, we have already cleared one hurdle: we have enough expertise to realize we are not an expert and to seek assistance from others.

This, of course, can be a double-edged sword. If we have worked in one industry for a period of time, there's a danger (as per the Novopay example) that we assume we know the job intimately enough to produce a sound set of requirements without consulting anyone else in the business.

We also need to contemplate whether we have the right business analysis skills for a project or if we are at the right level to tackle the task ahead. If we consider the pitfall of cognitive biases, it is obvious that we could fall into the trap of thinking that we are proficient in analysis when we are not. Therefore, the IIBA certifications become an important instrument in helping to offset this delusion. By gaining certification, we have gone some way to proving we have a level of mastery in the business analysis arena.

Even certification does not completely get us off the hook. Dunning points out education’s limits. “Here’s a particularly frightful example: Driver’s education courses, particularly those aimed at handling emergency maneuvers, tend to increase, rather than decrease, accident rates. They do so because training people to handle, say, snow and ice leave them with the lasting impression that they are permanent experts on the subject. In fact, their skills usually rapidly erode after they leave the course. Months or even decades later, they have confidence but little leftover competence when their wheels begin to spin.” (Dunning, 2014.)

Recertification, although painful, may be the necessary thorn in our sides that prevents us from assuming we are still good business analysts twenty years after we read a book on the subject.

Finally, there’s one consoling aspect of learning about cognitive biases: we can be less hard on ourselves if we are struggling to get any agreement on requirements, or if the user stories cannot be corralled into a sensible design. It may be a clear demonstration of Dunning-Kruger and the equality bias in full effect rather than the fault of the business analyst.

Then again, maybe that is just another example of an error in thinking. As David Dunning notes, cognitive biases are, “the anosognosia of everyday life.” (Dunning, 2004.)

“As such, wisdom may not involve facts and formulas so much as the ability to recognize when a limit has been reached. Stumbling through all our cognitive clutter just to recognize a true “I do not know” may not constitute failure as much as it does an enviable success, a crucial signpost that shows us we are traveling in the right direction toward the truth.” (Dunning, 2014.)

Bibliography
IIBA, BABOK v3: A Guide to the Business Analysis Body of Knowledge (International Institute of Business Analysis, Toronto, Ontario, Canada, 2015).
Ali Mahmoodi, Dan Bang, Karsten Olsen, Yuanyuan Aimee Zhao, Zhenhao Shi, Kristina Broberg, Shervin Safavi, Shihui Han, Majid Nili Ahmadabadi, Chris D. Frith, Andreas Roepstorff, Geraint Rees, Bahador Bahrami, "Equality bias impairs collective decision-making across cultures", Proceedings of the National Academy of Sciences (2015): http://www.pnas.org/content/112/12/3835.full.pdf.
Murray Jack and Sir Maarten Wevers, KNZM, Report of the Ministerial Inquiry into the Novopay Project, (New Zealand Government, 2013).
David Dunning, “We are all Confident Idiots,” Pacific Standard. Miller-McCune Center for Research, Media, and Public Policy (2014): https://psmag.com/we-are-all-confident-idiots-56a60eb7febc#.tvb54we9p.
David Dunning, Self-insight: Roadblocks and Detours on the Path to Knowing Thyself: (Taylor & Francis, 2004).

Editor’s Notes
(1) The hard–easy effect is a cognitive bias that manifests itself as a tendency to overestimate the probability of one’s success at a task perceived as hard and to underestimate the likelihood of one’s success at a task perceived as easy. (Wikipedia definition – https://en.wikipedia.org/wiki/Hard–easy_effect)

(2) The Dunning–Kruger effect is a cognitive bias in which low-ability individuals suffer from illusory superiority, mistakenly assessing their ability as much higher than it really is. (Wikipedia definition – https://en.wikipedia.org/wiki/Dunning–Kruger_effect)
