Stuck in the Middle
As Business Analysts, we have all been there.
We held high hopes for a collegial give-and-take in a workshop or a productive meeting where processes and requirements would be teased out. In our imagination, visions of cooperative brainstorming danced before our eyes, along with the shimmering promise of swift and decisive stakeholder agreement. The solution would be optimal and deliver a quality outcome.
Then the project starts, and the original objectives are lost. The potential for an excellent solution gives way to a half-baked compromise. The users end up with only a fraction of what they wanted, and that fraction does not do anything useful. Everyone swears they did not ask for what they got, although our requirements management spreadsheet or system says otherwise (and so do the stakeholder signatures). In the world of Scrum, the product owner looks at the design and keeps asking, “Is that what I asked for?” Meanwhile, the design keeps changing.
The business analyst struggles through the project feeling like a failure, wondering why the exciting techniques described in the BABOK (brainstorming, collaborative games, experimenting, research!) seem so ineffective.
For example, BABOK v3 notes that “Workshops can promote trust, mutual understanding, and strong communication among the stakeholders and produce deliverables that structure and guide future work efforts.” However, the BABOK does list some limitations. “The success of the workshop is highly dependent on the expertise of the facilitator and knowledge of the participants. Workshops that involve too many participants can slow down the workshop process. Conversely, collecting input from too few participants can lead to the overlooking of needs or issues that are important to some stakeholders, or to the arrival at decisions that don’t represent the needs of the majority of the stakeholders.”
Research suggests that the long-standing advice about collaboration in the workplace may be entirely wrong. The paper “Equality bias impairs collective decision-making across cultures” argues that decisions made in meetings, in workshops, or as part of a collaborative team are likely to be not just less than optimal but possibly substandard, unless all participants are at an equal level of expertise and (just as important) awareness of their own competence.
Ali Mahmoodi and his coauthors wrote, “When making decisions together, we tend to give everyone an equal chance to voice their opinion. To make the best decisions, each opinion must be scaled according to its reliability. Using behavioral experiments and computational modeling, we tested (in Denmark, Iran, and China) the extent to which people follow this latter, normative strategy. We found that people show a strong equality bias: they weight each other’s opinion equally regardless of differences in their reliability, even when this strategy was at odds with explicit feedback or monetary incentives.”
The problem is compounded by the inability of most people to recognize when they are not competent. “A wealth of research suggests that people are poor judges of their own competence—not only when judged in isolation but also when judged relative to others. For example, people tend to overestimate their own performance on hard tasks; paradoxically, when given an easy task, they tend to underestimate their own performance (the hard-easy effect) (1). Relatedly, when comparing themselves to others, people with low competence tend to think they are as good as everyone else, whereas people with high competence tend to think they are as bad as everyone else (the Dunning–Kruger effect) (2). Also, when presented with expert advice, people tend to insist on their own opinion, even though they would have benefitted from following the advisor’s recommendation (egocentric advice discounting).” (Mahmoodi et al., 2015.)
This suggests that even in a facilitated workshop, the bias will not be sufficiently neutralized to achieve the desired outcomes. People will still insist on their own view of the requirements, even when faced with differing opinions. Alternatively, they will defer to the person perceived as having the greatest competence, even when that perception is wrong. As Mahmoodi’s paper suggests, and contrary to received wisdom, the best strategy for arriving at an optimal set of requirements might first involve determining the participants’ skill levels before deciding which requirements carry more weight. The research also suggests that fewer, more knowledgeable participants in a workshop or meeting could produce a clearer set of requirements.
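To make the normative strategy from Mahmoodi’s paper concrete, the toy calculation below contrasts equal weighting with reliability weighting of opinions. It is only an illustrative sketch in Python: the participants, the estimates, the reliability scores, and the simple weighted-average rule are assumptions for demonstration, not figures or methods taken from the paper.

```python
# Hypothetical example: three participants estimate the same quantity.
# "reliability" is an assumed score for how accurate each person has
# historically been on this kind of question (higher = more reliable).
opinions = [
    {"name": "novice",  "estimate": 40.0, "reliability": 1.0},
    {"name": "analyst", "estimate": 70.0, "reliability": 3.0},
    {"name": "expert",  "estimate": 90.0, "reliability": 6.0},
]

def equal_weighting(items):
    """Equality bias: every opinion counts the same."""
    return sum(o["estimate"] for o in items) / len(items)

def reliability_weighting(items):
    """Normative strategy: scale each opinion by its reliability."""
    total = sum(o["reliability"] for o in items)
    return sum(o["estimate"] * o["reliability"] for o in items) / total

print(f"Equal weighting:       {equal_weighting(opinions):.1f}")       # 66.7
print(f"Reliability weighting: {reliability_weighting(opinions):.1f}")  # 79.0
```

With these made-up numbers, the equally weighted answer (66.7) drifts toward the least reliable voices, while the reliability-weighted answer (79.0) sits much closer to the expert’s estimate, which is the point of the paper’s normative argument.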
The danger of assumed expertise (the Dunning-Kruger effect) and the equal weighting of people’s opinions can both be seen in a real-world project example from New Zealand. Novopay was an infamous education sector payroll project, and the government ran an inquiry to identify the issues that led to its failure. The inquiry specifically called out the SMEs (Subject Matter Experts) in its report. “The Ministry had difficulty providing sufficient SMEs with adequate knowledge, and there were many examples of SME input being incomplete, inaccurate or contradictory.” (Jack and Wevers, 2013.) The Ministry did not have the expertise to realize its SMEs were not providing competent information, and the SMEs thought they had sufficient expertise to advise on a software development project. As the SMEs and the Ministry agreed with each other while simultaneously deferring to each other, it is no surprise that the project had major issues.
This behavior is not unique, and anecdotal evidence suggests many projects fall into the same trap. David Dunning (one of the researchers who identified what is now called the Dunning-Kruger effect) points out that our minds can be “(…) filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers.” (Dunning, 2014.)
The problem of cognitive biases may also help explain some of the frustrations evident in the Agile world. Scrum tries to deal with the issue of identifying who can guide the product vision by assigning the task to a single role: the product owner. By making one person responsible for the product, Scrum aims to reduce the hazards of everyone making decisions. Of course, the pitfall is that the product owner needs to have real expertise, as opposed to merely believing they have it. Although the Agile approach seems instinctively better (collaboration, sustainable pace, self-organizing teams, business people and developers working together), Agile remains as susceptible to failure as the waterfall model. Perhaps it comes down to the simple fact that the Agile Manifesto was conceived by experts who may have assumed that everyone else was equally skilled. In the ordinary world, plenty of people trying to use Agile are precisely none of those things.
So, what’s a business analyst to do in the face of the knowledge that we are all affected by cognitive biases and metacognitive errors?
Luckily, business analysts hold the only role on the project that stands a chance of seeing past the biases. We are tasked with collecting information, reviewing it as best we can, and producing what we hope is an optimal solution. We are forced to keep an open mind and arrive at our conclusions by weighing up options. As business analysts working on a project-by-project basis, we often have little or no knowledge of the business area or organization we are working for. That makes it impossible to maintain the illusion that we are competent, because it is obvious that we are not. We have therefore already cleared one hurdle: we have enough expertise to realize we are not experts, and to seek assistance from others.
This, of course, can be a double-edged sword. If we have worked in one industry for a long time, there’s a danger (as the Novopay example shows) that we assume we know the job intimately enough to produce a sound set of requirements without consulting anyone else in the business.
We also need to contemplate whether we have the right business analysis skills for a project, and whether we are at the right level to tackle the task ahead. Given the pitfalls of cognitive biases, we could easily fall into the trap of thinking we are proficient in analysis when we are not. The IIBA certifications therefore become an important instrument in offsetting this delusion: they provide an external check on our self-assessment. By gaining certification, we have gone some way to proving we have a level of mastery in the business analysis arena.
Even certification does not completely get us off the hook. Dunning points out the limits of education. “Here’s a particularly frightful example: Driver’s education courses, particularly those aimed at handling emergency maneuvers, tend to increase, rather than decrease, accident rates. They do so because training people to handle, say, snow and ice leaves them with the lasting impression that they are permanent experts on the subject. In fact, their skills usually rapidly erode after they leave the course. Months or even decades later, they have confidence but little leftover competence when their wheels begin to spin.” (Dunning, 2014.)
Recertification, although painful, may be the necessary thorn in our sides that prevents us from assuming we are still good business analysts twenty years after we read a book on the subject.
Finally, there’s one consoling aspect of learning about cognitive biases: we can be less hard on ourselves if we are struggling to get any agreement on requirements, or if the user stories cannot be corralled into a sensible design. It may be the Dunning-Kruger effect and equality bias in full effect, rather than any failing on the part of the business analyst.
Then again, maybe that is just another example of an error in thinking. As David Dunning notes, cognitive biases are “the anosognosia of everyday life.” (Dunning, 2004.)
“As such, wisdom may not involve facts and formulas so much as the ability to recognize when a limit has been reached. Stumbling through all our cognitive clutter just to recognize a true “I do not know” may not constitute failure as much as it does an enviable success, a crucial signpost that shows us we are traveling in the right direction toward the truth.” (Dunning, 2014.)
Bibliography
IIBA, BABOK v3: A Guide to the Business Analysis Body of Knowledge (International Institute of Business Analysis, Toronto, Ontario, Canada, 2015).
Ali Mahmoodi, Dan Bang, Karsten Olsen, Yuanyuan Aimee Zhao, Zhenhao Shi, Kristina Broberg, Shervin Safavi, Shihui Han, Majid Nili Ahmadabadi, Chris D. Frith, Andreas Roepstorff, Geraint Rees, and Bahador Bahrami, “Equality bias impairs collective decision-making across cultures,” Proceedings of the National Academy of Sciences (2015): http://www.pnas.org/content/112/12/3835.full.pdf.
Murray Jack and Sir Maarten Wevers, KNZM, Report of the Ministerial Inquiry into the Novopay Project (New Zealand Government, 2013).
David Dunning, “We Are All Confident Idiots,” Pacific Standard, Miller-McCune Center for Research, Media, and Public Policy (2014): https://psmag.com/we-are-all-confident-idiots-56a60eb7febc#.tvb54we9p.
David Dunning, Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (Taylor & Francis, 2004).
Editor’s Notes
(1) The hard–easy effect is a cognitive bias that manifests itself as a tendency to overestimate the probability of one’s success at a task perceived as hard and to underestimate the likelihood of one’s success at a task perceived as easy. (Wikipedia definition – https://en.wikipedia.org/wiki/Hard–easy_effect)
(2) The Dunning–Kruger effect is a cognitive bias in which low-ability individuals suffer from illusory superiority, mistakenly assessing their ability as much higher than it really is. (Wikipedia definition – https://en.wikipedia.org/wiki/Dunning–Kruger_effect)