
The Importance Of Experimentation And “Falsification”

Increasingly, teams are working to deliver change in an incremental and iterative way. As teams shift towards a product management approach, there is often a parallel shift towards experimentation and hypothesis-based product or service development. The idea being that, in an uncertain market, it’s impossible to say with any certainty what customers and stakeholders will actually find valuable. Of course, market research and customer insight are valuable… but as anyone who has been involved in research will tell you, there’s a ‘saying vs doing’ gap. I might say that I would definitely subscribe to an online streaming service that specializes in BA content for $9.99 a month… but when the offer actually comes around it’s anyone’s guess as to whether I’ll actually do it.

 

Hypothesis-based development starts when we accept that, however we dress it up, the features and functions that we deliver are our best guess, at a point in time, of what stakeholders and users will find useful. Until they are actually used, we won’t know how useful people find them. And crucially, until users themselves get to play with the amended product or service, they won’t be able to determine how valuable it is to them.

 

In one way, this idea of taking a hypothesis and testing it sounds intuitive. Yet, in reality, these hypotheses are often badly articulated and made without a clear understanding of how and when they will be tested.  I’ll move on to that a little later, but first (as in any sensible article), we need to talk about swans…

 

Karl Popper: Seeing Swans and “Falsification”

Influential Austrian philosopher Karl Popper used an analogy involving swans that remains extremely relevant today, including in business change initiatives, and I’ll attempt to paraphrase it succinctly here. Imagine an observer wanting to know what color swans exist in the world. If they see a white swan, they know with certainty that white swans exist. If they see more white swans, that might be interesting, and it might help them see patterns about the prevalence of white swans, but it doesn’t actually give them much more information about the colors of swans that exist. It would be easy, after seeing 10,000 white swans, to conclude “all swans are white”. Seeing a single black swan, on the other hand, would refute the hypothesis that “all swans are white”. In that regard, and in that context, a sighting of a black swan provides significantly more information than a sighting of the 10,000th white swan.

 

This takes us to a key tenet of rigor in scientific research: falsification. To be valid, a hypothesis needs to be falsifiable. This is perhaps an idea that we ought to consider in our work as business analysts and product professionals too.

 

Imagine there is a hypothesis that a particular group of features or functions would prove useful. That hypothesis might be stated as a requirement, a user story or whatever. It might have acceptance criteria, but often these are somewhat functional in nature. Wouldn’t it also be useful to extend these and have some additional form of measure that determines whether the feature has actually been deemed successful and valuable from different stakeholders’ perspectives? And crucially, wouldn’t it be useful to test against these criteria early and regularly? If an idea isn’t going to fly, far better to cut our losses early and focus on something that will be valuable.
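
To illustrate, here’s a minimal sketch in Python of what a falsifiable hypothesis might look like when paired with a measurable threshold and an agreed review point. The metric names, numbers and dates are entirely hypothetical; this is an illustration, not a prescription:

from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str      # what we believe will happen
    metric: str         # what we will actually measure
    threshold: float    # the value below which the hypothesis is falsified
    review_date: str    # when we commit to checking

def is_falsified(hypothesis: Hypothesis, observed_value: float) -> bool:
    # The hypothesis survives only if the observed metric meets the threshold.
    return observed_value < hypothesis.threshold

h = Hypothesis(
    statement="Trial users will adopt the new export feature",
    metric="30-day adoption rate",
    threshold=0.05,
    review_date="30 days after release",
)
print(is_falsified(h, observed_value=0.02))  # True: falsified, so rethink early

The specifics don’t matter; what matters is that there is a definite observation, agreed in advance, that would refute the hypothesis.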

 

This leads us to consider testing ideas with Minimum Viable Products (MVPs) and prototypes: two ideas that are often misunderstood.

 


MVPs Are Easily Misunderstood

One thing I’ve learned over the years is to ask the following question:

“When you say MVP, what exactly do you mean by that?”

 

I have found that different stakeholders within the same organization often have different views of what MVP means. Imagine a hypothetical scenario: some people think it means “release one”, the first fully-featured release. Others think it’s a release with partial functionality. Others think it’s got most features, but on a different technology stack… a recipe for disaster!

There’s enough written elsewhere about what MVP means, but suffice to say it’s crucial that there’s a shared understanding. I would tend towards viewing it as a semi-functional test. I draw on the definition Eric Ries gave in his book The Lean Startup:

 

“The MVP is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time. The minimum viable product lacks many features that may prove essential later on. However, in some ways, creating an MVP requires extra work: we must be able to measure its impact.”

Ries makes an important point here: when we’re talking MVP, we ought to be talking measurement. How will we know if we’ve built the right thing? How will we measure success from the organization’s perspective? From the customer’s perspective? From the viewpoints of other stakeholders?

That’s a tricky question, but if we don’t ask it, we rob ourselves of the opportunity to properly test an idea.  It becomes too easy to focus on the number of things that have been delivered, rather than the value that’s been enabled.
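
As a concrete (and deliberately simplified) sketch, again with hypothetical metrics and thresholds, measuring an MVP might mean agreeing criteria from more than one perspective before release, then checking them after a turn of the Build-Measure-Learn loop:

def conversion_rate(signups: int, visitors: int) -> float:
    # The kind of impact measure an MVP must be built to capture.
    return signups / visitors if visitors else 0.0

TARGET_CONVERSION = 0.03    # organization's perspective (hypothetical target)
TARGET_SATISFACTION = 4.0   # customer's perspective: 1-5 survey score (hypothetical)

conversion = conversion_rate(signups=45, visitors=2000)
satisfaction = 4.2          # e.g. from a post-use survey

if conversion >= TARGET_CONVERSION and satisfaction >= TARGET_SATISFACTION:
    print("Persevere: this turn of the Build-Measure-Learn loop passed")
else:
    print("Pivot or stop: the agreed measures weren't met")

Notice that the decision is driven by measures agreed in advance, not by counting the features that have been delivered.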

 

Prototypes Play A Part Too

Even before MVPs, it is worth considering prototyping. Often, the word ‘prototype’ conjures up the image of a screen mock-up. That could be useful, but a prototype could equally be a ‘could-be’ process model or service journey. Imagine speaking to banking customers: “We’re thinking of changing the way you interact with this banking service. Here’s how it might change. Would that work for you?” Of course, there will still be the saying vs doing gap that I mentioned earlier, but it’s a chance for earlier conversations and feedback.

 

So, in summary, when thinking about hypothesis-driven product or service development, remember the swans. Make the various hypotheses falsifiable, and think about how they’ll be tested. Then test them early and often!


Adrian Reed

Adrian Reed is a true advocate of the analysis profession. In his day job, he acts as Principal Consultant and Director at Blackmetric Business Solutions, where he provides business analysis consultancy and training solutions to a range of clients in varying industries. He is a Past President of the UK chapter of the IIBA® and he speaks internationally on topics relating to business analysis and business change. Adrian wrote the 2016 book ‘Be a Great Problem Solver… Now’ and the 2018 book ‘Business Analyst’. You can read Adrian’s blog at http://www.adrianreed.co.uk and follow him on Twitter at http://twitter.com/UKAdrianReed