The Business Analyst / Tester switch

The relationship between Business Analysts and Testers has long fascinated me.

The two roles, when done well, start to merge into one another. Both demand similar kinds of analytical skills. The main difference is the point in the development cycle at which each role is involved.

That both roles require similar skills and a similar mindset was demonstrated to me early in my career, when I worked on the systems to support rail privatisation here in the UK. To say the project was waterfall is an understatement: it had mountains of paper and ISO certificates to prove it! Of course, this was in the days before anyone had heard of lightweight processes, let alone agile.

Anyway, coders were vastly outnumbered by Testers. Or rather, by Business Analysts acting as Testers. The consultancy building the system deployed Business Analysts in the testing role, and it actually worked really well. There was limited end-user contact but much reading of documentation and much thinking. Either way, analysis skills were the primary need.

Traditionally, programming is sandwiched between the two: analysis before the coding, testing after it. Yet the more I look at the way the agile development cycle is itself evolving, the more it seems that the two roles will, in the near future, swap places. Coding will still be the meat in the sandwich, but increasingly Testers will do their work before code is written and Analysts afterwards.

There are two forces at work here: one pushing testing earlier in the cycle and the other pushing business analysis later. First the easy one: test first.

Agile, and specifically Extreme Programming, has long advocated a test-first approach. Originally it was programmers who adopted test first for unit testing. But as tools and experience have grown, one now sees test first elsewhere.

Acceptance Test Driven Development (ATDD) using tools such as FIT was the first to appear. Testers and product owners (often Analysts) would work together to set acceptance criteria and then acceptance tests.

In the last few years this has evolved into Behaviour Driven Development (BDD) using tools like Cucumber and JBehave. Developers have become more involved at this stage and techniques such as “Three Amigos” (sometimes called Power of Three) have emerged. Here the Coder, Tester and Analyst work together to come up with the criteria and tests which the code – which is yet to be developed – needs to pass.
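
To make that concrete, here is a minimal sketch of the kind of test a Three Amigos conversation might produce before any production code exists. The ShoppingBasket module and its behaviour are invented purely for illustration; in Cucumber or JBehave the Given/When/Then wording would live in a plain-text feature file bound to step definitions, but the principle is the same: the behaviour is agreed and expressed as an executable test first, and the code is then written to make it pass.

    # Scenario agreed by Coder, Tester and Analyst before the code is written:
    #   Given a basket containing items priced 10.00 and 5.00
    #   When a 10% promotional discount is applied
    #   Then the basket total is 13.50

    import pytest

    from basket import ShoppingBasket  # hypothetical module: it does not exist yet


    def test_promotional_discount_reduces_total():
        # Given: a basket holding two items
        basket = ShoppingBasket()
        basket.add_item("book", 10.00)
        basket.add_item("pen", 5.00)

        # When: the promotion is applied
        basket.apply_discount(0.10)

        # Then: the total reflects the discount
        assert basket.total() == pytest.approx(13.50)

Initially the test fails, of course; that failing test becomes the shared definition of done for the feature.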

Of course, one can’t see all acceptance criteria in advance. This is one of the ways Agile working and digital product development differ from traditional IT. Teams accept that, as something is built, understanding grows; consequently, additional criteria and tests will emerge.


Ultimately the only true test is what happens when the product is put in front of real live users and customers in the market. And this is where the Business Analyst reenters the picture.

The second force at work is the ability to develop small products fast and cheaply. Lengthy analysis and planning phases are not cheap. Analysis and planning have not benefited from new tools and techniques the way programming has. If anything, these phases have got longer in corporate environments.

In the beginning there is only a rough idea of what is required and what the value of a product or feature is. Once there is a working thing, even a really tiny thing, then analysis can truly begin.

Do customers use this the way we thought they might?

Is it useful to them?

What else would help them here?

Anyone who has been in the profession for a few years will be well aware of how understanding changes once a real product is used. Until there is a thing to be seen, touched and used, everything else is speculative.

Some teams push this to the extreme with hypothesis-driven development: they accept that any idea is just that, an idea. The idea might be useful and it might be valuable, or it might be neither. Therefore, run an experiment and test the hypothesis. If the experiment is positive then expand on the idea. And if the experiment invalidates the idea then move on.
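
As a sketch of how such an experiment might be judged (the figures and the two-percentage-point threshold below are invented for illustration): the hypothesis and its success criterion are stated before the experiment runs, and the result decides whether to expand or move on.

    # Hypothesis: the new checkout flow lifts sign-up conversion by at least
    # two percentage points. All figures are invented for illustration.

    MINIMUM_LIFT = 0.02  # success threshold agreed before the experiment ran


    def conversion_rate(visitors: int, sign_ups: int) -> float:
        return sign_ups / visitors


    control = conversion_rate(visitors=4000, sign_ups=320)   # existing flow
    variant = conversion_rate(visitors=4000, sign_ups=436)   # new flow

    if variant - control >= MINIMUM_LIFT:
        print("Hypothesis supported: expand on the idea")
    else:
        print("Hypothesis invalidated: move on")

A real experiment would also want enough traffic for the difference to be statistically meaningful, but the discipline is the same: agree the test before the work, not after.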

Advances in technology – not just CPU cycles but the tools that massive processing power makes possible – mean that today it is far faster and cheaper to find out what a customer wants by just building something.

The question is: once you’ve got your product to put in front of users and customers, who should look at the results?

Who has the skills to make sense of feedback? To watch users working with the product? To talk to users about how the product could change their work?

You guessed it: it’s time for the Business Analyst.

Now, rather than spending their time questioning what might happen, how a process might change, how customers could respond, Analysts are needed to look at what does happen. And having looked at what happens, they need to think about what comes next: what experiments should be run next? Should the next change be in the software, or in the business process? What opportunities become possible? Even: does the business strategy need to change?

Twenty years ago it made a lot of sense to think long and hard about a problem before letting coders work. Today it can be far more effective to do something and analyse after the event.

As a result, Testers and Analysts need to swap places. Testers need to engage with the test-driven process while Analysts look at the results and determine value.

Of course, when teams are running tight iterations, the close of one iteration is the start of another. So which comes first and which follows is merely a matter of perspective. The key point is that Analysts need to devote more time and effort to analysing results and feeding those findings back into requirements and prioritisation.

In a world where digital products can be created faster and more cheaply than ever, analysis skills become more valuable, because it becomes more important to kill things which look successful but are not: products which look cool but don’t deliver business improvement. Nor is it just products that deliver no value: products which don’t deliver enough value also need to be weeded out. Even a product which is competitive in the market might not be competitive within your portfolio of other successful products.
