Points Make Problems (Rather Than Prizes)
Or why Story Points might not be the best way of estimating effort.
‘Points make prizes’ was the catchphrase of the UK game show ‘Play Your Cards Right’. The concept of story points will be well known to anyone working in an Agile environment, but in my view they can create problems rather than deliver prizes. Here’s why.
Ron Jeffries, the person who claims to have invented story points, wrote a couple of years ago that he’s unhappy about the way they’ve come to be used. https://ronjeffries.com/articles/019-01ff/story-points/Index.html
I find myself agreeing with Jeffries to a large degree.
I vividly remember when the notorious Yorkshire Ripper serial killer was finally caught. It was the result of a routine stop, where the police officers were suspicious about his car. Subsequently the implements he’d used for his gruesome killings were discovered in the vehicle. A senior police officer giving the press conference was asked how the arrest happened, and he replied, “It was just good, old-fashioned coppering”.
What’s that rather macabre example got to do with Agile & estimating? Hopefully, I can explain (and I should point out that it’s not about taking a blunt instrument to a Delivery Manager or Product Owner!).
I get the need for a simplified estimating metric. I’ve worked on MI systems where we had to develop a load of reports, and we simply classified each one as Complex, Medium or Simple. We then applied a generic time to each category, so a ‘Simple’ report would take one day to develop, a ‘Medium’ report three days and a ‘Complex’ one five days. Of course, each ‘Simple’ report didn’t take exactly 7 or 8 hours; some took 2 or 3 and some 10 to 12. But it gave us a metric for estimating the overall effort without having to do it in detail for each report.
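To make the arithmetic concrete, here’s a minimal sketch of that kind of bucketed estimate. The one-, three- and five-day figures are the ones from my reporting example; the report counts are invented purely for illustration.

```python
# Bucketed estimating: classify each report, apply a generic time per
# category, and sum to get an overall effort figure.
# Day-per-category values come from the MI reporting example above;
# the report counts are made up for the sake of the sketch.

DAYS_PER_CATEGORY = {"Simple": 1, "Medium": 3, "Complex": 5}

# Hypothetical backlog: number of reports in each category.
backlog = {"Simple": 12, "Medium": 7, "Complex": 4}

total_days = sum(DAYS_PER_CATEGORY[cat] * count for cat, count in backlog.items())
print(f"Estimated effort: {total_days} developer-days")  # 12*1 + 7*3 + 4*5 = 53
```

The point isn’t precision per report; it’s that the errors tend to average out across a large enough batch.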
But used in this way, as a broad sizing metric, story points seem to dodge the issue. Mike Cohn, the guru of User Stories, recently wrote on his blog that story points aren’t just about the complexity of a story but about the effort required to deliver it. And if you’re estimating effort, it seems to me better to estimate the effort directly rather than dressing it up in story points.
However much science we seem to throw at it, estimating software development and delivery timescales has always been an inexact process. ‘Function points’ were a favoured metric when I started my IT career. They dated from the late 1970s and early 1980s, and estimated effort based on the types of function the software was required to deliver (inputs, outputs, searches, internal file operations or external interfaces) and the perceived complexity of each. The notion of function points evolved as software development methods evolved, and you can see a clear lineage from them to modern-day story points.
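For anyone who never encountered them, an unadjusted function point count worked roughly along these lines. The weights below are the classic IFPUG-style values as I remember them, and the counts are invented, so treat this as a sketch rather than a reference implementation.

```python
# Sketch of an unadjusted function point count. Each function type gets a
# weight depending on its perceived complexity (low / average / high).
# Weights are the classic IFPUG-style values, quoted from memory;
# the counts describe a purely hypothetical system.

WEIGHTS = {
    "external_input":     {"low": 3, "average": 4,  "high": 6},
    "external_output":    {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "average": 4,  "high": 6},   # searches
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7,  "high": 10},
}

# Hypothetical system: (function type, complexity, count)
counts = [
    ("external_input", "average", 8),
    ("external_output", "high", 5),
    ("external_inquiry", "low", 6),
    ("internal_file", "average", 3),
    ("external_interface", "low", 2),
]

unadjusted_fp = sum(WEIGHTS[ftype][cplx] * n for ftype, cplx, n in counts)
print(f"Unadjusted function points: {unadjusted_fp}")  # 32 + 35 + 18 + 30 + 10 = 125
```

Swap the weights for a Fibonacci scale and the function types for stories, and the family resemblance to story points is hard to miss.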
Jeffries makes various suggestions in that article about how to estimate, but in one part of it he makes the point that his original intention was to estimate the actual, real-world effort required. He uses the term ‘Ideal Days’: the time required with absolutely no distractions. But there are always distractions in the real world. Even in an Agile/Scrum environment there will be stand-ups, Three Amigos sessions, spikes, reviews and retrospectives. And as employees there will be team meetings, training, helping others or asking for help, admin tasks, water-cooler or coffee chats, and so on. So it’s incredibly rare that 7 hours’ effort gets completed in a single 7-hour working day.
But this is something we knew already. I started as a systems analyst (whatever happened to them?) and have done various other roles over time, but for a few years I was an out-and-out project manager. Planning, estimating, allocating resources and status reporting were my day-to-day work, and I worked for a consultancy whose key selling points included the project management methodology it had developed, codified in a number of meaty volumes.
In the same way that story points aren’t really new, Jeffries’ notion of ‘Ideal Days’ versus actual days was nothing new either; we were taught the same concept when estimating. We called it ‘resource loading’, and one of its basic assumptions was that, over time, even a relatively short period of time, it was unlikely anyone could dedicate more than 70% of their available time to a software development task. That meant that if the estimate to complete a task was 7 working (or ideal) days, it would actually take 10 working days at best.
Then we added contingency, depending on the degree of perceived risk. For a task perceived to involve minimal risk, we still added 25% to the estimate; for medium risk it was 50%, and 100% for high risk. So if that 7-working-day development task was seen as high risk, we could end up allocating 20 working days of elapsed time in the plan. That’s effectively two 2-week sprints, for something that might be estimated at 8 story points.
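Expressed as a small calculation, the whole thing fits in a few lines. The 70% loading and the 25/50/100% contingency bands are the figures from that methodology; the 10-day sprint length is my assumption for the conversion.

```python
# 'Resource loading plus contingency' arithmetic as described above.
# Assumes at most 70% of a person's time goes on the task itself, then adds
# a contingency percentage based on perceived risk.

RESOURCE_LOADING = 0.7                              # share of time actually spent on the task
CONTINGENCY = {"low": 0.25, "medium": 0.5, "high": 1.0}
SPRINT_LENGTH_DAYS = 10                             # assumed typical 2-week sprint

def elapsed_days(ideal_days: float, risk: str) -> float:
    """Convert an ideal-day estimate into elapsed working days."""
    loaded = ideal_days / RESOURCE_LOADING          # e.g. 7 ideal days -> 10 working days
    return loaded * (1 + CONTINGENCY[risk])         # add risk contingency

for risk in ("low", "medium", "high"):
    days = elapsed_days(7, risk)
    print(f"7 ideal days at {risk} risk: {days:.1f} working days "
          f"(~{days / SPRINT_LENGTH_DAYS:.1f} sprints)")
# high risk: 7 / 0.7 * 2.0 = 20 working days, i.e. two 2-week sprints
```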
If we look at a typical 10-day sprint, therefore, a medium-risk commitment of a full sprint’s worth of ideal effort works out, by the same arithmetic, at over 21 working days: comfortably more than two sprints. There are ways of reducing risk, of course, the main one being splitting stories, even to the point (as Jeffries suggests) where each story has a single acceptance criterion/condition of satisfaction. But the same principle still applies: there is never an “ideal day”.
To return to that earlier anecdote, “good, old-fashioned coppering” caught a notorious serial killer. It seems to me that, when we’re thinking about how much we can actually deliver in a typical sprint, Agile could learn some lessons from good, old-fashioned project management.