
Tag: Planning

Business Case Clarified – Vision-Schmision Part 5

Motivation:
“If you are going to be incomplete, at least nail your most important business process!”

Ingredients:
Business Need
Any Enterprise Architecture, including but not limited to:
Organization (Business) Goals & Objectives
Performance Metrics
Organizational Process Assets
All Solutions [Constructed, Deployed]
Ten Stakeholder Types (and all their ideas)
All Information Available on Risks
Business Analysis Resources As Needed

“Danger, Will Robinson!” If you are wearing a PM hat, your sense of “scoperiety” is about to be violated, but try to relax. This is business analysis, not project analysis.

Welcome to Part V of our series on Enterprise Analysis. AND these tasks are not “sequential” as presented, but highly iterative. ALSO, I remind myself (my readers need no reminder) that we have a gentleperson’s agreement that we don’t have time to be “complete”. All we are doing is trying to improve on “Go Paperless for $10,000,000” (see Schmision Part 1 for more details).

A good way to get a higher quality “scope” out of the initial vision is to play “what if” with a quality business case. This is because any significant project represents something new to the enterprise “consciousness”. The newness implies risk, uncertainty and many assumptions. Even the constraints are assumptions, especially if the constraints are not analyzed for their impact on potential outcomes. “What if” thinking is one route to making higher quality decisions given uncertainty. * This is especially important if formal feasibility studies will not be done before full project commitment. **

Enterprise Analysis (which includes Requirements Analysis work) led us to Define the Business “Knead”, Assess Capability (and other) Gaps, and Define Solution Approach(es) and Solution Scope(s). We pluralize the solutions on purpose. First, we wish to present as professional BAs, offering choices, pros and cons, not opinions, attitudes and personal preferences. Second (a major point of this series), the “rush to solution” is a key failure mode of large projects. By examining alternatives (see Part IV last month), we begin to understand the true impacts of the solution choices, and the tradeoffs inherent in choosing a specific project scope and set of priorities.

This is great “pre-project” stuff – you know – requirements as its own “pre-project project”. This kind of practice could improve the quality of the Project Management Office’s “triage”. It might also (my favorite) give the next analyst in line a head start on the next analysis steps. There is ALWAYS a next analyst in line, because there are always next analysis steps (pay us now, or pay us later). If no professional analyst is assigned, the end users step into the role with a vengeance. Example analysis by end user experts: “This stinks, can’t use it.”

To minimize the need for such expert analysis, we imagine starting the project on the right foot by presenting a high quality business case instead of a weakly analyzed cost justification. In case you forgot, click here for a glance at the “cost justification” from Schmision Part 1.

Now, let’s compare the “just costs” analysis to a business case that resembles the actual business we are in. Notice the beginnings of (and one of the primary purposes of) traceability.

We start by “cleaning up” (analyzing) the Business’s goals and objectives. We do this by reconciling their relationships with each other and with the problems and the opportunities as understood so far (the “Knead” itself). In return we get guidance on how to organize the business case, better understanding across the high-level requirements, PLUS feasible “summary” potential.

The analysis that follows immediately suggests gaps (once again, with the gaps???). We do this analysis by figuring out that “some of these things belong with the others”. We continue to add gaps that we (all stakeholders) see, and fill them where we know how.

Goals/Objectives/Issues/Opportunities and their relationships:

  1. Increase 2013 Undergraduate enrollment from 37,213 in Fall 2012 to 41,500 for Fall 2013. DO THIS BY:
    1. Increasing applications from around 49,000 to over 55,000 (how??? How many of the new applicants will be qualified? How many will then accept our offer to attend??? Can we )
    2. Reducing lost applications/applicants (see issues 1, 2 below).
    3. Reducing dropouts (and transfers???) from 3723 per semester to 1500 or fewer. This should result in Undergraduate enrollment levels of at least 38,500 at end of Spring 2014 (relate this to goals above???). DO THIS BY:
      1. Improving student satisfaction from 82% (2012 results) to 90% as measured by the 2013 annual “Student Satisfaction” survey to be given 12/1/2013 (see issues 1, 2, 4 below).
      2. Increasing summer school enrollment 10%, from 9833 in 2012 to 10,816+ in 2013 (how???).
      3. Attracting more undergraduate students to stay with us in anticipation of continuing their education within our expanding Graduate programs (see Goal B, below).
      4. Increasing Fall Graduate enrollment from 12,360 in Fall 2012 to 13,500, to be split between academic departments. Goals are as follows:
        1. Law – 200+
        2. Business – 430+
        3. Media / Arts – 120+
        4. Public Safety (new department) – 320+
        5. PhD Programs – 70+
  2. Reduce costs even as business grows. DO THIS BY:
    1. Reducing employee turnover from 10% per year to 5% (???How???).
    2. Freezing hiring at 2012 levels (1017 employees), except for:
      1. One new Dean for the new Public Safety department.
      2. Adding 20 contract faculty for new teaching workload (S
      3. Adding 20 contract faculty to offset expected contract attrition (we assume that contract employees and regular employees have the same impact on paper workloads).
    3. Improving employee productivity by XX%??? (see issues 1, 2, 6 below)
      1. By reducing time spent on paper (information) handling
      2. ???With improved systems (integrated faculty scheduling, online student registration and class scheduling)???
      3. ???With other system initiatives??? (see issues 4 & 6 below)
    4. Cutting annual document archival costs by 90% for FY 2014 (see issue 5 below).
  3. Improve community relations, as indicated by the annual community feedback session plus a (new) formal survey of the community and trends in feelings and attitudes. DO THIS BY:
    1. Expanding English as a second language outreach classes from 4 to 16.
    2. Improving student satisfaction (??? see above – students are community members too???) (see issues 1, 2, 4 below).

Known problems and opportunities related to the use of paper include, but are still not limited to:

Specific Issues:

  1. Bottlenecks / slowdowns in the student admission process due to sharing the paper application file (see Admit Student process in Appendix A). Our biggest competitor can give a prospective student an admission decision in less than two weeks, while our average is currently 5 weeks (???This average seems low given some of the delays given below – is this our average when everything goes well???). While we do not know how many students we lose because of delay, surveys show that over 30% of our applicants complain about the delays. Reasons given by applicants for not attending included:
    1. Missing financial aid, acceptance by other colleges, family or health issues, inability to provide a complete (Surveys were limited to applicants in our systems. We are not sure if we lose prospects that never applied online, because they knew they were too late???).
  2. Lost or misplaced student transcripts delay financial aid. Estimated at around 250 per year never found (new transcript requested), and around 4000 stuck in inboxes (new transcript requested, original transcript later found – what happens???). There are other reasons that delay financial aid (missing information from students), which we believe is the cause of half of our 3723 student dropouts last semester. Financial aid delays can stretch for months instead of weeks, and always contribute to admissions decision delays.

  3. To hire faculty requires that anywhere from 10-20+ persons and 3+ academic departments (undergraduate, graduate and professional, and any “related” graduate programs???) examine the prospect’s academic transcripts. Confidentiality & privacy considerations discourage or forbid photocopying or e-mailing the transcripts. It can take from 2 to 12 weeks for the official paper transcript to pass from hand to hand, group to group.

  4. Grades are being computed and delivered by faculty on paper, for entry into a grade reporting system by each department (Dean’s Office). When questions arise about a grade, there is no detail to explain how the grade was awarded. Faculty explain that the criteria for grading are explained and written on the board at the beginning of each semester (every faculty member does this???), student complaints to the ombudsman notwithstanding. The student must fill out a form to formally request an explanation from the faculty member. The student ombudsman receives about 200 grade-related complaints every semester. The number of formal forms submitted each year is less than 5. The reasons for the difference are unclear. (There are approximately 37K students enrolled.)

  5. Archival costs total $453,000 per year as of 2013. Almost half of these costs seem out of control due to repeated need to access already archived documents. These documents are often related to students who are taking longer than 4 years to finish their degrees. We need policies and electronic search to reduce this repeated manual paper archive searching.

    General Issues:

  6. Departmental managers recently estimated from their own observations that employees spend anywhere from 10% (executives) to 50% of their time finding, moving and re-filing paper documents in support of their more expert administrative and academic work. These include, but are not limited to (are there estimates of quantities, time impacts per document?):
    1. Health care
    2. Counseling
    3. Housing
    4. Part-time work
    5. Recruiting, hiring, firing, benefits administration (and other HR functions)
    6. Regulatory compliance
    7. Legal work
    8. Grading
    9. Ombudsman cases
    10. Veteran’s educational benefits
    11. Fraternity & student organization oversight
    12. Faculty mentoring and counseling
    13. Preparation and follow-up for management meetings
  7. It is anticipated that more specifics are to be discovered if a decision is made to further analyze and detail requirements for a business case.

=================================

We will use the above reconciliation (highest level business viewpoint) to trace an electronic document based solution alternative (AS-IS vs. TO-BE) that we imagined in Schmision Part 4. When we are done, we will (imagine that we) have enough “dough” to imagine a business case, at last. This usually takes the form of a spreadsheet, not provided here. Let me know if you get the idea. I have mostly limited myself to the CRITICAL elapsed time for applicants in the analysis that follows, but could have added in more level-of-effort and personnel impacts as well.

Admit Student – “Potential Happy” Path(s):

Each step below shows the AS-IS description, followed by the TO-BE new step(s) and estimated impact:

Step 01
  AS-IS: Applicant (potential student) fills out an accurate and complete application package.
  TO-BE: Applicant fills out the package online; the system verifies it is complete and accurate, so the applicant can correct it before submitting – skip to step 13 (this choice seems out of scope for e-docs, yet maximizes impact on goals and objectives).

Step 02
  AS-IS: Applicant mails application package to Admissions.
  TO-BE: Applicant scans & emails package to Admissions – skip to step 9 (in scope for e-docs).

Step 03
  AS-IS: U.S. Post Office provides delivery service to Campus Mail Processing twice per day.
  TO-BE: Eliminated – saves 1-7 days elapsed time in most cases.

Step 04
  AS-IS: Campus Mail Processing sorts application packages into Admissions delivery box.
  TO-BE: Eliminated – saves 1-2 days elapsed time in most cases. Alternatively, Campus Mail Processing scans and emails the package to Admissions – skip to step 9 (in scope for e-docs).

Step 05
  AS-IS: Admissions sends a Mail Pickup Person to Campus Mail Processing twice per day to:
    • Pick up Admissions mail and bring it to the Admissions department.
    • Deliver Evaluated Applications for copying and distribution to multiple departments.
  TO-BE: Eliminated – saves 1/4-day elapsed time on average.
    Elapsed time lost (10 mins to 1/2 day???)
    Level of effort (20 mins to 40 mins???)

Step 06
  AS-IS: Admissions time stamps all received mail.
  TO-BE: Eliminated.
    Level of effort (1 hour to 10 hours each day???)

Step 07
  AS-IS: Admissions Clerk distributes mail to Application Evaluators.
  TO-BE: Eliminated.
    Elapsed time lost (10 mins to 1/2 day???)
    Level of effort (1/4 day to 1/2 day???)

Step 08
  AS-IS: Application Evaluator time stamps all received application packages before working on any particular package.
  TO-BE: Eliminated.
    Elapsed time lost (10 mins to 1/2 day???)
    Level of effort (10 mins to 3 hrs???)

Step 09
  AS-IS: Application Evaluator selects the application package with the oldest time stamp.
  TO-BE: Time creating and maintaining an accurate paper queue is estimated at approximately 5-10% of Application Evaluator’s time.

Step 10
  AS-IS: Application Evaluator determines that the application package is complete.
  TO-BE: Elapsed time lost (10 mins to 1 hr???)
    Level of effort (10 to 15 mins???)

Step 11
  AS-IS: Application Evaluator determines that the application package is accurate.
  TO-BE: Elapsed time lost (1 hr to 10 hrs???)
    Level of effort (10 mins to 3 hrs???)

Step 12
  AS-IS: Application Evaluator passes the application package to the Campus Mail Service inbox for twice-daily delivery.
  TO-BE: Time and effort guesstimated above.

Step 13
  AS-IS: Campus Mail Service copies application package for distribution to multiple departments:
    • Academic Advisor’s Office (Verify Academic Qualifications)
    • Financial Aid Office (Coordinate Financial Aid)
    • Veteran’s Aid Office (Coordinate Veteran Aid)
    • Prospect Marketing Office (Coordinate Prospective Student Events)
    • Admissions Office (Communicate Acceptance to Prospect)
    • Housing, Cafeteria, Infrastructure (Prepare for Student Arrival)
    • Student Orientation Office (Prepare for Student Onboarding)
  TO-BE: Elimination of manual error and delays before delivery to academic departments may save from 4 to 24 days elapsed time. Electronic copies will ease sharing of data with agencies and may eliminate as much as 1 day to 3 weeks elapsed time (Veteran’s takes 3 weeks to scan any unscanned financial aid apps).

Student Onboarding
WHEW! I AM REALLY OUT OF TIME. ESTIMATES BELOW OFFER BENEFITS FROM REDUCED ELAPSED TIME:
  Prospects lost due to clumsy Administrative processes. Estimated by Academics as 15-25%??? of the lost (not all) prospects.
  Prospects lost due to slow Academic processes. Estimated by Administrators as 15-25%??? of the lost (not all) prospects.
  ETC. 🙂
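As a sketch of the spreadsheet arithmetic implied above, the hedged elapsed-time guesses for some of the eliminated steps can be rolled up into a best-case/worst-case range. This is illustrative only; the (low, high) day values restate the rough “???” estimates from the step table, not measurements:

```python
# Hypothetical roll-up of elapsed-time savings for the "Admit Student" path.
# Each (low, high) range is in days and restates the rough estimates above.
savings_days = {
    "03 USPS delivery eliminated": (1.0, 7.0),              # 1-7 days
    "04 mail sorting eliminated": (1.0, 2.0),               # 1-2 days
    "05 mail pickup run eliminated": (0.007, 0.5),          # 10 mins to 1/2 day
    "07 clerk distribution eliminated": (0.007, 0.5),       # 10 mins to 1/2 day
    "08 evaluator time stamping eliminated": (0.007, 0.5),  # 10 mins to 1/2 day
    "13 manual copy/distribution replaced": (4.0, 24.0),    # 4-24 days
}

low_total = sum(low for low, high in savings_days.values())
high_total = sum(high for low, high in savings_days.values())
print(f"Elapsed time saved per application: {low_total:.1f} to {high_total:.1f} days")
```

Even this crude roll-up makes the business-case point: the paper-handling steps alone could account for days to weeks of the five-week average decision time mentioned in issue 1.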

NEXT MONTH:

Something easier and more fun, we hope 🙂

==============================================================

*Bayes’ Theorem is another, and it allows us to measure our beliefs against emerging evidence. Unfortunately Bayes’ Theorem is much easier to apply to search and rescue missions than to complex business projects, which don’t even allow building robust business cases most of the time. The “retrospective” (lessons learned) in Agile is a mild attempt to apply “evidence” to beliefs. We set some priorities (prior beliefs), do some work (experiment and gather data), we learn what happens (“facts”, “evidence” and opinion), and then we update our beliefs (new priorities).
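To make the update mechanics concrete, here is a minimal, illustrative sketch of a single Bayesian belief update. The scenario and all the probabilities are invented for illustration, not drawn from any real retrospective:

```python
# Illustrative only: revising confidence in a backlog priority after a
# retrospective, using Bayes' theorem. All numbers are invented.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(belief | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior belief: 60% confident the current priority is the right one.
belief = 0.60

# The sprint produced good outcomes, which we judge more likely if the
# priority was right (0.8) than if it was wrong (0.3).
belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
print(round(belief, 3))  # 0.8 -- the supporting evidence strengthens the belief
```

The same function applied after each sprint is exactly the prior-beliefs-to-new-priorities loop described above.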

** The full project becomes the feasibility study, with the usual results. One way to make every project work better is to do small pilots (you knew that) before full speed ahead (ready, aim, fire, instead of fire, ready, aim). This makes the “feasibility” attribute of the full project more manageable and less likely to cost the moon.

*** Uh huh 🙂

Don’t forget to leave your comments below.

TARDIS – Time and Relative Dimensions in Scoping

One of our biggest concerns as business analysis practitioners is scope: how do we define scope, how do we guarantee that we can deliver everything that satisfies the scope, and how do we ensure that we don’t stray outside the scope?

In other words, how do we place our hand upon the BABOK and swear that the analysis we give will be the scope, the whole scope, and nothing but the scope?

How wide is it?

When most people think about scope, they think about the boundaries of an area or domain, constraining the breadth of what we look at so that we focus more sharply on those processes and activities that require change.

This definition is inherently two-dimensional.

Imagine an episode of Doctor Who, where he and his trusty sidekick have to work out how they can get across a puddle. If the Doctor thought about problems in just two dimensions, he might think he can solve the problem of crossing the puddle by looking only at how long and wide it is; because this tells him whether he can step or jump across it.

This perspective is probably sufficient early in a project where we want to define the business scope, but the real world is not just two-dimensional.

How deep is it?

Within and below the business scope is the solution scope. Most problems I have seen with scope are with the solution scope; that is, how simple or complex will the solution be? Are we happy with manual processes alone, or with some minimal technical support of the main scenario, or do we need full automation of every possible scenario?

This takes the definition into the third dimension, i.e. how deep it is.

So now Doctor Who could use his sonic screwdriver to gauge how deep the puddle is, which tells him at least whether he can easily walk across it, wade through it up to his waist, or whether he’ll need a boat.

The deeper it is the more expensive or time-consuming it will be; but we need to know that before we start, right? Too many projects are approved to go before the solution options have been considered, and this is one of the most common reasons for cost and time blow-out.

The only thing constant is change

And of course, one of the biggest issues for our projects is that they assume the scope is fixed at the outset and work to deliver that, only discovering toward the end that the scope has shifted or grown so that they will struggle to get sign-off or, worse still, the business will sign it off and never use it.

Of course, this is about the state of something over time, and so introduces the fourth dimension: time.

Back to our puddle; if the Doctor looked at it in the morning, then popped off for a few hours, when he came back the puddle may have dried up, run somewhere else, or grown in area or depth if the rain continued. He cannot then hope to get across it the same way, and if it’s gone completely he doesn’t need to bother at all.

We should never assume the scope stays fixed; and how can we deal with that?

Denial: Some organisations seek to constrain this by enforcing strict change control, in effect putting brakes on change. While this means the scope stays the same during the life of the project, it doesn’t mean it will be fit for purpose when it’s ready for delivery – so we get cost and time blow-out as we redress that.

Acceptance: Smarter organisations accept that scope will change; they baseline the business scope, and work in smaller chunks to elaborate the solution scope on an ‘as needed’ basis. They can continuously check that the scope is still sound, and if it has changed, avoid doing work that will be unnecessary. Although this could end up with only delivering 80% of the original solution scope, the project can still end on time and within budget … or they can choose to continue if that final 20% is really needed.

At the end of the episode, as Doctor Who vanishes off on his next adventure, we’re faced with what’s left and have to ask:

  • What has been your experience of problems with scope?
  • How does your organisation seek to cope with changes in scope?
  • Are there any other approaches that you have seen work?
  • And, most importantly, what was so important about the puddle and couldn’t the Doctor just have used the TARDIS to reappear on the other side?


The Science of Business Analysis

This is the second of a four-part series exploring whether ‘Business Analysis’ is art or science. In the first article, business analyst Greg Kulander discussed how his career has taught him both the science and art of Business Analysis. This week we’ll look at the case for Business Analysis as Science.

“Is Business Analysis art or science?”

The Merriam-Webster dictionary defines art as “a skill acquired by experience, study, or observation” and science as “the state of knowing: knowledge as distinguished from ignorance or misunderstanding.” Dictionary.com further defines science as “a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws.”

Business Analysis in the 80s

When I first started out in IT back in the early ’80s, we didn’t have any business analysts at the company I worked for. However, there was still a need to understand what the business wanted in order to develop the right products / solutions for our clients. I fell into the ‘art’ of business analysis mostly because I was the developer who wanted to know why we were doing what we were doing for the project. And, I was the only developer who was eager to talk to the users to find out that information. I definitely acquired my business analysis skills by experience, learning what worked and what didn’t the hard way.

However, while the field of Business Analysis may have been more art than science in the past, over the past decade it has evolved into a science. Business Analysis now has a defined knowledge base, defined procedures and tools for accomplishing the business analysis tasks, and new, defined ways of measuring both an organization and an individual Business Analyst’s (BA’s) competency levels.

The IIBA is Established in 2003

A large part of the evolution into science was the emergence of a formal association dedicated to the business analysis profession – the International Institute of Business Analysis (IIBA), which was established in 2003. The IIBA organization created:

  • The Business Analysis Body of Knowledge (BABOK) which formalizes the knowledge of the profession, as defined by practitioners in the field.
  • Tools, such as the IIBA Business Analysis Competency Model and the Self-assessment Tool, which can be used to evaluate and measure the effectiveness of an organization’s business analysis practices and the competency level of their Business Analysts.
  • The independent, internationally recognized certification programs – the CBAP (Certified Business Analysis Professional) and the CCBA (Certification of Competency in Business Analysis) – which evaluate and test the experience level and knowledge of individuals in the business analysis field.

With the advent of the IIBA, the business analysis profession entered the ‘state of knowing.’

Tools and Templates

In addition to the industry standards established by the IIBA, there are other signs that the business analysis profession has become a science. Most companies today either have or are developing a Business Analysis process as part of their system/product life cycle. They have a defined process for initiating projects, eliciting and analyzing requirements, managing requirements and change control, and evaluating the quality of requirements. Companies often have established metrics for measuring the effectiveness of their business analysis process and practitioners.

There are also well-defined requirements templates that can be used to capture business, functional, technical and non-functional requirements. While these templates can vary from company to company, they are being defined and followed by most organizations. Business analysts can easily get example templates via the Internet or from professional business analysis books.

Finally, there are now a number of commercial tools available to aid BAs in their job:

  • Prototyping tools (iRise, Serena Prototype Composer, Axure RP, Balsamiq, etc.)
  • Requirements Management tools (Requisite Pro, DOORS, TestTrack RM, etc.)
  • Requirements Definition tools (UML, Rational Composer, etc.)
  • Business Process Management tools (Appian, BEA Systems, IBM, etc.)
  • Agile requirements tools (Mingle, Rally, etc.)

When the Business Analysis profession first began to emerge, it took a lot of creativity and “art” on behalf of the practitioners to understand requirements and the Business Analysis role. We all had to learn a skill that did not have a defined knowledge base, prescribed approaches or tools to help us practitioners. But, today, the field has well-defined best practices, systematic ways of gathering and analyzing business needs, and recognized ways of measuring practitioners’ competency levels.

The Business Analysis field is now a recognized science.


A Real-World Example of the State Transition Diagram

Whenever a workflow of some kind is being converted to an electronic process, you’re going to be looking at creating a status-driven process. That’s because the object passing through the workflow needs to receive status changes to indicate that the previous step is completed, and to trigger the next step of the process. Recently I was involved in a project converting a thesis submission process from a paper-based process to an electronic process and it serves to illustrate the usefulness of the State Transition Diagram for this type of project.

In this example I used the Bizagi Process Modeler to create my State Transition Diagram. I prefer it over the other tools I use because it’s free, the diagrams are more aesthetically pleasing than the other tools I’ve used, and it is quick and easy to use.

To give you some background about the project that the State Transition Diagram presented below is based on: the submission of Master’s and Ph.D. theses in my institution is not overly complicated, but there are many different forms that must be completed by a variety of people, including students, administrative staff, and faculty. I needed an easier way of explaining all the steps of the process and what forms were being released at each step. Enter the State Transition Diagram.

Something else we had to make a decision on with this project was where to start the electronic process. The thesis submission process is lengthy, and we had to decide whether to include the part of the process – before it is officially deposited at the Graduate Studies office – when the thesis is still under development. To reduce the scope of the project, we decided to start the electronic process after the thesis has been developed and the student is ready to officially submit it to Graduate Studies.

[State Transition Diagram: the electronic thesis submission process (image not reproduced here)]

This diagram demonstrates the status progression of the submitted thesis. Starting from the left, we see the first status is “Pre-Upload”. This is before the thesis is uploaded by the student. The circle following that status is the “Thesis Uploaded” event. Any circle like this one with a double border around it is known as an event. This first event indicates that the student has uploaded their thesis via the new electronic process. From this event we see a dotted line leading up to a gray rectangle, known as an annotation. The items listed in this rectangle indicate the forms that the student must complete as they upload their thesis. Once this event occurs, the status of the thesis becomes “Thesis Package Received”, which triggers the release of the Thesis Defence Authorization Forms. These are forms that go to the members of the defence committee who must decide whether the thesis is suitable to defend, hence the decision diamond that follows this status: “Defence Authorized?”. If the decision is no, the status of the thesis becomes “Defence Denied”.

If yes, we next see the “Notice of Defence Approved” event. This is where a notice is posted in some locations around the university listing the members of the defence committee, and the date, time, and location of the defence. Once this event occurs, the status of the thesis becomes “Ready for Defence”, and from this status we see another annotation leading upwards listing some more forms that are released with this status change.

Following this status is the “Defence” event. This is where the student defends their thesis before the defence committee. At the defence the committee will determine whether revisions are required or not, and thus we see another decision diamond for “Revisions?”. If the decision is no, the status proceeds to “Thesis Accepted”. If yes, the status becomes “Requires Major/Minor Revisions”, and in the annotation above this status we see the Thesis Revisions Approval Form is released.
Following the “Requires Major/Minor Revisions” status is the “Revisions Uploaded” event. This is where the student has uploaded their modified thesis according to the revisions that were provided to the student at the defence. At this point the decision must be made as to whether to accept the revisions or not, and so we see the decision diamond “Accepted?”. If no, the flow goes back to the “Requires Major/Minor Revisions” status. If yes, it proceeds to the “Thesis Accepted” status. Going down from this status is another annotation showing the final forms that the student must complete before the thesis is electronically packaged and sent to the Library for publication. That’s why the next event we see is “Thesis is sent to the Library”, followed by the final status “Transferred to Library”. This is the end of the electronic thesis submission process.
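To show how such a diagram maps onto an implementation, here is a minimal Python sketch of the same workflow as a state transition table. The status and event names follow the diagram narrated above; the decision diamonds are folded into event names, and the code structure itself is my own illustration, not part of the Bizagi model:

```python
# Thesis submission workflow as a state transition table.
# Keys are (current_status, event) pairs; values are the next status.
TRANSITIONS = {
    ("Pre-Upload", "Thesis Uploaded"): "Thesis Package Received",
    ("Thesis Package Received", "Defence Authorized? No"): "Defence Denied",
    ("Thesis Package Received", "Notice of Defence Approved"): "Ready for Defence",
    ("Ready for Defence", "Defence: no revisions"): "Thesis Accepted",
    ("Ready for Defence", "Defence: revisions required"): "Requires Major/Minor Revisions",
    ("Requires Major/Minor Revisions", "Revisions Uploaded: accepted"): "Thesis Accepted",
    ("Requires Major/Minor Revisions", "Revisions Uploaded: rejected"): "Requires Major/Minor Revisions",
    ("Thesis Accepted", "Thesis is sent to the Library"): "Transferred to Library",
}

def next_status(status, event):
    """Apply an event to the current status, rejecting invalid transitions."""
    try:
        return TRANSITIONS[(status, event)]
    except KeyError:
        raise ValueError(f"Event {event!r} is not valid in status {status!r}")

# Walk one happy path through the workflow.
status = "Pre-Upload"
for event in ["Thesis Uploaded", "Notice of Defence Approved",
              "Defence: no revisions", "Thesis is sent to the Library"]:
    status = next_status(status, event)
print(status)  # Transferred to Library
```

In a real system, each status change would also trigger the release of the forms shown in the diagram’s annotations.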

Explaining this workflow to my stakeholders with the help of this State Transition Diagram that they could follow along with was much easier and was more meaningful than reading just a written explanation or explaining it verbally. It may not be a useful tool in all situations, but for workflows, I find it indispensable.


Use Case Goals, Scenarios and Flows

Introduction

This article is inspired by two more of Alistair Cockburn’s “gold nuggets” found in his book Writing Effective Use Cases [1].  The first nugget is the idea of a goals hierarchy, which represents, among other things, how the goals of a system’s use cases are derived from and trace back to larger goals of the system’s users.  The second nugget is his “striped trousers” explanation of use case scenarios, which are sequences of use case steps that represent different paths through a use case in pursuit of the use case’s goal.  This article also demonstrates a convention for organizing a use case’s steps based on the established approach of writing use case steps nonredundantly by using flows of different kinds.  It even introduces a new kind of flow and advocates its use over that of the extension use case in specific circumstances.

Goal hierarchies

Cockburn provides a mental model of how people and organizations function on the basis of a hierarchy of goals, where each goal, except at the highest level, belongs to one or more higher-level goals, and each goal, except at the lowest level, breaks down into multiple lower-level goals.

  • Moving down the hierarchy answers “How?” to show how a certain goal can be achieved.
  • Moving up the hierarchy answers “Why?” and provides a rationale for why a certain goal exists.
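Such a hierarchy is naturally a tree with parent links, where walking down answers “How?” and walking up answers “Why?”. A small illustrative sketch (the goal names here are invented examples, not Cockburn’s):

```python
class Goal:
    """A node in a goals hierarchy with 'why' (parent) and 'how' (children) links."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # moving up answers "Why?"
        self.subgoals = []     # moving down answers "How?"
        if parent:
            parent.subgoals.append(self)

    def why(self):
        """Rationale: the higher-level goal this goal serves, if any."""
        return self.parent.name if self.parent else None

    def how(self):
        """Means: the lower-level goals that achieve this goal."""
        return [g.name for g in self.subgoals]

# Invented example hierarchy, top to bottom:
grow = Goal("Increase enrollment")
apply_online = Goal("Let applicants apply online", parent=grow)
upload = Goal("Upload application documents", parent=apply_online)

print(upload.why())  # Let applicants apply online
print(grow.how())    # ['Let applicants apply online']
```

A use case’s goal sits in the middle of such a tree: its “why” is a user goal, and its “how” is its own steps.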

These goals are pursued by different role players (people), organizations (groups of people) and systems (automated resources) inside and outside an organization.
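As a rough illustration, the two navigation directions can be sketched as a tiny data structure.  This is only a sketch; the goal names below are invented for illustration and are not taken from [1]:

```python
# A minimal sketch of a goals hierarchy; all goal names are invented.
# Moving down the hierarchy answers "How?"; moving up answers "Why?".
hierarchy = {
    "Increase repeat business": ["Reward loyal customers"],
    "Reward loyal customers": ["Apply discount at checkout"],
    "Apply discount at checkout": [],  # a supporting system's use case goal
}

def how(goal):
    # Moving down: the lower-level goals that achieve this goal.
    return hierarchy.get(goal, [])

def why(goal):
    # Moving up: the higher-level goal(s) this goal serves.
    return [g for g, subgoals in hierarchy.items() if goal in subgoals]
```

Here `why("Apply discount at checkout")` points back up to "Reward loyal customers", mirroring how a use case's goal traces back to a user's larger goal.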

Use case goals

A use case arises when a person (or a system) with an overall goal needs to interact with a supporting system in order to achieve a mini goal as a step towards reaching the overall goal.  The use case represents the supporting system’s ability to deliver the user’s mini goal, and this is the use case’s overall goal.  This is reflected most clearly by naming the use case after that goal.  The use case’s specification details how the use case’s overall goal is broken down into its own mini goals, represented by steps and groups of steps.

Thus, the user’s overall goal, the user’s mini goal/use case’s overall goal, and the use case’s mini goals are seamlessly incorporated into an organization’s goals hierarchy.

The beauty of this perspective is that:

  • System use cases arise from and trace back to the goals of a system’s users, rather than a use case modeler’s imagination.
  • The idea of a goals hierarchy that integrates the goals of groups of people with the goals of people, the goals of people with the goals of supporting systems, and the goals of supporting systems with the goals of their internal steps provides a single paradigm for how an organization functions across all of its levels in both manual and automated ways.

Use case scenarios

A use case scenario is a sequence of steps that represents a single use case execution (a scenario is a possible path through a use case specification).

Cockburn presents a diagram (Figure 2.2 in [1]), whose originality and quirkiness are only exceeded by its effectiveness. He calls it the “striped trousers” view of a use case and its scenarios. In it:

  • The belt represents “the [use case] goal that holds all the scenarios together.”
  • One leg “is for the scenarios that end in success.”
  • The other leg “is for the scenarios that end in failure.”
  • “Each stripe corresponds to a scenario.”
  • “The first stripe on the success leg [is] the main success scenario.”
  • “The leg’s remaining stripes are [all] other scenarios that ultimately end in success – some through alternate success paths and some after recovering from an intermediate failure.”
  • The stripes on the failure leg are scenarios that encounter one or more intermediate failures, possibly recover from some, but always fail eventually.

As Cockburn says, this model “is useful for keeping in mind that every use case has two exits, that the [initiating] actor’s goal binds all the scenarios, and that every scenario is a simple description of the [use case] succeeding or failing.”

This strikes me as one of the greatest contributions to use case modeling.
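The two-exit property is easy to make concrete.  The sketch below treats each scenario as one “stripe” and sorts the stripes onto the success and failure legs; the “Place Order” use case and its steps are my own invented example, not taken from [1]:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    steps: list   # the sequence of steps that makes up this "stripe"
    outcome: str  # every scenario ends in exactly one of two exits

# The use case goal ("Place Order") is the belt that binds all scenarios.
scenarios = [
    Scenario("main success",
             ["select items", "pay by card", "confirm order"], "success"),
    Scenario("recovered payment",
             ["select items", "card declined", "pay by cash", "confirm order"], "success"),
    Scenario("abandoned order",
             ["select items", "card declined", "cancel order"], "failure"),
]

success_leg = [s for s in scenarios if s.outcome == "success"]
failure_leg = [s for s in scenarios if s.outcome == "failure"]

# Every scenario lands on exactly one leg: the use case's two exits.
assert len(success_leg) + len(failure_leg) == len(scenarios)
```

Note how “recovered payment” recovers from an intermediate failure yet still ends on the success leg, exactly as the striped-trousers model describes.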

Use case flows

Cockburn points out, “We won’t actually write every scenario separately from top to bottom.  That is a poor strategy because it is tedious, redundant, and hard to maintain.”

Instead, a use case can be written nonredundantly, starting with an unconditional main flow that is subsequently enhanced with conditional additional flows of different kinds, where a flow is a sequence of steps.  The main flow equates to “the main success scenario” and each additional flow represents a portion of one or more of the remaining scenarios.

The UML doesn’t deal with flows.  Cockburn only refers to “the main success scenario” (main flow) and “extensions” (additional flows).  Refining this view, the following sections present five different kinds of flow by providing a definition, diagram and simplified example for each.  The purpose of the examples is to illustrate a clear, consistent and scalable writing convention for representing a use case’s structure (they’re not meant as full-fledged use case specifications).

The main flow

Definition: An unconditional set of steps that describe how the use case goal can be achieved and all related stakeholder interests can be satisfied.  Each step is essential to achieving the use case goal (no step can be skipped), and each step succeeds.

Cockburn calls this “the main success scenario,” and others use the terms “the happy path,” “the basic flow” and “the normal course of events.”  My preference is for “the main flow” because it’s short and ties in well with the names of the other kinds of flow.

The next four sections outline what collectively can be called additional flows.

An alternative flow

Definition: A conditional set of steps that are an alternative to one or more steps in another flow (the alternative flow is performed instead of the other step or steps), after which the use case continues to pursue its goal.

A recovery flow

Definition: A conditional set of steps that are a response to the failure of a step in another flow (the recovery flow is performed after the other step), after which the use case continues to pursue its goal.

An exception flow

Definition: A conditional set of steps that are a response to the failure of a step in another flow (the exception flow is performed after the other step), after which the use case abandons the pursuit of its goal.

An option flow

Definition: A conditional set of steps that represent a nonessential option available between two steps in another flow (the option flow is performed between the two steps), after which the use case continues with the second of those steps.

The option flow’s main purpose is to represent behavior that is nonessential to achieving the use case goal (the behavior can be skipped) but that still contributes to that goal in some way (the behavior only has meaning in the context of that goal).

An “option” is like an optional feature in a new car (e.g., sun roof).  Choosing the option contributes to the goal (order a car), but you can order a car with or without the option (a sun roof is not essential).  Continuing the analogy for another term in this article, a car’s possible engine sizes beyond the standard size are “alternatives” to the standard size (but having an engine of some size is essential).
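To tie the five kinds of flow together, here is a rough sketch of a nonredundantly written use case.  The hypothetical “Place Order” example, its step wording, and the dictionary layout are mine for illustration only; they are not a convention from [1] or from this article’s figures:

```python
# A hypothetical "Place Order" use case written nonredundantly:
# one unconditional main flow plus conditional additional flows.
use_case = {
    "goal": "Place Order",
    "main_flow": [
        "Customer selects items",   # an option flow may occur after this step
        "Customer pays by card",    # alternative/recovery/exception flows attach here
        "System confirms the order",
    ],
    "alternative_flows": {
        # Performed instead of a main-flow step; the goal is still pursued.
        "Pay by invoice": {"instead_of": "Customer pays by card",
                           "steps": ["Customer requests invoicing"]},
    },
    "recovery_flows": {
        # Performed after a step fails; the goal is still pursued.
        "Card declined": {"after_failure_of": "Customer pays by card",
                          "steps": ["Customer pays with another card"]},
    },
    "exception_flows": {
        # Performed after a step fails; the goal is abandoned.
        "Payment service down": {"after_failure_of": "Customer pays by card",
                                 "steps": ["System cancels the order"]},
    },
    "option_flows": {
        # Performed between two steps; nonessential but goal-related.
        "Coupon": {"between": ("Customer selects items", "Customer pays by card"),
                   "steps": ["Customer applies a coupon"]},
    },
}
```

Notice that the additional flows reference main-flow steps by name rather than by number, so inserting or reordering steps doesn’t corrupt the references.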

On point markers

Before taking a closer look at the option flow, here are some comments about branch point markers and rejoin point markers.

Advantages.

  • There is no need to reference actual step numbers (the convention used in [1]); these change when steps are added, deleted or reordered, and if we forget to update their references then the use case becomes corrupted.
  • A flow’s branch points and rejoin points are immediately obvious from the flow’s own description, rather than only from the descriptions of the additional flows that branch off the flow (the convention used in [1]).

Placement of branch point markers. 

  • As the examples show, a branch point marker for an alternative, recovery or exception flow is attached to a main flow step, rather than placed before or after the step; this keeps markers from visually dominating the main flow.
  • This doesn’t apply to the option flow, which by its very nature occurs between two steps.

More on the option flow

What an option flow is not

An option flow is not:

  • Part of the main flow, because an option flow is conditional and nonessential to achieving the use case goal, which is the very opposite of the main flow.
  • An alternative flow, because an option flow isn’t an alternative to a step or steps.
  • A recovery flow, because an option flow is not a response to a failed step.
  • An exception flow, because an option flow is not a response to a failed step and doesn’t mean the use case failed.

Use an extension use case instead?

You may wonder whether nonessential goal-related behavior can be represented as an extension use case instead, given the UML’s extension use case description in [2].  To help answer that, the next table lists key UML statements about the extension use case and their applicability to the option flow.

At first blush, the above, and in particular its first two statements, suggests it’s reasonable to represent nonessential goal-related behavior as an extension use case.  However, for a conclusive answer, we turn to the extension use case interpretation outlined in [3].

When to use an option flow and when to use an extension use case

Based on [3], the next table repeats when behavior can be modeled as an extension use case and gives answers for the behavior of the earlier Coupon option flow.

Regarding the False answers for the Coupon option flow behavior:

  1. The third answer is soft: the order could be declared public data, which would change the answer to True.
  2. The first answer is hard: it is and remains a fact that the Coupon option flow behavior contributes to the use case goal, so the answer is always False.

This leads to the following conclusion:

  • Nonessential behavior that in some way contributes to the use case goal must always be modeled as an option flow and never as an extension use case.
  • Nonessential behavior that doesn’t contribute to the use case goal in any way may be modeled as an extension use case (when all three questions are answered with True) or as an option flow (when the third question is answered with False).

Other differences between the option flow and the extension use case

Key to the above conclusion is whether nonessential behavior contributes in some way to a use case’s goal or not.  When it does, using an option flow has undeniable advantages over even considering an extension use case.  These advantages aren’t examined in [3], but they do reinforce the above conclusion, as shown in the following table.

Thus, the extension use case turns out to be an inappropriate choice for modeling nonessential goal-related behavior; in contrast, the option flow is ideal for this.

Benefits of using the option flow

Ease of producing use case models: 

  • Only need to model one use case for a given goal, not two or more (prevents “use case bloat”).
  • No need to use the special extension use case construct, its involved UML diagramming convention and a custom writing convention; in contrast, writing an option flow and its branch and rejoin points follows the convention used for all additional flows.

Ease of consuming use case models:

  • It’s easier to consume one use case for a given goal than two or more (most consumers will agree with this).
  • A use case’s postconditions and scenarios are all in one place (great for all consumers, but especially for testers).

In conclusion

It is my hope that this article will contribute to crisp, clear and consistent use case models (including diagrams and specifications), benefiting both their producers and consumers, as well as the organizations that employ them.

References

[1] Alistair Cockburn, Writing Effective Use Cases, 12th Printing, November 2004.

[2] Object Management Group (OMG), OMG Unified Modeling Language™ (OMG UML), Superstructure, Version 2.4.1, Section 16.3.3, Description.

[3] Willem Van Galen, Excavating the extension use case, 10 July 2012 (Part 1) and 24 July 2012 (Part 2).