
Tag: Business Analysis

Get Off the Documentation Hamster Wheel

In a recent BA Times article, I suggested that most teams spend too much time on documentation.

I even boldly proclaimed: “When planning, elicitation, and analysis are done well, documentation becomes simple and speedy.” I think most people agree in theory. They are hungry to reduce documentation and speed up their requirements process, but they keep on documenting. They stay in the hamster wheel and keep writing and reviewing and updating their documents over and over again.


Related Article: Are Business Analysis Documents Becoming the Dumping Ground?

Reducing documentation is a real struggle for individuals, teams, and organizations. They want to jump off the wheel, but they don’t know HOW! They ask me:

• What should I be documenting? What should I remove from my documentation?
• How do I know when it’s good enough?
• What does lightweight documentation mean?
• How will my developers get the code right if the details are not in my specs?
• How do we know if all of the requirements have been met if the documentation does not include detailed requirements?

Or they point fingers:

• The PMO/BACoE/CIO makes me fill out all of these templates.
• This is what the stakeholders expect.
• Audit requires all of these details.
• Legal wants this paperwork.
• The local/state/federal/international/alien planet government needs these documents.

Two New Mindsets

That hamster wheel is going to kill you (or more likely, stunt the growth potential of your organization)! Solving your document dilemma requires two significant changes in mindset. First, teams need to let go of “one size fits all” and replace it with “adapt or die.”

Every project has unique documentation needs. It’s ineffective and inefficient to apply the same approach to every project—teams need to adapt. Would you document a pizza delivery app the same way you document a life-saving medical device that will be implanted in a human body? No. Even within an organization, the documentation approach should vary for internal vs. external users, new products vs. upgrades, bug fixes vs. major releases, process-based vs. system-based projects, and so on.

While it’s important to adapt your approach, great requirements can be consistent! Consistent in explaining (without vague and mushy words) who the users are and what goal they are trying to achieve. Consistent in spelling out which context, data, rules, and quality expectations are in play to create value for the end user and customer.

That leads us directly to our second (and most important!) mindset—teams need to “Let Value Be the Judge.” Instead of pointing fingers and passively accepting status quo, VALUE should be the judge of every proposed piece of documentation.

More is not better! We should identify the right requirement set by asking: Does this provide value? What is the thinnest, lightest version of documentation we can use to deliver that value? Will these requirements lead to a solution that over- or under-delivers on value to the customer?

5 Factors to Evaluate Documentation

Consider these factors to evaluate each piece of documentation:

• User: Think about who will be using the documentation and how they will use it. What level of detail do they really need? Discuss documentation reductions with users and experiment.
• Lifespan: Consider how long the information will be used and how long it will remain accurate. Is the lifespan so short that the document provides zero value? Is accuracy so important that the document should be created just in time? Should the format of the documentation change based on the lifespan?
• Cost: Think about who would be willing to pay for this information and why. Estimate how much it will cost to generate the documentation and determine if the cost aligns with the value. Is the process of creating shared understanding of requirements more valuable than the document itself?
• Fear: Explore the possibility that fear (aka Cover Your A**) motivates excessive documentation. Does that fear-based mindset boost solution value, or does it increase time and cost?
• Format: What is the most efficient format to deliver value? Do your requirements really need to be written in a template? For some, yes! For others, no. Do they really need to be entered into a tool? What is driving the template or tool usage: governance or better requirements? Depending on your project type and your team structure, documentation might be post-its on the wall, drawings on a whiteboard, prototypes, notes on a napkin, or even a series of discussions and demonstrations.

Above all, strive to create an environment that allows for constant collaboration and meaningful dialog with developers. Change the mindset that thinks requirements are DONE when we hand them off to our techies. Instead, be in it together from start to finish.

But What About Audit?

I am not suggesting that you ignore or refuse to cooperate with protective entities like legal, audit, best practice (Center of Excellence/PMO) and regulatory. Instead, I encourage ongoing and collaborative conversations about documentation. Understand what they need and why. Work together to determine the thinnest/lightest version of documentation to meet their needs.

Are you ready to jump off the documentation hamster wheel? Instead of spending all your time updating requirements and managing sign-off, focus on helping your team think strategically about the value you are providing to end users and the organization. Details fall into place with minimal documentation when teams collaborate continuously. Conversations rooted in value build shared understanding, which alleviates the need for excessive documentation.


What Is the Lifecycle of a Requirement?

In a traditional IT environment, requirements are created at the beginning, handed down the process, and validated at the end. In an Agile IT environment, requirements represent working software and are created, refined, and revised in a circular process.

So, what does that requirement lifecycle look like in an Agile environment?

Related Article: User Stories & Mousetraps: A Lifecycle of Conversations


1) Build the Right Thing

The first step is to identify why we are doing something. Stakeholders come to us with their identified problems. However, it is up to the business analyst to probe for the root cause, the business value desired, and the innovative solution.

For example, a homeowner requests new hardwood floors so the floors look the way they did the day the house was new. We need to know whether the customer wants new floors for $12,000 or wants to refinish the existing floors for $4,000.

Second, we determine who is using our product. We want to know who our audience is, what matters to them, and how we will get their feedback. We can use techniques such as story mapping.

Third, we turn key features into requirements that deliver customer value. Once we understand the Why and the Who, we align our feature requirements with the value they deliver. In this way, we ensure all feature development delivers value, and we reduce the likelihood of waste.

2) Build the Thing Right

Once we’ve determined the right thing, we begin the process of building the thing right. The requirement lifecycle continues by refining each requirement until it is as independent, small, and testable as possible. The example below demonstrates the refinement process.

• Vertically slice your requirements to create stand-alone, customer-value-driven requirements – not architecture-oriented, stack-oriented, or data-oriented ones.
  Start: I need a form which collects the number of family members for one unit.
  New: I need to calculate the total number of family members based on the number of adults and dependents.
• Progressively elaborate further using INVEST.
  New: I need to collect the number of adults within a family.
  New: I need to collect the number of dependents within a family.
• Break down these requirements into user stories.
  For the total number of adults: As a client manager, I want to sum the number of adults based on date of birth in a family so that I can report accurate information to funding organizations.
• Add objective, measurable, testable acceptance criteria that show when the story is satisfied.
  Acceptance criteria:
  – Must be a whole, positive number
  – Must be larger than zero
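
To make acceptance criteria like these objectively testable, here is a minimal sketch of how they might be expressed as an automated check. The count_adults function and the 18-year adulthood cutoff are assumptions for illustration; neither is specified in the original story.

    from datetime import date

    ADULT_AGE = 18  # assumed cutoff; the story above does not define adulthood

    def count_adults(birth_dates, as_of):
        """Count family members who are adults as of a given date."""
        def age(dob):
            return as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))
        return sum(1 for dob in birth_dates if age(dob) >= ADULT_AGE)

    def test_total_adults_meets_acceptance_criteria():
        family = [date(1980, 5, 1), date(1975, 2, 14), date(2015, 9, 30)]
        total = count_adults(family, as_of=date(2016, 1, 1))
        assert isinstance(total, int)  # must be a whole number
        assert total > 0               # must be larger than zero
        assert total == 2              # only the two adults are counted

Checks like these keep “done” unambiguous for the analyst, the developer, and the tester.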

3) Gather Feedback

The lifecycle leads you through the process back to the beginning. You need to collect customer feedback once you’ve implemented an independent, small, testable feature.

At this stage, measure how your customers or clients use the features. Compare results with your goals and objectives identified in your Product Vision. Revisit “Build the Right Thing”. Did you get it right? What do you need to change? What do you need to add?

In the Age of Cloud Computing Business Analysts Are More Essential Than Ever

Almost everywhere you go in IT these days, the talk of the town is about cloud: what it is, why it matters, how to get there.

This comes at a time when the market is ruthlessly driving down costs while demand for the delivery of information keeps increasing.

Cloud is emerging as a way of making a business organization more agile, nimble, and efficient so that it can quickly meet business needs.

In this rapidly changing environment, business analysts are becoming vitally important in helping to guide the transformation that is underway. Their role as agents of business change places them in the eye of the storm when it comes to cloud initiatives.

Related Article: Software Solutions: Should I Outsource, Buy, or Develop In-House?

So how does a Business Analyst prepare for cloud computing? First, it is crucial to understand precisely what cloud is. Second, it is important to understand the challenges and pain points caused by a transition to cloud. Only then does it become clear how important a business analyst is for addressing the issues cloud raises.

So what is cloud?

Cloud computing is often simply (and wrongly) defined as a way by which a 3rd party vendor hosts IT infrastructure that runs applications, instead of having the IT infrastructure owned and hosted by the company or organization using it. While sometimes true, this definition is overly simplistic and not always accurate (for example, a company could own a “private cloud” itself rather than using a 3rd party.)

For a more complete definition, we can look to the National Institute of Standards and Technology (NIST) Special Publication 800-145. According to the NIST definition, “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (Notice there is no mention of 3rd party providers.)

Let’s parse what that means. The NIST definition calls for five essential characteristics for cloud computing:

  1. Cloud provides ubiquitous, convenient, on-demand access. This means that cloud computing is self-service and available all of the time.
  2. Cloud provides for broadly available network access. Computing capabilities are accessible via the network rather than through hard cables, and can be accessed by any relevant client (PC, mobile phone, tablet, etc.).
  3. Cloud computing allows for resource pooling. When an organization doesn’t own computing resources exclusively for the use of specific applications, those resources become available for use by other organizations and applications. Resources can be dynamically allocated and re-assigned based on changing consumer demands. (This contrasts with the traditional model of computing resources that sit in a data center, which collect dust when not being actively utilized.)
  4. Cloud provides rapid provisioning and release of resources with minimal effort or provider interaction. When there is a sudden spike in the volume of demand for an application, it should continue to run quickly and smoothly through rapid allocation of resources behind the scenes without the need for human intervention.
  5. Cloud involves a metered paying model. Implicit in the definition of cloud is that someone has to pay for all of this. Rather than payment being for the purchase of equipment for a data center, payment is for actual usage: the number of servers or processor cores used, terabytes of data stored, bandwidth consumed, or whatever else is relevant to the service. (A small metered-billing sketch follows this list.)
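
As a small illustration of the metered model, here is a sketch of how a usage-based bill adds up. The resource names and unit prices are invented for the example and do not reflect any real provider’s rate card.

    # Hypothetical unit prices; real providers publish their own rate cards.
    RATES = {
        "server_hours": 0.10,   # per virtual server hour
        "storage_tb":   23.00,  # per terabyte stored per month
        "bandwidth_gb": 0.09,   # per gigabyte transferred out
    }

    def monthly_bill(usage):
        """Sum the cost of each metered resource actually consumed."""
        return sum(RATES[resource] * amount for resource, amount in usage.items())

    # A month with 3 servers running around the clock, 2 TB stored, 500 GB out:
    usage = {"server_hours": 3 * 24 * 30, "storage_tb": 2, "bandwidth_gb": 500}
    print(f"${monthly_bill(usage):,.2f}")  # pay only for what was used

The point is the shape of the calculation: the bill is driven entirely by what was consumed, not by what hardware was purchased.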

Another thing to know about cloud computing is how it can be delivered. NIST defines three service models:

  • Software as a Service (SaaS): provides functionality for end users, often with a per-license structure and with relatively minimal customization of the front end. The vendor runs everything. Gmail and Microsoft Office 365 are examples of SaaS.
  • Platform as a Service (PaaS): provides a platform upon which applications can be written and run. The vendor runs the infrastructure, operating system, and middleware; you manage your own applications. Google App Engine is an example of PaaS.
  • Infrastructure as a Service (IaaS): provides core infrastructure as a utility service. The vendor runs infrastructure only; you manage everything else. Amazon Web Services is an example of IaaS.

A simple way of keeping the service models straight is to think of a railroad: the rails are the infrastructure, the train cars are the platform, and the goods being carried are the “software” service.
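
To make that division of labor concrete, here is a small sketch that maps each service model to who manages which layer. The layer names and boundaries are a common simplification rather than an official taxonomy, and real contracts draw the lines differently.

    # Layers of the computing stack, from the building up to the application.
    LAYERS = ["facilities", "network", "servers", "storage",
              "operating system", "middleware", "application"]

    # Which layers the vendor manages under each service model (simplified).
    VENDOR_MANAGED = {
        "IaaS": {"facilities", "network", "servers", "storage"},
        "PaaS": {"facilities", "network", "servers", "storage",
                 "operating system", "middleware"},
        "SaaS": set(LAYERS),  # the vendor runs everything
    }

    def who_manages(model):
        """Return a layer-by-layer view of vendor vs. customer responsibility."""
        return {layer: "vendor" if layer in VENDOR_MANAGED[model] else "customer"
                for layer in LAYERS}

    print(who_manages("PaaS"))  # the customer still owns the application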

Why is everyone moving to the cloud?

Cloud is often touted as the answer to reducing costs because it offers a number of benefits. Shared resources mean higher utilization: you pay for the computing you actually consume without having to buy or maintain what you don’t use. Cloud also requires less floor space, along with less electricity and cooling. It forces a simpler, more centralized approach to managing computing resources. It outsources the maintenance of commodity services that are not part of an organization’s core mission, such as websites or email. And it places responsibility for maintaining computing resources and their continuity of operations in the hands of experts who provide that service for many customers.

What does all this mean for a Business Analyst?

Consider some of the business problems that arise when we move away from a model where the applications supporting an organization’s processes reside on traditional, fully owned, dedicated computing resources.

  • How do you identify which business services and processes should be moved to the cloud?
  • Does a business really want its redundant, inefficient, or bloated processes and applications to be moved to the cloud where usage is strictly metered? If not, how do you make the identified services and processes cloud-ready?
  • What requirements, policies and governance should be implemented to guarantee performance, privacy, security, and quality of business data?
  • How will performance be measured and monitored, and by whom?
  • What kind of agreements (called Service Level Agreements, or SLAs) must be negotiated and monitored with cloud providers to ensure that the service does what it promised to do?
  • What kind of cultural challenges will arise from moving to the cloud, such as resistance to change or concern over lost jobs?
  • How will cloud applications and supported business processes integrate and interoperate with the rest of the organization’s processes that may remain on traditional computing?
  • How will legacy assets be decommissioned once their corresponding functions are moved to the cloud?

Business analysts will play a key role in addressing each of these problems and more, since nobody is better situated to pave this new path between business needs and IT implementation. It is therefore crucial that Business Analysts begin developing a cloud competency whether or not these issues have already arisen in their organizations.

There are five key things Business Analysts can do now to help prepare themselves and their departments for cloud.

  1. Educate yourself. There is additional NIST guidance about cloud and a lot more to know than what can be covered by this article. Learn about the different deployment models and cloud computing environments to begin understanding the possible options you could leverage for your organization.
  2. Take a holistic view of the enterprise. Now more than ever, a business analyst must take a 360-degree view of the enterprise when analyzing business process re-engineering. With cloud metering of usage, business processes must be aligned and made efficient wherever possible to eliminate duplication and waste. This happens best when the needs of the entire business are kept in mind rather than just the needs of a local business unit.
  3. Be prepared to address the issue of control in your requirements. The most burning issues for cloud revolve around control: ensuring performance of the infrastructure, continuity of operations should there be a need for disaster recovery, and security of data and application assets. Giving up the control that comes with owning infrastructure assets is bound to be culturally difficult, but be prepared to explain how these concerns can be mitigated to the level of acceptable business risk.
  4. Know how to measure. In a metered environment, everything about cloud comes down to measurement of performance. How much uptime was there? How fast is the application running? How much bandwidth was consumed? What is the return on investment of a new cloud platform? Your organization had better be ready to choose, track, and report on all essential Key Performance Indicators. Many business organizations are not all that great at measuring their own performance. That needs to change before cloud computing can truly shine. (A small availability-measurement sketch follows this list.)
  5. Provide natural leadership. As you learn about cloud, you will become a thought leader on how best to acquire and manage it. Whether it’s implementation of SLAs, development of cloud policies, making business processes cloud-ready, or writing cloud-specific requirements, you can be in the driver’s seat in making choices that maximize your organization’s success.
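
As a quick illustration of point 4, here is a sketch of the arithmetic behind one common KPI: availability measured against an SLA target. The 99.9% target and the 50 minutes of downtime are hypothetical numbers.

    def availability(total_minutes, downtime_minutes):
        """Percentage of the period during which the service was up."""
        return 100.0 * (total_minutes - downtime_minutes) / total_minutes

    # A 30-day month with 50 minutes of recorded outages, measured against
    # a hypothetical 99.9% availability target in the SLA.
    period = 30 * 24 * 60
    uptime = availability(period, downtime_minutes=50)
    target = 99.9
    status = "met" if uptime >= target else "missed"
    print(f"Availability: {uptime:.3f}% (target {target}%) -> {status}")

Simple as it is, a calculation like this only works if someone is actually recording outage minutes, which is exactly the measurement discipline many organizations still lack.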

For current business analysts, the next few years will provide many opportunities for career advancement as long as skills are kept current to be valuable in the age of cloud computing. For new business analysts seeking to enter the field, there will also be more opportunities than ever to help fill increasingly critical roles in business organizations capitalizing on the advantages provided by cloud.

5 Tips for Successful Lessons Learned

Call it what you want – look back, reflection, retrospective or post-mortem – but an effective lessons learned session is sometimes harder to come by than most people realize.

During most projects, the best case is that people identify problems or roadblocks and fix them quickly; the worst case is that most people are heads down in the actual work and don’t stop to think about what might be going wrong or what might work better. Reality is usually somewhere in between. For longer or multi-phased projects, lessons learned sessions can make the difference between future success and the status quo.

Related Article: 5 Lessons Learned from Harry Potter in the Room of Requirement

Holding a lessons learned session has many challenges. There is always the danger that it will deteriorate into a complaint session, or that people only remember what happened in the last week or two of the project. Successful lessons learned sessions are held with some regularity throughout the project, include all working members of the team (even if individuals have moved on to other projects), and are an organized part of the overall project plan.

1. Capture in the moment

Use Google Docs, a company wiki, an Outlook task or note, an email to yourself, a running list in a spreadsheet, or even a special notebook for those of you who still like pen and paper. It’s important to capture the details of what happened, why it was good or bad, the impact of the event, and how to avoid it or promote it in the future. Otherwise, you might end up forgetting, exaggerating, or blaming.

2. Encourage openness

The only way the feedback is valuable is if people are encouraged to be honest and open about how something went. If someone really feels like they can’t speak up, you probably have a bigger problem.

3. Plan ahead

Outline the lessons learned process at the beginning of the project so the entire team understands how the information will be used. Define the timeframe (for example, a lessons learned session every three months), the structure (identify an event and its impact), and how team members should capture the information (see #1).

4. Summarize

For the actual meeting, provide a summary of common comments, solutions or tasks that should be entered as future best practices. Don’t call out any one individual’s comments. Center the discussion around the summary points.

5. Implement

Have the team figure out solutions and how best to implement them, then assign tasks. Get buy-in and commitment from the group to do what will make the project better. Lessons learned only work if you actually use what you learned (duh!).

It’s also important to communicate best practices to a larger group or team. Depending on the size of the organization, many lessons learned can be packaged for other teams. Try to keep tools and summaries in a central place for future reference.

My own experience with lessons learned came from working on several high-profile corporate initiatives in a busy IT department. As a participant, I witnessed several sessions degrade into angry finger-pointing. As a first-time Quality Assurance (QA) lead on a highly visible project, I set expectations at the beginning of the project in order to get quality feedback. The first time one of the QA analysts came to me with a complaint about something, my response was, “Please write this down in a document or as a task so that we can make sure it doesn’t happen again.” After addressing whatever issue was causing the complaint, I moved on.

For each iteration (usually monthly), our team got together and discussed issues and what we might change in the future. Then we assigned action items for the things we knew we could fix, brainstormed mitigations for risks we couldn’t fix, and documented everything for future iterations of the project.

To keep the feedback constructive, I also kept my own list of issues. Sometimes the items on my list were the result of an emotionally charged confrontation or a failure in one part of the project: a team member or business partner who wasn’t fulfilling their responsibilities, a personality conflict, or an area where I failed to recognize an issue until it escalated. When I reviewed the list before holding the meeting, several topics usually fell off because my temper had cooled and I recognized they would not be productive topics for the look back. However, many of the items on my list also made it onto everyone else’s lists, which provided the starting point for conversations during the session.

Lessons learned provide a huge opportunity to improve individual projects and institutionalize best practices uncovered during the process. Harnessing the best ideas and suggestions from the team is the easiest way to get good results for the future.

Pablo Picasso and Scope Visualization

Scope – the last frontier. We are on a mission where no business analyst has gone before. To explore strange new diagrams and to have the project scope clearly understood. Extra credit to those who remember which TV show that was from! Scope and context are the number one reasons business expectations about a project are not met and projects fail.

Let’s face reality. Projects today are more complicated. In this integrated and connected world of systems, the days of the quick and easy change are long gone. Our organizations’ architectural diagrams look like the walls of Egyptian pharaohs’ tombs: symbols and shapes connected by lines that fill an entire room. Even trying to explain the diagram to someone can take days.

Related Article: Requirements in Context Pt 3: Scope = High-Level Requirements

Projects now require more involvement from more people. Our systems and processes are so complex and integrated that it’s too difficult for one individual to understand them all fully. Stakeholders are flung across the globe, speaking many different languages. Top it off with organizations taking on hundreds of projects at the same time, and keeping track of each project’s scope and its impacts on the organization becomes difficult. It’s no wonder that a poorly understood project scope is the number one reason projects fail to deliver value. Teams simply lose sight of the project’s vision and goals amid our complex systems and processes. Everyone is on a different page. We wind up spending a lot of time trying to get stakeholders, sponsors, and team members to a clear understanding of scope.

So it’s no wonder that scope and context are the number one reasons projects fail. How can you get an entire project team moving in the right direction? When the scope and context of a project are not understood, enormous amounts of time are spent just figuring out what we are trying to accomplish.

So how do we get everyone on the same page? By that, I mean the same page in the same book!

It’s time to visualize scope. Scope places the boundaries around where the entire project team will work. Bust out that context diagram. Getting a common, clear understanding of scope and business expectations leads to better projects that deliver real value.

Is a single user story a complete representation of the project boundaries or scope? Maybe not. An epic, or a group of user stories combined, would be closer to the bull’s-eye. A picture is worth a thousand words. A visualization of scope is worth its weight in platinum because it creates the vehicle for a common understanding of the project scope.

Scope visualization isn’t just about a context diagram. That’s certainly a great tool, and I blogged about it previously. Don’t get me wrong – I love my context diagrams. Pushing the envelope a bit, I have used infographics to display project scope in place of context diagrams. In a recent server upgrade project, I was upgrading operating systems and consolidating more than 1,300 servers. Sticking 1,300 servers on a diagram was an exercise in futility; there just isn’t a big enough piece of paper to display them all. So I pictured things at a higher level. I displayed each server farm as an actual farm – yup, cows and a red barn with farmer Joe. The size of the farm was based on the number of servers in that farm. Server farms were in specific locations, so this gave the project team a visual representation of which sites were going to be impacted more heavily. All of this was based on estimates from a high-level scan. Remember, context is high level.

In each barn was an icon that represented a group of servers. There were three groups: leave it alone; upgrade it; or consolidate it and then retire it. I didn’t have exact numbers or server names at this point, but I knew from talking with stakeholders that the servers would be divided into those groups. Servers were put into groups based on our best guess.
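
For anyone who wants to generate the numbers behind an infographic like this, here is a minimal sketch that rolls a server inventory up by site and planned action. The inventory format, site names, and group labels are assumptions based on the example above, not the project’s actual data.

    from collections import Counter

    # Hypothetical inventory rows: (server name, site, planned action).
    inventory = [
        ("app-001", "London",  "upgrade"),
        ("app-002", "London",  "leave alone"),
        ("db-001",  "Chicago", "consolidate and retire"),
        ("db-002",  "Chicago", "upgrade"),
        # ...the real list held more than 1,300 rows
    ]

    farm_sizes = Counter(site for _, site, _ in inventory)          # barn size per site
    group_sizes = Counter((site, action) for _, site, action in inventory)

    for site, size in farm_sizes.most_common():
        print(f"{site}: {size} servers")
        for (s, action), count in group_sizes.items():
            if s == site:
                print(f"  {action}: {count}")

Numbers like these are all the infographic needs: a size for each farm and a rough split across the three groups.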

In the kickoff meeting, this was a great tool. The sponsor and stakeholders understood the scope of the project. Yes, they wanted to know more; everyone wants to know the details, but we were just starting out. Everyone walked out of the room with a pretty good understanding of the scope and estimated size. Many were surprised at the volume of servers in each farm. Overall, the infographic did a good job of setting the stage for the project visually. All on one PowerPoint slide.

The idea of scope visualization is to present a single page to provide a high-level overview of the changes the project will make to systems, processes, and people. That’s no easy task. Taking the complex and making it simple is powerful. It creates a better common understanding of the project.

The business wanted a global CRM solution, but all they got were pigeons and index cards.

Context doesn’t just talk about scope – it also sets business expectations about the outcome of the project. It’s important to keep the communication channels open on what is happening with the scope and how the design is being implemented to meet the scope all throughout the project.

I take the concept of the context diagram a little further than most folks typically do. You know me, always pushing the envelope. Context diagrams usually explain the end state, or final outcome, of the project. They show the scope of a project outcome.

Building on a good thing, I like to build a context diagram of the current environment at a high-level. Even at a high-level I’m often surprised at how differently stakeholders, sponsors, and team members view the current state. It’s a great tool to get everyone on the same page for the starting point. Having everyone on a different page for what we currently have will cause a few issues down the road in understanding the final destination. Knowing where you are starting from is a powerful thing when explaining where you want to end up in the future state.

I take this concept even a bit further (and perhaps more uncomfortably) into the desired state. Not many projects really look at the desires of the stakeholders and sponsors. The desire is basically stated in the project request form or project charter. The sponsor and stakeholders put together a vision of the expected outcomes in these documents. A context diagram of the project charter or request, one which elaborates the vision, is a powerful thing. It ensures that what is being asked for is understood.

Don’t re-invent the wheel. Many times I take the current state diagram and simply highlight the areas that are changing. Use color to mark the adds, modifications, and removals against the current state context diagram. This visually explains where the changes will occur.

Now you may think I have completely lost my mind at this point. Fear not, I’m only taking it one step further. I take the context diagram that shows the desired state (based on the project charter or project request) and determine what is feasible. Everybody wants it all, but the teleporter to zap you across the globe for a break in Paris hasn’t been built yet. Reality always steps in and dictates what is feasible. Taking the context diagram, I highlight the areas that are NOT feasible. It’s a great way to level-set the expectations of the sponsor, stakeholders, and project team members.

So when in the project lifecycle does all this context stuff happen? Ideally, it should happen before the project starts at a very high level. Wouldn’t it be great to start a project where everyone understood and was in complete agreement about the project outcome? You can bet it would save a lot of time running around trying to get everyone on the same page. Typically, the context is set at the start of the project.

As you move through the project, more and more understanding is acquired. Details need hammering out, and there is ALWAYS change to the project. Has anyone ever worked on a project with absolutely zero change? If you have, you are leading a very charmed existence. I’m jealous. Context diagrams can help evaluate how change would impact the project. So forget about laminating them and hanging them on the wall. They are living breathing documents that will change throughout the life cycle of the project.

The pitfall is that architects and others might expect diagrams that show the smallest of components. Don’t fall into that pit. Your job is to communicate the boundaries clearly, not to make it so complicated that a rocket scientist from NASA can’t figure it out. Detail is important for design, but scope context requires starting at a very high level and decomposing into more detail later. Context is simple, with just enough detail to make it clear.

Break out your inner Pablo Picasso and get creative. Find a way to display context or scope in a visually appealing manner. Color can help bring greater clarity. Highlight areas in different colors to bring focus to them. If a system is risky or greatly impacted by the project scope, highlighting is a technique to denote that risk. Black & White isn’t your friend. Studies have shown that color diagrams – even with a small amount of color – are more memorable.