

Business Analytics with In-Memory Databases

Abstract

Business intelligence (BI) and data warehouse vendors are increasingly turning to in-memory technology in place of traditional disk-based storage to speed up implementations and extend self-service capabilities.

For years, creating customer data queries and building business intelligence reports has been a prolonged activity, because the information needed must be pulled from operational systems and then managed in separate analytical data warehouse systems that can accept the queries. Now, however, true ‘in-memory analytics’ allows operational data to be held in a single database that can handle all the day-to-day customer transactions and updates, as well as analytical requests, in virtually real time.

Starting Questions

Successful Business Analytics project implementations start by asking the right questions.  Here are a few that should be on your short list.

·   How do I manage and maintain the performance of my existing reports as data volumes keep growing?

·   What is a cost-effective alternative to data warehouses that provides the ability to analyze very large data sets but is much simpler to set up and administer?

·   What can I do today to support near-real-time reporting requirements without relying heavily on IT departments?

·   How can I demonstrate value to my company by extending real-time ad-hoc query capabilities to high-volume transaction functions such as financial services?

·   How do I minimize the administration overhead and yet provide a transparent reporting environment to end users?

The purpose of this article is to put both BI technologies, in-memory and disk-based, in perspective, explain the differences between them, and finally explain, in simple terms, why disk-based BI technology is not on its way to extinction, as well as the prerequisites for considering an in-memory BI solution.

But before we get to that, let us understand the differences between disk-based and in-memory databases.

Disk-based and In-memory Databases

Whether a database is disk-based or in-memory, the distinction comes down to where the data resides while it is actively being queried by an application: with disk-based databases, the data is queried while stored on disk; with in-memory databases, the data being queried is first loaded into RAM (Random Access Memory).

Disk-based databases are engineered to efficiently query data residing on the hard drive. At a very basic level, these databases assume that the entire data cannot fit inside the relatively small amount of RAM available and therefore must have very efficient disk reads in order for queries to be returned within a reasonable time frame.  On the other hand, in-memory databases work under the opposite assumption that the data can fit entirely inside the RAM. The engineers of in-memory databases benefit from utilizing the fastest storage system a computer has (RAM), but have much less of it at their disposal.

The fundamental trade-off between in-memory and disk-based technologies is faster reads with limited amounts of data versus slower reads with practically unlimited amounts of data. These are two critical considerations for business intelligence applications, as it is important both to have fast query response times and to have access to as much data as possible.

Fast analysis, better insight and rapid deployment with minimal IT involvement!

What is it?

As the name suggests, the key difference between conventional BI tools and in-memory products is that the former query data on disk while the latter query data in random access memory (RAM). When a user runs a query against a typical data warehouse, the query normally goes to a database that reads the information from multiple tables stored on a server’s hard disk. With a server-based in-memory database, all information is initially loaded into memory. Users then query and interact with the data loaded into the machine’s memory.
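
A minimal sketch of that difference, using Python’s built-in sqlite3 module (the table, file name and data are invented for illustration): the same query can be served from a file-backed database on disk or from a copy loaded once into RAM, which is, in miniature, what a server-based in-memory BI store does.

```python
import sqlite3

# Toy "operational" store on disk (stands in for warehouse tables on a hard drive).
disk_db = sqlite3.connect("transactions.db")
disk_db.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
disk_db.executemany("INSERT INTO sales VALUES (?, ?)",
                    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)])
disk_db.commit()

# Load the whole database into RAM once, then serve all queries from memory.
mem_db = sqlite3.connect(":memory:")
disk_db.backup(mem_db)  # Connection.backup is available in Python 3.7+

for region, total in mem_db.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
```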

BI with in-memory databases may sound like caching, a common approach to speeding up query performance, but in-memory databases do not suffer from the same limitations. Caches are typically subsets of data, stored on and retrieved from disk (though some may load into RAM). The key difference is that cached data is usually predefined and very specific, often to an individual query; with an in-memory database, the data available for analysis is potentially as large as an entire data mart.

An in-memory database is designed specifically to take advantage of the immense amount of addressable memory now available with the latest 64-bit operating systems. In-memory technology uses the multiple gigabytes of memory space available in 64-bit servers as its data store. In-memory analysis is designed to improve the overall performance of a BI system as perceived by users, especially for complex queries that take a long time to process in the database, or for very large databases where all queries are hampered by the database size. An in-memory database allows data to be analyzed at both an aggregate and a detailed level without the time-consuming and costly steps of developing ETL processes and data warehouses or building multidimensional OLAP cubes. Since data is kept in memory, the response time of any calculation is lightning fast, even on extremely large data sets analyzed by multiple concurrent users.
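
As a rough illustration of that aggregate-plus-detail idea, the following sketch uses pandas to query a small data set held entirely in RAM; the column names and values are assumptions made up for the example, not part of any particular product.

```python
import pandas as pd

# Hypothetical extract pulled straight from an operational source - no ETL or cube build.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region":   ["EMEA", "EMEA", "APAC", "AMER"],
    "amount":   [120.0, 80.0, 200.0, 50.0],
})

# Aggregate view: totals by region, computed on the fly in RAM.
summary = orders.groupby("region")["amount"].sum()

# Detail view: drill down to the underlying rows of one region - same in-memory data set.
emea_detail = orders[orders["region"] == "EMEA"]

print(summary)
print(emea_detail)
```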

This kind of immediate, interactive analysis is particularly important when people are trying to discover unknown patterns or learn about new opportunities.

Who is it for? Know your challenges. Finding the right mix
  • When selecting an in-memory solution consider one that operates seamlessly within an end-to-end BI platform where its usage is completely transparent to users and report developers
  • Ideal for setting up departmental BI applications and for meeting the BI needs of small to medium sized businesses as it requires very little up-front effort, and no ETL
  • Populated quickly from any database source, users can seamlessly use in-memory databases and associated meta-data layers as a source for many reports, dashboards, and analysis
  • Look for technology that has been designed to avoid the excessive administrative burdens and can scale to enterprise levels in terms of user number, data security and data governance
  • The leading benefits of Business analytics with in-memory databases are to deliver decision insight with the agility that businesses demand. It is a win for business users, who gain self-service analysis capabilities, and for IT departments, which can spend far less time on query analysis, cube building, aggregate table design, and other time-consuming performance-tuning tasks
  • Regardless of what fancy algorithm is used with an in-memory database, storing the entire dataset in RAM has a serious implication: the amount of data one can query with this technology is limited by the amount of free RAM available, and there will always be much less available RAM than available disk space
  • Limited memory space means that the quality and effectiveness of the BI application will be hindered: the more historical data to which we have access and/or the more fields we can query, the better analysis, insight and, well, intelligence one can get to
  • One could add more and more RAM, but then the required hardware becomes exponentially more expensive. Beyond 64GB, we can no longer use what is categorized as a personal computer but will require a full-blown server which brings us into very expensive computing territory
  • Note that the amount of RAM required also depends on the number of people simultaneously querying it. Having 5-10 people using the same in-memory BI application could easily double the amount of RAM required for the intermediate calculations needed to generate the query results (a rough sizing sketch follows this list).
  • A key success factor in most BI solutions is having a large number of users, so we need to tread carefully when considering in-memory technology for real-world BI. Otherwise, the hardware costs may spiral beyond what the organization is willing or able to spend
  • Some of these databases introduce additional optimizations which further improve performance. Most of them also employ compression techniques to represent even more data in the same amount of RAM
  • The future of BI lies in technologies that leverage the respective benefits of both disk-based and in-memory technologies to deliver fast query responses and extensive multi-user access without huge hardware requirements.   These types of technologies are not theoretical anymore and are already utilized by businesses worldwide. Some are designed to distribute different portions of complex queries across multiple cheaper computers (this is a good option for cloud-based BI systems) and some are designed to take advantage of 21st-century hardware (multi-core architectures, upgraded CPU cache sizes, etc.) to extract more juice from off-the-shelf computers
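
To make the RAM constraint concrete, here is a rough, back-of-the-envelope sizing check in Python. Every figure in it is an assumption chosen purely for illustration (row count, row width, the compression ratio an engine might achieve, and a headroom factor for concurrent users), not a vendor benchmark.

```python
# Rough, illustrative sizing check - all numbers are assumptions, not vendor figures.
rows          = 500_000_000   # fact rows to keep in memory
bytes_per_row = 60            # assumed average row width
compression   = 0.35          # assumed compression ratio achieved by the engine
concurrency   = 2.0           # headroom factor for concurrent users' intermediate results

raw_gb    = rows * bytes_per_row / 1024**3
needed_gb = raw_gb * compression * concurrency

print(f"Raw data:      {raw_gb:,.1f} GB")
print(f"Estimated RAM: {needed_gb:,.1f} GB")
print("Fits in a 64 GB server" if needed_gb <= 64 else "Needs more than 64 GB of RAM")
```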

Summary

Business analytics with an in-memory database provides companies with a faster, more flexible, and arguably lower-cost way of accessing and processing information, allowing users to get answers to business questions in seconds rather than hours. By virtue of its high-performance architecture, in-memory technology has the potential to help midsize organizations become better informed and more agile, and to respond more quickly to changing market conditions.

In addition, advances in technology and lower costs of memory and CPU make this type of technology more attractive than ever before. Matching the appropriate architectural approach with the kind of business analytics solutions needed by a midsize company has the potential to deliver benefits such as reduced time to insight, greater agility, increased self-service and lower overall IT demands.


Don’t forget to leave your comments below.


Srikanth Chintamaneni is a manager in the Information Management service line of Deloitte Consulting India Pvt. Ltd. He has over 13 years of experience in providing consulting services involving data warehouse and content management solutions in the Health care, Commercial & Consumer Finance, and Industrial Products industry segments. His capabilities support services involving data profiling, data modeling, report design, and end-to-end data warehouse implementations.

Can Parallel Thinking and JAD Save the US Congress?

This article proposes that the US Congress consider Parallel Thinking (Six Thinking Hats) and Joint Application Development (JAD) as methods for gaining agreement on various issues.  This year’s difficulty in gaining a compromise, much less a consensus, on the nation’s debt limit begs for a proven method for settling issues.  Due to partisan positions, simple negotiation methods have been ineffective.  Instead of deals being made via dialogues, congressional committees hold long drawn-out discussions that extend for months.  Parallel Thinking and JAD may be a solution for saving the US Congress from itself.

Observation of the “AS-IS”

Little is getting done.  And it is hurting all of us.  Our elected representatives in Congress are diverse stakeholders.  Each has an agenda with interests that, in their view, reflect what is best for our country.  It appears that everyone is talking and no one is listening.  Around the table of congressional committees are assertive individuals, each pushing their position and trying to dominate the discussion.  Essentially, their meetings are a series of discussions in which each person is attempting to win arguments while tearing the opposing views to pieces.

What is needed is a process that provides a constructive and collaborative dialogue and an effective decision process. 

  • A discussion is an examination of ideas by argument or debate 
  • A dialogue is a conversation where there is an exchange of opinions

Proposal of the “TO-BE”

First and foremost, a neutral facilitator needs to be appointed by the committee chairperson to guide the participants toward, ideally, a consensus or, at the least, a compromise.  Note that the neutral facilitator provides process for meetings and does not participate in content.  The chairperson opens the meeting by stating the objective and then passes the floor to the facilitator.  After establishing meeting roles and rules, the facilitator introduces the parallel thinking technique called the “Six Thinking Hats” (1) to promote a dialogue on the meeting objective.

Rules are vital to a successful meeting.   The facilitator needs to gain agreement that participants will treat each other with respect and, most importantly, focus on issues, not blame.

Below is a possible sequence of the hats:

  1. Blue Hat – the facilitator opens the meeting by establishing the objective, the six thinking hats process and the hat sequence that will be used.
  2. White Hat – each participant states only what is known (facts) and not known about the problem; like the character Detective Joe Friday, “Just the facts ma’am,” on the famous series “Dragnet” (2). Assumptions may be included, but they must be later confirmed as facts.
  3. Red Hat – each participant states only intuitive likes, dislikes, fears, hunches, and gut feelings on issues concerning the objective
  4. Black Hat – each participant states only the issues that are threats concerning the objective
  5. Yellow Hat – each participant states only the issues that are opportunities concerning the objective
  6. Green Hat – each participant states only how to address the threats and opportunities issues identified in the black and yellow hat dialogues
Parallel thinking forces each participant to consider all points of view and prevents one view from dominating the dialogue.

After conducting the parallel thinking dialogue, the facilitator then announces a follow-on technique for decision making.  Joint Application Development (JAD) is an effective technique for settling issues.  The facilitator explains this technique and how issues will be resolved (3).  During the technique explanation, the facilitator gains an agreement from the participants on additional meeting rules concerning a vital role – the decision maker. 

Essentially after the participants conduct a dialogue on the issues, the facilitator attempts to guide the participants through active listening and questioning – the end goal being a consensus or a compromise.  If an impasse develops, the issue(s) are resolved by a neutral person called a decision maker.  And per the meeting rules, the participants already agreed to accept the ruling(s) of the decision maker if needed.  This allows the meeting to progress and conclude with results that the committee can forward to the full Congress for an up or down vote.   

  • A consensus is when participants change their positions for the betterment of the group. 
  • A compromise is when participants make a deal, winning their view on some of the issues and losing on others.

So Who Is the Decision Maker?

As stated above, the decision maker is a neutral person who breaks through impasses. On a project, the role is typically performed by the project sponsor.  The guideline is that the person needs to be high enough in the organization to rise above the fray and decide on issues.  However, in this case there is no project sponsor and finding a neutral elected official is difficult.  Therefore, it is best to have a neutral arbitrator with no political affiliation.  One approach is for the chairperson to blindly select an arbitrator with assurances that the arbitrator’s identity is kept anonymous (Arbitrator Protection Program?).

Summary

It is unclear whether Congress would consider any of the above methods, even though they are proven facilitation techniques used in business analysis.  However, there is a sense of urgency that something is needed.  Just saying Congress is broken because of its participants is insufficient.  Process is needed.

Writing this article has been somewhat therapeutic, allowing me to put forth a constructive solution.  If you know of other proven facilitation techniques that would be useful in Congress, your comments are welcome.

Don’t forget to leave your comments below.


Mr. Monteleone holds a B.S. in physics and an M.S. in computing science from Texas A&M University.  He is certified as a Project Management Professional (PMP®) by the Project Management Institute (PMI®), a Certified Business Analysis Professional (CBAP®) by the International Institute of Business Analysis (IIBA®), a Certified ScrumMaster (CSM) and Certified Scrum Product Owner (CSPO) by the Scrum Alliance, and certified in BPMN by BPMessentials.  He holds an Advanced Master’s Certificate in Project Management (GWCPM®) and a Business Analyst Certification (GWCBA®) from George Washington University School of Business.  Mark is the President of Monteleone Consulting, LLC and can be contacted via e-mail – [email protected].

References

  1. de Bono, Edward (1999), Six Thinking Hats, Back Bay Books
  2. Webb, Jack (2005 release), Just The Facts Ma’am: The Warner Bros. Recordings
  3. Wood, Jane and Silver, Denise (1995), Joint Application Development, Wiley

(As seen in the International Association of Facilitators’ October 2011 Global Flip Chart newsletter.)

It’s Time for Template Zombies to Die

Zombies, dead and mindless creatures that shuffle about sucking out the brains of the living, have invaded your office.

It’s time to exterminate the zombies from your office.

Template zombies, while not necessarily the most dangerous kind since they don’t suck actual human brains but instead suck the brains out of projects, must die because they are one of the biggest factors in ruined projects. And this isn’t just happening in Los Angeles or New York, as Hollywood’s movies may suggest – it’s happening right here in South Africa too. The undead, or in this case the brain dead, are among us.

For the sceptics, it’s worth noting that an IAG Consulting report by Keith Ellis found that more than 70% of companies in the top one-third of requirements discovery capability reported a successful project. 54% of their projects are on time, within budget, and deliver all requisite functionality. And – here’s the kicker – as a group those companies pay about 50% less for their applications.

What’s a successful project?  One that is on time, within budget, and delivers all requisite functionality.

By contrast, the main culprit in failed projects is poor requirements discovery.

Companies with a poor requirements discovery competency take 39% longer and spend 49% more to deliver their projects. Nearly 80% of their projects were over budget and time, and a whopping 50% were runaway projects. (Runaway projects are those that go 180% over time, 160% over budget, and deliver less than 70% of functionality.)

And why does poor requirements discovery continue to thrive? Because of template zombies.

There are three components to competent requirements discovery: planning, people and process, all inter-related and not separate.

But a lot of companies replace planning and process with templates. At first glance it seems to make sense: you obtain a standardised approach, right? Well, yes. But the result is also a sub-standard outcome. And that’s normally due to the notorious, even infamous, business requirements specification (BRS), functional requirements specification (FRS), requirements specification document (RSD) or one of the many other TLA terms camouflaging corporate bureaucratic mediocrity.

Perhaps that’s a little harsh, but if these documents had a decent structure they may not be so bad. However, as it is they mix business requirements with solution requirements and design – that’s a recipe for disaster.

In fact, typical planning in organizations that rely on these camouflaging real analyst practices (CRAP) documents consists of the analyst asking: “So, how do I fill in these blanks as quickly as possible?”

It’s that kind of approach that sets companies up to snatch defeat from the jaws of victory. As Albert Einstein so famously said: “Insanity: doing the same thing over and over again and expecting different results.”

But so what if a project takes a little longer or costs a little more? Well, it costs 15 times as much to fix a defect during the user acceptance stage and nearly 18 times as much to fix it after go-live as opposed to getting it right during the requirements stage, according to research by the National Institute of Standards and Technology in the US.  And if this happens in your organization, it will continue to happen until you get the right people, following the right processes, and performing proper planning. Why pay more for less?
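
To make those multipliers concrete, here is a tiny worked example. Only the 15x and 18x ratios come from the research cited above; the baseline cost is an invented figure for illustration.

```python
# Illustrative only: the baseline cost is an assumption, the multipliers come from the NIST figures above.
cost_at_requirements = 1_000                       # hypothetical cost to fix a defect during requirements
cost_at_uat          = cost_at_requirements * 15   # same defect found at user acceptance testing
cost_after_golive    = cost_at_requirements * 18   # same defect found after go-live

print(f"Fix during requirements: {cost_at_requirements:>7,}")
print(f"Fix during UAT:          {cost_at_uat:>7,}")
print(f"Fix after go-live:       {cost_after_golive:>7,}")
```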

And why allow template zombies to continue shuffling around threatening the living?

Don’t forget to leave your comments below.


Robin Grace is a Business Analyst Principal Consultant at IndigoCube

 

How to Handle Tight Deadlines as a Business Analyst

When you are assigned a complex project that has a short timeframe (as often happens), it can be nerve wracking – I know this from experience. It’s like driving a racing car – you have to push close to the limits but any error can throw you completely off the track.

The first thing I do when I get a project like that is to consider the reasons for the deadline. You can end up with a tight deadline for a variety of reasons. The deadline may be mandated by management. It can be determined by interdependencies between projects. It can be defined by market compliance rules. In other cases it’s estimated using the work breakdown structure for the project but ends up being too short because of wrong assumptions.

So how can you as a business analyst make sure that circumstances don’t control you and your team, and you deliver your project successfully? Keep in mind that sometimes you can be successful even if you don’t meet the original deadline!

Here is a visual summary of deadline causes and ways of handling them:

[Figure: deadline causes mapped to the handling approaches discussed below]

Let’s look into the ways of handling deadlines in more detail.

Manage the Manager

Management sometimes sets up deadlines with a good “buffer” to allow themselves time for decision making at the end of the project, or because they don’t expect to get results on time and want to push things along by moving the deadline forward. This can be best mitigated by having good communication with the management.

However, what can you do if your manager does set an unreasonably short deadline? Find the tolerance level of your project sponsor (management) and of the key stakeholders who can influence the sponsor. Talk to end users to understand the severity of not delivering on time. There can be scope for negotiation.

Shuffle Dependencies

If your deadline is constrained by dependencies, you can talk to project managers of the upstream and downstream projects to get a better understanding of the interconnections. You might be able to find a way to reorganise things and either get what your project needs delivered earlier, or move the deadline for your own project.

Be Smart About Compliance Deadlines

If you’re working on a compliance project, it may have a firm deadline established by the market bodies. In this case find out if there is a transition period. It is usually provided because market participants have differing levels of compliance and don’t have the same resources available to make the transition. You can also evaluate the impact of breaching the deadline (such as fines for non-compliance), and prioritise the parts of the project which would have the highest financial impact if they are overdue.
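
For example, a quick way to prioritise by exposure is to rank deliverables by the potential cost of missing them; the items and fine amounts below are purely hypothetical.

```python
# Hypothetical compliance deliverables and fine exposures - purely illustrative numbers.
deliverables = [
    {"name": "Regulatory reporting feed", "fine_per_month": 50_000},
    {"name": "Customer consent capture",  "fine_per_month": 20_000},
    {"name": "Audit trail archiving",     "fine_per_month": 5_000},
]

# Tackle the items with the highest financial exposure first.
for item in sorted(deliverables, key=lambda d: d["fine_per_month"], reverse=True):
    print(f"{item['name']:<28} potential exposure: {item['fine_per_month']:>7,}/month")
```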

Double Check Estimates

When it comes to deadlines defined by estimation, it’s a good idea to double check the estimates. Ask what facts and assumptions were taken into account when the task was initially estimated.

Watch Out For Changes

Once the project starts, you have to watch out for changes in the project environment. Changes will affect project completion time, so work with the team and stakeholders to update the WBS and the schedule.

Organising Work On Projects with Fixed Deadlines

After you’ve applied the practices above and you are sure that you’ve done everything you can to negotiate a reasonable deadline for your project, the next phase is organising your work in the best possible way to meet the deadline.

I’ve found that the following practices help me complete projects on time:

  • Determine the business context which will be affected by the change to status quo
  • Define the scope of the solution required to satisfy the identified business need
  • Plan short iterations to verify the project direction
  • Align the solution with the existing business processes and IT infrastructure

Each of these practices is discussed in more detail below.

Determine Business Context

Completing this step successfully often determines the success or failure of the project.

Many organizations that operate in a competitive environment have well defined and standardized processes. Many others don’t, however, so be prepared to discover them. Explore the business processes which may be affected by the new solution. Learn which systems are used by the business within these processes. Embedding new solutions into these business landscapes should be thought through carefully to reduce resistance to change and avoid redundancy in project management, solution delivery and transition to the new state. If done right, it also gives a business analyst an opportunity to find ways to add value to the business.

The rationale for the project should be identified by the project manager, while the business analyst should identify business drivers and actual business needs.

Define Solution Scope

This is the exciting part of the project, but defining solution scope has never been an easy task. Short timeframes and technological changes which may occur during your project make it even more challenging.

In general, to cope with this task the project team needs a solid foundation to build on – well documented processes and good infrastructure. Knowing and using best industry practices can often point you towards defining a sustainable solution and save exploration and research time.

When it comes to defining solution scope, my approach is to use only the “must” requirements for the “initial” solution, and prioritise the remaining “should” requirements into subsequent phased releases (“final” solution).
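
As a rough sketch of that split, the snippet below partitions a hypothetical requirements list into an initial “must” release and later “should” phases; the requirement names and priorities are made up for illustration.

```python
# Hypothetical requirements; priorities follow the "must"/"should" split described above.
requirements = [
    ("Process payment instructions", "must"),
    ("Generate settlement report",   "must"),
    ("Email exception alerts",       "should"),
    ("Self-service dashboard",       "should"),
]

initial_release = [name for name, priority in requirements if priority == "must"]
later_phases    = [name for name, priority in requirements if priority == "should"]

print("Initial solution:", initial_release)
print("Phased releases: ", later_phases)
```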

You must work closely with the solution architect and play an active role in exploring available options. Often the overlap between business analysis and system architecture saves a lot of time – I have saved up to a third of project time by ensuring that the architect could use my documents as a useful starting point in producing a detailed design of the solution.

Plan Short Iterations

I’m still on the fence with regards to the Agile method. Its value is clear in software development (at least for certain kinds of projects) but when it comes to business analysis, I’m not so sure. However, short iterations are one useful technique in Agile which can reduce project time. Use them to get a summary of the completed and outstanding tasks, evaluate changes to the project scope, and identify feasible shortcuts.

The project manager and business analyst need to present a unified front in dealing with business stakeholders. Face to face communication is essential to make short iterations work for analysing the current situation and the required changes, and for making decisions on the next steps. An informal communication style helps too – really, there’s just no time for strict formalities if you want to get things done. It’s very important for the project manager to arrange a “green corridor” for access to authorities and clear the way for the team to focus on delivering rather than struggling with bureaucracy.

As a business analyst, you also have to do your share to deliver results quickly by being professional and active in all your activities (industry research, compliance requirements and so on). Make sure that communication is well maintained between everyone involved in the project.

Align Business and IT Infrastructure

Most of the time new solutions are embedded into the existing environment. It’s a good idea to make maximum use of the existing components and processes to make the introduction of the new solution less intrusive and to minimise the number of temporary patches and business interruptions.

I try to present the solution in terms of interacting services to achieve this. I transform business references to applications and systems into services and show how they could interact. This approach allows me to show the business users how all pieces of the business, including external parties and outsourced services, come together and how the new solution will improve overall efficiency.

Conclusion

Whenever you get a project with a short deadline, don’t forget that there are two major considerations: what can be done to change the deadline, and what is the best way to organise your work to meet the deadline.

The practices presented in this article should help you address each of these considerations.

Don’t Forget to Leave Your Comments Below


Sergey Korban is the Business Analysis Expert at Aotea Studios, publisher of innovative visual learning resources for business analysts. We think business analysis should be easy to learn! We deliver practical knowledge visually, with a minimum of text, because that’s an efficient way to learn. Find out more at http://aoteastudios.com

Where Do User Stories Come From? Part 2

In my last post I introduced the topic of User Story Writing Workshops. These are planning meetings where a product owner gathers a team (existing or potential) together to collaborate on creating a list of user stories that will seed a product backlog.

It’s a wonderful technique for generating loads of stories, quickly! It also has some serious side-benefits that I think are valuable as well. But, enough of that…

Let’s Get to Writing Stories

There are many ways to attack story writing in groups. I’ll be emphasizing my preferences here, but don’t let that limit your own approaches. The key is to:

  • Generate as many stories as possible
  • Across as broad a spectrum of features / work as possible
  • As quickly as possible
  • And along the way, collect as much Meta-Data as you can

I like to have the roles drive the brainstorming, so for each role, I’ll allocate 10-15 minutes of story writing time. Usually, I start this with the entire group, so everyone is writing. I’ll “call out” each story as I collect it to let everyone know what’s being written and to reduce the number of duplicate stories produced.

Once you have a large set of stories, I like to stop action (writing) and get the team engaged in “cleaning up” the stories. If I’ve posted them on a whiteboard, I’ll invite the entire team up to the board to start removing redundant stories, consolidating stories where there are obvious relationships, and writing new stories to fill in any ‘gaps’.

Often I’ll oscillate between writing as a group and this collaborative clean-up effort. It helps to keep the whole group engaged in the process and it maintains energy and focus in the meeting. Once you’ve developed stories for each one of your roles, you’re essentially done with “Round #1”. Then you move on to massaging your stories (ordering them in time sequence, considering risk, filling in gaps and dependencies, etc.).

The Chess Analogy

Once you have a fairly healthy list of stories, I like to spend a fair amount of time on ordering them. This gets everyone thinking as soon as possible about execution. And because we’re thinking of overall workflow instead of pure feature development, another rich set of stories soon emerges related to things like quality and testing, release preparation, design and architecture, prototyping, UX design, software packaging, etc. You know, all those things required to get the project ready for customer deployment need to be captured in story-form as well.

In order to do this, I ask the team to place the stories into three broad buckets: opening moves (early stories), middle game (mostly execution-based stories) and end game (stories focused on completion and delivery).

While this is truly not a WBS or Gantt chart, it does serve to get the team organizing the story mix into a rough workflow, thinking about linear dependencies, and breaking things down for construction and integration. The flow is illustrated below.

[Figure: story flow from opening moves to middle game to end game]

Overloading Your Stories with Meta-Data

Perhaps it’s just me, but if I’m taking time from a team for a story writing workshop, I want to maximize the data that I collect. To do this I usually ask the team to overload their cards with other bits of useful information that I might (keep in mind, might) use later. For instance, adding estimates, in either weeks or story points, can be a useful exercise at this point. This becomes the starting size estimate for each story and, even though it’s simply a guess, it does help to communicate size and level of effort across the group.

Another thing to consider is team assignments, especially those for any specialized skills. For example, if there’s a database story that the entire team knows only Sally can effectively handle, then we might say that on the card. Or if the card requires a skill that we don’t currently have on the team, then specifically identify that gap.

I try to avoid any notion of “tasking assignment” at this point, so don’t do this for every story. Just mine these connections as a natural consequence of the writing.

Related to this point is the notion of capturing dependencies and risks. Again, we’re just writing our thoughts down on Post-it Notes, so ask the team to note cross-story dependencies right on the Post-it Notes. You should also visually orient the stories to be “close together” on your board. And while the team is collaborating, I always ask them to identify risks as they go along. I usually maintain them as a growing risk-list off to the side.

I find that this sort of meta-data gathering not only enhances the quality of the workshop but also reduces the time it takes to later process the stories into a Product Backlog. This data can be just as valuable as the stories themselves.
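
As a rough sketch of how that captured meta-data might be carried into backlog processing, the Python snippet below models a story card with the fields discussed above (role, bucket, estimate, skills, dependencies, risks) and groups the cards by bucket; all names and values are invented for illustration and are not part of any particular tool.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Story:
    title: str
    role: str                                         # role the story was brainstormed for
    bucket: str                                       # "opening", "middle" or "end game"
    points: int = 0                                   # rough starting size estimate
    skills: list = field(default_factory=list)        # specialised skills needed
    depends_on: list = field(default_factory=list)    # cross-story dependencies
    risks: list = field(default_factory=list)

stories = [
    Story("Sign in with corporate ID", "End user", "opening", points=3, skills=["security"]),
    Story("Export monthly report", "Manager", "middle", points=5,
          depends_on=["Sign in with corporate ID"]),
    Story("Package release notes", "Release manager", "end game", points=2),
]

# Group by bucket as a first cut of the backlog ordering.
backlog = defaultdict(list)
for story in stories:
    backlog[story.bucket].append(story.title)

for bucket in ("opening", "middle", "end game"):
    print(bucket, "->", backlog[bucket])
```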

Finally, What’s Next?

Once I have the “opening move” stories defined, I usually will ask the team to do a bit more detailed expansion around these early stories. The logic goes that while I have your time and attention, why don’t I leverage it a bit more to increase the depth of visibility into high priority stories. This also serves to set the stage for “next steps” immediately following the workshop. So dig into the details on your highest priority stories.

Wrapping Up

So there you have it! By investing perhaps a half day of time, you can come out of a User Story Brainstorming Workshop with a wealth of information. You have stories. You’ll have a view to workflow, what needs to be done right away, and what’s deferrable. You’ll have a sense for key skill stories and for dependencies and risks. And you’ll have a solid view towards next steps.

But most importantly, your team will have created a “shared view” of their overall project. I’ve found this outcome to be critical when the team starts to iteratively attack the work. Since everyone contributed collaboratively to the overall scope of the project AND contributed to workflow details, they’ll have an innate sense of what is needed and where things are headed.

How cool is that?

Don’t forget to leave your comments below