
Ten Bad-Ass BA Techniques

Plus Four Fundamental Principles

Principle #1. Leave your ego at the door

  • You are a business analyst – you have a license to ask dumb questions; it is your responsibility and your job! So ask the dumb questions, admit when you don’t know, ask for input, show work at early stages, and don’t let your own ego, fears, or pride get in the way of problem solving.
  • Put your team in the spotlight; put yourself behind the curtain.

Principle #2. Authority is 20% given & 80% taken – take it!

  • Don’t wait for permission, ask for forgiveness.
  • Manage those meetings!

Principle #3. Acknowledge people

Sometimes you have to push people or ask them to do more than is normal to expect. You can thank them for their help but over time, your thanks may develop a hollow tone. Take the time to recognize people’s efforts in a way that means something to the individual; creative ways of saying “thank you” are remembered for a long time and create a positive impression and a good relationship.

  • Nominate them for an award
  • Send a message to their boss – explaining what you needed and how the person’s professional conduct and timely response saved your butt
  • Send a message to the person
    • A “thank you” card – there are many cyberspace sites that offer electronic cards
    • A simple email message acknowledging the person’s effort
      Include a .jpg of a plate of tasty goodies like cookies, chocolates, or samosas

Principle #4. If you don’t fail on occasion, you aren’t trying hard enough

Progress and innovation come from holding on to an idea despite an inevitable series of failures. If taking the initiative backfires:

  • Acknowledge verbally that you may have gone too far in your attempt to actively engage in moving the project along the path to success
  • Ask the person if there is a better way for you to accomplish your goal. Smile; deflect any barbs that might come your way.
  • Learn from the failure. Don’t get defensive – nothing ventured, nothing gained!


The Ten Techniques

Remember, these are the “bad-ass” techniques. Use them with care, especially if you are risk-averse.

Managing meetings

1. Use “roll call” to obtain explicit decisions. In meetings (telephone or in person), do not accept silence as a response! Instead of asking, “do we all agree?” instruct people to express their concerns with this prompt, “If you disagree, speak up now.”

2. Provide a suggested agenda to focus activities at a standing meeting.

3. Use Actions-Decisions-Issues to record meetings.

Facilitating communication and understanding

4. Share bad news early

  • The sooner “management” or “leadership” knows there’s a problem, the sooner they can start working on it. 
  • If you use the red-yellow-green flag paradigm, extend it: “Pale Yellow” means “warning – this could get worse”; “Orange” means “one step away from Red”.
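
The extended flag scale above can be sketched as an ordered enumeration; the names and the `escalated` helper here are illustrative assumptions, not an official standard:

```python
from enum import IntEnum

# Illustrative sketch: the classic red-yellow-green scale extended with
# the two intermediate statuses described above, ordered healthy -> critical.
class ProjectStatus(IntEnum):
    GREEN = 0        # on track
    PALE_YELLOW = 1  # warning: this could get worse
    YELLOW = 2       # attention needed
    ORANGE = 3       # one step away from Red
    RED = 4          # in trouble

def escalated(old: ProjectStatus, new: ProjectStatus) -> bool:
    """True when status has moved toward Red - time to share bad news early."""
    return new > old

print(escalated(ProjectStatus.PALE_YELLOW, ProjectStatus.ORANGE))  # True
```

Ordering the statuses numerically makes “is this getting worse?” a simple comparison rather than a judgment call.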

5. Did they read the document?
For documents that are in a draft form, include an unexpected phrase in a strategic location in the document, e.g., “300 Pink Elephants” – people will comment on it if they see it. Take care to remove the phrase before the document becomes a deliverable!

6. Treat requirements templates as guidelines

  • Provide all the information that is asked for, or explain why you can’t.
  • Don’t ignore gaps, missing items, or unknowns – identify them!
  • Add the sections or references you think are missing.

Conducting interviews

7. Send the list of topics you plan to cover in advance – no more than five general topics. If you have specific questions that will require research, provide those questions in advance.

8. Paraphrase as a way to keep a person talking without agreeing with what they are saying.

Establishing trust-based relationships

9.  Make a personal connection

  • Extend yourself beyond normal bounds to make a personal connection with the individual, regardless of social group, ethnic background, or gender.
  • Ignore what you may have heard about an individual; do not allow another person’s negative assessment of that individual to prejudice you – make your own assessment, based on how that individual conducts him/herself with you.

Managing requirements

10. Get the Success Criteria and Success Metrics

  • Offer outrageously low or high metrics for targets to elicit a more realistic expectation for “success”
  • Accept the “solution” with grace; but continue to ask questions. Play the fool until the requirement (need) has success criteria and a way to measure it.


Cecilie Hoffman is a Senior Principal IT Business Analyst with the Business Analysis Center of Excellence, Symantec Corporation. Cecilie’s professional passion is to educate technical and business teams about the role of the business analyst, and to empower the business analysts themselves with tools, methods, strategies and confidence. Cecilie is a founding member of the Silicon Valley chapter of the IIBA. Her personal passion is cross-country motorcycle riding. She can be reached at [email protected]

Tackling Updates to Legacy Applications

After 40 years of serious business software development, the issues involved in dealing with legacy systems are almost commonplace. Legacy applications present an awkward problem that most companies approach with care, both because the systems themselves can be something of a black box, and because any updates can have unintended consequences. Often laced with obsolete code that can be decades old, legacy applications nonetheless form the backbone of many newer applications that are critical to a business’s success. As a result, many companies opt to continue with incremental updates to legacy applications, rather than discarding the application and starting anew.

And yet, for businesses to remain efficient and responsive to customers and developing markets, updates to these systems must be timely, predictable, reliable, low-risk and done at a reasonable cost.

Recent studies on the topic show that the most common initiatives when dealing with legacy systems today are to add new functionality, redesign the user interface, or replace the system outright. Since delving into the unknown is always risky, most IT professionals try to do this work as non-invasively as possible through a process called “wrapping” the application: keeping the unknown as contained as possible and interacting with it only through a minimal, well-defined, and (hopefully) well-tested layer of software.
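
A wrapper of the kind described above can be sketched as a small facade. This is a minimal illustration, not a real integration: the `legacy_call` entry point and the `"INVTOT"` operation code are hypothetical stand-ins for whatever interface the legacy system actually exposes:

```python
# "Wrapping" a legacy application: all access goes through one small,
# well-defined facade, so the legacy internals stay contained.

class LegacyBillingWrapper:
    """Facade: the only layer new code is allowed to call."""

    def __init__(self, legacy_call):
        # legacy_call stands in for the real entry point
        # (an RPC, a COBOL bridge, a stored procedure, ...).
        self._legacy_call = legacy_call

    def get_invoice_total(self, invoice_id: str) -> float:
        # Translate a modern request into the legacy calling convention,
        # then normalize the legacy response into a clean, testable type.
        raw = self._legacy_call("INVTOT", invoice_id)
        return float(raw.strip())

# New code depends only on the wrapper, never on the legacy details.
fake_legacy = lambda op, arg: "  123.45 "   # stub standing in for the legacy system
wrapper = LegacyBillingWrapper(fake_legacy)
print(wrapper.get_invoice_total("INV-001"))  # 123.45
```

The payoff is that the wrapper is the only place that needs testing when the legacy side changes, and the rest of the codebase sees a conventional, typed interface.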

In all cases, the more a company understands about the application – or at least the portions that are going to be operated on – the less risky the operation becomes. This means not only unraveling how the application was first implemented (design), but also what it was supposed to do (features). This is essential if support for these features is to continue, or if they are to be extended or adjusted.

Updating Legacy Applications: A Formidable Task

What characterizes legacy applications is that the information relating to implementation and features isn’t complete, accurate, current, or in one place. Often it is missing altogether. Worse still, the documentation that does exist is often riddled with information from previous versions of the application that is no longer relevant and therefore misleading.

Other problems can plague legacy development, including the fact that the original designers often aren’t around; many of the changes made over the years haven’t been adequately documented; the application is based on older technologies – languages, middleware, interfaces, etc. – and the skill sets needed to work with these older technologies are no longer available.

Nonetheless, it is possible to minimize the risk of revising legacy applications by applying a methodical approach. Here are some steps to successful legacy updating:


Gather accurate information. The skills of a forensic detective are required to gain an understanding of a legacy application’s implementation and its purpose. This understanding is essential to reducing risk and to making development feasible. Understanding is achieved by identifying the possible sources of information, prioritizing them, filtering the relevant from the irrelevant, and piecing together a jigsaw puzzle that lays out the evolution of the application as it has grown and changed over time. This understanding then provides the basis for moving forward with the needed development.

In addition to the application and its source code, there are usually many other sources for background information, including user documentation and training materials, the users, regression test sets, execution traces, models or prototypes created for past development, old requirements specifications, contracts, and personal notes.

Certain sources can be better resources for providing the different types of information sought. For example, observing users of the system can be good for identifying the core functions but poor at finding infrequently used functions and the back-end data processing that’s being performed. Conversely, studying the source code is a good way to understand the data processing and algorithms being used. Together, these two techniques can help piece together the system’s features and what they are intended to accomplish. The downside is that these techniques are poor at identifying non-user-oriented functions.

The majority of tools whose purpose is to help with legacy application development have tended to focus on one source. Source code analyzers parse and analyze the source code and data stores in order to produce metrics and graphically depict the application’s structure from different views. Another group of tools focuses on monitoring transactions at interfaces in order to deduce the application’s behavior.

Adopt the appropriate mindset. While this information is useful, it usually provides a small portion of the information needed to significantly reduce the risk associated with legacy application development. A key pitfall of many development projects is not recognizing that there are two main “domains” in software development efforts: the “Problem Domain” and the “Solution Domain.”

Business clients and end users tend to think and talk in the Problem Domain where the focus is on features, while IT professionals tend to think and talk in the Solution Domain where the focus is on the products of development. Source code analysis and transaction monitoring tools focus only on the Solution Domain. In other words, they’re focused more on understanding how the legacy system was built rather than what it is intended to accomplish and why.

More recent and innovative solutions can handle the wide variety of sources required to develop a useful understanding and can extend this understanding from the Solution Domain up into the Problem Domain. This helps users understand a product’s features and allows them to map these features to business needs. It is like reconstructing an aircraft from its pieces following a crash in order to understand what happened.

Pull the puzzle together. The most advanced tools allow companies to create a model of the legacy application from the various pieces of information that the user has been able to gather. The model, or even portions of it, can be simulated to let the user and others analyze and validate that the legacy application has been represented correctly. This model then provides a basis for moving forward with enhancements or replacement.

The models created by these modern tools are a representation of (usually a portion of) the legacy application. In essence, the knowledge that was “trapped” in the legacy application has been extracted and represented in a model that can be manipulated to demonstrate the proposed changes to the application. The models will also allow for validation that any new development to the legacy application will support the new business need before an organization commits money and time in development.

Once the decision is made to proceed, many tools can generate the artifacts needed to build and test the application. Tools exist today that can generate complete workflow diagrams, simulations/prototypes, requirements, activity diagrams, documentation, and a complete set of well-formed tests automatically from the information gathered above.

Legacy Applications: Will They Become a Thing of the Past?

Current trends toward new software delivery models also show promise in alleviating many of the current problems with legacy applications. Traditional software delivery models required customers to purchase perpetual licenses and host the software in-house. Upgrades were significant events with expensive logistics required to “certify” new releases, upgrade all user installations, convert datasets to the new version, and train users on all the new and changed features. As a result, upgrades did not happen very often – maybe once a year at most.

Software delivery models are evolving, however. Popular in some markets, like customer relationship management (CRM), Software as a Service (SaaS) allows users to subscribe to a service that is delivered online. The customer does not deal with issues of certification, installation and data conversion. In this model, the provider upgrades the application almost on a continual basis, often without the users even realizing it. The application seemingly just evolves in sync with the business and, hopefully, the issue of legacy applications will become a curious chapter in the history of computing. 


Tony Higgins is Vice-President of products at Blueprint. He can be reached at [email protected].

ITIL for BAs, Part VI: “Non-functional Requirements”

The two most recent posts about ITIL for BAs emphasized the roles of the IT Service, the Service Catalog, Service Level Management, and the Service Owner in encapsulating IT as a Service Provider.

It would be natural at this point to explore the ITIL/BA relationship from the Service life cycle point of view.  Much of both Service Strategy and Service Design address what are typically referred to as non-functional or supplemental requirements.  ITIL refers to them as Quality of Service (QoS) requirements.

Other BAs have rightfully pointed out (here is a good example) that QoS requirements frequently do not get the attention they deserve.  There are a number of contributing factors:

  • Stakeholders in QoS requirements are generally not the same as the functional requirements stakeholders
  • The negotiations involved in QoS requirements and functional requirements are different:
    • Functional requirements are normally negotiated by reconciling scope, schedule, and cost factors with development/test/release resources.
    • QoS requirements need to be negotiated by reconciling quality characteristics (availability, capacity, continuity, etc.) with IT infrastructure capabilities (assets), constraints (architecture), and even policy (especially in the area of information security management)
  • Elicitation techniques such as brainstorming, focus groups, interface identification, prototyping, requirements workshops, and reverse engineering are primarily used for functional requirements elicitation.
  • QoS requirements traceability is elusive; it’s one matter to trace the relationship between a function point in a software library and a step in a business process; it’s quite another to trace, say, the specific capacity characteristics of a particular IT component to the variety of business demands relying on that capacity.

It is also interesting to note that the BABOK addresses non-functional requirements most fully in Requirements Analysis rather than in Elicitation. 

ITIL’s coverage of QoS requirements is explicit, robust, and effective at contributing to deep business/IT integration.  This is evident particularly in the processes defined (Demand Management, Capacity Management, etc.), the extent to which those processes are embedded in the early stages of the IT Service life cycle, and the way in which ITIL defines “utility” (what the IT Service can do) and “warranty” (how well it does it) and then relates utility and warranty directly to their role in business strategy.  In my next post, we’ll cover that in more detail and then move into specific QoS-related processes and roles.

If you have any good stories to share about your BA experiences and the challenges around QoS requirements, please share them – your comments are great food for thought for your fellow BAs.

Meanwhile, Happy New Year to you and yours!

Fed-Gov Learning Management

Business analysts are masters at adaptation and learning.  A BA can be dropped into any situation and quickly learn everything necessary to get the job done.  A business analyst must quickly become a subject matter expert in his or her domain. 

Learning Management Systems (LMS) for U.S. Federal Government clients is the domain I currently serve.  My first task in entering this position was to identify the key knowledge areas unique to this domain and become an expert in them.  This month I explain the key LMS concepts for Federal Government clients: EHRI and SF-182.

“Enterprise Human Resources Integration (EHRI) is one of five OPM-led e-Government initiatives designed to leverage the benefits of information technology” (http://www.opm.gov/egov/e-gov/EHRI/).

EHRI requires that federal agencies report all training data on federal employees to the Office of Personnel Management (OPM).  This gives OPM, Congress, and the President ready access to training data across all federal agencies.  Training records are maintained across agencies, so if an employee moves from the Department of Labor (DOL) to the Department of Homeland Security (DHS), their training is still reported to the same repository.
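
The cross-agency record-keeping described above can be sketched as a repository keyed by a government-wide employee identifier. The field names and IDs here are illustrative assumptions, not the EHRI schema:

```python
# Training records keyed by an employee identifier rather than by agency,
# so reports from different agencies land in the same repository entry.
repository = {}

def report_training(employee_id: str, agency: str, course: str) -> None:
    repository.setdefault(employee_id, []).append(
        {"agency": agency, "course": course}
    )

# The same employee's training aggregates even after an agency move.
report_training("E-1001", "DOL", "Safety 101")
report_training("E-1001", "DHS", "Incident Response")
print(len(repository["E-1001"]))  # 2
```

Keying by employee rather than by agency is what lets the record follow the person through their government career.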

Training data is organized in two major categories: Internal and External.

Internal Training

Internal training is training delivered by an agency to its employees.  Typically this training takes place on-site and does not require travel accommodations or other travel expenses.  Internal training includes instructor-led and online courses.  Internal training typically does not require a provisioning process to allocate money to pay for training and travel. 

External Training

External training is training delivered by an external vendor.  External training typically requires provisioning money to pay for the course and for travel and accommodations.  External training includes instructor-led courses, conferences, and workshops.  To facilitate the provisioning process the federal government requires an SF-182 form submitted by the employee.

The SF-182 is submitted through an approval process that most likely includes the employee’s supervisor and the finance or budget department.  After attending the training, the employee validates the actual expenses, course dates, and attendance.
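
The SF-182 flow above can be sketched as a simple ordered sequence of steps. The exact routing varies by agency, so the step names and their order here are assumptions for illustration:

```python
# Hedged sketch of the SF-182 approval flow described above.
SF182_STEPS = [
    "employee_submits",
    "supervisor_approves",
    "budget_approves",
    "employee_attends_training",
    "employee_validates_expenses_and_attendance",
]

def next_step(current: str):
    """Return the step that follows `current`, or None when the flow is done."""
    i = SF182_STEPS.index(current)
    return SF182_STEPS[i + 1] if i + 1 < len(SF182_STEPS) else None

print(next_step("supervisor_approves"))  # budget_approves
```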

Conclusion

EHRI is a major initiative in the Federal Government to create an electronic record for every government employee.  Each employee may be tracked independent of agency assignment.  There are many technical challenges in integrating this data that will take several years to resolve.  I look forward to a day when federal employees may track their career including training throughout their government service.

Can I have My Requirements and Test Them Too?

A study by James Martin, An Information Systems Manifesto (ISBN 0134647696), concluded that 56% of all errors are introduced in the requirements phase, attributed primarily to poorly written, ambiguous, unclear, or missed requirements. Requirements-Based Testing (RBT) addresses this issue by validating requirements to clear up ambiguity and identify gaps. Essentially, under this methodology you initiate test case development before any design or implementation begins.

Requirements-based testing is not a new concept in software engineering – in fact, you may know it as requirements-driven testing or some other term entirely – and it has been incorporated into several software engineering methodologies and quality management frameworks.  In its basic form, it means starting testing activities early in the life cycle, beginning with the requirements and design phases, and then integrating them all the way through implementation. Bringing together business users, domain experts, requirements authors, and testers to obtain commitment on validated requirements forms the baseline for all development activities.

The reviewing of test cases by requirements authors and, in some cases, by end users, ensures that you are not only building the right systems (validation) but also building the systems right (verification).  As the development process moves along the software development life cycle, the testing activities are then integrated in the design phase. Since the test case restates the requirements in terms of cause and effect, it can be used to validate the design and its capability to meet the requirements. This means any change in requirements, design or test cases must be carefully integrated in the software life cycle.
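
Restating a requirement in cause-and-effect terms, before any implementation exists, can look like the sketch below. The requirement text, its ID, and the `apply_discount` function are illustrative assumptions:

```python
# Requirement R-17 (hypothetical): "Orders of $100 or more receive a 10% discount."
# In RBT, the two test cases below would be agreed on with the requirement's
# author before the function under test is ever written.

def apply_discount(order_total: float) -> float:
    # Stand-in implementation written after the tests were validated.
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

def test_r17_discount_applied_at_threshold():
    assert apply_discount(100.0) == 90.0     # cause: total >= 100 -> effect: 10% off

def test_r17_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99    # cause: total < 100 -> effect: unchanged

test_r17_discount_applied_at_threshold()
test_r17_no_discount_below_threshold()
print("R-17 test cases pass")
```

Writing the boundary cases down this way is exactly where ambiguity surfaces: does “$100 or more” include $100.00? The test forces the requirement’s author to decide.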

So what does this mean in terms of your own software development life cycle or the overarching methodology? Does it mean that you have to throw out your Software Development Life Cycle (SDLC) process and adopt RBT? The answer is no. RBT is not an SDLC methodology but simply a best practice that can be embedded in any methodology. Whether the requirements are captured as use cases, as in the Unified Process, or as scenarios/user stories, as in Agile development models, the practice of integrating requirements with testing early on helps create requirement artifacts that are clear, unambiguous, and testable. This benefits not only the testing organization but the entire project team. However, the implementation of RBT is much cleaner in formal waterfall-based or waterfall-derived approaches and can be more challenging in less formal ones such as Agile or iterative models. Even in the most extreme of the Agile approaches, such as XP, constant validation of requirements is mandated in the form of a “customer” or “voice of the customer” sitting side-by-side with the developers.

To illustrate this, let us take the case of an iterative development approach where the requirements are sliced and prioritized for implementation in multiple iterations. The high-risk requirements, such as non-functional or architectural requirements, are typically slated for the initial iterations.  Iterations are like sub-projects within the context of a complete software development project. In order to obtain validated test cases, the team – consisting of requirements authors, domain experts, and testers – cycles through the following three sets of activities.

  • Validate business objectives, perform ambiguity analysis, and map requirements to test cases.
  • Define and formalize requirements and test cases.
  • Review test cases with requirements authors and domain experts.

Any feedback or changes are quickly incorporated and requirements are corrected. This process is followed until all requirements and test cases are fully validated.

Simply incorporating core RBT principles into your methodology does not imply that fewer errors will be introduced in the requirements phase. What it will do is catch more errors early on in the development process. You have to supplement any RBT exercise by ensuring you have the means to build integrated and version-controlled requirements and test management repositories. You must also have capabilities to detect, automate and report changes to highly interdependent engineering artifacts.  This means proper configuration and change management practices to facilitate timely sharing of this information across teams. For example, if the design changes, the impact of this change must be notified to both the requirements authors and the test teams so that appropriate artifacts are changed and re-validated.

Automating key aspects of RBT also provides the foundation for mining metrics around code and requirements coverage, and can be a leading indicator of the quality of your requirements and test cases. True benefit from the RBT requires a certain level of organizational maturity and automation. The business benefits from having increased software quality and predictable project delivery timelines.  Thus, by integrating testing with your requirements and design activities, you can reduce your overall development time and greatly reduce project risk.


Sammy Wahab is an ALM and Process consultant at MKS Inc. helping clients evaluate, automate and optimize application delivery using MKS Integrity. Mr. Wahab has helped organizations with SDLC and ITSM processes and methodologies supporting quality frameworks such as CMMI and ITIL. He has presented Software Process Automation at several industry events including Microsoft Tech-Ed, Java One, PMI, CA-World, SPIN, CIPS, SSTC (DoD). Mr. Wahab has spent over 20 years in technical, consulting and management roles from software developer to Chief Technology Officer with companies including Tenrox, Osellus, American Express, Parsons, Isopia Compro and Iciniti. Mr. Wahab holds a Masters in Business Administration from the University of Western Ontario.