Tuesday, 10 July 2007 05:05

Testing Essentials in Business Analysis: An Overview

Written by Youssif Ansara
Most, if not all, system development lifecycle methodologies are dissected into smaller phases and sub-phases. This holds true whether such methodologies are taught in theory at academic institutions or carefully carried out in practice in the marketplace. Regardless of the size of the project being implemented or the type of industry it impacts, there is nearly always a testing phase. Unfortunately, most business analysts don't give this single yet major phase of project management and system analysis the attention it deserves.
The core of solving any problem is to first test the proposed solution. That is why testing outcomes are considered by the customers and stakeholders in a project as a major project success factor. Before we can begin dissecting the topic of "testing", we have to understand how it fits into the overall picture of system development lifecycles. This begins with understanding the goals, constraints and main concepts of testing.

Most system development lifecycles share one objective as their common ground: supplying a structured approach and an organized process for analyzing and managing projects. This objective aims to deliver successful project results on time and within budget while meeting customer expectations. In achieving it, most system development lifecycle methodologies are decomposed into several phases, including but not limited to:

1. Project initiation and value realization
2. Detailed requirements and preliminary design
3. Business integration and detailed design
4. Development
5. Testing (also called validation)
6. Implementation and project execution
7. Maintenance and post implementation

While testing appears to be its own phase, it can also be a sub phase of other lifecycle phases. Testing therefore has its own set of goals in adherence to the overall system development lifecycle objective.

Goals and Constraints

The business goals of testing are to make sure that the system performs as designed while successfully meeting the user's needs. The functional goals of testing are to verify that all system components operate accurately, to identify system defects in a timely manner and to track their status. For testing to succeed, it must therefore be carefully planned and executed.

A complete test plan should incorporate a scope statement, testing methods, initial identification criteria of test data and the testing schedule. It is also important to note that the scope statement within a test plan should be closely aligned with the established scope and vision of the overall project. In this regard, the entire body of gathered business requirements, along with their technical and functional specifications, should be thoroughly assessed within their respective project management phases, independent of the testing phase. Even though testing may realistically be planned to occur throughout a system development lifecycle, business analysts should try their best to support and coordinate the project's testing efforts within the respective testing phase.

Many projects have gone beyond their allocated time and budget resources because the business requirements had to be revised or the design remained incomplete, even after the majority of their respective project management phases were declared successfully complete. This can force a project's testers to restart their testing procedures to minimize the possibility of future system defects. Another benefit of completing the majority, if not all, of the system development phases prior to the testing phase is ensuring that all testing efforts contain reasonably sufficient traceability. This means that every testing output is traced back to a test case and that every test case traces back to a test scenario, continuing until each testing effort is traced all the way back to the business requirements.

This traceability can usually be achieved by developing an identification system by which all documentation and project articles within a single project can be identified. Project managers and business analysts should then work together to implement this identification system by ensuring that each business requirement is properly identified along with its associated test scenarios and test cases. An example would be “109.1.24” identifying a business requirement, where “109” is the subsystem or module according to system documentation, the middle digit is a status flag (“1” indicating the requirement has been approved by the customers and stakeholders, while “2” would mean it is still under review), and “24” means this is the 24th business requirement written for this subsystem or module in this particular example project. We can then formulate similar examples for identifying a test scenario for this example business requirement as “109.1.24AB” and its associated test cases as “109.1.24AB-1” and “109.1.24AB-2”.
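As an illustration, the identifier scheme above could be modeled with a small parser. This is a minimal sketch: the regular expression and helper names are hypothetical, assuming the exact "module.status.sequence" format from the example.

```python
import re

# Hypothetical pattern for the example IDs above, e.g. "109.1.24AB-2":
# module "109", status flag "1" (approved) or "2" (under review),
# requirement sequence "24", optional scenario letters and case number.
ID_PATTERN = re.compile(
    r"^(?P<module>\d+)\.(?P<status>[12])\.(?P<req>\d+)"
    r"(?:(?P<scenario>[A-Z]+)(?:-(?P<case>\d+))?)?$"
)

STATUS = {"1": "approved", "2": "under review"}

def parse_id(identifier: str) -> dict:
    """Break a requirement, scenario or test-case ID into its parts."""
    match = ID_PATTERN.match(identifier)
    if match is None:
        raise ValueError(f"unrecognized identifier: {identifier!r}")
    parts = match.groupdict()
    parts["status"] = STATUS[parts["status"]]
    return parts

def requirement_of(identifier: str) -> str:
    """Trace any scenario or test-case ID back to its business requirement."""
    match = ID_PATTERN.match(identifier)
    if match is None:
        raise ValueError(f"unrecognized identifier: {identifier!r}")
    return f"{match['module']}.{match['status']}.{match['req']}"
```

With such helpers, a test output labeled “109.1.24AB-2” can be traced mechanically back to business requirement “109.1.24”, which is exactly the traceability chain described above.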

As a result, traceability ensures that each business requirement was successfully tested and accounted for. This traceability factor is additionally critical to gaining the customers' respect and stakeholders' approval within a project prior to its implementation. Testing should therefore be complete prior to implementation, to avoid testing the core pieces of a project after it’s implemented. This is important because major system issues may arise if not all the approved business requirements are tested before implementation time. Otherwise, delayed or incomplete testing may result in subprojects or new implementation releases being established just to resolve major system issues stemming from the unfinished testing phase of the original project.

While we can try our best to plan our project's testing efforts, we have to realize that there are always constraints to any project that will surely impact all its managed phases including testing efforts. These constraints include available resources (platforms, software or hardware) and human resources (technical, business and project management personnel). Another major constraint to testing efforts is the overall project timeline that will impact the allocation of a dedicated testing schedule. Within these outlined testing constraints, we come to realize that solid test plan documentation will be a significant key to overcoming these obstacles.

Main Concepts of Testing: Scope Statement

The testing scope is defined as a high-level statement clearly identifying what is being tested and briefly how such testing will be conducted. This is done by defining the following documentation factors:

- The main tasks required to accomplish testing.

- Individuals involved with testing responsibilities such as support, coordination, documentation of testing output and evaluation of testing results. (This is very important because, when a project is understaffed, business analysts often end up performing testing in addition to coordinating the testing efforts.) This is unfortunately observable in many projects due to the unavailability of dedicated testers, which is another reason why business analysts are in such growing demand: they are able to play various roles on a project team.

- A short background note as to where to find further details about the project/subsystem changes being tested.

- The testing environment with any special hardware/software that is needed to conduct testing such as automated testing tools.

Scope statements for testing purposes are not just meant to clearly identify the testing boundaries within a project but also to restrict such boundaries as a quality control measure.

Main Concepts of Testing: Testing Methods

After successfully defining the testing scope, we need to pinpoint each method required to complete testing. Testing strategies come in various forms; however, there are some that are widely used in most industries.

- Top-down: Approaching testing efforts with this method would allow the tester to focus more on the top of a given system hierarchy. This is done by perceiving the broad interface that is integrating various system modules or by observing the surface of each of the main control modules.

- Bottom-up: Testing with this method would lead the tester to the bottom of a system hierarchy of modules until reaching the top module or by reviewing each unit within a single module. This testing approach is applicable whether a unit is operating independently or requires functional integration with other units to operate properly.

- Middle-out: This is simply a combination of the top-down and bottom-up methods. This involves approaching the system from a mid-point of the module hierarchy, while progressing towards both the top and bottom system modules in a hierarchy.

- White-box: This method observes the flow of data, logic or the sequence of such logic of a proposed system change. This is meant for systems developed in-house, rather than commercial products or outsourced systems, where all the internal components of a system are thoroughly understood and manageable.

- Black-box: All internal components and units of a module are disregarded as the focus is only on the input and output into a given module. For example, if we were to test a calculator using this testing approach, then we would enter “1+1=” as the input and we anticipate receiving “2” as the output. Notice that we are not testing the programming structure that allows digits such as “1” and “2” to appear on the screen and that calculator functions such as “+” and “=” don’t display on the screen. Neither are we testing the logic behind the “+” or “=” calculator functions. With this testing method, we are purely testing input and corresponding output.
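The calculator example lends itself to a short sketch. Here `evaluate` is a hypothetical stand-in for the system under test; in genuine black-box testing its internals would be invisible to the tester, so only input/output pairs are checked.

```python
def evaluate(expression: str) -> str:
    """Hypothetical stand-in for the calculator under test.
    In real black-box testing this would be the deployed system,
    not code the tester can inspect."""
    return str(eval(expression.rstrip("=")))  # internals treated as opaque

# Input/output pairs derived from expected behavior, not from the code.
black_box_cases = [
    ("1+1=", "2"),
    ("7*6=", "42"),
]

for given_input, expected_output in black_box_cases:
    actual = evaluate(given_input)
    # Only the observable result is asserted; no internal logic is examined.
    assert actual == expected_output, (given_input, actual)
```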

- Gray-box: This is a hybrid approach that combines both the white-box and black-box testing. In this case, both the function of a module and its internal coding are being tested. This is ideal for large-scale system enhancements developed in-house, or when major upgrades and system changes are anticipated in the future.

- Positive/negative testing: This approach focuses on the impacted system changes by looking only at how the system processes data. Positive testing verifies that the system processes a certain type of data as expected, while negative testing verifies how the system behaves when given unexpected or invalid data. This is done to ensure that the other types of data that are not meant to be impacted by a system change remain unaffected by how the system processes a specific type of data.
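A minimal sketch of the positive/negative pairing, assuming a hypothetical validation rule (the system accepts integer ages from 0 through 120 and rejects everything else):

```python
def accept_age(value) -> bool:
    """Hypothetical system behavior under test: accept ages 0-120."""
    return isinstance(value, int) and 0 <= value <= 120

# Positive tests: data the system is expected to process normally.
for valid in (0, 35, 120):
    assert accept_age(valid)

# Negative tests: data the system is expected to handle in the
# "near opposite" (rejecting) manner described above.
for invalid in (-1, 121, "thirty", None):
    assert not accept_age(invalid)
```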

Often it is the type of software release, system implementation or system changes that dictate what testing strategy to consider. In fact, the vast majority of successful test plans incorporate several testing strategies.

Main Concepts of Testing: Test Data

Test data should be initially identified when testing planning efforts are underway to support the overall testing process, and to locate and resolve any possible system defects.

Just as the Requirements Specification (also known as High-Level Requirements) is the basis for the Functional and Performance Requirements (also known as Detailed Requirements), it can additionally serve as the basis for establishing test data. Since assigned project testers are usually not involved in a project when the business requirements are being gathered, test data must be developed and documented in a clear manner for the assigned testers. This approach will ease the testers' testing tasks and the coordination of such testing by the business analysts and project managers. Otherwise, we can expect many unnecessary meetings between the testers, business analysts and technical staff to introduce each business requirement to the testers before valid test data can be established.

In fact, business analysts should work with technical staff to develop test data and to ensure test data validity and traceability prior to a tester joining the project team. This is important, so that when a tester is being assigned to the project, they don’t have to revalidate unfamiliar test data with their corresponding business requirements. A good method for preventing major post-implementation system defects is to ensure that the test bed (or the structured collection of test scenarios and their cases) is mainly qualitative rather than quantitative.

Again, it is stressed that projects can exceed their allocated time and budget resources for various reasons. One major reason is that the assigned tester was provided test data for hundreds of test cases that were redundant and minimally focused on the actual system changes being implemented. Instead the focus should have been on prioritizing test cases in addition to ensuring that their associated test data directly trace to the prioritization of the project’s business requirements.
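One way to sketch this prioritization, reusing the illustrative requirement IDs from the traceability example with hypothetical priority values (none of these names or priorities come from a real project):

```python
# Hypothetical requirement priorities, 1 = most critical.
requirement_priority = {
    "109.1.24": 1,
    "109.1.25": 3,
}

# Test cases as (case ID, test data) pairs; data values are illustrative.
test_cases = [
    ("109.1.25AB-1", {"amount": 10}),
    ("109.1.24AB-1", {"amount": -5}),
    ("109.1.24AB-2", {"amount": 0}),
]

def owning_requirement(case_id: str) -> str:
    """Strip the scenario/case suffix to recover the requirement ID."""
    head = case_id.split("-")[0]                       # "109.1.24AB"
    return head.rstrip("ABCDEFGHIJKLMNOPQRSTUVWXYZ")   # "109.1.24"

# Run cases tied to high-priority requirements first; unknown IDs sink
# to the end rather than blocking the schedule.
prioritized = sorted(
    test_cases,
    key=lambda tc: requirement_priority.get(owning_requirement(tc[0]), 99),
)
```

Sorting the test bed this way keeps the focus on the requirements that matter most, rather than on sheer test-case volume.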

Main Concepts of Testing: Test Procedures

A test procedure is basically the process by which test data is established. Beyond supporting the gathering and/or creation of test data, it also supports defining the testing sequence and related logistics, as well as documenting testing outcomes. While a standardized test plan includes test procedures, there is no standard or formal style for documenting them.

Testing procedures are ideally established by personnel most knowledgeable about a subsystem or module change being tested. In fact, such personnel can be different than the analysts designing the overall project test plan or performing some testing functions.

When identifying specific tests, the test criteria, data and environment, as well as the resources needed to finalize each test, should also be identified. This initial identification should take place when establishing testing procedures. The results of some types of tests should be evaluated by a different person than the tester, even if the tester has sufficient knowledge of their assigned test cases. This is simply considered by many organizations as an additional internal QA measure.

Main Concepts of Testing: Testing Levels

Throughout the implementation of any given project or system change, testing occurs at more than one successive level. Although Unit Testing appears to be the mainstream starting point, some project managers and business analysts prefer that testing preparation activities or preliminary testing, such as Scaffolding or Shakedown Testing, be conducted before or alongside Unit Testing. Let’s look at the most commonly used testing levels:

1. Unit Testing: Also known as module testing, this is conducted on a single subprogram, module or component.
- The objective of this type of testing is to use test data to observe the behavior of a single unit, such as a module or subprogram.

- The process of restart and recovery should also be considered when completing Unit Testing.

- Programmers are usually responsible for conducting this type of testing. It is recommended that business analysts work with their programmers to ensure that a successful Unit Testing Plan is developed and that it is aligned with the overall project business requirements.

2. Integration Testing: Two or more individual units make up the core focus of this type of testing.
- Similar to Unit Testing, programmers are responsible for performing this type of testing. However, Integration Testing should be performed each time any major system changes likely to impact a unit occur. This is to ensure that all impacted units continue to operate properly, whether together or independently.

- Also, it’s important to note that the combined behavior of units should be tested at this level.

3. System Testing/Process Validation: This is meant to ensure that all system components and business processes are executable. This testing method also ensures that these system components and business processes remain operational while being integrated with one another or independent of each other. This level of testing also ensures that interfaces interact properly with external applications and business processes.

4. Security/Vulnerability Testing: Verifies that authorized access is established for applications, data and stored procedures. Additionally, possible areas of vulnerability should be discovered and addressed.
- Risk assessments for security access and vulnerability threats are essential in establishing solid test cases for this type of testing.

5. User Acceptance Testing (UAT): This testing should be traceable back to each user requirement in relation to the system/business process being changed or implemented.
- User’s environment, constraints and operational procedures should be simulated when performing this type of testing.

- While all testing types are critical to the overall testing success of a project, this particular type of testing is given more attention by the project manager, business sponsor and customers, because UAT test results often mirror real or production-like scenarios, and expected processing outcomes in the new system.

6. Regression Testing: The objective is to use old yet valid test cases with current test data on a modified or updated system.
- This testing is performed to ensure that the system’s overall functional stability and module interfaces continue to work properly with external applications. This testing level also ensures that the fundamental tasks of the system weren’t negatively impacted by new system changes or upgrades.

- Regression Testing is considered a part of, or support for, UAT. However, unlike the test cases for other types of testing, the test bed and outcome benchmarks for Regression Testing can often be reused and remain applicable from one project to another. The exception to this general “test data recycle/reuse” rule is the need for frequent revision in order to stay current with the latest system changes.
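The reuse idea can be sketched as a small regression test bed carried from release to release. The system stand-in, case IDs and benchmark values below are illustrative only, not from any particular tool or project:

```python
# A reusable regression test bed: each case keeps its input data and the
# benchmark outcome established in an earlier release.
regression_bed = {
    "109.1.24AB-1": {"input": {"qty": 2, "price": 5}, "benchmark": 10},
    "109.1.24AB-2": {"input": {"qty": 0, "price": 5}, "benchmark": 0},
}

def system_under_test(data: dict) -> int:
    """Hypothetical stand-in for the upgraded system's behavior."""
    return data["qty"] * data["price"]

def run_regression(bed: dict) -> list:
    """Return IDs of cases whose old benchmark no longer holds."""
    return [
        case_id for case_id, case in bed.items()
        if system_under_test(case["input"]) != case["benchmark"]
    ]

failures = run_regression(regression_bed)  # empty when stability holds
```

When a system change legitimately alters an outcome, the affected benchmark is revised rather than the whole bed being rebuilt, which is the “recycle/reuse” exception noted above.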

Other widely used testing types that are considered optional by many business analysts are:

- Parallel Testing: Testing results from an older system or a dedicated Test Environment are expected to be similar to, if not exactly mirror, those of related system processes in a newer system or the Production Environment under the same circumstances.

- This type of testing sometimes involves much larger test data volumes to parallel real-time volume processing in a production environment.

- Stress Testing: This is conducted when observing abnormal circumstances within a system or impacting its environment. Common examples include, but are not limited to, overloaded network bandwidth and insufficient memory. While we can establish many test scenarios to test such abnormal conditions, it is wise to always consider such abnormal conditions within the realm of reasonable hardware/software constraints.

Can you think of some examples of these testing levels that you have conducted in your organization?

Main Concepts of Testing: Testing Schedule

Understanding the relationship between each of the discussed testing levels is critical when developing a testing schedule. This is because a testing schedule defines both date and completion percentage benchmarks for each level or type of testing. Keep in mind that the testing schedule is yet another subset of the overall system development life cycle as well as the allocated project timeline.
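A testing schedule of this kind could be sketched as a simple table of date and completion-percentage benchmarks per testing level. The dates, levels and helper name below are illustrative assumptions:

```python
from datetime import date

# (level, benchmark date, target % complete, actual % complete)
schedule = [
    ("Unit",        date(2007, 8, 1),  100, 100),
    ("Integration", date(2007, 8, 15), 100,  80),
    ("System",      date(2007, 9, 1),   50,  50),
]

def behind_schedule(plan, today):
    """Levels whose benchmark date has passed without the target
    completion percentage being met."""
    return [
        level for level, due, target, actual in plan
        if today >= due and actual < target
    ]

# As of 20 Aug 2007, Integration is past its date at only 80% of 100%.
at_risk = behind_schedule(schedule, date(2007, 8, 20))
```

Tracking both dimensions (date and percentage) is what lets the business analyst flag a testing level as at risk before the overall project timeline slips.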

Closing Remarks

While we have spent a considerable amount of time on an overview of the essentials of testing in business analysis, we have only scratched the surface of the topic. Do you feel this information has helped you realize how to avoid common mistakes? Will becoming familiar with this testing information help us, as business analysts, better coordinate testing with the project manager, technical staff and assigned testers?

These concepts are not just applicable when supporting testing efforts as a business analyst in accordance with the BABOK; they are also applicable to several other well-known and recognized methodologies and industry standards such as RUP (Rational Unified Process), CMMI (Capability Maturity Model Integration) and even ISO 9000 (from the International Organization for Standardization).

Youssif Ansara is an IT Business Consultant who has worked with various industries including oil and petrochemicals and health care insurance, as well as entrepreneurship in the education sector. He gained his expertise from his involvement with technical business analysis and human resource management, both in the United States and abroad. He is an avid advocate of usability testing in both the public and private sectors to ensure that their systems are widely accessible. He does this by conducting accessibility assessments and speaking publicly about Section 508 of the Rehabilitation Act, as amended by the U.S. Congress in 1998 to ensure that electronic and information technology is accessible to people with disabilities. Youssif Ansara can be contacted at y_ansara@yahoo.com.

© BA Times.com 2020
