

Well-defined Data Part 9 – Point in Time Attributes

In this article we discuss point-in-time attributes — more commonly referred to as dates and times.

Dates are points on time scales we know as calendars, and times are points on scales we view as clocks. From a well-defined data perspective they are actually quantities, similar to those described in Part 7. Like other quantity attributes, points in time have units of measure, precision, and their values can participate in calculations.

A date, time, or combined date/time attribute represents when something occurred (or will occur). There are business entities, such as purchase, flight, and journal entry, which represent events of interest to an organization. These entities act as the context for one or more point-in-time attributes. Flights, for example, will have a number of point-in-time attributes, including ‘scheduled departure date/time’ and ‘scheduled arrival date/time’.

There are business entities such as customer, product, and location, which are not events themselves but can act as the context for events related to them. A customer that is an individual can have a ‘date of birth’. A product can have a ‘launch date’. A location can have daily opening and closing times. Each of these point-in-time attributes represents the ‘when’ aspect of an event of interest to the organization.

Point-in-time Units of Measure

The most common calendar system in civil use is the Gregorian calendar, with its epoch some 2,000 years ago (it has no year zero) and its unit of years, subdivided into 12 months which in turn are divided into 28 to 31 days. Google advises that there are some 40 other calendar systems used around the world, most associated with a religion. Each has its designated year of origin, sub-units of months, and days within months.

The commonly-recognized time scale divides a day into 24 hours with sub-units of minutes and seconds. The zero-point of a day is most often assumed to be midnight local time. Organizations whose events can take place in different time zones require an additional unit-of-measure attribute whose role is to identify the time point’s specific time zone. Again, see Part 7 for its discussion regarding associating units of measure with quantity attributes.
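As a minimal sketch of this idea (using Python’s standard zoneinfo module and an illustrative flight, not any particular system’s data model), a time-zone-aware point in time lets the same instant be expressed in each relevant local time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A hypothetical flight's scheduled departure, captured with an explicit time zone.
departure = datetime(2024, 3, 15, 9, 30, tzinfo=ZoneInfo("America/New_York"))

# The same instant expressed in the arrival city's local time.
arrival_local = departure.astimezone(ZoneInfo("Europe/London"))

print(departure.isoformat())      # 2024-03-15T09:30:00-04:00
print(arrival_local.isoformat())  # 2024-03-15T13:30:00+00:00
```

Note that without the time-zone attribute, the two values above would appear to be four hours apart when they are in fact the same point in time.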

Point-in-time Precision

Different organizations or industries can have different precision requirements for the same event. Consider, for example, a person’s birth event. ‘Day’ is the most common precision. However, organizations that deal with official birth records record the event date/time to the nearest ‘minute’. Conversely, book publishers, book retailers, and libraries need only an author’s year of birth, used to distinguish between authors with the same (or similar) names.

If the point-in-time attribute involves precision finer than ‘day’, the options progress from hours, to minutes, to seconds, and finally some number of decimals of seconds. The nearest ‘minute’ is usually sufficient for human-related activities. When greater precision is needed, microchip-based timekeeping devices are utilized to source a value (e.g. a POS terminal recording the time-of-purchase transaction to the second).

The easiest way to think of ‘date’ or ‘time’ precision is digitally. Think of a digital clock that displays both the date and time, with a 4-digit year and 2-digit months and days. A digital clock (like a point-in-time attribute) has no concept of precision-based ‘rounding’ (e.g. to the nearest year, month, or day). The same applies to point-in-time precision for ‘hour’, ‘minute’, or ‘second’ values.
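A rough illustration of this ‘digital clock’ view of precision, assuming Python and a hypothetical helper function: finer units are simply dropped, never rounded.

```python
from datetime import datetime

def truncate_to_minute(dt: datetime) -> datetime:
    """Drop seconds and anything finer, the way a digital clock displays time.
    There is no rounding to the nearest minute."""
    return dt.replace(second=0, microsecond=0)

t = datetime(2024, 5, 1, 14, 37, 59)
print(truncate_to_minute(t))  # 2024-05-01 14:37:00 (not 14:38)
```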



Period-defining Time Point Attributes

Business time periods can be specified in one of three ways:

  • Two different point-in-time attributes — one marking the start and the other marking the end.
  • One point-in-time attribute and a quantity attribute representing the duration.
  • A sequence of time-point attributes indicating a start point, and the subsequent start point implying the end point of the previous period.

Two different Points — Two different point-in-time attributes are defined, both with the same units of measure and precision. Naming and/or definition should make the role of each clear (start or end marker) and that the two attributes are associated. Business rules need to be confirmed indicating if two or more periods are allowed to overlap and if gaps in time are allowed between periods. E.g. the event-entity ‘staff assignment’ can have overlapping periods indicating job sharing and gaps indicating vacancy periods.

Time Point and Duration — Given either a start or end point in time, plus a duration, the other time point can be derived. For example, given a ‘contract’ start date and a contract duration of four weeks, the end date can be calculated.
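The contract example above can be sketched in a few lines (the dates and duration are illustrative only):

```python
from datetime import date, timedelta

# Deriving a 'contract' end date from its start date plus a four-week duration.
contract_start = date(2024, 1, 8)
contract_duration = timedelta(weeks=4)

contract_end = contract_start + contract_duration
print(contract_end)  # 2024-02-05
```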

Sequential Points in Time — In cases where time periods are contiguous (i.e. no overlapping and no gaps), only one time point attribute is needed. For example, foreign exchange rates. At the point in time a new rate value becomes effective, the previous value ceases to be in effect. Whether to define the start point and assume the end point, or the reverse, depends on which best represents the business event. In the exchange rate example, the new rate taking effect is clearly the event.

Well-defined Point-in-time Attributes

From a data dictionary template perspective, a well-defined point-in-time attribute should have the following properties addressed:

  • Name — Follow any organizational standards for naming date, time, and date/time attributes. Where one of a pair of attributes specifies a time period, try to be consistent with pair names (e.g. begin/end, start/stop, from/to, or effective/expiry)
  • Definition — Describe the business event that time point is marking. Include examples that illustrate precision and any rules that may apply.
  • Unit(s) of measure — Specify calendar system if other than Gregorian (or explain in Definition). For Time, specify ‘local time’, or indicate the attribute that identifies the associated time zone.
  • Precision — The precision that is required for business purposes. Ordinary dates can assume ‘day’ as the precision. Time (or date/time) values typically need not be captured more precisely than the minute.
  • Associated-Period Boundary Attribute — Where the time point is one of a from/to pair, identify the other member of the pair.
  • Future values allowed (Y/N)?
  • Historic values allowed (Y/N)?
  • Derivation — For a date or time being derived, a business definition or rule describing the derivation (e.g. ‘Best Before’ date derived from Product Batch ‘Processed Date’ plus Product Type’s ‘Shelf Life Days’). Include examples using business values.
  • Validation — Reference business rules or describe. E.g. Value should not be more than 50 years beyond current date.
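The ‘Best Before’ derivation used as an example in the Derivation property above could be sketched as follows (the attribute and function names are illustrative, not prescribed by any template):

```python
from datetime import date, timedelta

def best_before(processed_date: date, shelf_life_days: int) -> date:
    """Derive a batch's 'Best Before' date from the Product Batch's
    'Processed Date' plus the Product Type's 'Shelf Life Days'."""
    return processed_date + timedelta(days=shelf_life_days)

# Worked example using business values: processed 1 June, 14-day shelf life.
print(best_before(date(2024, 6, 1), 14))  # 2024-06-15
```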

Coming next – 5 Questions for Business Stakeholders About Their Data Requirements

The remaining topics applicable to well-defined data are wrapped up in the next (and final) article in this series. The topics are addressed in the form of questions, such as ‘optional or mandatory’, that require responses from business stakeholders. The questions, and their answers, are applicable to either attributes or relationships.

Click here for Part 1 – Series Introduction

Smart Business Analysts Ask the Obvious Questions

“Never make assumptions” is some of the most popular advice given to business analysts. How to actually avoid making them is the obvious question that so rarely seems to have an answer included.

But let’s rewind and approach this from an entirely different angle. Let’s talk about asking obvious questions like that instead. Now I know we’re all fond of the saying “there are no stupid questions,” but we all know that twinge when we worry for just a moment that a question might be too obvious. There are a bunch of reasons to ask these questions, though.

The first is to remove the stigma of expertise. Once people assume you’re an expert, they stop telling you things that they think you already know. This is maybe the most dangerous type of assumption: the kind others make on your behalf. You don’t know these assumptions are being made, and you have no way to discover them as they’re occurring. You might catch them in a requirements review session, or you might catch them in user acceptance testing, or you might catch them after go-live. If you’re asking me though, I’d rather catch them much earlier than any of those touchpoints. If we make a point of verbalizing our thoughts when we catch ourselves thinking something like “this probably means”, we are actively encouraging people to talk to us like they’re training us, rather than as a peer.

Dispelling the illusion of expertise can also be vital in relaxing the room. When people are dealing with someone they perceive to be an expert, some folks will feel pressure to keep up with the expert, or to demonstrate their own expertise. This can often be exacerbated when their manager is in the room. Lots of people are understandably uncomfortable with having their experience and expertise being outshone in front of their boss. This can manifest in all kinds of counterproductive behaviours, but even if it doesn’t, why would we ever want a stakeholder, subject matter expert, or user to feel intimidated? This completely undermines any sense of engagement, and it’s how we can do all the right steps in building consensus and yet still end up with users that are adamantly opposed to change. Reducing resistance to change is one of the outcomes organizations typically expect when they make the investment to involve business analysts, so it’s important that we do everything in our power to ensure that we’re delivering in that area.



Another way that becoming less of an expert in the eyes of your stakeholders can be of benefit to you is that it tends to lead to more realistic expectations. I’m not suggesting that we should pursue lowered expectations as a means of achieving success more consistently, but it is important that the users we represent have a realistic impression of what we actually know. We often reveal to people aspects of the bigger picture that exist outside of their bubble, and this leads to an impression of being all-knowing. That sometimes translates into an assumption that we must know their piece of the process just as well, and we need to actively work against that. If user-level stakeholders think you must know everything, they’ll be sorely disappointed when the solution you deliver doesn’t address their concerns — even if those specific concerns never came up.

But in addition to improving the quality of our work, asking these types of questions can reduce the risk of project delays.

We can apply this general technique to business processes. Everything happens for a reason, whether or not it’s a good one. Understanding why each stakeholder thinks each step is necessary, or what it accomplishes in the big picture, prevents assumptions. Once you’ve got your swim lane diagram finished, it should be easy for you to point to any step and explain what its purpose is, or what business value we think we’re getting from it. If not, then you’ll risk finding yourself frequently having to decide whether we can make an assumption or if we need to do additional follow-up investigation. It usually doesn’t add much time to ask what the benefit or necessity of a given activity is, but it can add substantial delay to a project to have to schedule repeated follow-up meetings.

Asking the obvious question can also be an effective means of bridging resistance to change. If we think it’s possible that a process can be substantially improved by a change that the users or process owners may find radical, we may need to challenge some deeply held assumptions.

We can do that by trying to sell the change on its pros and cons, but this doesn’t instill a sense of ownership in the people affected by the change. Sometimes that’s acceptable. But where we encounter resistance, we might be wise to consider asking instead of telling.

We can dig into greater detail on the steps where we think there might be opportunity for change, which can naturally allow us to begin asking for more information on the purpose of those steps. By asking the obvious questions, we challenge our users and stakeholders to explain the reason behind a process, which brings them on board in thinking about it from a requirements perspective rather than a solution perspective. When they get to participate in discussing whether a change is viable from the perspective of trying to meet the underlying requirements, we get buy-in built into the solution.

Is this the answer to how not to make assumptions? It’s one way that might help. Where it doesn’t, I think you’ll still find ample value in asking anyway.

Well-defined Data Part 8 – Attributes That Classify

A classification attribute allows the recording of a meaningful fact about an entity instance, with that fact drawn from a pre-established set of values.

Common forms for presenting a set of such values to users include drop-down lists, checkboxes and radio buttons.

This article will discuss three levels of complexity of attribute-based classification:

  • Self-defining — where the attribute represents something that is either true or not.
  • Value-only — where any one of the predefined values may be applicable to a given entity instance.
  • Complex — where one value that is applicable to an entity instance impacts other values that can be applicable to that instance.

Naming and maintaining value sets will also be discussed.

Self-defining Classification Attributes

A self-defining classification attribute is one where the fact it represents either applies or doesn’t apply to a given instance. Examples include a person holding a valid passport (or not) or eggs being from free-range chickens (or not). Because the valid set of values is a simple yes/no (or true/false) pair, it’s up to the attribute name and definition to provide the business meaning of a positive or negative value. The name of the attribute need only be descriptive enough to allow people to understand the general nature of the classification — e.g. ‘Has Valid Passport’, ‘Is Free-Range’. The attribute’s definition should provide further business details related to an appropriate choice in a given instance.

NOTE: It’s recommended that self-defining classification attributes represent the active (or positive) condition. This avoids responses that involve a double negative — e.g. ‘No Valid Passport’ requires a response of ‘No’ to indicate that the person actually has a valid passport.

Also worth noting is that there is a difference between a self-defining classification attribute and a classification attribute that has only two possible business values. For example, in accounting systems, a journal entry is classified as being either a ‘debit’ or a ‘credit’. It would be possible to define a classification attribute named ‘Is a Debit’, where a value ‘false’ implies that the entry is a credit. However, from a well-defined data perspective, the value set should be composed of the valid business values.
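To illustrate the distinction, here is a minimal sketch in Python (names are hypothetical): the journal entry classification is modelled as an explicit value set rather than a boolean, so the business meaning is carried by the values themselves.

```python
from enum import Enum

class EntryType(Enum):
    """The valid business values for a journal entry's classification."""
    DEBIT = "debit"
    CREDIT = "credit"

# Compare: a boolean 'is_a_debit' forces readers to infer that False means
# credit, whereas the explicit value set states the meaning directly.
entry_type = EntryType.CREDIT
print(entry_type.value)  # credit
```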



Value-only Classification Attributes

A value-only classification attribute is one where the only thing the organization cares about is the business-meaningful values in the classification scheme. These values may be made available for selection or be derived based on defined business rules. For example, a car dealership will classify each car by color, from a fixed value set relevant to their business, including ‘black’, ‘white’, ‘red’, etc. This set of values is sufficient for sales staff and car buyers to find all of the cars they are looking for in a specific color.

Conversely, to an organization that manufactures cars, paint color is a critical component in the manufacturing process. In this context, ‘paint color’ would be a full-fledged business entity with its own business entity identifier, plus naming, quantifying, and classification attributes of its own.

Complex Classification Attributes

A complex classification attribute is one that not only has a set of valid values, but those values involve relationships to other classification values. Continuing with the car business example, the dealership will deal with cars from different manufacturers, which will call for one value set for a car manufacturer and a second value set for the car model. There is a parent/child-type relationship between these two attributes, with a manufacturer being the parent of multiple car models. Implementation of a parent/child relationship in a user interface might involve the user initially selecting a parent value from one combo box and then selecting a child value from a second combo box, which would list only those child values relevant to the selected parent.
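The parent/child filtering described above could be sketched as follows, with hypothetical manufacturer and model value sets:

```python
# Hypothetical manufacturer -> model value sets illustrating the
# parent/child relationship between two classification attributes.
MODELS_BY_MANUFACTURER = {
    "Acme Motors": ["Roadster", "Hauler"],
    "Zenith Cars": ["Comet", "Nova", "Pulse"],
}

def child_values(parent: str) -> list[str]:
    """Return only the child values relevant to the selected parent,
    as the second combo box in the user interface would."""
    return MODELS_BY_MANUFACTURER.get(parent, [])

print(child_values("Zenith Cars"))  # ['Comet', 'Nova', 'Pulse']
```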

Another type of relationship between classification attribute values involves allowable transitions within the same value set. For example, consider the case of a business entity that has a defined set of status values, but the business wants to ensure that an instance is only allowed to transition to a selected subset of other values based on its current ‘status’ value. In addition to the value set, each value needs to identify the other values that it can transition to. State transition diagrams are good for graphically representing valid value transitions.
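A minimal sketch of such an allowable-transitions check, using a hypothetical order-status value set:

```python
# Hypothetical status value set where each value identifies the other
# values it can transition to.
ALLOWED_TRANSITIONS = {
    "draft": {"submitted", "cancelled"},
    "submitted": {"approved", "rejected"},
    "approved": set(),
    "rejected": {"draft"},
    "cancelled": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Check whether an instance may move from its current status to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("draft", "submitted"))  # True
print(can_transition("approved", "draft"))   # False
```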

Classification Attribute Naming

The terms type, class, and category are often used when naming a classification attribute (e.g. ‘Customer Type’, ‘Product Category’, ‘Class of Service’). An organization’s users — familiar with existing business processes that involve one of these generically-named attributes — will also be familiar with their value sets. A classification attribute having a name that provides no clue regarding the classification scheme is only ‘well defined’ when examples of its values are available. For example, the name ‘Customer Type’ is, by itself, meaningless. Add example values ‘new’ and ‘existing’ and all becomes clear.

Value Set Change Process

The maintenance of value sets within an IT-based system typically takes place under the ‘Administration’ functionality. The process should ensure that all change requests originate from an authorized source. Users of the classification scheme ideally are given advance notice of any changes.

A value that becomes no longer applicable for the organization, and should therefore no longer be available for selection, should be end-dated rather than deleted. Similarly, a newly added value should have an effective date associated with it, unless all new values for a given classification scheme take effect immediately.

Additional complexity in changing a value set arises when the classification values are involved in business rules, process flows, or interfaces to other systems. Any added value needs to be accounted for in the rule specification, in existing or added process flow decision points, and/or interfaces. Both the value change and the associated system changes will need to be tested before they are put into production.

Well-defined Classification Attributes

The attribute should have the best name that the organization can provide. Similarly, the values, when textual, should also be as meaningful as possible. As with attributes that name (discussed in Part 6 of this series) the organization may want, in addition to the full ‘name’ of each value, an abbreviated and/or code value. When this is the case, these should be included as part of the definition.

At attribute definition time, the full value set may or may not be known or available. If not, or if it’s a large value set, enough examples should be included to make the classification scheme understandable. The definition should indicate whether the values are examples or they represent the complete value set.

Where classification values are derivable, that derivation should be identified, either as part of the definition or by referencing the derivation business rules (maintained separately).

The example values provided should be sufficient for designers to know what is needed from a database-definition perspective. The examples should be indicative of field size, data type, and, for numeric value sets, precision, and so these properties need not be specified separately.

NOTE: As mentioned at the beginning of this article, there are a number of ways value sets can be presented in user interfaces (e.g. combo-boxes, checkboxes, radio buttons). Usability is a design issue and, as such, is outside the scope of this series.

Coming in Part 9 — Point in Time Attributes

Click here for Part 1 – Series Introduction

Well-defined Data Part 7 – Attributes that Quantify

Having discussed attributes intended to name entity instances in Part 6 of this series, we move on to attributes intended to satisfy the need to say something quantitatively about an instance.

Well-defined quantity attributes require particular attention be paid to their unit of measure (UoM) and precision.

Numbers versus Quantities

There are attributes whose values only contain numeric digits, such as Credit Card Number. There are others, such as Part Number, whose name implies that values are numbers, but in some organizations, values of these attributes are allowed to include alphabetic characters. The objective of these sorts of attributes is actually to name or identify an entity instance, not to quantify it.

Genuine quantities have magnitude. A value can be bigger, smaller, or the same size as another value. If you double a quantity value (i.e. multiply it by 2), the resulting value is twice as big (unless of course the original value was zero). It would not make business sense to double a credit card number, or subtract one from another.

Unit(s) of Measure (UoM)

The following table contains types and examples of units of measure commonly used in relation to quantity attributes.

Type of Unit of Measure   Example Measurement Units
Length   Feet, Metres, Miles, Kilometres
Area   Square Feet, Square Metres
Volume   Cubic Feet, Cubic Metres, Gallons, Litres
Weight   Ounces, Tons, Grams, Kilograms
Time   Seconds, Hours, Days, Years
Temperature   Degrees Fahrenheit, Degrees Celsius
Electrical   Amps, Watts
Currency   US Dollars, GB Pounds, Euros
Count   Each, Carton
None   Percentage, Multiplier

NOTE: In the table above, ‘None’ represents types of quantities that are intended to be used in calculations but themselves have no unit of measure. E.g. An additional 10% discount, intended to be applied to the value contained in a ‘price’ attribute.

The unit of measure that applies to a given quantity attribute may be defined globally for the organization, apply to a number of attributes within a given entity, apply to a single attribute, or be different for each attribute instance.

Organizational Level — An organization may deal in a single unit of measure for a given measurement type. For example, the organization’s customers and suppliers are all local, and so all dealings with them are always in one currency. In such situations, the unit of measure can be defined once, as an assumption or as a non-functional requirement. E.g. “All currency amounts represent US Dollars.” 
Entity Level — An entity may contain a number of quantity attributes that are of the same unit of measure type, and for any given instance, all of the quantities involving that UoM type will be of the same units. E.g. An Order entity with attributes Net Amount, Tax, and Gross Amount all being in a given currency for an order. Each of these attributes can be identified as being of the ‘Currency’ UoM type, with a separate Currency attribute in the entity used to identify the specific currency that applies for a given order instance.
Attribute Level — A quantity attribute within an entity can always involve values of the same unit of measure. E.g. Hours Worked. Even if the UoM is part of the attribute name, it should still be defined explicitly as a property of the attribute within the data dictionary.
Entity Instance Level — In an entity where a quantity attribute can involve a different UoM per instance, one or more additional attributes will be required to identify the unit(s) that apply. E.g. A Service Contract with the quantity attribute Charge-out Rate, where for one instance the rate can be in US Dollars per hour and for another in Euros per day. In the service contract example there would need to be one attribute to capture the currency type and another for the applicable unit of time.
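The entity-instance-level case could be sketched as follows, using the Service Contract example with hypothetical attribute names:

```python
from dataclasses import dataclass

@dataclass
class ChargeOutRate:
    """Hypothetical service-contract rate where the units vary per instance,
    so the currency and unit of time are captured as separate attributes."""
    amount: float
    currency: str   # e.g. 'USD', 'EUR'
    per_unit: str   # e.g. 'hour', 'day'

# One contract instance billed in US Dollars per hour, another in Euros per day.
rate_a = ChargeOutRate(amount=95.0, currency="USD", per_unit="hour")
rate_b = ChargeOutRate(amount=600.0, currency="EUR", per_unit="day")
```

Without the two unit attributes, the bare amounts 95.0 and 600.0 could not be meaningfully compared or used in calculations.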

Precision

For most quantity attributes, their precision can be specified as a number of decimal places required by the organization. E.g. integer indicating no decimal places, or cents for currency quantities, implying two decimal places. The following are examples of precision that require special attention when defining the quantity attribute.

Smallest Reportable Time Increment — Many time recording systems capture the time worked in hours and portions of an hour. Typically, those portions of hours are restricted to specific increments. If the portions are in units of minutes, the precision may actually be limited to increments of 15 (i.e. 0, 15, 30 or 45). If the hours being recorded allow decimal values, up to two decimal places are allowed, but the actual precision may be limited to quarters of an hour (.25, .5, .75).
Orders of Magnitude Quantities — Where a quantity attribute represents what an organization considers a very large amount, the precision can be defined as an order of magnitude. E.g. the value 1 intended to represent one million, or one billion. The order of magnitude quantity could be defined to allow some number of decimal places, e.g. 1.3 indicating the value ‘one million, three hundred thousand’.
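The smallest-reportable-increment case above could be validated with a sketch like this (assuming quarter-hour increments; the function name is illustrative):

```python
# Valid decimal portions when hours are restricted to quarter-hour increments.
VALID_FRACTIONS = {0.0, 0.25, 0.5, 0.75}

def is_valid_quarter_hours(hours: float) -> bool:
    """True when the decimal portion of recorded hours is limited to
    quarters of an hour (.0, .25, .5, .75)."""
    return round(hours % 1, 2) in VALID_FRACTIONS

print(is_valid_quarter_hours(7.75))  # True
print(is_valid_quarter_hours(7.4))   # False
```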


Sources of Quantity Values

As with other types of data, quantity values can be provided from sources external to the organization or from internal sources. A third source, in the case of quantity attributes, is derivations.

Externally-Sourced Values — Quantities that are sourced externally in real-time can be validated individually. The real-time process should be designed to deal with an invalid value, preventing the process from completing successfully if necessary. When batches of records containing quantities are received, the business needs to decide how it wants to deal with errors — either rejecting only those records that have problems or rejecting the entire batch, until the invalid values have been dealt with.
Internally-Sourced Values — When a quantity is sourced internally, it is useful to identify the organizational role(s) that have responsibility for providing values. Some quantities, often price-related, are decided by product owners. Other quantities are simply part of an operational process, where staff members record values that come to them as part of the process. Where money is involved, and potentially large amounts, there is often a ‘separation of duty’ (SOD) process where a dollar value is entered by one person but required to be validated by a second person before the process is allowed to complete.
Derivable Values — Where two or more values from different attributes are used to produce a new value of interest to the organization, an attribute should be defined to represent the derived value, associated with the entity it quantifies. The derivation should be described as part of that attribute’s definition.
NOTE: The derived attribute represents values that are meaningful to the business. Designers are left to decide whether a derived value should be physically stored, or derived as needed.
Both UoM and precision are important when defining a derivation. As the saying goes, “You can’t add apples and oranges.” Also to be considered is where two or more quantities are of the same units, but of different orders of magnitude. Those magnitudes will need to be brought into alignment within the derivation.
NOTE: Where derivations involve more than multiplying or dividing two or more decimal values, rounding can be an issue. Where and how to round is outside the scope of these articles.

Well-defined Quantity Attributes

From a data dictionary template perspective, a well-defined quantity attribute should have the following quantity-specific properties addressed:

  • Unit(s) of measure — Even if the attribute name says something about its units, the unit(s) should be identified explicitly. Where the unit(s) vary per instance and are captured in a separate attribute, that attribute should be referenced.
  • Precision — Most often a simple statement of the number of decimal places. Exceptions, as discussed above, should be described in text.
  • Maximum — The business question that should be answered by a subject matter expert is, “What’s the largest value of this quantity ever encountered or that needs to be catered for?” NOTE: Using nines (e.g. 999,999) to describe a large value is not business oriented. If, for example, the answer to the question is ‘850,000’ then designers will understand what’s required.
  • Can be negative? — yes/no
  • Zero ok? — yes/no
  • Derivation — For the attribute being derived, a business definition or rule describing it. This can be in the form of an algebraic formula, a step-by-step process, a flow chart, or formal business rule definition language. Ideally one or more worked examples containing realistic values would be included.

Coming in Part 8 — Attributes That Classify

Measuring Usability with the System Usability Scale

Beauty, it is said, is in the eye of the beholder. As proof, ask a group of friends or colleagues to verbally explain how to measure beauty, then sit back and watch the entertainment as people struggle to express verbally what is visually obvious to them.

Business Analysts face a similar challenge when we are asked to measure non-functional requirements. Measuring functional requirements is obvious, like recognizing the difference between black and white. Did the function achieve its desired outcome or not? When performing the function, were there any steps missing or extra actions required? Non-functional requirements, on the other hand, can be like measuring the various shades of gray that exist on the spectrum between black and white. Usability is a non-functional requirement that is associated with users’ impressions of the system design, and impressions, like beauty, are often difficult to quantify. I am going to introduce you to the advantages of the System Usability Scale (SUS), which I have used with success on projects to help quantify the abstract concept of usability.

The System Usability Scale was developed over 30 years ago by John Brooke at Digital Equipment Corporation (DEC). It is a survey that contains 10 statements:

  1. I think that I would like to use this application frequently
  2. I found the application unnecessarily complex
  3. I thought the application was easy to use
  4. I think I would need the support of a technical person to be able to use this application
  5. I found the various functions in this application were well integrated
  6. I thought there was too much inconsistency in this application
  7. I would imagine that most people would learn to use this application very quickly
  8. I found the application very cumbersome to use
  9. I would feel very confident when using the application
  10. I would need to learn a lot of things before I could start using this application

Each statement has five potential responses based on a 5-point Likert scale – strongly disagree (1), disagree (2), neutral (3), agree (4), strongly agree (5) – where the numbers in parentheses are the point values earned by each response. There is a 3-step process to convert the raw responses onto a 100-point scale:



  1. Responses to odd statements: subtract one from each point value
  2. Responses to even statements: subtract each point value from 5
  3. The results are now normalized from 0 to 4. Add up the total normalized points and multiply by 2.5 to convert from a 0 to 40 point scale to a 0 to 100 point scale.
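The three steps above can be sketched as a small scoring function (a sketch for illustration, not an official SUS implementation):

```python
def sus_score(responses: list[int]) -> float:
    """Convert ten raw SUS responses (values 1-5, in statement order)
    to a 0-100 score using the three steps described above."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:       # odd-numbered statements (1, 3, 5, 7, 9)
            total += r - 1   # subtract one from the point value
        else:                # even-numbered statements (2, 4, 6, 8, 10)
            total += 5 - r   # subtract the point value from 5
    return total * 2.5       # scale 0-40 up to 0-100

# All-neutral responses normalize to 2 points per statement: 20 * 2.5 = 50.
print(sus_score([3] * 10))  # 50.0
```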

A SUS score of 68 is considered to be average. The “Measuring Usability with the System Usability Scale” article has more information about how to evaluate scores.

There are a couple of things to keep in mind when using the System Usability Scale. First, I would be cautious when using the SUS to compare something that is familiar with something that is unfamiliar. In my experience, users that were critical of a new application, a new process or some other change to their way of organizing their work adapt to the change over time. Eventually, years later, when faced with another change to the item in question, the critics have transformed to champions, much to the bemusement of the Business Analyst whose long memory recalls the earlier resistance to change. An axiom I state to project sponsors or leaders of a change initiative is “users hate change”. SUS practitioners have found that users are indeed more favorable towards the known quantity, skewing the usability results more positively towards the old and familiar as opposed to the new and different. When presenting your analysis in this situation, you need to make your audience aware of this inherent bias.

Second, while the SUS is a good, broad measure of usability, you may need to apply other analysis techniques from your Business Analyst toolbox to help evaluate its results. For example, I have had good success applying the SUS when a team is evaluating different Commercial-off-the-Shelf applications or getting initial feedback from user training. However, even if one application is clearly considered more usable than another or trainees are more critical than you had hoped, using the results without further analysis may lead to wrong conclusions. Are there characteristics of your survey group (age, gender, company experience, etc.) that can help explain the preference? Is a training group more negative because the implementation of their department’s requirements was deferred due to cost or schedule constraints? As discussed before, are users negative about the usability of an application because it requires a completely new, unfamiliar device or process than other options? This more detailed analysis will help you recommend potential mitigation actions to address the specific needs of your situation.

If not handled properly, evaluating usability can turn into a debate based on subjective views that are impossible to analyze. The System Usability Scale is a tool that Business Analysts can use to help make usability a more objective quantity, opening up options to apply further analytic techniques to provide a more complete assessment to our stakeholders.