
Author: Robin Goldsmith

Survey Says Squ(Wh)at

In my experience working with and training business analysts (BAs), I’ve found a number of things that differentiate more-effective BAs from less-effective ones.

High on the list is that less-effective BAs tend to over-rely on surveys and questionnaires for data gathering. Beyond business analysis, I’ve further found that less-effective folks of all persuasions tend to over-rely on surveys for making decisions.

Surveys are popular because they seem an inexpensive way to get apparently simple information from a lot of people. It used to be that surveys were printed in bulk and either mailed or simply handed out to prospective responders, who did the main work of filling them out. Totaling scores from returned surveys takes a bit of effort but not much brainwork.

The Internet has made it even easier with online surveys/questionnaires that eliminate printing and postage costs while automating the score-counting grunt work.

Moreover, the Internet facilitates getting surveys to more people more quickly. Many online and offline business transactions trigger immediate online customer satisfaction surveys, with autoresponder software sometimes hounding the customer until the survey is completed. Lots of online transactions embed easy instant surveys, such as one- to five-star ratings, which, as with this article, usually are voluntary but sometimes must be completed in order to obtain something additional that the customer wants.

The biggest weakness of surveys/questionnaires is that they provide highly questionable data that are not a sound basis for business decisions. Data are unreliable when answers aren’t real, representative, responsive, or relevant.

Reality

“A wide, though rarely spoken of, position in the tech industry says that customer satisfaction scores are BS,” Rob Enderle wrote at http://www.itbusinessedge.com/blogs/unfiltered-opinion/dell-technologies-vs.-hp-inc-the-interesting-nuances-of-applying-nps-to-effect.html.

The ease of Internet surveys no doubt has exacerbated their unreliability by encouraging over-surveying and a resulting backlash against surveys in general. People answer anything just to get annoying surveys out of their faces.

Increasingly, responses are perfunctory or “gamed.” These days, so many service providers not only ask customers to please complete an upcoming customer satisfaction survey but also warn that any response less than “the best” will be very harmful to the typically low-level worker being rated. Consequently, customer satisfaction surveys can make even the most superficial services seem world class. In addition, people are reluctant to report even actual shortcomings of providers they like.

Representativeness

Perhaps you’re aware that in the survey business, a 2% response rate is considered good. Quick math shows that a far lower percentage of BA Times/PM Times readers instant-rate articles. A rudimentary understanding of statistics makes it evident that a 2% sample cannot reliably speak for the population. Moreover, there’s a good chance that the specific 2% is very much not representative of the population.

Those who choose to respond tend to be self-selecting from the fringes, often disproportionately on the negative end because they have an axe to grind. Sometimes they’re upset about the survey subject; but they could just as well be cranky about something unrelated, yet take it out on the survey instead of kicking the dog. Then, too, there are those who take a certain perverse pleasure in “never giving a perfect score,” which further throws off the sample’s meaning.
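To make the self-selection effect concrete, here is a minimal, purely hypothetical Python sketch, not drawn from any actual survey data, in which dissatisfied people are assumed to be far more likely to respond. The population weights and response probabilities are invented solely for illustration.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 customers whose "true" satisfaction
# averages a bit over 4 on a 1-to-5 scale (weights are invented).
population = [random.choices([1, 2, 3, 4, 5],
                             weights=[2, 5, 13, 40, 40])[0]
              for _ in range(10_000)]

# Assumed self-selection: dissatisfied people respond far more often.
# These probabilities are illustrative, not empirical.
response_prob = {1: 0.15, 2: 0.10, 3: 0.02, 4: 0.01, 5: 0.02}
respondents = [score for score in population
               if random.random() < response_prob[score]]

print(f"Population mean score: {sum(population) / len(population):.2f}")
print(f"Respondent mean score: {sum(respondents) / len(respondents):.2f}")
print(f"Response rate:         {len(respondents) / len(population):.1%}")
```

Under these invented assumptions, roughly 2% of the population responds, yet the respondents’ average score lands noticeably below the population average; anyone relying on the survey score alone sees a distorted picture.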

Additionally, negative responses often reflect that the respondent didn’t want to attend the event or that it wasn’t the right event for them, which means they actually were rating something other than the event itself; but few surveys provide a means to reveal such hidden measures.



Responsiveness

Moreover, some survey responses are simply erroneous. I’ve seen numerous surveys with very positive comments yet all “1” scores, evidently marked by mistake, perhaps because the respondent assumed 1 was the best when in fact it was the worst score on that particular survey. Sometimes it’s the opposite, where poor comments nonetheless come along with high scores. One-click responses could easily be typos, but there’s no way to tell whether or how they were intended.

Professional survey companies charge a lot of money because they use very scientific techniques. We mainly become conscious of them during elections, but they are used widely in marketing and elsewhere throughout the year. They go to great lengths to understand demographic details and ensure their samples are in fact representative of the populations they intend to measure.

Professional surveys usually are reasonably accurate, except of course when they’re not. For instance, there was a famous photo of victorious 1948 Presidential candidate Harry Truman holding a prominent newspaper with a blaring headline that Truman’s opponent Tom Dewey had won. The paper had over-relied on widely accepted survey data rather than waiting for the actual vote, which contradicted the mis-sampled survey. More recently, Mitt Romney and Hillary Clinton found election results failed to match what their surveys were saying.

Also, not all survey-takers are created equal. For many topics, only answers from knowledgeable people will have value. Yet it can be nearly impossible to attract responses from them or to filter out respondents who are not suitably qualified.

Relevancy

Professional surveys also rigorously test each survey question to give confidence it accurately measures its intended topic. Wording can dramatically affect how questions are interpreted and answered. In contrast, typical analysts’ survey questions almost always are simply whatever wording the analyst happens to think of at the time.

Sometimes the question’s wording is unclear or misleading to the survey taker, even though the author always thinks their questions are clear. Thus, it’s essential to have mechanisms, included in the survey and/or through follow-up, that help us understand how the responder interpreted the question and their response. However, such essential additional effort rapidly erases the survey’s supposedly low cost.

I’m constantly annoyed by surveys that force choosing from a set of answers where none conveys what I want to tell them—and what they probably really needed to know but still won’t find out until it’s too late, if then.

Surveys are best for capturing information about simple facts, such as “How many computers are there in your house?” Yet consider how differently members of the same household could answer even that seemingly simple question. Moreover, such facts tend to have limited value. Survey authors usually want to find more complex information, often by asking respondents to compose answers rather than selecting from pre-defined choices. Few people are good at coming up with or articulating open-ended responses; and scoring such answers tends to defeat surveys’ presumed economies.

Consequently, it’s very common for a survey’s main finding to be the too-late realization of which questions should have been asked instead of the ones that were, which turned out not to provide relevant information.

Decision Making

It should be apparent from the above typical weaknesses of surveys why effective BAs use them sparingly and why over-relying on them characterizes ineffective analysts. But the issue goes beyond analysts, because the bigger harm comes from those who act based solely on survey ratings. And we all tend to do it to some extent, often with far greater impacts than we may realize.

These days, a one-digit survey rating can disproportionately determine the choice of things such as movies, restaurants, and books/articles/courses. I don’t know about you, but I often find the ratings don’t match my tastes. For instance, I continually am disappointed by Academy Award winners, though not always.

Not only are such measures likely to be unsound, so too are common reactions to them. High or positive survey scores tend to be taken for granted, whereas disproportionate weight often is attached to low or negative survey scores.

Thus, you may or may not read a highly-rated article or book; but you almost certainly won’t read one with a low rating, even though the low rating may be totally specious, mistaken, or even malicious. When you won’t read the maligned piece, you won’t counter the low rating with your own perhaps higher rating, so the stain persists.

When a one-digit rating steers me to a movie, article, or restaurant I don’t like, or away from one I perhaps would have liked, the significance and impact on me are fairly small; but it can make all the difference to the provider whose service or product I and others like me do or don’t choose.

Not surprisingly, survey ratings with potentially big financial impacts not only are shaky at best but now increasingly are subject to gaming. Besides obviously-seeded scam ratings placed by bribery or automated tools, high-rating incentives have led to various borderline-ethical techniques to bump up scores. For example, I’m aware of training that teaches how to make essentially any book an Amazon best seller by inducing five-star reviews as soon as Amazon lists the title. Authors can be disadvantaged by working the old-fashioned way and depending on real readers’ actual ratings.

Issues are magnified when such unreliable, solely-ratings-based decisions involve matters of material consequence, especially in the realm of projects and business analysis.