Survey data is a valuable source of customer knowledge. The insights gathered through surveys are used to inform critical business decisions. Yet so much effort is spent on increasing response rates, and relatively little on improving the actual quality of the data.
With such important decisions resting on survey data, organizations would do well to make sure that the information is accurate and reliable. To reach a higher level of data quality, principles from the field of User Experience Design may hold the key.
The problem with survey design
Perhaps the most important element in running a successful survey is also the most overlooked: the cognitive stress load that a survey places on respondents.
While much is said about proper methods of sampling, delivery, and follow-up, one of the most effective ways to improve data quality (and raise response rates) is by writing thoughtful, well-designed surveys that minimize respondents’ cognitive stress.
Cognitive stress in surveys
Cognitive stress refers to the level of internal apprehension and anxiety experienced by the users of a system (in this case, respondents to a survey). High cognitive stress in survey respondents undermines the data by clouding it with inaccurate responses made in confusion and error. If the stress load gets too high, respondents are likely to drop out and leave the researcher with no data at all.
Common survey design mistakes are responsible for generating most cognitive stress. A list of the minutiae of such mistakes would go on forever, but they fall into 3 broad categories:
1. Fatigue: Fatigue-related mistakes cause respondents to become mentally exhausted due to length, repetitiveness, or overwhelming complexity.
The most common fatigue error is simply making a survey that’s too long. Another frequent mistake is not including a progress indicator to show respondents how much of the survey remains. Overly complex question types can also cause fatigue: for example, page-long matrices where every question uses a 1-10 Likert scale.
2. Answerability: Answerability mistakes occur when respondents are unable to provide accurate responses due to insufficient knowledge or recall, unsatisfactory multiple-choice options, or unwillingness to provide an honest answer.
Typical answerability mistakes include writing answer sets that are not exhaustive, or options that are not mutually exclusive.
One of the trickiest answerability errors is writing multiple-choice options that don’t correspond to a realistic range of answers, thereby skewing the data you collect. For example, if your answer choices are “1-5” / “6-10” / “11-15” / “16+” but most respondents fall in a 12-30 range, the data will fail to capture the real nuance of their answers.
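To make the effect concrete, here is a minimal sketch with hypothetical response data. It buckets the same set of “true” answers twice: once with a flawed answer set like the one above, and once with ranges matched to where respondents actually fall. The data values and bucket labels are illustrative assumptions, not real survey results.

```python
# Hypothetical illustration of how poorly chosen answer ranges
# flatten the real distribution of responses.
from collections import Counter

# Hypothetical "true" answers, mostly in the 12-30 range.
true_answers = [12, 14, 17, 18, 21, 22, 25, 28, 30, 4, 8, 13]

def bucket(value, upper_edges, labels):
    """Return the label of the first bucket whose upper edge fits the value."""
    for upper, label in zip(upper_edges, labels):
        if value <= upper:
            return label
    return labels[-1]  # anything beyond the last edge falls in the catch-all

# Flawed answer set: almost everything collapses into the "16+" catch-all.
flawed = Counter(
    bucket(v, [5, 10, 15], ["1-5", "6-10", "11-15", "16+"])
    for v in true_answers
)

# Ranges matched to where respondents actually fall preserve the nuance.
better = Counter(
    bucket(v, [10, 20, 30], ["1-10", "11-20", "21-30", "31+"])
    for v in true_answers
)

print(flawed)  # most responses pile into the "16+" bucket
print(better)  # responses spread across buckets, nuance retained
```

Running a quick check like this on pilot data (or even on rough estimates of the expected range) is a cheap way to catch an unrealistic answer set before the survey goes out.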
3. Clarity: Clarity mistakes cause respondents to misinterpret or fail to understand the questions, often due to misleading word choice or poor question phrasing.
An easy way to trip up and cause clarity issues is by using terminology that is unfamiliar to your respondents. This can happen if you are not mindful of their level of expertise or exposure, and you use industry or company jargon that is not meaningful to your audience.
However, such errors can also be as simple as typos or careless editing.
All 3 types of mistakes make filling out your survey a more frustrating experience for the respondent and lead directly to lower-quality data full of inaccuracies. That’s where User Experience (UX) thinking comes in handy.
Injecting user-centric thinking
Typically, UX solutions are applied to website and app design, but surveys are just as much a product with a user experience that can be optimized.
Usability testing, a staple research method in UX design, allows one to see how real people interact with a product and where they become confused, frustrated, or tempted to give up. Applied to survey design, a usability test can serve as an enhanced pilot run, complete with in-depth recorded feedback on what might make the survey difficult or stressful.
Testing a survey with a few people before sending it out lets the researcher identify potential flaws and target problem spots for rethinking and rewriting. Once those sources of cognitive stress have been addressed, the new and improved survey will collect much more reliable data on which to base key business decisions.