The trend is towards shorter surveys—“Eight minutes is the new ten minutes,” according to Aaron Platshon, CEO of player-insights firm Tap Research. Given that, how do you square the circle of shortening your survey while packing in as many or even more questions?
The key is to use modular surveys, which trade off sample size for length. For example, 100 responses to an open-ended question are often as useful as 1,000, so show each respondent one open-ended question selected at random from a battery.
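The same random-assignment logic underlies most of the modules described below. Here is a minimal sketch, assuming a survey platform that supports custom routing logic (shown in Python); the question texts and the per-respondent seeding scheme are illustrative, not a specific platform's API:

```python
import random

# A battery of open-ended questions; the texts are hypothetical placeholders.
ESSAY_BATTERY = [
    "What do you like most about this product?",
    "What would make you recommend it to a friend?",
    "Describe a recent frustration with this category.",
]

def pick_essay_question(respondent_id: int) -> str:
    """Assign one essay question per respondent; seeding on the
    respondent ID keeps the assignment stable across page reloads."""
    rng = random.Random(respondent_id)
    return rng.choice(ESSAY_BATTERY)
```

A least-fill quota, rather than a pure random draw, would balance completes across the battery more evenly; the random draw is simply the easiest version to program.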
To pack more questions into shorter surveys (shorter, that is, from the respondents’ perspective), we use the following types of modules:
- Skip patterns – The classic module is the IF/THEN. If you’ve used the brand, answer a module about your experience; if you haven’t, answer a module about why you haven’t and what might get you to consider it.
- Monadic tests – One concept is chosen at random, then evaluated in depth. The crosstabs then compare how attitudes vary across the different concepts.
- Random essay questions – Open-ended or verbatim questions provide great color commentary about the topics being researched. But respondents quickly tire of answering them, and verbatim quality (length and relevance of response) declines with each subsequent verbatim question. We handle this with one of three types of modules:
  - An initial or final page of the survey with up to 10 essay questions, one of which is chosen at random.
  - A series of probing questions, each following up on a closed-ended question, with one probe displayed at random.
  - A prioritized essay question about attitudes towards a low-incidence brand, asking about the lowest market-share brand the respondent is familiar with.
- Random grids – We try not to use more than one grid in a study (and we don’t present that grid as a table to mobile respondents but show each row as an individual question). For CX and ESAT work with large sample sizes, where the grids are used for driver analysis, we show the overall satisfaction grid to everyone and then show only one subsequent drill-down grid selected at random.
- Random segments – Rather than ask a respondent five questions about their attitudes toward each segment (vertical market, brand, etc.), we randomly select one segment from those they are familiar with and ask those five questions just for that segment. We then crosstabulate the results by segment to show key differences.
- Matrix logic – When a longstanding client sent a dropdown grid (each cell showing a menu rather than a single radio button) of 5 columns by 7 rows, I panicked (35 questions!). Then I realized that most people would enter “Never” for most of the dropdowns. Instead of fielding it as specified, we added two all-that-apply questions, one covering the rows and one covering the columns, and autopunched “Never” behind the scenes (see the first sketch after this list). The average respondent only had to answer 8 questions.
- Iceberg matrices – We sometimes set up matrix questions with 20 or so rows but program the grid to show only the most relevant rows to each respondent, prioritizing and capping the number of rows shown. One common technique is to display only the rows that correspond to selections from a prior all-that-apply question (see the second sketch after this list). We call these “iceberg matrices” because most of the matrix is never seen by a respondent; it’s below the water line, as it were.
- Imported fields – Another way to shorten the apparent length of the survey is to hide questions and populate them from past surveys or from databases. Why ask demographics if you already have those on file? We’ve taken this one step further: a panelist is asked only the demographic questions they haven’t answered before, and we skip the ones already on file (see the third sketch after this list). This can work for other types of fields as well. For one customer satisfaction survey, we appended 15 fields representing categories of purchases made using a retail loyalty card, integrating actual behavioral data into the survey. For our own CX survey of our customers, we append data about the attributes of the project we completed on behalf of that customer.
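Here is a minimal sketch of the matrix-logic trick, under the same custom-logic assumption as before; the row and column labels and the `ask` callback are hypothetical stand-ins for real survey items:

```python
import itertools

# Hypothetical labels standing in for the client's 7-row x 5-column grid.
ROWS = [f"Row {i}" for i in range(1, 8)]
COLS = [f"Col {j}" for j in range(1, 6)]

def build_grid_answers(rows_used, cols_used, ask):
    """rows_used and cols_used come from the two all-that-apply questions;
    ask(row, col) fields the dropdown only for cells that can apply."""
    answers = {}
    for row, col in itertools.product(ROWS, COLS):
        if row in rows_used and col in cols_used:
            answers[(row, col)] = ask(row, col)  # respondent sees this cell
        else:
            answers[(row, col)] = "Never"        # autopunched, never shown
    return answers
```

A respondent who checks three rows and two columns would answer eight items in total, the two all-that-apply questions plus six dropdowns, in line with the roughly 8-question average above.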
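And a sketch of an iceberg matrix under the same assumptions; the 20 brand rows, priority weights, and cap are placeholders:

```python
ALL_ROWS = [f"Brand {i}" for i in range(1, 21)]  # the full 20-row matrix
MAX_ROWS_SHOWN = 5  # an illustrative cap, not a universal rule

def visible_rows(selected, priority):
    """Show only rows the respondent picked in a prior all-that-apply
    question, highest-priority first, capped so the grid stays short."""
    matched = [row for row in ALL_ROWS if row in selected]
    matched.sort(key=lambda row: priority.get(row, 0), reverse=True)
    return matched[:MAX_ROWS_SHOWN]
```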
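Finally, a sketch of imported fields; the profile dictionary and field names are hypothetical:

```python
DEMOGRAPHICS = ["age", "gender", "region", "income"]

def questions_to_ask(profile):
    """Ask only the demographics with no stored answer; the rest are
    populated from the panelist's profile behind the scenes."""
    return [q for q in DEMOGRAPHICS if profile.get(q) is None]

# Example: a panelist with age and region on file is asked just two questions.
# questions_to_ask({"age": 34, "region": "West"}) -> ["gender", "income"]
```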
These modular techniques are valuable, but they have limitations: because only a subset of respondents sees each module, its questions carry a smaller sample size, and fewer statistically significant differences are identified in crosstabs. Modular surveys do not make sense as a replacement for every long survey. But surveys with long lists of brands, products, or attributes are often a good fit for this technique.
When fielding a modular survey, you may want to increase the sample size if budget allows: you still get a large sample to analyze while providing a better respondent experience. Eight minutes may seem uncomfortably short to many researchers, but modular surveys are a powerful solution.
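As a rough illustration of that tradeoff (the numbers are hypothetical, not a recommendation):

```python
# Back-of-envelope sizing for a modular survey, with illustrative numbers.
total_completes = 2000  # overall completes, if budget allows the boost
num_modules = 5         # modules assigned at random, one per respondent
per_module = total_completes / num_modules
print(per_module)       # 400.0 completes behind each randomized module
```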