At the Quirks Event in London yesterday, Nadja Böhme of Factworks and Pete Cape of Dynata shared tips for designing research for “Generation Overload,” with techniques to minimize the effort required of survey respondents. The goal is to strike the right balance between simplicity and substance: “Research should be like a clock – simple at the front-end (the face) and advanced at the back-end (the clockwork).”

Nadja and Pete shared seven tips:

  1. Don’t create monsters. “Let’s start with the obvious: don’t create overlong surveys. Do all the people need to see all the questions?” If you are not cross-tabulating results across sections, you can split sections into separate surveys. If you distribute the same sample across these smaller surveys, remember that, for the same margin of error, you only need N=200 for results with incidence at 5% or 95%, compared to N=1,000 for results at 50%.
  2. Really be mobile first. Half your respondents are taking your survey on their phones, and 70% of younger respondents are. Have researchers check the survey on mobile. Recognize that just because it works on mobile doesn’t mean it’s optimized: aim for device-optimized design, using techniques like replacing grids with a carousel that repeats the scale for each “question” (what would be a row in a grid).
  3. Engage the respondent. Motivate and close the feedback loop: motivate respondents by fostering feelings of autonomy, relatedness, competence, and value. “Thank you very much for choosing [autonomy] to do this important [value] survey. Your answers will help improve [value] the services we deliver to you [relatedness].” Gamify your questions, simply: a game just needs rules and a mechanic; e.g., “You have 60 seconds to write down all the brands you can think of, up to a maximum of 10” produces 5.3 brands per respondent vs. 3.7 for a traditionally worded question.
  4. Be creative. Nadja shared the example of reducing complexity for a conjoint study. Since one feature had far too many levels, Nadja instead ran a MaxDiff exercise with all the levels. In the choice exercise, only a few selected levels were shown. The MaxDiff values were then re-scaled using the choice utility of the tested levels, and simulations were prepared as if all the levels had been used.
  5. Don’t ask what you already know. Pete said, “It is so odd to me that researchers don’t trust data from questions they didn’t ask themselves! We have to get over ourselves and value the data we have.” Too often information is re-asked of survey respondents, rather than imported from CRM, panel, and other systems (e.g., Dynata tracks up to 1,500 fields per panelist). “MR is fairly siloed, and then it is siloed around the project, which is finished and put onto the shelf. Client-side researchers need to think of themselves as data curators, curating internal data from prior surveys or behavioral data and connecting different databases and sources.”
  6. Use sample and analytics wisely. The traditional monadic test shows each person one concept to evaluate, with equal sample per concept; e.g., 300 responses for each of 5 concepts, “even though we would want to know more about the most promising concept,” Nadja said. An intelligent allocation with machine learning instead starts with smaller base sizes and shifts more sample during fielding to the most promising concepts. In her example, the total sample was just 999 (600 across the top two concepts, 133 on average for each of the bottom three). This technique is especially valuable for B2B studies, where sample is scarce and expensive.
  7. Dare to ask openly. Use open-ended questions more often, and let automation and audio transcription work for you. With too little time for traditional coding, turn to NLP (Natural Language Processing) instead.
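The sample-size claim in tip 1 follows from the standard margin-of-error formula for a proportion: precision depends on p(1 − p), which shrinks fast as incidence moves away from 50%. A quick check of the N=200 vs. N=1,000 figures:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion p
    estimated from n responses (simple random sampling assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

# A result near 50% incidence needs N=1,000 to reach about +/-3 points:
print(f"{margin_of_error(0.50, 1000):.3f}")  # 0.031
# A result near 5% (or 95%) incidence gets the same precision from N=200:
print(f"{margin_of_error(0.05, 200):.3f}")   # 0.030
```

Both designs deliver roughly a ±3-point margin of error, which is why low-incidence sections can be split off into much smaller surveys.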
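The rescaling idea in tip 4 can be sketched as follows. This is a minimal illustration, not the study's actual method: it assumes a simple linear map fitted on the levels that appeared in both exercises, and all scores below are made-up numbers.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y ≈ a*x + b (pure Python, no libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical MaxDiff scores for all six levels of the oversized feature:
maxdiff = {"A": 0.9, "B": 0.6, "C": 0.4, "D": 0.3, "E": 0.2, "F": 0.1}
# Only three selected levels were shown in the conjoint choice tasks,
# yielding choice utilities (again, illustrative numbers):
choice_utils = {"A": 1.8, "C": 0.7, "F": 0.1}

# Fit the map on the overlap, then apply it to every level, so the
# simulator can behave as if all levels had been tested in the conjoint.
a, b = fit_linear([maxdiff[k] for k in choice_utils],
                  [choice_utils[k] for k in choice_utils])
rescaled = {level: a * score + b for level, score in maxdiff.items()}
```

The point is that the MaxDiff preserves the full ordering of levels, while the few conjoint-tested levels anchor the scale of the utilities.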
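The arithmetic behind tip 6 can be reproduced with a toy two-stage rule. This is a deliberately simplified stand-in for the machine-learning allocation described in the talk: every concept gets an equal pilot base, then the remaining budget goes to the leaders. The pilot scores are invented for illustration.

```python
def two_stage_allocation(pilot_scores, total_budget, pilot_n, top_k=2):
    """Sketch of adaptive allocation: equal pilot bases, then the leftover
    budget is split evenly among the top_k leading concepts."""
    # Stage 1: every concept gets the same small pilot base.
    alloc = {concept: pilot_n for concept in pilot_scores}
    # Stage 2: remaining budget goes to the most promising concepts.
    remaining = total_budget - pilot_n * len(pilot_scores)
    leaders = sorted(pilot_scores, key=pilot_scores.get, reverse=True)[:top_k]
    for concept in leaders:
        alloc[concept] += remaining // top_k
    return alloc

# Hypothetical pilot purchase-intent scores for five concepts:
scores = {"A": 0.42, "B": 0.38, "C": 0.25, "D": 0.22, "E": 0.18}
alloc = two_stage_allocation(scores, total_budget=999, pilot_n=133)
# A and B end at 300 each (600 across the top two); C, D, E stay at 133,
# matching the talk's 999 total vs. 1,500 for an equal-allocation design.
```

The payoff is the same insight as the talk's: the two concepts you care most about end up with monadic-sized bases, for about a third less total sample.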

“How do we arrive at powerful, yet simple research design?” Nadja asked. “Simplicity is for the respondent; complexity is for us as researchers. Be creative when it comes to research design. Let technology work for you to make it smooth and fast.”

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.