Besides leading questions, another common mistake I see in the draft questionnaires I’m sent is treating respondents like robots. Something about becoming a survey author inspires us to suddenly think of customers, prospects, and employees as automatons with total recall, machine-like logic, and full foresight.

C3PO – Respondents are not like this Star Wars protocol droid, “fluent in over six million forms of communication,” yet we often expect respondents to easily translate our jargon and abbreviations. The truth is that jargon is often ambiguous and will confuse respondents who don’t follow our industry or product category as closely as we do. You should revise questionnaires to substitute plain language for jargon and to eliminate, or at least explain, acronyms.
R2D2 – C3PO’s companion is designed to talk to other machines and speaks only in bleeps, buzzes and beeps. Too many survey questions seem like they were designed for machines like R2D2 rather than for people. Asking respondents to rate items from 1 to 10 produces less accurate results than giving them five fully labeled choices. Prompting respondents to rank features against one another is cognitively complex and less accurate than having them complete choice exercises. Tasking respondents with allocating 100% of their spending treats people like spreadsheets and overestimates their knowledge of their own spending.
Data – This super-rational android from Star Trek doesn’t experience or understand emotions. Too often questionnaires assume respondents are like this, carefully and rationally conducting feature-benefit and trade-off analysis before purchasing. Thanks to the fresh reminder from behavioral economics, we know that consumer behavior is often shaped by emotions and circumstances, not sheer logic. We write questions at our peril when we assume respondents have no emotions.
Ruk – Another Star Trek android was Ruk, a centuries-old creation of an extinct alien race. We are wrong to assume that respondents, like this robot, have centuries of memories readily accessible. Often we ask questions about past purchase behaviors, but consumers no longer remember the details of those decisions and produce banal or socially desirable answers instead. For instance, at ESOMAR 3D, Tristan Morris of PepsiCo discussed how survey questions about purchasing snacks often fail to capture useful information, because people do not accurately recall the circumstances around an impulse decision such as buying tortilla chips.
Gort – This robot from The Day the Earth Stood Still never tired. Respondents, however, do tire, especially as survey length creeps past 10 minutes, and as they tire, the quality of their answers declines. James Sallow has said, “Respondents are not a limitless resource – we need them to want to take more questionnaires. Keep a constant eye on the incompletion rate of your own surveys and take action: your data is ultimately the victim of a survey that does not engage respondents. It’s all about engaging the audience – it’s good for the respondent, it’s good for the panel and it’s good for your data.” Unless you are surveying robots, keep your surveys short.
Terminator – This time-traveling cyborg might be able to predict the future, but respondents can’t. Yet we often ask respondents questions as if they could: “Will you purchase insurance within 1 month, 3 months, 6 months, 12 months or later?” (I just wrote a question like this yesterday. Oops.) If you must ask a question like this, don’t take the answers literally. It is standard practice in concept tests to discount purchase likelihood: for instance, one common formula counts 80% of those who answer “extremely likely” and 30% of those who answer “very likely” as potential purchasers. Concept scores are often inflated, because, unlike Terminator, our respondents haven’t traveled back in time.
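To make the discounting concrete, here is a minimal sketch of that weighted top-two-box calculation. The 80%/30% weights come from the formula mentioned above; the scale labels and response counts are hypothetical, made up purely for illustration.

```python
def discounted_purchase_intent(counts):
    """Estimate the share of potential purchasers by discounting stated intent.

    Only the top two answer choices count, at the discounted weights
    described in the article (80% and 30%).
    """
    weights = {"extremely likely": 0.8, "very likely": 0.3}
    total = sum(counts.values())
    weighted = sum(counts.get(level, 0) * w for level, w in weights.items())
    return weighted / total


# Hypothetical results from a 5-point purchase-likelihood question
responses = {
    "extremely likely": 50,
    "very likely": 100,
    "somewhat likely": 120,
    "not very likely": 80,
    "not at all likely": 50,
}

# (0.8 * 50 + 0.3 * 100) / 400 = 70 / 400
print(f"{discounted_purchase_intent(responses):.1%}")  # prints "17.5%"
```

Note how sharply the discount cuts the raw numbers: 150 of 400 respondents (37.5%) claimed to be at least “very likely” to purchase, but the formula counts only 17.5% as potential purchasers.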
Wall-E – In some ways Pixar’s Wall-E is the opposite of the Terminator: a robot of the past. This garbage-collecting robot is the last functioning robot on an Earth empty of humans, doing a job now devoid of meaning in an empty world. Perhaps it even takes surveys.
HAL – Survey authors won’t change their ways until respondents sick of being treated like robots start saying, “I’m afraid I can’t do that, Dave.”

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.