Traditionally, it doesn’t matter whether you are the first or the last respondent to take a survey. What you see is a function of how you answer the questions. But an increasing number of surveys leverage input from the crowd to shape the experience of subsequent respondents: what the last respondent sees is changed by how the first respondent answered. What might be possible if we use earlier respondents to help shape what later respondents see?
As an industry, we’ve been automating survey interviewing since the 1970s, when the first CATI (Computer-Assisted Telephone Interviewing) systems were created for minicomputers. Since then, the focus has always been on consistency: a respondent’s experience is the same whether they are the first to take the survey or the last. The only exception is that quota limits might prevent latecomers from taking the survey at all.
A number of trends are prompting a tentative re-examination of this basic tenet, opening the possibility of surveys that use input from earlier respondents to change the survey for later respondents. Among these trends:
- The rise of a variety of online qualitative research tools, from BBFGs (Bulletin Board Focus Groups) to online communities and suggestion boxes, each allowing research participants to respond to one another.
- The emergence of online customer satisfaction surveys, both transactional and relationship surveys, with enormous sample sizes ranging from tens of thousands to millions of respondents.
- The ongoing pressure for shorter and more engaging surveys, accelerated by the rise of web browsing on smartphones.
Crowd-shaped survey techniques range along a continuum from qualitative to quantitative. Let’s consider some possible techniques along this continuum (one of them is sketched in code after the list):
- Question buckets
- Crowd-sourced choice lists
- Crowd-sourced laddering
- Comment evaluation
- Idea exploration
- Concept optimization
- Crowd-sourced questionnaires
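To make the crowd-shaped idea concrete, here is a minimal Python sketch of one technique from this list, the crowd-sourced choice list. Everything in it is an illustrative assumption rather than a description of any particular survey platform: the CrowdSourcedChoiceList class, the promotion_threshold parameter, and the rule that a write-in answer joins the visible choice list once enough respondents have typed it are all hypothetical.

```python
from collections import Counter


class CrowdSourcedChoiceList:
    """Hypothetical sketch: promote popular write-in answers
    ("Other, please specify") into the choice list shown to
    later respondents."""

    def __init__(self, seed_choices, promotion_threshold=5):
        self.choices = list(seed_choices)        # choices currently shown
        self.write_ins = Counter()               # tally of free-text answers
        self.promotion_threshold = promotion_threshold

    def current_choices(self):
        """The choice list a new respondent would see right now."""
        return self.choices + ["Other (please specify)"]

    def record_answer(self, answer):
        """Record one respondent's answer; promote a write-in once
        enough respondents have typed the same thing."""
        normalized = answer.strip().lower()
        if any(normalized == c.lower() for c in self.choices):
            return  # already a listed choice; nothing to promote
        self.write_ins[normalized] += 1
        if self.write_ins[normalized] >= self.promotion_threshold:
            self.choices.append(answer.strip())


# Example: early respondents shape what later respondents see.
question = CrowdSourcedChoiceList(["Price", "Quality"], promotion_threshold=3)
for _ in range(3):
    question.record_answer("Customer service")
print(question.current_choices())
# ['Price', 'Quality', 'Customer service', 'Other (please specify)']
```

The design choice worth noting is the threshold: promoting every write-in immediately would let a single respondent reshape the survey for everyone after them, while a higher threshold trades responsiveness for stability and comparability across respondents.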