Where river sampling is typically a supplemental source of responses for panel surveys, intercept surveys gather all of their responses by interrupting traffic to websites. CivicScience, Google Consumer Surveys (GCS), and RIWI each intercept people in their everyday use of the web. CivicScience posts polls to websites, GCS prompts readers to take a survey in order to advance to the second page of a news article, and RIWI captures visitors to mistyped, non-trademarked domain names (among other methods) and instead shows them the first question of a short survey.
Because intercept surveys capture responses from individuals who rarely, if ever, take surveys, they can offer broader representativeness than panel surveys and access hard-to-reach markets and populations not covered by panels. The GRIT/RIWI Consumer Participation in Research Report found that 72% of intercepted respondents either had not taken a survey in the past month (25%) or had never taken one before (47%), and found significant differences in how “fresh” vs. “frequent” survey takers answered questions.
It is easy to mischaracterize such surveys as probability samples. However, they lack the key element of external selection: the systems intercept people who, if they decline to take the survey, are never invited again to that particular survey. Additionally, random intercepts have no sampling frame: we do not know who has a non-zero chance of selection through this method and who does not. Simply put, random intercepts are not random samples. Nonetheless, intercept surveys provide a valuable technique for increasing the representativeness of online surveys.
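To make the sampling-frame point concrete, here is a minimal, hypothetical sketch (not from the white paper) of design weighting with known selection probabilities; the names, values, and probabilities are invented for illustration. It shows why a probability sample can be weighted back to the population while a random intercept, with no frame and unknown selection probabilities, cannot.

```python
import random

# Hypothetical sampling frame: every member has a known, non-zero
# probability of selection, so a design weight (1 / p_select) exists.
frame = {
    "person_A": {"value": 3, "p_select": 0.10},
    "person_B": {"value": 7, "p_select": 0.10},
    "person_C": {"value": 5, "p_select": 0.50},
}

# Draw a probability sample and estimate the population total with
# Horvitz-Thompson weighting: sum of value / p_select over the sample.
sample = [m for m in frame.values() if random.random() < m["p_select"]]
estimate = sum(m["value"] / m["p_select"] for m in sample)
print("Design-weighted estimate of the population total:", estimate)

# A random intercept has no frame: we cannot list who could have been
# intercepted, so p_select is unknown (and effectively zero for people
# who never visit the intercepting sites). Without p_select there is no
# design weight, which is why random intercepts are not random samples.
```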
This is an excerpt from the free Researchscape white paper, “Improving the Representativeness of Online Surveys”. Download your own copy now.