We work with a range of panel companies and publications, all of which have different strengths and weaknesses. Because we use third-party sources, we can be objective about the quality of those sources, and we cleanse the resulting data based on respondent behaviors.

Informed by research on research, we have developed what is now a 43-step process for data cleansing (though certain steps are only necessary for certain types of sample sources). The process is formal, exacting, and tedious, but we have a shorthand way of referring to the major types of behaviors we are on the watch for (a rough sketch of how such flags might be computed follows the list):

  • Spamming – Some panelists take a survey multiple times, whether from one panel using different IDs or from different panels. We have a range of methods to identify and remove these panelists while keeping any who started, got interrupted, and restarted.
  • Speeding – We flag those whose completion times are among the fastest for their path through the survey (with skip patterns, different respondents take different paths). They might just be efficient survey takers, or they might not be reading the questions: we look for other behaviors to determine which it is.
  • Spiffing – While most respondents participate for a mix of intrinsic and extrinsic rewards, some are in it only for the survey incentive and are just going through the motions. This shows up in behaviors like providing the same answer to question after question (when this happens across a grid of questions, it is called straightlining).
  • Spoofing – Some respondents pretend to be people they are not and lie on the screener to get in (see this post on screener bias). Their survey answers are inconsistent with their panelist or publication profile: suddenly they are the Chief Marketing Officer, when according to their profile they work in Customer Service.
  • Sputtering – Some respondents hate writing: instead, they provide one-word responses, type random gibberish, or answer in a language the survey isn’t being fielded in. We follow Annie Pettit’s lead and ask about the language respondents grew up speaking, so as not to falsely penalize non-native speakers.
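
To make these checks concrete, here is a minimal sketch of how such behavioral flags might be computed for a dataset of completed surveys. It is an illustration only, not our actual 43-step process, and every column name in it (respondent_id, path_id, duration_sec, the grid_ items, open_end, and the job-title fields) is hypothetical.

```python
# A minimal sketch of behavioral flagging, not an actual production process.
# All column names below are hypothetical illustrations.
import pandas as pd

def flag_behaviors(df: pd.DataFrame) -> pd.DataFrame:
    flags = pd.DataFrame(index=df.index)

    # Spamming: repeated respondent IDs; a real check would also compare
    # device fingerprints and cross-panel identifiers.
    flags["spamming"] = df.duplicated(subset=["respondent_id"], keep="first")

    # Speeding: fastest 5% of completion times *within each path*, since
    # skip patterns mean different respondents answer different questions.
    cutoff = df.groupby("path_id")["duration_sec"].transform(
        lambda s: s.quantile(0.05)
    )
    flags["speeding"] = df["duration_sec"] < cutoff

    # Spiffing: straightlining, i.e. one identical answer across a grid.
    grid_cols = [c for c in df.columns if c.startswith("grid_")]
    flags["spiffing"] = df[grid_cols].nunique(axis=1) == 1

    # Spoofing: survey answer contradicts the panel profile.
    flags["spoofing"] = (
        df["survey_job_title"].str.lower() != df["profile_job_title"].str.lower()
    )

    # Sputtering: one-word or empty open ends (a crude length check here;
    # real checks also consider gibberish patterns and fielding language).
    flags["sputtering"] = df["open_end"].str.split().str.len().fillna(0) < 2

    return flags
```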

The important work of Pete Cape of SSI showed that a “one strike and you’re out” policy reduced the representativeness of the data: some respondents refuse to provide verbatim responses, others are fast, and many hate grids. Results are more representative with a “two strikes and you’re out” policy, which recognizes that attention wanders and response styles differ.
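
Building on the flags sketched above, a “two strikes” rule might be applied like this; again, this is an illustrative sketch rather than anyone’s production logic:

```python
def apply_two_strikes(df: pd.DataFrame, flags: pd.DataFrame) -> pd.DataFrame:
    # One flagged behavior is tolerated (attention wanders, response styles
    # differ); two or more strikes removes the respondent.
    strikes = flags.sum(axis=1)
    return df[strikes < 2]

# Usage: cleaned = apply_two_strikes(df, flag_behaviors(df))
```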

Unlike some sample providers, our goal isn’t simply to provide you with responses. Our goal is to provide you with the most representative data we can, given your budget and target market.

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact Award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.