[Photo: Peter Falk as Columbo, 1973]

As part of the CASRO webinar series a few weeks ago, Pete Cape, global knowledge director of SSI, discussed his research into respondents’ use of grids. It began with a mystery posed by the 2009 paper “Beyond ‘Trapping’ the Undesirable Panelist: Use of Red Herrings to Reduce Satisficing” by Jeff Miller and Jamie Baker-Prewitt, PhD, of Burke.

Pete said it was a great paper, but… “With complex things, people take the easy parts out: ask respondents to mark a ‘2’ in a grid, and if they fail, throw away their response. It’s inefficient and not useful. I’ve been intrigued by this paper, but I could find nothing wrong with it.”

He then quoted Columbo… “Just one thing keeps bothering me…”

To test his hunch, Pete “recreated the crime”: he collected 1,500 responses across three online surveys, 500 per treatment: one with a grid with traps at item 1 and item 40, one with a grid with a trap at item 39, and one with the same topics asked as separate questions, with a trap at question 39. The survey was on employee satisfaction and had 71 items in total.

“We see the results of a grid… but did we ever look inside?” asked Pete. “When we see the output of a grid, we see what people said before they moved on, but no one looked inside to see what people were doing.” By instrumenting the survey to collect paradata – tracking the order in which items were clicked and the time spent per item – Pete discovered the following (a sketch of what such instrumentation might look like appears after the list):

  • 93% of respondents started with the first item; a few started near the end and worked their way back to the top
  • Respondents needed an average of 75.5 clicks to answer 71 items
  • Only 23% of respondents finished without making any errors or corrections
  • While 67% finished on the last item, the remainder finished elsewhere in the grid – probably after going back to answer a row that the survey system told them they had missed.
  • Respondents completed the grid almost 20 times faster than the battery of regular questions: they took half a second per item in a grid but 8 to 10 seconds per question.
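Pete didn’t share his implementation, but a minimal sketch of this kind of client-side paradata capture might look like the following TypeScript, where the `.grid-row` selector and the `/paradata` endpoint are hypothetical:

```typescript
// Minimal paradata capture for a grid question: record the order in which
// rows are answered and the time elapsed since the previous answer.
// The .grid-row selector and /paradata endpoint are assumptions for
// illustration, not details from Pete's study.

interface ParadataEvent {
  itemId: string;      // which grid row was answered
  clickIndex: number;  // 1st answer, 2nd answer, ...
  msSincePrev: number; // time since the previous answer
}

const events: ParadataEvent[] = [];
let clickCount = 0;
let lastClickAt = performance.now();

document
  .querySelectorAll<HTMLInputElement>('.grid-row input[type="radio"]')
  .forEach((radio) => {
    radio.addEventListener('change', () => {
      const now = performance.now();
      events.push({
        itemId: radio.name, // one radio group per grid row
        clickIndex: ++clickCount,
        msSincePrev: now - lastClickAt,
      });
      lastClickAt = now;
    });
  });

// On submit, ship the click trail alongside the answers.
document.querySelector('form')?.addEventListener('submit', () => {
  navigator.sendBeacon('/paradata', JSON.stringify(events));
});
```

A trail like this is what reveals back-to-front completion, corrections (the same `itemId` appearing twice), and the half-second-per-item pace Pete reported.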

“The clue was there all the time,” said Pete, quoting Columbo again. The clue? 7% of responses missed the trap when it was the first item.

To unravel the clue, though, requires understanding the nature of inattention. In a study of “Task Unrelated Thoughts”, researchers paged participants (with a pager) and asked two questions each time: “What are you doing? What are you thinking about?” Said Pete, “We just ask questions and we expect attention,” but that’s not how people’s minds work: 30% of the time, what participants were thinking about had nothing to do with what they were doing. “‘What are you doing?’ ‘A survey.’ ‘What are you thinking about?’ ‘Dinner.’”

In another test, of the “Sustained Attention to Response Task,” people were asked to click every time a number changed – except when the number was 3. Each participant completed 225 trials; on average, they made a mistake 4% of the time. Attention wanders.
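As a toy sketch of the task’s go/no-go logic – the digit range and the simulated 4% lapse rate are assumptions drawn only from the description above, not from the original study:

```typescript
// Toy simulation of a Sustained Attention to Response Task (SART):
// respond to every digit except the no-go target, 3.
const NO_GO = 3;
const TRIALS = 225;

// A hypothetical participant whose attention lapses ~4% of the time,
// flipping whatever response they intended to give.
function simulatedResponse(digit: number): boolean {
  const intended = digit !== NO_GO; // press for everything except 3
  const lapsed = Math.random() < 0.04;
  return lapsed ? !intended : intended;
}

let errors = 0;
for (let t = 0; t < TRIALS; t++) {
  const digit = 1 + Math.floor(Math.random() * 9); // stimulus: a digit 1-9
  const pressed = simulatedResponse(digit);
  if (pressed !== (digit !== NO_GO)) errors++; // commission or omission error
}
console.log(`Error rate: ${((100 * errors) / TRIALS).toFixed(1)}%`);
```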

“Attention slips in surveys,” said Pete. He gave the example of asking people to enter their birth date. Immediately after entering it, respondents were asked to confirm it: 5.8% reported it was in error. “A simple data entry slip of attention.”

Is the trap “Please select the answer labeled ‘2’” a trap of the respondent or a trap of the survey author? The question text was “Thinking about your work and the company you work for, please indicate to what extent you agree or disagree with each of the following statements. Please use a scale from 1 to 5 where 1 means you disagree strongly, 2 means you disagree and 3 means you neither agree nor disagree, 4 means you agree and 5 means you agree strongly.”

The trap is a violation of the instructions: it has nothing to do with “your work and the company you work for”. [Additionally, it violates the use of the scale, asking you to “disagree” with the statement “Please select the answer labeled ‘2’”.]

When confronted with such confusion, respondents often rely on heuristics:

  • Frequency bias – “What do I code most often?” Respondents whose most common answer was 1, 3, 4, 5 or not applicable (6) were more likely than chance to select that answer when completing the trap: between 57% and 76% did, depending on their modal answer (a sketch of this check follows the list).
  • Recency bias – “What was the last thing I did?” Of the answers not explained by frequency bias, between 25% and 33% of respondents selected their most recent answer, depending on its value (though no one selected “Not applicable” twice in a row at a trap).
  • Similarity matching – “Is this like something else?”
  • Confirmation bias – “Do I believe in this?”
  • And more…
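To make the frequency-bias check concrete, here is a hypothetical sketch of how it could be computed from respondent-level data – the `Response` shape and the 1–6 coding are assumptions for illustration, not Pete’s actual analysis code:

```typescript
// Did respondents who failed the trap fall back on their modal answer?
interface Response {
  answers: number[];  // non-trap items, coded 1-5, plus 6 = not applicable
  trapAnswer: number; // answer given at the "Please select 2" trap item
}

// Most frequent answer a respondent gave across the non-trap items.
function modalAnswer(answers: number[]): number {
  const counts = new Map<number, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  let best = answers[0];
  let bestCount = 0;
  for (const [answer, count] of counts) {
    if (count > bestCount) {
      best = answer;
      bestCount = count;
    }
  }
  return best;
}

// Among trap failures (trapAnswer !== 2), the share who selected their own
// modal answer -- the frequency-bias signature. Respondents whose modal
// answer is 2 drop out automatically, since choosing it passes the trap.
function frequencyBiasRate(responses: Response[]): number {
  const failed = responses.filter((r) => r.trapAnswer !== 2);
  const matched = failed.filter(
    (r) => r.trapAnswer === modalAnswer(r.answers)
  );
  return failed.length > 0 ? matched.length / failed.length : 0;
}
```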

Outside of the grid, 4% of respondents failed the trap – similar to the natural level of inattention seen in the attention studies above. Fewer than 2% of respondents who failed the trap straightlined all their answers.

In summary:

  • “We are not all 100% attentive all the time.”
  • “Our attention traps catch good guys and bad guys.”
  • “The grid encourages too fast processing and decisions.”
  • “Grids produce poorer quality data.”
  • “Grids encourage poor behavior.”
  • “Traps in the grid body is cognitive underspecification at work.”

“For my closing statement,” said Pete, “this is persistent inattention, which is normal human behavior. It is the grid that should be on trial here, not my client, the panelist. I rest my case.”

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC, is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.