Panel companies do a great job of profiling their members for common attributes. Need to survey Hispanics under 30 years old? No problem. Need to survey divorced college graduates? No problem. Need upper-income parents? Again, no problem!
But what if you need to survey moms who took their child to a museum in the past 6 months? Or what if you need upper-income households that listen to Sirius XM satellite radio? With target groups like these, things start to get a little more complicated.
None of those groups is particularly hard to reach, but reaching them will require you to write a screener at the start of your questionnaire. You will need to screen out people who don’t qualify for your survey so that only members of your target market answer the body of the questionnaire.
While panel companies are diligent about eliminating the small percentage of panelists who cheat on surveys, screeners are tempting to cheat on. Imagine it from a panelist’s perspective: they are invited to take a survey, answer one to five questions, and are then disqualified. Eventually, some panelists will feel emboldened to lie on the screener in order to qualify for the survey.
The basic way to weed out this small group is to take a page from what questionnaire design teaches about leading questions. For instance, the leading question, “Should people be allowed to protect themselves from harm by using Mace as self-defense?” tips your hand as to the answer you want. In the same way, a screener question like, “Do you care for an Alzheimer’s patient who takes Memantine?” tips your hand as to who qualifies for your survey.
When writing screeners, here are some best practices that will further screen your respondents and increase the validity of their answers (two short sketches of the resulting disqualification logic follow the list):
- Replace yes/no questions with select-all-that-apply questions. For instance, for the Alzheimer’s question above, first ask, “Do you care for a friend or family member who suffers from any of the following conditions?” and provide a long list of ailments.
- Triangulate qualification by asking related questions. For example, in pharmaceutical research, a follow-up to the ailments question might present a long list of possible medications.
- Screen out respondents who select very rare attributes or multiple, unrelated low-incidence choices. Sticking with the same example, I typically include ALAD deficiency in the list of ailments I present. Since only 10 cases of it have ever been reported, I eliminate respondents who select it.
- Screen out respondents who select every choice. For a survey on Internet radio, I asked whether respondents had a subscription to each of five different services and screened out the 1 percent who selected all of them.
- Use red herrings. In your choice lists, include red herrings such as invented drug names, television shows, or website names to catch cheaters in the act. The United States Adopted Names (USAN) Council of the American Medical Association is a good source for medication names that have been approved but aren’t yet on the market.
- Provide long choice lists. We recently conducted a control-market/test-market survey in four metropolitan areas. Since 12 percent of Americans move every year, profile data can be out of date. Accordingly, we asked respondents to identify the closest major metropolitan area from a dropdown list of 200.
- End the interview with a reworded screener question. Re-ask the screener question in different words as the final question of the survey. If the two answers don’t match, you have found a cheater.
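To make the list-based rules concrete, here is a minimal sketch of the kind of disqualification logic you might wire into your survey software’s skip logic. The option codes, the red herring, and the `screen_condition_question` function are all hypothetical; they stand in for whatever your platform exposes, not for any particular product’s API.

```python
# Hypothetical disqualification rules for one select-all-that-apply screener
# question. Option codes and lists are illustrative only; substitute your own.

CONDITION_OPTIONS = {
    "alzheimers", "parkinsons", "copd", "type2_diabetes", "migraine",
    "alad_deficiency",     # extremely rare: treat any selection as suspect
    "made_up_condition",   # red herring: an invented ailment name
}

RED_HERRINGS = {"made_up_condition"}
VERY_RARE = {"alad_deficiency"}
TARGET = {"alzheimers"}

def screen_condition_question(selected: set[str]) -> str:
    """Return 'qualify' or 'disqualify' for one select-all-that-apply answer."""
    if selected & RED_HERRINGS:
        return "disqualify"   # claimed an ailment that does not exist
    if selected & VERY_RARE:
        return "disqualify"   # claimed an ailment almost no one has
    if selected >= CONDITION_OPTIONS:
        return "disqualify"   # checked every box in the list
    if selected & TARGET:
        return "qualify"      # cares for someone with the target condition
    return "disqualify"       # honest answer, just not the target market

# A respondent who checks Alzheimer's plus the invented ailment is screened out.
print(screen_condition_question({"alzheimers", "made_up_condition"}))  # disqualify
```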
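Triangulation and the end-of-survey recheck can be handled the same way. This second sketch assumes the completed response arrives as a simple dictionary; the field names and the red-herring medication code are invented for illustration, while the real Alzheimer’s medications are the sort of plausible choices you would list alongside them.

```python
# Hypothetical cross-checks run once the interview is complete.
# Field names ("conditions", "medications", "recheck_caregiver") are invented.

ALZHEIMERS_MEDS = {"memantine", "donepezil", "rivastigmine", "galantamine"}
FAKE_MEDS = {"red_herring_drug"}  # in practice, a plausible unreleased name (see USAN)

def passes_cross_checks(response: dict) -> bool:
    conditions = set(response.get("conditions", []))
    medications = set(response.get("medications", []))

    # Triangulate: a claimed Alzheimer's caregiver should pick at least one
    # plausible Alzheimer's medication from the long medication list.
    if "alzheimers" in conditions and not (medications & ALZHEIMERS_MEDS):
        return False

    # Red herring planted in the medication list.
    if medications & FAKE_MEDS:
        return False

    # The reworded screener at the end of the survey must agree with the opener.
    qualified_at_start = "alzheimers" in conditions
    if response.get("recheck_caregiver") != qualified_at_start:
        return False

    return True

sample = {
    "conditions": ["alzheimers"],
    "medications": ["memantine"],
    "recheck_caregiver": True,
}
print(passes_cross_checks(sample))  # True
```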
The best screener question is the one that you don’t have to ask because your panel provider has already profiled respondents using it. Panelist attribute lists are constantly being updated. Talk to your panel provider to see if you can target respondents more precisely than anticipated. If not, make sure to follow the best practices above.
See also: Screener Bias in Panel and River Samples