When I’m reviewing a questionnaire, here are the top 10 things I remind my co-authors about:
- Keep the respondent in mind – Write from their point of view, treat the question order as a conversation, explain jargon, give them lists of choices that apply to them, and let them go off script with options such as “Other (please specify)” and “I don’t know/recall”. Write questions that work well on smartphones.
- Use labels without numbers for each scale item – When you do have to write your own scale, keep in mind that respondents prefer fully labeled choices such as “Excellent”, “Good”, “Acceptable”, “Poor”, and “Terrible” to numeric ranges (for example, 0 to 10 where 10 is best and 0 is worst).
- Provide 5 choices for unipolar scales and 7 for bipolar scales – The number of points in a scale has a modest effect on its reliability and validity, with one meta-analysis finding five-point scales most reliable for unipolar measures (“Not at all” to “Completely/Extremely”) and seven-point scales most reliable for bipolar measures that run between opposites (“Decreased a lot” to “Increased a lot”); see the sketch after this list.
- Replace bipolar scales with unipolar scales where possible – Respondents find bipolar scales more cognitively difficult to answer, and as a result such scales are less reliable. When you can, use a unipolar scale instead (e.g., a scale ending in “Not at all satisfied” instead of “Completely unsatisfied”, or one ending in “Not at all likely” instead of “Completely unlikely”).
- Use common rating scales – Rather than writing your own scales, when possible choose from common measures of frequency, likelihood, quality, etc.
- Minimize use of grid questions – Try not to have more than one grid question per survey, as respondents find them tedious. Make sure they work well on different devices: render each row as an individual question on smartphones, and repeat the column headings every three or four rows on desktop.
- Rewrite agreement scales – Because respondents tend to be agreeable (acquiescence bias), agreement questions overstate how much they actually agree. Use other common rating scales instead, where appropriate.
- Understand the tradeoffs between yes/no and all-that-apply questions – Because respondents are agreeable, they often select “Yes”; one way to minimize this is to offer a longer, more specific answer (“Yes, I have purchased this in the past 30 days”). And because respondents are in a hurry, they often treat “all that apply” questions as “some that apply” questions, in Pew Research’s memorable phrase.
- Shorten the questionnaire – While brainstorming and developing a questionnaire, don’t worry about length: cast a wide net to find the most valuable questions! Once you’ve identified the priority questions, though, pare the questionnaire down so that respondents can answer everything without tiring.
- Ask the respondent about the questionnaire – It’s a great way to improve over time!
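To make the scale advice above concrete, here’s a minimal sketch, in Python, of how you might keep fully labeled scales in one place and reuse them across questionnaires. Only the endpoints and the quality labels come from the points above; the intermediate labels are illustrative, one common variant among several.

```python
# Illustrative only: fully labeled scales stored for reuse across questionnaires.
# Endpoint and quality labels follow the post; intermediate labels are one common variant.

# 5-point unipolar scale ("Not at all ..." to "Completely ...")
SATISFACTION_UNIPOLAR = [
    "Not at all satisfied",
    "Slightly satisfied",
    "Moderately satisfied",
    "Very satisfied",
    "Completely satisfied",
]

# 7-point bipolar scale running between opposites
CHANGE_BIPOLAR = [
    "Decreased a lot",
    "Decreased somewhat",
    "Decreased a little",
    "Stayed the same",
    "Increased a little",
    "Increased somewhat",
    "Increased a lot",
]

# 5-point quality scale with labels rather than numbers
QUALITY = ["Excellent", "Good", "Acceptable", "Poor", "Terrible"]
```

Keeping scales in a shared module like this also makes it easier to reach for common measures instead of writing a new scale for every survey.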
When programming the survey, make sure to randomize choice lists to avoid order bias.
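If you’re programming the survey yourself rather than relying on a platform’s built-in randomization setting, here’s a minimal sketch of per-respondent choice randomization. The function and option names are hypothetical; the one extra detail, which is a common convention rather than something from the list above, is that “escape hatch” options such as “Other (please specify)” and “I don’t know/recall” are usually left anchored at the end rather than shuffled.

```python
import random

# Hypothetical sketch: shuffle substantive choices per respondent to avoid
# order bias, while keeping "escape hatch" options anchored at the end
# (a common convention, not a rule from the post).
ANCHORED = {"Other (please specify)", "I don't know/recall"}

def randomized_choices(choices, seed=None):
    """Return a new choice list with substantive options shuffled and
    anchored options kept at the end in their original order."""
    rng = random.Random(seed)  # pass a per-respondent seed for reproducibility
    shuffled = [c for c in choices if c not in ANCHORED]
    rng.shuffle(shuffled)
    return shuffled + [c for c in choices if c in ANCHORED]

if __name__ == "__main__":
    choices = [
        "Price",
        "Quality",
        "Customer service",
        "Brand reputation",
        "Other (please specify)",
        "I don't know/recall",
    ]
    print(randomized_choices(choices, seed=42))
```

Most survey platforms expose this as a built-in option; the sketch is just to show the idea.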
Want even more best practices? Check out my MRII ESOMAR webinar on questionnaire design or take the class I co-authored for UGA, Measurement and Questionnaire Design.

Note: This is an update to a blog post originally published April 30, 2023.