When I’m reviewing a questionnaire, here are the top 10 things I remind my co-authors of:

  1. Keep the respondent in mind – Write from their point of view, think of question order as a conversation, explain jargon, give them lists of choices that apply to them and allow them to go off script: e.g., “Other (please specify)” and “I don’t know/recall” choices. Write questions that work well on smartphones.
  2. Use labels without numbers for each scale item – When you do have to write your own scale, keep in mind that respondents prefer labeled choices like “Excellent”, “Good”, “Acceptable”, “Poor”, “Terrible” to numeric ranges (for example, 0 to 10, where 10 is best and 0 is worst).
  3. Provide 5 choices for unipolar scales and 7 for bipolar scales – The number of items in a scale has a modest effect on its reliability and validity, with one meta-analysis finding five-point scales most reliable for “Not at all” to “Completely/Extremely” measures (unipolar) and seven-point scales most reliable for opposites such as “Decreased a lot” to “Increased a lot” (bipolar).
  4. Replace bipolar scales with unipolar scales where possible – Respondents find bipolar scales more cognitively difficult to answer, and as a result such scales are less reliable. When you can, use a unipolar scale instead (e.g., a scale ending in “Not at all satisfied” instead of “Completely unsatisfied”, or one ending in “Not at all likely” instead of “Completely unlikely”).
  5. Use common rating scales – Rather than writing your own scales, when possible choose from common measures of frequency, likelihood, quality, etc.
  6. Minimize use of grid questions – Try not to have more than one grid question per survey, as respondents find them tedious. Make sure they work well on different devices: render each row as an individual question on smartphones, and make sure the headings repeat every three or four rows on desktop devices.
  7. Rewrite agreement scales – Given that respondents are agreeable (acquiescence bias), agreement questions overstate how much they actually agree. Use other common rating scales instead, where appropriate.
  8. Understand the tradeoffs between yes/no and all-that-apply questions – Because respondents are agreeable, they often select “Yes”; one way to minimize this is to provide a more specific answer option (“Yes, have purchased this in the past 30 days”). Because respondents are in a hurry, they often treat “all that apply” questions as, in Pew Research’s famous description, “some that apply” questions.
  9. Shorten the questionnaire – While brainstorming and developing a questionnaire, don’t worry about length—cast a wide net to determine the most valuable questions! However, once you’ve identified those questions that are a priority, pare the questionnaire down, so that respondents can answer all the questions without tiring.
  10. Ask the respondent about the questionnaire – It’s a great way to improve over time!

When programming the survey, make sure to randomize choice lists to avoid order bias. Want even more best practices? Check out my MRII ESOMAR webinar on questionnaire design or take the class I co-authored for UGA, Measurement and Questionnaire Design.
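For surveys scripted in code rather than a point-and-click platform, choice-list randomization can be sketched in a few lines. This is a minimal illustration, not any particular survey platform’s API: the function name is hypothetical, and the convention it encodes is to shuffle the substantive choices while leaving anchor options such as “Other (please specify)” and “I don’t know/recall” in place at the end, where respondents expect to find them.

```python
import random

def randomize_choices(choices,
                      pinned=("Other (please specify)", "I don't know/recall")):
    """Return choices in random order, keeping anchor options last.

    Shuffling counters order bias; 'Other' and 'don't know' options are
    conventionally not randomized, so they stay at the bottom of the list.
    """
    # Separate the substantive choices from the pinned anchors.
    shuffled = [c for c in choices if c not in pinned]
    anchors = [c for c in choices if c in pinned]
    random.shuffle(shuffled)  # independent shuffle per respondent
    return shuffled + anchors
```

Each call produces an independent ordering, so every respondent sees a freshly shuffled list while the anchor options stay put.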

Note: An update to a blog post originally published April 30, 2023.

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.