I remember programming a questionnaire for a client using their survey software account and being chagrined to discover that it couldn't randomize the display of items in a choice list, a capability common to most modern survey software. Not randomizing your choice lists can introduce significant error into your results: the impact of order bias can be greater than the margin of sampling error.

As an example, the General Social Survey (GSS), way back in 1984, showed participants a card during a face-to-face interview, then asked them, “The qualities listed on this card may all be important, but which three would you say are the most desirable for a child to have?”

… has good manners (MANNER)
… tries hard to succeed (SUCCESS)
… is honest (HONEST)
… is neat and clean (CLEAN)
… has good sense and sound judgment (JUDGMENT)
… has self-control (CONTROL)
… he acts like a boy or she acts like a girl (ROLE)
… gets along well with other children (AMICABLE)
… obeys his parents well (OBEY)
… is responsible (RESPONSIBLE)
… is considerate of others (CONSIDERATE)
… is interested in how and why things happen (INTERESTED)
… is a good student (STUDIOUS)

The three qualities chosen most often were Honest (selected by 66% of respondents), Judgment (39%), and Responsible (34%).

Unless you reversed the order of the choices on the card.

In which case, the top three choices were Honest (selected by 48%, an 18-point decrease), Judgment (41%, a 2-point increase), and Considerate (40%, a 15-point increase).

In fact, response order bias produced an average difference of ±6.5% across the 13 items, as large as the margin of sampling error at the 95% confidence level for a probability survey of U.S. adults with 230 respondents. And this may understate the situation: the average difference was ±11.6% for the six items that ended up in the top three and bottom three of the list (the remaining items were in the middle on both versions of the card).
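As a quick check of that comparison, the worst-case 95% margin of sampling error for 230 respondents does work out to roughly ±6.5%; a minimal sketch of the standard formula:

```python
import math

# Worst-case (p = 0.5) margin of sampling error at 95% confidence
n = 230   # respondents
p = 0.5   # proportion that maximizes the margin of error
z = 1.96  # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"±{moe:.1%}")  # prints ±6.5%
```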

Analyzing this data in their paper “An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement,” Jon Krosnick and Duane Alwin found that choices presented earlier in the list were disproportionately likely to be selected. Summarizing past research with similar findings, they report two reasons for this primacy effect:

  • “Items presented early may establish a cognitive framework or standard of comparison that guides interpretation of later items. Because of their role in establishing the framework, early items may be accorded special significance in subsequent judgments.”
  • “Items presented early in a list are likely to be subjected to deeper cognitive processing; by the time a respondent considers the final alternative, his or her mind is likely to be cluttered with thoughts about previous alternatives that inhibit extensive consideration of it. Research on problem-solving suggests that the deeper processing accorded to early items is likely to be dominated by generation of cognitions that justify selection of these early items. Later items are less likely to stimulate generation of such justifications (because they are less carefully considered) and may therefore be selected less frequently.”

And, yes, this effect has been replicated in online surveys.

Accordingly, randomize whenever appropriate:

  • Most “select all that apply” questions should be randomized, leaving “Other (please specify)” and “None of the above” as the last two choices (see the sketch after this list).
  • For multiple-choice questions that aren’t scale questions, randomize the order of the choices whenever they have no logical or inherent order.
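If your survey platform can't do this for you, the logic is simple to implement yourself. Here is a minimal sketch (the function name and anchored defaults are illustrative, not any particular platform's API):

```python
import random

def randomize_choices(choices,
                      anchored=("Other (please specify)", "None of the above")):
    """Shuffle a choice list while pinning anchored items at the end, in order."""
    shuffled = [c for c in choices if c not in anchored]
    random.shuffle(shuffled)
    # Append only the anchored items actually present, preserving their order
    shuffled += [c for c in anchored if c in choices]
    return shuffled

print(randomize_choices(["Red", "Green", "Blue", "None of the above"]))
```

Each respondent would get a fresh shuffle, so no single choice systematically benefits from primacy.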

Items that you shouldn’t randomize:

  • Scales (e.g., “Completely satisfied” to “Not at all satisfied”). Nor should unusual scales be randomized: e.g., “Yay!”, “Meh”, “Ugh” should be kept in that order (just don’t take the results too seriously).
  • Long lists that respondents will skim to look for the answer they have in mind, such as lists of states, provinces, countries, or even industries. Randomizing the order of these will only confuse respondents.
  • Other lists that have an inherent order, such as a list of job titles (e.g., “CEO”, “C-level”, “VP”, “Director”, etc.). For a more unusual example, in a survey rating franchise movies, we presented them in episode order rather than randomizing them.

Sometimes you have to make judgment calls. For example, I alphabetize long brand lists when asking participants to select those they’ve done business with before; for an aided awareness question, however, I’ll randomize the list.
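That trade-off is easy to express in code; a sketch with hypothetical brand names:

```python
import random

brands = ["Acme", "Globex", "Initech", "Umbrella"]  # hypothetical brand list

# "Which of these brands have you done business with?"
# A long list respondents will scan, so alphabetize for easy lookup
done_business_choices = sorted(brands)

# "Which of these brands have you heard of?" (aided awareness)
# Randomize so no brand systematically benefits from primacy
aided_awareness_choices = random.sample(brands, k=len(brands))
```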

In general, randomizing choice lists is one of the easiest ways to substantially improve the quality of your survey data.

Note: An update to a blog post originally published August 1, 2014.

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association for Public Opinion Research. In 2012, he was the inaugural winner of the MRA’s Impact Award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.