Last week I was writing a questionnaire for a client using their survey software account, and I was chagrined to discover that it lacked the ability to randomize the display of items in a choice list. This is a standard capability of modern survey software, including QuestionPro, Survey Analytics, Google Consumer Surveys, and more. Not randomizing your choice lists can introduce significant error into your results: the impact of response order bias can be even larger than the margin of sampling error.
As an example, the General Social Survey (GSS), way back in 1984, showed respondents in a face-to-face interview a card and then asked, “The qualities listed on this card may all be important, but which three would you say are the most desirable for a child to have?”
- … has good manners (MANNER)
- … tries hard to succeed (SUCCESS)
- … is honest (HONEST)
- … is neat and clean (CLEAN)
- … has good sense and sound judgment (JUDGMENT)
- … has self-control (CONTROL)
- … he acts like a boy or she acts like a girl (ROLE)
- … gets along well with other children (AMICABLE)
- … obeys his parents well (OBEY)
- … is responsible (RESPONSIBLE)
- … is considerate of others (CONSIDERATE)
- … is interested in how and why things happen (INTERESTED)
- … is a good student (STUDIOUS)
The three qualities chosen most often were Honest (selected by 66% of respondents), Judgment (39%), and Responsible (34%).
Unless you reversed the order of the choices on the card.
In which case, the top three choices were Honest (48%, a 17-point decrease), Judgment (41%, a 2-point increase), and Considerate (40%, a 15-point increase).
In fact, there was an average difference of ±6.5 percentage points across the 13 items because of response order bias, as large as the margin of sampling error at the 95% confidence level for a probability survey of U.S. adults with 230 respondents. And even that may understate the problem: the average difference was ±11.6 points for the six items that ended up in the top three and bottom three of the list (the other items sat in the middle on both versions of the card).
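For context, that ±6.5-point figure matches the textbook worst-case margin of sampling error for a simple random sample of 230 respondents (assuming p = 0.5 and a 95% confidence level, i.e., z = 1.96):

$$
\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{230}} \approx 0.065 = \pm 6.5 \text{ points}
$$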
In analyzing this data, Jon Krosnick and Duane Alwin, in their paper “An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement,” found that choices presented earlier in the list were disproportionately likely to be selected. Summarizing past research with similar findings, they offer two reasons for this primacy effect:
- “Items presented early may establish a cognitive framework or standard of comparison that guides interpretation of later items. Because of their role in establishing the framework, early items may be accorded special significance in subsequent judgments.
- “Items presented early in a list are likely to be subjected to deeper cognitive processing; by the time a respondent considers the final alternative, his or her mind is likely to be cluttered with thoughts about previous alternatives that inhibit extensive consideration of it. Research on problem-solving suggests that the deeper processing accorded to early items is likely to be dominated by generation of cognitions that justify selection of these early items. Later items are less likely to stimulate generation of such justifications (because they are less carefully considered) and may therefore be selected less frequently.”
And, yes, this effect has been replicated in online surveys.
Accordingly, when fielding web surveys, randomize wherever it is appropriate: for multiple-choice questions (as opposed to scale questions), randomize the order of the choices whenever they have no logical or inherent order. Most modern survey platforms also let you anchor a “None of the above” option to the bottom of the list for select-all-that-apply questions; the sketch below shows the underlying logic.
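If your survey platform can’t do this for you, the logic is straightforward. Here is a minimal Python sketch (the choice labels and function name are hypothetical, not drawn from the GSS card above) that shuffles the substantive choices independently for each respondent while keeping “None of the above” anchored at the bottom:

```python
import random

def randomize_choices(choices, anchored=("None of the above",), rng=None):
    """Return a per-respondent ordering of `choices`.

    Choices listed in `anchored` keep their place at the bottom of the list;
    everything else is shuffled uniformly at random.
    """
    rng = rng or random.Random()
    shuffled = [c for c in choices if c not in anchored]
    rng.shuffle(shuffled)  # uniform random permutation of the substantive choices
    anchored_tail = [c for c in choices if c in anchored]
    return shuffled + anchored_tail

# Hypothetical select-all-that-apply question
choices = [
    "Good manners",
    "Honesty",
    "Sound judgment",
    "Self-control",
    "None of the above",  # stays anchored at the bottom
]

# Each respondent sees an independently shuffled order
for respondent_id in range(3):
    print(respondent_id, randomize_choices(choices))
```

The design choice that matters is that every respondent gets an independent random order, so any residual primacy effect averages out across the sample rather than systematically favoring whichever items you happened to list first.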
(For dropdowns with long lists that respondents will skim, such as alphabetical lists of states, provinces, and countries, there is no need to randomize the order. Doing so would only confuse respondents.)
Randomizing choice lists is one of the easiest ways at your disposal to greatly improve the quality of your survey data.