Respondents will answer the questions you ask, but they may not spend the effort to be accurate. For some questionnaires, accuracy simply isn’t possible.
For instance, we were asked to run a matrix of 40 attributes about 8 brands (that’s 320 questions!). The kicker was that these were 40 attributes about electric companies. How many attributes could you judge your own electric company on? Maybe price, frequency of outages, handling of outages, billing clarity, overall brand image? Do you know what energy sources your electric company is using? Do you know its commitment to sustainability? Do you know what charities it supports? (We still have 32 attributes to go.) Now which of those attributes can you answer for an electric company you’re not doing business with?
For another study, respondents were asked to provide their annual spending on an activity for the past three years, breaking it down in detail for each year across a range of products and services. It is extremely unlikely people could quantify their spending at this level of detail with any reliability.
For a third study, we were asked to profile consumers’ technology usage in detail across six categories. The overall questionnaire ran 80 pages and, at its longest path without skip patterns (for someone who had every type of device), 120 questions. At over 20,000 words, the questionnaire would take the average English speaker, reading at roughly 200 words per minute, 100 minutes just to read! That’s before counting any time to actually answer the questions.
In all these cases, the questionnaire author had a matrix or model they wanted to develop, and they assumed that directly asking respondents was the best way to build it. They also assumed that consumers’ mental model of the industry, including which details consumers know and recall, was close to their own. And they gave respondents no out if they didn’t have an opinion about electric company #8’s quality on attribute #40, or didn’t recall their 2015 expenditures. Bad questionnaires lead to bad data.
As a privately held research firm that focuses on providing solutions, we don’t have to sell for the sake of making monthly or quarterly numbers. While we could program and field such studies, we wouldn’t be able to stand behind the quality of the results. Instead, we suggested ways to streamline and refactor each study. Two of these prospects took their business elsewhere (and we later heard that they were very unhappy with the survey results they got).
Fundamentally, poor survey research is a waste of everyone’s time and money. The data simply can’t be depended on. For instance, years ago when I was a junior researcher, a client tasked us with surveying record-store owners about the percentage of sales each year by medium (vinyl, cassette, CD). I dutifully called store owners, but most didn’t know and either outright refused to participate or made up numbers. The resulting model didn’t match the objective estimates others had prepared, and we ended up building our model primarily on top of other firms’ published models.
Worse, tedious surveys discourage people from participating in surveys altogether, making surveys in general less representative as response rates drop. For the really tedious surveys, one has to wonder whether the people who actually complete them are representative of the wider market being studied.
When presented with questionnaires that won’t meet the client’s objectives, what can you do?
For detailed brand attributes, you can ask for a list of words or phrases that describe the brand, and you can run a streamlined grid with a subset of attributes. For the electric-company study, we proposed asking about ten attributes for brands respondents were currently doing (or had ever done) business with, and three or four for the rest. In some studies, we’ve displayed a random subset of attributes; in others, we’ve asked respondents to check the items they knew about each brand, then piped those selections into the grids. In these cases, the grid was still large, but each respondent only saw a tailored subset of it.
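As a minimal sketch of that piping-plus-rotation logic (in Python, with hypothetical attribute names and a made-up core/rotation split; real survey platforms implement this with their own piping and randomization features):

```python
import random

# Hypothetical attribute names and core/rotation split, for illustration only.
ATTRIBUTES = [f"attribute_{i}" for i in range(1, 41)]
CORE = ATTRIBUTES[:5]   # always shown (think price, outages, billing clarity)
POOL = ATTRIBUTES[5:]   # candidates for random rotation

def grid_attributes(checked_familiar, n_random=5, rng=random):
    """Build the tailored attribute grid for one respondent.

    checked_familiar: the items this respondent checked on the
    'which of these do you know about this brand?' screen.
    """
    selected = list(CORE)
    # Everything the respondent claimed familiarity with gets asked.
    selected += [a for a in checked_familiar if a not in selected]
    # Top up with a random rotation so every attribute still
    # accumulates sample across respondents.
    remaining = [a for a in POOL if a not in selected]
    selected += rng.sample(remaining, min(n_random, len(remaining)))
    return selected

print(grid_attributes(["attribute_12", "attribute_30"]))
```

The top-up rotation is the key design choice: no single respondent rates all 40 attributes, but each attribute still accumulates responses across the sample.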
For trend research, sometimes all you can ask is whether respondents are spending more, less, or the same on a category as they did the prior year (see this post on bipolar questions), and then estimate the prior year’s volume from that. For one establishment survey, we survey U.S. small-business owners every year right after the federal tax deadline and encourage them to get out their tax returns to answer it; in exchange, they receive detailed comparisons to others in their industry, including benchmarks on staff and owner compensation.
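To show how a bipolar answer can feed such an estimate, here’s a back-of-envelope sketch; the ±15% typical change is purely an assumption for illustration, not a figure from any study:

```python
# Back-of-envelope only: the +/-15% typical change is an assumption
# made for this sketch, not a figure from any actual study.
ASSUMED_DELTA = 0.15

def prior_year_estimate(current_spend, direction):
    """Back out last year's spend from this year's and a bipolar answer."""
    if direction == "more":      # spending more now than last year
        return current_spend / (1 + ASSUMED_DELTA)
    if direction == "less":
        return current_spend / (1 - ASSUMED_DELTA)
    return current_spend         # "same"

# (current-year spend, bipolar answer) for three hypothetical respondents
responses = [(1200, "more"), (800, "same"), (500, "less")]
print(round(sum(prior_year_estimate(s, d) for s, d in responses)))  # ~2432
```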
For many long questionnaires with a repetitive structure about brands, we randomly choose three brands to ask those detailed questions about. This dramatically shortens the questionnaire from the respondent’s perspective. Occasionally, we customize the randomization with market-share rules so that low-incidence brands are always selected (be careful to note that this can introduce a skew for the high-market-share brands).
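Here’s a minimal sketch of that market-share rule, with hypothetical brands, shares, and cutoff chosen purely for illustration:

```python
import random

# Hypothetical brands, shares, and cutoff, for illustration only.
MARKET_SHARE = {
    "Brand A": 0.40, "Brand B": 0.25, "Brand C": 0.15,
    "Brand D": 0.10, "Brand E": 0.07, "Brand F": 0.03,
}
LOW_INCIDENCE_CUTOFF = 0.08  # brands below this are always asked about

def pick_brands(considered, n=3, rng=random):
    """Pick n brands for the detailed question module.

    considered: brands this respondent qualified for. Low-incidence
    brands are forced in so they accumulate enough completes; that
    forcing is the source of the skew noted above, since it leaves
    fewer random slots for the high-share brands.
    """
    forced = [b for b in considered if MARKET_SHARE[b] < LOW_INCIDENCE_CUTOFF]
    rest = [b for b in considered if b not in forced]
    picks = forced[:n]
    picks += rng.sample(rest, min(n - len(picks), len(rest)))
    return picks

print(pick_brands(["Brand A", "Brand B", "Brand E", "Brand F"]))
```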
One of our clients asks a great question at the end of their questionnaires: “How confident are you in your answers to this survey?” (from “extremely confident” to “not at all confident”). We still pay the incentives to those who were not confident, but our client can filter their results out of the analysis.
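The filtering step itself is simple; a sketch with pandas, using made-up column names and data:

```python
import pandas as pd

# Made-up completes and column names, for illustration.
completes = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "confidence": ["Extremely confident", "Somewhat confident",
                   "Not at all confident", "Very confident"],
    "q1_rating": [5, 4, 2, 5],
})

# Everyone above is still paid their incentive; the low-confidence
# completes are simply excluded from tabulation.
analyzable = completes[completes["confidence"] != "Not at all confident"]
print(analyzable)
```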
Asking Too Often
A corollary to asking too much is asking too often. And, again, as a solution-oriented company, we often push for trackers not only to be streamlined but also to be run less frequently. Whenever the results are highly stable, we’ve encouraged clients with monthly trackers to make them bimonthly, bimonthly trackers to go quarterly, quarterly trackers to go annual, and, in one case, an annual tracker to run every other year. (We wouldn’t recommend this for trackers with customer-experience case management, where the survey is the primary means of identifying dissatisfied customers and intervening to improve their experience.)
Why would we encourage you to run shorter surveys and to run surveys less often? Because we’re in this for the long haul (seriously, we have a 20-year business plan), and we want to provide you the best data we can, given your budget.
Giving you the best data requires that we all recognize that, too often, we are simply asking too much of respondents.