While crowdsourcing has been used in market research for years now, it is often compartmentalized as an input method – an open-ended way of gathering ideas from customers, prospects, and employees. In fact, crowdsourcing can – and often should – be used throughout the research process, as case studies from All Our Ideas, InSites Consulting, AOL, and IdeaScale make clear.

Idea Collection

The classic “use case” for crowdsourcing is creating a public or private community specifically for generating ideas. The favorite example is still – six years on – My Starbucks Idea, which has now generated nearly 200,000 ideas. The success of such sites raises a persistent question: how do we meaningfully analyze the results?

Idea Ranking

Many idea generation platforms skew the rating of ideas by emphasizing “hot” ideas. The “Popular Ideas” tab on the Salesforce Ideas platform, for instance, positively reinforces ideas that are already popular, making it harder for worthwhile new ideas to break through – especially those that have scrolled off the Recent Ideas tab.

All Our Ideas offers a unique take on this problem: load it with your ideas, and visitors to your site see two randomly selected ideas and vote for the one they like better. Every idea gets seen and evaluated, and visitors can’t stuff the ballot box. While contributors can submit new ideas as well, All Our Ideas is primarily a voting platform. For instance, a 2010 All Our Ideas study on creating “a greener, greater New York City” identified a bike-share program as one of the top suggestions out of the more than 200 ideas generated; Citi Bike launched just last year.
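The appeal of this approach is easy to see in code. Here is a minimal sketch of pairwise-vote ranking – not All Our Ideas’ actual algorithm (which uses a more sophisticated statistical model), and with a hypothetical idea list and simulated voter standing in for real participants. Ranking by win rate rather than raw votes means an idea’s exposure doesn’t determine its score:

```python
import random
from collections import defaultdict

def run_pairwise_poll(ideas, votes, pick_winner):
    """Show each voter two randomly chosen ideas; record which one wins."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for _ in range(votes):
        a, b = random.sample(ideas, 2)  # every idea has an equal chance of being seen
        wins[pick_winner(a, b)] += 1
        appearances[a] += 1
        appearances[b] += 1
    # Rank by win rate, not raw wins, so differences in exposure wash out.
    return sorted(ideas,
                  key=lambda i: wins[i] / appearances[i] if appearances[i] else 0,
                  reverse=True)

random.seed(0)  # make the simulation repeatable
ideas = ["bike share", "more trees", "bus lanes", "solar roofs"]
# Hypothetical voter who consistently prefers ideas earlier in the list.
pref = {idea: rank for rank, idea in enumerate(ideas)}
ranking = run_pairwise_poll(ideas, votes=1000,
                            pick_winner=lambda a, b: a if pref[a] < pref[b] else b)
print(ranking)
```

Even with this toy voter, the mechanism illustrates the point: new or rarely shown ideas compete on equal footing with established ones, because each matchup is random and each idea is judged only on the contests it actually appeared in.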

Idea Analysis

At the Green Book IIEX conference in Philadelphia last year, Niels Schillewaert of InSites Consulting presented an Air France-KLM case study on improving the transfer flight experience, which often stirs up negative emotions in travelers. After soliciting ideas from one community, InSites recruited a second community of 46 frequent fliers to analyze those ideas and add their own perspectives. The second group expanded the analysis from 26 insights to 68 and revised 6 marketing assumptions – an analysis that couldn’t have been accomplished by researchers alone.

At the 2013 ESOMAR 3D Digital Dimensions conference, Joseph Blechman highlighted the value of crowdsourcing the analysis of verbatim responses, specifically for categorization and sentiment analysis, giving an example in which multiple participants evaluate each comment to improve reliability.
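The redundancy idea is straightforward to sketch. Below is a hypothetical illustration – the comments, labels, and tie-handling rule are my own, not Blechman’s – of collapsing several crowd coders’ judgments per comment into one label by majority vote, with ties flagged for a researcher to adjudicate:

```python
from collections import Counter

def majority_label(labels):
    """Collapse several coders' labels for one comment into a single label.

    Ties are reported as 'unresolved' so a researcher can adjudicate them.
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "unresolved"
    return counts[0][0]

# Hypothetical judgments: three crowd coders per verbatim comment.
judgments = {
    "Lost my bag during the transfer": ["negative", "negative", "neutral"],
    "Gate agents were helpful": ["positive", "positive", "positive"],
    "Flight was on time, lounge was packed": ["positive", "negative", "neutral"],
}
coded = {comment: majority_label(votes) for comment, votes in judgments.items()}
print(coded)
```

Using several coders per comment turns each label into a small sample rather than a single opinion, and the disagreement rate itself is a useful signal of which comments are genuinely ambiguous.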

Research magazine recapped an IdeaScale community underway for the U.S. Navy. The Navy writes, “During the next four weeks, the CNO wants to hear from you: from the deck seaman to the goat locker, from the ensign to the Commanding Officer… submit ideas as well as vote and comment on other ideas. Collectively, the best ideas and feedback will ‘bubble to the top’.” To break out of the “hot ideas” bias, IdeaScale offers participants many ways to navigate ideas, including a search engine, tags, and recommendations of similar ideas to review. All of this helps the Navy with “Reducing Administrative Distractions” and putting “Warfighting First”.

While you should certainly put idea generation first when implementing crowdsourcing, make sure to expand your crowdsourcing to provide rich quantitative and qualitative assessment of ideas as well.