Edward C. Malthouse

Erastus Otis Haven Professor at Northwestern Medill IMC

Edward C. Malthouse is the Erastus Otis Haven Professor of IMC and Industrial Engineering and Management Science at Northwestern University and the Research Director for the Spiegel Center for Digital and Database Marketing. His research interests center on customer engagement and experiences; digital, social, and mobile media; big data; customer lifetime value models; predictive analytics; unsupervised learning; and IMC.

Identifying New Product Ideas: Waiting for the Wisdom of the Crowd or Screening Ideas in Real Time


One of our key aims was to determine whether a firm benefits from waiting to obtain “crowd” data, which take time to accumulate, or whether it can just as easily make decisions in real time, based on “contributor” and “content” information. Two separate models are built to assess the added value of crowd feedback in idea selection: the first contains content and contributor experience (scenario 1), and the second additionally includes crowd feedback (scenario 2).
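The two-scenario comparison can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature names, the simulated data, and the choice of random forests are all assumptions for demonstration purposes.

```python
# Sketch of the two-scenario comparison: fit one model on content +
# contributor features, a second that adds crowd feedback, and compare AUC.
# All column names and data below are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
ideas = pd.DataFrame({
    # content + contributor features (scenario 1)
    "distinctiveness": rng.uniform(0, 1, n),
    "prior_ideas": rng.poisson(2, n),
    # crowd features (added in scenario 2)
    "votes_per_day": rng.exponential(1.0, n),
    "comments_per_day": rng.exponential(0.5, n),
    "implemented": rng.integers(0, 2, n),   # 1 = firm implemented the idea
})

scenario1 = ["distinctiveness", "prior_ideas"]
scenario2 = scenario1 + ["votes_per_day", "comments_per_day"]

train, test = train_test_split(ideas, test_size=0.3, random_state=0)
for name, cols in [("scenario 1", scenario1), ("scenario 2", scenario2)]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train[cols], train["implemented"])
    auc = roc_auc_score(test["implemented"],
                        clf.predict_proba(test[cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On real data, the difference between the two printed AUCs is the added value of crowd feedback; with the random labels above, both hover near .5.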

In all cases, the model that included all of the 3Cs (content, contributor, crowd) outperformed the model that included only content and contributor experience. The improvement in the area under the receiver operating characteristic curve (AUC) was 0.113 (17.9%), 0.186 (29.6%), 0.295 (48.1%), and 0.274 (43.8%) for linear discriminant analysis (LDA), logistic regression (LR), stochastic adaptive boosting (AB), and random forests (RF), respectively. The DeLong test confirms that these increases are significant across all classifiers (p < .001). In scenario 1, the models predict idea implementation with an AUC between .613 (AB) and .630 (LDA); that is, the probability that a model ranks an implemented idea higher than a declined idea is between 61.3% and 63.0%. After including crowd feedback (scenario 2), this probability increases to between 74.3% (LDA) and 90.8% (AB).
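The ranking interpretation of AUC used here can be verified numerically: AUC equals the fraction of (implemented, declined) idea pairs that the model scores in the right order. The labels and scores below are made up for the check (the DeLong test itself is not reproduced here).

```python
# Numeric check: AUC = probability that a randomly chosen implemented idea
# is scored higher than a randomly chosen declined idea (ties count half).
import numpy as np
from itertools import product
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 1, 0, 0, 0, 0])                 # 1 = implemented
scores = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.2, 0.1])   # model scores

pos = scores[labels == 1]
neg = scores[labels == 0]
pairs = [(p > q) + 0.5 * (p == q) for p, q in product(pos, neg)]
manual_auc = sum(pairs) / len(pairs)

assert np.isclose(manual_auc, roc_auc_score(labels, scores))
print(round(manual_auc, 3))  # prints 0.833
```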

The performance of our automated models is further compared with that of convenient heuristics, which are intuitive, easy to implement, and used by firms for idea ranking. Three idea-ranking heuristics are investigated: the first is based on the number of crowd votes (high to low), the second on the number of crowd comments (high to low), and the third on random selection (equivalent to an unbiased coin toss).

For scenario 1, the use of our model improved idea selection by between 8.7% and 11.7% over ranking by votes (AUC = .564) and by between 22.6% and 26.0% over random idea selection (AUC = .5), but performed between 1.1% and 3.8% worse than ranking by comments (AUC = .637). Thus, idea selection on the basis of content coupled with contributor experience is superior to random selection or selection based on the number of votes, but marginally inferior to selection based on the number of comments. For scenario 2, the use of our model improved idea selection by between 31.7% and 61.0% over ranking by votes, between 16.6% and 42.5% over ranking by comments, and between 48.6% and 81.6% over random idea selection. In sum, using all 3Cs performs systematically better than the idea-ranking heuristics across several algorithms. AB and LDA were, respectively, the best- and worst-performing classifiers in scenario 2; in scenario 1 the ordering was reversed.
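The three heuristic baselines are simple to score with the same AUC yardstick. The sketch below uses simulated counts; on real data the vote and comment columns would come from the idea community's logs.

```python
# Score each heuristic ranking (votes, comments, random) with AUC.
# Counts and outcomes are simulated purely for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
votes = rng.poisson(3, n)              # crowd votes per idea
comments = rng.poisson(2, n)           # crowd comments per idea
implemented = rng.integers(0, 2, n)    # 1 = firm implemented the idea

heuristics = {
    "votes (high to low)": votes,
    "comments (high to low)": comments,
    "random (coin toss)": rng.uniform(size=n),
}
for name, score in heuristics.items():
    # AUC treats the heuristic's score as a ranking of the ideas
    print(name, round(roc_auc_score(implemented, score), 3))
```

A truly random ranking yields an AUC near .5, which is why random selection serves as the floor (AUC = .5) in the comparisons above.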


Our results suggest that waiting for crowd data (specifically, structured data: the number of votes and comments an idea receives per day) may be worthwhile: including this information improves idea selection by between 17.9% and 48.1% over using content and contributor experience alone. The nonlinear models (AB, RF) substantially outperformed the linear models (LDA, LR) once crowd data were incorporated, suggesting that the former capture nonlinearities and interactions that the latter cannot. This finding calls for more research on the use of nonlinear methods in idea selection. Our results further indicate that ideas need to surpass an initial threshold of at least one crowd vote or comment; clearing this threshold substantially improves the odds of implementation. These findings suggest that, after controlling for content and contributor experience, the decision criteria of the crowd and the firm are likely well aligned.

When the manager does not have time to wait for the crowd, he or she can still gain information from the idea’s content, which comprises unstructured textual data. To distinguish novel from familiar ideas, this study developed a measure of an idea’s distinctiveness relative to previously submitted ideas. Both highly dissimilar (i.e., distinctive, new) and highly similar (i.e., more of the same) ideas have a higher implementation probability, whereas moderately distinctive ideas are unlikely to be implemented. The use of nonlinear data analysis methods exposed this previously undiscovered relationship between idea distinctiveness and idea implementation. These observations make sense, since firms look for both incremental (“do better, yet more of the same”) and radical (“new to the firm or industry”) ideas. Highly innovative ideas typically demand more support from senior management and require substantial time and investment to develop, so it makes sense to first identify the ideas the firm considers quick wins. Given that community members vote and comment on ideas, it is reasonable that idea content is important but not the top predictor of implementation relative to crowd feedback: unlike human readers, text-mining approaches can fail to capture all of the information in an idea.
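One simple way to operationalize such a distinctiveness measure is TF-IDF vectors with cosine similarity; this is a hedged sketch, not the paper's exact text-mining pipeline, and the toy ideas below are invented.

```python
# Measure a new idea's distinctiveness as 1 minus its cosine similarity
# to the most similar previously submitted idea (TF-IDF representation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

previous_ideas = [
    "add dark mode to the mobile app",
    "let users schedule posts in advance",
    "improve search speed on the site",
]
new_idea = "allow scheduling posts ahead of time for users"

vec = TfidfVectorizer()
X = vec.fit_transform(previous_ideas + [new_idea])   # last row = new idea
sims = cosine_similarity(X[-1], X[:-1]).ravel()

# 0 = identical to an existing idea, 1 = shares no vocabulary with any
distinctiveness = 1.0 - sims.max()
print(round(distinctiveness, 3))
```

Scores near 0 flag "more of the same" ideas and scores near 1 flag genuinely new ones; per the finding above, both extremes are more promising than the middle of the range.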

Contributor experience was also predictive of idea implementation, although its role was the smallest of the 3Cs. In general, and consistent with Bayus (2013), more experience in generating ideas facilitates implementation: community members with a history of generating ideas, or whose ideas have been implemented previously, have a higher probability of having new ideas implemented than contributors with no such history. This observation is supported by research by Simonton (2003, 2004), who argues that a contributor’s productivity in generating implemented ideas is strongly associated with the number of ideas submitted. One explanation is that, as users generate more ideas and monitor the firm’s responses to them, they develop a better sense of what the firm considers valuable and adapt their suggestions accordingly. Although this may yield ideas that are less original and less valuable to the firm, that is not necessarily a problem in the case of small updates. Similarly, a positive effect of community membership (“tenure”) and community interaction (“number of previous contributor comments”) on idea implementation can be observed. This finding supports prior literature stating that, by communicating, contributors revise their own ideas, obtain multiple views on them, and are more exposed to problems faced by other consumers; as a result, they are more likely to generate ideas relevant to the firm.
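Contributor-experience features of this kind can be derived from a submission log. The sketch below shows one plausible construction (column names are illustrative, not the paper's); the shifts ensure each feature uses only history available at submission time.

```python
# Build contributor-experience features from an idea submission log:
# prior ideas submitted, prior ideas implemented, and tenure in days.
import pandas as pd

log = pd.DataFrame({
    "contributor": ["ann", "ann", "bob", "ann", "bob"],
    "submitted": pd.to_datetime(["2016-01-01", "2016-01-10", "2016-01-12",
                                 "2016-02-01", "2016-02-15"]),
    "implemented": [0, 1, 0, 1, 0],
})
log = log.sort_values("submitted")
g = log.groupby("contributor")

# history strictly before each submission (excludes the current idea,
# so the features do not leak the outcome being predicted)
log["prior_ideas"] = g.cumcount()
log["prior_implemented"] = g["implemented"].cumsum() - log["implemented"]
# days since the contributor's first submission ("tenure")
log["tenure_days"] = (log["submitted"]
                      - g["submitted"].transform("min")).dt.days
print(log)
```

In this toy log, ann's third idea carries `prior_ideas = 2`, `prior_implemented = 1`, and `tenure_days = 31`, mirroring the kind of history variables discussed above.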

It is important to note that the positive effects of prior idea generation took time to develop: contributors become better at generating valuable ideas in a nonlinear fashion. In line with previous research, firms are therefore advised to retain contributors for longer periods of time in order to capitalize fully on this effect. In our study, members who had been active for about a month (25 days) before suggesting an idea had a higher probability of getting that idea implemented.

This research was adapted and abridged from an article by Stephen Hoornaert, Michel Ballings, Edward C. Malthouse, and Dirk Van den Poel published in the Journal of Product Innovation Management (2017).

Written by Edward C. Malthouse, Erastus Otis Haven Professor at Northwestern Medill IMC
Edited by John Pietrowicz, Medill IMC Class of 2018


Bayus, B. L. 2013. Crowdsourcing new product ideas over time: An analysis of the Dell IdeaStorm community. Management Science 59 (1): 226–44.

Simonton, D. 2003. Scientific creativity as constrained stochastic behavior: The integration of product, person and process perspectives. Psychological Bulletin 129 (4): 475–94.

Simonton, D. 2004. Creativity in science: Chance, logic, genius, and zeitgeist. New York: Cambridge University Press.

