Sequential Testing of Product Designs: Implications for Learning

Sanjiv Erat and Stylianos Kavadias
Additional contact information:
Sanjiv Erat: Rady School of Management, University of California at San Diego, La Jolla, California 92093
Stylianos Kavadias: College of Management, Georgia Institute of Technology, Atlanta, Georgia 30308

Management Science, 2008, vol. 54, issue 5, 956-968

Abstract: Past research in new product development (NPD) has conceptualized prototyping as a "design-build-test-analyze" cycle to emphasize the importance of analyzing test results in guiding the decisions made during the experimentation process. New product designs often involve complex architectures and incorporate numerous components, making the ex ante assessment of their performance difficult. Still, design teams often learn from test outcomes during iterative test cycles, enabling them to infer valuable information about the performance of (as yet) untested designs. We conceptualize the extent of useful learning from the analysis of a test outcome as depending on two key structural characteristics of the design space: whether the designs are "close" to each other (i.e., similar at the attribute level) and whether the design attributes exhibit nontrivial interactions (i.e., the performance function is complex). This study explicitly considers the design space structure and the resulting correlations among design performances, and examines their implications for learning. We derive the optimal dynamic testing policy and analyze its qualitative properties. Our results suggest that continuation is optimal only when the previous test outcome lies between two thresholds. Outcomes below the lower threshold indicate an overall low-performing design space, so continued testing is suboptimal. Test outcomes above the upper threshold, on the other hand, merit termination because they signal to the design team that the likelihood of obtaining a design with still higher performance (given the experimentation cost) is low. We find that accounting for the design space structure splits the experimentation process into two phases: an initial exploration phase, in which the design team focuses on obtaining information about the design space, and a subsequent exploitation phase, in which the team, given its understanding of the design space, focuses on obtaining a "good enough" configuration. Our analysis also provides useful contingency-based guidelines for managerial action as information is revealed through the testing cycle. Finally, we extend the optimal policy to account for design spaces that contain distinct design subclasses.
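
The two-threshold continuation rule described in the abstract can be illustrated with a short simulation. The Python sketch below is only a minimal illustration under assumed inputs: the i.i.d. Gaussian performance draws, the fixed threshold values, the per-test cost, and the function names are all hypothetical, whereas in the paper the thresholds follow from the design space structure and the correlations among design performances.

    import random

    def two_threshold_policy(draw, lower, upper, cost, max_tests, rng):
        """Stop/continue rule: keep testing only while the latest outcome
        lies strictly between `lower` and `upper`.

        outcome <= lower -> the design space looks low-performing; abandon.
        outcome >= upper -> a "good enough" design is in hand; terminate.
        """
        best, spent = float("-inf"), 0.0
        for _ in range(max_tests):
            outcome = draw(rng)   # build and test one design
            spent += cost         # each design-build-test-analyze cycle costs `cost`
            best = max(best, outcome)
            if outcome <= lower or outcome >= upper:
                break             # outside the continuation region: stop testing
        return best, spent

    rng = random.Random(42)
    # Hypothetical design space: independent normal performance draws (the
    # paper instead models correlated performances induced by design proximity).
    draw = lambda r: r.gauss(0.0, 1.0)
    best, spent = two_threshold_policy(draw, lower=-1.0, upper=1.5,
                                       cost=0.1, max_tests=20, rng=rng)
    print(f"best observed performance: {best:.3f}; total testing cost: {spent:.2f}")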

Keywords: sequential testing; design space; complexity; contingency analysis
Date: 2008
Citations: 10 (in EconPapers)

Downloads: (external link)
http://dx.doi.org/10.1287/mnsc.1070.0784 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:54:y:2008:i:5:p:956-968

More articles in Management Science from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Handle: RePEc:inm:ormnsc:v:54:y:2008:i:5:p:956-968