Learning When to Quit: An Empirical Model of Experimentation
Research productivity depends on the ability to discern whether an idea is promising, and the willingness to abandon those that are not. Economists know little about this process, however, because empirical studies of innovation typically begin with a sample of issued patents or published papers that were already selected from a pool of promising ideas. This paper unpacks the idea selection process using a unique dataset from the Internet Engineering Task Force (IETF), a voluntary organization that develops protocols for managing Internet infrastructure. For a large sample of IETF proposals, we observe a sequence of decisions to either revise, publish, or abandon the underlying idea, along with changes to the proposal and the demographics of the author team. Using these data, we provide a descriptive analysis of how R&D is conducted within the IETF and estimate a dynamic discrete choice model whose key parameters measure the speed at which author teams learn whether they have a good (i.e., publishable) idea. The estimates imply that sixty percent of IETF proposals are publishable, but only one-third of the good ideas survive the review process. Author experience and increased attention from the IETF community are associated with faster learning. Finally, we simulate two counterfactual innovation policies: an R&D subsidy and a publication prize. Subsidies have a larger impact on research output, though prizes perform better once researchers' opportunity costs are taken into account.
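As a point of reference, one stylized way to formalize the learning and stopping behavior the abstract describes is Bayesian updating about a binary idea quality combined with an optimal stopping problem. The sketch below uses illustrative notation ($\theta$, $p_t$, $f_G$, $f_B$, $\Pi$, $c$, $\beta$ are assumptions, not taken from the paper) and is not the paper's actual specification.

Let $\theta \in \{G, B\}$ indicate whether the idea is good (publishable), and let $p_t = \Pr(\theta = G \mid s_1, \dots, s_t)$ be the author team's belief after observing review signals $s_1, \dots, s_t$ with quality-dependent densities $f_G$ and $f_B$. Bayes' rule gives
$$
p_{t+1} \;=\; \frac{p_t\, f_G(s_{t+1})}{p_t\, f_G(s_{t+1}) + (1 - p_t)\, f_B(s_{t+1})},
$$
and in each round the team chooses among abandoning, attempting to publish, or paying a revision cost $c$ to draw another signal:
$$
V(p_t) \;=\; \max\Big\{\, 0,\;\; p_t\,\Pi,\;\; -c + \beta\, \mathbb{E}\big[V(p_{t+1}) \mid p_t\big] \,\Big\},
$$
where $\Pi$ denotes the value of a published protocol and $\beta$ a discount factor. In a model of this form, "faster learning" corresponds to more informative signals (a larger divergence between $f_G$ and $f_B$), which pushes beliefs toward 0 or 1 more quickly and shortens the experimentation phase.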