Rationalizable Learning
Working Paper 30873
DOI 10.3386/w30873
The central question we address in this paper is: what can an analyst infer from choice data about what a decision maker has learned? The key constraint we impose, which is shared across models of Bayesian learning, is that any learning must be rationalizable. To implement this constraint, we introduce two conditions, one of which refines the mean-preserving spread condition of Blackwell (1953) to take account of optimality, and the other of which generalizes the NIAC condition (Caplin and Dean 2015) and the NIAS condition (Caplin and Martin 2015) to allow for arbitrary learning. We apply our framework to show how identification of what was learned can be strengthened with additional assumptions on the form of Bayesian learning.
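For orientation, the sketch below states the two standard restrictions the abstract builds on, in illustrative notation that is not necessarily the paper's own: Bayes plausibility (the mean-preserving spread requirement that posteriors average back to the prior) and an NIAS-style optimality requirement in the spirit of Caplin and Martin (2015). The paper's own conditions refine and generalize these; NIAC, the across-problems counterpart, is omitted here.

```latex
% Illustrative notation (assumed, not the paper's own): finite state space \Omega,
% prior \mu, information strategy \pi placing probability \pi(\gamma) on posterior
% \gamma, action set A, utility u(a,\omega), and choice rule q(a \mid \gamma).

% (1) Bayes plausibility: posteriors are a mean-preserving spread of the prior.
\sum_{\gamma \in \mathrm{supp}\,\pi} \pi(\gamma)\,\gamma(\omega) \;=\; \mu(\omega)
\qquad \text{for all } \omega \in \Omega .

% (2) NIAS-style optimality: any action chosen at posterior \gamma is optimal there.
q(a \mid \gamma) > 0
\;\Longrightarrow\;
\sum_{\omega \in \Omega} \gamma(\omega)\,u(a,\omega)
\;\ge\;
\sum_{\omega \in \Omega} \gamma(\omega)\,u(b,\omega)
\quad \text{for all } b \in A .
```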