Noisy Foresight
Rational agents must perform backward induction by reasoning contingently about future states and actions, yet failures of backward induction and contingent reasoning are ubiquitous. How do boundedly-rational agents make decisions when they fail to correctly forecast their own future actions? We construct an individual decision-making experiment that yields a rich dataset in which subjects must reason only about their own future actions. We document substantial mistakes relative to the rational benchmark and use the data to estimate several candidate models of boundedly-rational foresight. Behavior is best explained by a model in which subjects expect to make more mistakes when the payoff consequences of their future actions are more similar.
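
The idea that mistakes become more likely as payoffs become more similar can be illustrated with a logit (softmax) specification. The sketch below is only illustrative: the logit form, the noise parameter, and the two-option menus are assumptions made for exposition, not the paper's estimated model.

```python
import numpy as np

def softmax(utilities, noise):
    """Logit choice probabilities: nearly uniform when payoffs are similar,
    nearly deterministic when one option clearly dominates."""
    z = np.array(utilities) / noise
    z -= z.max()                 # numerical stability
    p = np.exp(z)
    return p / p.sum()

def noisy_foresight_value(payoffs_tomorrow, noise):
    """Value assigned today to a future decision node, anticipating that
    tomorrow's choice will be made with logit noise."""
    p = softmax(payoffs_tomorrow, noise)
    return float(p @ np.array(payoffs_tomorrow))

# Hypothetical two-stage problem: each action today leads to a different
# menu of payoffs tomorrow.
menus = {"A": [10.0, 9.5],   # similar payoffs -> many anticipated mistakes
         "B": [8.0, 2.0]}    # dissimilar payoffs -> forecast nearly exact

noise = 1.0  # illustrative noise parameter, not an estimate

noisy    = {a: noisy_foresight_value(m, noise) for a, m in menus.items()}
rational = {a: max(m) for a, m in menus.items()}
print(noisy)     # anticipated values under noisy foresight
print(rational)  # rational-benchmark values under perfect foresight
```

Running this shows that the menu with similar payoffs (A) loses much more anticipated value relative to the rational benchmark than the menu with dissimilar payoffs (B), which is the qualitative pattern the abstract describes.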