Information-Constrained Coordination of Economic Behavior
We analyze a coordination game with information-constrained players. Each player's action is based on a noisy, compressed representation of the payoffs of the particular game being played, where the compressed representation is a latent state learned by a variational autoencoder (VAE). Our generalized VAE is optimized to trade off the average payoff obtained over a distribution of possible games against a measure of the congruence between the agent's internal model and the statistics of its environment. We apply our model to the coordination game in the experiment of Frydman and Nunnari (2023), and show that it offers an explanation for two salient features of the experimental evidence: the relatively continuous variation in the players' action probabilities as the game payoffs change, and the dependence of the degree of stochasticity of players' choices on the range of game payoffs encountered across trials. Our approach also predicts how play should gradually adjust to a change in the distribution of game payoffs that players encounter, offering an explanation for the history-dependent play documented by Arifovic et al. (2013).
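To make the objective described above concrete, the following is a minimal illustrative sketch, not the paper's implementation: an agent observes a game payoff, encodes it into a noisy Gaussian latent state, maps the latent to an action probability, and is scored by expected payoff minus a penalty for the divergence between the encoder and a standard-normal prior (standing in for the congruence term). All parameter names (`w`, `sigma`, `beta`) and functional forms are assumptions chosen for simplicity.

```python
import math
import random

def kl_gaussian(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ): the information cost of the latent."""
    return 0.5 * (mu**2 + sigma**2 - 1.0 - math.log(sigma**2))

def objective(x, w=1.0, sigma=0.5, beta=0.1, n_samples=2000, seed=0):
    """Monte-Carlo estimate of  E[payoff] - beta * KL  for payoff level x.

    The encoder mean is linear in the payoff (an illustrative choice);
    the action probability is a logistic function of the sampled latent.
    """
    rng = random.Random(seed)
    mu = w * x                                  # encoder mean
    total = 0.0
    for _ in range(n_samples):
        z = mu + sigma * rng.gauss(0.0, 1.0)    # reparameterized latent draw
        p = 1.0 / (1.0 + math.exp(-z))          # action probability from latent
        total += p * x                          # payoff if the action succeeds
    return total / n_samples - beta * kl_gaussian(mu, sigma)
```

Because the latent is noisy, the action probability varies continuously with the payoff rather than jumping at a threshold, and a wider payoff distribution (larger typical `|mu|`) raises the information cost, forcing noisier choices, which is the qualitative pattern the abstract attributes to the experimental evidence.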