Communicating Scientific Uncertainty via Approximate Posteriors
We cast the problem of communicating scientific uncertainty as one of reporting a posterior distribution on an unknown parameter to an audience of Bayesian decision-makers. We establish novel bounds on the audience’s regret when the analyst reports an approximation to a posterior that the audience treats as exact. Under a palatable restriction on the audience’s decision problems, the bounds take an especially convenient form. Under a further restriction on the audience’s priors, a bootstrap distribution can serve as a stand-in posterior. We propose a practical recipe for checking whether a conventional statistical report (e.g., a normal distribution parameterized by a point estimate and standard error) is a good approximation, and for improving the report if it is not. We illustrate our proposals using the articles in the 2021 American Economic Review that use a bootstrap for inference.
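The recipe described above can be made concrete with a minimal sketch. The code below assumes the check amounts to comparing the conventional normal report against a bootstrap distribution of the same statistic; the discrepancy measure used here (a Kolmogorov-Smirnov distance) and the example data are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data and statistic: the mean of a skewed sample.
data = rng.exponential(scale=1.0, size=200)
theta_hat = data.mean()
se_hat = data.std(ddof=1) / np.sqrt(len(data))

# Nonparametric bootstrap distribution of the statistic,
# playing the role of the stand-in posterior.
B = 5000
boot = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(B)
])

# Conventional report: Normal(theta_hat, se_hat). One simple
# discrepancy check (an illustrative choice) is the Kolmogorov-
# Smirnov distance between the bootstrap draws and that normal.
report = stats.norm(loc=theta_hat, scale=se_hat)
ks = stats.kstest(boot, report.cdf)
print(f"estimate {theta_hat:.3f}, SE {se_hat:.3f}, "
      f"KS distance to normal report {ks.statistic:.3f}")
```

A large distance would flag the normal report as a poor approximation, suggesting the analyst report the bootstrap distribution (or a richer summary of it) instead.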