Bootstrap Diagnostics for Irregular Estimators
Empirical researchers frequently rely on normal approximations to summarize and communicate uncertainty about their findings to their scientific audience. When such approximations are unreliable, they can lead the audience to make misguided decisions. We propose to measure the failure of the conventional normal approximation for a given estimator by the total variation distance between a bootstrap distribution and the normal distribution parameterized by the point estimate and standard error. For a wide class of decision problems and a class of uninformative priors, we show that a multiple of the total variation distance bounds the mistakes that result from relying on the conventional normal approximation. In a sample of recent empirical articles that use a bootstrap for inference, we find that the conventional normal approximation is often poor. We suggest and illustrate convenient alternative reports for such settings.
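To make the proposed diagnostic concrete, the following is a minimal sketch, not the paper's implementation, of how one might approximate the total variation distance between a bootstrap distribution and the normal distribution parameterized by the point estimate and standard error. The kernel smoothing of the bootstrap draws, the integration grid, and the simulated data are illustrative assumptions rather than choices made in the paper.

```python
# Illustrative sketch (assumptions noted in comments): approximate the total
# variation distance between a bootstrap distribution and the normal
# approximation N(point estimate, standard error^2).
import numpy as np
from scipy import stats

def tv_diagnostic(bootstrap_draws, estimate, std_error, grid_size=2048):
    """Approximate the TV distance between a kernel-smoothed bootstrap
    distribution and N(estimate, std_error^2) by numerical integration.
    The kernel smoothing is an assumption made so that two densities
    can be compared on a common grid."""
    draws = np.asarray(bootstrap_draws, dtype=float)
    kde = stats.gaussian_kde(draws)
    lo = min(draws.min(), estimate - 6 * std_error)
    hi = max(draws.max(), estimate + 6 * std_error)
    grid = np.linspace(lo, hi, grid_size)
    boot_density = kde(grid)
    normal_density = stats.norm.pdf(grid, loc=estimate, scale=std_error)
    # TV distance = (1/2) * integral of |f_boot - f_normal|.
    return 0.5 * np.trapz(np.abs(boot_density - normal_density), grid)

# Example usage with simulated, skewed "bootstrap" draws; the point estimate
# and standard error are stood in for by the draws' mean and standard
# deviation purely for illustration.
rng = np.random.default_rng(0)
draws = rng.lognormal(mean=0.0, sigma=0.5, size=2000)
estimate, std_error = draws.mean(), draws.std(ddof=1)
print(f"Approximate TV distance from the normal approximation: "
      f"{tv_diagnostic(draws, estimate, std_error):.3f}")
```

A value near zero indicates that the bootstrap distribution is close to the reported normal approximation, while larger values flag settings in which readers relying on the point estimate and standard error alone may be misled.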