Sniff Tests as a Screen in the Publication Process: Throwing Out the Wheat with the Chaff
The increasing demand for empirical rigor has led to the growing use of auxiliary tests (balance, specification, over-identification, placebo, etc.) to support the credibility of a paper’s main results. We dub these “sniff tests” because rejection is bad news for the author and standards for passing are informal. Using a sample of nearly 30,000 published sniff tests collected from scores of economics journals, we study the use of sniff tests as a screen in the publication process. For the subsample of balance tests in randomized controlled trials, our structural estimates suggest that the publication process removes 46% of significant sniff tests, yet only one in ten of these is actually misspecified. For other tests, we estimate more latent misspecification and less removal. Surprisingly, more authors than one might expect would be justified in attributing a significant sniff test to random bad luck rather than to a flawed design.
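To make the arithmetic behind these figures concrete, the following is a minimal simulation sketch, not the paper’s structural model. It assumes, purely for illustration, that about 1% of designs are truly misspecified, that well-specified sniff tests reject at a nominal 5% rate, and that misspecified designs are flagged half the time; under those assumptions roughly nine in ten significant sniff tests are false alarms. The parameter names (share_misspec, power) and their values are hypothetical; only the 46% removal rate is taken from the abstract.

```python
import numpy as np

# Toy illustration of the abstract's logic -- NOT the paper's structural model.
# All parameter values below are assumptions chosen for illustration only.
rng = np.random.default_rng(0)

n_tests = 30_000        # order of magnitude of the paper's sample
share_misspec = 0.01    # assumed latent rate of truly misspecified designs
alpha = 0.05            # nominal size: well-specified tests reject 5% of the time
power = 0.50            # assumed rejection rate when the design is misspecified
removal = 0.46          # share of significant tests screened out (abstract's estimate)

# Which tests are truly misspecified, and which come back significant.
misspec = rng.random(n_tests) < share_misspec
significant = rng.random(n_tests) < np.where(misspec, power, alpha)

# Under these parameters, most significant sniff tests are false alarms:
# P(misspecified | significant) = 0.01*0.50 / (0.01*0.50 + 0.99*0.05) ~= 0.09.
bad_luck = 1 - misspec[significant].mean()
print(f"significant tests: {significant.sum()}")
print(f"share of significant tests due to bad luck: {bad_luck:.2f}")

# A publication screen that removes 46% of significant tests at random
# therefore discards mostly well-specified studies.
survives = significant & (rng.random(n_tests) > removal)
print(f"significant tests surviving the screen: {survives.sum()}")
```

In this toy setup a screen keyed on significance alone throws out far more sound studies than misspecified ones, which is the sense in which the publication process throws out the wheat with the chaff.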