Administrative Data Linking and Statistical Power Problems in Randomized Experiments
Objective:
The increasing availability of large administrative datasets has led to an exciting innovation in criminal justice research: the “low-cost” randomized trial, in which administrative data are used to measure outcomes in lieu of costly primary data collection. In this paper, we point out that randomized experiments that rely on linked administrative data face an unfortunate consequence: the destruction of statistical power. Linking data from an experimental intervention to administrative records that track the outcomes of interest typically requires matching datasets that lack a common unique identifier. To minimize mistaken linkages, researchers often use “exact matching” (retaining an individual only if all of their demographic variables match exactly in two or more datasets), so that speculative matches do not introduce errors into the analytic dataset.
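As a rough illustration of what exact matching amounts to in practice (a minimal sketch, not the implementation used in the paper), the linkage can be thought of as a join that requires agreement on every identifying field; the field names and the pandas workflow below are hypothetical. Individuals who fail to link carry no administrative record and are typically coded as having no observed outcome, which is the source of the measurement error discussed below.

```python
import pandas as pd

# Hypothetical identifying fields; disagreement on any one of them
# prevents a link under exact matching.
ID_FIELDS = ["first_name", "last_name", "dob", "sex"]

def exact_match(experiment: pd.DataFrame, admin: pd.DataFrame) -> pd.DataFrame:
    """Left-join administrative outcomes onto the experimental sample,
    keeping a link only when every identifying field agrees exactly."""
    linked = experiment.merge(
        admin[ID_FIELDS + ["rearrested"]],  # "rearrested" is a hypothetical outcome
        on=ID_FIELDS,
        how="left",
    )
    # Unlinked individuals have no administrative record and are coded as
    # zeros, i.e., as if the outcome never occurred.
    linked["rearrested"] = linked["rearrested"].fillna(0).astype(int)
    return linked
```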
Methods:
We derive an analytic result for the consequences of linkage errors for statistical power and show how the severity of the problem varies across combinations of the relevant inputs, including the matching error rate, the outcome density, and the sample size.
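The abstract does not reproduce the derivation, but the mechanism can be sketched with a short calculation: if a share 1 − γ of true outcomes fail to link and are coded as zeros, the observed outcome rates in both arms are attenuated toward zero and power falls accordingly. The code below uses a standard normal-approximation power formula for a two-sample test of proportions and made-up inputs; it is an illustrative sketch under that assumption, not the paper's analytic result.

```python
from scipy.stats import norm

def two_prop_power(p_c: float, p_t: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample test of proportions
    (normal approximation)."""
    se = (p_c * (1 - p_c) / n_per_arm + p_t * (1 - p_t) / n_per_arm) ** 0.5
    z = abs(p_t - p_c) / se
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

def power_with_linkage_error(p_c: float, p_t: float, n_per_arm: int,
                             match_rate: float, alpha: float = 0.05) -> float:
    """Power when a share (1 - match_rate) of true outcomes fail to link
    and are coded as zeros, attenuating the observed rates in both arms."""
    return two_prop_power(match_rate * p_c, match_rate * p_t, n_per_arm, alpha)

# Illustrative numbers only: a marginally powered experiment loses power
# quickly as the match rate falls below 1.
for gamma in (1.0, 0.9, 0.8, 0.7):
    print(gamma, round(power_with_linkage_error(0.30, 0.24, 500, gamma), 3))
```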
Results:
We show that this seemingly conservative approach leads to underpowered experiments and potentially to the failure of entire experimental literatures. For marginally powered studies, which are common in empirical social science, exact matching is particularly problematic.
Conclusions:
We conclude on an optimistic note by showing that simple machine-learning-based probabilistic matching algorithms allow criminal justice researchers to recover a considerable share of the statistical power that is lost to errors in data linking.
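The abstract does not specify which probabilistic matching algorithm is used; as a generic illustration only, a supervised record-linkage classifier can be trained on a small set of hand-labeled candidate pairs and used to score links, accepting pairs above a probability threshold rather than requiring exact agreement on every field. The sketch below (scikit-learn, with hypothetical fields and toy labels) is one such approach, not the paper's method.

```python
from difflib import SequenceMatcher

import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(a: dict, b: dict) -> list:
    """Similarity features for one candidate record pair (hypothetical fields)."""
    return [
        SequenceMatcher(None, a["first"], b["first"]).ratio(),
        SequenceMatcher(None, a["last"], b["last"]).ratio(),
        float(a["dob"] == b["dob"]),
    ]

# Tiny hand-labeled set of candidate pairs (1 = same person); a real
# application would use many more labeled pairs.
labeled_pairs = [
    ({"first": "jon", "last": "smith", "dob": "1990-01-02"},
     {"first": "john", "last": "smith", "dob": "1990-01-02"}, 1),
    ({"first": "john", "last": "smith", "dob": "1990-01-02"},
     {"first": "jane", "last": "smythe", "dob": "1985-07-11"}, 0),
    ({"first": "maria", "last": "garcia", "dob": "1978-03-15"},
     {"first": "maria", "last": "garcia", "dob": "1978-03-15"}, 1),
    ({"first": "maria", "last": "garcia", "dob": "1978-03-15"},
     {"first": "mario", "last": "gracia", "dob": "1979-03-15"}, 0),
]

X = np.array([pair_features(a, b) for a, b, _ in labeled_pairs])
y = np.array([label for _, _, label in labeled_pairs])
clf = LogisticRegression().fit(X, y)

def link_probability(a: dict, b: dict) -> float:
    """Predicted probability that two records refer to the same person."""
    return clf.predict_proba([pair_features(a, b)])[0, 1]

# Candidate pairs scoring above a chosen threshold are accepted as links,
# recovering many of the true matches that exact matching would discard.
```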