Regulating Artificial Intelligence
Recent AI advancements promise substantial benefits but also pose significant societal risks. We show that an unregulated equilibrium is unlikely to mitigate these risks in a socially optimal way. Our analysis evaluates different regulatory approaches to AI, accounting for uncertainty and disagreement about the likelihood of misalignments that can generate societal costs. We characterize the Pigouvian taxes that deliver the efficient allocation. Our analysis emphasizes the practical difficulties of implementing these taxes when developers are protected by limited liability or when they hold different beliefs from the rest of society. We study the optimal time-consistent combination of testing and regulatory approval. Although this policy does not guarantee an efficient use of resources, it enables society to harness AI’s benefits while mitigating its risks.
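To fix ideas, a minimal textbook-style sketch of the Pigouvian logic referenced above (the notation $x$, $B$, $H$, $p$, $t$ is illustrative and not taken from the paper): a developer chooses an activity level $x$ with private benefit $B(x)$ and expected external harm $p\,H(x)$, where $p$ is the probability of a misalignment.

\[
  \max_{x}\; B(x) - p\,H(x)
  \quad\Longrightarrow\quad
  B'(x^{*}) = p\,H'(x^{*}),
\]
\[
  t = p\,H'(x^{*}).
\]

A per-unit tax $t$ equal to the marginal expected harm at the optimum makes the developer's private problem $\max_x\, B(x) - t\,x$ reproduce $x^{*}$. If the developer is shielded by limited liability or holds a more optimistic belief $\hat{p} < p$, the private first-order condition $B'(x) = \hat{p}\,H'(x)$ (or a capped liability payment) no longer coincides with the planner's, which is the implementation problem the abstract highlights.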