How Long Did It Take the United States to Become an Optimal Currency Area?
The United States is often taken to be the exemplar of the benefits of a monetary union. Since 1788, Americans, with the exception of the Civil War years, have been able to buy and sell goods, travel, and invest within a vast area without ever having to be concerned about changes in exchange rates. But there was also a recurring cost. A shock, typically in financial or agricultural markets, would hit one region particularly hard. The banking system in that region would lose reserves, producing a monetary contraction that aggravated the effects of the initial disturbance. Plots of bank deposits by region show these patterns clearly. Often, an interregional debate over monetary institutions would follow, and the uncertainty created by the debate would further aggravate the contraction. During these episodes the United States might well have been better off if each region had had its own currency: changes in exchange rates could have secured equilibrium in interregional payments while monetary policy was directed toward internal stability. It is far from clear, to put it differently, that the United States was an optimal currency area. This pattern held until the 1930s, when institutional changes, such as increased federal fiscal transfers (which pumped high-powered money into regions that were losing reserves) and bank deposit insurance, addressed the problem of regional banking shocks. Political considerations, of course, ruled out separate regional currencies in the United States. But thinking about U.S. monetary history in this way clarifies the nature of the business cycle before World War II, and may suggest some lessons for other monetary unions.