Pressman and Wildavsky offer some general principles to explain why even a program as straightforward and enthusiastically sponsored as the Oakland Project failed—and why other more complex and controversial ones can be expected to fail as well.⁵
First, the many participants, like those in Oakland, are likely to have different, often inconsistent perspectives on the program. They may agree with its substantive goals yet oppose the means for effectuating it for any of a number of reasons: because their incentives differ; because the program conflicts with other, more compelling bureaucratic priorities; because they differ about the program’s legal, procedural, or technical requirements; or because they lack the political or other resources necessary to wage an effective fight for it. The more complex the program, the more moving parts it has, the more agencies it involves, the less likely it is to succeed. (By this logic, the Volcker Rule, discussed below, seems destined to fail.)
Second, implementation must pass through multiple decision points (most of which may effectively be veto points), a number that only increases as the implementation process unfolds. In the relatively simple Oakland Project, the authors identified thirty separate clearances, involving seventy separate agreements, that were necessary before it could proceed. “The probability of agreement by every participant on each decision point,” they conclude, “must be exceedingly high for there to be any chance at all that a program will be brought to completion.” Moreover, the ability of each decision point to extract a price for its assent will cumulate into costs and constraints that deeply compromise the program’s effectiveness.
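The arithmetic behind their conclusion is worth making explicit. As a rough sketch, suppose (purely for illustration; the independence and uniformity assumptions are mine, not theirs) that each clearance is an independent agreement with the same probability $p$ of success. The chance of completing all $n$ of them is then

\[
P(\text{completion}) = p^{n}.
\]

With the Oakland Project’s seventy agreements ($n = 70$), even near-certain assent at every point, say $p = 0.99$, yields $0.99^{70} \approx 0.49$, roughly a coin flip; at $p = 0.95$ the chance collapses to $0.95^{70} \approx 0.03$.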
Third, delay is a formidable barrier to implementation. Whether intentional or not, it can defeat, deform, and sap a program, while increasing its cost. Delay is a powerful weapon for those who either oppose the program or want to extract concessions from its proponents, and the greater their power and the intensity of their preferences, the more attractive and effective this tactic will be. By the same token, proponents’ urgent desire to implement a program quickly—in Oakland, within the four months left before the congressional appropriation would expire—increases the leverage that those in a strategic position to delay implementation will have to extract programmatic concessions (or even bribes).
The Volcker Rule—named after a genuine hero of monetary policy making who has promoted it—illustrates this delay problem (as well as others). An integral part of the effort to implement the Dodd-Frank Act, the rule seeks to restrict banks’ ability to trade in risky assets for their own accounts using funds that are federally insured or that might trigger a taxpayer-financed bailout in the event of the bank’s insolvency. The protracted impasse over the rule delayed its final adoption until December 2013, two and a half years after the original statutory deadline. The delay was partly due to fierce industry opposition and disputes within and among the Securities and Exchange Commission (SEC) and four other regulatory agencies concerning its provisions.⁶
But the delay, which will leave the banks little time to figure out how to comply by the thrice-extended enforcement date (now July 2015), also reflects the law’s unprecedented complexity and its failure to draw clear lines among banks’ different investment activities, including market making, risk hedging, underwriting, and proprietary trading. (As Alan Blinder, a former Federal Reserve vice chair, puts it, proprietary trading “often fails the Potter Stewart test: you don’t know it when you see it.”⁷) It also reflects the law’s failure to clarify the employee participation provision’s scope;⁸ the intricacies of the ever-evolving, increasingly arcane financial products that the banks often trade; the uncertainty about how different restrictions would affect their ability to compete with foreign banks operating under different legal regimes; and the definition of and interactions among the seventeen metrics that the banks would have to calculate each day and report to regulators each month. In designing the rule, the regulators have been stymied by all of the reasons discussed in chapter 7 as to why markets subvert policy coherence. The financial markets in question are extraordinarily complex and diverse, arguably beyond the ability of the rule drafters (far beneath the secretary of the Treasury) to fully comprehend, much less control. These markets are constantly evolving, in order both to exploit fast-changing market conditions and opportunities and to minimize the rule’s anticipated costs. To implement the seventeen metrics, regulators must gather, process, assess, and act upon enormous quantities of transactional data, based on banks’ answers to a myriad of opaque, hard-to-answer questions.*
The banks and their elite lawyers, of course, are hard at work fashioning evasive strategies. No wonder President Barack Obama, more than three years after Dodd-Frank’s enactment, publicly worried that it might not be properly implemented.⁹
Although the rule means to limit market-driven moral hazard, this outcome is doubtful because Dodd-Frank will actually expand the safety net.† Because it will restrict only U.S. companies, it will intensify international competitive and domestic political pressures that regulators can neither ignore nor alleviate without undermining the rule’s bite. As already noted, the high stakes have elicited intensive industry lobbying and protracted delays in finalizing the rule, not to speak of implementing it. Its enforcement will be hobbled by all of the factors that, as we have seen, weaken government enforcement efforts generally. Its restrictions will engender black markets operating in its shadow. Nor is there any nonmarket substitute for these firms that the regulators might look to for guidance, as statist economies have invariably learned at great cost. Even setting aside the feverish politics surrounding the rule, it will be miraculous if at the end of the day the regulators get it right.‡
Fourth, and most fundamentally, a flawed theory may doom a program. The success of any policy requires at a minimum that its designers and implementers know which factors and conditions are likely to cause which consequences, yet such knowledge—the policy’s animating theory—is notoriously elusive when dealing with social behaviors. The problem, however, is even deeper than this because a policy’s theoretical coherence is systematically undermined by the distorting factors analyzed in chapters 5 and 6: irrationality, skewed incentives, lack of reliable information, policy rigidity, lack of credibility with the other necessary participants, and mismanagement.
In the Oakland case, for example, Pressman and Wildavsky found that
[t]he economic theory [animating the employment projects] was faulty because it aimed at the wrong target—subsidizing the capital of business enterprises rather than their wage bill. Instead of taking the direct path of paying the employers a subsidy on wages after they had hired minority personnel, the EDA program expanded their capital on the promise that they would later hire the right people. Theoretical defects exacerbated bureaucratic problems. Numerous activities had to be carried on—assessing the viability of marginal enterprises, negotiating loan agreements, devising and monitoring employment plans—that would have been unnecessary if a more direct approach had been taken.¹⁰
Chapter 7 explained how almost all public policies are embedded in one or more markets. As in the Oakland Project, policies cannot be effective unless they can solve the implementation problems that their surrounding markets engender. Such a solution requires, at a minimum, that the policy makers know how these markets work, whether and in what particular ways they fail (if they do), and how effectively, if at all, the government policy can manipulate them. In the case of embryonic or future markets, selecting an appropriate policy is even more delicate, as exemplified by the current status of commercial space travel.¹¹
We can distinguish at least six different ways in which policy makers try to meet this challenge. Some attempt to perfect a relevant market by improving consumer information, controlling externalities, or increasing competition. Some supplement the market by providing public goods or infrastructure that make it more efficient. Some seek to suppress it, perhaps by banning or criminalizing certain transactions. Some try to simplify it by standardizing contract terms or product features. Some try to redirect it—for example, by trying to induce banks to lend in certain communities that they would not otherwise serve. Some try to midwife an infant market. And some try to mobilize it to serve regulatory goals, as with pollution markets. None of these approaches can succeed unless policy makers have a correct, detailed understanding of how these markets actually work and possess the instruments to bend them to their will. In reality, however, the markets that surround policies are often so differentiated, detail-specific, dynamic, and opaque to centralized understanding and control that even the most sophisticated policy maker is apt to misunderstand the relationships among the markets’ myriad moving parts.
I shall use these approaches to organize the remainder of this chapter, bringing together the empirical findings of policy assessments of numerous federal programs.
PERFECTING MARKETS
The most comprehensive, detailed analysis of existing programs designed to improve imperfect markets was conducted by Clifford Winston, who canvassed every scholarly article evaluating these efforts. His overall finding in synthesizing this large body of research—that policy makers often exaggerated the extent of market failures and adopted corrective programs that created government failures of greater magnitude—was briefly foreshadowed in chapters 1 and 5. His more detailed, program-specific findings, again based on the scholarly evidence, are as follows. (Market-perfecting policies focused on improving consumer information are discussed below in the section “Simplifying Markets.”)
Antitrust policy. Finding no serious anticompetitive problems in the U.S. economy, Winston considered whether this might be due to antitrust policies directed at monopolization, collusion, and mergers. Studies of the cases in which a court accepted the government’s monopolization argument “consistently found that the court’s relief failed to increase competition and reduce consumer prices.” In the landmark case that broke up AT&T in 1984, “antitrust policy was not necessary to restrain a monopolist from engaging in restrictive practices to block competition; rather it was necessary to overcome anticompetitive policies by another federal regulatory agency [the Federal Communications Commission]. In the absence of regulatory failure, the large costs of breaking up AT&T could have been avoided…. Given the protracted length of a monopolization case (some of the cases noted earlier took more than a decade to resolve), federal antitrust actions are likely to lag far behind market developments and thus be less effective than markets in stimulating competition.” As for collusion cases, economists have yet to find that they have led to significantly lower consumer prices over a protracted period. Government challenges to mergers have not systematically enhanced consumer welfare, and have sometimes reduced it; indeed, mergers that were opposed by the regulators but consummated anyway “have often resulted in gains for consumers.” Canvassing the empirical evidence on deterrence and the arguments of antitrust policy defenders, Winston concludes that “current policy provides negligible benefits to consumers that fall far short of enforcement costs.”¹²
Economic regulation.¹³ Beginning with the Interstate Commerce Act of 1887, the federal government regulated prices, entry, exit, and conditions of service in a number of industries, typically on the theory that their high fixed and low variable costs would produce ruinous competition and bankruptcies, leading in turn to monopoly power and the exploitation of consumers. After a spate of deregulation during the 1970s and early 1980s, however, federal price regulations are now largely confined to agricultural commodities and international trade of selected products, neither of which involves a significant risk of monopolization.
Agricultural support programs, which date from the 1930s, are among the most inefficient and distributively regressive of federal policies. They are also a classic instance of the status quo bias (discussed in chapter 5) preventing policy adaptability to new conditions, which in the farm policy case include a vast reduction in the number of family farms, the rise of agribusiness, and farmers’ significant off-farm employment, which helps make them wealthier than nonfarmers. These programs—direct payments, price supports, loans, subsidized insurance, environmental protection subsidies, acreage restrictions, and others—are exceedingly costly ($256 billion between 1995 and 2012), yet they generate large net welfare losses, disproportionately benefit the largest farms, and encourage excessive consumption of scarce water supplies.¹⁴ (A number of farm-belt members of Congress are direct recipients of these subsidies.¹⁵) Between 1995 and 2010, fully 90 percent of direct and price-support payments went to the top 20 percent of farms; the vast majority of farms received nothing.¹⁶