Why Nations Fail: The Origins of Power, Prosperity, and Poverty
Daron Acemoğlu and James Robinson
For example, many economies around the world that ostensibly implemented such reforms, most notably in Latin America, stagnated throughout the 1980s and ’90s. In reality, the reforms were foisted upon these countries in contexts where politics went on as usual. Hence, even when reforms were adopted, their intent was subverted, or politicians found other ways to blunt their impact. All this is illustrated by the “implementation” of one of the key recommendations of international institutions aimed at achieving macroeconomic stability: central bank independence. The recommendation was quite sensible in principle. Many politicians around the world were spending more than they were raising in tax revenue and were then forcing their central banks to make up the difference by printing money. The resulting inflation was creating instability and uncertainty. The theory was that independent central banks, like the Bundesbank in Germany, would resist political pressure and put a lid on inflation. In practice, however, the recommendation either was implemented in name but not in fact or was undermined by the use of other policy instruments. Zimbabwe’s president Robert Mugabe decided to heed international advice; he declared the Zimbabwean central bank independent in 1995. Before this, the inflation
rate in Zimbabwe was hovering around 20 percent. By 2002 it had reached 140 percent; by 2003, almost 600 percent; by 2007, 66,000 percent; and by 2008, 230 million percent! Of course, in a country where the president wins the lottery (this page–this page), it should surprise nobody that passing a law making the central bank independent means nothing. The governor of the Zimbabwean central bank probably knew how his counterpart in Sierra Leone had “fallen” from the top floor of the central bank building when he disagreed with Siaka Stevens (this page). Independent or not, complying with the president’s demands was the prudent choice for his personal health, if not for the health of the economy. Not all countries are like Zimbabwe. In Argentina and Colombia, central banks were also made independent in the 1990s, and they actually did their job of reducing inflation. But since politics in neither country changed, political elites could still use other means to buy votes, protect their interests, and reward themselves and their followers. Since they could no longer do so by printing money, they found a different instrument: in both countries the introduction of central bank independence coincided with a large expansion in government expenditures, financed largely by borrowing.
The second approach to engineering prosperity is much more in vogue nowadays. It recognizes that there are no easy fixes for lifting a nation from poverty to prosperity overnight or even in the course of a few decades. Instead, it claims, there are many “micro-market failures” that can be redressed with good advice, and prosperity will result if policymakers take advantage of these opportunities—which, again, can be achieved with the help and vision of economists and others. Small market failures are everywhere in poor countries, this approach claims—for example, in their education systems, health care delivery, and the way their markets are organized. This is undoubtedly true. But the problem is that these small market failures may be only the tip of the iceberg, the symptom of deeper-rooted problems in a society functioning under extractive institutions. Just as it is not a coincidence that poor countries have bad macroeconomic policies, it is not a coincidence that their educational systems do not work well. These market failures may not be due solely to ignorance.
The policymakers and bureaucrats who are supposed to act on well-intentioned advice may be as much a part of the problem, and the many attempts to rectify these inefficiencies may backfire precisely because those in charge are not grappling with the institutional causes of the poverty in the first place.
These problems are illustrated by an intervention engineered by the nongovernmental organization (NGO) Seva Mandir to improve health care delivery in the state of Rajasthan, India. The story of health care delivery in India is one of deep-rooted inefficiency and failure. Government-provided health care is, at least in theory, widely available and cheap, and the personnel are generally qualified. Yet even the poorest Indians shun government health facilities, opting instead for much more expensive, unregulated, and sometimes deficient private providers. This is not because of some type of irrationality: people simply cannot get care from government facilities, which are plagued by absenteeism. If an Indian visited his local government-run facility, not only would there be no nurses on duty; he would probably not even be able to get into the building, because health facilities are closed most of the time.
In 2006 Seva Mandir, together with a group of economists, designed an incentive scheme to encourage nurses to turn up for work in the Udaipur district of Rajasthan. The idea was simple: Seva Mandir introduced time clocks that would stamp the date and time when nurses were in the facility. Nurses were supposed to stamp their time cards three times a day, to ensure that they arrived on time, stayed around, and left on time. If such a scheme worked, and increased the quality and quantity of health care provision, it would be a strong illustration of the theory that there were easy solutions to key problems in development.
In the event, the intervention revealed something very different. Shortly after the program was implemented, there was a sharp increase in nurse attendance, but it was very short lived. Within little more than a year, the local health administration of the district had deliberately undermined the incentive scheme introduced by Seva Mandir. Absenteeism returned to its usual level, but there was now a sharp increase in “exempt days,” on which nurses were not actually present yet their absence was officially sanctioned by the local health administration. There was also a sharp increase in “machine problems,” as the time clocks were broken; Seva Mandir was unable to replace them because local health ministers would not cooperate.
Forcing nurses to stamp a time clock three times a day doesn’t seem like such an innovative idea. Indeed, it is a practice used throughout the industry, even Indian industry, and it must have occurred to health administrators as a potential solution to their problems. It seems unlikely, then, that ignorance of such a simple incentive scheme was what stopped its being used in the first place. What occurred during the program simply confirmed this. Health administrators sabotaged the program because they were in cahoots with the nurses and complicit in the endemic absenteeism problems. They did not want an incentive scheme forcing nurses to turn up or reducing their pay if they did not.
What this episode illustrates is a micro version of the difficulty of implementing meaningful changes when institutions are the cause of the problems in the first place. In this case, it was not corrupt politicians or powerful businesses undermining institutional reform, but rather, the local health administration and nurses who were able to sabotage Seva Mandir’s and the development economists’ incentive scheme. This suggests that many of the micro-market failures that are apparently easy to fix may be illusory: the institutional structure that creates market failures will also prevent implementation of interventions to improve incentives at the micro level. Attempting to engineer prosperity without confronting the root cause of the problems—extractive institutions and the politics that keeps them in place—is unlikely to bear fruit.
Following the September 11, 2001, attacks by Al Qaeda, U.S.-led forces swiftly toppled the repressive Taliban regime in Afghanistan, which was harboring and refusing to hand over key members of Al Qaeda. The Bonn Agreement of December 2001 between leaders of the former Afghan mujahideen who had cooperated with the U.S.
forces and key members of the Afghan diaspora, including Hamid Karzai, created a plan for the establishment of a democratic regime. A first step was the nationwide grand assembly, the Loya Jirga, which elected Karzai to lead the interim government. Things were looking up for Afghanistan. A majority of the Afghan people were longing to leave the Taliban behind. The international community thought that all that Afghanistan needed now was a large infusion of foreign aid. Representatives from the United Nations and several leading NGOs soon descended on the capital, Kabul.
What ensued should not have been a surprise, especially given the failure of foreign aid to poor countries and failed states over the past five decades. Surprise or not, the usual ritual was repeated. Scores of aid workers and their entourages arrived in town with their own private jets, NGOs of all sorts poured in to pursue their own agendas, and high-level talks began between governments and delegations from the international community. Billions of dollars were now coming to Afghanistan. But little of it was used for building infrastructure, schools, or other public services essential for the development of inclusive institutions or even for restoring law and order. While much of the infrastructure remained in tatters, the first tranche of the money was used to commission an airline to shuttle around UN and other international officials. The next thing they needed were drivers and interpreters. So they hired the few English-speaking bureaucrats and the remaining teachers in Afghan schools to chauffeur and chaperone them around, paying them multiples of current Afghan salaries. As the few skilled bureaucrats were shunted into jobs servicing the foreign aid community, the aid flows, rather than building infrastructure in Afghanistan, started by undermining the Afghan state they were supposed to build upon and strengthen.
Villagers in a remote district in the central valley of Afghanistan heard a radio announcement about a new multimillion-dollar program to restore shelter to their area. After a long while, a few wooden beams, carried by the trucking cartel of Ismail Khan, famous former warlord and member of the Afghan government, were delivered. But they were too big to be used for anything in the district, and the villagers put them to the only possible use: firewood. So what had
happened to the millions of dollars promised to the villagers? Of the promised money, 20 percent was taken as UN head office costs in Geneva. The remainder was subcontracted to an NGO, which took another 20 percent for its own head office costs in Brussels, and so on, for another three layers, with each party taking approximately another 20 percent of what remained. The little money that reached Afghanistan was used to buy wood from western Iran, and much of that was paid to Ismail Khan’s trucking cartel to cover the inflated transport prices. It was a bit of a miracle that those oversize wooden beams even arrived in the village.
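The anecdote's arithmetic compounds quickly. A back-of-the-envelope sketch, assuming (as the story suggests, though the exact figures are not given) five successive layers each taking a flat 20 percent cut, looks like this:

```python
# Illustrative only: layer count and cut size are assumptions drawn from
# the anecdote (UN office, NGO, and "another three layers" at ~20% each).
def remaining_fraction(layers: int, cut: float) -> float:
    """Fraction of the original funds left after each of `layers`
    intermediaries takes a `cut` of whatever it receives."""
    return (1 - cut) ** layers

print(round(remaining_fraction(5, 0.20), 3))  # prints 0.328
```

Under these assumptions, roughly a third of the original funds survives the overhead layers before a single dollar is spent on wood or transport, a figure of the same order as the estimates that only 10 to 20 percent of aid ever reaches its target.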
What happened in the central valley of Afghanistan is not an isolated incident. Many studies estimate that only about 10 or at most 20 percent of aid ever reaches its target. There are dozens of ongoing fraud investigations into charges of UN and local officials siphoning off aid money. But most of the waste resulting from foreign aid is not fraud, just incompetence or even worse: simply business as usual for aid organizations.
The Afghan experience with aid was in fact probably a qualified success compared to others. Throughout the last five decades, hundreds of billions of dollars have been paid to governments around the world as “development” aid. Much of it has been wasted in overhead and corruption, just as in Afghanistan. Worse, a lot of it went to dictators such as Mobutu, who depended on foreign aid from his Western patrons both to buy support from his clients to shore up his regime and to enrich himself. The picture in much of the rest of sub-Saharan Africa was similar. Humanitarian aid given for temporary relief in times of crisis, for example, most recently in Haiti and Pakistan, has certainly been more useful, even though its delivery, too, has been marred by similar problems.
Despite this unflattering track record of “development” aid, foreign aid is one of the most popular policies that Western governments, international organizations such as the United Nations, and NGOs of different ilk recommend as a way of combating poverty around the world. And of course, the cycle of the failure of foreign aid repeats itself over and over again. The idea that rich Western countries should provide large amounts of “developmental aid” in order to solve the
problem of poverty in sub-Saharan Africa, the Caribbean, Central America, and South Asia is based on an incorrect understanding of what causes poverty. Countries such as Afghanistan are poor because of their extractive institutions—which result in lack of property rights, law and order, or well-functioning legal systems and the stifling dominance of national and, more often, local elites over political and economic life. The same institutional problems mean that foreign aid will be ineffective, as it will be plundered and is unlikely to be delivered where it is supposed to go. In the worst-case scenario, it will prop up the regimes that are at the very root of the problems of these societies. If sustained economic growth depends on inclusive institutions, giving aid to regimes presiding over extractive institutions cannot be the solution. This is not to deny that, even beyond humanitarian aid, considerable good comes out of specific aid programs that build schools in areas where none existed before and that pay teachers who would otherwise go unpaid. While much of the aid community that poured into Kabul did little to improve life for ordinary Afghans, there have also been notable successes in building schools, particularly for girls, who were entirely excluded from education under the Taliban and even before.
One solution—which has recently become more popular, partly based on the recognition that institutions have something to do with prosperity and even the delivery of aid—is to make aid “conditional.” According to this view, continued foreign aid should depend on recipient governments meeting certain conditions—for example, liberalizing markets or moving toward democracy. The George W. Bush administration undertook the biggest step toward this type of conditional aid by starting the Millennium Challenge Accounts, which made future aid payments dependent on quantitative improvements in several dimensions of economic and political development. But the effectiveness of conditional aid appears no better than the unconditional kind. Countries failing to meet these conditions typically receive as much aid as those that do. There is a simple reason: they have a greater need for aid of either the developmental or humanitarian kind. And quite predictably, conditional aid seems to have little effect on a nation’s institutions. After all, it would have been quite surprising for
somebody such as Siaka Stevens in Sierra Leone or Mobutu in the Congo suddenly to start dismantling the extractive institutions on which he depended just for a little more foreign aid. Even in sub-Saharan Africa, where foreign aid is a significant fraction of many governments’ total budget, and even after the Millennium Challenge Accounts, which increased the extent of conditionality, the amount of additional foreign aid that a dictator can obtain by undermining his own power is both small and not worth the risk either to his continued dominance over the country or to his life.