Sometimes, however, much is at stake. For example, in April 1986 the operators of the Chernobyl nuclear reactor in the Soviet Union, while running an unauthorized experiment, caused a catastrophic explosion. The next afternoon, Sweden reported higher-than-normal radioactive traces in its air monitors, which had been placed in many cities. In the United States, an intelligence manager asked a senior analyst what he made of the Swedish complaints. The analyst played them down, saying the Swedes were always concerned about their air and often made such complaints over the smallest amounts of radiation. On learning the truth, analysts spent the following day frantically trying to catch up with the facts about Chernobyl. The jaded approach kept the analysts from making even the simplest inquiries, such as asking what types of radiation Sweden had detected. The answer would have identified the source as a reactor and not a weapon. The prevailing winds over Sweden could also have been checked to help locate the source. (Some years later the intelligence manager met with some of his Swedish counterparts. They had initially concluded, based on analysis of the radiation and wind conditions, that a reactor at nearby Ignalina, across the Baltic Sea in Soviet territory, was leaking. Although they misidentified the source, which was a reactor much farther away, they were much closer to the truth than were U.S. intelligence officials.)
The costs of the jaded approach are threefold. First, this approach represents intellectual dishonesty, something all analysts should avoid. Second, it proceeds from the false assumption that each incident is much like others, which may be true at some superficial level but may be false at fundamental levels. Third, it closes the analyst’s thinking, regardless of his or her level of experience, to the possibility that an incident or issue is entirely new, requiring wholly new types of analysis.
Credibility is one of the most highly prized possessions of analysts. Although they recognize that no one can be correct all of the time, they are concerned that policy makers hold them accountable to an impossible standard. Their concern about credibility—which is largely faith and trust in the integrity of the intelligence process and in the ability of the analysts whose product is at hand—can lead them to play down or perhaps mask sudden shifts in analyses or conclusions. For example, suppose intelligence analysis has long estimated a production rate of fifteen missiles a year in a hostile state. One year, because of improved collection and new methodologies, the estimated production rate (which is still just an estimate) goes to forty-five missiles per year. Policy makers may view this increase—a tripling of the estimate—with alarm. Instead of presenting the new number with an explanation of how it was derived, an analyst might be tempted to soften the blow. Perhaps a brief memo is issued, suggesting changes in production. Then a second memo follows, saying that the rate is more likely twenty to twenty-five missiles per year, and so on, until the policy maker sees a more acceptable analytical progression to the new number and not a sudden spike upward. Playing out such a scenario takes time, and it is intellectually dishonest. Intelligence products that are written on a recurring basis—such as certain types of national intelligence estimates—may be more susceptible than other products to this type of behavior. They establish benchmarks that can be reviewed more easily than, say, a memo that is not likely to be remembered unless the issue is extremely important and the shift is dramatic.
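To see the arithmetic behind this hypothetical jump (fifteen and forty-five missiles per year are the illustrative figures used above), the relative change is

\[
\frac{45 - 15}{15} = 2.0,
\]

a 200 percent increase over the old estimate, which is to say the new figure is three times the old one. Either way of expressing it, the policy maker sees a number that has tripled overnight.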
At the same time, there are risks inherent in sudden and dramatic shifts in analysis. In November 2007, the DNI released the unclassified key judgments of a new national intelligence estimate (NIE) on Iran’s nuclear intentions and capabilities. The NIE estimated that Iran had ceased its weaponization program in 2003, reversing views held in a 2005 estimate. Officials explained that recently collected intelligence had led to the new position. But observers and commentators questioned why this had not been known earlier, failing to understand the nature of intelligence collection. Some wondered if the new conclusions were “compensation” (or penance) for the mistaken conclusions in the 2002 Iraq WMD estimate. And some wondered if the intelligence community was trying to prevent the Bush administration from using force against a recalcitrant Iran. Interestingly, few commentators took the NIE at face value, accepting the possibility that analytic views had simply changed.
Although policy makers have taken retribution on analysts for sudden changes in estimates, more often than not the fear in the minds of analysts is greater than the likelihood of a loss of credibility. Much depends on the prior nature of the relationship between the analyst and the policy maker, the latter’s appreciation for the nature of the intelligence problem, and the intelligence community’s past record. If several revisions have been made in the recent past, there is reason to suspect a problem. If revision is an isolated phenomenon, it is less problematic. The nature of the issue, and its importance to the policy maker and the nation, also matters.
For example, the level of Soviet defense spending—then usually expressed as a percentage of gross national product (GNP)—was a key intelligence issue during the cold war. At the end of the Ford administration (1974-1977), intelligence estimates of the share of Soviet GNP going to defense rose from a range of 6-7 percent to 13-14 percent, largely because of new data, new modeling techniques, and other factors unrelated to Soviet output. The revision was discomfiting to the incoming Carter administration. In his inaugural address, Jimmy Carter signaled that he did not want to be constantly preoccupied with the Soviet issue and that he had other foreign policy issues to pursue. A more heavily armed Soviet Union was not good news. Carter prided himself on his analytical capabilities. When faced with the revised estimates, he reportedly chided the intelligence community, noting that it had just admitted to a 100 percent error in its past estimates. That being the case, why should he believe the latest analyses?
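Carter’s arithmetic is straightforward if the midpoints of the two ranges above, roughly 6.5 and 13.5 percent of GNP, are taken as illustrative figures:

\[
\frac{13.5 - 6.5}{6.5} \approx 1.1,
\]

an error of roughly 100 percent relative to the earlier estimate, which is the figure he seized on.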
Few intelligence products are written by just one analyst and then sent along to the policy client. Most have peer reviews and managerial reviews and probably the input of analysts from other offices or agencies. This is especially true for the intelligence products (analytical reports) that agencies call estimates in the United States or assessments in Australia and Britain. Participation of other analysts and agencies adds another dimension to the analytical process—bureaucratics—which brings various types of behaviors and strategies.
More likely than not, several agencies have strongly held and diametrically opposed views on key issues within an estimate. How should these be dealt with? The U.S. system in both intelligence and policy making is consensual. No votes are taken; no lone wolves are cast out or beaten to the ground. Everyone must find some way to agree. But if intellectual arguments fail, consensus can be reached in many other ways, few of which have anything to do with analysis.
• Back scratching and logrolling. Although usually thought of in legislative terms, these two behaviors can come into play in intelligence analysis. Basically, they involve a trade-off: “You accept my view on p. 15 and I’ll accept yours on p. 38.” Substance is not a major concern.
• False hostages. Agency A is opposed to a position being taken by Agency B but is afraid its own views will not prevail. Agency A can stake out a false position on another issue that it defends strongly, not for the sake of the issue itself, but so that it has something to trade in the back scratching and logrolling.
• Lowest-common-denominator language. One agency believes that the chance of something happening is high; another thinks it is low. Unless these views are strongly held, the agencies may compromise—a moderate chance—as a means of resolving the issue. This example is a bit extreme, but it captures the essence of the behavior—an attempt to paper over differences with words that everyone can accept.
• Footnote wars. Sometimes none of the other techniques works. In the U.S. estimative process, an agency can always add a footnote in which it expresses alternative views. Or more than one agency might add a footnote, or agencies may take sides on an issue. This can lead to vigorous debates as to whose view appears in the main text and whose in the footnote.
 
In U.S. practice, an estimate may refer to “a majority of agencies” or a “minority.” This is an odd formulation. First, it is vague. How many agencies hold one view or the other? Is it a substantial majority (say, eleven of the sixteen agencies) or a bare one? Second, the formulation strongly implies that the view held by the majority of agencies is more likely the correct one, although no formal or informal votes are taken in the NIE process. The British practice is different. In Britain, if all agencies participating in an assessment cannot agree, then the views of each are simply laid out. This may be more frustrating for the policy maker reading the assessment, but it avoids false impressions about consensus or correct views based on the vague intellectual notion of a majority.
One critique of the intelligence community’s analysis of Iraqi WMD was the absence of different views and the problem of groupthink. The Senate Intelligence Committee held that the analysts did not examine their assumptions rigorously enough and thus lapsed too easily into agreement. The case highlights a conundrum for managers and analysts, particularly those involved in estimates. As a rule, policy makers prefer consensus views, which save them from having to go through numerous shades of opinion on their own. After all, isn’t that what the intelligence community is supposed to be doing? Thus, there has always been some impetus to arrive at a consensus, if possible. In the aftermath of Iraq, however, most consensus views—even if arrived at out of genuine agreement—could be viewed with suspicion. How does one determine, when reading intelligence analysis, the basis on which a consensus has been achieved? How does one determine if it is a true meeting of minds or some bureaucratic lowest common denominator?
 
ANALYTICAL STOVEPIPES. Collection stovepipes emerge because the separate collection disciplines are managed independently and often are rivals to one another. Analytical stovepipes also appear in the U.S. all-source community. The three all-source analytical groups—the CIA Directorate of Intelligence, the Defense Intelligence Agency Directorate of Intelligence, and the State Department Bureau of Intelligence and Research (INR)—exist to serve specific policy makers. They also come together on a variety of community analyses, most often the NIEs. Efforts to manage or, even more minimally, to oversee and coordinate their activities reveal a stovepipe mentality not unlike that exhibited by the collection agencies. The three all-source agencies tend to have a wary view of efforts by officials with community-wide responsibilities to deal with them as linked parts of a greater analytical whole. The analytical agencies manifest this behavior less overtly than do the collectors, so it is more difficult to recognize, and it may therefore be more surprising than when the collectors exhibit it. After all, each of the collectors operates in a unique field, with a series of methodologies that are also unique. The analytical agencies, however, are all in the same line of work, often concerned with the same issues. But bureaucratic imperatives, and a clear preference for their responsibilities in direct support of their particular policy clients as opposed to interagency projects, contribute to analytical stovepipes.
All of these behaviors can leave the impression that the estimative process—or any large-group analytical effort—is intellectually false. That is not so. However, it is also not a purely academic exercise. Other behaviors intrude, and more than just analytical truths are at stake. The estimative process yields winners and losers, and careers may rise and fall as a result.
ANALYTICAL ISSUES
 
In addition to the mind-set and behavioral characteristics of analysts, several issues within analysis need to be addressed.
 
COMPETITIVE VERSUS COLLABORATIVE ANALYSIS. As important as the concept of competitive analysis is to U.S. intelligence, a need has also been seen to bring together analysts from different agencies or disciplines to work on major ongoing issues, in addition to the collaborative process of the NIEs. DCI Robert M. Gates (1991-1993) thus created centers, most of which focused on transnational issues—terrorism, nonproliferation, narcotics, and so on.
The intelligence community also formed task forces to deal with certain issues; among these was the Balkans task force, which has operated since the 1990s, monitoring the range of issues related to the breakup of Yugoslavia.
The 9/11 Commission (National Commission on Terrorist Attacks upon the United States) recommended organizing all analysis around either regional or functional centers. The 2004 intelligence law mandated the establishment of the National Counterterrorism Center (NCTC), which was basically an expansion of the Terrorism Threat Integration Center that DCI George J. Tenet (1997-2004) had created. The law also required that the DNI examine the utility of creating a National Counterproliferation Center, which was done, and it gave the DNI the authority to create other centers as necessary. The problem with using centers for all analysis is that the approach becomes somewhat inflexible. Inevitably, some issues or some nations do not fit easily into the center construct. What happens to them? Also, although creating a center is easy, centers—like all other offices—do not like to share or lose resources. Centers therefore run counter to the desire for analytic workforce agility. To date, centers have been organized along functional lines and are staffed by analysts who tend to be more expert in the issue than in the national or regional context within which that issue has arisen. A functional center therefore runs the risk of providing technical analysis that is divorced from its political context. For example, analyzing the state of WMD development in a nation is not enough. One should also analyze the internal or regional political factors driving the program, as these give important indications of its purpose and scope. Being housed in a center does not preclude a functional analyst from seeking out his or her regional counterparts. Analysts do this on a regular basis. But it requires some effort and can be dropped during the press of the day’s work. The center concept can thus make this collaboration more difficult.