Finally, those who are not familiar with the idea of competitive analysis, and even some who are, may regard the planned redundancy as more wasteful than intellectually productive.
POLITICIZED INTELLIGENCE. The issue of politicized intelligence arises from the line separating policy and intelligence. This line is best thought of as a semipermeable membrane; policy makers are free to offer assessments that run counter to intelligence analyses, but intelligence officers are not allowed to make policy recommendations based on their intelligence. For example, in the State Department in the late 1980s, the assistant secretary responsible for the Western Hemisphere, Elliott Abrams, often disagreed with pessimistic INR assessments of the likelihood that the contras would be victorious in Nicaragua. Abrams would then write more positive assessments of his own, which he forwarded to Secretary of State George P. Shultz.
Policy makers and intelligence officers have different institutional and personal investments in the issues on which they work. The policy makers are creating policy and hope to accrue other benefits (career advancement, reelection) from a successful policy. Intelligence officers are not responsible for creating policy or for its success, yet they understand that the outcomes may affect their own status, both institutional and personal.
The issue of politicization arises primarily from concerns that intelligence officers may intentionally alter intelligence, which is supposed to be objective, to support the options or outcomes preferred by policy makers. These actions may stem from a number of motives: a loss of objectivity regarding the issue at hand, a preference for specific options or outcomes, an effort to be more supportive, career interests, or outright pandering.
Intentionally altering intelligence is a subtle issue because it does not involve crossing the line from analysis to policy. Instead, the analyst is tampering with his or her own product so that it is received more favorably. The issue is also made more complex by the fact that, at the most senior levels of the intelligence community, the line separating intelligence from policy begins to blur. Policy makers ask senior intelligence officials for their personal views on an issue or policy, which they may give. It is difficult to conceive of a DNI or a DCI always abstaining when the president or the secretary of state asks such a question.
The size or persistence of the politicization problem is difficult to determine. Some who raise accusations about politicized intelligence are losers in the bureaucratic battles—intelligence officers whose views have not prevailed or policy makers (in the executive branch or Congress, either loyal to the current administration or in opposition) who are dissatisfied with current policy directions. Thus, their accusations may be no more objective than the intelligence that concerns them. Those unfamiliar with the process are often surprised to hear intelligence practitioners talk about winners and losers. But these debates—within the policy or the intelligence community—are not abstract academic discussions. Their outcomes have real results that can be significant and even dangerous. Analysts’ careers can also rise and fall as a result of which side of a debate they are on. Just as intelligence officers serve policy makers, career officers—both intelligence and policy—serve political appointees, who are less interested in the objectivity of analysis.
For example, in the late 1940s and early 1950s, many State Department experts on China (the “China hands”) had their careers sidetracked or were forced from office over allegations that they had lost China to the communists. Numerous scholars and officials interpreted their treatment as a gross injustice. But, as Harvard University professor Ernest R. May pointed out, the U.S. public in the elections of the early 1950s largely repudiated the anti-Chiang Kai-shek views of the China hands by returning the pro-Chiang Republicans to power. So the China hands not only had ideological foes within the government, but they also had no political basis on which to pursue their preferred policies. Similarly, the careers of many intelligence officers and Foreign Service officers involved in crafting and promoting the Strategic Arms Limitation Talks (SALT II) treaty during the Carter administration failed to prosper when Ronald Reagan, who opposed the treaty, took office. Again, their careers suffered only because of an electoral outcome. One can argue that these punishments were not what the electorate had in mind, but they underscore the fact that the government and the underlying policy processes are essentially political in nature.
Politicization by intelligence officers may also be a question of perception. A consensus could probably be reached on what politicized intelligence looked like, but much less agreement would emerge on whether a specific analysis fit the definition.
Thus, politicized intelligence remains a concern, albeit a somewhat vague one, which may make it both more difficult to address and more important. Many issues surrounding politicized intelligence came up in the hearings on Robert Gates’s second nomination as DCI, such as when several analysts charged that Gates had altered analyses on the Soviet Union to meet policy makers’ preferences. (Gates asked President Reagan to withdraw his first nomination during the Iran-contra affair. He was subsequently renominated by President George Bush and confirmed in 1991.)
Politicization was also a concern in the Iraq WMD issue. In 2003 the press reported that Vice President Dick Cheney had been out to the CIA several times to receive briefings on Iraq. Critics saw the visits as an attempt to influence the analysts, even though intelligence officials and analysts maintained that they were not swayed. Is there a proper number of times a senior official should be briefed on a highly sensitive topic, beyond which the briefings amount to politicization? The answer likely is no; what matters is the substance of the exchange. Moreover, such exchanges go to a primary reason for having intelligence agencies: to help officials make decisions. In Britain, charges of politicization on Iraq centered on accusations that Prime Minister Tony Blair or his office asked Defence Ministry officials to “sex up” their intelligence on Iraq WMD, which the government denied. Three external reviews of intelligence on Iraq, by the Senate Intelligence Committee and the WMD Commission in the United States and by Lord Butler in Britain, all concluded that the intelligence had not been politicized. A fourth report, done for the Australian government, came to the same conclusion.
A second type of politicized intelligence is caused by policy makers who may react strongly to intelligence, depending on whether it confirms or refutes their preferences for policy outcomes. For example, according to press accounts in November 1998, Vice President Al Gore’s staff rejected CIA reports about the personal corruption of Russian premier Viktor Chernomyrdin. Staff members argued that the administration had to deal with Chernomyrdin, corrupt or not, and that the intelligence was inconclusive. Analysts countered that the administration had set the standard of proof so high that intelligence was unlikely to meet it, and they found themselves censoring their reports to avoid further disputes with the White House. Both policy and intelligence officers denied the allegations.
Policy makers may also use intelligence issues for partisan purposes. Two examples in the United States were the missile gap (1959-1961) and the window of vulnerability (1979-1981). In both cases, the party that was out of power (the Democrats in the first case, the Republicans in the second) argued that the Soviet Union had gained a strategic nuclear advantage over the United States, which was being ignored or not reported. In both cases, the accusing party won the election (not because of its charges) and subsequently learned that the intelligence did not support the accusations—which it then simply claimed had been resolved.
Finally, as noted above, the increased use of unclassified NIEs or their KJs also raises the risk of politicized intelligence.
ANALYTICAL STANDARDS. As this chapter has argued, there is a set of standards in intelligence analysis. Most of them are fairly well known and accepted, although, until recently, little effort was made to codify them. This changed in the aftermath of the 2001 terrorist attacks and the Iraq WMD issue. The Intelligence Reform and Terrorism Prevention Act (IRTPA, 2004) includes a number of standards for intelligence analysis. The DNI’s office has also issued standards for evaluating intelligence.
It is important to understand analytic standards for their own sake, but they cannot be wholly separated from the circumstances in which they are written. The twin events of 9/11 and Iraq WMD left most observers with the overwhelming impression that the analytical capacity of the intelligence community was flawed and performed badly. However, as has been noted earlier, the perceived “lessons” of the two events tend to run in opposite directions.
• Warning: The “lesson” of 9/11 was that the intelligence community failed to be strident enough in its warnings, leaving policy makers with an imprecise sense of the impending nature of the threat. Intelligence officers serving at the time deny this and also note that the tactical intelligence that would have been useful did not exist. In the case of Iraq WMD, the intelligence community is said to have overstated the threat based on very little new intelligence.
• Analytical process: Before 9/11, analysts failed to make the necessary linkages between disparate pieces of intelligence (hence the “connect the dots” metaphor), but for Iraq WMD they made too many linkages, resulting in a false image of the WMD programs. The analysis before 9/11 has also been attacked as a “failure of imagination,” but in the case of Iraq the analysis was perhaps too imaginative.
• Information sharing: The failure to discover the 9/11 plot is ascribed, in part, to the failure of the CIA and the Federal Bureau of Investigation (FBI) to share information. But in the case of Iraq WMD, the intelligence community was taken to task for sharing information (the unreliable human source called CURVEBALL) that was not true, although those sharing it did not know that.
Therefore, when crafting the legislation creating the DNI, Congress went into unusual detail about what it expected of future analysis. The DNI must appoint an individual or office responsible for ensuring that finished intelligence produced by any intelligence community element is “timely, objective, independent of political considerations, based upon all sources of available intelligence, and employ the standards of proper analytic tradecraft” (Section 1019). This individual or office can have no direct responsibility for the specific production of any finished intelligence and must prepare regular detailed reviews of analytic products, lessons learned, and recommendations for improvement. The criteria for these evaluations and reviews are detailed. Finally, the act calls for the creation of what has become an analytic ombudsman. In response to this requirement, the position of assistant deputy DNI for analytic integrity and standards was created under the deputy DNI for analysis.
This office, which is also that of the ombudsman, created a set of evaluation tradecraft standards for analysis, few of which are controversial. They deal mostly with the underlying aspects of intelligence: sources, assumptions, judgments, alternative analyses, logical argumentation, and so on. Whether the final standard, accuracy, has been met may not be known for some time.
Most observers would likely agree that these are among the necessary standards for good analysis. The real concern is how these standards are put into practice. It is noteworthy that the standards reflect more of the perceived lessons of Iraq WMD than of September 11. The DNI’s office has stated that these standards will serve as communitywide guidelines, making them part of the training for all new analysts and for analytical managers. Given the paucity of communitywide courses, however, this training can reach only a small number of analysts in any given year, far fewer than the large numbers currently being recruited. Therefore, overseeing standards implementation requires insights into the analytic training being conducted at each agency.
The use of these standards as an evaluation tool is more problematic. The congressional mandate for a broad review of finished intelligence products is impractical given the volume of intelligence produced daily. The most that can then be done is to sample, either by topic or by office, or both, and hope that some larger lessons can be drawn. This may prove difficult given the problems inherent in any sampling methodology.
The underlying question is what either Congress or the DNI’s office expects these standards to do for future analysis. It is possible, for example, to score highly on each of the standards and still find, after the fact, that the judgments and assessments proved to be inaccurate. Value is given to consistency, which can run counter to the desire for analytic insight and the avoidance of groupthink. If the highest standard for analysis is accuracy, then we face the problem that neither these standards nor any others will guarantee that outcome. Clearly, these standards are more likely to result in analytic products that are sound in terms of methodology, but soundness is not the same as accuracy. Also, these standards run the risk of creating a very mechanistic approach to what is, at its core, an intellectual process. The truly gifted and occasionally insightful analyst, for example, could get poor grades on most of these criteria and still produce an accurate and useful analysis.
ANALYTIC TRANSFORMATION AND THE ANALYTIC WORKFORCE. The deputy DNI for analysis has embarked on a broad program called analytic transformation, which seeks “to change how we [intelligence analysts] approach analysis.” The initiatives fall into three broad areas: enhancing the quality of analysis; providing more effective community-level management; and offering more integrated analytic operations. The main driver appears to be a sense that the community’s analysts and their data and products are not being used to the fullest extent.