Analytic transformation has several initiatives, including new approaches to training, new standards for producing analysis (such as product evaluation and source citation), and especially initiatives intended to get a better sense of community activity and to foster greater collaboration. Several of these latter initiatives have received a fair amount of attention in the media, including the Library of National Intelligence, which will be a central virtual repository of all disseminated intelligence, regardless of classification; A-Space, a common collaborative workspace for all analysts, similar in concept to shared networking Web sites available to the public; and Intellipedia, another collaborative Web space in which analysts can update and annotate others’ work at various levels of classification. Advocates see these as improving collaboration and also note that they will instantly be familiar to the young workforce (those with three years’ experience or less), which now represents about half of the analytic cadre. These various initiatives have also been controversial, with some veteran analysts asking how these various steps will actually improve the content of analysis and what the benchmarks will be.
The workforce demographics are driven largely by the contraction that the intelligence community endured during the 1990s, suffering deep budget cuts after the cold war. The so-called cold war peace dividend fell more heavily on intelligence than it did on defense. As DCI Tenet expressed it, the net result was the loss of 23,000 employees and positions across U.S. intelligence, meaning both people who left and—more significantly—people who were never hired. In the aftermath of the 2001 terrorist attacks, all agencies began major hiring efforts. The result of these efforts has been a workforce of decreasing experience over time as new hires outnumber veterans, who continue to retire.
These demographic trends have several important implications for analysis:
• Experience: The most obvious issue is the relative inexperience of the workforce as analysts and subject matter experts. As discussed earlier, human intelligence (HUMINT) collectors need five to seven years to be considered seasoned. There is no agreed benchmark for analysts, but the five-year mark is probably a reliable one, give or take a year. This is sometimes referred to as the “green/gray” problem—that is, the analytic workforce is getting younger, not older. This is both a problem in and of itself and, in a few years, a problem in terms of management. The cadre that should be rising to senior analytic management ranks will be too thin to fill all of the necessary positions. This will require promoting more junior analysts sooner. Again, their lack of experience might become problematic.
• Work methods: The new cadre of analysts is more comfortable working in networks and working more collaboratively, both of which are positive attributes. These analysts also are much more comfortable with information technology and with working in a “softcopy” world. It is too soon to know, however, whether they will be comfortable asserting themselves and their views when necessary or whether they will default to lowest-common-denominator analyses as part of their collaborative instinct.
It is also not clear how the new cadre of analysts will assess incoming intelligence. One of the charms of the World Wide Web is that it is a democratic institution: Anyone is free to post any of their views on any subject. This is also, from an intelligence viewpoint, a problem, as intelligence must address the issue of the validity of sources: Who are they? What is their basis for saying this? Are they knowledgeable and credible? Do they have motives for saying this? If one thinks of the Web as a giant bulletin board where anything can be posted and shared, the need to rise above that when working on intelligence becomes more evident. The Web may be an interesting metaphor for collaboration, but it can be dangerous when assessing views and information.
• Retention: A key issue for intelligence agencies is retaining as many of these new analysts, or at least the good ones, as possible. Poor retention rates will only replicate the current demographic problems that led to this issue. Retention goes to the issues of career management, career progression, and education and training. These have not been areas to which managers have given much attention until recently, but they will underpin much of the other efforts at transformation.
INTELLIGENCE ANALYSIS: AN ASSESSMENT
Sherman Kent, an intellectual founder of the U.S. intelligence community and especially of its estimative process, once wrote that every intelligence analyst has three wishes: to know everything, to be believed, and to influence policy for the good (as the analyst understands it). Kent’s three wishes offer a yardstick by which to measure analysis. Clearly, an analyst can never know everything in a given field. If everything were known, the need for intelligence would not exist—nothing would be left to discover. But what Kent is getting at is the desire of the analyst to know as much as possible about a given issue before being asked to write about it. The amount of intelligence available varies from issue to issue and from time to time. Analysts must therefore be trained to develop some inner, deeper knowledge that enables them to read between the lines, to make educated guesses or intuitive choices when the intelligence is insufficient.
Kent’s second wish—to be believed—goes to the heart of the relationship between intelligence and policy. Policy makers pay no price for ignoring intelligence, barring highly infrequent strategic disasters such as Josef Stalin’s refusal to accept the signs of an imminent German attack in 1941. Intelligence officers see themselves as honest and objective messengers who add value to the process, who provide not just sources but also analysis. Their reward, at the end of the process, is to be listened to, which varies greatly from one policy maker to another.
Finally, and derived from his second wish, Kent notes that intelligence officers want to have a positive effect on policy, to help avert disaster and to help produce positive outcomes in the nation’s interests. But analysts want to be more than Cassandras, constantly warning of doom and disaster. Their wish to have a positive influence also indicates the desire to be kept informed about what policy makers are doing, to enable the intelligence officers to play a meaningful role.
What, then, constitutes good intelligence? This is no small question, and one is reminded of Justice Potter Stewart’s response when he was asked to define pornography: “I can’t define it, but I know it when I see it.” Good intelligence has something of the same indistinct quality. At least four qualities come to mind. Good intelligence is
• Timely. Getting the intelligence to the policy maker on time is more important than waiting for every last shred of collection to come in or for the paper to be pristine, clean, and in the right format. The timeliness criterion runs counter to the first of Kent’s three wishes: to know everything. And time can change the perspective on an occurrence. Napoleon died on St. Helena in May 1821; word of his death did not reach Paris until July. Charles Maurice de Talleyrand, once Napoleon’s foreign minister and later one of his foes, was dining at a friend’s house when they heard of Napoleon’s passing. The hostess exclaimed, “What an event!” Talleyrand corrected her: “It is no longer an event, Madam, it is news.”
• Tailored. Good intelligence focuses on the specific information needs of the policy maker, to whatever depth and breadth are required, but without extraneous material. This must be done in a way that does not result in losing objectivity or politicizing the intelligence. Tailored intelligence products (those responding to a specific need or request) are among the most highly prized by policy makers.
• Digestible. Good intelligence has to be in a form and of a length that allow policy makers to grasp what they need to know as easily as possible. The requirement tends to argue in favor of shorter intelligence products, but it is primarily meant to stress that the message be presented clearly so that it can be readily understood. This does not mean that the message cannot be complex, or even incomplete. But whatever the main message is, the policy maker must be able to understand it with a minimum of effort. Being succinct and clear is an important skill for analysts to learn. Writing a good two-page memo is much more difficult than writing a five-page memo on the same subject. As Mark Twain observed in a letter to a friend, “I am writing you a long letter because I don’t have time to write a short one.”
• Clear regarding the known and the unknown. Good intelligence must convey to the reader what is known, what is unknown, and what has been filled in by analysis, as well as the degree of confidence in the material. The degree of confidence is important because the policy maker must have some sense of the relative firmness of the intelligence. All intelligence involves risk by the very nature of the information being dealt with. The risk should not be assumed by the analysts alone but should be shared with their clients.
Objectivity is not listed among the factors defining good intelligence. Its omission is not an oversight. The need for objectivity is so great and so pervasive that it should be taken as a given. If the intelligence is not objective, then none of the other attributes—timeliness, digestibility, clarity—matters.
Accuracy also is not a criterion. Accuracy is a more difficult standard for assessing intelligence than might be imagined. Clearly, no one wants to be wrong, but everyone recognizes the impossibility of infallibility. Given these limits, what accuracy standard should be used? One hundred percent is too high and 0 percent is too low. Splitting the difference at 50 percent accuracy is still unsatisfactory. Thus, what is left is a numbers game—something more than 50 percent and less than 100 percent.
The issue of accuracy became more demanding in the aftermath of September 11 and the onset of the Iraq war. The political system seemed to have decreasing tolerance for the imperfection that is inherent in intelligence analysis. Even though all observers understand that perfection is not possible, each and every mistake seemed to incur a large political cost for the intelligence agencies. This can have an additional cost in the analytic system if analysts become risk-averse because of the political costs of being wrong. Even though most observers would agree that 100 percent accuracy is unachievable, they would also argue that the “big things” are the issues where accuracy matters. Examples of such “big things” would be the existence of Iraqi WMD or the impending fall of the Soviet Union. But these are the very issues where intelligence is more likely to be wrong because they run counter to years of collected intelligence and presumably accurate analyses. Recall the pearl metaphor discussed under collection: the slow, steady accumulation of intelligence over time, often decades. This accumulative process has an effect on the analysts. It leads them to create what they believe are accurate pictures of behavior and more or less likely outcomes. But the “big things” tend to be hardest to foresee for the very reason that they run counter to all of that accumulated intelligence. Even today, long after the facts, it is difficult to make an analytical, intelligence-based case that (1) when a crisis erupted in the Soviet Union, the Communist Party would peacefully give up power; or (2) Saddam Hussein was telling the truth and had no WMD on hand.
As unsatisfactory as this standard is, other metrics are not much better. For example, a batting average could be constructed over time—for an issue, for an office, for an agency, for a product line. Or the quality of intelligence could be assessed on the basis of the number of products produced—estimates, analyses, images. But these measures are inadequate, too. Nor are these suggestions as frivolous as they may seem: They are meant to give a feel for the difficulty of assessing what is good intelligence.
However, producing good intelligence is not some sort of Holy Grail that is rarely achieved. Good intelligence is often achieved. But one must distinguish between the steady stream of intelligence that is produced on a daily basis and the small amount within that daily production that stands out for some reason—its timeliness, the quality of its writing, its effect on policy. The view here—and it is one that has been debated with the highest intelligence officials—is that effort is required to produce acceptable, useful intelligence on a daily basis, but that producing exceptional intelligence is much more difficult and less frequently achieved. A conflict arises between the goal of consistency and the desire to be exceptional. An entire intelligence community cannot be exceptional all the time, but it does hope to be consistently helpful to policy. Consistent intelligence and exceptional intelligence are not one and the same. (As a cynic once said, “Only the mediocre are at their best all the time.”) Consistency is not a bad goal, but it allows analysis to fall into a pattern that lulls both the producer and the consumer. Thus, for all that is known about the distinctive characteristics of good intelligence, it remains somewhat elusive in reality, at least as a widely seen daily phenomenon. But, for analysts, that is one of the positive challenges of their profession.
In the aftermath of 9/11 and Iraq WMD, and after the promulgation of analytic standards, there still has not been closure on the key questions: How good is intelligence supposed to be, how often is it to be supplied, and on which issues? There are both professional and political answers to these questions, but the inherent differences between them have not been resolved.