Centers can become competitors with agency offices for resources. This appears to have been the case with the NCTC and the CIA’s Counterterrorism Center, according to the WMD Commission. As has been seen from the time that DCI Gates began creating centers in the early 1990s, the heads of agencies are not willing to divert scarce resources to an activity over which they will have no control (centers fell under the jurisdiction of the DCI and now are under the DNI) and from which they will receive no direct results. The WMD Commission recommended the creation of an additional center, the National Counterproliferation Center, which has a managerial role in line with the commission’s concept of mission managers to coordinate collection and analysis on specific issues or topics.
A bureaucratic debate has ensued over the nature of the centers. Although their goal is to bring the intelligence components into a single place, most centers had been located in and dominated by the CIA. Some people argue that this arrangement undercuts the centers’ basic goal—to reach across agencies. Defenders of the system argue that housing the centers in the CIA gives them access to many resources not available elsewhere and also protects their budgets and staffing. A 1996 review by the House Intelligence Committee staff validated the concept of the centers but urged that they be less CIA-centric.
Given the location of the centers, however, other agencies are sometimes loath to assign analysts to them, fearing that these analysts will be essentially lost resources during their center service. (A similar problem used to occur on the Joint Staff, which supports the Joint Chiefs of Staff. The military services—Army, Navy, Air Force, Marines—naturally preferred to keep their best officers in duties directly related to their service. This ended when Congress passed the Goldwater-Nichols Act in 1986, which mandated a joint service tour as a prerequisite for promotion to general or admiral.) Centers now are overseen by the DNI, which should serve to make them more community-based in terms of staffing. However, the setup raises new issues about how the DNI staffs the centers, as he has no direct control over any analytic components comparable to the control that the DCI had over the CIA. DNI McConnell’s requirement that intelligence officers have “joint duty” assignments before being promoted to senior ranks (similar to the requirement for the military) may help make assignments to centers more attractive, especially for agencies’ most talented officers, as a means of assuring their continued promotion.
Another issue for the centers is their duration. In government—in all sectors—ostensibly temporary bodies have a way of becoming permanent, even when the reasons for their creation have long since ended. A certain bureaucratic inertia sets in. Some people wish to see the body continue, as it is a source of power; others fear that by being the first to suggest terminating it they will look like shirkers. The situation has a comic aspect but also a serious one, as these temporary groups absorb substantial amounts of resources and energy.
Thus, the question for the centers—or any other groups—is: When are they no longer needed? Clearly, the transnational issues are ongoing, but even they may change or diminish over time. One former deputy DCI suggested a five-year sunset provision for all centers, meaning that every five years each center would be subject to a hard-nosed review of its functions and the requirement for its continuation.
Finally, some critics question the focus of the centers, arguing that they concentrate tactically on the operational aspects of specific issues instead of on longer-term trends. Center proponents note the presence of analysts in the centers and the working relationship between the centers and the national intelligence officers (NIOs), who can stay apprised of the centers’ work, offer advice, and are responsible for the production of NIEs.
The WMD Commission, reporting in March 2005, recommended the creation of mission managers to “ensure a strategic, Community-level focus on priority intelligence missions.” The commission envisioned these managers overseeing both collection and analysis on a given issue, as well as fostering alternative analyses on their issue. However, the mission managers would not conduct actual analysis; rather, they would facilitate analysis. (An exception was made for counterintelligence, whose mission manager would conduct strategic counterintelligence analysis.) The commission also posited that the mission managers offered a more flexible approach than the centers. The commission recommended that mission managers oversee target development and research and development for their issues.
As of mid-2008, there were six mission managers, covering North Korea, Iran, Cuba/Venezuela, counterterrorism, counterintelligence, and counterproliferation. Interestingly, the three “counter” mission managers were also the directors of DNI centers. The mission manager concept raises several issues. First, and most obvious, is their authority to target collection or facilitate analysis. These activities occur in the various intelligence agencies, where the DNI faces very real limits to his or her authority, as did the DCI. Second, it is exceedingly difficult for managers to maintain awareness of all of the analysis being produced on certain issues, although this is also being addressed within the DNI’s office. The mission managers must also have knowledge of the analysts working on an issue across the community. Here the DNI has benefited from the Analytic Resources Catalog (ARC), a listing of all analysts, their subject areas, and their past expertise, which was created under DCI Tenet.
Ultimately, there is no best way to organize analysts. Each scheme has distinct advantages and disadvantages. And each scheme still revolves around either functional or regional analysts. The goal should be to ensure that the right analysts of both types are brought to bear on topics as needed—either on a permanent or temporary basis, depending on the issue and its importance. Flexibility and agility remain crucial. (See box, “Metaphors for Thinking about Analysis.”)
METAPHORS FOR THINKING ABOUT ANALYSIS
 
Metaphors are often used to describe the intelligence analysis process.
Thomas Hughes, a former director of the Department of State’s Bureau of Intelligence and Research, wrote that intelligence analysts were either butchers or bakers. Butchers tend to cut up and dissect intelligence to determine what is happening. Bakers tend to blend analysis together to get the bigger picture. Analysts assume both roles at different times.
In the aftermath of the September 11 terrorist attacks, the phrase “connect the dots” became prevalent as a means of describing an analytic intelligence failure. It is an inapt metaphor. Connecting the dots depends on all of the dots being present to draw the right picture. (The dots also come numbered sequentially, which helps considerably.) As a senior intelligence analyst pointed out, the intelligence community was accused of not connecting the dots in the run-up to September 11 but was accused of connecting too many dots regarding the alleged Iraqi weapons of mass destruction.
Two more useful descriptions are mosaics or pearls. Intelligence analysis is similar to assembling a mosaic, but one in which the desired final picture may not be clear. Not all of the mosaic pieces may be available. Further complicating matters, in the course of assembling the mosaic, new pieces appear and some old ones change size, shape, and color. The pearl metaphor refers to how intelligence is collected and then analyzed. Most intelligence issues are concerns for years or even decades. Like the slow growth of a pearl within an oyster, there is a steady aggregation of collected intelligence over time, allowing analysts to gain greater insight into the nature of the problem.
Why do these metaphors matter? They matter because they will affect how one views the analytical process and the expectations one has for the outcomes of that process.
 
DEALING WITH LIMITED INFORMATION. Analysts rarely have the luxury of knowing everything they wish to know about a topic. In some cases, little may be known. How does an analyst deal with this problem?
One option is to flag the problem so that the policy client is aware of it. Often, informing policy consumers of what intelligence officials do not know is as important as communicating what they do know. Secretary of State Colin Powell (2001-2005) used the formulation: “Tell me what you know. Tell me what you don’t know. Tell me what you think.” Powell said he held intelligence officers responsible for the first two but that he was responsible if he took action based on the last one. But admitting ignorance may be unattractive, out of concern that it will be interpreted as a failing on the part of the intelligence apparatus. Alternatively, analysts can try to work around the problem, using their own experience and skill to fill in the blanks as best they can. This may be more satisfying intellectually and professionally, but it runs the risk of giving the client a false sense of how well grounded the analysis is, or of the analysis simply being wrong.
Another option is to arrange for more collection, time permitting. Yet another is to widen the circle of analysts working on the problem to get the benefit of their views and experience.
A reverse formulation of this same problem has arisen in recent years. To what degree should analysis be tied to available intelligence? Should intelligence analyze only what is known, or should analysts delve into issues or areas that may be currently active but for which no intelligence is available? Proponents argue that the absence of intelligence does not mean that an activity is not happening, only that the intelligence about it is not available. Opponents argue that this sort of analysis puts intelligence out on a limb, where there is no support and the likely outcome is highly speculative worst-case analysis. On the one hand, intelligence analysis is not a legal process in which findings must be based on evidence. On the other hand, analysis written largely on supposition is not likely to be convincing to many and may be more susceptible to politicization.
For many years, the intelligence community has stressed the importance of analytic penetration as an intellectual means of trying to overcome a dearth of intelligence sources. Analytic penetration means thinking longer and harder about the issue, perhaps making suppositions about what is most likely, and perhaps laying out a range of outcomes based on a set of reasonable assumptions. The underlying premise of analytic penetration is that the analytic community does not have the luxury of simply throwing up its hands and saying, “Sorry, no incoming intelligence; no analysis.” If analysis is required and the sources are insufficient, rigor must be applied to the analysis to make up for the missing sources. This is an area where greater collaboration across offices and agencies would be most useful.
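For readers who think in code, the idea of laying out a range of outcomes from explicit assumptions can be made concrete. The short Python sketch below is only an illustration of the spirit of analytic penetration; the assumption names, states, and weights are hypothetical, not any agency's actual method, and the assumptions are treated as independent purely for simplicity.

    # A minimal sketch of "laying out a range of outcomes based on a set
    # of reasonable assumptions." All names and weights are hypothetical.
    from itertools import product

    # Each assumption lists the states an analyst considers reasonable,
    # with a rough subjective weight for each state (weights sum to 1).
    assumptions = {
        "program_active": [("yes", 0.6), ("no", 0.4)],
        "foreign_assistance": [("yes", 0.3), ("no", 0.7)],
    }

    # Cross the assumption states to enumerate every outcome, carrying
    # the product of the component weights (independence assumed here).
    names = list(assumptions)
    for combo in product(*assumptions.values()):
        weight = 1.0
        parts = []
        for name, (state, w) in zip(names, combo):
            weight *= w
            parts.append(f"{name}={state}")
        print(f"{', '.join(parts)}: weight {weight:.2f}")

Even this toy version shows the point of the exercise: the analyst's judgments are forced into the open, where colleagues in other offices and agencies can challenge and revise them as collection improves.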
The concerns about dealing with limited intelligence arose in the reviews of intelligence performance before the 2001 terrorist attacks and the intelligence before the Iraq war (2003- ). The problems in each case were not identical. In the case of the September 11 attacks, some people criticized analysts for not putting together intelligence they did have to get a better sense of the al Qaeda threat and plans. Intelligence officials were also criticized for not being more strident in their warnings—a charge that intelligence officials rebutted—and policy makers were criticized for not being more attuned to the intelligence they were receiving. However, no one has been able to make the case that sufficient intelligence existed to forecast the time and place of the attacks. The admonition about strategic versus tactical surprise is apropos (see chap. 1). Stopping a terrorist attack requires tactical insights into the terrorists’ plans.
In the case of Iraq, the critique is just the opposite: that intelligence analysts made too many unsubstantiated connections among various pieces of collected intelligence and created a false picture of the state of Iraqi WMD programs. Implicit in this critique is the view, held by some, that analysts should not analyze beyond the collected intelligence lest they draw the wrong conclusions. This would be an alarming deviation from the norm, given that collection is almost always less than perfect. Analysts are trained to use their experience and their instinct to fill in the collection gaps as best they can. That is one of the ways in which analysts add value.
If a lesson is to be drawn from these two analytical experiences, it may be no more than that the analytical process is imperfect under any and all conditions. No Goldilocks formula has been devised for the right amount of intelligence on which analysis should be based. The quality of that intelligence matters a great deal, as does the nature of the issue being analyzed.
 
CONVEYING UNCERTAINTY. Just as everything may not be known, so, too, the likely outcome may not be clear. Conveying uncertainty can be difficult. Analysts shy away from the simple but stark “We don’t know.” After all, they are being paid, in part, for making some intellectual leaps beyond what they do know. Too often, analysts rely on weasel words to convey uncertainty: “on the one hand,” “on the other hand,” “maybe,” “perhaps,” and so on. (President Harry S. Truman was famous for saying he wanted to meet a one-handed economist so that he would not have to hear “on the one hand, on the other hand” economic forecasting.) These words may convey analytical pusillanimity, not uncertainty. (Conveying uncertainty seems to be a particular problem in English, which is a Germanic language and makes less use of the subjunctive than do the Romance languages.)
Some years ago a senior analytical manager crafted a system for suggesting potential outcomes by using both words and numbers—that is, a 1-in-10 chance, a 7-in-10 chance. Such numerical formulations may be more satisfying than words, but they run the risk of conveying to the policy client a degree of precision that does not exist. What is the difference between a 6-in-10 chance and a 7-in-10 chance, beyond greater conviction? It is also important to remember that an event that has a 6-in-10 chance of occurring also has a 4-in-10 chance of not occurring. When presented this way, the event now may seem uncomfortably close to 50/50, which a 6-in-10 chance does not convey by itself. There are very few “sure things.” In reality, the analyst is back to relying on gut feeling. (One chairman of the NIC became incensed when he read an analysis that assessed “a small but significant chance” of something happening.)
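The arithmetic behind that discomfort is easy to show. The Python sketch below is a hypothetical illustration; the phrase-to-number table is invented for the example, not the manager's actual scheme or any official standard. It maps estimative phrases to X-in-10 chances and prints each alongside its complement.

    # Hypothetical mapping of estimative phrases to numeric chances.
    # Printing the complement of each chance shows why a 6-in-10 call
    # can feel uncomfortably close to 50/50.
    likelihood = {
        "very unlikely": 1,
        "unlikely": 3,
        "roughly even": 5,
        "probable": 6,
        "very probable": 9,
    }

    for phrase, chance in likelihood.items():
        print(f"{phrase}: {chance}-in-10 it happens, "
              f"{10 - chance}-in-10 it does not")

Whatever table one adopts, the numbers carry an air of precision that the underlying judgment may not possess, which is precisely the risk noted above.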
