Intelligence and Strategic Warning

Security Dialogue just published an excellent piece on postmodern intelligence and strategic warning. Co-authored by Myriam Cavelty and Victor Mauer, the analysis has important implications for the field of conflict early warning. This blog post summarizes the key points from the co-authored piece entitled “Strategic Warning and Reflexive Intelligence.” I provide a handful of excerpts along with my comments.

“The dominant notion in the study of surprise attacks is that the problem is not the lack of information per se, but rather an incorrect understanding of what the available information means, as well as other difficulties and challenges arising from cognitive and organizational issues.” The mainstream literature on conflict early warning and response has an excellent track record of completely ignoring challenges arising from organizational issues. For more on this issue, see my 2007 ISA paper (PDF).

The authors provide interesting historical insights on the evolution of intelligence analysis:

The failure to detect North Korea’s surprise attack on South Korea in 1950 prompted the establishment of a worldwide warning system, and the United States began to take advantage of its regional military commands around the world. When the Soviet Union emerged as the main rival of the United States, the intelligence community switched to an indicator-based warning system on the premise that the USSR could not mount an attack without some prior effort to gear up for war, and that if certain key intelligence targets were watched carefully, indications that an attack was being prepared would be detected.

Furthermore, the authors argue that many of the changes brought about by the end of the Cold War still baffle the intelligence community at large.

‘Many of today’s principal analytic problems arise from continued reliance on analytic tools, methodologies, and processes that were appropriate to the static and hierarchical nature of the Soviet threat during the Cold War.’ The tendency is to press the new, still undefined, highly complex post-modern world into the old Cold-War mind-set with all that implies, exemplified in a high degree of ‘spatial fetishism’, a tendency to reduce the units of analysis to territorially demarcated national states.

Despite this general tendency, there is a growing part of the intelligence community that has come to realize that the changing context has significant consequences for strategic early-warning methodologies and methods. However, even though the techniques of alternative analysis have been around for many years, they have only recently (and still only intermittently) been applied in the intelligence community.

For an example of an alternative analytical technique, see the piece I co-authored with Didier Sornette and presented at ISA 2008 (PDF) on applying earthquake physics to conflict early warning.

Cavelty and Mauer trace the application of traditional indicator systems developed during the Cold War to the post-9/11 environment, in which “monitoring is moving from an exercise in surveillance-monitoring towards forecasting, understood as a probabilistic assessment focusing on general trends.” The authors thus argue that, given the nature of the new threat environment, “new kinds of methodologies are needed in order to capture the nature of the new threats (networked, transnational, complex).”

In general, monitoring now focuses on forecasting certain activities or patterns. Successful forecasting is only possible, however, if the problem to be confronted has been clearly defined, which of course necessitates that the threat must be recognized in the sense that it is, at least in part, known.

The authors point to three broad types of methodologies for predicting and forecasting events that have not been clearly identified:

  1. Trends and patterns
  2. Frequency
  3. Probability

In addition, Cavelty and Mauer list four other methodologies that might enhance monitoring: geospatial predictive analysis, data-mining technologies, project management-based approaches and social network analysis.

The first of these, geospatial predictive analysis, is the attempt to predict the location and date of future terror attacks by accumulating data on the geographic location of previous incidents. To this end, data is fed into a software application that generates threat signatures, such as trends in tactics, techniques and procedures. Using a geographic interface, this system is then able to identify terrorist hot spots.
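The hot-spot idea can be illustrated with a few lines of code. The following is a minimal sketch, not the software the authors describe: incident coordinates are binned into a coarse geographic grid and the densest cells are flagged as candidate hot spots. All coordinates, the cell size and the function names are invented for the example.

```python
from collections import Counter

def hotspots(incidents, cell=0.5, top=3):
    """Bin incident coordinates into a lat/lon grid and return the
    most incident-dense cells as candidate hot spots.

    `incidents` is a list of (lat, lon) pairs; `cell` is the grid
    size in degrees. Purely illustrative, not an operational tool."""
    grid = Counter((round(lat // cell), round(lon // cell))
                   for lat, lon in incidents)
    return grid.most_common(top)

# Toy data: incidents clustered near (34.5, 69.2) plus scattered outliers.
data = [(34.5, 69.2), (34.6, 69.1), (34.4, 69.3), (31.0, 65.7), (36.7, 67.1)]
print(hotspots(data, cell=0.5, top=1))
```

A real system would of course weight incidents by recency, use kernel density estimation rather than a fixed grid, and fold in the tactical “threat signature” data the authors mention.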

The second technique involves data-mining technologies. Here, large volumes of data on known and potential terrorists can be harnessed and analyzed using data-mining tools to identify links and patterns in different data repositories, to identify anomalies, and to predict which individuals are likely to carry out terror attacks. Data-mining tools complement human intelligence and signals intelligence surveillance and can help identify key players and their communication tendencies.
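A crude sketch of the link-analysis step, assuming nothing about the actual tools in use: communication records are reduced to a contact graph, and unknown individuals are scored by how many distinct known suspects they are in contact with. The data structure and all names are hypothetical.

```python
from collections import defaultdict

def link_scores(call_log, known_suspects):
    """Score each unknown individual by the number of distinct known
    suspects they communicate with -- a crude stand-in for the
    link-analysis step in a data-mining pipeline.

    `call_log` is a list of (caller, callee) pairs. Illustrative only."""
    contacts = defaultdict(set)
    for a, b in call_log:
        contacts[a].add(b)
        contacts[b].add(a)
    return {person: len(peers & known_suspects)
            for person, peers in contacts.items()
            if person not in known_suspects}

# Toy log: "A" talks to both known suspects, "B" to one, "C"/"D" to neither.
log = [("A", "X"), ("A", "Y"), ("B", "X"), ("C", "D")]
print(link_scores(log, known_suspects={"X", "Y"}))
```

In practice such a score would be one feature among many, combined across the different data repositories (travel, financial, communications) rather than computed from a single call log.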

A third technique is based on the use of a project management approach. A project management model can be used to characterize terrorist operations in terms of tasks, schedules and lines of responsibility. Understanding this model enables the counter-terrorism community to delay or disrupt an imminent operation, conduct ‘what if?’ analyses and guide the systematic search for evidence.
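The project-management idea can be illustrated with a toy model, assuming (purely for the sketch) that an operation decomposes into tasks with durations and predecessor tasks. Computing each task's earliest finish time reveals the longest path, i.e. the tasks where a delay or disruption pushes back the whole operation; all task names and durations are invented.

```python
def earliest_finish(tasks):
    """Earliest finish time for each task in a project model.

    `tasks` maps a task name to (duration, [predecessor names]).
    A task can start only once all its predecessors have finished."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, deps = tasks[name]
            finish[name] = dur + max((ef(d) for d in deps), default=0)
        return finish[name]
    for t in tasks:
        ef(t)
    return finish

# Invented example of an operation modeled as dependent tasks.
plan = {
    "recruit": (10, []),
    "fund":    (5,  []),
    "train":   (14, ["recruit"]),
    "acquire": (7,  ["fund"]),
    "execute": (1,  ["train", "acquire"]),
}
print(earliest_finish(plan))
```

In this invented example, delaying "train" by a day pushes "execute" back by a day, while "acquire" has slack, so disrupting it alone changes nothing — exactly the kind of ‘what if?’ reasoning the authors describe.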

The fourth technique is based on social network analysis. This incorporates, correlates and visualizes biographic, religious, demographic and other social data, and identifies the networks of connections and relationships between individual actors, enablers or groups. Such an approach enables one to understand why individuals become radicalized and how they are actually recruited.
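As a minimal sketch of the network measure involved (with entirely hypothetical actors), degree centrality flags the best-connected nodes in a graph of recorded relationships:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: each actor's tie count divided by
    the maximum possible (n - 1). One of the simplest social-network-
    analysis measures; actors and ties here are hypothetical."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(degree)
    return {actor: d / (n - 1) for actor, d in degree.items()}

# Toy network: one well-connected node tied to three others.
ties = [("recruiter", "p1"), ("recruiter", "p2"),
        ("recruiter", "p3"), ("p1", "p2")]
central = degree_centrality(ties)
print(max(central, key=central.get))
```

A richer analysis would use betweenness or eigenvector centrality and attach the biographic, religious and demographic attributes the authors mention to each node.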

Cavelty and Mauer recognize that these approaches are not without faults.

To mention just a few: The first approach only considers successful attacks and not aborted operations or failed attempts; insight from high-frequency areas is not necessarily applicable to rare-event regions; and this approach focuses on incidents rather than on people, which limits its ability to predict terrorist behavior.

Data-mining, on the other hand, does not enable effective information-gathering on unknown individuals and does not solve the problems of pattern recognition. The project management approach might generate false positives, because identifying terrorists is harder than identifying suspicious consumer behavior and because the approach relies on a limited set of technical indicators rather than complementing them with non-technical factors such as the characteristics of groups and the nature of their leadership.

Finally, social network analysis, while important, does not complete the big picture. However, with careful consideration of the pros and cons and through careful combination of more than one method, it may be possible to derive attack indicators with some predictive potential.

The authors also touch on other types of indicator-based systems developed in the realm of political risk analysis, such as Canada’s innovative Country Indicators for Foreign Policy (CIFP) project. Again, however, these models have some important limits.

Such models may produce reasonable forecasts when there is good available data and if there is a belief that existing, well-understood and precisely delineated patterns of behavior will continue into the future despite the fact that many aspects of the particular challenges may still be undiscovered. In other words, for monitoring activities to make sense, it must be believed that the threat is analytically tractable and that cause–effect relationships are identifiable.

Clearly, however, there is an inherent danger in this assumption: such a certainty about being able to know might lead to wrong actions based on over-reliance on these systems. Where there is doubt that the relationships described in the model will continue or where forecasts of the independent variables are unreliable, different tools are needed.

Cavelty and Mauer juxtapose traditional models of strategic early warning with the domain of discovery. The former assume that discontinuities do not emerge without warning.

Warning signs have been described as ‘weak signals’ or as factors for change that are hardly perceptible at present but will constitute a strong trend in the future or can have dramatic consequences. The management of ‘unknown unknowns’ makes it necessary to gather ‘weak signals’ and to identify events or developments that could set off alternative dynamics and paths [which] makes it very clear why discovering such signals is a daunting task.

In contrast, the concept of discovery is not about pattern recognition of known patterns but rather about “pattern discovery or the identification of new patterns.” However, “we tend to perceive what we expect to perceive,” and these “patterns of expectation become so deeply embedded that they continue to influence perceptions even when people are alerted to and try to take account of the existence of data that do not fit their preconceptions.”

In sum, “because patterns must be ‘recognized’ by the observer, any observed structure or pattern may be an artefact of the research question; other patterns may go unnoticed for the same reasons.” Equally problematic is pattern bias, “which makes one look for evidence that confirms rather than rejects a hypothesis and fill in missing data with data from previous experiences.”

Put differently, “the belief about the nature of a threat (its ontology) and our knowledge or belief about the way we should approach it (epistemologically and methodologically) shape possible policy responses.” As Cavelty and Mauer point out, “this implies that there is no such thing as apolitical risk analysis: as soon as something is identified as a risk, it is managed and therefore changed.”

The authors describe the techniques developed to overcome some of these problems and cope with uncertain futures. Alternative analysis seeks to “help analysts and policymakers to stretch their thinking by broadening the array of outcomes considered or by challenging underlying assumptions that may constrain thinking.” These methods include scenario development, Delphi exercises, brainstorming, horizon scanning, etc. While these techniques can stimulate strategic thinking, “they do not bring back certainty.”

In order to anticipate threats that can suddenly emerge at any time, anywhere and in a variety of forms, ‘analysts need to think more in terms of a broad mental readiness to perceive early warning signs of threat than in terms of challenging specific assumptions or identifying specific alternative outcomes’. Alternative analysis is designed to overcome biases: using them does not mean that one can know the future. If they are conceived as a set of tools rather than as an ongoing organizational process aimed at promoting sustained mindfulness, it is unlikely that they will be accepted within the community.

Cavelty and Mauer conclude their piece with a serious push for complex systems thinking in the social sciences. I was particularly excited that the authors integrated insights from complexity science into their analysis—very refreshing to read this in peer-reviewed literature. For an in-depth analysis on the intersection between complexity science and early warning, see my 2007 ISA paper (PDF) on the subject.

If the twin forces of complexity and change are taken seriously, “there can be no ‘grand’ theoretical project that distils complexity, ambiguity and uncertainty into neat theoretical packages and categories.” They point out that “global risks contradict the language of control of industrial societies [that seek to] feign control over the uncontrollable.” In short, “expert knowledge is […] an insufficient and unreliable resource for political decisions.”

The complexity paradigm implies that certain situations are unpredictable by nature, “not just by virtue of the limitations of the observer.” The authors thus argue that the task ahead consists of “learning to recognize and appreciate complexity, ambiguity and uncertainty,” which implies that we need to “start focusing on different methods that might work well in situations where the assumption of order does not hold.” In other words, “the aim should not be to reduce uncertainty, as traditional scientific methods do, but to accept it for what it is.”

Instead of adopting different methods, however, the strategic warning community has moved from “the threat-based approach towards vulnerability assessment […] to ‘play defense’ in lieu of developing new indication and warning systems.” In other words, this approach substitutes missing knowledge by broadly applying defensive measures. This is particularly problematic since such an approach leads to the development of “worst-case scenario approaches and the irreversible damages associated with them [which] logically lead to a politics of zero risk and legitimize any kind of action.”

In short, “this so-called hypothesis-based analysis starts with a preferred scenario and then finds data that support such a scenario.” Not surprisingly, “because such an approach can be presented as having the advantage of countering various kinds of unknowns and allows policymakers to contend with uncertainty, it has significant appeal today. However, such an approach would ultimately fail to deal with the basic tenet of the new threat environment, namely, uncertainty.”
