
DARPA’s Crisis Early Warning and Decision Support System

The International Studies Review just published a piece by Sean O’Brien entitled “Crisis Early Warning and Decision Support: Contemporary Approaches and Thoughts on Future Research.” Sean outlines the latest attempt by the US military (i.e., DARPA) to develop a crisis forecasting tool. This time, the platform is called ICEWS, for Integrated Crisis Early Warning System.

O’Brien gives a brief overview of recent efforts in this space, including Bueno de Mesquita’s Policon and Senturion forecasting systems, which are said to be 90%+ accurate. That said, O’Brien notes that Bueno de Mesquita himself acknowledges that “he is not exactly sure how to interpret this accuracy claim since most of the reported assessments he has were not explicit about how accuracy was measured.” Incidentally, I like how this rather important qualifier is buried at the bottom of a footnote.

Some of Policon’s/Senturion’s supposedly accurate predictions had to do with questions like:

  • What policy is Egypt likely to adopt towards Israel?
  • What is the Philippines likely to do about US bases?

Keep in mind that millions of dollars were spent on these sophisticated systems, and yet I can’t help but think that paying some experts on Egypt and the Philippines a few thousand dollars would have more or less accomplished the same task. Apparently, Senturion accurately predicted the deteriorating disposition of Iraqis toward US forces. Really? Shocking — who would have expected Iraqi public opinion to shift? Yes, that was sarcasm.

O’Brien also references a forthcoming study by Ward, Greenhill and Bakke, which “delivers a serious blow to the predominant way in which most conflict models are evaluated using statistical significance.” These include the predictive models developed by Fearon and Laitin (2003) and Collier and Hoeffler (2004). Ward et al. show that these models predict few if any civil war cases at a reasonable probability cutoff of 50%. In fact, the Fearon and Laitin model “does not even appear to generate a probability of greater than 30%.” In sum, Ward et al. conclude that we cannot correctly predict over 90% of the cases with which our models are concerned.
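To see why a model with statistically significant coefficients can still fail this test, here is a toy illustration of the cutoff evaluation itself. The probabilities below are hypothetical (not Ward et al.’s data): they mimic a rare-event model whose fitted probabilities for actual onset cases cluster well below 50%, so a 0.5 cutoff classifies every case as “no war.”

```python
# Hypothetical (predicted probability of onset, actual outcome) pairs;
# 1 = civil war onset occurred. Rare-event models often assign even the
# true positives probabilities far below 0.5.
predictions = [
    (0.03, 0), (0.05, 0), (0.02, 0), (0.08, 0), (0.04, 0),
    (0.22, 1), (0.28, 1), (0.12, 1), (0.31, 1), (0.18, 1),
]

cutoff = 0.5
actual_onsets = sum(1 for _, y in predictions if y == 1)
captured = sum(1 for p, y in predictions if y == 1 and p >= cutoff)

print(actual_onsets)  # 5 actual onsets in the sample
print(captured)       # 0 of them predicted at the 0.5 cutoff
```

Every coefficient in such a model could be “significant,” yet the model never commits to predicting a single onset — which is exactly the gap between in-sample significance and out-of-sample prediction that Ward et al. highlight.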

Many of the most interesting, policy-relevant theoretical questions are also the most complex, nonlinear, and highly context-dependent. They demand consideration of hundreds of massively interacting variables that are difficult to measure systematically and at a level of granularity consistent with the theory. In such cases it is at best impractical and at worst impossible to apply standard regression techniques within the context of a Large N study, short of invoking unreasonable, oversimplifying assumptions. This may in part account for contradictory findings in the literature relative to the validity of alternative theoretical claims.

So let’s keep in mind that previous “breakthroughs” have since been largely discounted.

ICEWS phase one of three consisted of a competition between different groups to successfully predict events of interest (EoI) on a set of historical data. The most successful team was Lockheed Martin-Advanced Technology Laboratories (LM-ATL) in cooperation with a number of established scholars and industry partners. The team integrated and applied six different conflict modeling systems, including:

  1. Agent-based models drawn from Barry Silverman’s FactionSim and Ian Lustick’s Political Science-Identity (PS-I) computational modeling platforms. The latter is created with “agents representing population elements of various ethnic ⁄ political identities organized geographically and in authority structures designed to mirror the society being studied.”
  2. Logistic regression models developed by Phil Schrodt and Steve Shellman, which use “macro-structural and event data factors commonly analyzed in the academic literature.” Shellman’s approach uses a Bayesian statistics model.
  3. Geo-spatial network models built by Michael Ward, which use “structural factors, event counts, and various types of spatial networks—trade ties, people flows, and ‘social similarity’ profiles—that embody potential EOI co-dependencies between proximate countries.”
  4. “A final model was developed by aggregating the forecasts from the above-mentioned models using Bayesian techniques.”
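O’Brien doesn’t spell out the aggregation mechanics, but one common Bayesian approach is to pool the component models’ probability forecasts in log-odds space, treating each model’s forecast as independent evidence relative to a shared prior. The model names, forecast values, and the naive-Bayes pooling rule below are all illustrative assumptions, not LM-ATL’s actual method.

```python
import math

# Hypothetical forecasts for one event of interest (EoI) from the kinds
# of component models listed above.
forecasts = {
    "agent_based": 0.62,
    "logistic_regression": 0.55,
    "geospatial_network": 0.70,
}

PRIOR = 0.5  # assumed prior probability of the EoI

def naive_bayes_pool(probs, prior):
    """Combine probability forecasts assuming conditional independence:
    posterior log-odds = prior log-odds + sum of each model's
    log-odds adjustment relative to the prior."""
    prior_lo = math.log(prior / (1 - prior))
    log_odds = prior_lo
    for p in probs:
        log_odds += math.log(p / (1 - p)) - prior_lo
    return 1 / (1 + math.exp(-log_odds))

combined = naive_bayes_pool(forecasts.values(), PRIOR)
print(round(combined, 3))
```

Note that the pooled forecast ends up more confident than any single model — three independent signals all pointing the same way reinforce one another, which is the intended behavior of this kind of ensemble (and a liability if the models are not actually independent).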

I’m particularly interested in the use of Agent-based models (ABM) for conflict analysis. O’Brien references a very interesting project at Virginia Tech which I was unaware of:

Scholars at Virginia Tech have already developed a 100 million agent simulation that includes synthetic versions of many American citizens, and plan to expand to 300 million agents this year (Upson 2008). Each synthetic agent has as many as 163 variables describing age, ethnicity, socio-economic status, gender, and various attitudinal factors. The simulation is used to assess how different types of pandemics could spread across the United States under different scenarios.
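The core mechanic of such a synthetic-population simulation can be sketched at toy scale: each agent carries a few demographic attributes plus an infection state, and a pandemic spreads through random daily contacts. The agent count, attributes, and contact/transmission parameters below are illustrative placeholders, not Virginia Tech’s actual model (which runs at eight orders of magnitude larger scale, with up to 163 variables per agent).

```python
import random

random.seed(42)

N_AGENTS = 1000
CONTACTS_PER_DAY = 8       # random contacts each infected agent makes daily
TRANSMISSION_PROB = 0.05   # chance a contact with a susceptible infects them
RECOVERY_DAYS = 7
DAYS = 60

# Each agent: a couple of demographic attributes and an SIR state
# (S = susceptible, I = infected, R = recovered).
agents = [
    {"age": random.randint(0, 90), "state": "S", "days_infected": 0}
    for _ in range(N_AGENTS)
]
for a in random.sample(agents, 5):   # seed the outbreak with 5 cases
    a["state"] = "I"

for day in range(DAYS):
    infected_today = [a for a in agents if a["state"] == "I"]
    for a in infected_today:
        # Each infected agent contacts a random subset of the population.
        for other in random.sample(agents, CONTACTS_PER_DAY):
            if other["state"] == "S" and random.random() < TRANSMISSION_PROB:
                other["state"] = "I"
        a["days_infected"] += 1
        if a["days_infected"] >= RECOVERY_DAYS:
            a["state"] = "R"

recovered = sum(1 for a in agents if a["state"] == "R")
print(f"{recovered} of {N_AGENTS} agents were infected over {DAYS} days")
```

Swapping the random-contact step for geographically structured contact networks (households, schools, workplaces) is what separates a toy like this from the synthetic-population approach described above — but the agent loop itself is the same idea.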

Patrick Philippe Meier

DARPA’s New Approach to Situational Awareness

Making sense of multiple flows of information is a continuing challenge in conflict early warning and early response; particularly vis-a-vis decision making. How can we make overall sense of conflict data originating from different sources? DARPA’s new approach is to turn warzone data into simple stories.

From Wired:

Drone feeds, informant tips, news reports, captured phone calls — sometimes, a battlefield commander gets so much information, it’s hard to make sense of it all. So the Pentagon’s far out research arm, Darpa, is looking to distill all that data into “a form that is more suitable for human consumption.” Namely, a story.

Making sense of a complex situation is like understanding a story; one must construct, impose and extract an interpretation. This interpretation weaves a commonly understood narrative into the information in a way that captures the basic interactions of characters and the dynamics of their motivations while filling in details not explicitly mentioned in the input stream. It uses story lines with which we all have experience as analogies, and it simplifies the detail in order to communicate the crucial aspects of a situation. The story lines it uses are those the decision maker should be reminded of, because they are similar to the current situation based upon what the decision maker is trying to do.

These stories, however, would be authored by artificial intelligence (AI) algorithms courtesy of DARPA’s Information Processing Technology Office. I’m sceptical of purely AI-driven solutions for obvious reasons. What caught my interest, rather, was the idea of storytelling, i.e., a qualitative, narrative approach to conflict analysis and situational awareness that may overcome some of the cognitive biases that surface during decision-making processes.