The Center for Strategic and International Studies (CSIS) has just released a review of 30 conflict early warning systems. In this blog entry, I provide preliminary reactions to the report based on a first read; I will add further thoughts and comments in the near future. Before articulating some of my responses, let me say that this type of review is exactly what is needed to begin a serious conversation about what conflict early warning systems can, and cannot, do.
- The case selection for the review is problematic in that no distinction appears to be made between conflict early warning systems and conflict risk assessments.
- The assertion by proponents of conflict early warning models that their models have a success rate of between 75% and 90% needs to be critically reviewed and independently assessed. Equally importantly, the triggers identified by these models should be evaluated to determine whether they are factors that policymakers can practically influence.
- It should not come as a surprise that few decision makers rely on (early warning) watch lists to make politically risky decisions or to take preventive action in advance of a crisis. We should be upfront and honest about this.
- The reason that knowledge of conflicts is still rudimentary is that social systems are complex, while the tools we apply are far too linear and discrete to capture the complex dynamics of conflicts. Academics should move away from econometrics and towards systems analysis and agent-based modeling. I would also highly recommend reading Nassim Taleb’s “The Black Swan.”
- Baseline data is more appropriate for structural prevention than operational conflict prevention.
- Weighting indicators implies attaching a static number to each indicator. Not only do these weights change depending on the combination of other indicators at play, but they also change over time, at different levels of analysis, and in different political, cultural, social and economic contexts. Appropriate weights cannot be determined a priori.
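To make the weighting objection concrete, here is a minimal sketch of the kind of static weighted index many risk models implicitly use. The indicator names and weight values are entirely hypothetical, invented for illustration and not drawn from any actual early warning system:

```python
# Hypothetical indicators (0-1 scale) and fixed a-priori weights,
# invented purely to illustrate the structure of a static risk index.
indicators = {
    "political_instability": 0.8,
    "economic_shock": 0.5,
    "intergroup_tension": 0.6,
}
weights = {
    "political_instability": 0.5,
    "economic_shock": 0.2,
    "intergroup_tension": 0.3,
}

def risk_score(indicators, weights):
    """A static weighted sum: each indicator's weight is fixed in advance,
    regardless of context or of which other indicators are elevated."""
    return sum(weights[k] * v for k, v in indicators.items())

print(round(risk_score(indicators, weights), 2))  # prints 0.68
```

The linear form is the problem: the same weight applies whether an indicator appears alone or alongside others, in one country or another, this year or next. The critique above is precisely that real conflict dynamics violate that assumption.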
For me, the two most important points identified by the authors of the study are:
- Small pools of experts dominate interpretations – It is nearly impossible to predict outcomes from chaotic and complex situations, and even the experts tend not to get it right any more than lay people do. In fact, experts often overlook information that goes against years of viewing a place in a certain way, while minority voices are typically ignored.
- Models do not account for political will – The real challenge is almost always how to get political actors to take risks. Generally, government officials have a naturally optimistic and can-do nature, or they are reluctant to give higher-ups bad news, which prevents thinking through worst-case scenarios.
My main concerns and questions:
- Most, if not all, of the systems under review have not undergone any strictly independent evaluation of their accuracy. A follow-up review would be valuable if it included success stories associated with these systems.
- If the models do not account for political will, then how are we any different from Cassandra even if our models were to be accurate?
- The report repeats the familiar concern that success cannot be measured if nothing happens, i.e., if prevention is successful. This mistakenly assumes that early warning alerts actually lead to response in the first place, regardless of whether the response subsequently succeeds or fails. Tracing warnings to responses is comparatively easy; the problem is that hardly any warnings lead to any kind of response. So why exactly are we concerned about proving a negative?
- The report recommends that information be collected at the ground level. This is necessary but not sufficient. If this information is collected at the local level and then wired up to bureaucratic headquarters thousands of miles away, it will do little good for the local at-risk communities.
The report rightly argues for measures to improve accountability of those taking or not taking action. I would suggest the authors review the UN IASC’s work on Minimum Preparedness Actions (MPAs) and perhaps this piece on Decision-Making and Conflict Early Warning at the UN, in which my colleague Susanna Campbell and I consider the possibility of Minimum Preventive Actions for the purposes of applying greater accountability within the UN bureaucracy.
Indeed, we make a distinction between early warning systems for lobbying and those for operational response. The vast majority of systems reviewed in the CSIS study are more geared towards lobbying and advocacy rather than operational response. This is not a criticism but simply an observation. We should not confuse lobbying for operational response.
To conclude, this is an important report that I hope will generate some fruitful reflection.