Professor Howard Adelman kindly shared some interesting insights (via email) in response to (1) Michael Lund’s new chapter on conflict prevention, and (2) my reaction to it (see previous blog entry). In his response, Professor Adelman also drew on a number of my other blog entries, which I greatly appreciate—starting a conversation on early warning was exactly what I was hoping to do with this blog.
What follows are some reactions to Professor Adelman’s email. I’ve chosen to “reply by blog” as opposed to email in order to open the conversation to others who might wish to contribute. I want to keep this blog entry at a readable length (i.e., under five minutes) and will therefore be biased in selecting the issues I respond to. Professor Adelman is certainly invited to share additional thoughts via the comments section.
Reading the entries on the blog and Michael’s chapter suggested to me that there is confusion over the relationship between conflict prevention and early warning. […] Early warning includes not only the gathering of data but also the analysis of that data to develop strategic options for response; it does not, however, include the responses themselves, which come under conflict prevention.
Whether early response should be filed under conflict prevention or some other term is perhaps more a question for academics. I do realize the importance of having clear definitions and sharp conceptual frameworks. However, I’m more preoccupied with early response actually happening at all, regardless of which toolbox it belongs to.
Patrick observes that CEWARN’s methodology, like that of the majority of intergovernmental systems, gets rather technical, institutional and bureaucratic very quickly. It is unclear whether he is pointing to a structural flaw or to a propensity that arises because the system has strong governmental links. Though he is correct that, “It is easy to forget the human element of early warning when faced with fancy language such as baselines, trends analysis, structural indicators,” it should be noted that the few early successes of the system did not come from the highly developed technical side but from the very personal reporting side of those individuals gathering information before it was subjected to systematic extrapolations. Nevertheless, the systematic framework allowed the observer to ask the right questions and look for the data that revealed an impending crisis.
In my view, the structural flaw of CEWARN is the system’s strong governmental links. This is why the few early successes of the system did not come from the highly developed technical (or data-driven) side but from the personal reporting side of those individuals gathering information before it was subject to systematic extrapolations and institutional inertia.
I find it particularly telling that CEWARN’s first success story occurred in July 2003, barely a month after the system went operational, which is when I first joined the CEWARN team. None of the institutional or highly technical procedures were in place at the time, so when a CEWARN field monitor called a country coordinator to alert him that an armed group was mobilizing to raid another group’s cattle, everything—from the communication of this information to CEWARN right through to the early response—was done ad hoc. I am skeptical that institutionalizing effective early response is possible. In fact, I see it as an oxymoron. To find out why, please see my ISA paper on new strategies for early response (PDF).
CEWARN and other such systems are intended to involve communities at the grass roots level to sideline the sources of violence and initiate processes that will keep them sidelined. Further, the Ushahidi approach involving peer-to-peer, networked communication tools was not that different from the networking design and open information system at the base of the CEWARN system.
I disagree. CEWARN and Ushahidi are hardly similar or comparable, either in design or in operation. CEWARN is not an open information system by any measure—the project’s incident and situation reports are not open to the public. The online CEWARN Reporter is password protected; only the CEWARN team and select government officials have access. In fact, CEWARN’s design is an excellent example of anti-crowdsourcing. CEWARN’s network design remains far more centralized than Ushahidi’s can ever be, not least because the source code of Ushahidi will be made freely available to anyone who wants it. If there is one similarity between the two systems, it is that both projects need to focus far more on operational and tactical early response.
Patrick’s argument is akin to saying that when we see certain kinds of spots on the skin we know the child has measles, so why do we need greater in-depth analysis for detecting patterns of spread or for detecting the disease even before the spots appear on the skin.
Close. Why do we need greater in-depth analysis when this analysis will be sent to a hospital a thousand miles away for further analysis and not result in any response by public health professionals who have no incentive to respond? Why not train the parents directly to deal with the measles instead?
The CEWARN and WANEP systems were deliberately designed to be frugal operations rooted in community-based gathering of information and data, with the analysis located in the state and the region of the conflict.
Why are we not designing systems rooted in community-based early responses? Why ask communities to code data that is ultimately of limited use to them?
Alex de Waal’s depiction of the documentation provided in Sudan that allowed villagers to evacuate is but one example of one end of the early warning spectrum, but it does not obviate the need for more developed systems. However, Patrick’s message needs to be heeded: the latter should not be developed at the expense of community-based systems such as GI-NET in Burma, which uses a civilian radio network to enable civilians to receive and send warning information and distress calls.
I would add that more sophisticated systems need to demonstrate cases of operational response (particularly since these systems tend to be expensive to fund). Note that I’m not even raising the bar to successful cases of operational prevention. Just responses, that’s all.
CEWARN reported more than 3,000 conflict events in the first three years of operation but has only responded to a dozen at most. That’s a “success” rate of 0.4%. On the other hand, the system can be assessed using other measures. For example, the project has successfully documented extensive evidence of human rights abuses, which has forced governments to acknowledge that a problem exists and to start taking responsibility for that problem.
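For readers who want to check the back-of-the-envelope math, here is a minimal sketch of the calculation behind that 0.4% figure, assuming a dozen responses against roughly 3,000 reported events (the exact counts are approximations from the figures cited above):

```python
# Rough "response rate" for CEWARN over its first three years,
# using the approximate figures cited in the text.
events_reported = 3000   # "more than 3,000 conflict events"
events_responded = 12    # "a dozen at most"

response_rate = events_responded / events_reported * 100
print(f"{response_rate:.1f}%")  # prints "0.4%"
```

The point of the calculation is simply that even under generous assumptions, responses amount to a fraction of one percent of the events the system documented.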
Recall when the CEWARN team reported its first year of data to government officials in Addis Ababa (you and I were both there, Professor Adelman). The government representatives were so taken aback by the extent of the violence taking place in cross-border regions that they refused to release the country reports (in direct violation of the CEWARN protocol, which had been ratified). They eventually released the reports six months later and, by doing so, acknowledged that there was a problem—a critical first step.