I attended an academic conference (ISA) a while back which included a panel presentation on the African Union’s Continental Early Warning System. I had looked forward to the presentations with great interest. Much to my chagrin, however, little has changed. The Ivory Tower still dominates the development of Africa’s conflict early warning systems.
Following the presentations on CEWS, the panel’s Chair opened the floor to questions from the audience. (Note that most of the panelists were working directly on CEWS). When I asked what their measurement of success for CEWS was, the answer confirmed my concerns: “The success of CEWS is measured by the number of regular, high-quality early warning reports issued per year.”
“So not operational response, then?” I asked. “No,” the panelist confirmed. In other words, nothing has changed. Despite years of so-called “lessons learned and best practices,” successful conflict early warning is still measured by the number of reports issued, not by operational responses, let alone lives saved.
The Chair tried to change the subject by generalizing and suggested the following four indicators of success for any early warning system: Description (of trends), Explanation (of trends), Formulation (of policy recommendations) and Action.
I have no doubt that academics excel at the first three. However, “Action” is what is missing. And because academics are always the consultants employed to develop conflict early warning systems, these systems are no more than glorified databases.
Trying to control the “damage” done by the all-too-honest panelists, the Chair then cited the (outdated) refrain that successful prevention cannot be proven. And yet this same person has published the very opposite opinion (in peer-reviewed literature, no less), directly contradicting himself.
Rather ironic that an academic would make a statement of the type “X cannot be proven” and still pursue the same research for ten years. The very foundation of academia and scientific thought is built on Karl Popper’s principle of falsification. A theory that is not falsifiable is simply unscientific.
If successful conflict prevention cannot be proven, as the Chair suggested, then how will we ever know whether conflict early warning systems have any impact whatsoever? On the other hand, this may be an advantage. If successful prevention cannot be proven, then by the same logic unsuccessful prevention cannot be proven either.
This is rather convenient for academics. They can continue asking donors for funding without having to demonstrate any impact beyond a few reports a year. In addition, they can conveniently dismiss anyone who might have the nerve to suggest that an indicator of success for conflict prevention should be the number of lives saved.