Photo: Monitoring Results at a Health Center, by The Reboot (Flickr)

As part of my work for Vanderbilt Law School's International Legal Studies Program, I recently submitted a grant application to a US Government agency aimed at increasing the capacity of an institution to perform its mandated functions. It was a relatively straightforward submission, and the safest I've been involved in lately, with less innovation built into it than anything I have written in years. For these reasons, I feel we have a relatively high chance of success.

The part I like least about developing grant applications is the monitoring and evaluation (M&E) portion. It is not because I feel that M&E is worthless; indeed, the contrary is true. I fully buy into the notion that development programs must monitor their implementation against measurable metrics. Further, I buy into the idea that development programming is simply a means to some (other) identifiable end, that we should develop a theory of how the programming will lead to the desired results, and that we should evaluate the project against those outcomes.

Despite intellectually understanding the need for and efficacy of M&E, I detest writing M&E frameworks. Mostly, I suspect this is due to the sector I work in. If I worked in education, health, agriculture, or some other sector of the development world, perhaps M&E would be less of a motivational hurdle for me; perhaps not. To be honest, I have no idea. There are two reasons why I think this could be true. First, results in those sectors are more easily measured. Second, those sectors offer programs a quicker return, so the success or failure of a program shows up sooner.

If we can all agree that the ability to measure success is one of the more important elements of any M&E framework's design, then we, as practitioners, need to build M&E frameworks that can actually be measured. Duh. Yet if the goal is to “build institution X,” “build the capacity of policymakers within area Y,” or “provide technical support to the reform of policy sector Z,” how can we craft metrics that are easily measured? Governance work (which I am referring to in isolation from democracy & governance work) usually boils down to one of those three goals: the donor and the beneficiary government want an institution to be built, want the capacity of the individuals comprising that institution to be bolstered, or want to achieve a longer-term legal or policy reform.

When we look at measuring the implementation and the success of any development programming (I'm not a fan of the “intervention” jargon with respect to governance work), there are basically two ways in which we can measure things.

We can measure what has been done. These are commonly called output variables. Examples of output measurement: How many memos were written and given to policymakers? How many schools were built? How many police units were outfitted with advanced investigation kit? Output variables measure things that are usually well within the control of the implementing agency or consortium.

We can also measure what has happened. These are commonly called outcome variables. Examples of outcome measurement: How much better is the institution performing? How many more students are graduating from the impact area? How many fewer crimes were committed within the impact area? Outcome variables measure things that are not so easily within the control of the implementing agency or consortium.

Between the two, it is more beneficial to the implementing agency or consortium to measure the outputs. The implementer can more easily control an output, since its delivery usually sits well within the implementer's decision space. It is self-evident that no implementer wants to show that it failed to deliver on a project when that failure resulted from things outside its control. Most people would find it entirely normal to have their performance evaluated based upon what they actually did. Output variables are simply a measurement of what the implementer has done, and are therefore easier to measure and easier to point to and say, “evaluate me on this.”

Between the two, however, it is more beneficial to the environment in which the program is being implemented to measure the outcomes. Outcome variables more precisely measure the results of the program that was implemented, and the overarching mission of what we, as development workers, should be doing is to produce results that positively affect the environment. While purchasing advanced investigation kit for police units and training personnel in its use is important, that is not necessarily a result (it is an output). The purchase and training effort will not, of itself, positively affect the environment. If, however, the police are empowered to use the kit and training to reduce crime within the impact area, then we are looking at a result that positively affects the environment.

While outcomes may be more beneficial to build into M&E frameworks, measuring them is often not a simple matter, particularly when we are talking about building and/or reforming institutions. The real problem I face when thinking through an M&E framework is the same reason why, psychologically, governance work can be wholly unsatisfying for those who dedicate their professional lives to it. I am sure that social scientists and development thinkers with more brains and experience than I have given it some lingo, but I just call it the “gratification delay.” When the effort you are embarking upon is about transmitting knowledge, it takes time to see the results of that effort. This is self-evident, and we all see it in our everyday lives. If we are talking about building an institution, we are talking about transmitting knowledge of how to run that institution far more than we are talking about erecting a physical structure, so the real gratification for the effort cannot be measured until some non-specific time in the future.

This gets me to the biggest problem I have with the evaluation portion of any M&E framework. It is not that the framework is difficult to build; it is that it is exceptionally short-sighted of donors to require implementing agencies to evaluate a project within the timeframe of the project itself. In other words, such an evaluation is simply a paper tiger. It reeks to me of work for work's sake, rather than work that ends with the satisfaction of some logical goal.

For my own purposes, I would love it if a donor asked me to submit a grant application with an M&E framework built into the submission, but stipulated that the evaluation (the “E”) portion would be conducted by another organization at some reasonable time following the conclusion of the project.

Almost every project I have seen or been a part of is behind from the beginning and rushing towards the end to deliver on its promises. Even where those projects have built in the final quarter as M&E time to properly evaluate the project, the best they can do within that timeframe is to get a very early picture of the results. So why do donors not simply extend the evaluation out to four, six, or eight quarters following the completion of the project? I am sure there are reasons, but I surely do not know what they are.

~ # ~