Using theory-driven evaluation within the mCHW project

By Martin Oliver

Our project’s aim – to advance the training and supervision of community
health workers (CHWs) in Kenya, following a participatory approach –
creates some very specific requirements for evaluation. Unlike some
intervention projects, our work with CHWs, community health extension
workers (CHEWs) and other close-to-community actors isn’t stable. It has
to be negotiated at every step, and has responded to their priorities –
for example, in the topics that were selected as the basis for app
development. As a consequence, the project’s evaluation work can’t make
claims about a standardized process or product; it needs to evaluate a
moving target.

In addition, the areas in which we are working are fairly ill-defined,
which has made it very difficult to come up with simple metrics with which
to judge success. The project is funded under the broad aspiration of
‘poverty reduction’, but we expect our influence on this to be diffuse at
best. We also hope that the apps that are developed might reduce mortality
or improve quality of life – but of course, even if they were easy to
measure, which of those is appropriate depends on what the app is created
for, and that decision had to be taken with participants during the
project. So conventional baselines were hard to establish. It would be
possible to count the number of homes CHWs visit, for example, but so many
factors influence this that it doesn’t make much sense to assume this will
be shaped by the app we develop. There are some more promising candidates
– the number of referrals made, for example – but it turns out these are
complicated, too, as I’ll discuss below.

So, given the lack of an intervention that could be ‘black boxed’ and the
absence of obvious indicators, conventional experimental approaches to
evaluation were ill-suited to what we were trying to achieve. As a
consequence, we turned to theory-driven evaluation. This approach has been
developed to help make sense of complex situations, and to make judgements
whilst exploring processes of change, and it has gained increasing
traction within the field of international development (Vogel, 2012).

The heart of the approach involves identifying the ‘programme logic’ – the
processes that are thought to connect the inputs to the intended outcomes
– and then looking for evidence that supports or challenges that, or else
helps to develop that change model by providing details or filling in
gaps. It is typically iterative, adding layers of detail over time; this
has meant that it is adaptable, and able to support us throughout the
process of creating the app and delivering it to CHWs. It also involves
working with participants’ ideas about why change happens (or fails),
making it well-suited to participatory projects.
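
To make the idea concrete, here is a minimal sketch in Python. The names
and structure are entirely my own invention for illustration – they are
not part of the project’s tooling – but they show the essential shape: a
change model is a chain of assumed links, and each evaluation cycle
attaches supporting or challenging evidence to individual links rather
than passing a single verdict on the whole intervention.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """One link in the programme logic: a mechanism thought to
    connect an input to an intended outcome."""
    description: str
    supporting: list = field(default_factory=list)    # evidence for the link
    challenging: list = field(default_factory=list)   # evidence against it

# A hypothetical slice of a change model for the app.
logic = [
    Assumption("The app helps CHWs assess infant development"),
    Assumption("Better assessment changes referral decisions"),
    Assumption("Changed referral decisions benefit clients"),
]

# Each iteration of the evaluation adds evidence, not a verdict.
logic[1].supporting.append("Interview: CHW referred a case she would have missed")
logic[1].challenging.append("Interview: many referral decisions were already clear")

for link in logic:
    print(f"{link.description}: {len(link.supporting)} for, "
          f"{len(link.challenging)} against")
```

The point of the sketch is simply that evidence accumulates against
specific links in the chain, which is what allows the model to be refined
– or redrawn – as the project moves.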

As an example of how this has worked, we can look at the way in which our
understanding of the links between the app and referrals has developed
over time. Initially, we knew CHWs had many responsibilities, but it
wasn’t even clear which of these we should try to support. Focus groups
with CHWs and CHEWs helped by describing the day-to-day work of CHWs, and
identifying specific actions that could be supported. Referrals were soon
identified as one of these. Follow-on interviews with CHWs helped us to
understand how they decided whether or not to refer a client to a clinic.
Some of these decisions were clear; there was
no need for an app to support them. However, some were more challenging,
and with the CHWs, we identified that infant development was one area
where the decision about whether or not to refer was particularly
difficult. There were many decisions that also needed evaluative input
whilst developing the app – Which framework should be used for assessing
development? What should the interface look like? – but, glossing over
these for the moment, there were interesting issues when we came to
evaluate whether or not this app was helping. What became clear was that
this was not a dose/response situation, where adding the intervention
caused a simple, measurable effect. Instead, there were two contrasting
things happening at once, confounding any single measure: cases that might
otherwise have been missed were now being spotted, increasing referrals;
but cases that had previously been referred ‘just in case’ no longer had
to be, reducing referrals.
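
A toy calculation makes the confound visible. The numbers below are
invented purely for illustration – they are not project data:

```python
# Hypothetical monthly referral figures for one CHW, before the app.
referrals_before = 10          # includes some 'just in case' referrals

# Assumed effects of the app (invented for the example):
newly_spotted = 2              # previously missed cases now referred
precautionary_avoided = 3      # 'just in case' referrals no longer made

referrals_after = referrals_before + newly_spotted - precautionary_avoided
print(referrals_before, referrals_after)  # 10 9 -- the headline count barely moves
```

The aggregate count shifts from 10 to 9 and looks like noise, even though
both underlying effects are real – which is exactly why tracing individual
referral decisions mattered more here than the headline metric.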

As a consequence, what this approach to evaluation has let us achieve is
an account of what CHWs do, and of how we can help them, that is far
better developed than anything we had at the start of the project. What we
haven’t been able to do is map the prevalence of particular issues or
decisions. Given the insights we gained during app development, and the
involvement we have had from close-to-community actors, however, we feel
this was a price worth paying.

References

Vogel, I. (2012) Review of the use of ‘Theory of Change’ in international
development. London: UK Department for International Development.
