Conflicting Trends? Evidence-Based Policy and Systemic Change

By Dr Ben Taylor, The Springfield Centre for Business in Development

Like British sportspeople and the overuse of acronyms, evidence-based policy and systemic change are in fashion. For the last decade these trends have increasingly been the flavours of the month, but no one seems to have realised that, when put together, they taste a bit odd. There is, then, a cognitive dissonance within donor agencies; they have become the beast with two heads!

The need for development programmes to consider not just a problem but the causes of that problem, and the causes of those causes, is increasingly recognised by development agencies and NGOs. Call it systemic change, M4P, complexity theory or whatever en vogue term emerges next week: understanding poverty as the result of a number of interlinked constraints within interlinked systems or markets is increasingly recognised as the most likely way to make the changes brought about by development programmes stick. In other words, to achieve sustainability and scale in poverty alleviation, you have to address the constraints which cause that poverty within the broader system in which it was produced. This must be done in a way that results in a permanent change in the adaptive capacity of the system; not a one-off pot of cash for businesses, governments, associations or other local partners to address their own immediate needs.

In parallel, there is an increasing recognition of the value of measuring results. While there is dispute over how this should be achieved, development is no longer a sector where good money is thrown after bad because all charity is good charity. There is a demand to know what works in reducing poverty. However, the tightest measurement requires laboratory conditions, and the world is far from a sterile and controlled environment.

So if a development programme is to effect sustainable poverty alleviation for large numbers of people by embracing the complexity of markets and systems and implementing multiple complementary activities, we need new ways to monitor and measure the success of the approach. Programmes need to be accountable to their donors, and an approach needs to demonstrate its value. However, the fact that measuring results is more difficult in such programmes is no reason to abandon the approach altogether. Don’t throw the baby out with the bathwater: if current measurement techniques are inappropriate, let’s pull the plug and get some new ones!

You can find out more about the issues raised in this blog post in a paper available here: Springfield Centre

3 Responses

  1. Dear Ben,

    I really appreciate your analysis of the stretch between systemic approaches to development and evidence-based policy making! Very insightful!

    At Endeva, we are currently working on a publication on results measurement in development partnerships at (will be published by BMZ in October). Interviews with donors really made us optimistic that the rigid focus on measuring “impact” via RCTs and experimental design as well as on simply counting outputs is changing towards a focus on intermediary level indicators and on learning what works. In particular, donors mentioned the need to understand how partnerships work as an instrument, which would require qualitative indicators and questions such as “Does developing results chains jointly really contribute to the effectiveness of the partnership?” or “How do partners change their individual approach when they start working together?”. It is striking that, after at least a decade of using the partnership approach, we know so little about what the benefits of partnerships are and how to make them work. (Indeed, it is often claimed that partnerships can achieve systemic change – but we know few examples where they actually have. See also the Blog on “Strengthening Inclusive Business Ecosystems” referred above.)

    On a related note, donors also seem to change their outlook on “additionality.” While they used to check the box that the project would not have happened without them (input additionality), they are now becoming more interested in “outcome additionality.” To what degree can  donors influence companies to pay attention to development objectives? What capacities are being developed within companies as a  result of the partnership to plan for, implement, and report on development results? How do companies change their approaches due to the collaboration, e.g. by working together with NGOs? Understanding this kind of additionality is surely more interesting than knowing that a project would not have happened – a question that can never be answered for the kind of experimental approaches partnerships are taking, anyway.

    We still need much more clarity on which questions to ask and how to answer them, from a methodological point of view. But it seems that, at least in the area of partnerships, we are moving into the right direction!

  2. Hi Christina,

    Thanks for your comments, very interesting.

    The problem I’m finding recently is that some of the people involved in designing programmes and even those at the policy level, are beginning to recognise and accept the need for a different approach to evaluating systemic change. However, the different perspectives of different departments within donors means that they exhibit a collective cognitive dissonance, with the upshot being that these progressive voices are dampened by more normative voices within their organisation.

    As such, I really look forward to seeing your publication. I’d be interested to know which donors you’ve talked to and which people within those donors – not names but whether they were from the evaluation department, country offices etc. I’ve seen greater acceptance of the disconnect amongst smaller donors who, perhaps, have greater internal cohesion and a greater propensity to speak with a single voice as a result. They are more maneuverable.

    I’m working on new methodological approaches to evaluating indirect methods of development implementation, but its a long and challenging process. It’s also political in trying to convince donors of the value and robustness of alternative techniques.

    I look forward to seeing your publication and any other work you feel may be of relevance (email to btaylor [at]

    Good to engage with you,


  3. Hi Ben

    Thanks for your excellent paper – which provides a compelling and concise analysis of the reasons for the re-emergence of ‘evidence-based policy (EBP) and results agenda’ in international development; and a searing critique of the misconceived drive to use experimental and quasi-experimental methods including RCTs for measuring change in complex social systems (incl. markets).
    I really appreciated your witty digression on why applying the methods of medical science to international development is like comparing apples and existentialism.

    At Practical Action Consulting we are developing and applying practitioner-based tools for qualitative assessment of changes in the form of business relationships between, say, poor farmers and their market intermediaries or input-suppliers.  Our observations suggest that these behavioral and micro-institutional changes are key to realizing sustainable increases in income for marginalised producers.  Building on work by Marshall Bear for example we’ve developed a simple ‘relationship matrix’ tool that is adaptable by market-actors themselves.  see   It is form of results measurement which is useful to both market development practitioners and businesses in the value-chain.