Let's talk about funders and evaluation

2018-01-09

In this blog, Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia's blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.


What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse). It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity?

This is a particularly interesting question because ‘funder requirements’ are not only considered to be a major driver of evaluation practice, but also a major barrier to such practice ever changing. Many of my conversations with delivery organisations about re-thinking their evaluation activities end with “but what if my funders don’t like it? And we’d have to do different things for every one…”. So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence?

But before we ask that question, we have to ask another. What is evaluation for?

Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and reaching the specified people and communities. As Tamsin Shuker from the Big Lottery Fund puts it, this is more about checking “what it is” than asking “what is it?”.

Sometimes this also extends to checking whether the funding is having the impact that a delivery organisation said it would. Again, this tends to be in the form of ‘demonstrating’ impact, rather than genuinely enquiring.

Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders, rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even to the most open and inclusive funding organisations – brings with it a certain high-stakes mentality: the potential to fail with negative consequences, a burden of compliance that rarely feels like time well spent, and a sense of potentially unachievable standards.

Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with accountability (whether real or perceived), does it fatally undermine the conditions necessary for open and reflective learning?

Many if not all funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide grants plus support to delivery organisations, in the form of matching them with a consultant or evaluation ‘expert’. But when this is added into an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn and change practice as a result? They are not the same thing.

My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better.

What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as funders. It can also be about enquiry, and about learning and improvement. They are all important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and different questions call for different approaches.

So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond a rather simplistic perspective that assumes evaluation either happens or it doesn’t, and that funder influence can simply make it happen, and make more of it happen. That perspective is what has led us to where we are today.

This debate should be about channelling influence, and recognising that influence in all its complexity, rather than using the power of funders like a blunt tool. It should also be about unpacking evaluation and its purposes and drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, which in turn shapes the entire organisational culture surrounding evaluation.

But these questions are divergent: they have no one answer, and instead call on all of us to think broadly about the issues. As a result, we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and we will be sharing more of our thoughts on what this could look like soon.