Let’s note as well that we have some shared experiences of accountability – and that accountability is, overall, a good thing. Accountability within an organisation is critical to planning and effectiveness, and it is equally important in relationships with stakeholders, including beneficiaries, partners and the wider communities we work with.
Our formal accountability is constructed in very similar ways, too. Most delivery organisations, trusts and foundations are registered charities, with trustees who are accountable to the Charity Commission for serving charitable purpose and achieving public benefit. For foundations, this means showing that their grant-making serves charitable purpose and public benefit, even when their grantees are not registered charities.

Accountability for any organisation depends on having reasonably reliable information about, and understanding of, how resources have been used and what was achieved. That is a need we all have in common. Evaluation is one source of that information, but somehow we have come to see it as something separate from the central flow of an organisation’s work. The Evaluation Roundtable talks about ‘strategic learning’: a process that might involve formal evaluation alongside other types of evaluative information, such as management and financial information, regular user feedback, and the intelligence we all gather in the course of our work.

Strategic learning is learning that informs decisions about what to do next – about changes, minor or major, that we might need to make. In my experience, most of the people working for funders and delivery organisations, and certainly the most effective, are characterised by a curiosity that drives the quest for impact. We want to know whether we are achieving what we set out to do, how and why things are working or not, and what changes we might consider. We are missing a trick if we don’t see evaluation as part of that strategic learning, whether at the level of the whole organisation, a service or a project.

Do we all do this well? Do we have the resources we need? As a funder, I can say that our organisation does not yet have in place all the skills, systems or culture we need for strategic learning, but we are developing our capacity and becoming more of a learning organisation. We also recognise that this is a greater challenge for the organisations we fund, which are hard-pressed for resources, including staff time. However, we see many grant applications in which evaluation work is seriously under-costed, with inadequate provision for the staff time needed to manage and run evaluative processes, interpret the findings and consider the implications.

So we look forward to the conversation about how we can work together to make much better use of the effort currently going into ‘evaluation’, which is not delivering all it could for those we work with. Effective evaluation benefits everyone: grantees, those they support, and funders. So let’s talk about grantees and evaluation as well as about funders and evaluation. Let’s talk about shared ownership and differing perceptions as part of this, and about what funders and delivery organisations can do, together with evaluators, to make better use of evaluation as an integral part of our work.

In this blog, Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia’s blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.

What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse).
It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity? This is a particularly interesting question because ‘funder requirements’ are considered not only a major driver of evaluation practice, but also a major barrier to that practice ever changing. Very many of my conversations with delivery organisations about rethinking their evaluation activities end in “but what if my funders don’t like it? And we have to do different things for every one…”.

So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence? But before we ask that question, we have to ask another: what is evaluation for?

Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and is reaching the specified people and communities. This, as Tamsin Shuker from the Big Lottery Fund puts it, is more about checking “what it is” than asking “what is it?”. Sometimes this also extends to checking whether the funding is having the impact the delivery organisation said it would. Again, this tends to take the form of ‘demonstrating’ impact rather than genuine enquiry.

Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even to the most open and inclusive funding organisations – brings with it a certain high-stakes mentality: the potential to fail, with negative consequences; a burden of compliance that rarely feels like time well spent; and a sense of potentially unachievable standards.

Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with perceptions of accountability (real or otherwise), does it fatally undermine the conditions necessary for open and reflective learning?

Many if not all funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide support alongside grants, for example by matching delivery organisations with a consultant or evaluation ‘expert’. But when this is added to an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn, and to change practice as a result? They are not the same thing.

My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better. What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as to funders. It can also be about enquiry, and about learning and improvement. All of these are important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and different questions call for different approaches.
So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond the rather simplistic perspective that assumes evaluation either happens or it doesn’t, and that funder influence can simply make more of it happen. That perspective is what has led us to where we are today.

This debate should be about channelling influence, and recognising that influence in all its complexity, rather than wielding the power of funders like a blunt tool. It should also be about unpacking evaluation, its purposes and its drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, and ownership in turn shapes the entire organisational culture surrounding evaluation.

But these questions are divergent: they have no single answer, and instead call on all of us to think broadly about the issues. As a result, we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and we will be sharing more of our thoughts on what this could look like soon.