Kevin Franks is the Programmes Director at Youth Focus: North East and is the lead for the Centre's North East regional network. Kevin has 25 years’ experience in statutory and voluntary sector youth and community work, which includes centre based, outreach, detached and schools work. This blog is adapted from Kevin's talk at 'Funding Change: Making impact measurement work for funders and providers of youth services', held on 21 March 2018 at the Leeds Rhinos Stadium.
The idea of impact assessment within the youth sector is not new, and both funders and youth providers have come a long way in measuring impact in recent years. However, that doesn’t mean it is always done well, correctly or meaningfully. Comments and questions we often hear range from:
And we commonly hear the term ‘tick box exercise’.
When I recently posed the question “what are the challenges around impact assessment and evaluation between funders, commissioners and youth organisations?” to a range of colleagues from across the sector in the North East, responses fell into two main areas:
We know and understand there is intense pressure on commissioners, funders and youth organisations to deliver ‘improved value for money’ and ‘better outcomes’. Interventions cost money, and we need to know whether or not that money is being spent to produce the best outcomes for young people. This raises the question of what we mean by ‘value for money’.
In a recent debate on volunteering in the House of Lords, the issue of whether the National Citizen Service represents value for the taxpayer was raised. This was in relation to senior staff salaries, the surplus of income over expenditure, and participation targets for the programme potentially being missed by as much as 40 per cent. However, we do know that young people participating in the National Citizen Service scheme can have a quality experience, and many of them report a high level of satisfaction. Focusing purely on the cost of something as the measure of best value runs the risk that there is always going to be someone, somewhere, who is prepared to do it for less. And if you take price as your main distinction, you are in a race to the bottom with regard to quality.
There is value in youth work, and I am sure we all have examples of where being involved with youth work has transformed young people’s lives for the better. The challenge is that this value is not easily captured in terms of cost-benefit ratios.
Quality is subjective, whereas quantity is not. Quality can be disputed, questioned and challenged. One cannot dispute quantity – it is easily measured.
Focusing on the price makes it easy to miss the real value – and can turn complex decisions based on ethics, culture, empathy, and understanding of society into much simpler games based on numbers and calculations.
Given these challenges, how can they be overcome? Personally, I believe there are three main areas for consideration:
1) Be Clear About Youth Work
Youth work is a distinctive field of practice that puts young people at the centre of the work, and starts from their concerns, their interests and their own starting points. Young people engage in youth work by choice.
The great strength of youth work (and youth workers) is its capacity to adapt, change and grow. However, how many funders, commissioners and indeed the general public really understand what youth work is and what it achieves? How much consensus is there in the youth work sector on the ‘purpose’ of youth work?
We need to be clear about the outcomes we claim youth work can achieve:
If we do believe that youth work is responsible, or at least contributes, to these outcomes, and others, then surely we have a responsibility to provide evidence to back our claims.
If we expect young people to invest their own time and effort participating in youth provision, it is only reasonable that we make an effort to make it worth their while. And surely part of this is investing some of our time and effort in evaluating if we are providing a quality service.
2) Shared Language
There is an obvious need to support the development, agreement and acceptance of a common language and framework to describe what the youth sector does. This will better enable commissioners and funders to understand what youth work is, and support them to invest in quality provision that will provide the best outcomes for young people and communities. A shared common language will also allow delivery organisations, including small local voluntary ones, to communicate clearly where they sit within the diversity of the wider youth sector, and ultimately enable them to better articulate their value.
There are a number of ways funders, commissioners and youth organisations can engage with each other.
Commissioners can see the value of the youth sector as a critical player in developing ‘asset-based’ approaches to providing high quality support, and by engaging youth organisations as partners in co-production of outcomes.
Evidence gathered by commissioners and funders can be better shared across the youth sector. Good evidence can be used to confirm or challenge approaches and interventions and to examine which features make them successful and worth investing in.
More can be done to build relationships between commissioners, funders and youth organisations. For instance, invite your funder to visit your organisation and see the work at first hand, and hold events that create space for open and critical dialogue between all parties.
However, these approaches will require a level of courage from all involved. They will require strong and mature relationships, both within the sector, and between the sector and commissioners. These relationships will require time and attention to develop and maintain.
The youth sector has a role in coming together to provide a strong and unified voice. This requires leadership from within the sector to manage competition between different organisations.
We know the youth sector is diverse in its interests and organisational forms and, at times, struggles to (or refuses to) speak with one voice. Yes, difference of opinion is good, and we should always be open to critiques and different perspectives. However, if we can’t agree on some fundamental issues then we will remain stuck in this static state, seen as second-class provision for young people by the state and the general public. If we want consistency from commissioners and funders, we have to offer consistency from our own sector. And surely the best way to do this is to have a strong, united youth work sector, delivering quality interventions that enable young people to succeed and thrive.
Quality youth work is a process of continuous evaluation and learning – both for young people and practitioners.
Quality youth work equals quality outcomes for young people, communities and society as a whole.
Our young people have a right to the best quality interventions - and we have a duty to provide them.
Let’s note as well that we have some shared experiences of accountability - and that accountability is, overall, a good thing. Accountability within an organisation is critical to planning and effectiveness and it’s equally important in relationships with stakeholders, including beneficiaries, partners and the wider communities we work with.
Our formal accountability is constructed very similarly as well. Most delivery organisations, trusts and foundations are registered charities, with trustees who are accountable to the Charity Commission for serving a charitable purpose and achieving public benefit. For foundations, this means showing that their grant-making serves a charitable purpose and delivers public benefit, even when their grantees are not registered charities.
Accountability for any organisation depends on having reasonably reliable information about and understanding of how resources have been used and what was achieved. And that is a need we all have in common. Evaluation is one source of that information but somehow we have come to see it as something separate from the central flow of an organisation’s work.
The Evaluation Roundtable talks about ‘strategic learning’, a process that might involve formal evaluation alongside the use of other types of evaluative information, such as management and financial information, regular user feedback and the intelligence that we all gather in the course of our work. Strategic learning is learning to inform decisions about what to do next, about changes - minor or major - that we might need to make.
In my experience, most of the people working for funders and delivery organisations, and certainly the most effective, are characterised by a curiosity that drives the quest for impact. We want to know whether we are achieving what we set out to do, how and why things are working or not, and what changes we might consider. We are missing a trick if we don’t see evaluation as part of that strategic learning, whether at the level of the whole organisation, service or project.
Do we all do this well? Do we have the resources we need? As a funder, I can say that our organisation does not yet have in place all the skills, systems or culture we need for strategic learning, but we are developing our capacity and becoming more of a learning organisation. We also recognise that this is a greater challenge for the organisations we fund, who are hard pressed for resources, including staff time.
However, we see many grant applications in which evaluation work is seriously under-costed and there is inadequate provision for staff time to manage and run evaluative processes, interpret the findings and consider the implications. So we look forward to the conversation about how we can work together to make much better use of the effort that is going into ‘evaluation’ at the moment, and which is not delivering all it could for those we work with.
Effective evaluations benefit everyone: grantees, those they support, and funders. So let’s talk about grantees and evaluation as well as about funders and evaluation. Let’s talk about shared ownership and differing perceptions as part of this, and what funders and delivery organisations can do, together with evaluators, to make better use of evaluation as an integral part of our work.
In this blog, Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia's blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.
What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse). It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity?
This is a particularly interesting question because ‘funder requirements’ are not only considered to be a major driver of evaluation practice, but also a major barrier to such practice ever changing. Very many of my conversations with delivery organisations about re-thinking their evaluation activities end in “but what if my funders don’t like it? And we have to do different things for every one….”. So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence?
But before we ask that question, we have to ask another. What is evaluation for?
Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and reaching the specified people and communities. This, as Tamsin Shuker from the Big Lottery Fund refers to it, is more about checking “what it is”, than asking “what is it?”.
Sometimes this also extends to checking whether the funding is having the impact that a delivery organisation said it would. Again, this tends to be in the form of ‘demonstrating’ impact, rather than genuinely enquiring.
Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders, rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even the most open and inclusive funding organisations – brings with it a certain high stakes mentality: the potential to fail with negative consequences, the burden of compliance that rarely feels like time well spent, and a sense of potentially unachievable standards.
Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with perceptions of accountability (whether real or otherwise), does it fatally undermine the conditions necessary for open and reflective learning?
Many, if not all, funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide grants plus support to delivery organisations in the form of matching them with a consultant or evaluation ‘expert’. But, adding this to an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn and change practice as a result? They are not the same thing.
My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better.
What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as funders. It can also be about enquiry, and about learning and improvement. They are all important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and there are different approaches to answering different questions.
So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond a rather simplistic perspective that assumes evaluation either happens or it doesn’t, and that funder influence can simply make more of it happen. This is what has led us to where we are today.
This debate should be about channelling influence, and recognising that influence in all its complexity, rather than using the power of funders like a blunt tool. It should also be about unpacking evaluation and its purposes and drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, which in turn shapes the entire organisational culture surrounding evaluation.
But these questions are divergent: they have no one answer, and instead call on all of us to think broadly about the issues. As a result we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and will be sharing more of our thoughts on what this could look like soon.