This blog was written by Thomas Lawson, Chief Executive of Leap Confronting Conflict and the Chair of the Centre’s new board.
Last September, I was invited to give the closing address at the Centre for Youth Impact’s annual conference. My central message was that the youth sector will not achieve lasting change for young people unless its staff, volunteers and organisations collaborate within and beyond its boundaries: we know that the UK’s systems do not work for an increasing number of young people who, despite their talent and potential, face challenges that mean it is much harder for them to thrive. The reason our organisations exist is to achieve change for them – and that means changing the systems. Otherwise we are just tackling the symptoms. But we can’t do it alone. If we want to achieve significant and lasting impact, we must work together.
That, I believe, is the vision of the Centre for Youth Impact, and why I am delighted to be its new Chair. The Centre stands for a collective, collaborative approach that focuses on understanding and strengthening the impact of our work with and for young people.
With so many warning signs of an increasingly fractured society, understanding how we can achieve ever more impact with and for young people, so that they can thrive today and tomorrow as our next generation of wonderful leaders, parents, entrepreneurs and community workers, has never been more important. Without understanding how to measure the impact of our work, it’s impossible to understand how to improve it and achieve more for young people.
Collaboration can dramatically amplify impact, but it takes leadership – at all levels. I know this from my role as Chief Executive at Leap Confronting Conflict. Leap’s purpose is to give young people the skills to manage the conflict in their lives, reduce violence in our communities and help lead our society. We have a deep belief in the talent and potential of young people. The young people for whom we work are those for whom conflict is most likely to turn into destructive behaviour. They are over-represented in the worlds of care, criminal justice and alternative education, or may be on the edge of gangs. It’s largely one cohort of young people who bounce between those worlds.
Given the complexity of the world we work in, it’s absurd to think that, alone, we can achieve meaningful change for the young people we work with or significant change to the systems that cause these problems. We have to collaborate in well-designed, highly effective partnerships to do anything meaningful.
So, what about the role of leadership in collaboration?
From a leadership point of view, there’s the success of the organisation and the applause that goes with it. Some of my ambition is related to growth in turnover and reputation, but if I’m honest with myself, that’s just vanity. Once I think about it more deeply, I realise that what I really care about is growth in impact. And as the new Chair of the Centre, I’m well aware that we will never measure our success in terms of growth or turnover – it is no coincidence that the Centre delayed its move to independence for more than three years.
What I have learnt is that when we promote the high-quality work of our partners and partnerships to our funders, we strengthen both our organisations and our impact, much more so than when we are protective. We spend less time competing and more time working out how to succeed together. This is a different type of leadership, that calls on different skills, motivations and conversations. But I believe it’s the form of leadership that we need to embody and encourage as we look to the future.
It is critical to create a culture that recognises that leadership can come from anywhere in an organisation, or indeed a network. Participants can feel excluded if they are not part and parcel of the design, delivery and evaluation of the work – not as volunteers, but as paid personnel. Their expertise derived from their personal experiences is as valuable as any professional expertise I’ve ever come across, and gives us the insights we need for the best design. Many of those experts can be found in community-based organisations. We have to make sure that we partner on the ground with communities, as equals.
The speed of change in the worlds in which we work is also reshaping the nature of leadership. In the seven years I’ve been at Leap, public sector organisations have seen extraordinary levels of cuts. Local councils have seen 40-60% cuts to their annual budgets. We’ve seen youth services eviscerated. Local authorities and statutory bodies have found their roles shifting and stretching, often painfully, but of course there remain, in the face of these challenges, statutory partners who are incredibly ambitious and creative for the young people for whom they work.
In summary, these are the principles I think we need:
If we can do this, the prize will be knowing, when you look back at your career, that you contributed to a change for young people whose benefits you can still see; that you made friendships with a great diversity of people and grew your understanding; and that it was hard work, and you failed, got back up and succeeded.
This is my ambition for the Centre for Youth Impact, and I believe that its networks, funders, partners and my fellow trustees share this ambition. We have listened and we have learned. We’ve done some things wrong and some things right. Above all, we bring energy, openness and a commitment to leadership in collaboration, and I am excited about the year ahead.
Kevin Franks is the Programmes Director at Youth Focus: North East and is the lead for the Centre's North East regional network. Kevin has 25 years’ experience in statutory and voluntary sector youth and community work, which includes centre based, outreach, detached and schools work. This blog is adapted from Kevin's talk at 'Funding Change: Making impact measurement work for funders and providers of youth services', held on 21 March 2018 at the Leeds Rhinos Stadium.
The idea of impact assessment within the youth sector is not new, and both funders and youth providers have come a long way in measuring impact over recent years. However, that doesn’t mean it is always done well, correctly or meaningfully. The comments and questions we often hear range from:
And we commonly hear the term ‘tick box exercise’.
When I recently posed the question “what are the challenges around impact assessment and evaluation between funders, commissioners and youth organisations?” to a range of colleagues from across the sector in the North East, responses fell into two main areas:
We know and understand there is an intense pressure on commissioners, funders and youth organisations to deliver ‘improved value for money’ and ‘better outcomes’. Interventions cost money and we need to know whether or not that money is being spent to produce the best outcomes for young people. This raises the question of what we mean by ‘value for money’.
In a recent debate on volunteering in the House of Lords, the issue of whether the National Citizen Service represents value for the taxpayer was raised. This was in relation to senior staff salaries, the surplus of income over expenditure, and participation targets for the programme potentially being missed by as much as 40 per cent. However, we do know that young people participating in the National Citizen Service scheme can have a quality experience, and many of them report a high level of satisfaction. Focusing purely on the cost of something as the measure of best value runs the risk that there is always going to be someone, somewhere, who’s prepared to do it for less. And if you take price as your main distinction, you’re in a race to the bottom on quality.
There is value in youth work, and I am sure we all have examples of where being involved with youth work has transformed young people’s lives for the better. The challenge, however, is that this value is not easily captured in terms of cost-benefit ratios.
Quality is subjective, whereas quantity is not. Quality can be disputed, questioned and challenged. One cannot dispute quantity – it is easily measured.
Focusing on the price makes it easy to miss the real value – and can turn complex decisions based on ethics, culture, empathy, and understanding of society into much simpler games based on numbers and calculations.
Given these challenges, how can they be overcome? Personally, I believe there are three main areas for consideration:
1) Be Clear About Youth Work
Youth work is a distinctive field of practice that puts young people at the centre of the work, and starts from their concerns, their interests and their own starting points. Young people engage in youth work by choice.
The great strength of youth work (and youth workers) is its capacity to adapt, change and grow. However, how many funders, commissioners and indeed the general public really understand what youth work is and what it achieves? How much consensus is there in the youth work sector on the ‘purpose’ of youth work?
We need to be clear about the outcomes we claim youth work can achieve:
If we do believe that youth work is responsible for, or at least contributes to, these outcomes and others, then surely we have a responsibility to provide evidence to back our claims.
If we expect young people to invest their own time and effort participating in youth provision, it is only reasonable that we make an effort to make it worth their while. And surely part of this is investing some of our time and effort in evaluating if we are providing a quality service.
2) Shared Language
There is an obvious need to support the development, agreement and acceptance of a common language and framework to describe what the youth sector does. This will better enable commissioners and funders to understand what youth work is and support them to invest in quality provision that will provide the best outcomes for young people and communities. A shared common language will also allow delivery organisations, including small local voluntary ones, to clearly communicate where they sit within the diversity of the wider youth sector, and ultimately enable them to better articulate their value.
There are a number of ways funders, commissioners and youth organisations can engage with each other.
Commissioners can see the value of the youth sector as a critical player in developing ‘asset-based’ approaches to providing high quality support, and by engaging youth organisations as partners in co-production of outcomes.
Evidence gathered by commissioners and funders can be better shared across the youth sector. Good evidence can be used to confirm or challenge approaches and interventions and to examine which features make them successful and worth investing in.
More can be done to build relationships between commissioners, funders and youth organisations. For instance, invite your funder to come and visit your organisation and see for themselves the work at first hand and hold events that create space for open and critical dialogue between all parties.
However, these approaches will require a level of courage from all involved. They will require strong and mature relationships, both within the sector, and between the sector and commissioners. These relationships will require time and attention to develop and maintain.
3) A Strong and Unified Voice
The youth sector has a role in coming together to provide a strong and unified voice. This requires leadership from within the sector to manage competition between different organisations.
We know the youth sector is diverse in its interests and organisational forms and, at times, struggles to (or refuses to) speak with one voice. Yes, difference of opinion is good, and we should always be open to critiques and different perspectives. However, if we can’t agree on some fundamental issues then we will be forever doomed to remain in this static state, seen as second-class provision for young people by the state and general public. If we want consistency from commissioners and funders, we have to show consistency in our own sector. And surely the best way to do this is to have a strong, united youth work sector, delivering quality interventions that enable young people to succeed and thrive.
Quality youth work is a process of continuous evaluation and learning – both for young people and practitioners.
Quality youth work equals quality outcomes for young people, communities and society as a whole.
Our young people have a right to the best quality interventions - and we have a duty to provide them.
Youth work beyond the measurement imperative? Reflections on the Youth Investment Fund Learning Project from a critical friend
In this blog Tania de St Croix, Lecturer in the Sociology of Youth and Childhood at King's College London, offers her thoughts on the Youth Investment Fund Learning Project, which the Centre is leading with NPC and others. You can find out more information on the YIF Learning Project at https://yiflearning.org.
Many involved in the youth work field are critical of the youth impact agenda, particularly its emphasis on the quantitative measurement of outcomes for individuals, and its neglect of process, group work, and structural inequalities. Those of us involved in ‘In Defence of Youth Work’ have argued that the contemporary emphasis on impact and outcomes cannot be separated from its context, the neoliberal ‘desire to financialise human existence’, and its consequences for which practices are valued and who gets to decide. We have claimed that open access youth work is particularly unsuited to outcomes-based management, and that open youth work’s future existence is undermined by an emphasis on impact measurement.
While those of us making a political critique of impact measurement (within and beyond young people’s services) face an uphill struggle against dominant understandings of ‘what works’ and ‘what counts’, there has been a growing recognition of the specific challenges in evaluating open access youth work. In this context, it has been interesting to follow the development of the Youth Investment Fund (YIF), a £40 million government (DCMS) and Big Lottery Fund investment in open access youth work. While we might start by noting that £40 million over 3 years is dwarfed by a decade of youth work cuts, the YIF is nevertheless significant: it suggests that someone, somewhere in policy recognises the potential value of open youth work. The YIF is also significant in relation to impact debates, as it included “an explicit objective to strengthen the evidence base on the impact of non-formal learning opportunities for young people”.
A change of emphasis?
This objective to ‘strengthen the evidence base’ of open access youth work is carried out by the YIF Learning Project, led by NPC and the Centre for Youth Impact. Its tone and approach are encouraging, and some of the significant concerns of the youth work field have been taken on board. This is demonstrated both by a language of learning and openness, and an emphasis on collaboration with young people, practitioners and youth work organisations. The principles of the YIF Learning Project laudably include:
The YIF evaluation approach is more closely aligned to youth work’s approaches and methodologies than it might have been, and this is great to see. And yet, there is still a sense that it attempts to ‘measure the unmeasurable’. As I write this, I imagine the weary sighs of colleagues in the youth impact world; however much they take on board youth workers’ views, it is never enough to stop us complaining! None of what follows is intended to criticise for criticism’s sake, or to take away from the respect with which the Centre for Youth Impact (in particular) has treated those of us who are critical of the very tenets of the youth impact agenda they were set up to promote. The following are five dilemmas that are important to address:
1) Despite moving away from ‘blanket outcomes measurement’, quantitative outcomes measurement continues to play a central role. Given the tendency that ‘what gets counted’ is too often the only thing that ‘counts’, how can the project guard against the preference for more structured, time-limited, ‘project-based’ youth work (that is easier to ‘measure’) over informal, open-ended, open access practice? How can the group processes that are central to youth work be recognised, when it is individual change that tends to be measured?
2) What are the dangers of standardising and quantifying ‘youth work quality’ and ‘young people’s views’, of inventing new tools (or importing them from other countries), and of engaging private sector consultancies and agencies to do this work?
3) It is inevitable that evaluation – especially on behalf of a funding agency – will affect practice, including in unintended ways. How much of a ‘data burden’ will be created for organisations? Will they really feel free to share their experiences, reservations, and honest reflections?
4) Can evaluation be separated from top-down performance management, judgement, comparison and control? Measurement changes how practitioners are perceived, and how they perceive themselves in relation to their work. How can data be used for collective learning without it also being used as evidence of ‘success’ or ‘failure’ by individual practitioners and organisations, and even by the field of open access youth work as a whole?
5) How can ‘footfall’ and other data be collected without unacceptable levels of surveillance, and breaches of confidentiality about young people’s whereabouts and their activities? How can the most marginalised young people, many of whom are (rightly) suspicious of authorities and institutions, be assured that their privacy is respected?
So what? And what next?
The current approach to evaluating the Youth Investment Fund demonstrates thoughtfulness and attention to the special characteristics and challenges of open access youth work. As a result, the experiences of young people and youth workers funded by this scheme will be more meaningful and less onerous than they would have been under a more prescriptive top-down approach. The YIF Learning Project goes some way towards challenging dominant approaches to impact measurement. Yet in other ways it is reinforcing the status quo: continuing to prioritise the measurement of individual change, converting qualitative elements of youth work (its quality and young people’s experiences) into statistics, and aiming towards a financialised ‘value for money’ analysis.
Ultimately, without questioning the broader context – the basis on which measurement is still preferred by most funders and governments, as a neoliberal tool of governance and control – many of these problems remain intractable. Moving beyond such dilemmas, then, is not merely a matter of creating more congruent impact tools, reducing the data burden, and involving young people and practitioners in the process (important though all of these things are). It requires imagining meaningful evaluation beyond a focus on outcomes and measurement, thinking seriously about the social and political purpose of youth work, and the role of young people in creating change. It involves working with others – beyond the youth sector and beyond our national and regional borders – to challenge the global dominance of finance and investment logic in activities that hold to a different version of ‘value’. While such aspirations may seem momentous, there is nothing to stop us dreaming of a different world, and doing what we can to make it real in our everyday lives.
Tania de St Croix has been a youth worker for over 20 years, and is an active part of ‘In Defence of Youth Work’ (IDYW); she thanks IDYW colleagues for helpful feedback on this blog post. She is a lecturer at King’s College London. Her book, ‘Grassroots Youth Work: Policy, Passion and Resistance in Practice’, was published in 2016, and her forthcoming research project is entitled ‘Rethinking impact, evaluation and accountability in youth work’.
This blog was written by Brahmpreet Gulati, a Member of the Youth Parliament and a youth councillor for Thurnby Lodge, who attended Raven Youth Centre in Leicester. Brahmpreet was part of the ‘How Will You Hear Me?’ project, where young people shared their personal stories and talked about their experiences of being listened to – or not – by different public bodies across a series of short films.
It is a common view in our society that youth centres are old-fashioned buildings with some snooker tables and sofas lying around. This is the perspective that most adults and most politicians take; however, it is not the view of the young people who actually use these spaces. For them it’s a place where bantering and opening up about their deepest fears is acceptable, a place where meeting your friends is not seen as a threat by the outside world, and, most importantly, a place where two generations meet and can have a conversation without awkwardness.
Leicester City is one of the local authorities that has made drastic cuts to the youth services it provides. Many young people were engaged in the City’s consultation process for these cuts, for example through the Young People's Council attending scrutiny meetings and meeting with the Assistant Mayor, to ensure that young people’s voices were heard and acted upon. These examples show that there are effective mechanisms available for local authorities to work in partnership with young people; however, this does not always take place.
Many local authorities across the country are beginning to take control of this space, and the voices of the young people affected are ignored as budgets are cut with little care for the impact on their futures. Massive changes in youth workers and provision mean that the environment in youth centres that’s crucial for young people changes. Distress replaces cohesiveness, as changes in timing and staffing replace the warm comfort of the centre. Young people already face lots of ongoing battles, whether that’s looking after an ill parent or family member, or being vulnerable to online trolls. When making these cuts, the authorities fail to recognise that they’re creating an additional battle for these young people: they now have to fight for a youth club, a designated space, a place to escape! This goes against the common message to young people that it’s “their” youth club.
Closing one youth centre may not seem like much of an immediate loss; however, individual youth clubs are part of a larger whole, and when all the losses are added up, society becomes increasingly fragmented. Anyone who thinks that this gap can simply be filled by schools is missing that the relationship between a teacher and a young person could never compare or compete with that between a youth worker and a young person. The classroom setting for many young people is a setting of listening and learning in order to pass exams, rather than a platform to let loose and explore other areas of life.
With the positive effects of good youth services often only clear in the future, youth clubs have become too easy a place to cut. Unfortunately, once the space is lost it’s lost. I can’t help but think that there will be a time when the next generation of decision makers question the burden on the remaining services, which will have to face the increased pressures created by the loss of youth clubs, and realise that the cut youth services were a crucial missing piece in the puzzle.
Let’s note as well that we have some shared experiences of accountability - and that accountability is, overall, a good thing. Accountability within an organisation is critical to planning and effectiveness and it’s equally important in relationships with stakeholders, including beneficiaries, partners and the wider communities we work with.
Our formal accountability is very similarly constructed as well. Most delivery organisations, trusts and foundations are registered charities, with trustees who are accountable to the Charity Commission for serving charitable purpose and achieving public benefit. For foundations, this means showing that their grant-making is serving charitable purpose and public benefit, even when their grantees are not registered charities.
Accountability for any organisation depends on having reasonably reliable information about and understanding of how resources have been used and what was achieved. And that is a need we all have in common. Evaluation is one source of that information but somehow we have come to see it as something separate from the central flow of an organisation’s work.
The Evaluation Roundtable talks about ‘strategic learning’, a process that might involve formal evaluation alongside the use of other types of evaluative information, such as management and financial information, regular user feedback and the intelligence that we all gather in the course of our work. Strategic learning is learning to inform decisions about what to do next, about changes - minor or major - that we might need to make.
In my experience, most of the people working for funders and delivery organisations, and certainly the most effective, are characterised by a curiosity that drives the quest for impact. We want to know whether we are achieving what we set out to do, how and why things are working or not, and what changes we might consider. We are missing a trick if we don’t see evaluation as part of that strategic learning, whether at the level of the whole organisation, service or project.
Do we all do this well? Do we have the resources we need? As a funder, I can say that our organisation does not yet have in place all the skills, systems or culture we need for strategic learning, but we are developing our capacity and becoming more of a learning organisation. We also recognise that this is a greater challenge for the organisations we fund, who are hard pressed for resources, including staff time.
However, we see many grant applications in which evaluation work is seriously under-costed and there is inadequate provision for staff time to manage and run evaluative processes, interpret the findings and consider the implications. So we look forward to the conversation about how we can work together to make much better use of the effort that is going into ‘evaluation’ at the moment, and which is not delivering all it could for those we work with.
Effective evaluations benefit everyone. Grantees, those they support and funders. So let’s talk about grantees and evaluation as well as about funders and evaluation. Let’s talk about shared ownership and differing perceptions as part of this, and what funders and delivery organisations can do, together with evaluators, to make better use of evaluation as an integral part of our work.
In this blog Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia’s blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.
What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse). It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity?
This is a particularly interesting question because ‘funder requirements’ are not only considered to be a major driver of evaluation practice, but also a major barrier to such practice ever changing. Many of my conversations with delivery organisations about re-thinking their evaluation activities end in “but what if my funders don’t like it? And we have to do different things for every one…”. So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence?
But before we ask that question, we have to ask another. What is evaluation for?
Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and reaching the specified people and communities. This, as Tamsin Shuker from the Big Lottery Fund refers to it, is more about checking “what it is”, than asking “what is it?”.
Sometimes this also extends to checking whether the funding is having the impact that a delivery organisation said it would. Again, this tends to be in the form of ‘demonstrating’ impact, rather than genuinely enquiring.
Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders, rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even the most open and inclusive funding organisations – brings with it a certain high stakes mentality: the potential to fail with negative consequences, the burden of compliance that rarely feels like time well spent, and a sense of potentially unachievable standards.
Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with perceptions of accountability (whether real or otherwise), does it fatally undermine the conditions necessary for open and reflective learning?
Many if not all funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide grants plus support to delivery organisations in the form of matching them with a consultant or evaluation ‘expert’. But, adding this into an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn and change practice as a result? They are not the same thing.
My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better.
What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as funders. It can also be about enquiry, and about learning and improvement. They are all important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and there are different approaches to answering different questions.
So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond a rather simplistic perspective that assumes evaluation either happens or it doesn’t, and that funder influence can simply make more of it happen. That perspective is what has led to where we are today.
This debate should be about channelling influence, and recognising that influence in all its complexity, rather than using the power of funders like a blunt tool. It should also be about unpacking evaluation and its purposes and drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, which in turn shapes the entire organisational culture surrounding evaluation.
But these questions are divergent: they have no one answer, and instead call on all of us to think broadly about the issues. As a result we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and will be sharing more of our thoughts on what this could look like soon.
This blog was written by Pippa Knott, Head of Networks at the Centre for Youth Impact
Working with the Talent Match partnerships has given me an amazing opportunity to reflect on the power of relationships in the context of an employability programme. We’re now focusing on what the Centre’s role might be in strengthening and promoting the relationships that sit at the heart of youth work and other provision for young people.
It’s an issue that has woven through much of our previous work: Robin Bannerjee gave a very well-received presentation at our first event of 2017, discussing approaches to measuring personal development in the context of relationships, and the team at Dartington Social Research Unit (now Policy Lab) wrote for us on the place of relationships in social provision.
Through Talent Match, I’m reminded again of how supportive relationships at their best are hugely powerful – sometimes transformative – and often at the heart of programmes that are ‘working well’. They’re also as complicated as the combination of the individuals who make them up. Many people are happy to ‘feel their way’ through relationships, drawing on past experience and what seems right in the moment. This applies as much to relationships within services and other provision as to relationships elsewhere. So devising a framework capturing how to ‘do them well’ is difficult: it can quickly feel like an academic, even unhelpful, exercise, unlikely to be valued and used by practitioners. Watch this space for how I’m working with the partnerships to try to progress some of these issues! Full findings from the project will be launched in March.
The way in which relationships recur in our work at the Centre also suggests it might not make sense to think of them as a ‘topic’. Instead, they could be a crucial piece in the puzzle of how we can support others to flourish, while also reflecting on ourselves and what we’re bringing to any situation.
Thinking only about the relationship between adult professional or volunteer and young person might also be limiting. I’m also thinking about how we can make the best of the relationships on which the Centre depends: within the central team, between us and our networks, and across the web of organisations, connections and friendships within which we work. Relationships are a key mechanism for development and support in the overwhelmingly complex systems we live within, and something that any individual can learn about and use to effect change in their own lives, and the lives of others.
What might it look like if we invested in relationships for social good, rather than in organisations or programmes? Are our current methods and frameworks for measuring impact in work with young people sufficient to take account of the nuances, complexities and potential impact of positive relationships? Do they tell us enough about how relationships can be improved, and if not, how can we develop them? We’ll be learning from youth work principles and practice [for example, Relationship, Learning and Education; Benefits of Youth Work; Grassroots Youth Work] and the work of the Search Institute, the R-Word, and Lankelly Chase as we develop our approach in this area.
This blog was written by Venetia Boon, Children and Young People Grants Manager at Comic Relief
Being a funder of youth organisations is a great position to hold – the breadth and depth of the work taking place is astonishing, and the amount of expertise hard to fathom. It would be impossible to get a detailed understanding of the knowledge running through and around those organisations. But to do my job well, I need to strive for three things:
It’s also part of my role to feed into the collective organisational knowledge of Comic Relief. My feedback on the impact of the project should fit into the jigsaw of the charity sector activities and Comic Relief’s approach to funds in the UK and internationally.
The keywords for our approach to learning and evaluation are: appropriate, proportionate, grantee-led, and realistic. Learning should primarily be for and by the people receiving funding, so they can focus on what is relevant in the context they work in. They need to own the learning and feel able to use it effectively. While we want to drive good monitoring, evaluation and learning practice, and learn from our grantees, we don’t want to dictate the specifics to them. We also don’t want to make people feel duty-bound to create processes they can’t keep up with or that don’t serve their purpose.
As a grant manager, I’m also interested in having honest conversations with the people receiving our funding. I want to know about the unexpected outcomes, what went wrong, and how the life experiences of beneficiaries and staff were integrated into the learning. The things that didn’t work quite as expected, that were adapted and adjusted, are just as interesting and valuable as the things that worked perfectly. More so, even.
Coming away from the Centre for Youth Impact Gathering, my attention was caught by a couple of things. Firstly, I was really struck by the discussions about whether it might be possible to measure the quality of work and then make links to outcomes of change. This would be in place of desperately trying to measure change, which we all know can be nebulous and take a long time to show up. This felt like a positive story, and something I’d be keen to hear more on. I’m sure there’s no magic answer, so perhaps that approach just shifts the onus on data to a different area? But I definitely want to know more.
I also thought about the speaker who mentioned the changes they had made to a project after discussions with their funder. They realised there was a potentially more effective approach than originally planned, but it needed them to shift targets and outputs. We as funders need to communicate how open we are to hearing those messages and that we understand all the experience in the world doesn’t mean you’ll get it right every single time. If we truly believe in putting people at the centre of our collective work, we have to understand the repercussions of one-size-doesn’t-fit-very-many.
The last point was how exciting it was to have a group of funders discussing what we think about evaluation and monitoring with other people in the sector. In general, we had very similar thinking and have a collective responsibility to promote that thinking. This is in addition to talking with grantees and other funders about such matters. We need to push for monitoring that works for everyone concerned, putting the experience of people right at the heart. Finally - we all need to understand and accept that people have different needs, can be changeable, and won’t all want the same thing.
“Cracking the impact nut” or How the Youth Investment Fund learning and impact strand responds to the challenges of evaluation in open access provision
This blog was written by Matthew Hill, Head of Research and Learning at the Centre for Youth Impact.
Our YIF approach to data collection
NPC and the Centre for Youth Impact are leading the learning and impact strand of the £40 million Youth Investment Fund (YIF), a joint programme supported by government funding from DCMS and National Lottery funding from Big Lottery Fund. Eighty-six youth providers are being supported for three years (2017-2020) to develop and expand their open access youth provision, and we are currently working with them to design an evaluation approach that captures the value of their work and supports their learning and improvement. Our overarching approach is an attempt to crack some of the perpetual practical and methodological nuts (my favourite bar snack) in measuring the impact of open access provision. This blog outlines five ways in which we are confronting these challenges within our work.
Moving away from blanket outcomes measurement
The past decade or so has seen a concerted push to be more outcomes-focused. We continue to support an outcomes-focused approach to service design and delivery (that is why we all do what we do, after all), but our YIF work represents a shift away from blanket outcome measurement (i.e. trying to capture every outcome for every young person in every organisation). This shift is a direct response to many of the perpetual challenges of outcome measurement in open access settings, including fleeting or irregular engagement, defining generalised outcomes for individualised provision, developing robust metrics for broad personal change, and measuring long-term impacts. Far from abandoning outcome measurement, we are focusing on high-quality targeted measurement with a sub-sample of the YIF cohort, which will ultimately provide us with more robust and meaningful data.
As well as this targeted approach to outcomes, our YIF work places increased emphasis on the experience of young people and the quality of the provision they receive. Crucially, we are aiming to link the data on outputs, user feedback and quality to the targeted outcome data so we can understand not only whether the provision is having an impact on young people – but why.
Focusing on the user experience
Another challenge is that young people often feel overburdened with rather obscure and (to them at least) meaningless surveys. In response, our YIF approach focuses on the elements of delivery that are most relevant and meaningful to young people – namely their experience of services (e.g. feelings of safety, respect and positive challenge). We are working with Keystone Accountability to develop a set of standardised feedback questions around this experience. Instead of large annual surveys, this feedback process uses regular, light-touch feedback – perhaps 3-5 questions once a month. This ensures that user feedback is embedded in ongoing reflective practice and, crucially, means that organisations can respond more immediately to the findings. Critically, we are also working with providers to process and act on this feedback, and to tell young people what has changed as a result.
Improving as well as proving
Another nut that needs cracking is practitioners’ sense of dislocation between a lot of impact measurement and their everyday work. Part of our commitment to ‘going with the grain’ of provision is a focus on the quality of youth work practice. This data absolutely has to be linked to outcomes data – as ultimately this dictates what is and what isn’t quality provision – but by emphasising considerations of quality we are focusing on those elements of provision that are most relevant and meaningful to youth workers themselves. Our YIF work is drawing on an established quality improvement framework from the US – the Youth Program Quality Assessment – which relies on peer observation with youth workers identifying ‘markers’ of quality in the delivery of their colleagues. This framework is not a critique of existing quality assessment frameworks but is, in fact, a complement to them – ensuring quality is also monitored and increased as part of ongoing practice improvement rather than just assessed against an existing standard.
Understanding young people’s journey through services
Although most providers collect detailed attendance data, many tell us that they use this for monitoring overall service demand rather than for truly understanding the way that individuals engage with their services. By utilising existing data and trialling new digital methods such as Yoti, we aim to build a much more nuanced picture of what young people do with their feet – how often they attend, for how long, and how they move through provision – as a proxy for their levels of engagement and ‘exposure’ to interactions.
Arguably the greatest opportunity presented by the YIF is the potential to collect shared data across 86 grantees for three years. This offers a rare (probably unique) opportunity to build an evidence base across a huge diversity of open access provision (detached/ building-based; structured/ unstructured; universal/ targeted) and, by comparing the results across different types of provision, we will be able to really understand the strengths and weaknesses of different services.
We recognise the many challenges that open access providers face and believe that the dominant paradigm of measurement is not fit for such settings. Our YIF work has the potential to overcome some of these challenges, and to develop approaches that are applicable across the wider sector. It is certainly ambitious and we are trying new things out – some of which will work, but some of which will no doubt fail. We will confront this uncertainty with a pioneering spirit and the humility to admit when things don’t work. As well as with grantees, we are committed to working with the wider sector, and we would greatly value your input in testing, refining and reflecting upon the tools and evidence that emerge. Our dedicated YIF learning and impact website will be live soon… so please stay tuned… or get in touch with Matthew.Hill@youthimpact.uk or Anoushka.Kenley@thinkNPC.org if you want to find out more in the meantime.
Inspiral Targets: the measurement of everything and the value of nothing
Dan Gregory, of Common Capital, makes the case for ‘uncertainty, complexity and modesty’, as he reflects on his recent keynote presentation at The Centre for Youth Impact Gathering 2017.
Dan has over 10 years’ experience of funding and financing voluntary and social enterprises, through developing policy at the highest level and delivering in practice at the grassroots. He has worked for the Treasury and the Cabinet Office where he led the development of government policy on third sector access to finance, social investment and the role of the sector in service delivery. Dan spends some of his time at Social Enterprise UK and also works independently under the banner of Common Capital.
I’m not much of a natural public speaker. Certainly not an exciting or inspiring one. So when I am invited to speak at a conference or event, I try to be at least informative. To try to satisfy the audience with substance if not style. So lots of facts, evidence, insight or expertise from a field I have worked in for a decade and more. Talking from the solid ground of territory I know well and feel comfortable upon.
Sometimes this goes down well. But other times I sense the audience feels a bit bombarded with wonk grenades. They don’t seem to really warm to me, they feel my facts and evidence are a bit relentless and dry. They don’t ask questions afterwards because, frankly, they’ve had enough already, thanks.
Recently I spoke at The Centre for Youth Impact Gathering 2017: Shaping the future of impact measurement. The event was for practitioners, researchers and funders with an interest in learning, evidence and evaluation in work with young people.
It seemed to go very well. A few people said afterwards that they really valued my presentation. In an unprecedented turn of events, a few even told me they enjoyed it. This was confusing for me because, to be honest, I wasn’t really sure what I was talking about and certainly wasn’t on solid ground. Frankly, I don’t really know much about social impact measurement. I’m not an economist or accountant. I’m not a social impact guru or measurement Maharishi. I haven’t got a social impact measuring stick. So why did it seem to go so well, at least compared to normal?
In short, my presentation was about how social impact measurement is largely a load of rubbish. Although not entirely. I started off admitting I was a bit of a fraud and didn’t have any technical expertise as such. But I do have a broad perspective, having worked in and around this area for 15 years now, in government and outside, and from watching the rise of new fields of social impact measurement, social impact bonds and so on.
So why would someone argue that social impact measurement is a load of nonsense?
Perhaps above all, because the world is just too complex to reduce to the logic of input > output > outcome > impact. Such linearity is a joke. Perhaps this works in the laboratory - in very controlled and restricted conditions. But out in the field, people are more complex than bacteria, social programmes are not vaccines, homeless people are not a disease. As social innovation guru Geoff Mulgan has pointed out, evidence can be as unreliable and contingent as humans are irrational and unpredictable. Mulgan has described how, “Unlike molecules, which follow the rules of physics rather obediently, human beings have minds of their own, and are subject to many social, psychological, and environmental forces…. Very few domains allow precise predictions about what causes will lead to what effects.” Sadly, later in the same article, he suggests his own, new, social impact measurement methodology as the answer to this problem, immediately undoing all his good work in rising above the fray, and bringing himself down to the level of the more common-or-garden impact-measuring gun for hire.
Second, and related to complexity, is the realisation that the impact of any action is always to some degree context specific - to a particular family, community or individual, for instance. Until the day arrives when we are able to construct multiple realities, it remains impossible to ever really know what would have happened otherwise. Randomised Control Trials, for instance, might tell us what happened elsewhere but not what would have happened here. And even RCTs are somewhat problematic. Beyond the issue of their considerable expense, even the World Health Organisation is losing faith in RCTs - “As the complexity of interventions or contexts increases, randomization alone will rarely suffice to identify true causal mechanisms.”
Third, as Zhou Enlai once pointed out, when it comes to impact, it’s just too early to say. When is impact? Perhaps my biggest professional fear is that charities and social enterprise have been doing such a good job for the last few decades in mitigating the worst excesses of capitalism, mopping up the problems and making just enough of a difference to hold together our society that would otherwise rip apart at the seams, that we have kept our prevailing economic system in place just long enough to allow it to do irreparable environmental damage to our planet, putting at risk all life on earth! Add up all those SROI ratios from among our sector and how’s that for impact?
Fourth, and more practically, half the social impact metrics we use are total nonsense. ‘Lives touched’, for instance, is one of the most common and, sadly, not even the most ludicrous of the currencies we circulate. How do Stalin and the Chuckle Brothers measure up against that yardstick? Even if we did somehow develop less blunt and limited methodologies, they would inevitably be discredited by some other new social impact salesperson within weeks, hawking a supposedly new and improved model. Some of the metrics out there are frankly a total sham. I have listened to panellists at events describe how they have methodologies endorsed by The Pope and Will.I.Am which can calculate your social impact in under 7 seconds. Yet no-one calls these charlatans out.
Fifth, social impact measurement can bring accompanying dangers. Maybe metrics can tell us what happened in the past, but that doesn’t mean they can tell us what to do in future. The world changes. What works changes. Not acknowledging this may bring dangerous consequences. Measurement can be dangerous if it is used to influence behaviour, often creating perverse incentives. Italian firemen paid by results start lighting fires to put out. Big outsourcing companies are rewarded for dead people not reoffending. NHS patients are unable to book appointments with their GP more than two weeks in advance. In fact, as researchers have concluded, “Target based performance management always creates gaming”.
Finally, social impact measurement can be expensive, bringing negative or little benefit while diverting resources away from other work. Metrics can also be demotivating for staff and volunteers at charities and social enterprises, undermining their public service ethos, crowding out creativity, freedom, intuition, trust and the human touch.
So much for social impact measurement then. What a waste of everyone’s time?
Well largely, yes. But not entirely. I concluded my presentation by suggesting a different tack, and one which seemed to go down quite well. Much of my frustration with this field is that social impact measurement always seems so sure of itself. “You have to get better at measuring your impact,” they say. “Of course it’s possible,” they proclaim. That can be really annoying. So with that in mind, I also admitted where I might, in fact, be wrong.
First, if clinical tests and trials and RCTs have brought us so far forward in medicine and the health profession then perhaps it’s only a matter of time before we develop similar capabilities for better understanding wider fields of human activity. Who knows how far technological advances and artificial intelligence might take us in future?
Second, perhaps our metrics and methodologies will get less stupid. Jeremy Nicholls of Social Value UK - who has done more than anyone to advance this somewhat preposterous cause - is nevertheless doing a fantastic job in building bridges and overcoming divides between competing schools of metrics. Jeremy – and other good folk like Tris Lumley at NPC - have been working hard for many years to get the social auditors to agree to shared principles with the SROI merchants and to find common cause among competing clans. This can only be a good thing. Now we just need to call out the charlatans.
Third, and a point which Jeremy himself makes well, even if the numbers are nonsense, the process of social impact measurement, done well, can serve to empower those who are too often forgotten by charities, social enterprises, government and funders. Maybe the calculations and the spreadsheets and the ratios turn out to be meaningless. But if they were developed in a way which gives voice to previously powerless stakeholders – the beneficiaries – then the process isn’t entirely pointless.
Fourth, maybe there’s money here. Maybe funders and financiers and customers and contractors may also be fooled by these nonsensical numbers. If this brings money in, then this is just as useful to charities and social enterprises as a rebrand, a snazzy website or a shiny annual report.
So where does that leave us? Why was this train of thought slightly less boring for my audience than my usual barrage of wonk?
I think the message here is to be proportionate, to be humble, to even be sceptical. But nevertheless, to keep on trying. Trying to understand your impact is a laudable ambition at least. Often metrics might be too complex, too uncertain, too contingent, of limited or even dangerous practical application. They might be expensive. Sometimes, we might better focus on means, values and behaviours, ownership, co-operation, openness and respect. Sometimes we might just throw the spreadsheets in the bin.
But like emojis (in a text message, not on a gravestone) these metrics may yet have a time and a place. Who really knows for sure? People seemed to like my message that certainty is overrated. Uncertainty, complexity, modesty and admitting fallibility seem all together more human and more popular. Maybe that is exciting and inspiring.