The place of power in evaluation is an interesting question to ponder: it is somehow everywhere and yet simultaneously nowhere. Power relations are arguably the single biggest influence on how an evaluation is designed, undertaken and experienced: power often decides the questions that an evaluation is seeking to answer, and how it goes about answering them. Yet in the results, or the write-up – particularly of large-scale, ‘heavyweight’ impact studies and service evaluations – these power relations are rarely explicitly acknowledged. This is despite the fact that many go into the evaluation ‘profession’ precisely because they wish to amplify the voices of those who are seldom heard or listened to, and to redress some of the power imbalances that persist.
Similarly, where and how we perceive power to be located and ‘held’ is a major determinant of how we define not just ‘impact’ but even ‘change’. Who gets to define what ‘good’ looks like, or what constitutes ‘progress’? Perhaps the most common example of this relates to ‘empowerment’ as a focus for provision. Implicit in the aim of empowering young people is a recognition that a power imbalance is present, but it also says something a little more challenging about how power flows and may be granted or ‘bestowed’ on one community by another. These more contentious questions about power can be hard-wired into theory of change (both the process and the output), and ignored or even exacerbated by evaluation. We should question whether anything other than young people’s voices can tell us about the extent to which they ‘feel empowered’.
Power is also intimately woven into how we consider and value evidence: what we decide to share, and what we decide to hear and act upon.
We often hear that for many people, stories are more powerful than numbers – “everyone loves a case study”. To others, the words of individuals can appear to have very little power: greater power seems to reside in assimilating voices, translating them into numbers and ironing out the differences.
In either case the picture can become more complicated. The power that stories bestow has the potential to shift when third parties select some to be retold and disregard others, and appropriate these stories and use them to meet their own needs. Similarly, the people doing the assimilation, translation and ironing required to provide neat statistics represent a further locus for the power bestowed by information.
One place where power is very explicitly referenced in evaluation is the concept of ‘statistical power’. This refers to the likelihood of detecting a genuine effect of an intervention; a study with low power risks overlooking a real effect because there was not enough data to distinguish it from chance. This is intriguing to me: it’s a case of there being power in numbers (that is, the more data, the more confident we can be about the finding) and yet little power in individual experience. The risk is that individual experience can be literally ‘invalidated’.
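For readers who want to see the idea of statistical power made concrete, here is a minimal Python sketch (not from the blog itself; the function name and the toy numbers are my own assumptions). It simulates many small studies of an intervention with a genuinely positive but modest effect, and counts how often a simple significance test actually detects it – showing that the same real effect is usually missed with 20 participants per group but usually found with 200.

```python
import math
import random

def estimated_power(n, effect_size, alpha=0.05, trials=2000, seed=1):
    """Estimate power by simulation: the proportion of simulated studies
    in which a real effect of the given size is detected at level alpha.
    Toy setup: outcomes are normally distributed with a known spread of 1."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    detected = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        se = math.sqrt(2.0 / n)  # standard error of the difference in means
        if abs(diff / se) > z_crit:
            detected += 1
    return detected / trials

# A modest, realistic effect of 0.3 standard deviations:
print(estimated_power(20, 0.3))   # small study: the real effect is usually missed
print(estimated_power(200, 0.3))  # larger study: the same effect is usually found
```

The point of the sketch is exactly the tension described above: the effect is equally real in both cases, but only the large sample gives it a voice.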
It is with these thoughts in mind that we’ve decided to focus on the power question for our annual conference this year. It would be relatively easy – and indeed, I’ve had this conversation many times – to ascribe malign intent to evaluation; to suggest that the act of evaluating disempowers or actually devalues that which we’re trying to capture. I agree that this is certainly a possibility, but I don’t think it’s inevitable. Evaluation can be as ‘empowering’ as disempowering, and as powerful as it can be toothless. Fundamentally, evaluation is nothing without human intent and human interaction, even though many evaluation designs seek to limit or ‘control for’ this. And so, we need to talk about it more openly, to consider how power and evaluation are experienced, and how it might be different.
And that’s what we hope our conference will do. We will hear challenge about whether evaluation will always be a tool of the powerful, and whether there’s an opportunity to reclaim it – as many are already trying to do. We’ll explore whether a focus on the quality of experience might helpfully shift the line of sight away from our power to create impact, and towards our power to create the right conditions in which communities create impact for themselves. We’ll consider the power of data, funding, the collective, and lived experience. We look forward to seeing you there!
Buy tickets and view the draft programme for The Gathering 2018 here.
This blog was written by Bethia McNeil, Director of the Centre for Youth Impact. It is part of a pair of blogs, written by James Noble, Impact Management Lead at NPC, and Bethia. You can read James's blog “Let's stop chasing our tails on impact measurement” here.
I have had a couple of challenging conversations recently about whether the Centre for Youth Impact is doing what it should be, or at least what people want us to be doing. These conversations, and James Noble’s blog reflecting on the difficulties of evidencing impact in youth work, have reminded me of a regular theme in my work over the last eight years: the search for the holy grail, also known as definitive proof that ‘youth work works’.
There has been, for some time, a sense that if only we could collectively prove – or at least demonstrate beyond reasonable doubt – that youth work achieves social good, then a battle would have been won, and the spoils of war would follow. There are many who believe that this should be the job of the Centre for Youth Impact. This is a seductive notion, and one that I have myself been attracted to on occasion.
But what does that actually mean? Allow me, for a moment, to ‘problematise’ the holy grail statement.
I don’t think there’s an agreed answer to any of these questions (that is, individuals or groups might share a view, but it’s unlikely to be shared more widely). More to the point, I’m not sure that it’s achievable or even desirable to reach a consensus view on all of these issues. And I certainly don’t think it’s the Centre for Youth Impact’s role to take a view and dictate it more widely.
For the record, here is my take – and you should interpret this exactly as I intend, my take.
It is extremely difficult to ‘prove’ anything in the realm of social programmes and human relationships (and most social programmes have human relationships at their heart). Most attempts at doing this have become a drawn-out debate, with as many people seeking to disprove as there are seeking to prove. There’s a complex relationship between what we, as individuals, think we ‘know’ and the confidence we have in this knowledge, and the route to gaining it. This is different for different people (and different purposes) and as yet, there has been no real agreement on any level about what constitutes ‘proof’ in the social sector.
Much of the research that has been done in an attempt to ‘prove’ impact has been called into question, either because the method has not stood up to scrutiny, or because the results were impossible to replicate, or because the research took too narrow a view in an attempt to prove a particular point. Such research or evaluation is never neutral, and there will always be disagreements about what constitutes evidence, or ‘good evidence’, or even a ‘positive outcome’. And this is all assuming that the results say what we want them to say… what happens if they don’t?
Attempts to prove impact in the social sector frequently end up dictating or influencing practice. At the most extreme, this can involve certain young people being randomly selected to ‘receive’ an intervention while others do not, in order to compare their outcomes. Many people find this influence on practice unacceptable, but it is part and parcel of attempts to ‘prove’. We need to consider the lengths to which we’re prepared to go, and whether we’ll go there together. If we genuinely want to reach the utopian land of “having made the case”, we won’t get there as lone explorers.
Alongside this, attempts to prove tend to be externally focused. I sense that they are rarely a genuine personal enquiry. I have not often (though not never) met a practitioner who says “Above all, I really want to know whether what I’m doing is having a positive impact on young people, and whether it could be better”. What I tend to hear much more often is “we just need to demonstrate what we already know, that what we’re doing works”. I don’t mean this as a criticism – it is an almost inevitable consequence of a focus on proving something to someone else. The implication is that we need to gather evidence that we neither want nor require ourselves. This increases the distortion of practice and creates the perception of evaluation as a disproportionate and unreasonable burden – a perception which is intensified when there remains an apparent need to “make the case” despite the investment of time and resources in evaluation across the youth sector over the last decade or more.
Attempting to prove something en masse (i.e. rather than doing it ‘intervention by intervention’ or more likely, organisation by organisation) would require a consensus on what youth work is seeking to achieve and for whom. This consensus does not exist. In my experience, there is broad agreement but this agreement does not extend to the level of granularity necessary to ‘prove’ anything. ‘Proving impact’ calls for total agreement on intended outcomes, and on how we would assess whether or not they’ve been achieved. This is possible and probably desirable for a structured programme or project delivered by one or a small number of organisations, but less so for a field of practice or a sector.
‘Proving’ implies a higher standard of evidence than the youth sector is accustomed to (and perhaps, as James Noble suggests, than is appropriate). Looking into other sectors that are engaged in a battle to ‘prove’ the impact of one approach or another (formal education is an interesting example), I see a highly contested and rather unpleasant debate. I don’t want this for the youth sector. If this is a battle, I don’t know how one determines the winner, particularly if both sides emerge bruised and polarised. As I’ve already said, attempts to prove are almost always governed by an externally-set standard, and this inevitably opens us up to a level of scrutiny that is not only a challenge, but an ongoing challenge – frequently with moving goalposts. It’s not something that is achieved once, and then we can all relax. Is this really what we want?
And finally, and perhaps fundamentally, I question whether the people we want to pay attention to our efforts to prove are really doing so. Who do we think is poised to behave differently if only youth work could prove its impact? What riches do we think will come our way? Looking around me, I do not see many examples of decision makers changing their minds on the basis of evidence. On the contrary, I see many examples of evidence being selected and sought retrospectively to back up a position that’s already been taken. Indeed, I have had more conversations than I can count with practitioners furious and despairing about decisions they perceive commissioners and funders to have taken without the evidence to support them.
So where does this leave us? Where does it leave the Centre for Youth Impact?
Firstly, for all the reasons I’ve set out here, the Centre does not exist to ‘prove’ the impact of youth work, or any other approach to supporting young people, for that matter. If this were our purpose, I believe that we could work at it for years, and not be perceived to have succeeded, and we’d alienate many of those we’d hope to work with along the way.
Secondly, this is not a statement of defeat. Far from it. I believe that the Centre’s purpose is to support all those who ‘deliver’, design, fund, commission and evaluate a diverse range of provision for young people to better understand the quality of the work they do, and connected to this, its impact. I believe that this is both an individual and a collective endeavour. We do many things differently and we do many things the same. We need to understand and value both. This is what evaluation is about. I believe the Centre’s priority is to contribute to (and lead where appropriate) the development of culture and practice that promotes embedded approaches to evaluation that ‘go with the grain’ of engagement with young people, and generate meaningful and actionable insights. This is critically important at an organisational level, but its power will be limited if it never reaches beyond this.
Thirdly, let’s reclaim evaluation from the pursuit of proof. This means both liberating evaluation from the constraints that ‘proof’ places around it, and embracing it as part of our own enquiry rather than as a response to an external demand. This should enable us to think and talk more clearly about what it means to take the direction that James sets out: towards an ‘evidence-led sector’.
James also concludes by saying this is not an admission of defeat, but an opportunity: I wholeheartedly agree. I continue to be excited by what we could collectively learn by asking the routine questions that James suggests (are we doing what we do ‘well’? do we reach, engage and build positive relationships with young people?), and being part of a conversation to identify shared gaps in our understanding, which might suggest when and where we need to ask questions about outcomes and impact.
Above all, I welcome James’ focus on ‘genuinely useful research questions’. Critically, the Centre needs to work with a range of stakeholders to unpick what they consider a genuinely useful research question to be, and why – in other words, what do they intend to do with what they learn? What is the decision it will influence? I believe that one of the errors we’ve made in the evidence and impact debate is to conflate too many questions, or at least the reasons for asking them. An accountability-driven question is unlikely to be the same as a learning-oriented question, but that’s ok. We should be asked and ask of ourselves both these types of questions, and understand that they are answered in different ways: the answer to an accountability question will not prompt learning in the same way that a search for insight will. And neither should it.
Finally, James suggests that if we reframe what we think of as evidence, “we could probably make a very strong case for ‘what works’ in supporting young people”. If this were true, what would we all do? What should the Centre for Youth Impact do? Rather than put down our metrics and frameworks and head home, I think this is an invitation to ask more, and better, questions. But let us agree that we can abandon the question of how to find the holy grail.
This blog was written by James Noble, Impact Management Lead at NPC. It is part of a pair of blogs, written by James and Bethia McNeil, Director of the Centre for Youth Impact, questioning the focus of impact measurement in youth work. You can read Bethia's blog “Thoughts on the Holy Grail” here.
It is easy to understand the impulse for impact measurement. We want to support young people to achieve good things, so logically we should try to understand how effective we are at this, and learn which kinds of practices get the best results for young people. Funders and providers want to know that their money and effort have made a difference. But just because something is understandable, and maybe even desirable, doesn’t make it easy to act on, and the youth sector has been stuck on this point for a while.
We need to face the fact that ‘measuring’ impact is difficult, particularly for youth work. Our big problems are that youth work supports a developmental process that takes place over 18+ years, with umpteen influences. We cannot measure or ‘capture’ all these influences, and we may not see impact until years have passed – and the Government doesn’t collect or share data we can use, in the way it does in other policy areas like schools and health.
I see these challenges as primarily practical and methodological, but the reaction to them often goes in different, and quite polarised, directions.
Firstly, there’s the tendency to deny the challenges: to continue to assert the underlying argument that we should be able to test ‘what works’, perhaps citing medical research as an exemplar. The problem is that the methodological challenges are intractable so there’s always disappointment. Some wrongly blame the sector for this and start to doubt its appetite for testing itself.
A different direction is to question the idea of ‘measurement’ altogether: to see it as a means of control or denigration of the sector, as a fundamental misunderstanding of what youth work is – even the product of a neoliberal worldview. I see this as the co-option of methodological challenges to make a political argument, and I want to remain focussed on the methodological issues, which I think can be better negotiated to help resolve these tensions.
A first – brief – point is that there is benefit to setting out what we want to achieve with and for young people and how. NPC calls this developing a “theory of change”, but what you call it doesn’t really matter. It’s basically a question of agreeing and articulating:
This process is useful whatever your underlying perspective or aim: whether it’s trying to build young people’s ‘employability’ or empowering “questioning, compassionate young citizens committed to the development of a socially just and democratic society”. It helps, because anyone doubting the sector’s commitment to evaluating outcomes and impact should be reassured by the articulation of a clear plan that can be tested; while those who want to argue for different approaches have an opportunity to do so on equal terms.
However, my main suggestion is that we make the measurement question more manageable by acknowledging that studying longer-term outcomes and impact is difficult, and that we should do it sparingly. In particular, I think it helps providers to think about two distinct questions:
1. Are we delivering our service ‘well’? In other words, are we effectively implementing the plan described in our theory of change? Do we reach, engage, and build positive relationships with the young people we want to support?
2. Does the service we are delivering make a difference? (the outcomes and impact part of our theory of change).
Providers should aim to answer the first question routinely by collecting user, engagement and feedback data – because this is part and parcel of delivering a good quality service. But, critically, providers do not have to answer the second question all the time. Once we are confident something is effective we can stop testing it. Measuring outcomes and impact should be reserved for when there are gaps in our understanding: new approaches or practices, user groups, contexts and other unanswered questions. And we should start small: record observational evidence of outcomes where possible, leading to more robust studies with small samples and eventually larger evaluations – but only if there is funding for it and a strong rationale in terms of improving the evidence base. Moreover, larger studies should be coordinated across providers and run by specialists to maximise quality – like the ongoing learning element of the Youth Investment Fund.
The effect of this change could be profound. If providers feel less pressure to ‘prove’ their own version of the youth work model, our energies might be better directed towards strategic questions like how to reach and engage those young people experiencing the greatest need, understanding the mechanisms that work across different settings, and which aspects of programme design are most valuable, for whom and why. This is the real opportunity for an evidence-led social sector, not the endless cycle of programme evaluations, which are often more about organisations ticking a box than about learning, and so have limited influence on wider practice.
This argument does raise the question of when we can be confident that something is effective. And my third point is to reject the view that confidence can only be provided by programme-level Randomised Controlled Trials (RCTs), which compare treatment to control groups. The logic behind RCTs is powerful, but this power diminishes as the focus of what is being studied is broadened. So, a scientist in a laboratory can, theoretically, control all conditions to be sure they are testing the effect of one thing on another, but a youth provider cannot control or limit the countless daily processes and choices needed to deliver a ‘programme’ (nor should they try to).
The results of RCTs of youth programmes are the product of a unique context and innumerable events that will never be repeated, and at best give us a clue that something about a ‘programme’ has worked for some participants. This has limited generalisability, so it does not ‘prove’ that the programme will work elsewhere, and it doesn’t explain how the programme worked. It is argued that repeated RCTs will start to turn these clues into an evidence base, and this has happened – over decades – in specialist fields like cognitive behavioural therapy, but we are a long, long way from that. For example, the Realising Ambition programme spent six years and a good part of its budget producing just two inconclusive RCTs. It is hard to argue that more of these kinds of studies are the best use of our resources.
This is not an admission of defeat, but an opportunity. Once we accept the natural limits to the level of ‘proof’ available in the youth sector, we can refocus on genuinely useful research questions. It should also help us to appreciate the value in all types of research, from validated surveys, benchmarking and value for money analysis, to practitioners’ observations; what Nancy Cartwright has referred to as ‘vouching’ rather than ‘clinching’ evidence. Indeed, if we draw on all the vouching evidence already available, we could probably make a very strong case for ‘what works’ in supporting young people.
In summary, this is a call for a rethink of what ‘measurement’ is for, and what it can achieve. I want us to move away from the cycle of funders and commissioners always expecting outcomes and impact data, and providers trying to ‘prove’ themselves in denial of the constraints, towards a ‘real world’ attitude to measurement, with a better set of research questions and the methods that can answer them.
And, by extension, a rethink of the standards of evidence that privilege these kinds of studies.
This blog was written by Thomas Lawson, Chief Executive of Leap Confronting Conflict and the Chair of the Centre’s new board.
Last September, I was invited to give the closing address at the Centre for Youth Impact’s annual conference. My central message was that the youth sector will not achieve lasting change for young people unless its staff, volunteers and organisations collaborate within and beyond its boundaries: we know that the UK’s systems do not work for an increasing number of young people who, despite their talent and potential, face challenges that mean it is much harder for them to thrive. The reason our organisations exist is to achieve change for them – and that means changing the systems. Otherwise we are just tackling the symptoms. But we can’t do it alone. If we want to achieve significant and lasting impact, we must work together.
That, I believe, is the vision of the Centre for Youth Impact, and why I am delighted to be its new Chair. The Centre stands for a collective, collaborative approach that focuses on understanding and strengthening the impact of our work with and for young people.
With so many warning signs of an increasingly fractured society, understanding how we can achieve ever more impact with and for young people – so that they can thrive today and tomorrow as our next generation of wonderful leaders, parents, entrepreneurs and community workers – has never been more important. Without understanding how to measure the impact of our work, it’s impossible to understand how to improve it and achieve more for young people.
Collaboration can dramatically amplify impact, but it takes leadership – at all levels. I know this from my role as Chief Executive at Leap Confronting Conflict. Leap’s purpose is to give young people the skills to manage the conflict in their lives, reduce violence in our communities and help lead our society. We have a deep belief in the talent and potential of young people. The young people for whom we work are those for whom conflict is most likely to turn into destructive behaviour. They are over-represented in the worlds of care, criminal justice and alternative education, or may be on the edge of gangs. It’s largely one cohort of young people who bounce between those worlds.
With the complexity of the world we work in, it’s absurd to think that, alone, we can achieve meaningful change, either for the young people we work with or in the systems that cause these problems. We have to collaborate in well-designed, highly effective partnerships to do anything meaningful.
So, what about the role of leadership in collaboration?
From a leadership point of view, there’s the success of the organisation and the applause that goes with it. Some of my ambition is related to growth in turnover and reputation, but if I’m honest with myself, that’s just vanity. Once I think about it more deeply, I realise that what I really care about is growth in impact. And as the new Chair of the Centre, I’m well aware that we will never measure our success in terms of growth or turnover – it is no coincidence that the Centre delayed its move to independence for more than three years.
What I have learnt is that when we promote the high-quality work of our partners and partnerships to our funders, we strengthen both our organisations and our impact, much more so than when we are protective. We spend less time competing and more time working out how to succeed together. This is a different type of leadership, one that calls on different skills, motivations and conversations. But I believe it’s the form of leadership that we need to embody and encourage as we look to the future.
It is critical to create a culture that recognises that leadership can come from anywhere in an organisation, or indeed a network. Participants can feel excluded if they are not part and parcel of the design, delivery and evaluation of the work – and they should be involved not as volunteers, but as paid personnel. Their expertise, derived from their personal experiences, is as valuable as any professional expertise I’ve ever come across, and gives us the insights we need for the best design. Many of those experts can be found in community-based organisations. We have to make sure that we partner on the ground with communities, as equals.
The speed of change in the worlds in which we work is also reshaping the nature of leadership. In the seven years I’ve been at Leap, public sector organisations have seen extraordinary levels of cuts. Local councils have seen 40-60% cuts to their annual budgets. We’ve seen youth services eviscerated. Local authorities and statutory bodies have found their roles shifting and stretching, often painfully, but of course there remain, in the face of these challenges, statutory partners who are incredibly ambitious and creative for the young people for whom they work.
In summary, these are the principles I think we need:
If we can do this, the prize will be knowing, when you’re looking back at your career, that you contributed to a change for young people the benefits of which you can still see; that you made friendships with a great diversity of people and grew your understanding; that it was hard work, and that you failed, got back up and succeeded.
This is my ambition for the Centre for Youth Impact, and I believe that its networks, funders, partners and my fellow trustees share this ambition. We have listened and we have learned. We’ve done some things wrong and some things right. Above all, we bring energy, openness and a commitment to leadership in collaboration, and I am excited about the year ahead.
Kevin Franks is the Programmes Director at Youth Focus: North East and is the lead for the Centre's North East regional network. Kevin has 25 years’ experience in statutory and voluntary sector youth and community work, which includes centre based, outreach, detached and schools work. This blog is adapted from Kevin's talk at 'Funding Change: Making impact measurement work for funders and providers of youth services', held on 21 March 2018 at the Leeds Rhinos Stadium.
The idea of impact assessment within the youth sector is not new, and both funders and youth providers have come a long way in measuring impact over recent years. However, that doesn’t mean it is always done well, correctly or meaningfully. The comments and questions we often hear range from:
And we commonly hear the term ‘tick box exercise’.
When I recently posed the question “what are the challenges around impact assessment and evaluation between funders, commissioners and youth organisations?” to a range of colleagues from across the sector in the North East, responses fell into two main areas:
We know and understand there is intense pressure on commissioners, funders and youth organisations to deliver ‘improved value for money’ and ‘better outcomes’. Interventions cost money, and we need to know whether or not that money is being spent to produce the best outcomes for young people. This raises the question of what we mean by ‘value for money’.
In a recent debate on volunteering in the House of Lords, the issue of whether the National Citizen Service represents value for the taxpayer was raised. This was in relation to senior staff salaries, a surplus of income over expenditure, and participation targets for the programme potentially being missed by as much as 40 per cent. However, we do know that young people participating in the National Citizen Service Scheme can have a quality experience, and many of them report a high level of satisfaction. Focusing purely on the cost of something as the measure of best value runs the risk that there is always going to be someone, somewhere, who is prepared to do it for less. And if you take price as your main distinction, you are in a race to the bottom with regard to quality.
There is value in youth work, and I am sure we all have examples of where being involved with youth work has transformed young people’s lives for the better. However, the challenge is that this value is not easily captured in terms of cost-benefit ratios.
Quality is subjective, whereas quantity is not. Quality can be disputed, questioned and challenged. One cannot dispute quantity – it is easily measured.
Focusing on the price makes it easy to miss the real value – and can turn complex decisions based on ethics, culture, empathy, and understanding of society into much simpler games based on numbers and calculations.
Given the challenges, how can these be overcome going forward? Personally, I believe there are three main areas for consideration:
1) Be Clear About Youth Work
Youth work is a distinctive field of practice that puts young people at the centre of the work, and starts from their concerns, their interests and their own starting points. Young people engage in youth work by choice.
The great strength of youth work (and youth workers) is its capacity to adapt, change and grow. However, how many funders, commissioners and indeed the general public really understand what youth work is and what it achieves? How much consensus is there in the youth work sector on the ‘purpose’ of youth work?
We need to be clear about the outcomes we claim youth work can achieve:
If we do believe that youth work is responsible for, or at least contributes to, these outcomes and others, then surely we have a responsibility to provide evidence to back our claims.
If we expect young people to invest their own time and effort participating in youth provision, it is only reasonable that we make an effort to make it worth their while. And surely part of this is investing some of our time and effort in evaluating if we are providing a quality service.
2) Shared Language
There is an obvious need to support the development, agreement and acceptance of a common language and framework to describe what the youth sector does. This will better enable commissioners and funders to understand what youth work is, and support them to invest in quality provision that will provide the best outcomes for young people and communities. A shared common language will also allow delivery organisations, including small local voluntary ones, to clearly communicate where they sit within the diversity of the wider youth sector, and ultimately enable them to better articulate their value.
There are a number of ways funders, commissioners and youth organisations can engage with each other.
Commissioners can see the value of the youth sector as a critical player in developing ‘asset-based’ approaches to providing high quality support, and by engaging youth organisations as partners in co-production of outcomes.
Evidence gathered by commissioners and funders can be better shared across the youth sector. Good evidence can be used to confirm or challenge approaches and interventions and to examine which features make them successful and worth investing in.
More can be done to build relationships between commissioners, funders and youth organisations. For instance, invite your funder to come and visit your organisation and see for themselves the work at first hand and hold events that create space for open and critical dialogue between all parties.
However, these approaches will require a level of courage from all involved. They will require strong and mature relationships, both within the sector, and between the sector and commissioners. These relationships will require time and attention to develop and maintain.
The youth sector has a role in coming together to provide a strong and unified voice. This requires leadership from within the sector to manage competition between different organisations.
We know the youth sector is diverse in its interests and organisational forms and, at times, struggles to (or refuses to) speak with one voice. Yes, difference of opinion is good, and we should always be open to critiques and different perspectives. However, if we can’t agree on some fundamental issues then we will be forever doomed to remain in this static state, seen by the state and the general public as second-class provision for young people. If we want consistency from commissioners and funders, we have to offer consistency from our own sector. And surely the best way to do this is to have a strong, united youth work sector, delivering quality interventions that enable young people to succeed and thrive.
Quality youth work is a process of continuous evaluation and learning – both for young people and practitioners.
Quality youth work equals quality outcomes for young people, communities and society as a whole.
Our young people have a right to the best quality interventions - and we have a duty to provide them.
Youth work beyond the measurement imperative? Reflections on the Youth Investment Fund Learning Project from a critical friend
In this blog Tania de St Croix, Lecturer in the Sociology of Youth and Childhood at King's College London, offers her thoughts on the Youth Investment Fund Learning Project, which the Centre is leading with NPC and others. You can find out more information on the YIF Learning Project at https://yiflearning.org.
Many involved in the youth work field are critical of the youth impact agenda, particularly its emphasis on the quantitative measurement of outcomes for individuals, and its neglect of process, group work, and structural inequalities. Those of us involved in ‘In Defence of Youth Work’ have argued that the contemporary emphasis on impact and outcomes cannot be separated from its context, the neoliberal ‘desire to financialise human existence’, and its consequences for which practices are valued and who gets to decide. We have claimed that open access youth work is particularly unsuited to outcomes based management, and that open youth work's future existence is undermined by an emphasis on impact measurement.
While those of us making a political critique of impact measurement (within and beyond young people’s services) face an uphill struggle against dominant understandings of ‘what works’ and ‘what counts’, there has been a growing recognition of the specific challenges in evaluating open access youth work. In this context, it has been interesting to follow the development of the Youth Investment Fund (YIF), a £40 million government (DCMS) and Big Lottery Fund investment in open access youth work. While we might start by noting that £40 million over 3 years is dwarfed by a decade of youth work cuts, the YIF is nevertheless significant: it suggests that someone, somewhere in policy recognises the potential value of open youth work. The YIF is also significant in relation to impact debates, as it included “an explicit objective to strengthen the evidence base on the impact of non-formal learning opportunities for young people”.
A change of emphasis?
This objective to ‘strengthen the evidence base’ of open access youth work is carried out by the YIF Learning Project, led by NPC and the Centre for Youth Impact. Its tone and approach are encouraging, and some of the significant concerns of the youth work field have been taken on board. This is demonstrated both by a language of learning and openness, and an emphasis on collaboration with young people, practitioners and youth work organisations. The principles of the YIF Learning Project laudably include:
The YIF evaluation approach is more closely aligned to youth work’s approaches and methodologies than it might have been, and this is great to see. And yet, there is still a sense that it attempts to ‘measure the unmeasurable’. As I write this, I imagine the weary sighs of colleagues in the youth impact world; however much they take on board youth workers’ views, it is never enough to stop us complaining! None of what follows is intended to criticise for criticism’s sake, or to take away from the respect with which the Centre for Youth Impact (in particular) has treated those of us who are critical of the very tenets of the youth impact agenda they were set up to promote. The following are five dilemmas that are important to address:
1) Despite moving away from ‘blanket outcomes measurement’, quantitative outcomes measurement continues to play a central role. Given the tendency that ‘what gets counted’ is too often the only thing that ‘counts’, how can the project guard against the preference for more structured, time-limited, ‘project-based’ youth work (that is easier to ‘measure’) over informal, open-ended, open access practice? How can the group processes that are central to youth work be recognised, when it is individual change that tends to be measured?
2) What are the dangers of standardising and quantifying ‘youth work quality’ and ‘young people’s views’, of inventing new tools (or importing them from other countries), and of engaging private sector consultancies and agencies to do this work?
3) It is inevitable that evaluation – especially on behalf of a funding agency – will affect practice, including in unintended ways. How much of a ‘data burden’ will be created for organisations? Will they really feel free to share their experiences, reservations, and honest reflections?
4) Can evaluation be separated from top-down performance management, judgement, comparison and control? Measurement changes how practitioners are perceived, and how they perceive themselves in relation to their work. How can data be used for collective learning without it also being used as evidence of ‘success’ or ‘failure’ by individual practitioners and organisations, and even by the field of open access youth work as a whole?
5) How can ‘footfall’ and other data be collected without unacceptable levels of surveillance, and breaches of confidentiality about young people’s whereabouts and their activities? How can the most marginalised young people, many of whom are (rightly) suspicious of authorities and institutions, be assured that their privacy is respected?
So what? And what next?
The current approach to evaluating the Youth Investment Fund demonstrates thoughtfulness and attention to the special characteristics and challenges of open access youth work. As a result, the experiences of young people and youth workers funded by this scheme will be more meaningful and less onerous than they would have been under a more prescriptive top-down approach. The YIF Learning Project goes some way towards challenging dominant approaches to impact measurement. Yet in other ways it is reinforcing the status quo: continuing to prioritise the measurement of individual change, converting qualitative elements of youth work (its quality and young people’s experiences) into statistics, and aiming towards a financialised ‘value for money’ analysis.
Ultimately, without questioning the broader context – the basis on which measurement is still preferred by most funders and governments, as a neoliberal tool of governance and control – many of these problems remain intractable. Moving beyond such dilemmas, then, is not merely a matter of creating more congruent impact tools, reducing the data burden, and involving young people and practitioners in the process (important though all of these things are). It requires imagining meaningful evaluation beyond a focus on outcomes and measurement, thinking seriously about the social and political purpose of youth work, and the role of young people in creating change. It involves working with others – beyond the youth sector and beyond our national and regional borders – to challenge the global dominance of finance and investment logic in activities that hold to a different version of ‘value’. While such aspirations may seem momentous, there is nothing to stop us dreaming of a different world, and doing what we can to make it real in our everyday lives.
Tania de St Croix has been a youth worker for over 20 years, and is an active part of ‘In Defence of Youth Work’ (IDYW); she thanks IDYW colleagues for helpful feedback on this blog post. She is a lecturer at King’s College London. Her book, ‘Grassroots Youth Work: Policy, Passion and Resistance in Practice’, was published in 2016, and her forthcoming research project is entitled ‘Rethinking impact, evaluation and accountability in youth work’.
This blog was written by Brahmpreet Gulati, a Member of the Youth Parliament and a youth councillor for Thurnby Lodge, who attended Raven Youth Centre in Leicester. Brahmpreet was part of the 'How Will You Hear Me?' project, where young people shared their personal stories and talked about their experiences of being listened to – or not – by different public bodies across a series of short films.
It is a common view in our society that youth centres are old-fashioned buildings with some snooker tables and sofas lying around. This is the perspective that most adults and most politicians take; however, it is not the view of the young people who actually use these spaces. For them it’s a place where bantering and opening up about their deepest fears is acceptable, a place where meeting your friends is not seen as a threat by the outside world, and most importantly a place where two generations meet and can have a conversation without awkwardness.
Leicester City is one of the local authorities that has made drastic cuts to the youth services it provides. Many young people were engaged in the City’s consultation process for these cuts – for example, through the Young People's Council attending scrutiny meetings and meeting with the Assistant Mayor – to ensure that young people’s voices were heard and acted upon. These examples show that there are effective mechanisms available for local authorities to work in partnership with young people; however, this does not always take place.
Many local authorities across the country are beginning to take control of this space, and the voices of the young people affected are ignored as budgets are cut with little care for the impact on their futures. Massive changes in youth workers and provision mean that the environment in youth centres that’s crucial for young people changes. Distress replaces cohesiveness, as changes in timing and staffing displace the warm comfort of the centre. Young people already face many ongoing battles, whether that’s looking after an ill parent or family member, or being vulnerable to online trolls. When making these cuts, the authorities fail to recognise that they’re creating an additional battle for these young people: they now have to fight for a youth club, a designated space, a place to escape! This goes against the common message to young people that it’s “their” youth club.
Closing one youth centre may not seem like much of an immediate loss; however, individual youth clubs are part of a larger whole, and when all the losses are added up, society becomes increasingly fragmented. Anyone who thinks that this gap can simply be filled by schools is missing that the relationship between a teacher and a young person could never compare with that between a youth worker and a young person. For many young people, the classroom is a setting for listening and learning in order to pass exams, rather than a platform to let loose and explore other areas of life.
With the positive effects of good youth services often only clear in the future, youth clubs have become too easy a place to cut. Unfortunately, once the space is lost, it’s lost. I can’t help but think that there will come a time when the next generation of decision makers questions the burden on the remaining services, which will face the increased pressures created by the loss of youth clubs, and realises that the cut youth services are a crucial missing piece in the puzzle.
Let’s note as well that we have some shared experiences of accountability - and that accountability is, overall, a good thing. Accountability within an organisation is critical to planning and effectiveness and it’s equally important in relationships with stakeholders, including beneficiaries, partners and the wider communities we work with.
Our formal accountability is constructed very similarly as well. Most delivery organisations, trusts and foundations are registered charities, with trustees who are accountable to the Charity Commission for serving charitable purpose and achieving public benefit. For foundations, this means showing that their grant-making is serving charitable purpose and public benefit, even when their grantees are not registered charities.
Accountability for any organisation depends on having reasonably reliable information about and understanding of how resources have been used and what was achieved. And that is a need we all have in common. Evaluation is one source of that information but somehow we have come to see it as something separate from the central flow of an organisation’s work.
The Evaluation Roundtable talks about ‘strategic learning’, a process that might involve formal evaluation alongside the use of other types of evaluative information, such as management and financial information, regular user feedback and the intelligence that we all gather in the course of our work. Strategic learning is learning to inform decisions about what to do next, about changes - minor or major - that we might need to make.
In my experience, most of the people working for funders and delivery organisations, and certainly the most effective, are characterised by a curiosity that drives the quest for impact. We want to know whether we are achieving what we set out to do, how and why things are working or not, and what changes we might consider. We are missing a trick if we don’t see evaluation as part of that strategic learning, whether at the level of the whole organisation, service or project.
Do we all do this well? Do we have the resources we need? As a funder, I can say that our organisation does not yet have in place all the skills, systems or culture we need for strategic learning, but we are developing our capacity and becoming more of a learning organisation. We also recognise that this is a greater challenge for the organisations we fund, who are hard pressed for resources, including staff time.
However, we see many grant applications in which evaluation work is seriously under-costed and there is inadequate provision for staff time to manage and run evaluative processes, interpret the findings and consider the implications. So we look forward to the conversation about how we can work together to make much better use of the effort that is going into ‘evaluation’ at the moment, and which is not delivering all it could for those we work with.
Effective evaluations benefit everyone. Grantees, those they support and funders. So let’s talk about grantees and evaluation as well as about funders and evaluation. Let’s talk about shared ownership and differing perceptions as part of this, and what funders and delivery organisations can do, together with evaluators, to make better use of evaluation as an integral part of our work.
In this blog Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia's blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.
What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse). It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity?
This is a particularly interesting question because ‘funder requirements’ are not only considered to be a major driver of evaluation practice, but also a major barrier to such practice ever changing. Very many of my conversations with delivery organisations about re-thinking their evaluation activities end in “but what if my funders don’t like it? And we have to do different things for every one….”. So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence?
But before we ask that question, we have to ask another. What is evaluation for?
Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and reaching the specified people and communities. This, as Tamsin Shuker from the Big Lottery Fund refers to it, is more about checking “what it is”, than asking “what is it?”.
Sometimes this also extends to checking whether the funding is having the impact that a delivery organisation said it would. Again, this tends to be in the form of ‘demonstrating’ impact, rather than genuinely enquiring.
Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders, rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even the most open and inclusive funding organisations – brings with it a certain high stakes mentality: the potential to fail with negative consequences, the burden of compliance that rarely feels like time well spent, and a sense of potentially unachievable standards.
Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with perceptions of accountability (whether real or otherwise), does it fatally undermine the conditions necessary for open and reflective learning?
Many if not all funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide grants-plus support to delivery organisations in the form of matching them with a consultant or evaluation ‘expert’. But, adding this to an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn and change practice as a result? They are not the same thing.
My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better.
What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as funders. It can also be about enquiry, and about learning and improvement. They are all important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and there are different approaches to answering different questions.
So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond a rather simplistic perspective that assumes evaluation either happens or it doesn’t, and that funder influence can simply make more of it happen. This is what has led to where we are today.
This debate should be about channelling influence, and recognising that influence in all its complexity, rather than using the power of funders like a blunt tool. It should also be about unpacking evaluation and its purposes and drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, which in turn shapes the entire organisational culture surrounding evaluation.
But these questions are divergent: they have no one answer, and instead call on all of us to think broadly about the issues. As a result we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and will be sharing more of our thoughts on what this could look like soon.
This blog was written by Pippa Knott, Head of Networks at the Centre for Youth Impact
Working with the Talent Match partnerships has given me an amazing opportunity to reflect on the power of relationships in the context of an employability programme. We’re now focusing on what the Centre’s role might be in strengthening and promoting those that sit at the heart of youth work and other provision for young people.
It’s an issue that has woven through much of our previous work: Robin Bannerjee gave a very well-received presentation at our first event of 2017, discussing approaches to measuring personal development in the context of relationships, and the team at Dartington Social Research Unit (now Policy Lab) wrote for us on the place of relationships in social provision.
Through Talent Match, I’m reminded again of how supportive relationships at their best are hugely powerful – sometimes transformative – and often at the heart of programmes that are ‘working well’. They’re also as complicated as the combination of the individuals who make them up. Many people are happy to ‘feel their way’ through relationships, drawing on past experience and what seems right in the moment. This applies as much to relationships within services and other provision as to relationships elsewhere. So devising a framework capturing how to ‘do them well’ is difficult: it can quickly feel an academic, even unhelpful exercise, unlikely to be valued and used by practitioners. Watch this space for how I’m working with the partnerships to try and progress some of these issues! Full findings from the project will be launched in March.
The way in which relationships recur in our work at the Centre also suggests it might not make sense to think of them as a ‘topic’. Instead, they could be a crucial piece in the puzzle of how we can support others to flourish, while also reflecting on ourselves and what we’re bringing to any situation.
Thinking only about the relationship between an adult professional or volunteer and a young person might also be limiting. I’m also thinking about how we can make the best of the relationships upon which the Centre exists: within the central team, between us and our networks, and across the web of organisations, connections and friendships within which we work. Relationships are a key mechanism for development and support in the overwhelmingly complex systems we live within, and something that any individual can learn about and use to effect change in their own lives, and the lives of others.
What might it look like if we invested in relationships for social good, rather than in organisations or programmes? Are our current methods and frameworks for measuring impact in work with young people sufficient to take account of the nuances, complexities and potential impact of positive relationships? Do they tell us enough about how relationships can be improved? If not, how can we develop them? We’ll be learning from youth work principles and practice [for example, Relationship, Learning and Education; Benefits of Youth Work; Grassroots Youth Work] and the work of the Search Institute, the R-Word, and Lankelly Chase as we develop our approach in this area.