
Thoughts on the Holy Grail

2018-05-16

This blog was written by Bethia McNeil, Director of the Centre for Youth Impact. It is part of a pair of blogs, written by James Noble, Impact Management Lead at NPC, and Bethia. You can read James's blog “Let's stop chasing our tails on impact measurement” here.

I have had a couple of challenging conversations recently about whether the Centre for Youth Impact is doing what it should be, or at least what people want us to be doing. These conversations, and James Noble’s blog reflecting on the difficulties of evidencing impact in youth work, have reminded me of a regular theme in my work over the last eight years: the search for the holy grail, also known as definitive proof that ‘youth work works’.
 
There has been, for some time, a sense that if only we could collectively prove – or at least demonstrate beyond reasonable doubt – that youth work achieves social good, then a battle would have been won, and the spoils of war would follow. There are many who believe that this should be the job of the Centre for Youth Impact. This is a seductive notion, and one that I have myself been attracted to on occasion.
 
But what does that actually mean? Allow me, for a moment, to ‘problematise’ the holy grail statement.
 

  • What does it mean to ‘prove’ something? Is there a bar we have to reach, and is this bar the same for everyone? Is it enough to prove something once?
  • To whom are we trying to prove it? Do we know they’re listening?
  • Do we all agree what youth work is intending to achieve? Do we all agree what constitutes positive impact?
  • Do we all agree on what evidence would ‘prove’ that youth work has achieved what we intend it to, and how we’d generate that evidence?
  • What do we hope will happen when we’ve proved what we set out to prove?

 
I don’t think there’s an agreed answer to any of these questions (that is, individuals or groups might share a view, but it’s unlikely to be shared more widely). More to the point, I’m not sure that it’s achievable or even desirable to reach a consensus view on all of these issues. And I certainly don’t think it’s the Centre for Youth Impact’s role to take a view and dictate it more widely.
 
For the record, here is my take – and you should interpret this exactly as I intend it: my take.
 
It is extremely difficult to ‘prove’ anything in the realm of social programmes and human relationships (and most social programmes have human relationships at their heart). Most attempts to do so have become drawn-out debates, with as many people seeking to disprove as to prove. There’s a complex relationship between what we, as individuals, think we ‘know’, the confidence we have in that knowledge, and the route by which we gained it. This differs between people (and purposes), and as yet there has been no real agreement at any level about what constitutes ‘proof’ in the social sector.
 
Much of the research that has been done in an attempt to ‘prove’ impact has been called into question, either because the method has not stood up to scrutiny, or because the results could not be replicated, or because the research took too narrow a view in an attempt to prove a particular point. Such research or evaluation is never neutral, and there will always be disagreements about what constitutes evidence, or ‘good evidence’, or even a ‘positive outcome’. And all of this assumes that the results say what we want them to say… what happens if they don’t?
 
Attempts to prove impact in the social sector frequently end up dictating or influencing practice. At the most extreme, this can involve certain young people being randomly selected to ‘receive’ an intervention while others are not, so that their outcomes can be compared. Many people find this influence on practice unacceptable, but it is part and parcel of attempts to ‘prove’. We need to consider the lengths to which we’re prepared to go, and whether we’ll go there together. If we genuinely want to reach the utopian land of “having made the case”, we won’t get there as lone explorers.
 
Alongside this, attempts to prove tend to be externally focused. I sense that they are rarely a genuine personal enquiry. I have not often (though not never) met a practitioner who says “Above all, I really want to know whether what I’m doing is having a positive impact on young people, and whether it could be better”. What I hear much more often is “we just need to demonstrate what we already know: that what we’re doing works”. I don’t mean this as a criticism – it is an almost inevitable consequence of a focus on proving something to someone else. The implication is that we must gather evidence that we neither want nor require ourselves. This deepens the distortion of practice and creates the perception of evaluation as a disproportionate and unreasonable burden – a perception intensified when an apparent need to “make the case” remains despite the investment of time and resources in evaluation across the youth sector over the last decade or more.
 
Attempting to prove something en masse (rather than ‘intervention by intervention’ or, more likely, organisation by organisation) would require a consensus on what youth work is seeking to achieve, and for whom. This consensus does not exist. In my experience there is broad agreement, but it does not extend to the level of granularity necessary to ‘prove’ anything. ‘Proving impact’ calls for total agreement on intended outcomes, and on how we would assess whether or not they’ve been achieved. This is possible, and probably desirable, for a structured programme or project delivered by one or a small number of organisations, but less so for a field of practice or a sector.
 
‘Proving’ implies a higher standard of evidence than the youth sector is accustomed to (and perhaps, as James Noble suggests, than is appropriate). Looking into other sectors engaged in a battle to ‘prove’ the impact of one approach or another (formal education is an interesting example), I see a highly contested and rather unpleasant debate. I don’t want this for the youth sector. If this is a battle, I don’t know how one determines the winner, particularly if both sides emerge bruised and polarised. As I’ve already said, attempts to prove are almost always governed by an externally set standard, and this inevitably opens us up to a level of scrutiny that is not only a challenge, but an ongoing challenge – frequently with moving goalposts. It’s not something that is achieved once, after which we can all relax. Is this really what we want?
 
And finally, and perhaps fundamentally, I question whether the people we want to pay attention to our efforts to prove really are paying attention. Who do we think is poised to behave differently if only youth work could prove its impact? What riches do we think will come our way? Looking around me, I do not see many examples of decision makers changing their minds on the basis of evidence. On the contrary, I see many examples of evidence being selected and sought retrospectively to back up a position that has already been taken. Indeed, I have had more conversations than I can count with practitioners furious and despairing about decisions they perceive commissioners and funders to have taken without the evidence to support them.
 
So where does this leave us? Where does it leave the Centre for Youth Impact?
 
Firstly, for all the reasons I’ve set out here, the Centre does not exist to ‘prove’ the impact of youth work, or of any other approach to supporting young people, for that matter. If this were our purpose, I believe we could work at it for years without ever being perceived to have succeeded, alienating many of those we’d hope to work with along the way.
 
Secondly, this is not a statement of defeat. Far from it. I believe that the Centre’s purpose is to support all those who ‘deliver’, design, fund, commission and evaluate a diverse range of provision for young people to better understand the quality of the work they do and, connected to this, its impact. I believe that this is both an individual and a collective endeavour. We do many things differently and we do many things the same. We need to understand and value both. This is what evaluation is about. I believe the Centre’s priority is to contribute to (and lead where appropriate) the development of a culture and practice that promotes embedded approaches to evaluation – approaches that ‘go with the grain’ of engagement with young people and generate meaningful and actionable insights. This is critically important at an organisational level, but its power will be limited if it never reaches beyond this.
 
Thirdly, let’s reclaim evaluation from the pursuit of proof. This means both liberating evaluation from the constraints that ‘proof’ places around it, and embracing it as part of our own enquiry rather than as a response to external demand. This should enable us to think and talk more clearly about what it means to take the direction that James sets out: towards an ‘evidence-led sector’.
 
James also concludes by saying this is not an admission of defeat, but an opportunity: I wholeheartedly agree. I continue to be excited by what we could collectively learn by asking the routine questions that James suggests (are we doing what we do ‘well’? do we reach, engage and build positive relationships with young people?), and being part of a conversation to identify shared gaps in our understanding, which might suggest when and where we need to ask questions about outcomes and impact.
 
Above all, I welcome James’ focus on ‘genuinely useful research questions’. Critically, the Centre needs to work with a range of stakeholders to unpick what they consider a genuinely useful research question to be, and why – in other words, what do they intend to do with what they learn? What decision will it influence? I believe that one of the errors we’ve made in the evidence and impact debate is to conflate too many questions, or at least the reasons for asking them. An accountability-driven question is unlikely to be the same as a learning-oriented question, but that’s ok. We should both be asked, and ask of ourselves, these two types of questions, and understand that they are answered in different ways: the answer to an accountability question will not prompt learning in the same way that a search for insight will. And neither should it.
 
Finally, James suggests that if we reframe what we think of as evidence, “we could probably make a very strong case for ‘what works’ in supporting young people”. If this were true, what would we all do? What should the Centre for Youth Impact do? Rather than put down our metrics and frameworks and head home, I think this is an invitation to ask more, and better, questions. But let us agree that we can abandon the question of how to find the holy grail.