Let’s note as well that we have some shared experiences of accountability - and that accountability is, overall, a good thing. Accountability within an organisation is critical to planning and effectiveness and it’s equally important in relationships with stakeholders, including beneficiaries, partners and the wider communities we work with.
Our formal accountability is also constructed along similar lines. Most delivery organisations, trusts and foundations are registered charities, with trustees who are accountable to the Charity Commission for serving charitable purpose and achieving public benefit. For foundations, this means showing that their grant-making is serving charitable purpose and public benefit, even when their grantees are not registered charities.
Accountability for any organisation depends on having reasonably reliable information about and understanding of how resources have been used and what was achieved. And that is a need we all have in common. Evaluation is one source of that information but somehow we have come to see it as something separate from the central flow of an organisation’s work.
The Evaluation Roundtable talks about ‘strategic learning’, a process that might involve formal evaluation alongside the use of other types of evaluative information, such as management and financial information, regular user feedback and the intelligence that we all gather in the course of our work. Strategic learning is learning to inform decisions about what to do next, about changes - minor or major - that we might need to make.
In my experience, most of the people working for funders and delivery organisations, and certainly the most effective, are characterised by a curiosity that drives the quest for impact. We want to know whether we are achieving what we set out to do, how and why things are working or not, and what changes we might consider. We are missing a trick if we don’t see evaluation as part of that strategic learning, whether at the level of the whole organisation, service or project.
Do we all do this well? Do we have the resources we need? As a funder, I can say that our organisation does not yet have in place all the skills, systems or culture we need for strategic learning, but we are developing our capacity and becoming more of a learning organisation. We also recognise that this is a greater challenge for the organisations we fund, who are hard pressed for resources, including staff time.
However, we see many grant applications in which evaluation work is seriously under-costed and there is inadequate provision for staff time to manage and run evaluative processes, interpret the findings and consider the implications. So we look forward to the conversation about how we can work together to make much better use of the effort that is going into ‘evaluation’ at the moment, and which is not delivering all it could for those we work with.
Effective evaluations benefit everyone. Grantees, those they support and funders. So let’s talk about grantees and evaluation as well as about funders and evaluation. Let’s talk about shared ownership and differing perceptions as part of this, and what funders and delivery organisations can do, together with evaluators, to make better use of evaluation as an integral part of our work.
In this blog Bethia McNeil, Director of the Centre for Youth Impact, opens a conversation about funders and evaluation. You can read the response to Bethia's blog from Jane Steele, Director of Evidence and Learning at the Paul Hamlyn Foundation, here.
What role does the funding community play in shaping evaluation in youth-serving organisations? This might sound like a disingenuous question; after all, many youth organisations would say that ‘funder requirements’ are the main driver of their evaluation activity (for better or worse). It’s certainly clear that a significant volume of evaluation activity happens in association with particular funding pots, but how does funding – and specifically, the funding community itself – shape this evaluation activity?
This is a particularly interesting question because ‘funder requirements’ are not only considered to be a major driver of evaluation practice, but also a major barrier to such practice ever changing. Very many of my conversations with delivery organisations about re-thinking their evaluation activities end in “but what if my funders don’t like it? And we have to do different things for every one….”. So, if we accept that the funding community exerts such a strong influence on evaluation practice, could or should we do more to channel that influence?
But before we ask that question, we have to ask another. What is evaluation for?
Certainly, for some funders, evaluation is a form of monitoring: checking that what they are funding is actually being delivered, and reaching the specified people and communities. This, as Tamsin Shuker from the Big Lottery Fund refers to it, is more about checking “what it is” than asking “what is it?”.
Sometimes this also extends to checking whether the funding is having the impact that a delivery organisation said it would. Again, this tends to be in the form of ‘demonstrating’ impact, rather than genuinely enquiring.
Such monitoring activity is also a form of accountability, but it tends to be seen and felt by delivery organisations as accountability to funders, rather than to people and communities – even if this is not the intention of the funder in question. ‘Accountability to funders’ – even to the most open and inclusive funding organisations – brings with it a certain high stakes mentality: the potential to fail with negative consequences, the burden of compliance that rarely feels like time well spent, and a sense of potentially unachievable standards.
Increasingly, funders are framing their evaluation ‘asks’ in terms of learning: enabling and encouraging organisations to learn what went well and what didn’t, and to share and apply this learning in the future. But when this is mixed up with perceptions of accountability (whether real or otherwise), does it fatally undermine the conditions necessary for open and reflective learning?
Many if not all funders would hope to leave delivery organisations stronger and better placed for the future as a result of their funding. ‘Evaluation capacity’ is often part of this, and a number of funders provide grants plus support to delivery organisations in the form of matching them with a consultant or evaluation ‘expert’. But, when this is added to an already murky blend of accountability, monitoring and learning, does it make sense to locate evaluation expertise outside the organisation? And why build capacity to evaluate? What about capacity to learn and change practice as a result? They are not the same thing.
My sense is that the purpose of evaluation has become very confused, and hopelessly entangled with other concepts and activities that effectively shape evaluation practice – and rarely for the better.
What should evaluation be for? Lots of things: it can be about accountability, but to young people and communities as well as funders. It can also be about enquiry, and about learning and improvement. They are all important, but it can be quite hard to do all of them at the same time. They give rise to different questions, and there are different approaches to answering different questions.
So, let’s return to my original question: could or should we do more to channel the influence of the funding community over evaluation practice? I think the answer has to be yes, but we have to go beyond a rather simplistic perspective that assumes evaluation either happens or it doesn’t, that funder influence can make it happen, and that more of it is always better. This is what has led us to where we are today.
This debate should be about channelling influence, and recognising that influence in all its complexity, rather than using the power of funders like a blunt tool. It should also be about unpacking evaluation and its purposes and drivers. And we must talk about ownership: too much evaluation practice is undertaken in response to perceived demands from ‘outside’ delivery organisations. Outside demands do little to engender ownership, which in turn shapes the entire organisational culture surrounding evaluation.
But these questions are divergent: they have no one answer, and instead call on all of us to think broadly about the issues. As a result we will be focusing more of our work this year, and in the coming years, on the relationship between the funding community, delivery organisations and evaluation, and will be sharing more of our thoughts on what this could look like soon.
This blog was written by Pippa Knott, Head of Networks at the Centre for Youth Impact
Working with the Talent Match partnerships has given me an amazing opportunity to reflect on the power of relationships in the context of an employability programme. We’re now focusing on what the Centre’s role might be in strengthening and promoting the relationships that sit at the heart of youth work and other provision for young people.
It’s an issue that has woven through much of our previous work: Robin Bannerjee gave a very well-received presentation at our first event of 2017, discussing approaches to measuring personal development in the context of relationships, and the team at Dartington Social Research Unit (now Policy Lab) wrote for us on the place of relationships in social provision.
Through Talent Match, I’m reminded again of how supportive relationships at their best are hugely powerful – sometimes transformative – and often at the heart of programmes that are ‘working well’. They’re also as complicated as the combination of the individuals who make them up. Many people are happy to ‘feel their way’ through relationships, drawing on past experience and what seems right in the moment. This applies as much to relationships within services and other provision as relationships elsewhere. So devising a framework capturing how to ‘do them well’ is difficult: it can quickly feel an academic, even unhelpful exercise, unlikely to be valued and used by practitioners. Watch this space for how I’m working with the partnerships to try and progress some of these issues! Full findings from the project will be launched in March.
The way in which relationships recur in our work at the Centre also suggests it might not make sense to think of them as a ‘topic’. Instead, they could be a crucial piece in the puzzle of how we can support others to flourish, while also reflecting on ourselves and what we’re bringing to any situation.
Thinking only about the relationship between adult professional or volunteer and young person might also be limiting. I’m also thinking about how we can make the best of the relationships upon which the Centre exists: within the central team, between us and our networks, and across the web of organisations, connections and friendships within which we work. Relationships are a key mechanism for development and support in the overwhelmingly complex systems we live within, and something that any individual can learn about and use to effect change in their own lives, and the lives of others.
What might it look like if we invested in relationships for social good, rather than in organisations or programmes? Are our current methods and frameworks for measuring impact in work with young people sufficient to account for the nuances, complexities and potential impact of positive relationships? Do they tell us enough about how relationships can be improved? And if not, how can we develop them? We’ll be learning from youth work principles and practice [for example, Relationship, Learning and Education; Benefits of Youth Work; Grassroots Youth Work] and the work of the Search Institute, the R-Word, and Lankelly Chase as we develop our approach in this area.
This blog was written by Venetia Boon, Children and Young People Grants Manager at Comic Relief
Being a funder of youth organisations is a great position to hold – the breadth and depth of the work taking place are astonishing, and the amount of expertise hard to fathom. It would be impossible to get a detailed understanding of the knowledge running through and around those organisations. But to do my job well, I need to strive for three things:
It’s also part of my role to feed into the collective organisational knowledge of Comic Relief. My feedback on the impact of the project should fit into the jigsaw of the charity sector activities and Comic Relief’s approach to funds in the UK and internationally.
The keywords for our approach to learning and evaluation are: appropriate, proportionate, grantee-led, and realistic. Learning should primarily be for and by the people receiving funding, so they can focus on what is relevant in the context they work in. They need to be able to own and feel able to use the learning effectively. While we want to drive good monitoring, evaluation and learning practice, and learn from our grantees - we don’t want to dictate the specifics to them. We also don’t want to make people feel duty-bound to create processes they can’t keep up with or that don’t serve their purpose.
As a grant manager, I’m also interested in honest conversations with the people receiving our funding. I want to know about the unexpected outcomes, what went wrong and how the life experiences of beneficiaries and staff were integrated into the learning. The things that didn’t work quite as expected, that were adapted and adjusted, are just as interesting and valuable as the things that worked perfectly. More so, even.
Coming away from the Centre for Youth Impact Gathering, my attention was caught by a couple of things. Firstly, I was really struck by the discussions about whether it might be possible to measure the quality of work and then make links to outcomes of change. This would be in place of desperately trying to measure change, which we all know can be nebulous and take a long time to show up. This felt like a positive story, and something I’d be keen to hear more on. I’m sure there’s no magic answer, so perhaps that approach just shifts the onus on data to a different area? But I definitely want to know more.
I also thought about the speaker who mentioned the changes they had made to a project after discussions with their funder. They realised there was a potentially more effective approach than originally planned, but it needed them to shift targets and outputs. We as funders need to communicate how open we are to hearing those messages and that we understand all the experience in the world doesn’t mean you’ll get it right every single time. If we truly believe in putting people at the centre of our collective work, we have to understand the repercussions of one-size-doesn’t-fit-very-many.
The last point was how exciting it was to have a group of funders discussing what we think about evaluation and monitoring with other people in the sector. In general, we had very similar thinking and have a collective responsibility to promote that thinking. This is in addition to talking with grantees and other funders about such matters. We need to push for monitoring that works for everyone concerned, putting the experience of people right at the heart. Finally - we all need to understand and accept that people have different needs, can be changeable, and won’t all want the same thing.
“Cracking the impact nut” or How the Youth Investment Fund learning and impact strand responds to the challenges of evaluation in open access provision
This blog was written by Matthew Hill, Head of Research and Learning at the Centre for Youth Impact.
Our YIF approach to data collection
NPC and the Centre for Youth Impact are leading the learning and impact strand of the £40 million Youth Investment Fund (YIF), which is a joint programme supported by government funding from DCMS and National Lottery funding from Big Lottery Fund. Eighty-six youth providers are being supported for three years (2017-2020) to develop and expand their open access youth provision, and we are currently working with them to design an evaluation approach that captures the value of their work, and supports their learning and improvement. Our overarching approach is an attempt to crack some of the perpetual practical and methodological nuts (my favourite bar snack) in measuring the impact of open access provision. This blog outlines five ways in which we are confronting these challenges within our work.
Moving away from blanket outcomes measurement
The past decade or so has seen a concerted push to be more outcomes-focused. We continue to support an outcomes-focused approach to service design and delivery (that is why we all do what we do, after all) but our YIF work represents a shift away from blanket outcome measurement (i.e. trying to capture every outcome for every young person for every organisation). This shift is a direct response to many of the perpetual challenges of outcome measurement in open access settings, including fleeting or irregular engagement, defining generalised outcomes for individualised provision, developing robust metrics for broad personal change and the issue of measuring long-term impacts. Far from abandoning outcome measurement, we are focusing on high quality targeted measurement with a sub-sample of the YIF cohort, which will ultimately provide us with more robust and meaningful data.
As well as this targeted approach to outcomes, our YIF work places increased emphasis on the experience of young people and the quality of the provision they receive. Crucially, we are aiming to link the data on outputs, user feedback and quality to the targeted outcome data so we can understand not only whether the provision is having an impact on young people – but why.
Focusing on the user experience
Another challenge is that young people often feel overburdened with rather obscure and meaningless (to them at least) surveys. In response, our YIF approach focuses on the elements of delivery that are most relevant and meaningful to young people – namely their experience of services (e.g. feelings of safety, respect and positive challenge). We are working with Keystone Accountability to develop a set of standardised feedback questions around this experience. Instead of large annual surveys, this feedback process uses regular, light-touch feedback – perhaps 3-5 questions once a month. This ensures that user feedback is embedded in ongoing reflective practice, and crucially, means that organisations can respond more immediately to the findings. Critically, we are also working with providers to process and act on this feedback, and tell young people what has changed as a result.
Improving as well as proving
Another nut that needs cracking is practitioners’ sense of dislocation between a lot of impact measurement and their everyday work. Part of our commitment to ‘going with the grain’ of provision is a focus on the quality of youth work practice. This data absolutely has to be linked to outcomes data – as ultimately this dictates what is and what isn’t quality provision – but by emphasising considerations of quality we are focusing on those elements of provision that are most relevant and meaningful to youth workers themselves. Our YIF work is drawing on an established quality improvement framework from the US – the Youth Program Quality Assessment – which relies on peer observation with youth workers identifying ‘markers’ of quality in the delivery of their colleagues. This framework is not a critique of existing quality assessment frameworks but is, in fact, a complement to them – ensuring quality is also monitored and increased as part of ongoing practice improvement rather than just assessed against an existing standard.
Understanding young people’s journey through services
Although most providers collect detailed attendance data, many tell us that they use this for monitoring overall service demand rather than truly understanding the way that individuals engage with their services. By utilising existing data and trialling new digital methods such as Yoti, we aim to build a much more nuanced picture of what young people do with their feet – i.e. how often they attend, for how long, and how they move through provision – as a proxy for their levels of engagement and ‘exposure’ to interactions.
Arguably the greatest opportunity presented by the YIF is the potential to collect shared data across 86 grantees for three years. This offers a rare (probably unique) opportunity to build an evidence base across a huge diversity of open access provision (detached/ building-based; structured/ unstructured; universal/ targeted) and, by comparing the results across different types of provision, we will be able to really understand the strengths and weaknesses of different services.
We recognise the many challenges that open access providers face and believe that the dominant paradigm of measurement is not fit for such settings. Our YIF work has the potential to overcome some of these challenges, and develop approaches that are applicable across the wider sector. It is certainly ambitious and we are trying new things out – some of which will work but some of which will no doubt fail. We will confront this uncertainty with a pioneering spirit and the humility to admit when things don’t work. As well as working with grantees, we are committed to working with the wider sector, and we would greatly value your input in testing, refining and reflecting upon the tools and evidence that emerge. Our dedicated YIF learning and impact website will be live soon… so please stay tuned… or get in touch with Matthew.Hill@youthimpact.uk or Anoushka.Kenley@thinkNPC.org if you want to find out more in the meantime.
Inspiral Targets: the measurement of everything and the value of nothing
Dan Gregory, of Common Capital, makes the case for ‘uncertainty, complexity and modesty’, as he reflects on his recent keynote presentation at The Centre for Youth Impact Gathering 2017.
Dan has over 10 years’ experience of funding and financing voluntary and social enterprises, through developing policy at the highest level and delivering in practice at the grassroots. He has worked for the Treasury and the Cabinet Office where he led the development of government policy on third sector access to finance, social investment and the role of the sector in service delivery. Dan spends some of his time at Social Enterprise UK and also works independently under the banner of Common Capital.
I’m not much of a natural public speaker. Certainly not an exciting or inspiring one. So when I am invited to speak at a conference or event, I try to be at least informative. To try to satisfy the audience with substance if not style. So lots of facts, evidence, insight or expertise from a field I have worked in for a decade and more. Talking from the solid ground of territory I know well and feel comfortable upon.
Sometimes this goes down well. But other times I sense the audience feels a bit bombarded with wonk grenades. They don’t seem to really warm to me, they feel my facts and evidence are a bit relentless and dry. They don’t ask questions afterwards because, frankly, they’ve had enough already, thanks.
Recently I spoke at The Centre for Youth Impact Gathering 2017: Shaping the future of impact measurement. The event was for practitioners, researchers and funders with an interest in learning, evidence and evaluation in work with young people.
It seemed to go very well. A few people said afterwards that they really valued my presentation. In an unprecedented turn of events, a few even told me they enjoyed it. This was confusing for me because, to be honest, I wasn’t really sure what I was talking about and certainly wasn’t on solid ground. Frankly, I don’t really know much about social impact measurement. I’m not an economist or accountant. I’m not a social impact guru or measurement Maharishi. I haven’t got a social impact measuring stick. So why did it seem to go so well, at least compared to normal?
In short, my presentation was about how social impact measurement is largely a load of rubbish. Although not entirely. I started off admitting I was a bit of a fraud and didn’t have any technical expertise as such. But I do have a broad perspective, having worked in and around this area for 15 years now, in government and outside, and from watching the rise of new fields of social impact measurement, social impact bonds and so on.
So why would someone argue that social impact measurement is a load of nonsense?
Perhaps above all, because the world is just too complex to reduce to the logic of input > output > outcome > impact. Such linearity is a joke. Perhaps this works in the laboratory - in very controlled and restricted conditions. But out in the field, people are more complex than bacteria, social programmes are not vaccines, homeless people are not a disease. As social innovation guru Geoff Mulgan has pointed out, evidence can be as unreliable and contingent as humans are irrational and unpredictable. Mulgan has described how, “Unlike molecules, which follow the rules of physics rather obediently, human beings have minds of their own, and are subject to many social, psychological, and environmental forces…. Very few domains allow precise predictions about what causes will lead to what effects.” Sadly, later in the same article, he suggests his own new social impact measurement methodology as the answer to this problem, immediately undoing all his good work in rising above the fray and bringing himself down to the level of the more common-or-garden impact measuring gun for hire.
Second, and related to complexity, is the realisation that the impact of any action is always to some degree context specific - to a particular family, community or individual, for instance. Until the day arrives when we are able to construct multiple realities, it remains impossible to ever really know what would have happened otherwise. Randomised controlled trials (RCTs), for instance, might tell us what happened elsewhere but not what would have happened here. And even RCTs are somewhat problematic. Beyond the issue of their considerable expense, even the World Health Organisation is losing faith in RCTs - “As the complexity of interventions or contexts increases, randomization alone will rarely suffice to identify true causal mechanisms.”
Third, as Zhou Enlai once pointed out, when it comes to impact, it’s just too early to say. When is impact? Perhaps my biggest professional fear is that charities and social enterprise have been doing such a good job for the last few decades in mitigating the worst excesses of capitalism, mopping up the problems and making just enough of a difference to hold together our society that would otherwise rip apart at the seams, that we have kept our prevailing economic system in place just long enough to allow it to do irreparable environmental damage to our planet, putting at risk all life on earth! Add up all those SROI ratios from among our sector and how’s that for impact?
Fourth, and more practically, half the social impact metrics we use are total nonsense. Lives touched, for instance, is one of the most common and sadly, not even the most ludicrous of the currencies we circulate. How do Stalin and the Chuckle Brothers measure up against that yardstick? Even if we did somehow develop less blunt and limited methodologies, they would be inevitably discredited by some other new social impact salesperson within weeks, hawking a supposedly new and improved model. Some of the metrics out there are frankly a total sham. I have listened to panellists at events describe how they have methodologies endorsed by The Pope and Will.I.Am which can calculate your social impact in under 7 seconds. Yet no-one calls these charlatans out.
Fifth, social impact measurement can bring accompanying dangers. Maybe metrics can tell us what happened in the past but that doesn’t mean they can tell us what to do in future. The world changes. What works changes. Not acknowledging this may bring dangerous consequences. Measurement can be dangerous if it is used to influence behaviour, often creating perverse incentives. Italian firemen paid by results start lighting fires to put out. Big outsourcing companies are rewarded for dead people not reoffending. NHS patients are unable to book appointments with their GP more than two weeks in advance. In fact, as researchers have concluded, “Target based performance management always creates gaming”.
Finally, social impact measurement can be expensive, bringing negative or little benefit while diverting resources away from other work. Metrics can also be demotivating for staff and volunteers at charities and social enterprises, undermining their public service ethos, crowding out creativity, freedom, intuition, trust and the human touch.
So much for social impact measurement then. What a waste of everyone’s time?
Well largely, yes. But not entirely. I concluded my presentation by suggesting a different tack and one which seemed to go down quite well. Much of my frustration with this field is that social impact measurement always seems so sure of itself. “You have to get better at measuring your impact,” they say. “Of course it’s possible,” they proclaim. That can be really annoying. So with that in mind, I also admitted where I might, in fact, be wrong.
First, if clinical tests and trials and RCTs have brought us so far forward in medicine and the health profession then perhaps it’s only a matter of time before we develop similar capabilities for better understanding wider fields of human activity. Who knows how far technological advances and artificial intelligence might take us in future?
Second, perhaps our metrics and methodologies will get less stupid. Jeremy Nicholls of Social Value UK - who has done more than anyone to advance this somewhat preposterous cause - is nevertheless doing a fantastic job in building bridges and overcoming divides between competing schools of metrics. Jeremy – and other good folk like Tris Lumley at NPC - have been working hard for many years to get the social auditors to agree to shared principles with the SROI merchants and to find common cause among competing clans. This can only be a good thing. Now we just need to call out the charlatans.
Third, and a point which Jeremy himself makes well, even if the numbers are nonsense, the process of social impact measurement, done well, can serve to empower those who are too often forgotten by charities, social enterprises, government and funders. Maybe the calculations and the spreadsheets and the ratios turn out to be meaningless. But if they were developed in a way which gives voice to previously powerless stakeholders – the beneficiaries – then the process isn’t entirely pointless.
Fourth, maybe there’s money here. Maybe funders and financiers and customers and contractors may also be fooled by these nonsensical numbers. If this brings money in, then this is just as useful to charities and social enterprises as a rebrand, a snazzy website or a shiny annual report.
So where does that leave us? Why was this train of thought slightly less boring for my audience than my usual barrage of wonk?
I think the message here is to be proportionate, to be humble, to even be sceptical. But nevertheless, to keep on trying. Trying to understand your impact is a laudable ambition at least. Often metrics might be too complex, too uncertain, too contingent, of limited or even dangerous practical application. They might be expensive. Sometimes, we might better focus on means, values and behaviours, ownership, co-operation, openness and respect. Sometimes we might just throw the spreadsheets in the bin.
But like emojis (in a text message, not on a gravestone) these metrics may yet have a time and a place. Who really knows for sure? People seemed to like my message that certainty is overrated. Uncertainty, complexity, modesty and admitting fallibility seem all together more human and more popular. Maybe that is exciting and inspiring.
This blog was written by Jack Welch, who is an autistic and youth voice activist, and has been working with the Centre as a young researcher. You can find him on Twitter and Medium.
Throughout the time I have been involved with the youth sector, it has become increasingly necessary for organisations to attempt to evidence a tangible and defined impact for their beneficiaries in an environment where funding is scarce and demand has only risen. However, what struck me throughout the series of presentations and conversations among delegates at this third annual gathering for the Centre was an increased willingness to share expertise, as well as resources, where a silo approach is simply not viable in the current landscape.
This message became particularly apparent through Ruth Rickman-Williams’s presentation, which set out Youth Focus West Midlands’ recent challenges, and its journey in response to them. As austerity began to bite and resources became ever more limited, a struggle to survive ensued. However, it was clear that Ruth’s approach to restructuring her organisation and network to facilitate a more united agenda across the sector, including commissioners and those delivering services, could lead to a more holistic approach, one more likely to improve outcomes for young people and serve the needs of the wider community.
For local networks that are pooling their resources as part of regional ‘Youth Impact Networks’, there is a detectable sense of just how vital collaboration across the range of organisations in the voluntary sector and public services is. The networks seem to be providing a solid foundation for sharing learning, ideas and resources. From my own recent advisory work in patient participation in the health agenda, I have seen how all areas in England now have a Sustainability and Transformation Plan. These plans go far beyond how healthcare is provided, addressing the wider needs of local populations, and are showing how voluntary sector organisations working in partnership with one another can improve health and wellbeing outcomes for young people. I am interested to see how more youth sector services might become part of significant pieces of work like this.
I was also struck by Dan Gregory’s emphasis in his keynote that much current impact measurement risks being meaningless to the organisations to whom it relates. I would add that this is a particular risk where young people are not kept at the core of service design, delivery and evaluation.
Within the breakout workshops, I was drawn to the new initiatives led by the Centre and New Philanthropy Capital on how data can be effectively captured in open access settings and how an individual’s journey can be tracked, specifically in services funded through the Youth Investment Fund (YIF). The YIF will create a new body of evidence about whether and how services are making a difference to the lives of young people.
From my own experience, having attended the same open access settings for various unrelated projects, I know that young people can be transient, and a single location can play host to a diverse range of services, from access to housing and welfare to career assistance. While the new ‘Footfall’ resource for gathering data via mobile devices will be an innovative means of building a consistent record, I particularly remember a comment from a delegate in the room that many young people in the most disadvantaged circumstances will not have access to smartphones, or in some cases the signal to make use of one. A fellow Young Researcher made the point too that without young people seeing how their role in evaluation has influenced practice later on, their participation is more likely to be tokenistic. I believe these should be important considerations as the YIF evaluation plans are developed.
I began as a volunteer with Dorset Youth Association in 2010. Since then, we have seen a profound upheaval within the sector, and many are still learning how to thrive as well as survive in the new circumstances, structures and networks. We are still in a time of flux and unpredictability, but it looks to me like we are reaching a point where the sector is becoming more resilient against much of the shock to its financial security post-2010, and more willing to collaborate. With closer partnerships and even mergers changing the structures through which organisations are able to have an impact on young people’s development, I look forward to seeing what comes next for the sector.
Pippa Knott is Head of Networks at the Centre for Youth Impact. She is part of a small team that has established and developed the Centre, and has talked increasingly about creating the conditions for meaningful impact measurement. This requires shifts in culture and behaviour as much as developments in methodological design and research tools.
Will Millard is a Senior Associate at the education and youth think and action tank, LKMco. LKMco works across the education, youth and policy sectors, helping organisations develop and evaluate projects for young people, and carrying out academic and policy research. LKMco seeks to help practitioners and organisations working with young people develop and use stronger evidence to help enhance their impact.
The Centre for Youth Impact and LKMco are both part of (overlapping) evidence movements, and want to play our part in helping people across our respective sectors engage with evidence and promote its place in improving service design, quality, and sustainability.
What is the challenge?
Why does talk of evidence and impact excite some people while alienating others? We (Pippa and Will) met to talk about the similarities and differences between the demand for and use of evidence in formal (school- and classroom-based), informal and non-formal educational and youth settings. Specifically, we discussed the conditions under which ResearchEd, a teacher-led ‘evidence movement’, was established and has since grown, and parallels with the evolution of the Centre for Youth Impact, which supports networks of practitioners across England to develop their use of evidence to improve out-of-school provision for young people.
We identified three influences on the culture, growth and ‘successes’ of both ResearchEd and the work of the Centre for Youth Impact, and engagement with evidence more generally: accountability, professional learning and development, and networks (particularly social media).
What is ResearchEd?
ResearchEd seeks to help teachers share educational research and raise evidence ‘literacy’ through events run across the UK and internationally. It was originally intended to be a one-off event in 2013 but has since become an international movement. Describing ResearchEd as a “grass-roots” organisation, founder and director Tom Bennett says, “it wanted to be built. It built itself”. Some, though, are sceptical whether the term ‘grass-roots’ accurately describes ResearchEd, yet as Debra Kidd argues: “What matters is that it’s here. It’s an opportunity. And it’s there for teachers to make of it what they will.”
What is the Centre for Youth Impact?
The Centre for Youth Impact exists to support organisations working with young people in informal/non-formal settings to improve how they generate and use evidence. It was set up with funding from central government, but has always aspired to be owned and led by the sector. The Centre is perceived by some sector stakeholders as an arms-length and external influence on practice, and so cannot be said to be a grassroots movement. However, a focus on building networks across the country and relationships with practitioners is now beginning to generate a community and momentum that goes beyond the core team.
In many ways, ResearchEd has historically been the opposite of the Centre: the former having a tiny core with a large following, and the latter starting life as a slightly larger ‘institution’ but attempting to build a following once it came into existence.
Why do we ‘do’ impact measurement, and why do those working in formal, informal and non-formal settings sometimes view this differently? We think that part of the answer lies in the relationship between evidence and accountability.
Teachers associate accountability with Ofsted, the Department for Education, line managers, and tests and exams. LKMco’s recent report on assessment found that teachers often feel accountability is something done to them, rather than with them (similar language to that used to describe the experience of evaluation in the youth sector). Consequently, ResearchEd and other events and networks including Northern Rocks, TeachMeet, and #BAMEed – precisely because they are seen as grass-roots movements organised by and for teachers – represent a means by which to reassert professional identity and authority.
Furthermore, there is a widespread sense that the evidence on which decisions about teachers’ and schools’ performance have been based has at times been dubious, as was explored here, here and here. ResearchEd tapped into teachers’ desire to raise the quality of discussion at all levels, not just about what learning is for and how it can be best supported, but also how it can be evaluated. ResearchEd is by no means the only expression of this desire: nearly one third of schools in England have taken part in one or more Education Endowment Foundation-funded trials, and the Chartered College of Teaching has made sharing evidence one of its core aims.
By way of contrast, the growth of the evidence and impact agenda as it is currently constituted has been associated in parts of the youth sector with a certain form of ‘high stakes’ accountability, and ‘proving’ worth or value to others. Our experience is that the current impact agenda is too often divorced from the processes of reflection, learning and practice improvement that to many are – or should be – inherent in youth work practice, and other forms of informal and non-formal education with young people.
An Education Select Committee report published in 2011 was a notable point at which evidence of effectiveness was posited as the key to sustainability of provision. In subsequent years, funding has remained the primary pressure facing youth workers and managers, and continues to be associated with a pressure to ‘prove’ impact to funders, commissioners and policy makers. So it’s hardly surprising that for many managers and practitioners in youth organisations, the concept of evidence generation and data gathering remains firmly coupled with the pressures of securing and managing funding, and externally-imposed accountability processes. Widespread scepticism about whether funding and commissioning decisions are as ‘evidence informed’ as one might wish has done little to increase the value of impact measurement in the minds of many practitioners.
There is a critical eye on the nature and use of evidence in the youth sector, and many would say that it bears little relevance to either practice or the lives of young people. Some have referred to the increasing desire to focus on ‘what works’ in developing young people’s social and emotional capabilities as ‘pseudo-science’. But the corresponding drive in the youth sector - though it might also be seen to be about raising the quality of discussion - has been more ambivalent about the status of research and evidence in its work. The youth sector has certainly not seen such polarisation amongst practitioners as in the formal education world, where debates between the ‘trads’ and the ‘progressives’ can get fairly spicy. It is unclear how many youth providers would sign up to be part of research trials, but one suspects it would not be many - certainly not with parameters similar to those of the trials conducted by the EEF.
There’s no denying ResearchEd’s growing influence. Senior members of the inspectorate, Civil Service, and government appear to be queuing up to present at the events. This might be a mixed blessing, though. The presence of senior politicians reflects how important it now is to associate with the evidence agenda in education. It also runs the risk of ResearchEd being seen as a conduit through which ministers can exercise their personal ideologies or, worse, an event that actively ostracises those whose views or approaches differ. Bennett, as the founder of ResearchEd, is aware of this challenge, and has publicly talked about steps ResearchEd has taken to remain inclusive (of both people and their ideas). However, ResearchEd’s ongoing success, and more importantly its impact on classroom practice, will depend on it striking the right balance between working with those at the top of the education system while championing the experiences of everyday teachers.
The Centre’s influence is also growing, though it is probably some way off the momentum and international profile of ResearchEd! Meanwhile, the Centre’s team is continuing to challenge and interrogate what is the best trajectory for the Centre. Currently, it feels more meaningful to create space for inclusive debate, with practitioners supported to lead debate, than to provide a platform for politicians.
So what, then, can we do to make evidence and research synonymous with empowerment for all professionals working with young people? We are finding that:
Professional learning and development
ResearchEd, like the Chartered College of Teaching, Teacher Development Trust, Institute for Teaching, and EEF, in part reflects a fundamental belief that teaching is something that can be deconstructed, honed and improved over time. Teachers are made, not born. In their respective ways, these organisations reflect teachers’ desire to share their practice, and learn from others. Testament to this have been the high turnouts at ResearchEd events, and teachers moving heaven and earth to attend.
Alongside a deep-rooted belief that coming together to deconstruct and evaluate practice is an inherently good thing to do, teachers have also felt the need to increase their evidence ‘literacy’ to guard against the false promises of ‘snake oil salesmen’ (as Bennett explores, here and here). Sharing research and evidence can therefore generate and bolster good ideas, and reduce the prevalence of bad ones.
At the heart of traditional youth work is the relationship between the youth worker and the young person – a relationship that is voluntary, led by the young person, starts where the young person is, centres on their individuality, networks and community and cultural identities, and focuses on how they feel as well as what they know and can do. The ‘what works’ debate tends to lead to a focus on interventions – programmes or ‘methods’ of working with young people. Arguably the importance of the relationship is missed in some of this thinking. Could we use the evidence agenda to help us better understand and communicate what is at the heart of effective work with young people – the relationship between youth worker and young person as a ‘mechanism of change’ in that young person’s life, and their broader community? How can we reboot and inform learning and development – something increasingly neglected in an underfunded and fragmented youth sector? Perhaps in contrast to parts of the teaching profession, the youth sector as a whole is not harnessing evidence to challenge ‘fads’ and strengthen itself; it has effectively rejected or resisted it instead.
We are learning that:
Networks (and particularly social media)
In addition to the role that evidence does and has historically played in professional experience, it’s possible that the structures and processes of networks themselves have had an influence on the evolution of our respective evidence movements.
Networks – and particularly those on Twitter – have been fundamental in ResearchEd’s success. As Bennett has explained, ResearchEd grew out of a single tweet, posted late one evening in 2013. The first conference in September that same year attracted over 500 people. Ever since, ResearchEd has used the combination of events and social media to expand its reach. Likewise, grassroots events such as Beyond Levels and TeachMeet also quickly gained traction through a budding online community.
Of course, it would be naïve to pretend social media is hassle-free. Alongside its many benefits, it can sometimes be reactive, aggressive, cliquey and, consequently, intimidating for new (and sometimes even veteran) users. This is an inherent challenge in using social media platforms, but it has practical ramifications for ResearchEd, which must probe deeply held ideas while remaining constructive.
The Centre for Youth Impact’s networks have evolved primarily offline – through a small core team, and regional networks, whose leaders received a small fee to contribute towards administration, and help them widen their reach. We have reached out through and invested in existing networks where possible, seeking to reflect the views of practitioners on the ground while drawing together a diverse range of experiences and perspectives from across a fragmented sector. Building trust and perceptions of time well spent, face to face and in small groups, has been critical to the sustainability of the Centre.
There’s clearly no ‘right’ or ‘wrong’ way to establish and grow networks. Funding to support the development of regional networks supports the sharing of ideas locally, but perhaps lacks an overarching, galvanising force. On the other hand, engagement through social media is energetic and organic, but could isolate teachers who do not have online accounts, or who feel under-confident using them or ‘speaking out’ in this forum.
The emergence of ResearchEd (and other events and networks) indicates:
Where do we go from here?
Our conversation highlighted to us that when promoting evidence and impact to our respective audiences we need to more explicitly acknowledge how systems of accountability affect practitioners; how and where practitioners look for inspiration; and how networks support empowerment.
The evidence movements surrounding the Centre for Youth Impact and ResearchEd differ significantly. This may not be due to a fundamental lack of interest or engagement in the youth sector, but to ‘evidence’ perhaps being seen as another stick with which to beat an already weakened profession. In addition, there are arguably philosophical and pedagogical differences between teachers and youth practitioners, influencing how each group engages with research and evidence. In theory, we recognise huge potential in the youth sector for a grassroots-led movement that allows practitioners to ‘reclaim’ the evidence agenda, but in reality, we must recognise the impact of political pressures and resource limitations on people’s willingness to engage.
Both the Centre for Youth Impact and LKMco intend to support practitioners to embrace evidence as a means by which to celebrate their achievements, while honing and refining their craft. We will continue to acknowledge that research and use of evidence will always be a political activity, but strive for openness and clarity about why generating evidence is important, and who stands to benefit and learn from it.
Furthermore, there is potential for more widespread voluntary online communities to develop in the informal and non-formal educational sectors, giving a wider range of practitioners a say, and engaging a wider range of people. However, we see roles for both online and offline discussion spaces in helping build these movements.
When treasuring is measuring, and why we might need a rethink
In this blog jointly written by Bethia McNeil, Pippa Knott and Matt Hill, the Centre’s core team responds to some of the key issues raised by Tony Taylor’s article regarding measurement in personal and social development. The Centre addresses the challenges found in the current dominant measurement framework and proposes a rethink of the value of measurement in youth work.
Back in March this year, we hosted an event focused on measurement in personal and social development. We were really pleased to see Tony Taylor’s recent article in Youth and Policy, following up on the discussion, and agree that it would have been most beneficial had there been more time and space to explore the themes. Indeed, these themes are so vital that we felt moved to add our voice to Tony’s in this blog. Overall, we were struck by the many points where we agree with Tony’s forthright critique of the dominant paradigm in impact measurement, but there also remain some areas of fundamental disagreement – perhaps as might be expected in such a complex and contested area.
No measurement framework is ideologically neutral
We agree wholeheartedly that the theory and practice of measurement is never neutral - and arguably, neither should it be. Accepting this latter point would create more space for critical reflection on the inevitable ‘positioning’ of existing practices and frameworks, rather than an illusory search for independence and objectivity. Equally, focusing measurement efforts on the potentially less contested area of ‘skills’ as opposed to character, awareness or consciousness does nothing to make neutrality any more likely.
Any measurement framework involves collating, distilling and selecting information. This process is influenced by the background, experience and disposition of the individual or team carrying out the task, and the broader political and relational context in which they are acting. We are learning from other sectors - even those more historically predisposed towards the 'scientific method' - where people are equally questioning whether we can really rely on scientific robustness to neutralise values and context. Yet we disagree that failing the objectivity test is fatal. Instead, the best research and measurement acknowledges context and bias and embeds a constant reflection into research practice, including consideration of the influence on relationships ‘in the moment’. This has the potential to result in an enhanced, not a diminished science and practice.
The current dominant measurement framework systematically undervalues certain forms of activity, and privileges others
Open youth work, a term usefully endorsed by Tania de St Croix, sits at odds with many dominant forms of measurement, not least due to a (laudable) resistance among practitioners to impose pre-determined measurable ‘outcomes’ upon young people. We believe that we absolutely need to find ways of gathering, interpreting and sharing data about this approach to engaging with young people that both enhances practice and generates meaningful evidence and insight. We do not believe that this should be led by the pursuit of funding (particularly given that, as Tony and others have noted, the relationship between evidence and funding is nowhere near as straightforward as we might think/hope), or capitulation to broader forces that can seem overwhelming. We believe that this is about learning, improving, developing and advocating.
At the Centre we do not define the youth sector tightly. In fact, we sometimes deliberately don’t define it at all, preferring to work with and build shared understanding and learning among as many organisations engaging with and supporting young people as possible. But this is dependent at least in part on the ability to talk meaningfully and collectively about our practice. We agree that it might be harder to ‘measure’ the impact of youth work than other more targeted or narrowly defined forms of work with young people – but, for us, this demands that we develop how we measure and understand what really counts about youth work, and via a process that enriches rather than undermines practice. We should also pause to reflect on why we feel it’s ‘harder’ to measure impact in (particularly open) youth work: harder fundamentally, or harder to fit within the dominant paradigm?
There are broader conceptions of measurement out there
We agree the current dominant paradigm of measurement in the social sector has emerged within the neo-liberal landscape and shares (and actively perpetuates) many of its key features - monetisation, marketisation and competition to name a few. But neo-liberalism doesn’t have a monopoly on measurement and indeed there are many participatory and emancipatory methodologies of ‘measurement’ that are fundamentally opposed to it. There is a risk that we throw out the ‘measurement’ baby with the neo-liberal bathwater. Instead we believe that effective measurement of open youth work is a crucial bulwark against narrow conceptions of value. ‘Measurement’ in the current parlance has become short-hand for a particular approach that tends to focus on outcomes, objectivity, attribution and individual change. It is perhaps inevitable that most practitioners experience this form of measurement as externally imposed, though we should not overlook or discount those who feel it has brought a helpful challenge to their practice.
Our stance is that measurement is a fundamentally human activity that is woven into every aspect of our lives, and which helps us make sense of the world around us. We need to reclaim this broader understanding, alongside questioning the current drivers of impact measurement.
The specific challenges of measuring open youth provision are a call to arms not an excuse to down measurement tools
Do fluctuations in levels make something hard to measure? Not necessarily, and it depends entirely on what one is trying to measure and why. Do we care most about the ‘amount’ of change between point A and point B, or the journey along the way? How much do we care about understanding what influences the fluctuations? And is it possible to measure one thing that necessarily fluctuates according to context (such as confidence) alongside another that may slowly develop (like self-awareness)?
When one talks of measurement in relation to personal and social development, one is necessarily talking of perception: young people’s perception, individually and collectively, of themselves and of the world around them. Simply asking about their perception has the potential to change it – and this process sits at the heart of most relational work with young people, and is to be welcomed and celebrated. It also demands measurement tools and approaches that are fit for purpose, and which go with rather than distort this process.
Some measurement practice is poor and meaningless
Much as assessing quality and process without one eye on outcomes (whether intended or unintended) can become a bureaucratic or meaningless exercise, too heavy-handed a focus on ‘delivering’ outcomes, particularly one driven by funding and PR pressures, too often detracts from and even distorts the quality of provision. This is a central concern at the Centre and we would see our role as supporting organisations and funders to move away from such narrow practice. Poor or meaningless measurement practice is not simply a waste of time: its effects reach much more widely than that. Understanding the interplay and relationship between process, conditions and outcomes matters enormously, but this importance is drowned out in performative impact measurement activity.
How we are taking these perspectives forward in the Centre’s work
Our forthcoming conference will address many of these issues explicitly, with a particular focus on what we do as a result. Tony’s critique is timely and important – we need to talk more about these perspectives – but we also need to shape practical responses.
We’ll be following up the conference with a series of blogs on where our work will focus in the next 12 months and beyond.
Pursuing any agenda related to impact measurement perhaps hangs on the question of whether we expect to see change as a result of our work. If yes, we remain as committed as ever to being clear about what we (the young person, the practitioner, and the external observer) hope that change will be, how we will know if it is happening, and what we can do to create the best conditions in which change might occur. We see a clear value in a focus on measurement but understand that these issues are value laden, highly contested and the source of much debate. A debate we are always happy to have.
In this blog, Bethia McNeil, Director of the Centre, reflects on how we can collectively shape a new path for evaluation and impact measurement. She argues that we need to do more than just think differently; we need to behave differently.
Social sector organisations would be forgiven for giving a weary eye roll at yet another invitation to ‘look to the future’. The promise of opportunities that might be on the horizon, just hidden from view, is a well-used trope within the third sector. Especially within reports and conferences. I can’t be the only person that mentally sings a Disney tune when they read about a ‘whole new world’ just around the corner.
So, why did we decide that it was a good idea to focus our forthcoming conference on ‘shaping the future’? Because, on this occasion, I believe it’s true. And it may well also be an actual opportunity.
All too often, the debate about impact measurement and evaluation is reduced to one of either technical skills (all you need to know is how to produce a theory of change, and which standardised questionnaire to use) or capacity (and I’m never entirely sure whether we all mean the same thing here). But it’s so much more than this. Not only are evaluation and impact more about culture and enquiry than technique, but to focus primarily on building capacity and skills suggests that all we need to know is sitting somewhere, waiting to be passed on. And I just don’t think that’s how it is, nor how it should be.
Developing our collective understanding about how and why our work with young people contributes towards changes in their lives is not going to be achieved through doing more of what we’ve done so far. Will we reach nirvana when every youth charity has a theory of change? I doubt it.
What if we need to do things really quite differently to see things differently? What if we need to try some approaches that have never been tried before, and stop doing some things that we’ve been doing for some time? And what if now is as good a time as any to do this? This is what our conference is about, and it marks a new phase of our work through which we want to make this a reality.
At our conference, and in our forthcoming work, we’ll be exploring the evolution of some ideas you will be familiar with, and also what happens when we bring different disciplines together. We will also look at a range of other conditions and contexts that shape evaluation and impact measurement: leadership, cultures of learning, truth claims and power.
I believe we need to set a new path for evaluation and impact measurement. Sometimes, it can be hard to turn back from a path we’ve travelled for some time (especially if lots of other people are on it too), and we must acknowledge this. But we also need to have the collective courage and openness to explore some different steps.
So whose responsibility is it to shape this future path? I think about this a lot, and I’m not sure. I think it’s our responsibility at the Centre for Youth Impact to create space to actively explore it, and to share ideas and resources that help us understand what the journey might involve, and where it might lead. But – to end as I began: on a cliché – this is your journey.
I look forward to seeing many of you at the conference on 11 September, and to working with you in the coming months.
Our monthly newsletter collects news, events, research and blogs from the Centre, our networks and practitioners and organisations around the world. Sign up here.