When treasuring is measuring, and why we might need a rethink
In this blog, jointly written by Bethia McNeil, Pippa Knott and Matt Hill, the Centre’s core team responds to some of the key issues raised by Tony Taylor’s article on measurement in personal and social development. The Centre addresses the challenges found in the current dominant measurement framework and proposes a rethink of the value of measurement in youth work.
Back in March this year, we hosted an event focused on measurement in personal and social development. We were really pleased to see Tony Taylor’s recent article in Youth and Policy, following up on the discussion, and agree that it would have been most beneficial had there been more time and space to explore the themes. Indeed, these themes are so vital that we felt moved to add our voice to Tony’s in this blog. Overall, we were struck by the many points on which we agree with Tony’s forthright critique of the dominant paradigm in impact measurement, but there also remain some areas of fundamental disagreement – perhaps as might be expected in such a complex and contested area.
No measurement framework is ideologically neutral
We agree wholeheartedly that the theory and practice of measurement are never neutral – and arguably, neither should they be. Accepting this latter point would create more space for critical reflection on the inevitable ‘positioning’ of existing practices and frameworks, rather than an illusory search for independence and objectivity. Equally, focusing measurement efforts on the potentially less contested area of ‘skills’ as opposed to character, awareness or consciousness does nothing to make neutrality any more likely.
Any measurement framework involves collating, distilling and selecting information. This process is influenced by the background, experience and disposition of the individual or team carrying out the task, and by the broader political and relational context in which they are acting. We are learning from other sectors – even those more historically predisposed towards the ‘scientific method’ – where people are equally questioning whether we can really rely on scientific robustness to neutralise values and context. Yet we disagree that failing the objectivity test is fatal. Instead, the best research and measurement acknowledge context and bias and embed constant reflection in research practice, including consideration of the influence on relationships ‘in the moment’. This has the potential to result in an enhanced, not a diminished, science and practice.
The current dominant measurement framework systematically undervalues certain forms of activity, and privileges others
Open youth work, a term usefully endorsed by Tania de St Croix, sits at odds with many dominant forms of measurement, not least due to a (laudable) resistance among practitioners to impose pre-determined measurable ‘outcomes’ upon young people. We believe that we absolutely need to find ways of gathering, interpreting and sharing data about this approach to engaging with young people that both enhances practice and generates meaningful evidence and insight. We do not believe that this should be led by the pursuit of funding (particularly given that, as Tony and others have noted, the relationship between evidence and funding is nowhere near as straightforward as we might think/hope), or capitulation to broader forces that can seem overwhelming. We believe that this is about learning, improving, developing and advocating.
At the Centre we do not define the youth sector tightly. In fact, we sometimes deliberately don’t define it at all, preferring to work with and build shared understanding and learning among as many organisations engaging with and supporting young people as possible. But this is dependent at least in part on the ability to talk meaningfully and collectively about our practice. We agree that it might be harder to ‘measure’ the impact of youth work than other more targeted or narrowly defined forms of work with young people – but, for us, this demands that we develop how we measure and understand what really counts about youth work, and via a process that enriches rather than undermines practice. We should also pause to reflect on why we feel it’s ‘harder’ to measure impact in (particularly open) youth work: harder fundamentally, or harder to fit within the dominant paradigm?
There are broader conceptions of measurement out there
We agree that the current dominant paradigm of measurement in the social sector has emerged within the neo-liberal landscape and shares (and actively perpetuates) many of its key features – monetisation, marketisation and competition, to name a few. But neo-liberalism doesn’t have a monopoly on measurement, and indeed there are many participatory and emancipatory methodologies of ‘measurement’ that are fundamentally opposed to it. There is a risk that we throw out the ‘measurement’ baby with the neo-liberal bathwater. Instead, we believe that effective measurement of open youth work is a crucial bulwark against narrow conceptions of value. ‘Measurement’ in the current parlance has become shorthand for a particular approach that tends to focus on outcomes, objectivity, attribution and individual change. It is perhaps inevitable that most practitioners experience this form of measurement as externally imposed, though we should not overlook or discount those who feel it has brought a helpful challenge to their practice.
Our stance is that measurement is a fundamentally human activity that is woven into every aspect of our lives, and which helps us make sense of the world around us. We need to reclaim this broader understanding, alongside questioning the current drivers of impact measurement.
The specific challenges of measuring open youth provision are a call to arms not an excuse to down measurement tools
Do fluctuations in levels make something hard to measure? Not necessarily, and it depends entirely on what one is trying to measure and why. Do we care most about the ‘amount’ of change between point A and point B, or the journey along the way? How much do we care about understanding what influences the fluctuations? And is it possible to measure one thing that necessarily fluctuates according to context (such as confidence) alongside another that may slowly develop (like self-awareness)?
When one talks of measurement in relation to personal and social development, one is necessarily talking of perception. It is inevitably young people’s (individually and collectively) perception of themselves, and of the world around them. Simply asking about their perception has the potential to change it – and this process sits at the heart of most relational work with young people, and is to be welcomed and celebrated. It also demands measurement tools and approaches that are fit for purpose, and which go with rather than distort this process.
Some measurement practice is poor and meaningless
Much as assessing quality and process without one eye on outcomes (whether intended or unintended) can become a bureaucratic or meaningless exercise, too heavy-handed a focus on ‘delivering’ outcomes, particularly one driven by funding and PR pressures, too often detracts from and even distorts the quality of provision. This is a central concern at the Centre and we would see our role as supporting organisations and funders to move away from such narrow practice. Poor or meaningless measurement practice is not simply a waste of time: its effects reach much more widely than that. Understanding the interplay and relationship between process, conditions and outcomes matters enormously, but this importance is drowned out in performative impact measurement activity.
How we are taking these perspectives forward in the Centre’s work
Our forthcoming conference will address many of these issues explicitly, with a particular focus on what we do as a result. Tony’s critique is timely and important – we need to talk more about these perspectives – but we also need to shape practical responses.
We’ll be following up the conference with a series of blogs on where our work will focus in the next 12 months and beyond.
Pursuing any agenda related to impact measurement perhaps hangs on the question of whether we expect to see change as a result of our work. If yes, we remain as committed as ever to being clear about what we (the young person, the practitioner, and the external observer) hope that change will be, how we will know if it is happening, and what we can do to create the best conditions in which change might occur. We see a clear value in a focus on measurement but understand that these issues are value laden, highly contested and the source of much debate. A debate we are always happy to have.
In this blog, Bethia McNeil, Director of the Centre, reflects on how we can collectively shape a new path for evaluation and impact measurement. She argues that we need to do more than just think differently: we need to behave differently.
Social sector organisations would be forgiven for giving a weary eye roll at yet another invitation to ‘look to the future’. The promise of opportunities that might be on the horizon, just hidden from view, is a well-used trope within the third sector, especially in reports and conferences. I can’t be the only person who mentally sings a Disney tune when they read about a ‘whole new world’ just around the corner.
So, why did we decide that it was a good idea to focus our forthcoming conference on ‘shaping the future’? Because, on this occasion, I believe it’s true. And it may well also be an actual opportunity.
All too often, the debate about impact measurement and evaluation is reduced to one of either technical skills (all you need to know is how to produce a theory of change, and which standardised questionnaire to use) or capacity (and I’m never entirely sure whether we all mean the same thing here). But it’s so much more than this. Not only are evaluation and impact more about culture and enquiry, but to focus primarily on building capacity and skills suggests that all we need to know is sitting somewhere, waiting to be passed on. And I just don’t think that’s how it is, nor how it should be.
Developing our collective understanding about how and why our work with young people contributes towards changes in their lives is not going to be achieved through doing more of what we’ve done so far. Will we reach nirvana when every youth charity has a theory of change? I doubt it.
What if we need to do things really quite differently to see things differently? What if we needed to try some approaches that have never been tried before, and stop doing some things that we’ve been doing for some time? And what if now was as good a time as any to do this? This is what our conference is about, and it marks a new phase of our work through which we want to make this a reality.
At our conference, and in our forthcoming work, we’ll be exploring the evolution of some ideas you will be familiar with, and also what happens when we bring different disciplines together. We will also look at a range of other conditions and contexts that shape evaluation and impact measurement: leadership, cultures of learning, truth claims and power.
I believe we need to set a new path for evaluation and impact measurement. Sometimes, it can be hard to turn back from a path we’ve travelled for some time (especially if lots of other people are on it too), and we must acknowledge this. But we also need to have the collective courage and openness to explore some different steps.
So whose responsibility is it to shape this future path? I think about this a lot, and I’m not sure. I think it’s our responsibility at the Centre for Youth Impact to create space to actively explore it, and to share ideas and resources that help us understand what the journey might involve, and where it might lead. But – to end as I began: on a cliché – this is your journey.
I look forward to seeing many of you at the conference on 11 September, and to working with you in the coming months.
The Centre's essay collection informs, challenges and encourages you to take your thinking one step further.
It is several years since the Select Committee Report on Services for Young People pushed the debate about evidence and impact into the mainstream.
For some, this was a new conversation; for others, it was a repeat of many that had gone before. The conversation shows no signs of going away, and has arguably only picked up pace. But where has the debate got to?
In 2016, The Centre for Youth Impact launched a new collection of essays to help us to take stock, understand the differing perspectives, and explore where the conversation has taken us. The essays brought together leading thinkers, practitioners and policy makers, along with young people who were reflecting on their role in this agenda. We hope that the essays will continue to inform, challenge and encourage you, taking your thinking one step further.
The first two essays, from Dr Nick Axford and Dr Genevieve Maitland Hudson, were published in April 2016.
Two further essays were published on 6 July 2016 at the Centre for Youth Impact Gathering 2016, from Jenny North and Tania De St Croix.
Since September 2016, five additional essays have been published via our monthly newsletter, from Kai Hopkins, Kaz Stuart and Steve Hillman, Emma Taylor, Asha Ali, Beca Sandu and Michael Little, and Jane Melvin.
You can read the essays along with many other useful tools in our Resource Hub.
In an article originally published in Campbell Tickell’s CT Brief, the Centre’s Pippa Knott explores charities’ different objectives in undertaking impact measurement – and on whose behalf. She argues that the mission must be to understand impact and care about getting meaningful answers, rather than seeking tick-box ways to respond to others’ demands.
The arguments for why charities should measure the impact of their work are well rehearsed. Some say that charities must measure impact to be accountable to existing funders, or able to market their work to new ones – both pragmatic responses to external context. Others set out ethical responsibilities to understand and improve interventions (and stop anything that isn’t working), or to support beneficiaries to reflect on their own progress. Others again are interested in increasing sector-wide knowledge about society and social programmes.
But are these objectives mutually reinforcing? Can they all be met by the same impact measurement activities, or by a single organisational function? Might there be contradictions between some of them? And more fundamentally, which relate most directly to sustainability for the most effective socially focused work?
Impact measurement has been one of the most dominant – and disputed – features of the evolution of the charitable sector in recent decades. Its profile has risen in line with debates about new public management, social value and austerity. Reasonable consensus exists as to what impact measurement is attempting to achieve. The Big Lottery Fund defines impact measurement as “the process of trying to find out what effect an intervention is having on people, organisations or their external … environment”. Measurement practice often seeks evidence of positive impact on outcomes: the changes brought about in the lives of beneficiaries. Activities associated with impact measurement include: producing narratives of how impact is intended to be achieved (logic models or theories of change); gathering monitoring or outcomes data, often from beneficiaries via self-report surveys; or gathering stories of individual lives transformed.
Exactly how, why and for whom impact measurement “gets done” is more confused and contentious. Many people will have views on what “evidence” a charity should collect, and how. These people may be internal (managers, fundraisers, practitioners, beneficiaries) or external (policy makers, funders, consultants, academics). Varying pressures on social organisations to measure the impact of their work have resulted in a flurry of data collection, and strong debate about what constitutes evidence, both “good” and “bad”. But there is a risk that this data is more about responding to immediate pressures, and misses the point of the relational work that is at the heart of socially orientated work.
To be valuable in the widest sense, impact measurement needs to be done with curiosity and honesty: a desire to use evaluation to become more effective, rather than to reinforce existing views on effectiveness. It also needs to be a collective endeavour, rather than an exercise in competitive advantage for individual organisations. However, the environment in which charities operate is rarely conducive to this, and the incentives are weak. For many, meaningful impact measurement requires a difficult process of culture change, and a mind-set shift for policy makers and funders as much as for delivery organisations.
The Centre for Youth Impact was established in 2014 to support organisations that work with young people to change their practice in relation to impact measurement. In partnership with our networks we are developing approaches that are valuable to the statutory and voluntary organisations that make up today’s youth sector with the aim of improving provision and moving it to a more sustainable footing.
In the midst of the complexities and tensions around impact measurement, charities need to establish organisation-wide clarity about why (and for whom) they are measuring, and whether they are developing insight that will enable practice – and outcomes for young people – to improve as a result. They need to be able to access tools that will enable them to do this, and align their practice with their peers. We believe that the key questions that every organisation should be asking with openness as to what they might learn and do differently as a result are: Why do you do what you do? What exactly are you doing? Are you doing it consistently well? Are you true to your premises? What do your beneficiaries think of what you do? Are you achieving your aims?
When individuals at all levels understand the questions at the heart of impact measurement and care about getting meaningful answers, the resultant data is more likely to reflect the realities of practice and be used to improve it. The mission must be to understand impact, rather than seeking tick-box ways to respond to others’ demands. There is a technical aspect to this, but relationships and culture matter much more.
In this blog, Bethia McNeil, Director of the Centre explores the seeming disconnect between evidence and learning.
The debate over whether proving or improving should be the focus of evaluation is a particularly live one. The underlying suggestion is that we all – obviously – want to improve rather than prove, given half a chance. But improving provision is intentional, and at its heart involves learning: not just the desire to learn, but also the capacity and the feasibility.
This blog is a write up of a recent talk I gave at an event where I was asked to talk about why we might not be learning from our evaluation and impact measurement activity.
It is perhaps odd, maybe contentious, to suggest that we – those of us involved in the broad range of provision for young people – have arrived at a point where evaluation is disconnected from learning, but I believe this might just be the case. It has become something of a cliché to talk about a shift from proving to improving, but this is overly simplistic. It suggests that there is an either/or choice, and that the two might be mutually exclusive. It also creates a reassuring sense that all we need to do to refocus is to change our language. None of these things are true. Much of our evidence-gathering effort is about decision making – this is the case whether we’re seeking to prove or improve – but again, making decisions implies that we are learning from and acting on the evidence that we gather.
There are three particular signs that I think suggest that evidence and learning are disconnected:
Firstly, that after many years’ effort in evidencing the impact of informal and non-formal learning, we are no further forward in coalescing around a shared evidence base that affords us a collective language in identifying, building on and advancing “the difference that makes the difference” – that is, the particular elements of effective practice in informal and non-formal provision, and the difference it creates, or contributes towards, in young people’s lives. This is particularly remarkable given the strong – and justified – complaints about the burden of impact measurement on providers, and the hours of effort expended. It also suggests that if indeed we have all been focused on proving over improving, then our bar has not been set very high.
Secondly, we have no sector-wide quality improvement effort, based on a shared understanding of what constitutes ‘quality’, its relationship with impact and how we know whether quality ‘is present’. We continue to advance an organisation by organisation approach that is designed to highlight and protect the uniqueness of each one’s work. We are nowhere near a scenario where we can say – individually or collectively – that because we do this, we can reliably predict that young people will experience that, and that these changes might be sustained into other contexts in their lives.
Thirdly, and finally, there continues to be a fundamental disconnect between the evidence we are gathering, and the act of ‘knowing’ about our work. We rightly prioritise the voices of young people, but seem to imply that our impact measurement practice is incompatible with this. We talk of the difficult trade-off between relevance and rigour. Why are the two apparently in conflict?
But my complaints come from a place that assumes that evidence gathering and evaluation should be about learning. What if they aren’t?
I think there are several reasons why we collect and collate evidence: PR and fundraising, accountability (to both our funders and to young people), and learning and improvement. There may be a few more too.
In an ideal world, these reasons would come together, rather than divide our efforts. But we are a long way away from that now. And where is the incentive to change? Culturally, we don’t have a clear blueprint or understanding of what it means to be a ‘learning organisation’ – and this goes so much further than the current fashion for talking openly about failure.
This is a blog about complaints, rather than solutions, so I am not offering any particular responses here. That will be the subject of a future piece.
A member of the audience for the talk I gave last week told me afterwards that I had made people ‘bristle’. Good, and let us now mount a defence, but not as individuals – as a collective.
Ed. - A version of this talk was originally given at a Generation Change/Step up to Serve roundtable on quality youth social action.
Nadia Zemouri and Sam Bell also attended the Portsmouth Social Action Conference and recorded their impressions of the event.
On 16 February 2017, Sam Bell and I attended a conference on Youth Social Action, held at Fratton Park football stadium in Portsmouth.
The day was packed, covering many different aspects of social action, from sharing good practice to forming relationships and networks to benefit young people from across the city. Unlike at most conferences, the people involved were often unaware of each other’s existence and work. However, with slots in the programme designated purely for networking, organisations, both statutory and non-statutory, formed friendships that will, undoubtedly, lead to benefits for young people across Portsmouth and, arguably, the country.
The day started with several speeches. First, Brian Bacher (Portsmouth Together coordinator) introduced the key aims of the conference, highlighting the work already being carried out and how to expand and continually improve upon it.
The conference aimed to bring together young people from local schools and colleges and professionals in the youth work sector. This collaborative approach, gathering a range of opinions from all ages and backgrounds, was incredibly useful in dispelling stereotypes and naivety, and offered many attendees a fresh perspective. Brian then introduced the six speakers who would form the panel for the Q&A discussion, alongside Steve Frampton MBE.
After a brief interval, the crowd split off into the first of their selected workshops for a focused, in-depth discussion around key issues. The four workshops available were: ‘Benefits and barriers to young people getting involved in youth social action’; ‘Creating more opportunities for young people involved in volunteering and social action’; ‘Enabling more young people to make a difference across health and social care’; and ‘Giving young people a voice’.
In the workshop I attended – ‘Enabling more young people to make a difference across health and social care’ – some of the delegates, representing the local NHS trust, were particularly interested in how to engage young people in the evaluation of NHS services in the area. They were unsure how to go about involving young people, so we and our counterparts in the group discussion contributed our own experiences. By the end, the NHS delegates had several avenues for including young people in their research, helping to ensure a more accurate evaluation.
After the first workshop, everyone enjoyed a buffet lunch, and delegates were encouraged to network over the break. Afterwards, the groups switched around so that everyone attended a different workshop. Alongside the networking intervals, some VIP guests attended: two Conservative MPs (both representing Portsmouth constituencies) gave their personal time to attend, speak and acknowledge the work of volunteers, organisations and charities from across their city.
The overall consensus on the event appeared to be positive, with the young people wanting to engage further in youth social action and the professionals in attendance leaving with real food for thought on how to improve their current operations.
In this blog written for the Centre, Jack Welch, one of our Young Researchers, reports on his personal experiences from and thoughts on the Portsmouth Youth Social Action conference, held on 16 February, 2017.
Not so long ago, the mention of ‘social action’ in a voluntary context would generally be regarded as little else than another buzzword to be included as part of an already expansive collection.
That was my initial reaction, at least. Since the change of government in 2010, we have already had ‘big society’, and, before that, ‘participation’. Priorities change, depending on the political weather. As with most opinions, mine is open to change and reassessment, and youth social action, in particular, serves as a wider umbrella term for all acts of activism in which young people can take part.
By the government’s own definition, from the Office of Civil Society, social action is:
‘… about people coming together to help improve their lives and solve the problems that are important in their communities.’
On the surface, it is as vague as it needs to be. Social action manifests itself in many aspects of community development, literally or more figuratively. From formal volunteering in an organisation to befriending and co-production, research has shown an upward trend in young people engaging in some kind of social action.
In the past two years alone, the #iWill Campaign, set up to help 60% of young people aged 10-20 become involved in social action by 2020, has calculated that 42% of 10-20 year olds have taken part in something ‘meaningful’ at least once in the last year. By most accounts, according to the data gathered, there is a marked benefit in terms of life satisfaction and increased resilience to challenges young people might face in their lives.
Numbers, though, are of little significance in proving impact compared to witnessing the testimony and evidence of individuals themselves, which is why the Portsmouth Youth Social Action Conference made the personal value of social action a driving focus of its event. Hosted by Portsmouth Together, it was valuable to see a more equal balance of young people and professional attendees in the room for the day.
What was immediately striking, from the line-up of diverse speakers on the morning agenda, was just how commonplace certain terminology had become. As Steve Frampton, Principal of Portsmouth College, remarked, we now need a ‘Curriculum for Life’ to ensure that social action can be embedded earlier on, through education. A campaign led by young people is now a movement across distinct sectors of society, especially the education and voluntary sectors.
Full-time volunteering, as presented by speakers on behalf of City Year, is another example of how young people are willing to commit to social action, in this case for at least a year of their lives. The evidence has already pushed government to pursue a review of how full-time volunteering can be given legal recognition in the UK. As Rania Marandos, Deputy CEO of Step up to Serve (#iWill Campaign), highlighted, there is, however, still a visible gap in the numbers of young people taking part in any kind of social action, let alone full-time, between the most and the least affluent backgrounds (9% in 2016).
I was very interested to hear, during the panel Q&A that followed the individual presentations, a question from an audience member about how disabled people can be engaged in social action. Most panellists gave encouraging overtures and their support for ensuring that those with additional requirements have the same level of access to take part. But, as with employment, this is not always easy to put into practice, and barriers are still to be overcome across all social action opportunities.
Within the workshops, it was interesting to hear just how many attendees (not all of whom work in youth sector provision) had heard few or no examples of the benefits of youth social action. Sharing good practice is vital, and a common aspiration for ensuring greater success within the social action framework, but even for charities, as I gathered from group discussions, it does not always happen as readily as we would like to believe. Additionally, many businesses and other organisations with smaller capacity, who would like to take on more volunteers, are often limited in the support and training they can give to those who could be engaged. Small institutions, such as local museums, which depend on volunteers, are likely to lose out on this opportunity most.
For every barrier, though, the benefits outweigh these recurring problems: inclusion, responsibility, empowerment and personal development are prime examples of the strengths of social action.
Whether a revolution, or simply the next phase of enhancing our society, youth social action looks set to stay for the foreseeable future. While its benefits, which seem obvious, make it a worthy theme of development in political spheres and beyond, it needs to find a way to serve a much wider group of beneficiaries and to be taken up beyond the usual suspects within the sector.
This is something of which the #iWill Campaign and other supporters are fully aware. Portsmouth, a good case study of social action taken seriously, has seen cutbacks in public funds that have forced its own youth projects to acclimatise to a less hospitable climate. While we may take these projects for granted, continued squeezing of resources will ultimately diminish the level of work that can be accomplished, regardless of the high ideals we place on its societal value.
It is time this imbalance was addressed.
In this piece for our November 2016 newsletter, Kenton Hall, Communications Officer for the Centre, offers his reflections on "proving" and "improving" and the questions this discussion has raised within the youth sector.
Imagine for a moment that you know nothing about working with young people.
For those who have dedicated their lives to the study and practice of supporting better outcomes for young people, this may seem an impossible leap to make. It is too much part of the fabric of who they are; they are confronted, day-to-day, by challenges both personal and procedural, by the dichotomy between the complexity of working with individuals and the minutiae of enabling the provision of this work.
And, most importantly, there are actual young people involved, whose lives and futures can be affected positively or negatively as a result of that work.
It is, therefore, easy to understand, in what must often feel like an all-consuming role, the temptation to take sides, to seek allies, to push back against anything that feels as though it conflicts with your core goals or appears to further politicise an already demanding mission.
Within the youth sector, this often flourishes in a sense of conflicting agendas, or in some cases, the feeling that such an agenda has been imposed: to satisfy funders, to conform to a “new” way of doing things or, worst of all, to undermine or discredit instinctive and professional approaches.
The idea that seems to cause the most consternation, in some circles, is that of “proving impact”.
In what can be highly pressurised conditions, does this feel like criticism? A suggestion that the work has not made a difference?
To which, it may seem a natural response to ask:
“Why am I being asked to prove it, when I could just be getting on with doing it?”
“Of course I can prove it. I already know the work makes a difference.”
Sometimes, the concern is more about whether anyone is listening.
It is difficult to imagine, at first glance, why anyone who works with young people would be averse to the idea of improving the offer they make to them, the support they provide.
And yet, under challenge, has it become the case that the need to prove, whether willingly or otherwise, has displaced the desire to improve?
And if the key problem is that terminology, and the ideology it reflects, are obscuring shared goals, preventing us from acknowledging and embracing them, then perhaps it would be worthwhile to look at both ideas with fresh eyes.
How would it look if we aligned “proving” and “improving” along an axis that, broadly, equated to the “head” versus the “heart”?
Let’s take the position that proving is often seen as an exercise, a way of satisfying, or convincing, those on whom practitioners, often grudgingly, rely for funding and resources to do the work about which they are passionate. They put their “heart” into the day-to-day work of relating, of listening, of supporting real life ‘outcomes’: an improved quality of life for the young people with whom they work.
Being asked to prove the benefits of what they’re doing, to measure and demonstrate the impact of their work, may feel like a distraction and an imposition: further time subtracted from a clock that already seems to be running too fast. For others, it is antithetical to their approach; it fundamentally undermines and interferes with their relationships with young people. It is a manifestation of a political climate that emphasises competition and austerity.
But the work is important; the proof should be obvious: what they do matters. They can see the things – feel the things – they are being asked to prove.
Those who work in impact measurement, on the other hand, might feel a similar frustration with the suspicion, in some quarters, of their work. They see, just as clearly, the necessity, with such important work, to evaluate, to ask systematic questions, to understand what makes a difference and to whom. By being able to ‘prove’ the impact the work is having and, perhaps most importantly, by going through the process of considering what ‘proof’ might mean, organisations can not only explore, question and reflect upon their inner certainty that a difference is being made, but also consider where the work can be improved, even evolved, to better support young people.
And so, it may feel that the head is concerned that the heart will miss something vital.
What would happen, however, if we reversed these positions? If we considered “proving” as a function of the heart and “improving” as a function of the head?
Can impact truly be measured or outcomes demonstrated without an emotive component? Without the drive, the passion, to do the best for the young people with whom you work, can sufficient honesty be generated to identify flaws, to identify approaches that do not achieve what was intended, and to thoroughly investigate processes to ensure that they are fit for purpose?
Or could it be that this is the source of the resistance? Is there an element of fear in play when asked to prove something, a fear that admitting a need for improvement and the associated uncertainty would be, in essence, an admission of failure?
Anyone who has ever shared knowledge with someone else, whether in a group setting or one-to-one, knows that one of the subsidiary benefits is that in the act of communicating knowledge, we refresh our own, we sharpen our understanding of our own practice.
So, consider the process of ‘proving’ impact, of measuring and communicating that impact to others, as an act of sharing learning and insight. You may have to demonstrate the impact of your work for what appear to be procedural reasons – to funders, for instance – but is this not an opportunity, a means of helping them to understand what you do and why, and, in turn, reflecting yourself on what you do that ‘works’ and where you can improve?
Conversely, those focused on impact measurement, on processes, tools and methods by which this can be refined, made more reliable, made more robust – must never forget that the result for which they’re working is not better measurement, but better outcomes. The eventual target of improvement is the quality of the young people’s lives.
With that in mind, a different sort of rigour may be applied to the scientific and academic approaches by which tools are developed and processes honed. “Improving”, with all its potentially ephemeral qualities, becomes an essential factor in the work. “Proving” can no more be an end in itself within the academic community than it can be for those working directly with young people.
These are, however, simply thought experiments, designed to make us think about two broad and, seemingly, contradictory ideas from a different angle.
Is there really that marked a divide between the head and the heart, between “proving” and “improving”? Surely, as in life, to achieve all of which both are capable, they must work together in concert?
The head and the heart must be joined.
We prove because it offers the opportunity to improve in ways we may not have otherwise identified. We improve so that the act of proving becomes less of an onerous addition to our work and more an inherent component of it.
Obviously, it would be short-sighted to imagine that any division in practice is easily bridged. The head and heart are, too often, prone to conflict. But they are part of the same organism. They inform and feed each other. They must admit and understand the limitations of the other in order for the body to function.
It is much the same with “proving” and “improving”. If we start from the assumption that we are all working towards the same goal albeit, at times, using different methods and following different paths, then the act of proving and the art of improving can serve most effectively in tandem, for the benefit of the young people with whom we work.
In this guest blog for the Centre, Dan Barton, Senior Area Youth Worker for Devon Youth Service discusses his experience of measuring evidence in youth work and the questions and thoughts it has inspired.
If you are reading this, the value of youth work is a subject about which you either already have a strong opinion, or genuine questions. It is certainly discussed regularly across the sector, with passionate arguments being made for a variety of positions.
Personally, I think that, for years, we have been guilty of searching for a “holy grail” of outcomes, a “silver bullet” that would, finally, solve the eternal mystery of why this work is so often undervalued outside of the sector.
Conversations about teenage pregnancy, anti-social behaviour, employability and, increasingly, radicalisation and child sexual exploitation are commonplace; people want to believe that there is a place for youth work but are often unsure as to where that place lies, or what form it should take.
The following questions, therefore, are ones we all need to ask ourselves: What part do we have to play in these strategic objectives? How do we contribute to these outcomes?
How can we stop young people being a problem?
The at times controversial “P” word has been less explicit in recent years, but it remains an important part of the puzzle.
Simply put, I would argue that it is impossible to practice good youth work and not meet these outcomes, or contribute in our own small way (and it is sometimes a small way, something we all have to learn to accept as part of this journey.)
I was reflecting to my wife last week that I never have to challenge my colleagues about their behaviour. I never feel uncomfortable. I never feel threatened, bullied or undervalued. Their values are intrinsically linked to youth work – inclusion, diversity, honesty, respect and a real sense of optimism for young people. As a group, we seem to know instinctively that quality youth work has value. We are lucky enough to see evidence of it every day and luckier still to have confirmation from the young people themselves.
For example, I can’t get through a day at work without seeing at least two or three of the following things happening:
Young people being spoken to in a way that is absolutely without a power or status divide, aiding a sense of self-respect and the development of trusting relationships.
Young people being given their third or fourth chance, when they tell us stories of how they have been isolated from other areas of their lives (school, home, work), aiding the development of resilience, belief and self-worth.
Young people being challenged to be more than they ever thought they could be: in the language they use, how they treat others, and how they expect to be treated by others, nurturing respect and tolerance.
Young people being offered responsibility – real responsibility, such as allocating funding, making decisions that affect their communities, taking an active role in social action. This fosters a sense of belonging to their communities and to society as a whole.
Young people turning to us daily when others have let them down in their hour of need and receiving support that helps to increase their sense of worth.
There are many more examples, of course, but the demands of a word count will stop me from rambling.
Yet, despite this seemingly definitive, real-world evidence, we must contend with words and phrases that seem to defy the demonstrated value: ‘austerity’, ‘tough choices’, ‘we can’t provide the gold standard anymore’ and ‘doing more with less’.
Of course, when money is tight, who is going to simply take our word for it when everyone else is talking about their value too, and when judgments are being made on a sweeping scale about services and the contributions they provide? Again, perhaps we have been caught reaching for the gold when a sufficient amount of silver would accomplish more: the thousands and thousands of small steps that young people make every week.
Can we prove that we have stopped a young couple becoming pregnant (or even judge conclusively that this would be a positive outcome)?
Can we prove that our intervention was responsible for young person X gaining employment?
Can we state empirically that we were responsible for young person Y being less at risk of being sexually exploited?
This would be particularly difficult.
That's where the Centre for Youth Impact and Project Oracle come into our story. They understand youth work; they understand the small steps we take in working with young people. Crucially, they have the ear of funders, commissioners and service directors (who are willing to learn) and others that so desperately need to understand the value of our work.
We designed our Theory of Change (no mean feat), we drew up our evaluation plan, we designed or adopted recording and measuring tools and we started to tell our story in a different way – our way. Our impact measurement practice now fits our work. Developing the voice of young people and ensuring that it is heard remains a core expectation of our youth work.
We just record it differently now, in a way that allows us to gather the statistics and qualitative data that are relevant to us. Pride amongst our staff team has, in turn, improved. We know that our work does change lives, a little bit at a time, and we are learning more about how that happens.
We especially enjoyed developing our Theory of Change (‘what we do and how we do it’ as the staff called it). We liked discovering how we make assumptions in our work and owning them – allowing ourselves to unpick what has previously been regarded as ‘given’ or ‘implicit’ in our practice: talking respectfully to a young person for example – something young people tell us is unusual, believe it or not!
Working through the Standards of Evidence validation process with Project Oracle was quick and painless. We heard great ideas about how to tweak our frameworks and were given some tools to achieve the desired effects. We also had some help with our evaluation plan, which was great: I got to role-play, challenging the staff with things like “so what?”, “who cares?”, and my personal favourite, “we can’t say we do it if we can’t prove it”. Fine-tuning our outcomes framework, looking at what we can measure against each area, and figuring out creative ways to evidence each statement was very enlightening and great to share with others. I believe we now have a collective understanding of all of the above, and it helps staff to focus their efforts on specific tools rather than a scattergun approach.
I would certainly recommend embarking on this journey to any organisation that works with young people. You owe it to your staff and to your young people to do all you can to make quality youth work sustainable.
With potential changes to how relationship education could be delivered in schools, and Scotland taking the lead in the UK in putting youth work back on the map, there has never been a more opportune time to combat the frustration that many are feeling, with everyone within our sector grabbing a song sheet and singing the same tune.
Dimitrios Tourountsis writes about his experiences and some of the challenges he faced as London Youth's Head of Learning.
This blog is written for those with a keen interest in understanding the principles and methods of implementing evidence-informed practice in the youth sector. Through my story I want to challenge you to consider equality of intelligence and complexity as the starting points in any efforts to evidence and understand the value of youth programmes. First, though, I need to make an important disclaimer. My thinking is influenced by Rancière’s theory on equality as presented in his book The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation, and Roberto Mangabeira Unger’s essay in a recent publication by Nesta and Palgrave (2015), New Frontiers in Social Innovation Research.
In a scene from The Hitchhiker’s Guide to the Galaxy, robots capture Arthur Dent and Ford Prefect. Meanwhile, marketing people and shoe shops have conquered the world. “What’s the matter with him?” Dent asks about one person who is moaning. “Oh, his feet are the wrong size for his shoes,” snarls the marketing droid. In this throwaway remark, Douglas Adams captures a specific attitude and way of thinking. Fit the feet to the shoes, not the other way around. Take a manufactured or prescribed item and fit the human into it.
The above scene encapsulates the challenge I faced when I joined London Youth as its first Head of Learning. Back in 2013 the youth sector was already experiencing crippling funding cuts. Existing business models were obsolete, youth organisations were feeling disoriented, and practitioners were cynical. A number of sector-level initiatives were responding to the pessimism by setting out a collective and progressive approach to evidence, including publication of the Catalyst Youth Outcomes Framework and introduction of Project Oracle’s Standards of Evidence.
Nevertheless, fundamental questions remained unanswered and untested.
Where do humans fit into conversations around impact, evidence and outcomes? How can you reconcile two seemingly different worlds: an alienated sense of academic reality and the lived experience of practitioners? Is it worth applying complicated designs to complex issues? Are we developing evaluation frameworks and using measuring tools that don’t fit our feet? And if we all agree that it is really important to wear shoes that fit our feet, who has the ultimate responsibility of ensuring that the shoes are of the right size?
Equality of intelligence and understanding complexity were my guiding principles when changing London Youth and supporting practitioners, managers and funders to understand the value of youth programmes.
Equality of intelligence
London Youth began its adventures in the land of knowledge by engaging and asking practitioners about the value of their work in the manner of ordinary people rather than academics or scholars. By acknowledging his own ignorance, the Head of Learning refuses to assume the position of ‘knowledgeable expert’. He orchestrates an environment where knowledge about what works is the result of a collective learning exercise. The ignorant Head of Learning verifies the work of practitioners’ intelligence with attention and interrogation. He abandons the rhetoric of deficiency and expertise, listens to a youth practitioner or manager whose thinking might never have been valued before, and facilitates professional autonomy and intellectual growth in virtually unlimited directions. According to Rancière, “there is stultification whenever one intelligence is subordinated to another... whoever teaches without emancipating stultifies”.
Understanding complexity
However, the ignorant Head of Learning needs a broad awareness when making choices. The youth sector is a complex rather than a complicated system, and the Head of Learning needs to know the difference between the two. He or she should know how parts of the system give rise to collective behaviours, understand indirect effects and how the system interacts with its environment. Pushing on the youth sector "here" often has effects "over there" because the parts are interdependent.
The scientific method and systematic experimentation are often viewed as designs borrowed from complicated systems (medicine, physics or engineering) and imposed from the top (government and funders) on complex systems (youth or social work and education).
The Head of Learning should find ways of utilising methodologies from complicated systems, and supporting practitioners to take ownership of them.
If we argue that societies and practitioners have the potential to be far more active agents of their own future than one assumes, then we should accept that systematic experimentation is a faster and far more robust way to solve complex problems than clever authorship of case studies, funding bids, press releases and campaign material.
Practitioners’ professional autonomy is strengthened when they are able to express scientific awareness of social and economic changes going on in the lives of young people, rather than falling back to anecdotes, hunches and political patronage. The ignorant Head of Learning ensures that robust experiments in the real world of youth work drive the development of new ideas.
The ignorant Head of Learning has followed the collective intelligence of London Youth’s practitioners and members by firmly believing that practice precedes theory.
He helps them to respond to complex issues by adapting to changes, improving practice, being accountable, and learning.
Our monthly newsletter collects news, events, research and blogs from the Centre, our networks, and practitioners and organisations around the world. Sign up here.