
What's in a word? Unpacking impact terminology

2019-09-05

This blog is co-authored by Bethia McNeil, our Chief Executive, and Mary McKaskill, our Practice Development Manager. In it, they explore terminology used across the charity sector lexicon and share some reflections on the Centre’s approach to ‘impact terminology’.

Several terms featuring the word ‘impact’ have made their way into the charity sector lexicon, some more helpful than others. Two such terms are ‘impact measurement’ and ‘impact management’ – neither is a perfect reflection of its meaning, but both offer interesting insights into what we want (or feel we should want) to say, and both can frame a conversation about how we understand the collection of data and what to do with it. We’ve spent some time debating terminology at the Centre, and have decided to write a bit about our thoughts.
 
Before getting into measurement vs management, we want to address ‘impact’ as a term in its own right. We tend to agree with James Noble’s definition as outlined in his blog What does ‘impact measurement’ really mean? In it, he argues that ‘impact’ in the charity sector is positive and intended, meaningful and important, sustained, and achieved by individuals and communities themselves. However, this definition is not without challenge, and surfaces two important distinctions. The first is between intended and unintended effects. Just because an effect is unintended doesn’t mean that we shouldn’t care about it – in fact, we’d argue it’s the reverse. The second, and perhaps more important, is the distinction between what we are looking for and what we’re not. The sector’s tendency to define impact by what it hopes will happen runs the risk of framing approaches to evaluation that are laden with confirmation bias. ‘Impact measurement’ then becomes less an exploration of what is, and more a narrative of what we want it to be, whether or not there is evidence to support it.
 
Defined literally, then, ‘impact measurement’ is measuring the ‘amount’ of impact, using some form of ‘system of meaning’: collecting data to show evidence of the qualities and conditions of ‘impact’. It’s important to say here that we subscribe to the view that measurement is meaning making – it’s making sense of the world around us through representative systems of meaning. In reality, impact measurement tends to be neither measurement nor of impact. ‘Impact measurement’ has taken on the role of being a general term that encompasses all the different types of data that organisations could collect about the (hoped for) effects of their work – and their proxies. ‘Lives touched’ is a common and oft-derided approach to ‘measuring impact’, when it is really nothing of the sort. This sort of popular yet not very useful assertion has contributed to widespread confusion about what is evidence of impact and what isn’t.
 
Impact measurement as a term is contested, and even within our team (ourselves included) there is a level of discomfort in using it. This discomfort is part of a wider debate about what can and should be measured, and whether the term accurately reflects our work. We don’t always agree about whether everything can be measured, and – even if it can – whether we should want to. Similarly, ‘impact’ is frequently conflated with ‘outcomes’ – the tendency to collapse the shorter-term indicators of potential change into the longer-term, sustained effects that we hope individuals and communities experience. A laser-like focus on impact can also eclipse other elements we could usefully measure: the quality of our relationships with young people, for example, how and which young people engage with provision, and whether young people experience what we might hope, such as trust, respect or feelings of safety. The approach to impact measurement that the Centre supports organisations to embed into their practice emphasises collecting data to understand these things. Not necessarily impact.
 
On the whole, we prefer ‘evaluation’ as a broad term. We think that dropping the i-word reduces the risk of overclaiming findings, both in terms of scale of effect and any one organisation’s role in directly ‘causing’ it. In some cases, this overclaiming feels deliberate, but often such misuse of the term is a reflection of the confusion spread by its overuse: the blitheness with which funders and commissioners ask charities how they will “measure the impact” of even the smallest grants or contracts, for example. Placing less emphasis on impact when we talk about data collection opens up the opportunity for learning and action that is grounded in the present rather than the future. This is where our relationships with young people reside, and where we have the most scope to put our learning into practice.
 
We suspect ‘evaluation’ isn’t a perfect term either though, given the connotations of judgement and accountability attached to it. The act of measurement, by contrast, has the potential to be rooted more in meaning and context. We also like that measurement as an act is intentional, systematic, and recorded. If data, whether quantitative or qualitative, does not reflect any of those three qualities, it quickly loses its usefulness.
 
At the Centre, we’re going to stick with ‘measurement’, but decouple it from ‘impact’ – unless that’s what we actually mean. We feel the intentional and systematic practice of measurement is really useful, as is the process of considering how and what you’re measuring, and why. More broadly, we’re going to focus on evaluation – considering and ‘judging value’ – but never without contextualising the term. Our hope is that our collective vocabulary can evolve to be more reflective of intention (whether explicit or implicit) and science: an intellectual and practical activity considering the structure and behaviour of the physical and natural world.
 
So what then of ‘impact management’? This refers to using data to learn and improve what you do. Like ‘impact measurement’ not often being the measurement of impact, ‘impact management’ is not the literal process of controlling the positive, important, sustained change that individuals and their communities achieve for themselves. It’s about learning from young people’s experiences and adapting your offer and your practice accordingly, with vigilance, care and interest.
 
This learning could be about the impact young people experience down the road, but absolutely should not be limited to it. Charities are able to be much more responsive to young people’s views and experiences in the moment than relying on longer-term impact data would enable them to be, which is why seeking regular feedback and implementing continuous quality improvement systems in practice is so important. We expect and hope, however, that adopting the term ‘impact management’, and taking it seriously in practice, increases the likelihood of impact. This is why we are the Centre for Youth Impact. Impact management rests on effective monitoring – another common term in the social sector lexicon. This is why we are so interested in the user and engagement data that organisations routinely collect. ‘Monitoring’ is so much more than surveillance or counting ‘bums on seats’. It is about observation and reflection – essentially the practice of systematic curiosity. Is what I thought might happen, happening? All the time, or just sometimes? What can I learn from what’s really happening?
 
While some have predicted a shift away from measurement towards management, we think that these two terms happily work together. If ‘impact management’ is about curiosity, reflection, and making data-driven decisions, then measurement is a distinct and important stage in that process. Without ‘management’, any ‘measurement’ becomes measurement for measurement’s sake; without ‘measurement’, ‘management’ runs too high a risk of being ill-informed and top down. It is the combination of the two that makes each a part of something transformative.
 
Post-script
 
We’ve only covered impact measurement, impact management, evaluation and monitoring in this blog. There are obviously many more terms that appear in social sector language. For the sake of clarity, here are some terms we won’t use:
 
  • Impact assessment: assessing implies estimation. We’d rather we focused on measuring what is useful and meaningful in the present, rather than estimating what could be, but in reality isn’t (or isn’t yet). Assessment also implies the potential to pass or fail. This is unhelpful.

  • Demonstrating impact: demonstration implies both the proof of something’s existence and the exhibition of it. This feeds the narrative that the impact agenda is always about external drivers – demonstrating it to funders; proving it to government, and so on. It also says little about how one gets to the point of demonstration or ‘proof’.

  • Articulating impact: articulating something means that you’re speaking about it fluently and clearly. That’s great. But it says nothing about the nature of the thing you’re articulating, and to whom or why. This is the end point of a much more important process, and certainly not where one should start. Articulating what high quality practice looks like and feels like as well as how and why you do what you do, should not be confused with articulating impact.

Our next blog will focus on outcomes – conspicuously absent here. More to come…