This blog was written by Venetia Boon, Children and Young People Grants Manager at Comic Relief.
Being a funder of youth organisations is a great position to hold – the breadth and depth of the work taking place is astonishing, and the amount of expertise hard to fathom. It would be impossible to get a detailed understanding of all the knowledge running through and around those organisations. But to do my job well, I need to strive for three things:
It’s also part of my role to feed into the collective organisational knowledge of Comic Relief. My feedback on the impact of the project should fit into the jigsaw of the charity sector activities and Comic Relief’s approach to funds in the UK and internationally.
The keywords for our approach to learning and evaluation are: appropriate, proportionate, grantee-led, and realistic. Learning should primarily be for and by the people receiving funding, so they can focus on what is relevant in the context they work in. They need to own the learning and feel able to use it effectively. While we want to drive good monitoring, evaluation and learning practice, and learn from our grantees, we don’t want to dictate the specifics to them. Nor do we want to make people feel duty-bound to create processes they can’t keep up with or that don’t serve their purpose.
As a grant manager, I’m also interested in honest conversations with the people receiving our funding. I want to know about the unexpected outcomes, what went wrong, and how the life experiences of beneficiaries and staff were integrated into the learning. The things that didn’t work quite as expected, that were adapted and adjusted, are just as interesting and valuable as the things that worked perfectly. More so, even.
Coming away from the Centre for Youth Impact Gathering, my attention was caught by a few things. Firstly, I was really struck by the discussions about whether it might be possible to measure the quality of work and then link that to outcomes of change. This would be in place of desperately trying to measure change itself, which we all know can be nebulous and take a long time to show up. This felt like a positive story, and something I’d be keen to hear more about. I’m sure there’s no magic answer, so perhaps that approach just shifts the data burden to a different area? But I definitely want to know more.
I also thought about the speaker who mentioned the changes they had made to a project after discussions with their funder. They realised there was a potentially more effective approach than originally planned, but it needed them to shift targets and outputs. We as funders need to communicate how open we are to hearing those messages and that we understand all the experience in the world doesn’t mean you’ll get it right every single time. If we truly believe in putting people at the centre of our collective work, we have to understand the repercussions of one-size-doesn’t-fit-very-many.
My final reflection was how exciting it was to have a group of funders discussing what we think about evaluation and monitoring with other people in the sector. In general, we had very similar thinking, and we have a collective responsibility to promote it – in addition to talking with grantees and other funders about such matters. We need to push for monitoring that works for everyone concerned, putting the experience of people right at the heart. Finally, we all need to understand and accept that people have different needs, can be changeable, and won’t all want the same thing.