Author Archives: Steph Clarke

Demystifying Impact Measurement, some notes from the session at #Locality16

Phil Tubla and Elly Townsend ran a session today at the Locality conference around Demystifying Impact. Impact takes many guises – environmental, economic and social, and sometimes all of these in one project – but how do you measure it?

These are some notes and my thoughts from throughout the session:

The session started by asking for introductions from the room, and how groups are currently measuring impact – and when someone else in the room, who turned out to be a client of ours, introduces your app before you get the chance to, it would be rude not to join in the conversation.

Impact can be defined as the change, effects or benefits that result from the services or activity of your organisation. When measuring and reporting impact, you need to stay on mission and avoid mission drift – don’t just report for funders, report on the things that matter to you.

What are the consequences, intended and unintended, of the work you do? It is important to understand them.

Impact measurement helps you to: plan, evaluate, promote and communicate.

Outputs vs Outcomes 

Outputs are immediate and short term – quantitative data, numbers counting “bums on seats”.
Outcomes are the changes that occur as a result of the activity – the longer-term effects of your projects. They are more qualitative, and harder to turn into data, into numbers.

There was a lot of feeling that funders look for short-term reporting, and that demonstrating impact – especially outcomes – is difficult on those timeframes; the further you move from your intervention, the harder it is to find evidence that the outcome is yours.

The stages of measuring impact are:

  • Inputs
  • Outputs
  • Outcomes
  • Impact

Link the stages together – tell your story!

– Or, in our view, more importantly tell your clients’ / users’ story! They are best placed to tell you the impact your project is having for them – and use their voice; that is far more powerful than your data alone.


The session then moved on, with Gen Maitland Hudson discussing data, data collection, data triangulation – and open data, something we understand from our work earlier this year helping community groups use open data.

We shouldn’t be duplicating measurements when the data already exists – it just needs to be more freely available, and easier for the casual user, or grassroots community activist to understand.

There was then quite a long conversation around data, data collection, surveys and survey collection tools – but I feel that while data collection is great for number crunching and measuring outputs, it doesn’t tell the whole story. A survey might capture opinions, but it doesn’t capture user experience; it doesn’t tell you the real difference you are making.



How do charities capture and measure Impact? Notes on the House of Lords Charities Committee enquiry.

On Tuesday the House of Lords Charities Select Committee held the first of a set of evidence-gathering sessions for their enquiry into how charities measure impact. There were two sessions. We watched and made some notes, mainly of the things that struck us as pertinent during the first, which revolved mainly around impact.

Here they are.

Session one focussed on Impact; the panel included Paul Streets, Dan Corry and Gen Maitland Hudson.

There were a few points made around measuring impact, that unsurprisingly we wholeheartedly agree with.

Paul Streets thought it was unrealistic to expect there to be universal ways to measure impact.

To paraphrase

Every funder will need a different set of metrics which work on a different timescale. As a funder we shouldn’t be asking for bespoke measurements… we need to find out what charities are measuring first. If they’re good at it, let them do it their way; if they’re not, maybe think twice about funding them.

This makes huge sense to us. Part of what we try to do is encourage charities and social enterprises to be proactive about how they measure and report impact. We think that getting ahead of your commissioners and funders, and being clear about what you do and how you measure it, will ultimately help funders too.

What information should donors expect from charities on the difference / impact their funding has made?

Paul Streets:

We should start by looking at the organisation – have a conversation with them about how they will report outcomes. Tell us how you are listening to your beneficiaries, and how you’ll measure outcomes. If you have 50 beneficiaries and they are hard to reach, that’s OK – just tell us how you’ll listen to them.

You have to measure the soft outcomes, as for some projects the first outcome might just be getting someone to catch a bus.

Dan Corry:

There is a cost to administering impact measurement. Some smaller charities do not want to push their admin costs up, or they can’t afford to. What they need is headspace to think it through. If you’re going to collect soft data, that’s OK, but don’t just talk about the success stories.

Gen Maitland Hudson:

You shouldn’t just be collecting data, stories and feedback, you should be acting on it to make change, to make things better, there’s no point collecting it otherwise. Value the information.

Paul Streets:

As a funder we want honesty – if you have underperformed, tell us; we’ll still want to hear about the impact. As funders we need to allow charities to frame their own impact. Different charities have different impact: some of it will be qualitative, some quantitative, some financial – but we need to understand this.

Gen Maitland Hudson:

We can help smaller charities measure impact; we just need to find what tools can be provided – for collection and analysis – and it’s often the analysis that gets left out.

The analysis side of story capture is what we recognise as the hard part. You can collect stories, but where do you store them, and how do you analyse them? That’s one thing the impact app does. You collect the stories; it sorts, analyses and begins to make sense of the content you’re collecting. Your stories become searchable, you can begin to spot trends, gaps in service and best practice. You can build case studies and evidence for funders old (and new). And we’re not the only ones to recognise this strength.

The talk then moved on to comparison:

Can you compare the work of charities working in different circumstances?

Dan Corry:

If you work in the same area, having the same metrics is useful – it means you can work together, within reason – but you can’t compare different sectors. It’s apples and pears.

There’s been a danger of trying to work out which charity offers the greatest return on investment, and comparing across sectors doesn’t work.

Gen Maitland Hudson:

What are you measuring for, and what do you want your benchmark to do? Different projects need different benchmarks. It comes back to knowing what you want to use the data for.

We understand that comparing apples and pears isn’t realistic, but we also know that some funders may want to keep track of how their money affects people’s lives across a range of providers. The observer account makes it easy for a funder to collect and analyse live stories from a range of organisations they have funded.

The conversation continues:

Paul Streets:

Benchmarking can work for funders, instead of for the charities. How many people apply to you that don’t get a grant? If that number is high, then are we wasting our time and theirs?

Dan Corry:

When you’ve been commissioned, you need to remember that you are there to serve your beneficiaries, not the funders.

The payment-by-results model has made charities think about measurement, and then you have to think about value for money. PBR can pull charities away from their mission, so they have to know the contract is right. PBR can also make charities skim the easy cases for quick results.

Paul Streets:

Some PBR contracts can work – meals on wheels, domiciliary care, etc. – but contract commissioning can be highly destructive to other charities. As the shift from grants to contracts happens, it’s highly unfair to the multiply disadvantaged.

Cherry picking, or skimming the easy cases, is something some of our clients have recognised across their sectors.  We think that good use of impact measurement can help the best organisations show that they don’t do that.

It’s not simply where you’ve got a client to, but where they started from, that’s important. The best organisations may do dozens of things to help someone – and by routinely capturing the voice of the people you help, we think you build up the evidence of the myriad small things you do and why they matter.

By consistently collecting, and by trusting your clients’ voice, you can show the impact you’re making, whether they are the easy to work with or the hard to reach.