Compare your impact against sector benchmarks
The problem
You've got your impact data but no context. You helped 500 people this year - is that good for your budget? Your programme costs £150 per beneficiary - is that efficient? Sector reports exist, but they're dense PDFs full of averages you can't easily compare to your situation. You're left guessing whether you're performing well or falling behind.
The solution
Use Claude or ChatGPT to analyse sector benchmark reports alongside your data. Paste in relevant sector research (charity evaluation reports, foundation studies, sector analyses) and your own metrics. Ask the AI to extract comparable benchmarks, explain how you compare, identify where you're strong or weak, and flag what context matters (different geographies, client groups, operating models). You get critical analysis, not just data regurgitation.
What you get
A contextualised comparison of your performance: 'Your cost per beneficiary (£150) is around 20% below the sector average (£190), but you serve a different demographic (urban vs rural). Your retention rate (65%) is above benchmark (55%), suggesting strong engagement. Your volunteer ratio (1:10) is below the sector norm (1:5) - a capacity constraint?' For each metric you see: the sector benchmark, your performance, what explains the gap, and whether it matters.
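The arithmetic behind a comparison like this is easy to sanity-check yourself before asking the AI to interpret it. A minimal sketch, using the illustrative figures above (not real sector data):

```python
# Sanity-check the gap arithmetic behind a benchmark comparison.
# Figures are the illustrative ones from the example above, not real data.
benchmarks = {
    "cost_per_beneficiary_gbp": {"yours": 150, "sector": 190},
    "retention_rate_pct": {"yours": 65, "sector": 55},
}

for metric, v in benchmarks.items():
    gap_pct = (v["yours"] - v["sector"]) / v["sector"] * 100
    direction = "above" if gap_pct > 0 else "below"
    print(f"{metric}: {v['yours']} vs sector {v['sector']} "
          f"({abs(gap_pct):.0f}% {direction} benchmark)")
```

Recomputed this way, the cost figure comes out nearer 21% below benchmark than 20% - a reminder to check the arithmetic rather than trust a rounded summary.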
Before you start
- Your impact metrics (outputs, outcomes, costs, beneficiary numbers)
- Access to sector benchmark reports (foundation evaluations, charity commission data, sector studies)
- Understanding of what makes your organisation comparable or different to benchmarks
- A Claude or ChatGPT account (paid tier recommended for longer documents - free tiers have message limits that may cut off mid-analysis when processing large PDFs)
When to use this
- You want to understand if your performance is good, bad, or typical
- Funders are asking how you compare to similar organisations
- You're trying to set realistic targets based on what others achieve
- You've got sector reports but struggle to extract relevant comparisons
When not to use this
- You can't find relevant sector benchmarks - AI can't create data that doesn't exist
- Your organisation is so unique that comparisons are meaningless
- You're using this to judge staff performance (benchmarks have too many variables)
- You want validation rather than honest analysis - AI will point out weaknesses
Steps
1. Gather your metrics
Compile your key performance data: beneficiaries served, outcomes achieved, cost per beneficiary, income sources, staff/volunteer ratios, retention rates, and programme costs as a percentage of total. Whatever you measure that matters for your theory of change. Be honest about what the numbers actually show. Important: only share aggregated data with AI tools - don't include individual beneficiary details or anything that could identify specific people.
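If your beneficiary data lives in individual records, a few lines of code can produce the aggregated figures that are safe to paste into an AI tool. A minimal sketch with hypothetical field names:

```python
# Aggregate beneficiary-level records into shareable totals before
# pasting anything into an AI tool. Field names here are hypothetical.
from statistics import mean

# Each record is one beneficiary; never share these rows directly.
records = [
    {"postcode_area": "M1", "sessions": 8, "completed": True},
    {"postcode_area": "M4", "sessions": 3, "completed": False},
    {"postcode_area": "M1", "sessions": 12, "completed": True},
]

aggregated = {
    "beneficiaries_served": len(records),
    "avg_sessions": round(mean(r["sessions"] for r in records), 1),
    "retention_rate_pct": round(
        100 * sum(r["completed"] for r in records) / len(records)
    ),
}
print(aggregated)  # only this summary leaves your systems
```

The point of the design is that identifying detail (postcodes, names, case notes) never appears in the output - only counts and averages do.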
2. Find relevant sector benchmarks
Look for: foundation evaluation reports (Esmée Fairbairn, Lloyds Bank Foundation), sector studies (NCVO, Directory of Social Change), academic research, and charity evaluation sites (Charity Excellence Framework). Focus on organisations with a similar cause area, beneficiary group, geography, and operating model. Exact matches don't exist, but reasonably comparable organisations do.
3. Ask AI to extract comparable metrics
Paste sector reports into Claude or ChatGPT. Ask: 'Extract benchmark metrics relevant to youth mental health services in urban areas. I'm looking for: cost per beneficiary, outcomes data, staffing ratios, retention rates, income mix. Explain what each benchmark covers (sample size, geography, methodology).' The AI pulls out relevant numbers and context.
4. Share your data for comparison
Now paste your metrics. Ask: 'Compare my organisation's performance to these sector benchmarks. For each metric explain: am I above/below average, by how much, what might explain the difference (geography, client group, model), and whether the gap matters.' You're asking for analysis, not just arithmetic.
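To keep the numbers consistent between prompts, you can assemble your metrics and the extracted benchmarks into one plain-text table and paste it alongside the comparison request. A sketch with made-up values:

```python
# Format your metrics and extracted sector benchmarks as one compact
# table to paste with the comparison prompt, so the AI works from
# consistent numbers. Metric names and values are hypothetical.
rows = [
    ("Cost per beneficiary (£)", 150, 190),
    ("Retention rate (%)", 65, 55),
    ("Volunteers per 10 beneficiaries", 1, 2),
]

lines = ["Metric | Ours | Sector benchmark"]
for name, ours, sector in rows:
    lines.append(f"{name} | {ours} | {sector}")
table = "\n".join(lines)
print(table)  # paste this with your comparison prompt
```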
5. Probe the differences
For significant gaps, ask follow-up questions: 'My cost per beneficiary is 40% higher than benchmark - which factors could explain this without indicating a problem, and which would suggest inefficiency?' Or: 'My retention rate is much lower - is that concerning, or expected for my client group?' Get the AI to help you think through what the differences mean.
6. Check comparability assumptions
Ask: 'What makes my organisation not directly comparable to these benchmarks? What differences in context, client group, or model mean I should interpret these comparisons cautiously?' The AI will flag when you're comparing apples to oranges. Take this seriously - bad comparisons lead to bad decisions.
7. Identify areas for investigation
Ask: 'Based on this analysis, which metrics suggest I should investigate further? Where am I underperforming in ways that might indicate problems, versus just reflecting my context?' Use this to focus attention on genuine concerns - not all differences matter equally.
8. Use insights strategically
Turn analysis into action: areas where you're strong become talking points for funders. Areas where you're weak but the gap is explained by context need documenting ('We serve more complex cases'). Areas where you're weak without a good explanation need addressing. This is honest self-assessment informed by sector context.
At a glance
- Time to implement: hours
- Setup cost: free
- Ongoing cost: free
- Cost trend: stable
- Organisation size: small, medium, large
- Target audience: ceo-trustees, operations-manager, data-analyst
Free tiers work for short comparisons, but providers may use your conversations to train their models. For confidential impact data or comprehensive sector reports (20+ pages), a paid tier (£18-20/month) handles longer context better and offers stronger data protection. Either way, it's much cheaper than consultants (£500-2,000 per benchmarking exercise).