
Analyse feedback when you only have 20 responses

Tags: impact-measurement · beginner · proven

The problem

You've collected feedback from a small group - workshop attendees, service users after a pilot programme, or donors from a specific appeal - but the sample size is too small for quantitative analysis. This isn't routine feedback you can quickly summarise; it needs proper qualitative analysis because you're evaluating a pilot or informing important decisions. You want to be systematic about finding themes while being honest about small sample limitations.

The solution

Use qualitative coding techniques (systematically categorising text to identify themes), enhanced by an LLM, to surface themes, sentiment, and actionable insights from small feedback sets. Combine manual reading to get the overall sense with AI-assisted theme identification to spot patterns you might miss. The AI helps structure your analysis while you provide the domain expertise.

What you get

A structured thematic analysis with 3-5 main themes supported by example quotes, identification of outlier responses that don't fit the main themes but offer unique perspectives, and a clear acknowledgment of the small sample limitations for any reporting.

Before you start

  • Feedback compiled in a single document (15-50 responses works best)
  • A Claude or ChatGPT account (for sensitive feedback, use paid tiers which don't train on your data, or enable privacy settings). If feedback contains personal data, ensure UK GDPR compliance - anonymise where possible
  • Clear understanding of what decisions this feedback will inform
  • Acceptance that small samples have limitations

When to use this

  • Evaluating pilot programmes where you need methodological rigour (15-50 responses)
  • High-stakes feedback where you'll report limitations clearly (board, funders)
  • Stakeholder consultations where outlier views matter as much as consensus
  • Post-event evaluations from specialist workshops with diverse perspectives
  • Initial exploration before designing a larger survey (testing questions and themes)

When not to use this

  • When you could reasonably get more responses first (send reminders, extend deadline)
  • For decisions requiring statistical validity (you can't get that from small samples)
  • When feedback contains highly sensitive disclosures (safeguarding concerns)
  • When funders require a specific validated methodology (e.g., Grounded Theory)
  • Routine feedback that just needs quick summarising (use 'Find themes in feedback small batch' instead)
  • If you already have 100+ responses (use the 'at scale' approach instead)

Steps

  1. Compile all feedback into one document

    Gather all responses into a single document. Number each response (1. First response, 2. Second response, etc.). Include any rating scores if you have them. Remove personal identifiers but keep demographic categories if useful (e.g., 'age group 18-25' rather than exact age).
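If your responses live in a spreadsheet, a short script can do the numbering for you. A minimal sketch, assuming a CSV with a `response` column (the file and column names here are hypothetical):

```python
import csv

def compile_feedback(csv_path, out_path, response_col="response"):
    """Number each response and write them all into one plain-text document."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        responses = [row[response_col].strip() for row in csv.DictReader(f)]
    # Format as "1. First response", "2. Second response", etc.
    numbered = [f"{i}. {text}" for i, text in enumerate(responses, start=1)]
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n\n".join(numbered))
    return len(numbered)
```

Remember to strip any columns containing personal identifiers before compiling, keeping only demographic categories you actually need.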

  2. First pass: Manual reading

    Read through all responses yourself to get an overall sense. What's your gut reaction? What surprises you? What confirms what you already thought? Make quick notes. This step is crucial - you're building the domain context that the AI lacks.

  3. Identify main themes with AI

    Paste the feedback into Claude or ChatGPT: "Read these [X] feedback responses from our [programme/event/service]. Identify the 3-5 main themes. For each theme, provide a clear label, a brief description, roughly how many responses mention it, and 2-3 example quotes." Review the themes - do they match what you noticed?
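If you analyse feedback regularly, keeping the prompt as a reusable template saves retyping it each time. A sketch of the prompt above as a template; the function and placeholder names are my own:

```python
# Template for the theme-identification prompt from step 3.
THEME_PROMPT = (
    "Read these {count} feedback responses from our {context}. "
    "Identify the 3-5 main themes. For each theme, provide a clear label, "
    "a brief description, roughly how many responses mention it, "
    "and 2-3 example quotes.\n\n{responses}"
)

def build_theme_prompt(responses, context):
    """Fill the template with numbered responses, ready to paste into Claude or ChatGPT."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(responses, start=1))
    return THEME_PROMPT.format(count=len(responses), context=context, responses=numbered)
```

For example, `build_theme_prompt(["Loved it", "Venue too cold"], "pilot programme")` produces the full prompt with both responses numbered and the counts filled in.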

  4. Identify outliers and unique perspectives

    Ask the AI: "Which responses don't fit the main themes? What unique perspectives or concerns do they raise?" Small samples often contain one or two responses with crucial insights that get lost in theme-finding. Surface these explicitly.

  5. Synthesise into actionable recommendations

    Ask: "Based on these themes and outliers, what are the top 3 actionable recommendations for improving [the programme/service]?" or "What should we definitely keep doing, stop doing, or start doing?" The AI can help structure your thinking, but apply your judgment about feasibility.

  6. Note limitations clearly

    Add to any reporting: "Based on feedback from 23 participants. This small sample provides valuable insights into participant experience but cannot be generalised to the full population." Be honest about what you can and can't conclude from small samples.

  7. Decide if you need more data (optional)

    If themes are unclear or contradictory, or if the insights don't give you enough to act on, you might need a larger sample or a different approach. Use this analysis to inform whether you should collect more feedback before making decisions.

Tools

Claude — service · freemium
ChatGPT — service · freemium


At a glance

Time to implement: hours
Setup cost: free
Ongoing cost: free
Cost trend: stable
Organisation size: micro, small, medium
Target audience: program-delivery, operations-manager, comms-marketing, fundraising

Free tier is sufficient for small feedback sets. Small cost if you exceed free tier limits.

Written by Suzanne Begley

Last updated: 2026-01-13
