
Write an AI acceptable use policy for your charity

compliance · beginner · proven

The problem

Staff are using AI tools but there's no guidance on what's appropriate. Some are uploading beneficiary data to ChatGPT. Others are using AI-generated content without review. A few are avoiding AI entirely because they're unsure what's allowed. Your trustees are asking questions about AI governance that you can't answer. You need a policy that enables safe AI use without creating bureaucratic barriers.

The solution

Create a practical AI acceptable use policy tailored to your charity's context. This isn't about banning AI - it's about enabling staff to use it safely and effectively. A good policy covers: what tools are approved, what data can and cannot be used, when human review is required, and how to handle AI outputs. Keep it short enough that people will actually read it.

What you get

A 2-4 page AI acceptable use policy document that you can share with staff and present to trustees. The policy will clarify what's allowed, what needs approval, and what's prohibited. It includes practical examples relevant to charity work and can be adapted as AI tools evolve. Staff will know exactly what they can do, reducing both risk and hesitation.

Before you start

  • Understanding of what AI tools staff are currently using
  • Knowledge of what data your charity handles (beneficiary data, donor data, etc.)
  • Access to your existing data protection and IT policies for alignment
  • Input from relevant stakeholders (DPO, IT, senior management)

When to use this

  • Staff are using AI tools without guidance
  • Trustees are asking about AI governance
  • You want to enable safe AI adoption across the organisation
  • You need to demonstrate due diligence to funders or regulators
  • You're preparing for wider AI rollout and want guardrails in place

When not to use this

  • No one in your charity is using AI yet (address awareness first)
  • You want to ban AI entirely (a policy isn't needed for that)
  • You're looking for technical security controls (this is about use, not IT security)

Steps

  1. Audit current AI use

    Before writing the policy, understand reality. Survey staff: what AI tools are they using? For what tasks? What data are they inputting? This informs what your policy needs to cover and reveals any immediate risks to address.

  2. Define scope and principles

    Decide what your policy covers (all generative AI? specific tools? personal and work use?). Establish core principles: human oversight, data protection, transparency, fairness. These principles guide decisions when specific situations aren't covered.

  3. Classify data sensitivity

    Create clear categories: data that can never go into external AI (identified beneficiary data, safeguarding information), data that needs anonymisation first, and data that can be used freely (public information, internal drafts). Give specific examples for each.
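    The anonymisation category above can be supported with a simple pre-processing step. As a minimal sketch (the name list, regex patterns, and sample text are illustrative assumptions, not a complete PII scrubber - always review the output by eye before pasting anywhere):

    ```python
    import re

    def anonymise(text, known_names):
        """Replace known names and common contact details before text
        is pasted into an external AI tool. Illustrative only."""
        # Redact each known name (case-insensitive)
        for name in known_names:
            text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
        # Redact email addresses (rough pattern)
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        # Redact UK-style phone numbers (rough pattern)
        text = re.sub(r"\b(?:\+44\s?|0)\d{4}[\s-]?\d{6}\b", "[PHONE]", text)
        return text

    sample = "Feedback from Jane Smith (jane@example.org, 07700 900123): great service."
    print(anonymise(sample, ["Jane Smith"]))
    # Prints: Feedback from [NAME] ([EMAIL], [PHONE]): great service.
    ```

    A script like this catches the obvious identifiers but will miss indirect ones (job titles, locations, rare circumstances), which is why the policy still requires a human check.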

  4. Specify approved tools and uses

    List which AI tools are approved for use. Consider: free vs paid tiers (data handling differs), tools with enterprise agreements, tools that process data in the EU/UK. Be specific about approved use cases to reduce ambiguity.

  5. Define review requirements

    Specify when human review is mandatory: any external communications, content about beneficiaries, anything published in your name. Be practical - requiring review for internal brainstorming creates unnecessary friction.

  6. Address transparency and disclosure

    Decide when AI use should be disclosed. Grant applications increasingly ask about AI use. External communications may need disclosure. Internal use typically doesn't. Provide guidance on how to disclose appropriately.

  7. Include practical examples

    Add scenarios staff will recognise: 'Can I paste survey responses into Claude?' 'Can I use AI to draft job adverts?' 'What if a funder asks about AI?' Real examples make abstract policy concrete.

  8. Plan for review and updates

    AI evolves fast. Include a review date (6-12 months) and a process for updating the policy. Name who owns the policy and how staff can ask questions or request changes.

Example code

AI Acceptable Use Policy Template

A complete policy template you can adapt for your charity.

# AI Acceptable Use Policy

**[Charity Name]**
**Version:** 1.0
**Approved by:** [Board/SMT]
**Date:** [Date]
**Review date:** [Date + 12 months]

## 1. Purpose

This policy enables staff to use AI tools safely and effectively while protecting our beneficiaries, donors, and organisation. It provides clear guidance on what is allowed, what needs approval, and what is prohibited.

## 2. Scope

This policy applies to all staff, volunteers, and contractors using AI tools for [Charity Name] work, including:
- Generative AI (Claude, ChatGPT, Gemini, Copilot)
- AI features within existing software (Microsoft 365 Copilot, Google Workspace AI)
- AI image, audio, or video generation tools

## 3. Core Principles

1. **Human oversight**: AI assists decisions; humans make them
2. **Data protection**: Personal data requires extra care
3. **Transparency**: Be honest about AI use when asked
4. **Quality**: Review AI outputs before use
5. **Fairness**: Watch for bias in AI suggestions

## 4. Data Classification

### 4.1 Never input to external AI tools:
- Names, addresses, or contact details of beneficiaries
- Safeguarding information or case notes with identifying details
- Health information or sensitive personal data
- Financial account details
- Any data where individuals could be identified

### 4.2 May use with anonymisation:
- Aggregated feedback or survey responses (remove names first)
- Case studies (change identifying details)
- Statistical summaries

### 4.3 May use freely:
- Publicly available information
- Internal drafts and planning documents
- General questions and research
- Your own writing for editing/improvement

## 5. Approved Tools

### 5.1 Approved for general use:
- Claude.ai (free and Pro tiers)
- ChatGPT (free and Plus tiers)
- Microsoft Copilot (via our Microsoft 365 subscription)
- Grammarly

**Important:** Free tiers of Claude and ChatGPT may use your conversations for model training unless you opt out. Check Settings > Data Controls to disable this. For any work involving sensitive information, use paid tiers where training on your data is disabled by default.

### 5.2 Requires IT approval:
- Any tool requiring API access
- Any tool with access to our systems
- Any paid tool not listed above

### 5.3 Data Processing Agreements:
For any AI tool processing personal data, ensure a Data Processing Agreement (DPA) is in place with the provider. Most major providers (Anthropic, OpenAI, Microsoft) offer standard DPAs - your DPO should verify these are signed before staff use tools with any personal data.

### 5.4 Not approved:
- Tools that claim ownership of inputs
- Tools without clear privacy policies
- AI tools accessed via unofficial browser extensions

## 6. Approved Uses

### 6.1 Encouraged uses:
- Drafting and editing text (with human review)
- Summarising long documents
- Brainstorming and generating ideas
- Translating content (with native speaker review for important documents)
- Research and information gathering
- Data analysis (using anonymised data)
- Creating first drafts of policies or procedures

### 6.2 Requires manager approval:
- AI-assisted recruitment (job descriptions, shortlisting criteria)
- Content representing the organisation externally
- Any automated decision-making affecting individuals

### 6.3 Prohibited:
- Inputting identified personal data of beneficiaries or donors
- Using AI to make final decisions about individuals without human review
- Claiming AI-generated content as original work when disclosure is required
- Using AI for safeguarding assessments or decisions

## 7. Review Requirements

**Human review is mandatory for:**
- Any external communications (press releases, website content, social media)
- Grant applications and funder reports
- Content mentioning specific beneficiaries (even anonymised)
- Job advertisements and HR communications
- Anything published in the charity's name

**Human review is recommended for:**
- Internal reports and papers
- Email communications
- Meeting notes and summaries

## 8. Transparency and Disclosure

- If asked directly whether AI was used, be honest
- Grant applications: follow funder guidance on AI disclosure
- Published content: disclosure not required for editing assistance, recommended for substantially AI-generated content
- Internal use: disclosure not required

## 9. Reporting Concerns

If you become aware of:
- Personal data being input to AI tools inappropriately
- AI outputs being used without appropriate review
- Any data breach involving AI tools

Report immediately to [Data Protection Lead / line manager].

## 10. Training and Support

- All staff will receive basic AI awareness training
- Additional training available for specific use cases
- Questions about this policy: contact [named person]

## 11. Review

This policy will be reviewed every 12 months or when:
- Significant new AI tools become available
- Regulatory guidance changes
- Incidents require policy updates

---

**Questions?** Contact [name] at [email]

**Policy owner:** [Role]

Quick reference card for staff

A one-page summary to accompany the full policy.

# AI at [Charity Name] - Quick Reference

## YES - Go ahead
- Draft emails, reports, social posts (review before sending)
- Summarise documents, meeting notes
- Brainstorm ideas, get feedback on your writing
- Research topics, find information
- Translate content (get it checked)

## MAYBE - Check first
- Job descriptions, recruitment materials (manager approval)
- External communications (review required)
- Anything representing the charity publicly

## NO - Never do this
- Paste beneficiary names, addresses, case notes
- Input donor personal details
- Use AI for safeguarding decisions
- Skip review on external content

## Golden rules
1. If in doubt, anonymise first
2. Always review AI outputs
3. Be honest if asked about AI use
4. Report any concerns to [name]

## Quick test
Before pasting anything, ask: "If this appeared on a website, would it identify anyone or embarrass us?"
If yes, don't paste it.

**Questions?** Ask [name] or email [address]

Tools

Claude or ChatGPT · service · freemium
Word processor · platform · free · open source

At a glance

  • Time to implement: days
  • Setup cost: free
  • Ongoing cost: free
  • Cost trend: stable
  • Organisation size: small, medium, large
  • Target audience: CEOs and trustees, operations managers, IT/technical staff

Creating the policy is free. Implementation may require training time.
