TL;DR: Most Australian businesses have staff using AI tools — ChatGPT, Microsoft Copilot, Google Gemini — with no policy governing how those tools are used. This creates data privacy risk, IP risk, and compliance exposure. This template gives you a ready-to-use AI usage policy you can customise with your company name and deploy today.
Why Your Business Needs an AI Usage Policy Now
Generative AI adoption in Australian businesses has outpaced governance. Staff are using ChatGPT, Claude, Google Gemini, and dozens of specialised AI tools — often without guidance on what is and is not appropriate. The privacy and confidentiality risks this creates are significant and often misunderstood.
When an employee pastes a client contract into ChatGPT to ask it to summarise the key terms, that data is transmitted to OpenAI’s servers. Depending on account settings and OpenAI’s current data handling policies, it may be used to train future models. For a law firm, medical practice, or accounting firm, this is not theoretical risk — it is a potential breach of professional confidentiality obligations and the Privacy Act 1988.
An AI usage policy does not prevent staff from using AI tools — it establishes clear rules for what tools are approved, what data can be used with them, and what the consequences of policy violations are. It also demonstrates to insurers and clients that your business takes data governance seriously.
AI Usage Policy Template
[Instructions: Replace all [square bracket] items with your company-specific information. Review with legal counsel before deployment, particularly if your business operates in a regulated industry.]
[COMPANY NAME] AI USAGE POLICY
Version: 1.0
Effective Date: [DATE]
Policy Owner: [NAME/ROLE]
Review Date: [DATE + 12 MONTHS]
1. Purpose and Scope
This policy governs the use of artificial intelligence (AI) tools — including generative AI tools, AI writing assistants, AI image generators, and AI-powered applications — by all [Company Name] employees, contractors, and anyone else accessing [Company Name] systems or handling [Company Name] data.
The purpose of this policy is to:
- Enable [Company Name] staff to benefit from AI tools productively and safely
- Protect [Company Name] confidential information and client data from unintended disclosure
- Manage legal, intellectual property, and compliance risks associated with AI use
- Establish clear accountability for AI-assisted work products
This policy applies to all use of AI tools for [Company Name] business, regardless of whether the tool is accessed on a company device or personal device, during business hours or outside them.
2. Approved AI Tools
The following AI tools are approved for use with [Company Name] data at the specified classification levels:
Tier 1 — Approved for all data including confidential:
- Microsoft Copilot for Microsoft 365 (requires Microsoft 365 Copilot licence) — operates within the Microsoft 365 compliance boundary; data is not used for model training
- [Other enterprise-grade tools with signed data processing agreements]
Tier 2 — Approved for internal and public data only (not confidential or restricted):
- [Tool name] — [brief description of approved use cases]
Tier 3 — Not approved for [Company Name] data:
- ChatGPT (free tier and ChatGPT Plus) — data handling terms do not provide sufficient protection for business data
- Google Gemini (personal accounts) — same as above
- Any AI tool not explicitly listed as Tier 1 or Tier 2
[Note: The Tier 3 list reflects the fact that consumer-grade AI products generally do not provide the data processing agreements required for business confidential data. Enterprise versions of these tools (ChatGPT Enterprise, Google Gemini for Workspace) may qualify for Tier 1 or 2 pending IT security review.]
Requesting approval for additional tools: Staff who wish to use an AI tool not listed above should submit a request to [IT/management contact] with the tool name, intended use case, and a link to the tool’s data processing/privacy terms. Approved additions will be added to this policy.
3. Data Classification and AI Use
AI tool use is governed by the classification of the data involved. Refer to [Company Name]’s Data Classification Policy for classification definitions.
| Data Classification | Tier 1 Tools | Tier 2 Tools | Tier 3 Tools |
|---|---|---|---|
| Public | Permitted | Permitted | Use with caution |
| Internal | Permitted | Permitted | Not permitted |
| Confidential | Permitted | Not permitted | Not permitted |
| Restricted | Not permitted | Not permitted | Not permitted |
Client data is confidential by default. Any data provided by or relating to a client — including client names, contact details, financial information, legal matters, health records, or business information — is Confidential and may only be used with Tier 1 approved tools.
Before using an AI tool with any data, staff should check:
- What classification is this data?
- Is the tool I am using approved for this classification?
- If I am unsure — do not proceed until I have confirmed with [IT/manager contact].
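For businesses that want to build this check into internal tooling, the Section 3 decision matrix can be sketched as a simple lookup. This is an illustrative sketch only — the classification names mirror the table above, but the function name and structure are assumptions, not part of the policy itself:

```python
# Illustrative sketch of the Section 3 decision matrix.
# Maps (data classification, tool tier) to whether use is permitted.
APPROVAL_MATRIX = {
    ("public", 1): True, ("public", 2): True, ("public", 3): True,  # Tier 3 + public: "use with caution"
    ("internal", 1): True, ("internal", 2): True, ("internal", 3): False,
    ("confidential", 1): True, ("confidential", 2): False, ("confidential", 3): False,
    ("restricted", 1): False, ("restricted", 2): False, ("restricted", 3): False,
}

def is_permitted(classification: str, tier: int) -> bool:
    """Return True if the policy permits this data classification with this tool tier.

    Unknown combinations default to False (deny by default), matching the
    policy's "if unsure, do not proceed" rule.
    """
    return APPROVAL_MATRIX.get((classification.lower(), tier), False)
```

Note the deny-by-default design: any classification or tier not explicitly listed is treated as not permitted, which mirrors the "if unsure, confirm first" rule above.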
4. Prohibited Uses
The following uses of AI tools are prohibited regardless of the tool or data classification:
Confidential data in unapproved tools: Using any AI tool not approved for the relevant data classification to process confidential or client data.
Unreviewed AI output as final work: Submitting AI-generated content — documents, code, legal text, financial analysis, advice — as final work product without human review and verification. AI tools produce plausible-sounding but sometimes incorrect outputs (known as “hallucinations”). Every AI-generated work product must be reviewed and verified by a qualified human before delivery to a client or use in a business decision.
Misrepresentation of AI use: Representing AI-generated content as entirely human-produced when this is material to the recipient (e.g., claiming a legal submission was entirely drafted by a solicitor when it was substantially generated by AI without solicitor review).
Creating or distributing harmful content: Using AI tools to generate content that is defamatory, discriminatory, harassing, or violates applicable laws.
Circumventing other policies: Using AI tools to circumvent [Company Name]’s security controls, access controls, or other IT policies.
5. Intellectual Property Considerations
Input ownership: [Company Name] owns all inputs to AI tools created using company resources, including prompts, documents, and data. Do not share information with AI tools that [Company Name] has an obligation to keep confidential.
Output ownership: The legal status of AI-generated outputs under Australian copyright law is evolving. Current guidance indicates that AI-generated outputs with insufficient human creative input may not attract copyright protection. All AI-assisted work should include meaningful human contribution and review.
Third-party IP: AI tools may generate content that reproduces or closely resembles third-party copyrighted material. Review AI-generated content for potential IP issues before use, particularly in marketing, creative, and published materials.
6. Accuracy and Accountability
AI tools — including large language models — are not reliable sources of facts. They generate plausible text based on statistical patterns in training data. They may:
- Cite sources that do not exist
- State incorrect facts with apparent confidence
- Apply outdated information
- Miss important context or nuance in professional and regulatory matters
Every person who submits AI-assisted work product is responsible for its accuracy. “The AI told me” is not an acceptable explanation for an error in a client deliverable, a financial document, or a legal submission.
7. Privacy Act Compliance
AI use involving personal information of individuals (employees, clients, contacts) must comply with the Privacy Act 1988 and the Australian Privacy Principles. Specifically:
- Personal information should only be used for the purpose for which it was collected
- Inputting personal information into an AI tool that stores or uses it for other purposes may constitute a secondary disclosure requiring consent
- Staff should avoid inputting personally identifiable information into AI tools where the task can be accomplished without it
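One practical way to apply this principle is to strip obvious identifiers from text before it is submitted to an AI tool. The sketch below is illustrative only — the patterns are examples, are not exhaustive, and are not a substitute for human review or a proper de-identification process:

```python
import re

# Illustrative redaction sketch: masks some common identifier patterns
# before text is sent to an AI tool. Patterns are examples only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),  # AU phone-like
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),        # Tax File Number-like
}

def redact(text: str) -> str:
    """Replace matched identifiers with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Pattern order matters here: phone numbers are masked before the Tax File Number pattern runs, so a nine-digit phone fragment is not mislabelled. Even with redaction, staff should still confirm the tool is approved for the data classification involved.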
For guidance on specific use cases, contact [privacy officer/IT contact].
8. Industry-Specific Requirements
[Select and customise the relevant section for your industry.]
Legal practices: Staff must comply with the guidance on AI use in legal practice issued by the relevant law society or institute (e.g., the Law Institute of Victoria). Particular care should be taken with: client-confidential information, privilege-protected documents, and AI-generated legal research or citations. Court submissions and legal advice documents must be reviewed and verified by a qualified solicitor regardless of the extent of AI assistance.
Healthcare and allied health: Patient health information is subject to additional protections under the Privacy Act 1988 and My Health Records Act 2012. Patient data must not be entered into any AI tool that has not been assessed and approved for health information by [IT/Privacy Officer]. AI-generated clinical notes or patient communication drafts must be reviewed by a registered health professional before use.
Accounting and financial services: Financial advice, tax advice, and audit outputs involving AI assistance must be reviewed by a qualified professional before delivery to clients. Tax Practitioners Board obligations apply to AI-assisted tax advice.
9. Reporting and Governance
Policy review: This policy will be reviewed annually or following any significant change in AI tools, regulatory guidance, or [Company Name]’s AI use practices.
Breach reporting: Any suspected breach of this policy should be reported to [IT/manager contact] immediately.
Consequences of violation: Breaches of this policy may result in disciplinary action up to and including termination, consistent with [Company Name]‘s disciplinary procedures. Breaches involving client data may also trigger notification obligations under the Privacy Act 1988.
End of AI Usage Policy Template
Implementing This Policy
A policy document is only effective if it is communicated, understood, and enforced.
Steps to deploy:
- Customise the template with your company name, approved tool list, and industry-specific section
- Have the policy reviewed by legal counsel, particularly the IP and privacy sections
- Distribute to all staff with a read-and-sign acknowledgement
- Include in onboarding materials for all new staff going forward
- Schedule annual review — the AI landscape is changing rapidly
Practical training note: Most staff do not intuitively understand why pasting a client document into ChatGPT is a problem. When distributing this policy, include a brief explanation — not just the rules, but the why. Staff who understand the risk make better decisions in edge cases.
For related resources:
- Top 10 IT Policies Template
- Microsoft 365 Hidden Features Guide — Microsoft Copilot overview
- 10 AI Tools You Need in Your Office for Productivity