
AI Data Breaches Are Rising: Here's How to Protect Your Company

Peter Nelson · 5 min read

As artificial intelligence adoption grows, so do AI-related data breaches. Discover the steps you must take to secure your business data.

Artificial intelligence adoption in business has accelerated faster than most organisations’ ability to manage the associated data risks. As staff adopt AI tools such as Microsoft Copilot, ChatGPT, Gemini, and dozens of specialised alternatives, they are regularly feeding sensitive business data into systems that were not designed with enterprise data governance in mind.

The result is a new category of data breach: not from an external attacker, but from well-intentioned employees using AI tools in ways that expose confidential information.


How AI Creates New Data Exposure Risks

Staff Inputting Sensitive Data Into Public AI Tools

The most common AI-related data risk is straightforward: an employee copies client data, confidential financial information, or internal business information into a public AI chatbot to get help with a task. The data is transmitted to and stored by the AI vendor, potentially used to train future models, and sits outside the organisation’s data governance controls.

Documented examples include:

  • A legal firm employee pasting a confidential settlement agreement into ChatGPT for summarisation
  • A finance staff member uploading a spreadsheet with client financial data to an AI analysis tool
  • A developer entering proprietary code into a coding assistant

Without an explicit policy and technical controls to prevent it, this kind of exposure is quietly occurring at most organisations.

Microsoft Copilot Exposing Incorrectly Permissioned Data

Microsoft 365 Copilot has a specific risk profile: it can surface any data that the logged-in user has permission to access. In organisations where SharePoint permissions are overly broad — where many staff have access to files they do not routinely need — Copilot becomes an exceptionally efficient tool for finding and exposing data that was nominally controlled but practically accessible to many users.

A user asking Copilot “show me all documents mentioning [client name]” may receive results that include confidential documents they technically had access to (because permissions were never tightened) but that they were never expected to find through normal work.

The fix is not to avoid Copilot — it is to tighten SharePoint permissions before deploying Copilot, so the AI can only surface data that users genuinely need access to.

Third-Party AI Tools With Unclear Data Practices

The AI tool market has exploded. Many of these tools — particularly free or low-cost options — have data practices that range from ambiguous to actively problematic. Data submitted to some tools is retained indefinitely, used for model training, or accessible to the vendor’s staff.

Before any staff member uses an AI tool with business data, the organisation should work through the following questions (a simple register for recording the answers is sketched after this checklist):

  • Where is data processed and stored?
  • Is data retained? For how long? For what purpose?
  • Can data be used to train the model?
  • Is the vendor compliant with Australian privacy requirements?
  • Is there an enterprise/business version with stronger data protection?
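
These questions are easier to manage when the answers are recorded consistently rather than held in someone's head. Below is a minimal sketch of an approval register in Python; the class, field names, and example entry are purely illustrative and do not describe any real vendor.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolAssessment:
    """One record in a hypothetical AI tool approval register."""
    tool_name: str
    vendor: str
    data_location: str               # where data is processed and stored
    retention_period: str            # how long submitted data is retained, and why
    used_for_training: bool          # can submitted data be used to train models?
    au_privacy_reviewed: bool        # reviewed against Australian privacy requirements?
    enterprise_tier_available: bool  # is a business/enterprise version offered?
    approved: bool = False
    reviewed_on: date = field(default_factory=date.today)
    notes: str = ""

# Example entry -- values are illustrative, not an assessment of any vendor.
register = [
    AIToolAssessment(
        tool_name="Example AI Assistant",
        vendor="Example Vendor Pty Ltd",
        data_location="Overseas data centres",
        retention_period="Indefinite on the free tier",
        used_for_training=True,
        au_privacy_reviewed=False,
        enterprise_tier_available=True,
        notes="Only the enterprise tier should be considered for approval.",
    )
]

for entry in register:
    status = "approved" if entry.approved else "not approved"
    print(f"{entry.tool_name}: {status}")
```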

Building an AI Data Governance Framework

Policy: Define What Staff Can and Cannot Do

An AI Acceptable Use Policy should:

  • Specify which AI tools are approved for business use
  • Prohibit inputting certain categories of data into non-approved tools (client data, financial data, health information, personal information)
  • Require staff to use enterprise versions of AI tools (which typically have stronger data protection) rather than free consumer versions
  • Establish a process for evaluating and approving new AI tools

A template AI Usage Policy is available in our resources section.

Technical Controls: Restrict Unapproved AI Tool Access

Microsoft Defender for Endpoint lets IT administrators block whole web categories through web content filtering and specific AI tool domains through custom network indicators. Blocking access to consumer AI tools and directing staff to approved enterprise alternatives shifts compliance from individual judgement to enforced controls.

For organisations with Microsoft 365 Business Premium or E3+, Conditional Access App Control can provide more granular controls — blocking file upload to unsanctioned applications, for example.
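
If you would rather script the domain-level blocking than maintain it by hand in the portal, the Microsoft Defender for Endpoint indicators API accepts custom block indicators. The sketch below is illustrative: it assumes an Azure AD app registration with the Ti.ReadWrite permission, an access token already acquired for the Defender for Endpoint API, and an example domain list that you would replace with your own inventory of unapproved tools.

```python
import requests

# Assumes an Azure AD app registration granted the Defender for Endpoint
# Ti.ReadWrite (indicators) permission, and a token acquired for
# https://api.securitycenter.microsoft.com (for example via MSAL).
ACCESS_TOKEN = "<access-token>"
API_URL = "https://api.securitycenter.microsoft.com/api/indicators"

# Illustrative list only -- maintain your own inventory of unapproved tools.
blocked_domains = [
    "chat.example-consumer-ai.com",
    "free-ai-summariser.example.net",
]

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

for domain in blocked_domains:
    indicator = {
        "indicatorValue": domain,
        "indicatorType": "DomainName",
        "action": "Block",
        "title": f"Unapproved AI tool: {domain}",
        "description": "Blocked under the AI Acceptable Use Policy.",
        "severity": "Medium",
    }
    response = requests.post(API_URL, headers=headers, json=indicator)
    response.raise_for_status()
    print(f"Submitted block indicator for {domain}")
```

Because indicators are enforced on onboarded devices rather than at the office firewall, the block follows staff working from home or on the road.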

SharePoint Permission Remediation Before Copilot Deployment

Before deploying Microsoft 365 Copilot at any scale, conduct a SharePoint permission audit:

  • Identify sites and libraries with “Everyone” or overly broad group access
  • Apply the principle of least privilege — access based on role requirements
  • Review and remove access for former staff and contractors
  • Implement sensitivity labels to classify confidential content

This is good practice regardless of Copilot — the permissions problem exists independent of AI — but Copilot makes addressing it urgent.
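
Auditing permissions across more than a handful of sites needs tooling rather than manual clicking. As one starting point, the Microsoft Graph API can enumerate sites and flag content shared with organisation-wide or anonymous links. The sketch below assumes an app registration with the Sites.Read.All application permission and an access token obtained separately, and it only inspects the top level of each site's default document library, so treat it as a first pass rather than a complete audit.

```python
import requests

# Assumes a Microsoft Graph access token with the Sites.Read.All
# application permission, acquired separately (for example via MSAL).
ACCESS_TOKEN = "<access-token>"
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def get(url):
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Enumerate SharePoint sites in the tenant.
sites = get(f"{GRAPH}/sites?search=*").get("value", [])

for site in sites:
    drive = get(f"{GRAPH}/sites/{site['id']}/drive")
    items = get(f"{GRAPH}/drives/{drive['id']}/root/children").get("value", [])

    # Flag top-level items carrying organisation-wide or anonymous sharing links.
    for item in items:
        perms = get(
            f"{GRAPH}/drives/{drive['id']}/items/{item['id']}/permissions"
        ).get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            if scope in ("organization", "anonymous"):
                print(f"{site['displayName']} / {item['name']}: shared via {scope} link")
```

Link-based sharing is only one form of over-exposure. Membership of broad groups such as "Everyone except external users" and permissions inherited from parent sites still need their own review, which is where a structured audit earns its keep.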

Data Classification and Sensitivity Labels

Microsoft Purview sensitivity labels allow documents and emails to be classified (Public, Internal, Confidential, Highly Confidential) and protection policies applied automatically. Confidential documents can be set to prevent copy-paste, download, or sharing outside the organisation.

Labels also provide signals to Copilot about which content is sensitive — helping the AI handle confidential information appropriately.
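
Purview's built-in and trainable classifiers do the real work of identifying sensitive content, but the underlying idea is simple enough to illustrate. The sketch below is a stand-alone example of pattern-based classification that suggests a label from simple text matches; it is not an interface to Purview, and the patterns are deliberately simplistic.

```python
import re

# Illustrative patterns only -- Purview's sensitive information types
# (for example, Australian tax file numbers) are far more robust.
PATTERNS = {
    "Highly Confidential": [
        re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),   # TFN-like nine-digit number
        re.compile(r"\b(?:\d[ -]?){15,16}\b"),       # credit-card-like number
    ],
    "Confidential": [
        re.compile(r"settlement agreement", re.IGNORECASE),
        re.compile(r"salary|remuneration", re.IGNORECASE),
    ],
}

def suggest_label(text: str) -> str:
    """Return the most restrictive label whose patterns match the text."""
    for label in ("Highly Confidential", "Confidential"):
        if any(pattern.search(text) for pattern in PATTERNS[label]):
            return label
    return "Internal"

print(suggest_label("Client TFN: 123 456 789"))   # Highly Confidential
print(suggest_label("Quarterly team update"))     # Internal
```

In production, Purview auto-labelling policies apply this kind of matching at scale using sensitive information types tuned for Australian identifiers, so hand-rolled scripts like this are best kept for one-off spot checks.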


Responding to an AI Data Exposure

If you discover that staff have been submitting sensitive data to unapproved AI tools:

  1. Assess what data was submitted and to which tools
  2. Review the tool’s privacy policy and data retention practices
  3. Determine whether the exposure constitutes a notifiable data breach under the Privacy Act (unauthorised access to or disclosure of personal information that is likely to result in serious harm)
  4. If notifiable, notify the OAIC (Office of the Australian Information Commissioner) as soon as practicable after forming that view
  5. Notify affected individuals as required

The OAIC has also published guidance on privacy and the use of AI, which is a useful reference when assessing these incidents.


Getting AI Governance Right

CX IT Services helps Melbourne businesses develop AI acceptable use policies, implement technical controls on AI tool access, and conduct SharePoint permission audits prior to Copilot deployment. Book a Right Fit Call to discuss your current AI risk posture.
