Data, Security & Governance
Sensitive data and approved tools
Know what information must not be entered into unapproved AI tools and why approval matters.
5 min read · Governance
Workplace example
Public vs non-public information
Summarising a public article is normally safer than asking a public tool to draft an unreleased announcement, even if the company name is removed. The unreleased context itself may be sensitive.
What this means
- Sensitive data includes personal, confidential, commercially sensitive, strategically non-public, regulated, or access-restricted information.
- An approved AI tool has been reviewed for the organisation's use case, data handling, access, security, and policy fit.
- Removing a name does not always make information safe. Context can still reveal confidential details.
Why it matters
- Public or unapproved AI tools may store, process, or expose information in ways the organisation has not accepted.
- Data leaks can create legal, commercial, privacy, and trust harm.
- Clear data boundaries make safe AI use easier for everyone.
Common mistakes
- Pasting realistic employee or customer details into a public tool.
- Assuming anonymised strategy or deal information is safe.
- Using a personal account when the work account blocks access.
What good judgement looks like
- Check whether the tool is approved for the data and task.
- Use non-sensitive or public examples where possible.
- Ask for guidance before using AI with work data, files, or integrations.
Try this at work
- List three examples of information you handle that should not go into an unapproved AI tool.
- Find your organisation's approved-tool guidance.
- Rewrite one prompt so it uses public or fictional data instead of real sensitive data.
How this helps your reassessment
- You can identify data that should not enter an unapproved public AI tool.
- You know approval depends on the tool, data, and use case.
- You do not treat name removal as enough protection.