AI phishing and social engineering
Understand phishing, social engineering, and why AI can make deceptive messages more convincing.
5 min read · Governance
Workplace example
A personalised request
A message appears to come from a senior leader asking for urgent access to a file. AI may have been used to make the wording sound realistic. A safer response is to verify the request through a known channel before acting.
What this means
- Phishing is an attempt to trick someone into revealing information, clicking a harmful link, approving access, transferring money, or taking another unsafe action.
- Social engineering is the broader tactic of manipulating people rather than breaking directly into systems.
- AI increases the risk because it helps attackers create more persuasive, personalised messages faster and at larger scale.
Why it matters
- Old warning signs such as poor spelling are less reliable now that AI can produce polished messages.
- AI can help attackers imitate tone, roles, urgency, and business context.
- A healthy reporting culture matters because no one can spot every deceptive message.
Common mistakes
- Assuming a polished message is legitimate.
- Thinking phishing only affects technical staff.
- Clicking or responding before verifying an unusual request through a trusted channel.
What good judgement looks like
- Pause when a message creates urgency, secrecy, fear, or unusual pressure.
- Verify requests using known contact details or approved channels.
- Report suspicious messages and accidental clicks promptly.
Try this at work
- Find one example of a suspicious request pattern.
- Identify what pressure or impersonation tactic it uses.
- Write down how you would verify it safely.
How this helps your reassessment
- You can explain why AI increases phishing and social-engineering risk.
- You know phishing is not just a technical-team issue.
- You know to verify unusual requests and report concerns.