AI Foundations
What generative AI can and cannot do
Understand why AI can produce useful work quickly while still being wrong, incomplete, or overconfident.
5 min read · Foundations
Workplace example
A confident answer that still needs checking
A manager asks AI to summarise a policy and receives a neat answer with three confident recommendations. Before acting, the manager checks the policy itself and finds one recommendation was inferred rather than stated. The useful behaviour is not rejecting AI; it is checking where accuracy matters.
What this means
- Generative AI creates or transforms output from patterns, instructions, and context. It does not automatically know whether a workplace claim is true.
- A clear, confident, well-formatted answer can still include errors, missing context, or invented details.
- The practical skill is not blind trust or blind avoidance. It is knowing when AI output is a draft, when it needs checking, and when it should not be used.
Why it matters
- Many workplace mistakes happen because people treat polished wording as evidence.
- AI can speed up drafting, summarising, and structuring work, but judgement still belongs to the person and organisation using the output.
- Understanding the limits of AI is the foundation for every other readiness skill.
Common mistakes
- Assuming confidence means accuracy.
- Using AI output without checking important claims.
- Treating a fluent answer as a final decision rather than a starting point.
What good judgement looks like
- Use AI for first drafts, structure, options, and low-risk exploration.
- Check important claims against trusted sources.
- Be especially careful when output affects customers, employees, money, compliance, or reputation.
Try this at work
- Take one AI answer and highlight every factual claim.
- Mark which claims would need checking before you share the output at work.
- Rewrite the prompt to ask the AI to separate facts, assumptions, and uncertainty.
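The prompt rewrite in the last step can be sketched as a small template. This is a minimal illustration, not a standard: the helper name `build_checked_prompt` and the section wording are hypothetical, and you should adapt them to your own tool and policy.

```python
def build_checked_prompt(task: str) -> str:
    """Wrap a task in instructions that force the answer into three
    labelled sections, making unverified claims easier to spot.
    The section names below are illustrative, not a standard."""
    return (
        f"{task}\n\n"
        "Structure your answer in three sections:\n"
        "1. FACTS: only claims stated directly in the source material.\n"
        "2. ASSUMPTIONS: anything you inferred rather than read.\n"
        "3. UNCERTAIN: points you are not confident about or could not verify.\n"
        "Do not mix the sections."
    )

# Example: wrap a summarisation task before sending it to your AI tool.
print(build_checked_prompt("Summarise our leave policy for managers."))
```

An answer produced from a prompt like this is easier to check, because inferred recommendations (like the one in the workplace example above) land in the ASSUMPTIONS section rather than being presented as fact.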
How this helps your reassessment
- You can explain why confident wording is not proof of accuracy.
- You can identify when AI output is safe to use as a draft and when it needs verification.
- You keep accountability with people rather than shifting it to the tool.