Prompting & Grounding
Grounding AI in approved sources
Use source material carefully so AI answers stay inside the evidence you are allowed to rely on.
5 min read · Prompting
Workplace example
Policy-only answer
A safe instruction is: "Use only the supplied policy document. If the answer is not in it, say so and quote the relevant section."
What this means
- Grounding means giving AI a source or evidence base and asking it to answer from that material.
- Where accuracy matters, ask AI to say when the answer is not in the source rather than filling gaps with general knowledge.
- Grounding works best when the source is approved, relevant, current, and safe to use in the selected tool.
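The points above can be sketched as a reusable prompt builder. This is a minimal sketch, not a definitive template: the function name, the delimiter markers, and the exact instruction wording are all hypothetical and should be adapted to your approved tool.

```python
# Sketch of a grounded prompt builder (hypothetical names and wording).
def grounded_prompt(source_text: str, question: str) -> str:
    """Build a prompt that confines the AI to the supplied source."""
    return (
        "Use only the source document below to answer.\n"
        "If the answer is not in it, say 'Not covered by the source' "
        "and do not fill the gap with general knowledge.\n"
        "Quote the relevant section for each claim.\n\n"
        f"--- SOURCE ---\n{source_text}\n--- END SOURCE ---\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "Leave requests require five working days' notice.",
    "How much notice do I need to give for leave?",
)
print(prompt)
```

The explicit fallback instruction ("say 'Not covered by the source'") is what makes gaps visible instead of silently filled.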
Why it matters
- AI can invent plausible policy answers when the source is incomplete.
- Grounding makes review easier because you can compare output against evidence.
- It reduces the risk of relying on outdated, generic, or unsupported advice.
Common mistakes
- Letting AI infer missing details in a policy answer.
- Using unapproved or sensitive source material in a public tool.
- Removing caveats to make the answer sound clearer.
What good judgement looks like
- Choose approved source material before asking for factual output.
- Ask AI to separate source-backed facts from assumptions.
- Check quoted or cited material before using the answer.
Try this at work
- Give AI a short public policy or article.
- Ask it to answer only from that source.
- Check whether every important point is supported by the source.
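The final check above can be roughly automated as a first pass. The sketch below is a naive keyword-overlap screen, assuming you have the answer as a list of points; it flags points that share no key words with the source. It is a rough filter only and does not replace reading the source yourself.

```python
# Naive support check: flag answer points with no word overlap with the source.
# A rough first pass, not a substitute for human review of the source.
def unsupported_points(answer_points, source_text):
    source = source_text.lower()
    flagged = []
    for point in answer_points:
        # Take the point's longer words as crude key terms.
        words = [w.strip(".,") for w in point.lower().split() if len(w) > 4]
        # Flag the point if none of its key terms appear in the source.
        if not any(w in source for w in words):
            flagged.append(point)
    return flagged

source = "Staff may work remotely up to three days per week with manager approval."
points = [
    "Remote work is capped at three days per week.",
    "Employees receive a home-office stipend.",
]
print(unsupported_points(points, source))
# → ['Employees receive a home-office stipend.']
```

A point passing this screen is not proof of support; exact quotes still need checking against the source, as the example instruction at the top of this page recommends.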
How this helps your reassessment
- You know how to instruct AI to stay within a source.
- You can spot unsupported additions.
- You understand why source boundaries matter for workplace reliability.