Evaluation & Human Judgement
Trusted sources and unsupported claims
Know what to do when an AI tool makes a strong claim without evidence, or when two outputs disagree.
Workplace example
Unsupported forecast
Two AI tools disagree on a quarterly forecast. A capable employee asks what data each tool used, what assumptions each made, and whether the source quality is strong enough for the decision at hand.
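One way to make that questioning concrete is to record each forecast's provenance alongside its value, so a disagreement can be traced to data and assumptions rather than settled by which answer sounds better. The sketch below is a minimal illustration, not a real tool's output; `ForecastRecord`, its fields, and all figures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ForecastRecord:
    """Provenance for one AI-generated forecast (hypothetical structure)."""
    tool: str
    value: float                        # the forecast itself
    source_data: str                    # what data the tool says it used
    as_of: str                          # how current that data is
    assumptions: list[str] = field(default_factory=list)

def differing_assumptions(a: ForecastRecord, b: ForecastRecord) -> list[str]:
    """List the assumptions made by one forecast but not the other."""
    return sorted(set(a.assumptions) ^ set(b.assumptions))

# Hypothetical records for the two disagreeing tools.
tool_a = ForecastRecord("Tool A", 1_200_000, "approved sales warehouse", "2024-06",
                        ["seasonality applied", "returns excluded"])
tool_b = ForecastRecord("Tool B", 950_000, "public market report", "2023-12",
                        ["seasonality applied", "returns included"])

# The disagreement now has a traceable cause: different data vintages
# and a conflicting assumption about returns.
print(differing_assumptions(tool_a, tool_b))  # ['returns excluded', 'returns included']
```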
What this means
- A strong claim needs evidence before it is used in work.
- If two AI tools disagree, inspect their assumptions, source quality, and evidence rather than choosing the shorter or more polished answer.
- Trusted sources depend on the task: policy documents, approved data, expert review, current records, or authoritative public sources.
Why it matters
- AI can state unsupported claims confidently.
- Conflicting outputs often reveal hidden assumptions or weak source material.
- Important decisions need traceable evidence, not just plausible text.
Common mistakes
- Accepting claims because they match what the team expected.
- Averaging two AI forecasts without understanding their assumptions (see the sketch after this list).
- Asking the AI for a confidence score instead of checking the evidence.
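A small worked example shows why naive averaging misleads. The numbers below are hypothetical: two forecasts of the same quarter built on incompatible assumptions about product returns. Averaging them produces a figure defined on neither assumption; reconciling the assumption first gives a comparison that means something.

```python
forecast_a = 1_200_000   # hypothetical: excludes product returns
forecast_b = 950_000     # hypothetical: includes product returns

naive_average = (forecast_a + forecast_b) / 2
print(naive_average)     # 1075000.0 -- valid under neither assumption

# Reconcile the assumption first, then compare like with like.
estimated_returns = 180_000        # hypothetical figure from approved data
forecast_b_excluding = forecast_b + estimated_returns
print(forecast_b_excluding)        # 1130000 -- now comparable with forecast_a
```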
What good judgement looks like
- Verify strong claims before use.
- Ask for sources, assumptions, and uncertainty.
- Use subject-matter review when the evidence is weak or stakes are high.
Try this at work
- Find one factual claim in an AI answer.
- Check it against a trusted source.
- Write down whether the claim is supported, unsupported, or uncertain (a simple log format is sketched below).
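If it helps to keep the habit consistent, the verdict can be captured in a small log. This is a minimal sketch under stated assumptions: `ClaimCheck`, its fields, and the example entry are all hypothetical, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Literal

Status = Literal["supported", "unsupported", "uncertain"]

@dataclass
class ClaimCheck:
    """One row of a claim-verification log (hypothetical format)."""
    claim: str             # the factual claim as the AI stated it
    checked_against: str   # the trusted source used to verify it
    status: Status         # the verdict after checking
    note: str = ""         # why that verdict was reached

log = [
    ClaimCheck(
        claim="Regional sales grew 8% in Q3",
        checked_against="approved sales warehouse, Q3 close report",
        status="unsupported",
        note="Warehouse shows 5% growth; the AI cited no source for 8%.",
    ),
]

for entry in log:
    print(f"{entry.status.upper():<12} {entry.claim}")
```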
How this helps your reassessment
- You know how to respond to unsupported claims.
- You inspect assumptions when outputs conflict.
- You treat evidence quality as more important than tone.