Class 5 · CBSE AI · Strand C — Talking to AI: Prompting as Problem Structuring
AI hallucinations — when AI makes things up
Why even the best AI sometimes invents wrong answers — and how to spot them.
The student who always has an answer
Think of a classmate who always raises their hand and answers confidently — but sometimes their answer is totally wrong. They're not lying, they just have a habit of answering before they're sure. Now imagine that classmate could speak ten times faster than anyone else and never hesitated or said 'I'm not sure.' That's AI hallucination — fluent confidence without accuracy.
A very enthusiastic auto driver giving directions
An auto driver in a new city might give you detailed, confident directions even when they're not quite sure — because stopping to say 'I don't know' feels uncomfortable. They might get you close, or they might send you to completely the wrong place. You'd always double-check with someone else, right? Same with AI facts.
Every Dhee session for this concept follows three stages. We share the questions Dhee actually asks, so you can hear what a session sounds like.
Stage 1 — Surface
Have you ever confidently told someone something that turned out to be wrong — not because you lied, but because you genuinely believed it? How do you think that felt for the person who relied on your information?
Rote answer
"Child says 'hallucination is when AI lies' — conflating deliberate deception with statistical pattern completion"
Understood
"Child grasps that the AI isn't lying — it genuinely 'thinks' its output is correct, which makes hallucination more dangerous because there's no visible warning sign"
Stage 2 — Reasoning
Why do you think an AI might invent a fact — like making up the name of a scientist who doesn't exist — rather than just saying 'I don't know'?
Follow-up Dhee may use: If the AI doesn't know it's wrong, and sounds completely confident, what does that mean for how you should use AI for research or for finding facts?
Stage 3 — Application
You ask AI to tell you three facts about Aryabhata, the Indian mathematician. It gives you three statements. One is famous and easily verified. One sounds plausible but unusual. One seems surprising and cites a date. Which one should you be most worried about — and why?
Misconception Dhee watches for: Child assumes the most specific and detailed answer is the most accurate because it 'sounds like it did research'
Spark turns this concept into a 15-minute spoken session — asking, listening, and probing — so your child builds the idea themselves.
Common misconception: AI only hallucinates on obscure topics. In reality, it can hallucinate about well-known topics too.
Dhee opens with a question — for example: "Have you ever confidently told someone something that turned out to be wrong — not because you lied, but because you genuinely believed it? How do you think that felt for the person who relied on your information?" — listens to your child's answer, then probes the reasoning behind it. The session ends when the child can apply the idea to a brand-new situation, not just recall it.