AI hallucinations aren’t magic. More often than not, they’re caused by vague prompts. Fix your input, fix the output.
Why Hallucinations Happen (Plain English Version)
Hallucinations occur when an AI model generates information that isn’t grounded in facts. This usually happens for one of three reasons:
Lack of constraints. You didn’t set boundaries, so the model filled the gaps with plausible-sounding nonsense.
Lack of grounding. You didn’t provide enough context, so the model invented what it needed.
Overly broad instructions. You asked for “everything about X,” so the model generated generic filler to meet your request.
Hallucinations aren’t random. They’re predictable responses to unclear inputs.
The Three Causes of Hallucination You Can Control
1. Missing info. You left out critical details, so the model guessed. If you ask “Write a report on sales,” the model has to invent which sales, which period, and which format.
2. Ambiguous request. You asked for something vague (“Write something persuasive”), so the model defaulted to patterns in its training data, which may not match your intent.
3. Wrong type of task. You asked the model to do something it’s not designed for. Asking for precise financial data or obscure historical facts invites hallucination because the model prioritises fluency over accuracy.
The Anti-Hallucination Prompt Structure
Use this four-line structure:
Task: [Specific action you want]
Context: [Background information the model needs]
Constraints: [Boundaries—length, format, tone]
Grounding: [Facts, data, or sources the model should reference]
Example:
Task: Write a 300-word summary of Q3 sales performance.
Context: This is for the board meeting. They care about year-on-year growth and regional performance.
Constraints: Use three bullet points—one for overall growth, one for top-performing region, one for underperforming region. Each bullet under 100 words.
Grounding: Q3 sales were £450,000, up 12% year-on-year. North region grew 18%, South region declined 3%.
With this structure, the model has almost no room to invent.
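If you build prompts in code rather than by hand, the same structure translates directly. Below is a minimal Python sketch; the `PromptBrief` name and its fields are illustrative, not part of any library. It assembles the four fields into a single prompt string you can pass to whatever model you use:

```python
from dataclasses import dataclass

@dataclass
class PromptBrief:
    """One string per anti-hallucination field."""
    task: str
    context: str
    constraints: str
    grounding: str

    def render(self) -> str:
        """Assemble the four fields into the four-line prompt."""
        return (
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Constraints: {self.constraints}\n"
            f"Grounding: {self.grounding}"
        )

# The Q3 sales example from above, rebuilt as a brief.
brief = PromptBrief(
    task="Write a 300-word summary of Q3 sales performance.",
    context="This is for the board meeting. They care about "
            "year-on-year growth and regional performance.",
    constraints="Three bullet points: overall growth, top region, "
                "underperforming region. Each under 100 words.",
    grounding="Q3 sales were £450,000, up 12% year-on-year. "
              "North region grew 18%, South region declined 3%.",
)
print(brief.render())
```

Keeping the fields separate, rather than writing one long paragraph, makes it obvious at a glance when grounding is missing.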
Before/After Comparisons
Example 1
Before: “Write about our company’s achievements.”
After: “Write a 200-word summary of our company’s achievements in 2024, focusing on revenue growth, new clients, and product launches. Use bullet points. Base it on the following data: revenue grew 15% to £2.3m, we signed 12 new clients, and we launched two products.”
Example 2
Before: “Summarise this meeting.”
After: “Summarise this meeting in three bullet points, each under 30 words, focusing on decisions made, actions assigned, and deadlines. Use only information from the meeting notes provided.”
Example 3
Before: “Write a product description.”
After: “Write a 150-word product description for a waterproof hiking jacket, emphasising durability and breathability, aimed at outdoor enthusiasts aged 30–50. Use a conversational tone. Base it on these features: 100% waterproof, reinforced seams, lightweight at 400g.”
In each case, the revised prompt eliminates opportunities for the model to guess or invent.
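One detail worth making explicit: instructions like “Use only information from the meeting notes provided” (Example 2) work best when the source material is clearly delimited. Here’s a hedged sketch of that pattern; the marker strings are a convention chosen for illustration, not something any particular model requires:

```python
def grounded_prompt(instruction: str, source: str) -> str:
    """Wrap source material in explicit markers so the model can be
    told to draw only on the text between them."""
    return (
        f"{instruction}\n"
        "Use only the information between the markers below. "
        "If something is not in the notes, say so instead of guessing.\n"
        "--- NOTES START ---\n"
        f"{source}\n"
        "--- NOTES END ---"
    )

print(grounded_prompt(
    "Summarise this meeting in three bullet points, each under 30 "
    "words, focusing on decisions made, actions assigned, and deadlines.",
    "[paste meeting notes here]",
))
```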
The “Before You Prompt” Checklist
Before you hit send, ask:
- Have I provided enough context?
- Have I set clear boundaries?
- Have I specified what the model should base its response on?
- Have I limited the scope to prevent guessing?
If the answer to any question is no, revise your prompt.
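If your prompts are generated programmatically, the checklist can be enforced in code before anything reaches a model. A minimal sketch, assuming a brief stored as a plain dictionary (the field names mirror the structure above and are my own naming):

```python
def missing_fields(brief: dict[str, str]) -> list[str]:
    """Return the checklist fields that are absent or empty."""
    required = ("task", "context", "constraints", "grounding")
    return [field for field in required if not brief.get(field, "").strip()]

# A brief with no grounding fails the check, so revise before sending.
draft = {
    "task": "Write a report on sales.",
    "context": "For the Q3 board meeting.",
    "constraints": "Three bullet points, under 100 words each.",
}
assert missing_fields(draft) == ["grounding"]
```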
PreStep Defines Constraints Automatically So Hallucinations Drop
PreStep walks you through building grounded, constrained prompts every time. Answer a few questions, get a structured brief, feed it to any AI. Fewer hallucinations. Less wasted time.