
Socratic prompting

Published on 03/16/2026

Why questioning an AI can produce better results than giving orders
Giving the AI an instruction tells it what to do, but not how to think. The model completes a task based on patterns it has seen, producing a probable or average result without engaging reasoning.
Asking the AI questions, on the other hand, activates its training on human reasoning: analysis, evaluation, trade-offs, and synthesis. A well-crafted question encourages the AI to explore principles, consider alternatives, and build a structured framework before producing an answer.
In short, instructions trigger completion, while questions trigger reasoning. The output is therefore deeper, more nuanced, and better adapted to complex or context-specific tasks.
You give your AI orders. The best users ask it questions.
Most people use LLMs like vending machines: you issue an instruction and wait for an output. It's understandable and intuitive. And it's often insufficient. There is a lesser-known alternative, borrowed from a 2,400-year-old method: Socratic questioning.
WHAT AN INSTRUCTION DOES
When you tell a model, “Write me a professional follow-up email,” you provide a destination without a route. The model executes. It produces something correct, probable, average in statistical terms. It hasn’t thought. It has completed.
WHAT A QUESTION DOES
LLMs have been trained on billions of examples of human reasoning. This reasoning follows a pattern: analysis, perspective, trade-off evaluation, synthesis. A well-formed question activates this pattern. An instruction bypasses it. When you ask, “What makes a follow-up email effective?” the model doesn’t complete a template. It traces the causal chain. It seeks principles before producing.
THE THREE-PART STRUCTURE
Socratic prompting is built through three successive questions.
The first is theoretical. It targets fundamentals: “What makes this type of content effective?” It forces the model to set a framework before acting.
The second is methodological. It asks which principles or approaches apply to the situation. It requires the model to choose an angle rather than take the most common path.
The third is applied. It says: now, apply this reasoning to my specific case. At this stage, the model no longer starts from zero. It starts from structured thinking.
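To make the pattern concrete, here is a minimal sketch of the three questions as one multi-turn conversation, using the OpenAI Python SDK. The model name, the ask() helper, and the specific question wording are illustrative assumptions, not a prescribed implementation; any chat-completion API with conversation history would work the same way.

```python
# The three-part structure as a single multi-turn conversation.
# Sketch only: model name, ask() helper, and question wording are
# illustrative assumptions (requires openai>=1.0 and an API key).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages, question, model="gpt-4o"):
    """Send a question and keep both it and the answer in the context."""
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

history = []  # accumulates the reasoning the final answer builds on

# 1. Theoretical: set the framework before acting.
ask(history, "What makes a follow-up email effective?")
# 2. Methodological: choose an angle for this situation (hypothetical scenario).
ask(history, "Which of those principles apply to a follow-up sent after "
             "a first sales call with a hesitant prospect?")
# 3. Applied: produce from structured thinking, not from zero.
print(ask(history, "Now apply that reasoning to write my follow-up email."))
```

Each call leaves both the question and the model's answer in the shared history, so by the third turn the production request is conditioned on the framework the model itself laid out.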
WHY IT WORKS
It’s not magic. It’s mechanics. A language model generates tokens based on preceding context. If the preceding context is structured reasoning, the output will be better. Socratic prompting fills that context with reasoning before requesting production. Instructions skip this step. Questions make it mandatory.
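One way to see this is to compare what actually occupies the context window at generation time in the two approaches. In the sketch below, the assistant contents are hypothetical placeholders standing in for whatever the model answered in the earlier turns.

```python
# What the model conditions on when asked to produce. The assistant
# contents are placeholders for the model's actual intermediate answers.
instruction_context = [
    {"role": "user", "content": "Write me a professional follow-up email."},
]

socratic_context = [
    {"role": "user", "content": "What makes a follow-up email effective?"},
    {"role": "assistant", "content": "<the model's analysis of principles>"},
    {"role": "user", "content": "Which principles apply to my situation?"},
    {"role": "assistant", "content": "<the model's chosen angle>"},
    {"role": "user", "content": "Now write my follow-up email."},
]
# Same final request; only the second context conditions generation
# on two turns of structured reasoning.
```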
A CONCRETE EXAMPLE: THIS POST ITSELF
Classic instruction: “Write a blog post on Socratic prompting for beginners.”
Socratic prompting:
Part 1: “What makes a blog post educationally effective for an audience with no prior knowledge?”
Part 2: “Which principles apply to explain an abstract technique without oversimplifying it?”
Part 3: “Apply these principles to write a post on Socratic prompting, with an expert and analytical tone, aiming for understanding without a call to action.”
The result from the Socratic sequence is structurally different. The model built a framework before writing. The direct instruction would have led it to fill a generic template.
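Wired up with the ask() helper from the sketch above, the sequence is simply three calls against the same history; the question strings are copied verbatim from the three parts listed here.

```python
# The three parts above, chained as successive turns.
# Reuses ask() and client from the earlier sketch.
history = []
ask(history, "What makes a blog post educationally effective for an "
             "audience with no prior knowledge?")
ask(history, "Which principles apply to explain an abstract technique "
             "without oversimplifying it?")
post = ask(history, "Apply these principles to write a post on Socratic "
                    "prompting, with an expert and analytical tone, "
                    "aiming for understanding without a call to action.")
```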
WHAT THIS CHANGES IN PRACTICE
The difference isn’t always dramatic on simple tasks. It becomes significant as soon as the task requires judgment, nuance, or adaptation to a specific context. Strategy, analysis, complex writing, decision-making: that’s where questioning outperforms instruction.
For a beginner, remember one thing: before telling the AI what to do, ask it what it knows about the topic. The response you get afterward will be qualitatively different.
