A prompt is the instruction you give to an AI — ChatGPT, Claude AI, Gemini or any other LLM — to get a response. The better structured and more precise the prompt, the better the response. This guide covers the methods that actually work, without unnecessary jargon.
An LLM (Large Language Model) is a program trained on massive amounts of text: books, articles, web pages, source code. From this exposure, it has learned to recognize structures, word associations, and reasoning patterns. When you ask it a question, it does not search a database for an answer. It predicts, token by token, the most statistically likely continuation of what you have written.
ChatGPT, Claude AI and Gemini are interfaces built on these models. The AI you use every day is an LLM with a conversational interface and additional training to behave like an assistant. It is not intelligence in the human sense: it is a very powerful statistical system, capable of producing coherent, nuanced, and sometimes completely wrong text with equal confidence.
This last point is essential. An LLM does not know what it does not know. It does not have real-time access to the internet (unless the feature is explicitly enabled), its knowledge of the world stops at its training cutoff, and it can invent facts, citations or non-existent sources — this is called a hallucination. Understanding this changes how you use it: as a competent collaborator to review, not as a source of truth.
Every effective prompt rests on three elements: the role you assign to the AI, the task you entrust it with, and the expected output format.
Act as [Role] → Create a [Task] → Display as [Format]
This structure applies to virtually all use cases.
1. The role assigned to the AI
It guides the tone, style and level of expertise of the response. Example: "Act as an expert in marketing strategy and communication…"
2. The objective
To tell the AI exactly what you expect. Example: "…I want to create a communication plan for a designer belt bag…"
3. Context or constraints
To adapt the response to your actual situation. Example: "…the product targets a 25-to-35-year-old audience, with an emphasis on creative and trendy aspects…"
4. Target audience
To calibrate vocabulary and technicality. Example: "…for an audience of young urban adults interested in fashion and design…"
5. Output format
To receive a directly usable response. Example: "…Present the response as a structured plan with a 3-month calendar…"
6. Additional instructions
To refine further. Example: "…Give me the priority channels, types of content, and key moments for each channel."
7. Specific examples
Even more precision for specific expectations. Example: "…I want to incorporate strong visuals, creative videos and local influencer testimonials."
8. Critical thinking
Always question what the AI produces. It has been trained to satisfy the user: its primary objective is to produce what it estimates you expect, not necessarily what is correct. It can hallucinate, inventing facts, sources, and figures with complete apparent confidence. Systematically verify factual information.
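The elements above can also be assembled programmatically when you reuse the same structure often. A minimal sketch in plain Python (the function and field names are illustrative, not from any library):

```python
def build_prompt(role, objective, context="", audience="",
                 output_format="", extras="", examples=""):
    """Assemble a structured prompt from the elements described above.

    Only the role and the objective are required; every other field
    is appended only when provided. All names here are illustrative.
    """
    parts = [
        f"Act as {role}.",
        f"Task: {objective}.",
    ]
    if context:
        parts.append(f"Context: {context}.")
    if audience:
        parts.append(f"Audience: {audience}.")
    if output_format:
        parts.append(f"Format: {output_format}.")
    if extras:
        parts.append(f"Additional instructions: {extras}.")
    if examples:
        parts.append(f"Examples of what I want: {examples}.")
    return " ".join(parts)

# Reusing the running example from this guide:
prompt = build_prompt(
    role="an expert in marketing strategy and communication",
    objective="create a communication plan for a designer belt bag",
    context="the product targets a 25-to-35-year-old audience",
    output_format="a structured plan with a 3-month calendar",
)
print(prompt)
```

The point of the sketch is the ordering: role first, then task, then every constraint the model needs before it starts generating.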
Rather than giving a direct order, ask a question. LLMs have been trained on billions of examples of human reasoning: a well-formulated question activates that reasoning, a direct instruction short-circuits it.
The method has three stages. First a theoretical question: "What makes a communication plan effective for a creative fashion product?" Then a methodological question: "What principles apply to reach an urban 25-35-year-old audience sensitive to design?" Finally an applied question: "Now apply these principles to build a communication plan for this designer belt bag."
At this point, the model starts from reasoning it has itself constructed. The quality of the final response is significantly better, particularly on complex tasks: strategy, analysis, high-value writing. For a beginner, a single habit is enough: before telling the AI what to produce, ask it what it knows about the subject.
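The three stages can be kept as a reusable template and sent as successive turns in the same conversation. A sketch under the assumption that you fill in your own topic and product (the template strings and function name are illustrative):

```python
# The three stages of the questioning method, as reusable templates.
# The placeholders are illustrative; adapt them to your own use case.
STAGES = [
    "What makes a {deliverable} effective for {topic}?",
    "What principles apply to reach {audience}?",
    "Now apply these principles to build a {deliverable} for {product}.",
]

def staged_prompts(topic, audience, deliverable, product):
    """Return the three questions to send, in order, in one conversation.

    Sending them as separate turns lets the model build its own
    reasoning before it produces the final deliverable.
    """
    values = dict(topic=topic, audience=audience,
                  deliverable=deliverable, product=product)
    return [stage.format(**values) for stage in STAGES]

# The running example from this guide:
for question in staged_prompts(
    topic="a creative fashion product",
    audience="an urban 25-35-year-old audience sensitive to design",
    deliverable="communication plan",
    product="this designer belt bag",
):
    print(question)
```

Each returned string is one message; the order matters more than the exact wording.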
Researchers have made progress in understanding the internal workings of large language models. Interpretability work suggests that a model reasons in an abstract conceptual space rather than in any single natural language. It thinks through concept associations, plans several steps ahead, and often works backwards from a conclusion, which can produce plausible but factually wrong reasoning.
It has been trained by a reward system whose objective is to satisfy the user. It gets to know you and adapts its responses accordingly. It is not honest by nature: it can invent a credible line of reasoning to arrive at the answer it considers expected. This bias is structural. Being aware of it changes how you use it.
Assume that any information you give the AI is stored. On most platforms, you can ask the AI what it retains about you and request that certain data be deleted. Take the time to verify what it stores, particularly for sensitive data. Explore the prompt library for ready-to-use prompts built on these methods.