Clarity and Specificity
The single most impactful improvement you can make to any prompt is to make it clearer and more specific. Vague prompts produce vague outputs. Specific prompts produce predictable, useful outputs. This lesson establishes the core principles of prompt clarity and gives you practical techniques to apply immediately.
The Ambiguity Problem
LLMs are trained to predict the next token given all previous tokens. A vague prompt like "write something about AI" has thousands of valid continuations — a poem, a technical paper, a business memo, a science fiction story. The model makes a probabilistic choice, which is why the same vague prompt produces wildly different outputs across runs.
Specificity reduces the model's "degrees of freedom" — the range of valid continuations. The more constrained the prompt, the more consistent the output.
The Five Dimensions of Specificity
1. Task: State exactly what you want the model to do. Use action verbs: summarize, classify, extract, generate, rewrite, compare, explain.
Vague: "Do something with this email"
Specific: "Classify this email as: Bug Report, Feature Request, or General Inquiry"
2. Audience: Who is the output for? Level of expertise, vocabulary, and depth change dramatically based on audience.
Vague: "Explain neural networks"
Specific: "Explain neural networks to a business executive with no technical background. Use an analogy to something from everyday business life. Maximum 150 words."
3. Format: Describe the desired structure, including length, sections, lists vs. prose, and JSON vs. markdown.
Vague: "List the benefits"
Specific: "List exactly 5 benefits as numbered bullet points. Each bullet: one sentence, under 20 words, starting with a verb."
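Format constraints like these are also machine-checkable, which is part of their value. As a sketch (the function name and the exact violation messages are illustrative, not from any library), a validator for the numbered-list constraints above might look like:

```python
import re

def check_benefit_list(output: str) -> list[str]:
    """Check a model's output against the numbered-list format constraints.

    Returns a list of violation messages; an empty list means it passes.
    ("Starts with a verb" is hard to verify mechanically, so it is skipped.)
    """
    violations = []
    lines = [l.strip() for l in output.strip().splitlines() if l.strip()]
    # Keep only lines that look like numbered items, e.g. "1. Reduces costs"
    numbered = [l for l in lines if re.match(r"^\d+\.\s", l)]
    if len(numbered) != 5:
        violations.append(f"expected exactly 5 numbered items, found {len(numbered)}")
    for item in numbered:
        text = re.sub(r"^\d+\.\s*", "", item)
        if len(text.split()) >= 20:
            violations.append(f"item exceeds 20 words: {text[:40]}")
    return violations
```

A checker like this turns a format constraint into a retry signal: if violations come back non-empty, you can re-prompt automatically instead of eyeballing the output.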
4. Constraints: What should be excluded? What limits apply?
Vague: "Write a product description"
Specific: "Write a product description for a noise-canceling headphone. Include: comfort, sound quality, battery life. Exclude: price comparisons. Length: 80-100 words. Tone: professional but warm."
5. Output validation: Tell the model how to signal uncertainty or what to do when it cannot complete the task.
"If you cannot confidently answer the question, say 'I don't have enough information' rather than guessing."
Practical Before/After Examples
Example 1: Code generation
Before: "Write a function"
After: "Write a Python function `parse_date(date_str: str) -> datetime | None` that:
- Accepts ISO 8601 format (YYYY-MM-DD) and US format (MM/DD/YYYY)
- Returns None (not an exception) for invalid inputs
- Includes a docstring and type hints
- Does not use external libraries"
Example 2: Content analysis
Before: "Analyze this feedback"
After: "Analyze this customer feedback and return a JSON object with:
- sentiment: 'positive' | 'negative' | 'neutral' | 'mixed'
- main_topics: string[] (1-3 topics mentioned)
- action_required: boolean (true if the feedback requires a response)
- priority: 'high' | 'medium' | 'low'"
Example 3: Creative writing
Before: "Write a story"
After: "Write a 200-word short story in second person ('you') about an AI agent that makes a mistake and corrects it. Genre: professional drama. Tone: thoughtful, not humorous. End on a hopeful note."
Testing Specificity
A good rule of thumb: if you showed your prompt to 10 different people and asked them to write an output that satisfies it, would they produce similar results? If the answer is no, the prompt needs more specificity.
You can also test specificity computationally: run the same prompt 5 times at temperature 1.0 and measure variance in the outputs. High variance = insufficient specificity.
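One rough way to quantify that variance, sketched with the standard library (the function name is illustrative; collect the outputs from your model however you normally call it):

```python
from difflib import SequenceMatcher
from itertools import combinations

def output_variance(outputs: list[str]) -> float:
    """Mean pairwise dissimilarity across runs of the same prompt.

    0.0 means every run produced identical text; values near 1.0 mean
    the runs share almost nothing. High values suggest the prompt is
    under-specified.
    """
    if len(outputs) < 2:
        return 0.0
    dissims = [1 - SequenceMatcher(None, a, b).ratio()
               for a, b in combinations(outputs, 2)]
    return sum(dissims) / len(dissims)
```

Character-level similarity is a crude proxy (two paraphrases of the same answer score as "different"), so treat the number as a signal to investigate, not a verdict. Embedding-based similarity gives a finer-grained measure if you need one.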