Prompt Design Principles

Role and Context Framing

Role framing — telling the model who it is and what context it's operating in — is one of the most powerful techniques in your prompting toolkit. The right role doesn't just change tone; it activates specific clusters of knowledge and reasoning patterns that make the model more accurate and useful for your task.

Why Role Framing Works

During training, LLMs were exposed to millions of examples of experts in various fields — doctors answering medical questions, lawyers reviewing contracts, senior engineers doing code reviews, teachers explaining concepts. These examples created distinct behavioral patterns associated with different roles.

When you assign a role in a system prompt, the model's attention activates patterns from training examples that match that role context. A "senior security engineer" persona will draw on different reasoning patterns than a "helpful assistant" persona even for the same input — because the training data associated with those roles is different.

This isn't magic — it's pattern activation. The effect is strongest for roles that are well-represented in training data and weakest for roles the model has seen few examples of.

System Prompt Role Framing

from openai import OpenAI

client = OpenAI()

def create_specialized_expert(role: str, domain_context: str) -> str:
    """Generate a high-quality system prompt for a specialized expert."""
    return f"""You are {role}.

{domain_context}

When answering questions:
- Draw on your expertise to provide technically accurate information
- Acknowledge uncertainty when you're not confident
- Use terminology appropriate for the domain
- Structure your responses clearly with headers when explaining complex topics"""

# Security expert
security_system = create_specialized_expert(
    role="a senior application security engineer with 10 years of experience in penetration testing and secure code review",
    domain_context="You specialize in web application security, OWASP Top 10 vulnerabilities, and secure development practices. You think in terms of threat models and risk severity."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": security_system},
        {"role": "user", "content": "Review this authentication code for security issues:\n```python\ndef login(username, password):\n    user = db.query(f\"SELECT * FROM users WHERE username='{username}'\")\n    if user and user.password == password:\n        return create_session(user)\n```"}
    ]
)

print(response.choices[0].message.content)

Context Framing: Setting the Scene

Beyond the role, providing rich contextual information dramatically improves output quality. Context answers questions the model would otherwise have to guess at:

  • Who is the user? (expertise level, needs)
  • What is the end goal? (not just the immediate task)
  • What constraints apply? (technical, organizational, legal)
  • What has been tried already? (avoid suggesting already-failed approaches)

# Minimal context — model must guess at many things
minimal_prompt = "How should I structure my database?"

# Rich context — model can give genuinely useful advice
rich_context_prompt = """
Context: I'm building a SaaS application for managing legal documents. 
- Expected scale: ~500 law firms, ~200 documents per firm on average
- Tech stack: Node.js backend, PostgreSQL
- Key access patterns: 
  1. List documents by firm (most frequent, requires fast filtering)
  2. Full-text search within a firm's documents (requires search indexing)
  3. Audit logs — every document access must be recorded
- Compliance: Documents must be isolated per firm (row-level security)
- Team: 2 backend engineers, no dedicated DBA

Given this context, how should I structure the database schema? 
Include: table definitions, index strategy, and RLS approach.
"""

Role-Context Combinations

Some powerful combinations:

| Role | Context Addition | Best For |
| --- | --- | --- |
| "Senior data scientist" | "Working with a non-technical stakeholder" | Translating technical insights |
| "DevOps engineer" | "In a startup with no dedicated SRE team" | Pragmatic infrastructure advice |
| "Technical writer" | "Writing for developers new to the codebase" | Clear API documentation |
| "Code reviewer" | "For a PR that will be merged today" | Actionable, prioritized feedback |
Negative Framing: What NOT to Do

Sometimes it's as important to specify what the model should not do:

system_prompt = """You are a customer service assistant for a financial services company.

You MUST NOT:
- Provide specific investment advice or recommend specific securities
- Make promises about account changes you cannot verify
- Discuss competitor products
- Share customer account details in your responses

You SHOULD:
- Empathize with frustrated customers
- Escalate to human agents for disputes over $500
- Refer customers to our fee schedule page for pricing questions"""
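Hard MUST NOT rules are worth backing up with a post-generation check before a reply reaches the customer. The keyword screen below is a rough illustrative sketch (the patterns and the `violates_guardrails` name are made up for this example), not a production content filter:

```python
import re

# Illustrative patterns only; a real system would use a tuned
# classifier or moderation endpoint rather than regexes.
BANNED_PATTERNS = [
    r"\bbuy\b.*\bstock\b",    # looks like specific investment advice
    r"\baccount number\b",    # potential account-detail leak
]

def violates_guardrails(reply: str) -> bool:
    """Return True if a draft reply matches any banned pattern."""
    return any(re.search(p, reply, re.IGNORECASE) for p in BANNED_PATTERNS)
```

A reply that trips the check can be regenerated or routed to a human agent instead of being sent.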

Testing Your Role Frame

The best way to evaluate role framing is A/B testing:

# Compare generic vs. role-framed responses
def get_response(system: str, user: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return completion.choices[0].message.content

question = "What are the risks of storing passwords as MD5 hashes?"

generic_response = get_response(
    system="You are a helpful assistant.",
    user=question
)

expert_response = get_response(
    system="You are a senior application security engineer specializing in identity and authentication security.",
    user=question
)

# The expert response should be more technically precise,
# mention specific attack vectors (rainbow tables, GPU cracking),
# and provide concrete mitigations (bcrypt, Argon2, scrypt)

Role framing is nearly free: it costs only a handful of extra tokens in the system prompt, and its impact on output quality is significant enough that it should be part of every production system prompt.