Project Setup and Architecture

Scoping Your Agent Project


Before writing a single line of code, you need to clearly define what your agent will do, what it won't do, and how you'll measure success. Poor scoping is the number one reason agent projects fail — not bad code, but unclear requirements leading to an agent that's impressive in some scenarios and useless in others.

The Agent Scoping Framework

Use these five questions to scope your agent:

1. What is the core value proposition? One sentence: "This agent helps [user] do [task] by [mechanism]."

Bad: "An AI assistant that helps with productivity." Good: "This agent helps software engineers triage GitHub issues by automatically classifying them by component and severity using the issue title and description."

2. What inputs does it receive? Be specific about format, source, and variability:

  • Plain text from users? Structured API payloads? File uploads?
  • How long? What languages? What encoding?
  • How predictable is the input format?

3. What outputs does it produce? Define measurable outputs:

  • Text responses? Structured JSON? Actions in external systems?
  • What does "good" look like? How do you measure it?
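One way to make "good" measurable is to constrain the agent to a schema you can validate in code. Here is a minimal sketch for the issue-triage example above; the field names and the `COMPONENTS` / `SEVERITIES` value sets are illustrative assumptions, not part of any real API:

```python
from dataclasses import dataclass

# Assumed value sets for illustration -- yours would come from your project.
COMPONENTS = {"api", "ui", "database"}
SEVERITIES = {"low", "medium", "high", "critical"}

@dataclass
class TriageResult:
    component: str
    severity: str
    confidence: float  # 0.0-1.0; lets you route low-confidence cases to a human

def validate(result: TriageResult) -> bool:
    """Reject any output that falls outside the defined schema."""
    return (
        result.component in COMPONENTS
        and result.severity in SEVERITIES
        and 0.0 <= result.confidence <= 1.0
    )
```

Because every field is checked programmatically, "did the agent produce a good output?" becomes a yes/no question you can answer on every run.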

4. What tools does it need? Map each task to a specific tool:

| Task | Tool |
| --- | --- |
| Classify issue | LLM reasoning (no tool needed) |
| Get issue details | GitHub API tool |
| Add label | GitHub API tool |
| Notify team | Slack webhook tool |
| Log decision | Database write tool |
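The task-to-tool mapping above can be sketched as a simple registry, where `None` marks steps that are pure LLM reasoning with no tool call. All function names here are hypothetical stubs, not a real SDK:

```python
from typing import Callable, Optional

def get_issue_details(issue_id: int) -> dict:
    """Stub: would call the GitHub API."""
    return {"id": issue_id, "title": "...", "body": "..."}

def add_label(issue_id: int, label: str) -> bool:
    """Stub: would call the GitHub API."""
    return True

def notify_team(message: str) -> bool:
    """Stub: would post to a Slack webhook."""
    return True

def log_decision(record: dict) -> bool:
    """Stub: would write to a database."""
    return True

# None = no tool needed; the LLM reasons over context it already has.
TOOL_REGISTRY: dict[str, Optional[Callable]] = {
    "classify_issue": None,
    "get_issue_details": get_issue_details,
    "add_label": add_label,
    "notify_team": notify_team,
    "log_decision": log_decision,
}
```

Writing the registry out like this forces the question "which tool, exactly?" for every task before you build anything.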

5. What should it NOT do? Explicitly defining the boundaries is as important as defining the scope:

  • "Must not close or delete issues"
  • "Must not tag with labels not on the approved list"
  • "Must not access repositories the authenticated user doesn't own"
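Boundaries like these are most valuable when enforced in code, outside the model, before any action executes. A minimal guardrail sketch for the three rules above (the `FORBIDDEN_ACTIONS` and `APPROVED_LABELS` names are assumptions for illustration):

```python
# Hard limits enforced in code, not left to the LLM's judgment.
FORBIDDEN_ACTIONS = {"close_issue", "delete_issue"}
APPROVED_LABELS = {"bug", "feature", "docs", "question"}

def is_action_allowed(action: str, params: dict, owned_repos: set[str]) -> bool:
    """Check a proposed agent action against the scoping boundaries."""
    if action in FORBIDDEN_ACTIONS:
        return False  # must not close or delete issues
    if action == "add_label" and params.get("label") not in APPROVED_LABELS:
        return False  # must not use labels off the approved list
    if params.get("repo") is not None and params["repo"] not in owned_repos:
        return False  # must not touch repos the user doesn't own
    return True
```

If the check fails, the action is simply never executed, regardless of what the model proposed.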

Example: Scoping a Code Review Agent

## Agent: CodeReviewBot

### Core value proposition
This agent helps development teams maintain code quality by automatically 
reviewing pull requests and identifying common issues before human review.

### Inputs
- GitHub pull request webhook events (JSON)
- Changed files (diff format from GitHub API)
- Repository configuration file (`.codereview.yaml`)

### Outputs
- GitHub PR review comments (via GitHub API)
- PR review status: APPROVE | REQUEST_CHANGES | COMMENT
- Internal metrics log (JSON to our metrics system)

### Tools Required
1. `github_get_pr_diff(pr_number: int) -> str` — Fetch the diff
2. `github_get_pr_files(pr_number: int) -> list[str]` — List changed files
3. `github_post_review(pr_number: int, body: str, event: str) -> bool` — Post review
4. `github_post_comment(pr_number: int, path: str, line: int, body: str) -> bool` — Line comment
5. `get_repo_config(repo: str) -> dict` — Load review rules
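As a sketch of what the first tool might look like: GitHub's REST API returns a PR's raw diff when you request the `application/vnd.github.diff` media type on the pull request endpoint. The `REPO_OWNER`/`REPO_NAME` environment variables are an assumption here, since the signature above takes only a PR number:

```python
import os
import urllib.request

def github_get_pr_diff(pr_number: int) -> str:
    """Fetch a PR's diff from the GitHub API (sketch, stdlib only)."""
    owner = os.environ["REPO_OWNER"]    # assumed config, e.g. "my-org"
    repo = os.environ["REPO_NAME"]      # assumed config, e.g. "my-app"
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}"
    req = urllib.request.Request(url, headers={
        "Accept": "application/vnd.github.diff",  # ask for the raw diff
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

A production version would add timeouts, rate-limit handling, and error mapping, but the scoping document only needs the signature and data source pinned down.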

### Explicitly OUT OF SCOPE
- Does not approve PRs (only REQUEST_CHANGES or COMMENT)
- Does not merge PRs under any circumstances
- Does not access private repositories without explicit configuration
- Does not run or execute code from PRs
- Does not make style-only comments (only security, correctness, performance)

### Success Metrics
- False positive rate < 10% (issues flagged that aren't real issues)
- False negative rate < 20% (real issues that aren't caught)
- P95 review latency < 30 seconds from webhook receipt
- Developer satisfaction score > 3.5/5 (monthly survey)
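The first two metrics above only mean something if you can compute them. A sketch of how, given a human-labeled evaluation set; the `flagged` and `real_issue` field names are hypothetical:

```python
def review_metrics(samples: list[dict]) -> dict:
    """Compute false positive/negative rates from labeled review samples.

    Each sample has `flagged` (did the bot flag it?) and `real_issue`
    (did a human confirm it was a real issue?).
    """
    flagged = [s for s in samples if s["flagged"]]
    real = [s for s in samples if s["real_issue"]]
    false_pos = sum(1 for s in flagged if not s["real_issue"])
    false_neg = sum(1 for s in real if not s["flagged"])
    return {
        "false_positive_rate": false_pos / len(flagged) if flagged else 0.0,
        "false_negative_rate": false_neg / len(real) if real else 0.0,
    }
```

Running this over a periodic labeled sample tells you whether you are inside the < 10% / < 20% targets, rather than guessing from anecdotes.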

Scoping the MVP

For your first agent, ruthlessly minimize scope. The temptation is to build an agent that handles every case — resist it.

# MVP Scoping Checklist

MVP_CRITERIA = {
    "single_use_case": True,        # One clearly defined primary use case
    "bounded_tool_set": True,       # Maximum 5 tools (fewer is better)
    "deterministic_inputs": True,   # Predictable, well-defined input format
    "measurable_output": True,      # You can tell if it worked or not
    "no_external_auth": True,       # No OAuth/SSO complexity in the MVP
    "single_user": True,            # Start single-user, scale later
    "english_only": True,           # Skip localization for the MVP
    "no_payment_processing": True,  # Never in an MVP
}

def is_mvp_ready(criteria: dict) -> bool:
    """A scope is MVP-ready only if it satisfies every criterion.

    If adding a feature would flip any criterion to False, defer it.
    """
    return all(criteria.values())

Risk Assessment Before Building

Identify and document risks before you start:

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| LLM produces wrong classification | Medium | Medium | Add validation step + human review queue |
| GitHub API rate limits | Low | High | Implement retry with backoff |
| PR diff too large for context | Medium | High | Truncate to changed files only |
| False positives alienate developers | Medium | High | Conservative threshold, easy feedback mechanism |
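The "retry with backoff" mitigation in the table is standard exponential backoff with jitter. A minimal sketch (retrying on any exception here for simplicity; real code would check for rate-limit and transient errors specifically):

```python
import random
import time

def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # Exponential growth plus jitter to avoid synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Wrapping each GitHub API tool call with `with_backoff` turns rate-limit blips into brief delays instead of failed reviews.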

A thorough scoping document pays for itself several times over: a few hours of writing prevents weeks of rework building the wrong thing.