Agent Types

Boundary uses opinionated AI agents with consistent biases. Each agent embodies a perspective that mirrors how senior engineers naturally reason in different roles. The disagreement between agents is the feature, not a bug.

Why Opinionation Matters

Neutral agents are useless: an agent that weighs every concern equally tells you nothing you did not already know. Because each Boundary agent argues from a fixed, consistent bias, the tension between their positions illuminates the decision space and surfaces hidden trade-offs.

The debate is not about answers. It's about decision risk surfaces. The output is not a verdict. It is a decision map.

The Agents


Pragmatist

Questions whether a solution addresses a real problem or is premature optimization. Focuses on current needs over theoretical scalability.

"Adding Kafka might be overkill for current load. Are we solving a problem we don't have?"


Security / Threat Analyst

Identifies security vulnerabilities, attack vectors, and data exposure risks. Challenges assumptions about authentication and authorization.

"This introduces an unverified auth path, any mistake could expose sensitive data."


Scalability Maximalist

Focuses on growth scenarios and system limits. Challenges designs that won't scale and advocates for partitioning, sharding, and horizontal scaling.

"If user traffic doubles in a month, this design will fail catastrophically unless we plan for partitioning."


Complexity Auditor

Questions whether added complexity is justified. Challenges microservice proliferation, testing burden, and mental overhead.

"Each additional microservice adds a mental overhead and testing burden; is it justified?"


Domain Purist

Ensures solutions respect domain invariants and business logic. Challenges designs that violate domain models or create downstream bugs.

"The proposed schema violates the invariants of our billing domain, this will create bugs downstream."

How Agents Interact

During a debate, agents engage in structured rounds of critique:

  1. Initial positions: Each agent takes a position based on their perspective and the decision question.
  2. Structured critique: Agents challenge each other's assumptions, exposing hidden trade-offs, alternative failure modes, and competing engineering values.
  3. Deepening analysis: As context is gathered, agents refine their positions and surface more specific concerns.
  4. Synthesis: The system extracts failure modes, irreversible commitments, and assumption dependencies from the debate.

The tension between positions, not consensus, is what makes Boundary valuable. It reveals the decision space in all its complexity.
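As a rough illustration of how those rounds fit together, here is a minimal sketch in TypeScript. The names used (Agent, DecisionMap, runDebate, gatherContext, synthesize) are hypothetical and are not part of Boundary's actual API; they only show the shape of the loop.

```typescript
// A minimal, hypothetical sketch of the debate flow. None of these names are
// part of Boundary's actual API; they exist only to illustrate the four phases.
interface Agent {
  name: string;
  // Each agent responds from its fixed bias (pragmatist, security analyst, ...).
  respond(
    question: string,
    otherPositions: Map<string, string>,
    context: string[],
  ): Promise<string>;
}

interface DecisionMap {
  failureModes: string[];
  irreversibleCommitments: string[];
  assumptionDependencies: string[];
}

async function runDebate(
  agents: Agent[],
  question: string,
  rounds: number,
  gatherContext: (round: number) => Promise<string[]>,
  synthesize: (positions: Map<string, string>) => DecisionMap,
): Promise<DecisionMap> {
  // 1. Initial positions: each agent answers from its own perspective, with no
  //    critique and no gathered context yet.
  const positions = new Map<string, string>();
  for (const agent of agents) {
    positions.set(agent.name, await agent.respond(question, new Map(), []));
  }

  // 2-3. Structured critique and deepening analysis: in each round an agent sees
  //      the other agents' current positions plus newly gathered context, then
  //      refines its own stance.
  for (let round = 0; round < rounds; round++) {
    const context = await gatherContext(round);
    for (const agent of agents) {
      const others = new Map(
        [...positions].filter(([name]) => name !== agent.name),
      );
      positions.set(agent.name, await agent.respond(question, others, context));
    }
  }

  // 4. Synthesis: distill a decision map (failure modes, irreversible
  //    commitments, assumption dependencies) rather than a single verdict.
  return synthesize(positions);
}
```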

Customizing Agents

By default, Boundary uses a balanced set of agents. You can customize which agents participate in debates and configure their behavior through the debate_start tool parameters; a hedged example follows the list below. This allows you to:

  • Focus on specific perspectives (e.g., security-heavy debates)
  • Add custom agents with specific biases
  • Control the number of debate rounds
  • Adjust the verbosity of agent responses
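As a rough sketch of what such a call might look like, the example below shows arguments you might pass to debate_start. The field names (agents, customAgents, rounds, verbosity) are assumptions made for illustration only; check the tool's actual schema for the real parameter names.

```typescript
// Hypothetical debate_start arguments. The field names below are assumptions
// made for illustration; consult the tool's schema for the actual parameters.
const debateStartArgs = {
  question: "Should we introduce Kafka between the API and the billing service?",
  // Focus the debate on a security-heavy subset of perspectives.
  agents: ["security", "pragmatist", "complexity"],
  // Add a custom agent with its own consistent bias.
  customAgents: [
    {
      name: "Cost Hawk",
      bias: "Challenges every design on infrastructure and operational cost.",
    },
  ],
  rounds: 3,            // number of debate rounds
  verbosity: "concise", // length of each agent's responses
};
```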

Understanding Agent Output

When reviewing debate results, pay attention to:

  • Which agents disagree: Strong disagreement indicates a significant trade-off or risk surface.
  • Specific concerns raised: Each agent's concerns point to concrete failure modes or assumptions.
  • Areas of consensus: When agents agree, it often indicates a well-understood aspect of the decision.
  • Escalating concerns: If agents become more concerned as context is gathered, it may indicate a fundamental problem with the approach.
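One way to keep these signals separate while reviewing is to organize the result along those four axes. The shape below is a hypothetical illustration, not Boundary's actual output format.

```typescript
// Hypothetical shape for reviewing a debate result -- illustrative only, not
// Boundary's actual output format.
interface DebateReview {
  // Strong disagreement marks a significant trade-off or risk surface.
  disagreements: { agents: string[]; issue: string }[];
  // Consensus usually marks a well-understood aspect of the decision.
  consensus: string[];
  // Each agent's specific concerns point to concrete failure modes or assumptions.
  concerns: { agent: string; failureMode: string }[];
  // Concerns that grew stronger as context was gathered; a possible sign of a
  // fundamental problem with the approach.
  escalations: string[];
}
```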