
The 7 roles: contracts, exit conditions and rules

The CLAUDE.md in this system isn’t documentation — it’s the contract each agent reads when starting its session. It defines each role’s responsibilities, prohibited behaviors and the exact condition under which it finishes.

Lead

Responsibility: Coordinate the pipeline. Does not design or comment on the skill’s content.

Before creating tasks:

  • Detects blocking ambiguities and asks (e.g.: “Cloudflare DNS, Google Cloud DNS or Route53?”)
  • Does not ask if missing information can be a generic placeholder (<DOMAIN>, <PROJECT_ID>, <API_KEY>)

Launch sequence:

  1. Launches Investigator without team_name → waits for complete result
  2. Creates task list with dependencies
  3. Spawns Architect + Reviewer + Optimizer with team_name, including research in the prompt
  4. Launches Validator without team_name → waits for PASS or FAIL
  5. If PASS: launches Publisher without team_name
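The sequence above can be sketched as shell-style pseudocode. Note that `spawn_agent` and `create_tasks` are illustrative stand-ins invented for this sketch, not commands that exist in the system:

```shell
# Illustrative pseudocode only — spawn_agent and create_tasks are invented names.
research=$(spawn_agent investigator)          # 1. no team_name, blocks until done
create_tasks --with-dependencies "$research"  # 2.
spawn_agent architect --team_name skill-team --prompt "$research"   # 3.
spawn_agent reviewer  --team_name skill-team --prompt "$research"
spawn_agent optimizer --team_name skill-team --prompt "$research"
verdict=$(spawn_agent validator)              # 4. no team_name → PASS or FAIL
[ "$verdict" = "PASS" ] && spawn_agent publisher   # 5. no team_name
```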

Exit condition: finishes when the skill is published and verified in the portal. Does not add extra tasks or unsolicited “improvements”.


Architect

Responsibility: Write the SKILL.md as instructions for Claude.

How it writes:

  • For Claude, not for humans. Claude already knows what an A record is, what a bucket is, what OAuth authentication is — don’t explain what Claude already knows
  • Exact commands with their flags as they appear in official documentation
  • Decisions Claude must make based on user context
  • Placeholders for all specific values: <DOMAIN>, <PROJECT_ID>, <ZONE_NAME>, <API_KEY>
  • User request values (e.g.: “nicolasneira.dev”) only in examples, never in actual commands
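As an illustration of that rule — the DNS command here is hypothetical, since a Google Cloud DNS skill is only one possible target — an actual instruction in SKILL.md uses placeholders, while concrete values appear only in a marked example:

```shell
# Actual instruction in SKILL.md — placeholders only:
gcloud dns record-sets create <DOMAIN>. \
  --zone=<ZONE_NAME> --type=A --ttl=300 --rrdatas=<IP_ADDRESS>

# Example section only — concrete values are allowed here:
#   gcloud dns record-sets create nicolasneira.dev. \
#     --zone=personal-zone --type=A --ttl=300 --rrdatas=203.0.113.10
```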

If research doesn’t cover an edge case: marks it explicitly — doesn’t invent commands without a source.

Exit condition: SKILL.md written and Reviewer approved. Does not iterate further or “improve” unless the Reviewer explicitly requests it.


Reviewer

Responsibility: Verify that Claude would execute these instructions correctly.

Questions it asks itself:

  • Are there ambiguous instructions Claude might misinterpret?
  • Is the description specific about WHAT the skill does AND WHEN to use it?
  • Are there hardcoded values that should be placeholders?
  • Did the Investigator find a primary source for each command? If not, flag those steps.
  • Are there edge cases Claude needs to handle that aren’t covered?

If it finds problems: sends a direct message to the Architect with specific objections. The Architect fixes and notifies. The Reviewer re-reviews.

If it finds nothing: explains why — doesn’t approve silently.

Exit condition: maximum 2 rounds of objections. If no new problems appear in the second round, approves with explicit reasoning.


Optimizer

Responsibility: Compress the SKILL.md to the minimum needed for Claude to function.

Enters after Architect + Reviewer consensus.

Removes everything that:

  • Claude already knows (don’t explain what a command is, what a standard flag does)
  • Repeats information already present in another section
  • Is historical context Claude doesn’t need to execute
  • Pushes the body past the 500-line limit

Keeps everything that:

  • Is specific to this tool/API/service
  • Claude couldn’t infer without documentation
  • Covers real edge cases Claude needs to handle

Also verifies:

  • description includes what it does AND when to use it (the trigger)
  • name in kebab-case, under 64 characters
  • All specific values are placeholders
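The frontmatter checks are mechanical enough to sketch in a few lines of shell — a hedged illustration of what the Optimizer verifies, not the pipeline’s actual code:

```shell
#!/bin/sh
# Spot-check the skill name as the Optimizer describes (illustrative).
name="deploy-cloudflare-dns"   # would be read from SKILL.md frontmatter

# kebab-case: lowercase alphanumeric words separated by single hyphens
echo "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$' \
  || echo "name is not kebab-case"

# under 64 characters
[ "${#name}" -lt 64 ] || echo "name exceeds 64 characters"
```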

Exit condition: body < 500 lines AND removed at least 1 redundant section or block. Does not declare “done” without having removed something.


Investigator

Launched by: Lead, in Phase 1, without team_name.

What it searches for: exact commands, current flags, requirements, edge cases — from official documentation.

Sources: maximum 2 primary sources (official docs, changelogs, repos). If no primary source is found, says so explicitly.

Central question: “What does Claude need to know to execute this?” — not “how does a human do it?”

Returns results to the Lead (not to the Architect). The Lead includes them in the team spawn prompt.

Exit condition: searched max 2 sources. If not found, reports and stops — doesn’t keep searching indefinitely.


Validator

Launched by: Lead, after the Optimizer finishes, without team_name.

What it does:

./validate.sh skills/[skill-name]

Verifies:

  • Frontmatter present
  • Required fields: name, version, description, license, allowed-tools
  • name in kebab-case
  • semver version
  • ## Instructions section present
  • At least 1 code block

Returns: complete validate.sh output with errors and warnings.
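The checks described above can be approximated in shell — a sketch of the behavior, not the actual validate.sh, whose implementation may differ:

```shell
#!/bin/sh
# Sketch of the checks validate.sh is described as running (illustrative).
validate_skill() {
  skill_md="$1/SKILL.md"
  fail=0
  grep -q '^---$' "$skill_md" || { echo "missing frontmatter"; fail=1; }
  for field in name version description license allowed-tools; do
    grep -q "^$field:" "$skill_md" || { echo "missing field: $field"; fail=1; }
  done
  grep -Eq '^name: [a-z0-9]+(-[a-z0-9]+)*$' "$skill_md" \
    || { echo "name is not kebab-case"; fail=1; }
  grep -Eq '^version: [0-9]+\.[0-9]+\.[0-9]+$' "$skill_md" \
    || { echo "version is not semver"; fail=1; }
  grep -q '^## Instructions' "$skill_md" || { echo "missing ## Instructions"; fail=1; }
  grep -q '^```' "$skill_md" || { echo "no code block"; fail=1; }
  [ "$fail" -eq 0 ] && echo PASS || echo FAIL
}
```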


Publisher

Launched by: Lead, only if validation passed, without team_name.

What it does:

./publish.sh skills/[skill-name]

Then verifies the skill is available in the portal:

curl -s http://localhost:8080/api/v1/skills \
  -H "Authorization: Bearer $HERMIT_TOKEN" | jq '.[] | .name'

The skill is also installed locally at ~/.claude/skills/[skill-name]/SKILL.md.
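To check for one specific skill rather than eyeballing the full list, the portal response can be filtered. `portal_has_skill` is an illustrative helper, and the jq filter assumes the endpoint returns a JSON array of skill objects with a `.name` field (the endpoint and $HERMIT_TOKEN come from the commands above):

```shell
# Illustrative helper: reads the portal's JSON skill list on stdin and
# succeeds only if the given name is present. The response shape is an
# assumption about the API.
portal_has_skill() {
  jq -e --arg n "$1" 'any(.[]; .name == $n)' >/dev/null
}

# Usage against the live portal:
#   curl -s http://localhost:8080/api/v1/skills \
#     -H "Authorization: Bearer $HERMIT_TOKEN" | portal_has_skill my-skill
```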


The rules
  1. Lead asks before starting if there are blocking ambiguities.
  2. Without a primary source, nothing gets written. The Architect flags steps without sources.
  3. No hardcoding. Specific values always as placeholders.
  4. Skills for Claude, not for humans. Don’t explain what Claude already knows.
  5. Optimizer subtracts, doesn’t add. Its job is to remove.
  6. Without validation, no publishing. Publisher doesn’t run if Validator failed.
  7. Subagents WITHOUT team_name. Investigator, Validator and Publisher launched without team_name.
  8. Explicit exit conditions. Each role finishes when its condition is met — not before, not after.
  9. All agents communicate in Spanish. (The CLAUDE.md is in Spanish.)

See the 4 demos of the pipeline in action