Claude Code · Agent Teams · Experimental
Hybrid architecture: 4 Claude instances debate and build the artifact, while 3 subagents execute scoped tasks. One prompt in, one published skill out.
7 agents, 4 Claude Code sessions. Two layers with distinct responsibilities.
Agent Team (4 sessions):
Lead → orchestrates everything, launches subagents
Architect → writes the SKILL.md
Reviewer → checks if Claude would execute the instructions correctly
Optimizer → removes everything Claude already knows
Lead's Subagents (no session of their own):
Investigator → fetches official docs, returns research
Validator → runs validate.sh, returns PASS/FAIL
Publisher → publishes to the portal and verifies
Agent Team
Each teammate has its own session and can send direct messages to others via Mailbox. The Architect writes, the Reviewer objects, the Optimizer compresses — without the Lead intervening in the debate.
Subagents
The Investigator fetches docs and closes. The Validator runs the script and reports. The Publisher uploads the skill and verifies. Scoped tasks, concrete results — no need to debate with anyone.
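The Validator's contract is just PASS/FAIL. The repo's actual validate.sh isn't reproduced here; as a sketch of the kind of checks it might run (file layout and frontmatter fields assumed from the standard Agent Skills format, sample content made up so the sketch is self-contained):

```shell
# Hypothetical sketch only -- the real validate.sh ships with the repo.
# Assumes the standard Agent Skills layout: a SKILL.md with YAML frontmatter.
set -eu

DIR="$(mktemp -d)"
# Sample skill so the sketch runs on its own; the real script would
# receive the path to the generated SKILL.md instead.
cat > "$DIR/SKILL.md" <<'EOF'
---
name: demo-skill
description: Illustrative skill used only to exercise the checks
---
# Instructions
EOF

FILE="$DIR/SKILL.md"
STATUS=PASS
[ -f "$FILE" ]                       || STATUS=FAIL  # skill file exists
head -n 1 "$FILE" | grep -q '^---$'  || STATUS=FAIL  # frontmatter opens
grep -q '^name:' "$FILE"             || STATUS=FAIL  # required metadata
grep -q '^description:' "$FILE"      || STATUS=FAIL  # required metadata
echo "$STATUS"
```

Run as-is it prints PASS; remove a frontmatter field from the sample and it reports FAIL instead.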
The design rule: do the agents need to talk to each other to do the job well?
| | Subagents | Agent Teams |
|---|---|---|
| Communication | Report only to Lead | Talk directly to each other |
| Own session | No — run inside the Lead | Yes — independent context |
| Task Tool | Not available | Lead only |
| Model | Inherits from Lead (Sonnet) | Opus always (automatic) |
| Token cost | Low | High — each teammate = Opus |
| Best for | Scoped execution and concrete results | Work requiring debate and review |
Discovered live
I tried assigning Haiku and Sonnet by role to reduce costs. The system ignored it completely. All teammates ran on Opus — no exception, across all demos.
| Component | Requested model | Actual model |
|---|---|---|
| Lead | Haiku | Sonnet (inherits from user) |
| Teammates | Haiku / Sonnet per role | Opus — always, no exception |
| Subagents | Haiku | Sonnet (inherits from Lead) |
Data from 37 analyzed calls in real logs. Haiku was never used. The only possible saving: launch Investigator, Validator and Publisher as subagents (without team_name) so they run on Sonnet instead of Opus.
Each demo produces a skill published to the Hermit portal and available locally for Claude.
- The Reviewer caught a real Terraform bug that would have failed at runtime. The Optimizer compressed the skill from 247 to 157 lines, eliminating 5 redundant blocks.
- The Investigator fetched the official boto3 docs before the Architect wrote a single line. The skill covers bucket creation, permissions and policies.
- Creating and configuring Google Workspace groups via GAM — permissions, members and visibility. The team worked directly from the official GAM CLI docs.
- Verifying email identities in Amazon SES v2 with the AWS CLI. The skill covers the full flow: creation, verification and DKIM validation.
Three steps to get the factory running.
1. Clone the repository

```shell
git clone https://github.com/nneira/claude-agent-teams-skill-factory
cd claude-agent-teams-skill-factory
cp .env.example .env
```

2. Enable Agent Teams
In `~/.claude/settings.json`:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```

3. Use it
```shell
# From the project root
claude

# Example prompt:
> I need a skill to create a VM on Google Cloud with Terraform.
```

The Lead asks before starting if there are blocking ambiguities. The pipeline runs on its own.
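Step 2 wires the flag through settings.json. Since it is an environment variable, it can presumably also be enabled for a single shell session without editing settings (assumption: Claude Code picks the flag up from the process environment, the same way the settings.json "env" block injects it):

```shell
# Assumption: the experimental flag is read from the environment,
# so exporting it has the same effect as the settings.json "env" block.
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

# claude   # start the session with Agent Teams enabled (requires the CLI)
echo "$CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"
```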
The most common questions when working with Agent Teams for the first time.
What are Agent Teams?
An experimental Claude Code feature that lets you orchestrate multiple independent Claude instances working in parallel. Each teammate has its own context, history and session — they communicate via a shared task list and direct messages (Mailbox). Unlike subagents, teammates can debate with each other without the main agent intervening.
What's the difference between a subagent and a teammate?
A subagent runs inside the agent that launched it, executes a scoped task and returns a result — no session of its own, no communication with other agents. A teammate is an independent Claude Code session with full tool access that can read and send messages to other teammates. Design rule: do they need to talk to each other to do the job well? Yes → Agent Team. No → Subagents.
Can you assign a cheaper model per teammate?
No. Agent Teams automatically assigns Opus to all teammates regardless of what you configure in CLAUDE.md or the spawn prompt. The Lead inherits the model from your active session. Subagents (without team_name) also inherit the Lead's model. Haiku was never used in any of the demos. This behavior is documented and confirmed with data from 37 analyzed calls.
Is it production-ready?
No — it requires the CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 flag and is under active development. The system has non-deterministic behaviors: the Lead sometimes launches agents as teammates when they should be subagents, which affects the models used and the cost. Use it for development, experimentation and asset generation — not as an automated production pipeline without supervision.
What is Hermit?
Hermit is a self-hosted open source portal where published skills live. Claude detects it automatically and picks the right skill based on task context — without the user specifying which skill to use. It runs with docker compose up and exposes an API the system uses to publish and query skills.
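Hermit ships its own compose file; purely to illustrate the shape of a self-hosted setup like this (the service name, image, port and volume below are assumptions, not Hermit's actual configuration):

```yaml
# Hypothetical sketch -- see the Hermit repo for the real docker-compose.yml.
services:
  hermit:
    image: hermit:latest        # assumed image name
    ports:
      - "8080:8080"             # assumed port for the publish/query API
    volumes:
      - ./skills:/data/skills   # assumed storage for published skills
```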
Do the skills work outside Claude Code?
Yes. Skills in SKILL.md format work with any tool that supports the Agent Skills standard: Claude Code, Cursor and any compatible agent. The format is open — not tied to Claude.
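As an illustration of how small the open format is, a minimal SKILL.md might look like this (the skill name and body are made up; `name` and `description` are the frontmatter fields the Agent Skills format expects):

```markdown
---
name: ses-verify-identity
description: Verify email identities in Amazon SES v2 with the AWS CLI
---

# SES identity verification

Step-by-step instructions for the agent go here, trimmed by the
Optimizer to only what Claude does not already know.
```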
Everything you need to get started.
YouTube Channel
@NicolasNeiraGarcia
ADK · A2A · Claude Code · Automation · Infrastructure