# ADK + A2A Demo: automatic and interactive IDP execution
The same 7 agents, two ways to see them in action.
## Automatic mode — end-to-end pipeline

```shell
./start-demo-nicolasneira.sh "Build IDP for Python FastAPI apps"
```

The 7 agents run in sequence. This is what happens at each step:
### Agent 1 — Platform Architect (~12s)

Receives the task description and makes stack decisions:
```
▶ Platform Architect
Analyzing: "Build IDP for Python FastAPI apps"

Decisions made:
✓ Runtime: Python 3.11
✓ Framework: FastAPI
✓ Database: PostgreSQL 15
✓ Cache: Redis
✓ CI/CD: Jenkins + Bash scripts
✓ Monitoring: Prometheus + Grafana
✓ Security: Trivy
✓ Deploy: Docker Compose (local mode)

→ Saving platform-config.yaml
✅ Completed
```
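The decisions above become keys in platform-config.yaml, which every later agent reads. As a sketch of the contract this implies, here is a minimal validation check a downstream agent could run, assuming the config has already been parsed into a dict (the key names mirror the decision list and are assumptions, not the demo's actual schema):

```python
# Sketch: sanity-checking the platform config before a downstream agent
# acts on it. YAML parsing is omitted; key names are illustrative.
REQUIRED_KEYS = {"runtime", "framework", "database", "cache",
                 "cicd", "monitoring", "security", "deploy"}

def validate_platform_config(config: dict) -> list[str]:
    """Return the sorted list of missing decision keys (empty means valid)."""
    return sorted(REQUIRED_KEYS - config.keys())
```

A downstream agent that finds a non-empty result can stop early instead of generating artifacts from an incomplete stack decision.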
### Agent 2 — Infrastructure (~18s)

Reads the platform-config.yaml and generates the complete infrastructure:
```
▶ Infrastructure Agent
Reading platform-config.yaml...
Stack detected: Python + FastAPI + PostgreSQL + Redis

Generating docker-compose/app-stack.yml:
✓ app service (FastAPI) with healthcheck at /health
✓ PostgreSQL 15 with persistent volume and healthcheck
✓ Redis Alpine — minimal image
✓ Prometheus — scrape config for FastAPI
✓ Grafana — with pre-configured datasource

→ Saving docker-compose/app-stack.yml
✅ Completed
```
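The healthchecks wired into the compose file are what make a later `docker-compose up` verifiable: something has to poll `/health` until the service answers. A minimal sketch of that wait-until-healthy logic, as a deploy step might use it (the helper name and defaults are hypothetical; the demo's own scripts may differ):

```python
# Sketch of a wait-for-healthy loop against an HTTP /health endpoint.
# The `probe` parameter exists so the polling logic can be tested
# without a live server; by default it issues a real HTTP GET.
import time
import urllib.request
import urllib.error

def wait_for_healthy(url: str, timeout: float = 60.0, interval: float = 2.0,
                     probe=None) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=5) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, OSError):
                return False
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url):
            return True
        time.sleep(interval)
    return False
```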
### Agent 3 — Security (~14s)

Scans the generated infrastructure with Trivy:
```
▶ Security Agent
Reading docker-compose/app-stack.yml...

Scanning images with Trivy:
postgres:15      → 0 CRITICAL, 0 HIGH, 2 MEDIUM
redis:alpine     → 0 CRITICAL, 0 HIGH, 0 MEDIUM
prom/prometheus  → 0 CRITICAL, 0 HIGH, 1 MEDIUM
grafana/grafana  → 0 CRITICAL, 0 HIGH, 1 MEDIUM

Checking hardcoded secrets... none detected
Checking exposed ports... configuration correct

→ Saving security-report.json
✅ APPROVED — pipeline can continue
```
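The APPROVED verdict implies a gate: the pipeline continues only if the scan found nothing severe. A sketch of such a gate over security-report.json, assuming a report schema with per-image severity counts (the field names are illustrative, not the demo's actual format):

```python
# Sketch of a severity gate over a parsed security report.
# Schema assumption: {"images": [{"name": ..., "vulnerabilities":
# {"CRITICAL": int, "HIGH": int, ...}}, ...]}
def security_gate(report: dict, fail_on=("CRITICAL", "HIGH")) -> bool:
    """Return True if the pipeline may continue (no blocking findings)."""
    for image in report.get("images", []):
        counts = image.get("vulnerabilities", {})
        if any(counts.get(severity, 0) > 0 for severity in fail_on):
            return False
    return True
```

With the counts shown above (only MEDIUM findings), this gate would let the pipeline continue, matching the agent's verdict.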
### Agent 4 — CI/CD (~16s)

Generates automation scripts adapted to the stack:
```
▶ CI/CD Agent
Reading platform-config.yaml...
Stack detected: Docker + Pytest + Jenkins

Generating:
✓ cicd/build.sh — docker build with semantic tagging
✓ cicd/test.sh — pytest in isolated container
✓ cicd/deploy.sh — docker-compose up with health checks
✓ Jenkinsfile — pipeline: build → test → scan → deploy

✅ Completed
```
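One common reading of "semantic tagging" in a build script: tag each image with the app version plus a short commit hash, so every build is traceable to both a release and a revision. A tiny illustrative helper (the naming scheme is an assumption, not the generated script's actual logic):

```python
# Sketch: compose a traceable image tag from semver + short commit hash.
def image_tag(version: str, commit_sha: str) -> str:
    """Return a tag like '1.4.0-ab12cd3' (7-char short hash)."""
    return f"{version}-{commit_sha[:7]}"
```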
### Agent 5 — Observability (~19s)

Configures Prometheus and generates Grafana dashboards:
```
▶ Observability Agent
Reading infrastructure configuration...

Generating:
✓ prometheus.yml
  — scrape config for FastAPI at :8000/metrics
  — interval: 15s
✓ grafana-dashboards/app-metrics.json
  — API Latency (p50, p95, p99)
  — Error Rate by endpoint
  — Request Throughput
✓ grafana-dashboards/system-metrics.json
  — CPU usage
  — Memory usage
  — Disk I/O
  — Network in/out

✅ Completed
```
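The latency panel's p50/p95/p99 are percentiles over request durations. In a real setup Prometheus estimates them from histogram buckets; the underlying idea, sketched over raw latency samples with the standard library:

```python
# Sketch of the percentile math behind the "API Latency (p50, p95, p99)"
# panel, computed directly over raw samples (Prometheus would derive
# these from histogram buckets instead).
import statistics

def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    # quantiles(n=100) returns the 99 cut points p1..p99
    cuts = statistics.quantiles(samples_ms, n=100)
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
```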
### Agent 6 — DevEx (~15s)

Generates the project CLI tool with commands adapted to the stack:
```
▶ DevEx Agent
Reading full stack...

Generating cli-tool/idp:
✓ idp init   — initialize new project from template
✓ idp build  — docker build with correct tag
✓ idp test   — pytest in container
✓ idp deploy — docker-compose up -d
✓ idp status — docker-compose ps with health indicators
✓ idp logs   — docker-compose logs -f with filters

✅ Completed
```
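A CLI with subcommands like the generated `idp` tool is commonly built on argparse subparsers. A minimal skeleton using the command names listed above (the real generated tool may be structured differently):

```python
# Sketch of an `idp`-style CLI using argparse subcommands.
# Command names come from the list above; handlers are omitted.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="idp", description="Project CLI")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("init", help="initialize new project from template")
    sub.add_parser("build", help="docker build with correct tag")
    sub.add_parser("test", help="pytest in container")
    sub.add_parser("deploy", help="docker-compose up -d")
    sub.add_parser("status", help="docker-compose ps with health indicators")
    logs = sub.add_parser("logs", help="docker-compose logs -f with filters")
    logs.add_argument("--service", help="only show logs for this service")
    return parser
```

Each subparser would dispatch to a function that shells out to docker or docker-compose; only the argument surface is sketched here.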
### Agent 7 — Web Portal (~45s)

Builds the complete web portal — the most complex artifact:
```
▶ Web Portal Agent
Reading complete IDP configuration...

Generating portal/ (FastAPI + Jinja2 + HTMX + TailwindCSS):
✓ main.py
✓ routes/dashboard.py — service status (Docker API)
✓ routes/catalog.py — template catalog
✓ routes/services.py — service CRUD
✓ templates/base.html — layout with TailwindCSS
✓ templates/dashboard.html — main dashboard
✓ templates/catalog.html — available templates
✓ templates/create_service.html
✓ services/docker_manager.py — Docker API client
✓ services/template_manager.py
✓ Dockerfile

✅ Completed
```
### Orchestrator final summary

```
═══════════════════════════════════════════════════
✅ IDP generated successfully

Agents executed: 7/7
Files generated: 21
Total time: ~4.5 min

→ orchestration-report.json saved
═══════════════════════════════════════════════════
```
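Because the orchestrator writes its summary to orchestration-report.json, the run can also be consumed programmatically. A sketch of reading it back, assuming field names that mirror the summary above (the actual schema may differ):

```python
# Sketch: summarizing an orchestration report. The field names
# (agents_executed, agents_total, files_generated) are assumptions
# mirroring the console summary, not the demo's documented schema.
import json

def summarize(report_json: str) -> str:
    report = json.loads(report_json)
    return (f"Agents executed: {report['agents_executed']}/{report['agents_total']} | "
            f"Files generated: {report['files_generated']}")
```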
To launch the generated IDP:

```shell
cd test-outputs/idp-adk-sequential
docker-compose -f docker-compose/app-stack.yml up -d
```

- Portal: http://localhost:8001
- Grafana: http://localhost:3000
- Jenkins: http://localhost:8080

Also available with other prompts:

```shell
./start-demo-nicolasneira.sh "Build IDP for Go microservices with PostgreSQL"
./start-demo-nicolasneira.sh "Build IDP for Node.js apps, deploy to Google Cloud"
```
## Interactive mode — web interface at localhost:8000

```shell
./start-interactive-nicolasneira.sh
# Open http://localhost:8000
```

From the interface you select an agent and chat with it in natural language. These are the most revealing scenarios:
### Scenario 1 — The Architect that adapts in real time

The Platform Architect has no hardcoded stack — it reasons based on what you ask. You can change context during the conversation, and the agent adapts its recommendations while justifying each change:
> "Design the architecture for a Python microservices IDP"
Platform Architect: I recommend a local stack with Docker Compose, PostgreSQL, Jenkins and Trivy for scanning. Justification: local development, full control, no cloud costs.
> "We're actually going to deploy it on Google Cloud"
Platform Architect: Changing recommendation for Google Cloud: — Cloud Run instead of Docker Compose — Cloud SQL instead of local PostgreSQL — Secret Manager for secrets management — Cloud Build for CI/CD The previous architecture no longer applies in this context.
> "The team prefers AWS and uses GitLab CI"
Platform Architect: Adjusting to AWS + GitLab: — ECS Fargate for containers — RDS PostgreSQL — GitLab CI instead of Cloud Build — AWS Secrets Manager Each decision is justified by the constraint you specified.Three completely different stacks in the same session. The agent reasons, it doesn’t fill a template.
### Scenario 2 — The Security Agent that can’t proceed

This scenario shows A2A in action: agents know the system state through the available artifacts.
```
> [You select Security Agent directly in the interface]
> "Analyze the security of the current infrastructure"

Security Agent: I have tried to read the infrastructure agent's
decisions, but the file docker-compose/app-stack.yml was not found.

The Infrastructure agent must run first so that I can analyze the
generated infrastructure.

I cannot proceed without that context.
```

The agent didn’t make up a response. It didn’t improvise with fictional data. It actively searched for the artifact it needed and, not finding it, blocked its own execution. This is A2A’s separation of responsibilities working in real time.
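The pattern is easy to state in code: before doing any work, the agent checks that the artifact a previous agent produces actually exists, and refuses to continue otherwise. An illustrative guard (the file name comes from the demo; the helper itself is a sketch, not the demo's implementation):

```python
# Sketch of artifact-gated execution: block rather than improvise
# when a required upstream artifact is missing.
from pathlib import Path

def require_artifact(path: str) -> Path:
    """Return the artifact path, or raise if the upstream agent hasn't run."""
    artifact = Path(path)
    if not artifact.exists():
        raise RuntimeError(
            f"{path} was not found. The agent that produces it must run "
            "first; cannot proceed without that context."
        )
    return artifact
```

Called with `"docker-compose/app-stack.yml"` before a scan, this is exactly the refusal the Security Agent displayed above.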
### Scenario 3 — Different stack on each run

The same task produces different results because the model reasons on each execution:
```shell
# First run
./start-demo-nicolasneira.sh "Build IDP for Python microservices"
# → Runtime: Python 3.11, DB: PostgreSQL, Cache: Redis,
#   Scanner: Trivy, CI: Jenkins

# Second run — same prompt
./start-demo-nicolasneira.sh "Build IDP for Python microservices"
# → Runtime: Python 3.11, DB: SQLite (simpler for local),
#   No cache, Scanner: Bandit, CI: Jenkins

# Third run — different prompt
./start-demo-nicolasneira.sh "Build IDP for Go microservices"
# → Runtime: Go 1.21, Framework: Gin, DB: PostgreSQL,
#   CI: GitHub Actions, Scanner: Trivy
```

Real AI: the model makes decisions. It’s not a switch-case disguised as an agent.