Commit 046be90

danielmillerp and claude committed
Add deep research multi-agent example (orchestrator + 3 subagents)
Demonstrates orchestrator + subagent pattern with shared task ID, Temporal workflows, MCP server integration, and conversation compaction. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent fdda7a5 commit 046be90

42 files changed (+2832 -0 lines)
Lines changed: 143 additions & 0 deletions
@@ -0,0 +1,143 @@
# Deep Research Multi-Agent System

A multi-agent research system built on AgentEx that demonstrates **orchestrator + subagent communication** using Temporal workflows. An orchestrator agent dispatches specialized research subagents (GitHub, Docs, Slack) in parallel, collects their findings, and synthesizes a comprehensive answer.

## Architecture

```
          ┌─────────────────────┐
          │    Orchestrator     │
User ────▶│     (GPT-5.1)       │
Query     │    Dispatches &     │
          │     Synthesizes     │
          └───┬─────┬─────┬─────┘
              │     │     │
    ┌─────────┘     │     └─────────┐
    ▼               ▼               ▼
┌────────────┐ ┌────────────┐ ┌────────────┐
│   GitHub   │ │    Docs    │ │   Slack    │
│ Researcher │ │ Researcher │ │ Researcher │
│  (GPT-4.1  │ │  (GPT-4.1  │ │  (GPT-4.1  │
│   mini)    │ │   mini)    │ │   mini)    │
│            │ │            │ │            │
│ GitHub MCP │ │ Web Search │ │ Slack MCP  │
│   Server   │ │ + Fetcher  │ │   Server   │
└────────────┘ └────────────┘ └────────────┘
```

## Key Patterns Demonstrated

### 1. Multi-Agent Orchestration via ACP

The orchestrator creates child tasks on subagents using `adk.acp.create_task()`, sends queries via `EVENT_SEND`, and waits for `research_complete` callback events.
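The dispatch-and-collect flow can be sketched with plain `asyncio`, with coroutines standing in for the real ACP tasks and callback events (the actual `adk.acp` call signatures are not shown in this README, so this is a minimal simulation of the pattern, not the AgentEx API):

```python
import asyncio

async def subagent(name: str, query: str) -> dict:
    # Stand-in for a dispatched child task; a real subagent would run
    # its own Temporal workflow and emit a research_complete event.
    await asyncio.sleep(0)
    return {"agent": name, "event": "research_complete",
            "findings": f"{name} results for: {query}"}

async def orchestrate(query: str) -> dict:
    # Fan out to all three researchers in parallel, then synthesize.
    results = await asyncio.gather(
        subagent("github", query),
        subagent("docs", query),
        subagent("slack", query),
    )
    synthesis = " | ".join(r["findings"] for r in results)
    return {"query": query, "answer": synthesis}

if __name__ == "__main__":
    print(asyncio.run(orchestrate("How does task routing work?")))
```

`asyncio.gather` preserves argument order, so the synthesis step sees findings in a deterministic sequence even though the subagents run concurrently.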
### 2. Shared Task ID for Unified Output

All subagents write messages to the **orchestrator's task ID** (passed as `source_task_id`), so the user sees all research progress in a single conversation thread.
### 3. Conversation Compaction

Subagents use a batched `Runner.run()` pattern with conversation compaction between batches to stay within Temporal's ~2MB payload limit during long research sessions.
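The compaction idea can be shown in a self-contained sketch: run in batches, and whenever the accumulated history exceeds a size budget, collapse everything but the latest turn into a summary. The limit below is illustrative (the real cap is Temporal's ~2MB payload limit), and a real implementation would ask the model to write the summary:

```python
# Illustrative budget; stands in for Temporal's ~2MB payload cap.
PAYLOAD_LIMIT = 200

def history_size(history: list[str]) -> int:
    return sum(len(m) for m in history)

def compact(history: list[str]) -> list[str]:
    # Collapse all but the most recent turn into a short placeholder
    # summary; a real subagent would generate this with the LLM.
    summary = f"[summary of {len(history) - 1} earlier messages]"
    return [summary, history[-1]]

def run_batches(turns: list[str]) -> list[str]:
    history: list[str] = []
    for turn in turns:
        history.append(turn)
        if history_size(history) > PAYLOAD_LIMIT:
            history = compact(history)
    return history
```

This keeps each workflow activity's payload bounded while retaining the most recent context verbatim.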
### 4. MCP Server Integration

GitHub and Slack subagents use MCP (Model Context Protocol) servers via `StatelessMCPServerProvider` for tool access.
## Agents

| Agent | Port | Model | Tools |
|-------|------|-------|-------|
| Orchestrator | 8010 | gpt-5.1 | dispatch_github, dispatch_docs, dispatch_slack |
| GitHub Researcher | 8011 | gpt-4.1-mini | GitHub MCP (search_code, etc.) |
| Docs Researcher | 8012 | gpt-4.1-mini | web_search (Tavily), fetch_docs_page |
| Slack Researcher | 8013 | gpt-4.1-mini | Slack MCP (search_messages, etc.) |
## Prerequisites

- [AgentEx CLI](https://agentex.sgp.scale.com/docs/) installed
- OpenAI API key
- GitHub Personal Access Token (for the GitHub researcher)
- Tavily API key (for the Docs researcher) - get one at https://tavily.com
- Slack Bot Token (for the Slack researcher)
## Setup

### 1. Environment Variables

Create a `.env` file in each agent directory with the required keys:

**orchestrator/.env:**
```
OPENAI_API_KEY=your-openai-key
```

**github_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
GITHUB_PERSONAL_ACCESS_TOKEN=your-github-token
```

**docs_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key
```

**slack_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
SLACK_BOT_TOKEN=your-slack-bot-token
SLACK_TEAM_ID=your-slack-team-id
```
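Before launching, a small preflight check can confirm each agent's required variables are set. This helper is hypothetical (not part of the example code); the variable names come from the `.env` files above:

```python
import os

# Required variables per agent, matching the .env files above.
REQUIRED = {
    "orchestrator": ["OPENAI_API_KEY"],
    "github_researcher": ["OPENAI_API_KEY", "GITHUB_PERSONAL_ACCESS_TOKEN"],
    "docs_researcher": ["OPENAI_API_KEY", "TAVILY_API_KEY"],
    "slack_researcher": ["OPENAI_API_KEY", "SLACK_BOT_TOKEN", "SLACK_TEAM_ID"],
}

def missing_vars(agent: str, env=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [k for k in REQUIRED[agent] if not env.get(k)]
```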
### 2. Run All Agents

Start each agent in a separate terminal:

```bash
# Terminal 1 - Orchestrator
cd orchestrator
agentex agents run --manifest manifest.yaml

# Terminal 2 - GitHub Researcher
cd github_researcher
agentex agents run --manifest manifest.yaml

# Terminal 3 - Docs Researcher
cd docs_researcher
agentex agents run --manifest manifest.yaml

# Terminal 4 - Slack Researcher
cd slack_researcher
agentex agents run --manifest manifest.yaml
```
### 3. Test

Open the AgentEx UI and send a research question to the orchestrator agent. You should see:

1. The orchestrator dispatching queries to subagents
2. Each subagent streaming its research progress to the same conversation
3. The orchestrator synthesizing all findings into a final answer

## Customization

### Using Different Research Sources

You can adapt the subagents to search different sources:

- Replace the GitHub MCP server with any other MCP server
- Replace Tavily with your preferred search API
- Replace the Slack MCP with any communication platform's MCP
- Update the system prompts to match your target repositories, docs, and channels

### Adding More Subagents

To add a new research subagent:

1. Copy one of the existing subagent directories
2. Update the `manifest.yaml` with a new agent name and port
3. Modify the `workflow.py` system prompt and tools
4. Add a new dispatch tool in the orchestrator's `workflow.py`

## How Shared Task ID Works

The key pattern that makes all agents write to the same conversation:

1. The **orchestrator** passes its `task_id` as `source_task_id` when creating child tasks
2. **Subagents** extract `parent_task_id = params.task.params.get("source_task_id")`
3. **Subagents** use `message_task_id = parent_task_id or params.task.id` for all `adk.messages.create()` calls and `TemporalStreamingHooks`
4. This means all messages and streamed LLM output appear in the orchestrator's task conversation
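The resolution rule in steps 2-3 boils down to one line. Here it is isolated as a plain function, with dicts standing in for the real `params.task` object:

```python
def resolve_message_task_id(task_id: str, task_params: dict) -> str:
    """Pick the task ID that messages should be written to.

    Mirrors the subagent pattern: prefer the orchestrator's task ID
    (passed as source_task_id) and fall back to the subagent's own.
    """
    parent_task_id = task_params.get("source_task_id")
    return parent_task_id or task_id
```

With `source_task_id` present, every `adk.messages.create()` call targets the orchestrator's conversation; when a subagent is invoked directly, it falls back to writing into its own task.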
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
```
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Environments
.env*
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDE
.idea/
.vscode/
*.swp
*.swo

# Git
.git
.gitignore

# Misc
.DS_Store
```
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
```dockerfile
# syntax=docker/dockerfile:1.3
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:0.6.4 /uv /uvx /bin/

# Install system dependencies
RUN apt-get update && apt-get install -y \
    htop \
    vim \
    curl \
    tar \
    python3-dev \
    build-essential \
    gcc \
    cmake \
    netcat-openbsd \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install tctl (Temporal CLI)
RUN curl -L https://github.com/temporalio/tctl/releases/download/v1.18.1/tctl_1.18.1_linux_arm64.tar.gz -o /tmp/tctl.tar.gz && \
    tar -xzf /tmp/tctl.tar.gz -C /usr/local/bin && \
    chmod +x /usr/local/bin/tctl && \
    rm /tmp/tctl.tar.gz

RUN uv pip install --system --upgrade pip setuptools wheel

ENV UV_HTTP_TIMEOUT=1000

COPY deep_research/docs_researcher/pyproject.toml /app/docs_researcher/pyproject.toml

WORKDIR /app/docs_researcher

COPY deep_research/docs_researcher/project /app/docs_researcher/project

RUN uv pip install --system .

ENV PYTHONPATH=/app
ENV AGENT_NAME=deep-research-docs

CMD ["uvicorn", "project.acp:acp", "--host", "0.0.0.0", "--port", "8000"]
```
Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
```yaml
# Agent Environment Configuration
schema_version: "v1"
environments:
  dev:
    auth:
      principal:
        user_id: # TODO: Fill in
        account_id: # TODO: Fill in
    helm_overrides:
      replicaCount: 1
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "1000m"
          memory: "2Gi"
      temporal-worker:
        enabled: true
        replicaCount: 1
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1000m"
            memory: "2Gi"
```
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
1+
# Agent Manifest Configuration
2+
3+
build:
4+
context:
5+
root: ../../
6+
include_paths:
7+
- deep_research/docs_researcher
8+
dockerfile: deep_research/docs_researcher/Dockerfile
9+
dockerignore: deep_research/docs_researcher/.dockerignore
10+
11+
local_development:
12+
agent:
13+
port: 8012
14+
host_address: host.docker.internal
15+
16+
paths:
17+
acp: project/acp.py
18+
worker: project/run_worker.py
19+
20+
agent:
21+
acp_type: async
22+
name: deep-research-docs
23+
description: Searches documentation and the web for relevant guides and references
24+
25+
temporal:
26+
enabled: true
27+
workflows:
28+
- name: deep-research-docs
29+
queue_name: deep_research_docs_queue
30+
31+
credentials:
32+
- env_var_name: REDIS_URL
33+
secret_name: redis-url-secret
34+
secret_key: url
35+
36+
env:
37+
OPENAI_ORG_ID: ""
38+
39+
deployment:
40+
image:
41+
repository: ""
42+
tag: "latest"
43+
44+
imagePullSecrets:
45+
- name: my-registry-secret
46+
47+
global:
48+
agent:
49+
name: "deep-research-docs"
50+
description: "Searches documentation and the web for relevant guides and references"
51+
52+
replicaCount: 1
53+
resources:
54+
requests:
55+
cpu: "500m"
56+
memory: "1Gi"
57+
limits:
58+
cpu: "1000m"
59+
memory: "2Gi"

examples/demos/deep_research/docs_researcher/project/__init__.py

Whitespace-only changes.
Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
```python
import os

from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from agentex.lib.core.temporal.plugins.openai_agents.models.temporal_streaming_model import (
    TemporalStreamingModelProvider,
)
from agentex.lib.core.temporal.plugins.openai_agents.interceptors.context_interceptor import ContextInterceptor
from agentex.lib.sdk.fastacp.fastacp import FastACP
from agentex.lib.types.fastacp import TemporalACPConfig

context_interceptor = ContextInterceptor()
streaming_model_provider = TemporalStreamingModelProvider()

# Create the ACP server
acp = FastACP.create(
    acp_type="async",
    config=TemporalACPConfig(
        type="temporal",
        temporal_address=os.getenv("TEMPORAL_ADDRESS", "localhost:7233"),
        plugins=[OpenAIAgentsPlugin(model_provider=streaming_model_provider)],
        interceptors=[context_interceptor],
    ),
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(acp, host="0.0.0.0", port=8000)
```
