143 changes: 143 additions & 0 deletions examples/demos/deep_research/README.md
@@ -0,0 +1,143 @@
# Deep Research Multi-Agent System

A multi-agent research system built on AgentEx that demonstrates **orchestrator + subagent communication** using Temporal workflows. An orchestrator agent dispatches specialized research subagents (GitHub, Docs, Slack) in parallel, collects their findings, and synthesizes a comprehensive answer.

## Architecture

```
              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚     Orchestrator    β”‚
User ────────▢│      (GPT-5.1)      β”‚
Query         β”‚    Dispatches &     β”‚
              β”‚     Synthesizes     β”‚
              β””β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜
                  β”‚     β”‚     β”‚
      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β”‚     └─────┐
      β–Ό                 β–Ό           β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   GitHub   β”‚ β”‚    Docs    β”‚ β”‚   Slack    β”‚
β”‚ Researcher β”‚ β”‚ Researcher β”‚ β”‚ Researcher β”‚
β”‚  (GPT-4.1  β”‚ β”‚  (GPT-4.1  β”‚ β”‚  (GPT-4.1  β”‚
β”‚   mini)    β”‚ β”‚   mini)    β”‚ β”‚   mini)    β”‚
β”‚            β”‚ β”‚            β”‚ β”‚            β”‚
β”‚ GitHub MCP β”‚ β”‚ Web Search β”‚ β”‚ Slack MCP  β”‚
β”‚   Server   β”‚ β”‚ + Fetcher  β”‚ β”‚  Server    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## Key Patterns Demonstrated

### 1. Multi-Agent Orchestration via ACP
The orchestrator creates child tasks on subagents using `adk.acp.create_task()`, sends queries via `EVENT_SEND`, and waits for `research_complete` callback events.
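The fan-out/fan-in shape of this pattern can be sketched with plain `asyncio`. Here the `dispatch` stub stands in for `adk.acp.create_task()` plus the wait for the `research_complete` callback; the function names and return shape are illustrative, not the actual AgentEx API:

```python
import asyncio

# Stand-in for the real ACP flow: create a child task on a subagent,
# send the query via EVENT_SEND, then await its research_complete event.
async def dispatch(subagent: str, query: str) -> str:
    await asyncio.sleep(0)  # placeholder for task creation + callback wait
    return f"{subagent} findings for: {query}"

async def orchestrate(query: str) -> list[str]:
    # Dispatch all three researchers concurrently and gather their findings
    # for the synthesis step.
    return await asyncio.gather(
        dispatch("github", query),
        dispatch("docs", query),
        dispatch("slack", query),
    )

findings = asyncio.run(orchestrate("How does auth work?"))
```

The essential point is that the three dispatches are awaited together, so total research time is bounded by the slowest subagent rather than the sum of all three.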

### 2. Shared Task ID for Unified Output
All subagents write messages to the **orchestrator's task ID** (passed as `source_task_id`), so the user sees all research progress in a single conversation thread.

### 3. Conversation Compaction
Subagents use a batched `Runner.run()` pattern with conversation compaction between batches to stay within Temporal's ~2MB payload limit during long research sessions.
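One minimal way to implement size-bounded compaction between batches is to drop the oldest turns until the serialized history fits under the cap. This is a sketch of the idea, not the repo's actual compaction logic:

```python
import json

PAYLOAD_LIMIT = 2_000_000  # Temporal's payload cap is roughly 2 MB

def compact(history: list[dict], limit: int = PAYLOAD_LIMIT) -> list[dict]:
    """Keep the system prompt and drop the oldest turns until the
    serialized conversation fits under the payload limit."""
    system, turns = history[:1], history[1:]
    while turns and len(json.dumps(system + turns)) > limit:
        turns = turns[1:]  # drop the oldest turn first
    return system + turns
```

A production version would typically summarize the dropped turns into a synthetic message instead of discarding them outright, but the size check between `Runner.run()` batches works the same way.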

### 4. MCP Server Integration
GitHub and Slack subagents use MCP (Model Context Protocol) servers via `StatelessMCPServerProvider` for tool access.

## Agents

| Agent | Port | Model | Tools |
|-------|------|-------|-------|
| Orchestrator | 8010 | gpt-5.1 | dispatch_github, dispatch_docs, dispatch_slack |
| GitHub Researcher | 8011 | gpt-4.1-mini | GitHub MCP (search_code, etc.) |
| Docs Researcher | 8012 | gpt-4.1-mini | web_search (Tavily), fetch_docs_page |
| Slack Researcher | 8013 | gpt-4.1-mini | Slack MCP (search_messages, etc.) |

## Prerequisites

- [AgentEx CLI](https://agentex.sgp.scale.com/docs/) installed
- OpenAI API key
- GitHub Personal Access Token (for the GitHub researcher)
- Tavily API key (for the Docs researcher; get one at https://tavily.com)
- Slack Bot Token and Team ID (for the Slack researcher)

## Setup

### 1. Environment Variables

Create a `.env` file in each agent directory with the required keys:

**orchestrator/.env:**
```
OPENAI_API_KEY=your-openai-key
```

**github_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
GITHUB_PERSONAL_ACCESS_TOKEN=your-github-token
```

**docs_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key
```

**slack_researcher/.env:**
```
OPENAI_API_KEY=your-openai-key
SLACK_BOT_TOKEN=your-slack-bot-token
SLACK_TEAM_ID=your-slack-team-id
```
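The four `.env` files above can also be written in one pass from the shell. This is a convenience sketch; substitute real values for the placeholder strings before running the agents:

```shell
# Every agent needs the OpenAI key; three also need service credentials.
for dir in orchestrator github_researcher docs_researcher slack_researcher; do
  mkdir -p "$dir"  # no-op inside the example repo
  printf 'OPENAI_API_KEY=%s\n' "your-openai-key" > "$dir/.env"
done
printf 'GITHUB_PERSONAL_ACCESS_TOKEN=%s\n' "your-github-token" >> github_researcher/.env
printf 'TAVILY_API_KEY=%s\n' "your-tavily-key" >> docs_researcher/.env
printf 'SLACK_BOT_TOKEN=%s\nSLACK_TEAM_ID=%s\n' "your-slack-bot-token" "your-slack-team-id" >> slack_researcher/.env
```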

### 2. Run All Agents

Start each agent in a separate terminal:

```bash
# Terminal 1 - Orchestrator
cd orchestrator
agentex agents run --manifest manifest.yaml

# Terminal 2 - GitHub Researcher
cd github_researcher
agentex agents run --manifest manifest.yaml

# Terminal 3 - Docs Researcher
cd docs_researcher
agentex agents run --manifest manifest.yaml

# Terminal 4 - Slack Researcher
cd slack_researcher
agentex agents run --manifest manifest.yaml
```

### 3. Test

Open the AgentEx UI and send a research question to the orchestrator agent. You should see:
1. The orchestrator dispatching queries to subagents
2. Each subagent streaming its research progress to the same conversation
3. The orchestrator synthesizing all findings into a final answer

## Customization

### Using Different Research Sources

You can adapt the subagents to search different sources:
- Replace the GitHub MCP server with any other MCP server
- Replace Tavily with your preferred search API
- Replace the Slack MCP with any communication platform's MCP
- Update the system prompts to match your target repositories, docs, and channels

### Adding More Subagents

To add a new research subagent:
1. Copy one of the existing subagent directories
2. Update the manifest.yaml with a new agent name and port
3. Modify the workflow.py system prompt and tools
4. Add a new dispatch tool in the orchestrator's workflow.py

## How Shared Task ID Works

The key pattern that makes all agents write to the same conversation:

1. **Orchestrator** passes its `task_id` as `source_task_id` when creating child tasks
2. **Subagents** extract `parent_task_id = params.task.params.get("source_task_id")`
3. **Subagents** use `message_task_id = parent_task_id or params.task.id` for all `adk.messages.create()` calls and `TemporalStreamingHooks`
4. This means all messages and streamed LLM output appear in the orchestrator's task conversation
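Step 3 reduces to a one-line fallback. As a standalone sketch (the dict-based params here are a simplification of the actual task params object):

```python
def resolve_message_task_id(task_params: dict, own_task_id: str) -> str:
    """Write to the parent's conversation when a source_task_id was
    passed; otherwise fall back to the agent's own task."""
    parent_task_id = task_params.get("source_task_id")
    return parent_task_id or own_task_id
```

Because the fallback is the agent's own task ID, each subagent still works when run directly, without an orchestrator.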
43 changes: 43 additions & 0 deletions examples/demos/deep_research/docs_researcher/.dockerignore
@@ -0,0 +1,43 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Environments
.env*
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDE
.idea/
.vscode/
*.swp
*.swo

# Git
.git
.gitignore

# Misc
.DS_Store
40 changes: 40 additions & 0 deletions examples/demos/deep_research/docs_researcher/Dockerfile
@@ -0,0 +1,40 @@
# syntax=docker/dockerfile:1.3
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:0.6.4 /uv /uvx /bin/

# Install system dependencies
RUN apt-get update && apt-get install -y \
htop \
vim \
curl \
tar \
python3-dev \
build-essential \
gcc \
cmake \
netcat-openbsd \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

# Install tctl (Temporal CLI); note this pins the linux_arm64 build, so
# use the linux_amd64 archive on x86_64 hosts
RUN curl -L https://github.com/temporalio/tctl/releases/download/v1.18.1/tctl_1.18.1_linux_arm64.tar.gz -o /tmp/tctl.tar.gz && \
tar -xzf /tmp/tctl.tar.gz -C /usr/local/bin && \
chmod +x /usr/local/bin/tctl && \
rm /tmp/tctl.tar.gz

RUN uv pip install --system --upgrade pip setuptools wheel

ENV UV_HTTP_TIMEOUT=1000

COPY deep_research/docs_researcher/pyproject.toml /app/docs_researcher/pyproject.toml

WORKDIR /app/docs_researcher

COPY deep_research/docs_researcher/project /app/docs_researcher/project

RUN uv pip install --system .

ENV PYTHONPATH=/app
ENV AGENT_NAME=deep-research-docs

CMD ["uvicorn", "project.acp:acp", "--host", "0.0.0.0", "--port", "8000"]
27 changes: 27 additions & 0 deletions examples/demos/deep_research/docs_researcher/environments.yaml
@@ -0,0 +1,27 @@
# Agent Environment Configuration
schema_version: "v1"
environments:
dev:
auth:
principal:
user_id: # TODO: Fill in
account_id: # TODO: Fill in
helm_overrides:
replicaCount: 1
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
temporal-worker:
enabled: true
replicaCount: 1
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
59 changes: 59 additions & 0 deletions examples/demos/deep_research/docs_researcher/manifest.yaml
@@ -0,0 +1,59 @@
# Agent Manifest Configuration

build:
context:
root: ../../
include_paths:
- deep_research/docs_researcher
dockerfile: deep_research/docs_researcher/Dockerfile
dockerignore: deep_research/docs_researcher/.dockerignore

local_development:
agent:
port: 8012
host_address: host.docker.internal

paths:
acp: project/acp.py
worker: project/run_worker.py

agent:
acp_type: async
name: deep-research-docs
description: Searches documentation and the web for relevant guides and references

temporal:
enabled: true
workflows:
- name: deep-research-docs
queue_name: deep_research_docs_queue

credentials:
- env_var_name: REDIS_URL
secret_name: redis-url-secret
secret_key: url

env:
OPENAI_ORG_ID: ""

deployment:
image:
repository: ""
tag: "latest"

imagePullSecrets:
- name: my-registry-secret

global:
agent:
name: "deep-research-docs"
description: "Searches documentation and the web for relevant guides and references"

replicaCount: 1
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
Empty file.
29 changes: 29 additions & 0 deletions examples/demos/deep_research/docs_researcher/project/acp.py
@@ -0,0 +1,29 @@
import os

from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from agentex.lib.core.temporal.plugins.openai_agents.models.temporal_streaming_model import (
TemporalStreamingModelProvider,
)
from agentex.lib.core.temporal.plugins.openai_agents.interceptors.context_interceptor import ContextInterceptor
from agentex.lib.sdk.fastacp.fastacp import FastACP
from agentex.lib.types.fastacp import TemporalACPConfig

context_interceptor = ContextInterceptor()
streaming_model_provider = TemporalStreamingModelProvider()

# Create the ACP server
acp = FastACP.create(
acp_type="async",
config=TemporalACPConfig(
type="temporal",
temporal_address=os.getenv("TEMPORAL_ADDRESS", "localhost:7233"),
plugins=[OpenAIAgentsPlugin(model_provider=streaming_model_provider)],
interceptors=[context_interceptor],
),
)

if __name__ == "__main__":
import uvicorn

uvicorn.run(acp, host="0.0.0.0", port=8000)