Merged
```diff
@@ -440,9 +440,8 @@ def generate_unique_id(self) -> str:
     def get_run_request(
         self,
         message: str,
-        response_format: type[BaseModel] | None,
-        enable_tool_calls: bool,
-        wait_for_response: bool = True,
+        *,
+        options: dict[str, Any] | None = None,
     ) -> RunRequest:
         """Get the current run request from the orchestration context.
@@ -451,9 +450,7 @@ def get_run_request(
         """
         request = super().get_run_request(
             message,
-            response_format,
-            enable_tool_calls,
-            wait_for_response,
+            options=options,
         )
         request.orchestration_id = self._context.instance_id
         return request
```
7 changes: 6 additions & 1 deletion python/packages/durabletask/pyproject.toml
```diff
@@ -4,7 +4,7 @@ description = "Durable Task integration for Microsoft Agent Framework."
 authors = [{ name = "Microsoft", email = "af-support@microsoft.com"}]
 readme = "README.md"
 requires-python = ">=3.10"
-version = "0.0.1"
+version = "0.0.1b260113"
 license-files = ["LICENSE"]
 urls.homepage = "https://aka.ms/agent-framework"
 urls.source = "https://github.com/microsoft/agent-framework/tree/main/python"
@@ -53,6 +53,11 @@ filterwarnings = [
 timeout = 120
 markers = [
     "integration: marks tests as integration tests",
+    "integration_test: marks tests as integration tests (alternative marker)",
+    "sample: marks tests as sample tests",
+    "requires_azure_openai: marks tests that require Azure OpenAI",
+    "requires_dts: marks tests that require Durable Task Scheduler",
+    "requires_redis: marks tests that require Redis"
 ]

 [tool.ruff]
```
17 changes: 17 additions & 0 deletions python/packages/durabletask/tests/integration_tests/.env.example
```
# Azure OpenAI Configuration
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=your-deployment-name
# Optional: Use Azure CLI authentication if not provided
# AZURE_OPENAI_API_KEY=your-api-key

# Durable Task Scheduler Configuration
ENDPOINT=http://localhost:8080
TASKHUB=default

# Redis Configuration (for streaming tests)
REDIS_CONNECTION_STRING=redis://localhost:6379
REDIS_STREAM_TTL_MINUTES=10

# Integration Test Control
# Set to 'true' to enable integration tests
RUN_INTEGRATION_TESTS=true
```
111 changes: 111 additions & 0 deletions python/packages/durabletask/tests/integration_tests/README.md
# Sample Integration Tests

Integration tests that validate the Durable Agent Framework samples by running them against a Durable Task Scheduler (DTS) instance.

## Setup

### 1. Create `.env` file

Copy `.env.example` to `.env` and fill in your Azure credentials and service endpoints:

```bash
cp .env.example .env
```

Required variables:
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME`
- `AZURE_OPENAI_API_KEY` (optional if using Azure CLI authentication)
- `RUN_INTEGRATION_TESTS` (set to `true`)
- `ENDPOINT` (default: http://localhost:8080)
- `TASKHUB` (default: default)

Optional variables (for streaming tests):
- `REDIS_CONNECTION_STRING` (default: redis://localhost:6379)
- `REDIS_STREAM_TTL_MINUTES` (default: 10)
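
The defaults above can be resolved in one place before any test runs. A minimal sketch, assuming a hypothetical helper (the actual test suite may read these variables differently):

```python
import os
from collections.abc import Mapping
from typing import Any


def load_test_settings(env: Mapping[str, str]) -> dict[str, Any]:
    """Resolve integration-test settings, falling back to the defaults listed above."""
    return {
        "dts_endpoint": env.get("ENDPOINT", "http://localhost:8080"),
        "task_hub": env.get("TASKHUB", "default"),
        "redis_url": env.get("REDIS_CONNECTION_STRING", "redis://localhost:6379"),
        "redis_ttl_minutes": int(env.get("REDIS_STREAM_TTL_MINUTES", "10")),
        # Integration tests run only when this is explicitly set to 'true'.
        "run_integration_tests": env.get("RUN_INTEGRATION_TESTS", "").lower() == "true",
    }


settings = load_test_settings(os.environ)
```

Taking the environment as a parameter (rather than reading `os.environ` inline) keeps the helper easy to unit-test.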

### 2. Start required services

**Durable Task Scheduler:**
```bash
docker run -d --name dts-emulator -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
```
- Port 8080: gRPC endpoint (used by tests)
- Port 8082: Web dashboard (optional, for monitoring)

**Redis (for streaming tests):**
```bash
docker run -d --name redis -p 6379:6379 redis:latest
```
- Port 6379: Redis server endpoint

## Running Tests

The tests automatically start and stop worker processes for each sample.

### Run all sample tests
```bash
uv run pytest packages/durabletask/tests/integration_tests -v
```

### Run specific sample
```bash
uv run pytest packages/durabletask/tests/integration_tests/test_01_single_agent.py -v
```

### Run with verbose output
```bash
uv run pytest packages/durabletask/tests/integration_tests -sv
```

## How It Works

Each test file uses pytest markers to automatically configure and start the worker process:

```python
pytestmark = [
    pytest.mark.sample("03_single_agent_streaming"),
    pytest.mark.integration_test,
    pytest.mark.requires_azure_openai,
    pytest.mark.requires_dts,
    pytest.mark.requires_redis,
]
```
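
A `conftest.py` could translate those markers into skip conditions and a worker subprocess. The sketch below is hypothetical — the marker-to-variable mapping and the worker script path are assumptions, not the suite's actual implementation:

```python
import subprocess
import sys

# Assumed mapping from marker name to the env vars that marker needs.
MARKER_ENV_REQUIREMENTS: dict[str, list[str]] = {
    "requires_azure_openai": ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    "requires_dts": ["ENDPOINT", "TASKHUB"],
    "requires_redis": ["REDIS_CONNECTION_STRING"],
}


def missing_env_for(markers: set[str], env: dict[str, str]) -> list[str]:
    """List the variables a test's markers require but the environment lacks."""
    return [
        var
        for marker in sorted(markers)
        for var in MARKER_ENV_REQUIREMENTS.get(marker, [])
        if var not in env
    ]


def start_sample_worker(sample_name: str) -> subprocess.Popen:
    """Launch the sample's worker process (the path layout is an assumption)."""
    return subprocess.Popen([sys.executable, f"samples/{sample_name}/main.py"])
```

A fixture would call `start_sample_worker` before the marked tests run, skip them when `missing_env_for` returns anything, and terminate the process during teardown.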

## Troubleshooting

**Tests are skipped:**
Ensure `RUN_INTEGRATION_TESTS=true` is set in your `.env` file.

**DTS connection failed:**
Check that the DTS emulator container is running: `docker ps | grep dts-emulator`

**Redis connection failed:**
Check that Redis is running: `docker ps | grep redis`

**Missing environment variables:**
Ensure your `.env` file contains all required variables from `.env.example`.

**Tests timeout:**
Check that Azure OpenAI credentials are valid and the service is accessible.

**DTS emulator is not available:**
- Ensure the Docker container is running: `docker ps | grep dts-emulator`
- Check that port 8080 is not in use by another process
- Restart the container if needed

### Azure OpenAI Errors

If you see authentication or deployment errors:
- Verify your `AZURE_OPENAI_ENDPOINT` is correct
- Confirm `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME` matches your deployment
- If using API key, check `AZURE_OPENAI_API_KEY` is valid
- If using Azure CLI, ensure you're logged in: `az login`

## CI/CD

For automated testing in CI/CD pipelines:

1. Use Docker Compose to start DTS emulator
2. Set environment variables via CI/CD secrets
3. Run tests with appropriate markers: `pytest -m integration_test`