Merged
40 changes: 40 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,40 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python package

on:
  push:

  pull_request:
    branches: [ main ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.11", "3.13"]

    steps:
      - uses: actions/checkout@v5
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Hatch
        run: |
          python -m pip install --upgrade hatch
      - uses: webfactory/ssh-agent@v0.9.1
        with:
          ssh-private-key: ${{ secrets.SDK_KEY }}
      - name: static analysis
        run: hatch fmt --check
      - name: type checking
        run: hatch run types:check
      - name: Run tests + coverage
        run: hatch run test:cov
      - name: Build distribution
        run: hatch build
29 changes: 29 additions & 0 deletions .gitignore
@@ -0,0 +1,29 @@
*~
*#
*.swp
*.iml
*.DS_Store

__pycache__/
*.py[cod]
*$py.class
*.egg-info/

/.coverage
/.coverage.*
/.cache
/.pytest_cache
/.mypy_cache

/doc/_apidoc/
/build

.venv
.venv/

.attach_*

dist/

.vscode/
.kiro/
113 changes: 113 additions & 0 deletions CONTRIBUTING.md
@@ -6,6 +6,119 @@ documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.

## Dependencies
Install [hatch](https://hatch.pypa.io/dev/install/).

## Developer workflow
These are all the checks you would typically do as you prepare a PR:
```
# just test
hatch test

# coverage
hatch run test:cov

# type checks
hatch run types:check

# static analysis
hatch fmt
```

## Set up your IDE
Point your IDE at the hatch virtual environment to have it recognize dependencies
and imports.

You can find the path to the hatch Python interpreter like this:
```
echo "$(hatch env find)/bin/python"
```

### VS Code
If you're using VS Code, run "Python: Select Interpreter" and choose the hatch venv Python interpreter
found with the `hatch env find` command.

Hatch uses Ruff for static analysis.

You might want to install the [Ruff extension for VS Code](https://github.com/astral-sh/ruff-vscode)
to have your IDE interactively warn of the same linting and formatting rules.

These `settings.json` settings are useful:
```
{
    "[python]": {
        "editor.formatOnSave": true,
        "editor.codeActionsOnSave": {
            "source.fixAll": "explicit",
            "source.organizeImports": "explicit"
        },
        "editor.defaultFormatter": "charliermarsh.ruff"
    },
    "ruff.nativeServer": "on"
}
```

## Testing
### How to run tests
To run all tests:
```
hatch test
```

To run a single test file:
```
hatch test tests/path_to_test_module.py
```

To run a specific test in a module:
```
hatch test tests/path_to_test_module.py::test_mytestmethod
```

To run a single test, or a subset of tests:
```
hatch test -k TEST_PATTERN
```

This runs every test whose name matches the given string expression (case-insensitive).
The expression may combine Python operators (`and`, `or`, `not`) over file names, class names, and function names.

### Debug
To debug failing tests:

```
hatch test --pdb
```

This will drop you into the Python debugger on the failed test.

### Writing tests
Place test files in the `tests/` directory, using file names that end with `_test.py`.

Mimic the package structure of the `src/aws_durable_execution_sdk_python` directory:
a module `src/mypackage/mymodule.py` gets a dedicated unit test file at
`tests/mypackage/mymodule_test.py`.
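As a sketch, a test for a hypothetical `src/mypackage/mymodule.py` would live at `tests/mypackage/mymodule_test.py` and look something like this (the function and names are invented for illustration; it is inlined here so the example is self-contained):

```python
# In a real project this function would live in src/mypackage/mymodule.py;
# it is inlined here (hypothetical example) so the test runs on its own.
def backoff_delays(base: int, attempts: int) -> list[int]:
    """Exponential backoff delays: base, base*2, base*4, ..."""
    return [base * (2**i) for i in range(attempts)]


# tests/mypackage/mymodule_test.py would then contain:
def test_backoff_delays_doubles():
    assert backoff_delays(base=1, attempts=3) == [1, 2, 4]


test_backoff_delays_doubles()
```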

## Coverage
```
hatch run test:cov
```

## Linting and type checks
Type checking:
```
hatch run types:check
```

Static analysis (with auto-fix of known issues):
```
hatch fmt
```

To do static analysis without auto-fixes:
```
hatch fmt --check
```

## Reporting Bugs/Feature Requests

180 changes: 171 additions & 9 deletions README.md
@@ -1,17 +1,179 @@
## My Project
# aws-durable-functions-sdk-python
[![PyPI - Version](https://img.shields.io/pypi/v/aws-durable-functions-sdk-python.svg)](https://pypi.org/project/aws-durable-functions-sdk-python)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/aws-durable-functions-sdk-python.svg)](https://pypi.org/project/aws-durable-functions-sdk-python)

-----

## Table of Contents

- [Installation](#installation)
- [Overview](#overview)
- [Quick Start](#quick-start)
- [Architecture](#architecture)
- [Event Flow](#event-flow)
- [Major Components](#major-components)
- [Component Relationships](#component-relationships)
- [Developers](#developers)
- [License](#license)

## Installation

```console
pip install aws-durable-functions-sdk-python-testing
```

## Overview

Use the Durable Functions Python Testing Framework to test your Python Durable Functions locally.

The test framework contains a local runner, so you can run and test your Durable Function locally
before you deploy it.

## Quick Start

### A Durable Function under test

```python
from typing import Any

from durable_executions_python_language_sdk.context import (
    DurableContext,
    durable_step,
    durable_with_child_context,
)
from durable_executions_python_language_sdk.execution import durable_handler


@durable_step
def one(a: int, b: int) -> str:
    return f"{a} {b}"


@durable_step
def two_1(a: int, b: int) -> str:
    return f"{a} {b}"


@durable_step
def two_2(a: int, b: int) -> str:
    return f"{b} {a}"


@durable_with_child_context
def two(ctx: DurableContext, a: int, b: int) -> str:
    two_1_result: str = ctx.step(two_1(a, b))
    two_2_result: str = ctx.step(two_2(a, b))
    return f"{two_1_result} {two_2_result}"


@durable_step
def three(a: int, b: int) -> str:
    return f"{a} {b}"


@durable_handler
def function_under_test(event: Any, context: DurableContext) -> list[str]:
    results: list[str] = []

    result_one: str = context.step(one(1, 2))
    results.append(result_one)

    context.wait(seconds=1)

    result_two: str = context.run_in_child_context(two(3, 4))
    results.append(result_two)

    result_three: str = context.step(three(5, 6))
    results.append(result_three)

    return results
```

### Your test code

```python
from aws_durable_execution_sdk_python.execution import InvocationStatus
from aws_durable_execution_sdk_python_testing.runner import (
    ContextOperation,
    DurableFunctionTestResult,
    DurableFunctionTestRunner,
    StepOperation,
)


def test_my_durable_functions():
    with DurableFunctionTestRunner(handler=function_under_test) as runner:
        result: DurableFunctionTestResult = runner.run(input="input str", timeout=10)

        assert result.status is InvocationStatus.SUCCEEDED
        assert result.result == '["1 2", "3 4 4 3", "5 6"]'

        one_result: StepOperation = result.get_step("one")
        assert one_result.result == '"1 2"'

        two_result: ContextOperation = result.get_context("two")
        assert two_result.result == '"3 4 4 3"'

        three_result: StepOperation = result.get_step("three")
        assert three_result.result == '"5 6"'
```
## Architecture
![Durable Functions Python Test Framework Architecture](/assets/dar-python-test-framework-architecture.svg)

## Event Flow
![Event Flow Sequence Diagram](/assets/dar-python-test-framework-event-flow.svg)

1. **DurableTestRunner** starts execution via **Executor**
2. **Executor** creates **Execution** and schedules initial invocation
3. During execution, checkpoints are processed by **CheckpointProcessor**
4. **Individual Processors** transform operation updates and may trigger events
5. **ExecutionNotifier** broadcasts events to **Executor** (observer)
6. **Executor** updates **Execution** state based on events
7. **Execution** completion triggers final event notifications
8. **DurableTestRunner**'s `run()` blocks until it receives the completion event, then returns a `DurableFunctionTestResult`

## Major Components

### Core Execution Flow
- **DurableTestRunner** - Main entry point that orchestrates test execution
- **Executor** - Manages execution lifecycle. Mutates Execution.
- **Execution** - Represents the state and operations of a single durable execution

### Service Client Integration
- **InMemoryServiceClient** - Replaces AWS Lambda service client for local testing. Injected into SDK via `DurableExecutionInvocationInputWithClient`

### Checkpoint Processing Pipeline
- **CheckpointProcessor** - Orchestrates operation transformations and validation
- **Individual Validators** - Validate operation updates and state transitions
- **Individual Processors** - Transform operation updates into operations (step, wait, callback, context, execution)

### Execution status changes (Observer Pattern)
- **ExecutionNotifier** - Notifies observers of execution events
- **ExecutionObserver** - Interface for receiving execution lifecycle events
- **Executor** implements `ExecutionObserver` to handle completion events

## Component Relationships

### 1. DurableTestRunner → Executor → Execution
- **DurableTestRunner** serves as the main API entry point and sets up all components
- **Executor** manages the execution lifecycle, handling invocations and state transitions
- **Execution** maintains the state of operations and completion status

### 2. Service Client Injection
- **DurableTestRunner** creates **InMemoryServiceClient** with **CheckpointProcessor**
- **InProcessInvoker** injects the service client into SDK via `DurableExecutionInvocationInputWithClient`
- When durable functions call checkpoint operations, they're intercepted by **InMemoryServiceClient**
- **InMemoryServiceClient** delegates to **CheckpointProcessor** for local processing
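The interception step can be sketched roughly as follows (class names and method shapes are assumptions for illustration, not the SDK's actual API):

```python
# Hypothetical sketch: a local service client delegating checkpoint calls
# to a processor instead of calling the AWS Lambda service.
class CheckpointProcessor:
    def checkpoint(self, update: dict) -> dict:
        # Locally record the operation as succeeded instead of calling AWS.
        return {"name": update["name"], "status": "SUCCEEDED"}


class InMemoryServiceClient:
    """Stands in for the AWS Lambda service client during local tests."""

    def __init__(self, processor: CheckpointProcessor) -> None:
        self._processor = processor
        self.operations: list[dict] = []

    def checkpoint(self, update: dict) -> dict:
        # Calls from durable functions land here and are processed locally.
        op = self._processor.checkpoint(update)
        self.operations.append(op)
        return op


client = InMemoryServiceClient(CheckpointProcessor())
result = client.checkpoint({"name": "one", "status": "STARTED"})
```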

### 3. CheckpointProcessor → Individual Validators → Individual Processors
- **CheckpointProcessor** orchestrates the checkpoint processing pipeline
- **Individual Validators** (CheckpointValidator, TransitionsValidator, and operation-specific validators) ensure operation updates are valid
- **Individual Processors** (StepProcessor, WaitProcessor, etc.) transform `OperationUpdate` into `Operation`

### 4. Observer Pattern Flow
The observer pattern enables loose coupling between checkpoint processing and execution management:

1. **CheckpointProcessor** processes operation updates
2. **Individual Processors** detect state changes (completion, failures, timer scheduling)
3. **ExecutionNotifier** broadcasts events to registered observers
4. **Executor** (as ExecutionObserver) receives notifications and updates **Execution** state
5. **Execution** complete_* methods finalize the execution state
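The flow above can be sketched as a minimal observer pattern (hypothetical method names; the SDK's actual interfaces differ):

```python
# Minimal observer-pattern sketch of the notifier/executor relationship.
class ExecutionObserver:
    def on_execution_completed(self, execution_id: str, result: str) -> None:
        raise NotImplementedError


class ExecutionNotifier:
    def __init__(self) -> None:
        self._observers: list[ExecutionObserver] = []

    def register(self, observer: ExecutionObserver) -> None:
        self._observers.append(observer)

    def notify_completed(self, execution_id: str, result: str) -> None:
        # Broadcast the completion event to every registered observer.
        for obs in self._observers:
            obs.on_execution_completed(execution_id, result)


class Executor(ExecutionObserver):
    def __init__(self) -> None:
        self.completed: dict[str, str] = {}

    def on_execution_completed(self, execution_id: str, result: str) -> None:
        # Update execution state in response to the event.
        self.completed[execution_id] = result


notifier = ExecutionNotifier()
executor = Executor()
notifier.register(executor)
notifier.notify_completed("exec-1", '["1 2"]')
```

The notifier knows only the observer interface, so checkpoint processing stays decoupled from execution management.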


## Developers
Please see [CONTRIBUTING.md](CONTRIBUTING.md). It contains the testing guide, sample commands and instructions
for how to contribute to this package.

tl;dr: use `hatch`; it manages virtual envs and dependencies for you, so you don't have to set them up manually.

## License

This project is licensed under the [Apache-2.0 License](LICENSE).
1 change: 1 addition & 0 deletions assets/dar-python-test-framework-architecture.svg