Conversation

@abrookins (Collaborator) commented on Jan 26, 2026

Summary

Adds a comprehensive benchmark test suite using pytest-benchmark to document performance characteristics of redis-om-python model operations.

Benchmarks Included

The suite covers 25 benchmarks across these categories:

Instantiation

  • Pydantic BaseModel (baseline)
  • HashModel (simple and complex)
  • JsonModel (simple and complex)
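For reference, a single instantiation benchmark with pytest-benchmark looks roughly like the sketch below. The Customer model is a hypothetical stand-in for the models actually defined in tests/test_benchmarks.py.

```python
# Minimal sketch of an instantiation benchmark; the Customer model is
# illustrative, not the one used in the real suite.
from aredis_om import HashModel


class Customer(HashModel):
    first_name: str
    last_name: str
    age: int


def test_hash_model_instantiation(benchmark):
    # pytest-benchmark calls the constructor repeatedly and records timing stats
    benchmark(Customer, first_name="Ada", last_name="Lovelace", age=36)
```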

Save Operations

  • HashModel save
  • JsonModel save
  • JsonModel with embedded documents

Get Operations

  • HashModel get by primary key
  • JsonModel get by primary key
  • JsonModel with embedded documents

Query Operations

  • Find all records
  • Find by indexed field
  • Find by embedded field
  • Find with sorting
  • Find with pagination
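As an illustration of how the async query paths are exercised, a find-by-indexed-field benchmark might look like the sketch below. The model, the seeded data, and the asyncio.run() wrapper are assumptions; the real tests may handle index migration and the event loop differently.

```python
# Hedged sketch of a query benchmark against the async API. Index migration
# and test data seeding are omitted; the real suite handles them in fixtures.
import asyncio

from aredis_om import Field, HashModel


class Customer(HashModel):
    last_name: str = Field(index=True)


def test_find_by_indexed_field(benchmark):
    async def run_query():
        return await Customer.find(Customer.last_name == "Smith").all()

    # pytest-benchmark times a synchronous callable, so each measured round
    # drives the coroutine to completion with asyncio.run().
    benchmark(lambda: asyncio.run(run_query()))
```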

Update Operations

  • HashModel field updates
  • JsonModel field updates
  • JsonModel embedded document updates

Batch Operations

  • Batch add for HashModel
  • Batch add for JsonModel
  • Batch add for JsonModel with embedded documents
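The batch benchmarks follow the same shape, assuming the Model.add() bulk helper accepts a list of instances. This is a sketch only; the model name and batch size are illustrative, not the values used in tests/test_benchmarks.py.

```python
# Rough sketch of a batch-add benchmark; Customer and the batch size of 100
# are assumptions.
import asyncio

from aredis_om import HashModel


class Customer(HashModel):
    first_name: str
    last_name: str


def test_batch_add_hash_model(benchmark):
    customers = [
        Customer(first_name=f"user{i}", last_name="Example") for i in range(100)
    ]

    async def add_all():
        return await Customer.add(customers)

    benchmark(lambda: asyncio.run(add_all()))
```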

Running Benchmarks Locally

```bash
# Run benchmarks with timing output
pytest tests/test_benchmarks.py -v --benchmark-only

# Run benchmarks as part of the normal test suite
pytest tests/test_benchmarks.py -v
```
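
If you want to track changes between local runs, pytest-benchmark's standard save/compare flags can be used as well (not wired into this PR; the run name below is illustrative):

```bash
# Save a named baseline, then compare a later run against saved results
pytest tests/test_benchmarks.py --benchmark-only --benchmark-save=baseline
pytest tests/test_benchmarks.py --benchmark-only --benchmark-compare
```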

CI Integration

Benchmarks run in a dedicated CI job that:

  • Executes all 25 benchmarks with full timing measurements
  • Generates a summary table in the GitHub Actions run summary
  • Uploads benchmark results as a JSON artifact for historical comparison

To view benchmark results:

  1. Go to the Actions tab for this PR
  2. Click on the "CI" workflow
  3. Click Summary in the left-hand sidebar
  4. View the summary table (scroll down)

Interpreting Results

The benchmark output shows:

  • Min/Max/Mean: per-operation timing statistics (lower is better)
  • OPS: operations per second (higher is better)

Changes

  • Added tests/test_benchmarks.py with 25 benchmark tests
  • Added pytest-benchmark as a dev dependency
  • Added dedicated "Benchmarks" job to CI workflow with summary reporting
  • Updated make_sync.py to exclude benchmark tests from sync generation (async benchmarks are sufficient)

Related to #640

@abrookins merged commit 25ea6a9 into main on Jan 27, 2026 (16 checks passed).
@abrookins deleted the add-benchmark-tests branch on January 27, 2026 at 01:40.