fix: deps and tests update #201

Merged
HardMax71 merged 5 commits into main from fix/deps-and-tests
Feb 17, 2026

Conversation

@HardMax71 HardMax71 (Owner) commented Feb 16, 2026


Summary by cubic

Updated backend and frontend dependencies, CI scanner, and base image to improve security and runtime compatibility. Fixed flaky saga state updates with Beanie revision tracking and atomic repository updates, and improved notification event timing and orchestrator logging.

  • Dependencies

    • CI/build: aquasecurity/trivy-action 0.34.0; frontend base image nginx 1.29-alpine.
    • Updates: Backend (greenlet 3.3.1, idna 3.11, uvicorn 0.40.0, yarl 1.22.0), Dev/test (hypothesis 6.151.6), Frontend (dotenv 17.3.1, rollup 4.57.1, @eslint/js 10.0.1, @hey-api/openapi-ts 0.92.4).
  • Bug Fixes

    • Notifications: publish the "notification created" event after immediate delivery to reflect final status.
    • Saga orchestrator: moved contextual values into structured log fields to avoid leaking user data in messages.
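The notification-timing fix can be sketched as follows; `NotificationCreatedEvent` and `_deliver_notification` follow the names used in this PR's summary, but the service shape here is a simplified assumption, not the project's actual code:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class NotificationCreatedEvent:
    notification_id: str
    status: str

class FakeBus:
    """Stand-in event bus that records published events."""
    def __init__(self) -> None:
        self.published = []

    async def publish(self, event) -> None:
        self.published.append(event)

class NotificationService:
    def __init__(self, bus) -> None:
        self._bus = bus

    async def _deliver_notification(self, notification_id: str) -> str:
        # Pretend immediate delivery succeeded; real code would call a channel adapter.
        return "delivered"

    async def create_notification(self, notification_id: str) -> str:
        # Publish only after delivery completes, so the event reflects final status.
        status = await self._deliver_notification(notification_id)
        await self._bus.publish(NotificationCreatedEvent(notification_id, status))
        return status

bus = FakeBus()
status = asyncio.run(NotificationService(bus).create_notification("n-1"))
print(status, bus.published[0].status)  # delivered delivered
```

Publishing before delivery would emit an event whose status could already be stale by the time consumers read it; moving the publish after the await closes that window.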

Written for commit df1f55e. Summary will update on new commits.

Summary by CodeRabbit

  • Chores
    • Bumped vulnerability scanner, updated frontend base image, and upgraded multiple backend/frontend dependencies.
  • Bug Fixes / Reliability
    • Saga persistence, concurrency and lifecycle flows hardened—state transitions (start, cancel, timeout, fail, compensate, complete) are now consistently persisted.
    • Notification events now published after delivery completes.
  • Tests
    • Tests and test doubles updated to match persistence-driven saga behavior.


coderabbitai bot commented Feb 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Refactors saga orchestration to persist via repository load-modify-save with revision-based concurrency; replaces in-memory saga mutations with repo.save_saga calls and changes several method signatures; updates tests and test doubles; plus unrelated dependency and CI/frontend image bumps.

Changes

Cohort / File(s) Summary
CI/CD workflow
\.github/workflows/docker.yml
Bump Trivy action version 0.33.1 → 0.34.0.
Python deps
backend/pyproject.toml
Version bumps for runtime/dev Python dependencies (greenlet, idna, uvicorn, yarl, hypothesis).
Frontend image
frontend/Dockerfile
Production base image updated nginx:1.27-alpine → nginx:1.29-alpine.
Frontend packages
frontend/package.json
Multiple dependency/devDependency version bumps (lint/build/test tooling, JSDOM, rollup). No script changes.
Saga document settings
backend/app/db/docs/saga.py
Enable use_revision = True on SagaDocument.Settings (revision-based persistence).
Saga repository
backend/app/db/repositories/saga_repository.py
Remove upsert_saga; add save_saga(saga_id, **updates) that loads document, applies partial updates, sets UTC updated_at, persists, and raises SagaNotFoundError or SagaConcurrencyError on conflicts; handle RevisionIdWasChanged.
Saga orchestrator
backend/app/services/saga/saga_orchestrator.py
Replace in-memory mutations with repo.save_saga(...) at key points (current_step, completed_steps, compensation, failures, timeouts); change method signatures to accept saga_id (cancel/compensate/fail flows); export saga domain exceptions.
Saga service
backend/app/services/saga/saga_service.py
Simplify cancel flow to delegate to orchestrator; remove in-method state validation and SagaInvalidStateError import; logging condensed.
Notification timing
backend/app/services/notification_service.py
Move NotificationCreatedEvent publish to after _deliver_notification completes (publish happens post-delivery).
Unit tests (saga)
backend/tests/unit/services/saga/test_saga_orchestrator_unit.py
Test double updated: add _find_by_id, get_saga, save_saga; adapt tests to new repo API and SagaNotFoundError.
E2E tests (saga)
backend/tests/e2e/services/saga/test_saga_service.py
create_saga_for_execution now uses get_or_create_saga and returns repo-created saga instead of the constructed in-memory instance.
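The repository change summarized above (load-modify-save guarded by revision tracking) can be sketched with an in-memory stand-in. `save_saga`, `SagaNotFoundError`, and `SagaConcurrencyError` follow the names in the table; the explicit revision check emulates what Beanie's `use_revision` setting and `RevisionIdWasChanged` exception provide against MongoDB:

```python
import datetime as dt

class SagaNotFoundError(Exception):
    pass

class SagaConcurrencyError(Exception):
    pass

class InMemorySagaRepo:
    """Emulates save_saga: load the document, apply partial updates, set a UTC
    updated_at, and reject writes whose revision is stale (as Beanie does with
    use_revision enabled)."""
    def __init__(self) -> None:
        self._docs = {}

    def create(self, saga_id: str) -> None:
        self._docs[saga_id] = {"saga_id": saga_id, "revision": 0, "state": "CREATED"}

    def load(self, saga_id: str) -> dict:
        doc = self._docs.get(saga_id)
        if doc is None:
            raise SagaNotFoundError(saga_id)
        return dict(doc)  # detached copy, like a loaded document

    def save_saga(self, saga_id: str, expected_revision: int, **updates) -> dict:
        doc = self._docs.get(saga_id)
        if doc is None:
            raise SagaNotFoundError(saga_id)
        if doc["revision"] != expected_revision:
            # Beanie would raise RevisionIdWasChanged here; the repo maps it
            # to the domain-level SagaConcurrencyError.
            raise SagaConcurrencyError(saga_id)
        doc.update(updates)
        doc["revision"] += 1
        doc["updated_at"] = dt.datetime.now(dt.timezone.utc)
        return dict(doc)

repo = InMemorySagaRepo()
repo.create("s-1")
a = repo.load("s-1")
b = repo.load("s-1")
repo.save_saga("s-1", a["revision"], state="RUNNING")  # first writer wins
try:
    repo.save_saga("s-1", b["revision"], state="CANCELLED")  # stale revision
except SagaConcurrencyError:
    print("stale write rejected")
```

This is the optimistic-concurrency trade the PR makes: lost updates become explicit `SagaConcurrencyError`s instead of silently overwriting each other, which is what made the saga tests flaky before.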

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant Orchestrator
  participant Repo
  participant StepHandler
  participant Producer

  Client->>Orchestrator: start/execute saga (saga_id)
  Orchestrator->>Repo: get_or_create_saga / get_saga(saga_id)
  Repo-->>Orchestrator: persisted saga (with revision)
  Orchestrator->>Repo: save_saga(saga_id, current_step=...)
  Repo-->>Orchestrator: updated saga
  Orchestrator->>StepHandler: execute step
  StepHandler-->>Orchestrator: success / failure
  alt success
    Orchestrator->>Repo: save_saga(saga_id, completed_steps, context)
    Repo-->>Orchestrator: updated saga
    Orchestrator->>Producer: publish step/completion events (if enabled)
  else failure with compensation
    Orchestrator->>Repo: save_saga(saga_id, state=COMPENSATING)
    Orchestrator->>Producer: publish failure/cancelled (if enabled)
    Orchestrator->>StepHandler: invoke compensation steps
    StepHandler-->>Orchestrator: compensation results
    Orchestrator->>Repo: save_saga(saga_id, compensated_steps / final_state)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through sagas, saving each stride,
Revisions snug, concurrency set aside.
Bumped images and packages with a cheerful bite,
I nudged events later and made persistence tight,
A carrot-coded hop into the night.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Title check (⚠️ Warning): The title 'fix: deps and tests update' is overly vague and does not capture the main architectural changes involving saga orchestration, repository patterns, and concurrency control. Resolution: revise the title to highlight the primary changes, such as 'fix: saga persistence with revision tracking and concurrency control' or 'fix: atomic saga updates with Beanie revision tracking'.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 72.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 3 files

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (3)
backend/app/db/repositories/saga_repository.py (1)

47-66: Clean load-modify-save with optimistic concurrency — consider guarding against invalid field names.

The pattern is sound. One subtle risk: setattr(doc, field, value) will silently accept typos or non-existent field names, creating regular Python attributes that save_changes() will ignore. The update would appear to succeed but the field wouldn't actually be persisted.

Consider adding a guard:

🛡️ Optional field-name validation
+_saga_doc_fields = set(SagaDocument.model_fields)
+
 async def save_saga(self, saga_id: str, **updates: Any) -> Saga:
     ...
     doc = await SagaDocument.find_one(SagaDocument.saga_id == saga_id)
     if not doc:
         raise SagaNotFoundError(saga_id)
+    unknown = set(updates) - _saga_doc_fields
+    if unknown:
+        raise ValueError(f"Unknown SagaDocument fields: {unknown}")
     for field, value in updates.items():
         setattr(doc, field, value)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/db/repositories/saga_repository.py` around lines 47 - 66, The
save_saga method currently uses setattr(doc, field, value) which will silently
create non-persisted Python attributes for typos or invalid field names; update
save_saga to validate the keys in updates against the SagaDocument's
declared/persisted fields before applying them (e.g., check against SagaDocument
model/field metadata or doc.__class__ field list), and if any invalid field
names are found raise a clear error (e.g., SagaFieldValidationError or
ValueError) listing the invalid keys; only call setattr and save_changes() when
all update keys are valid, keeping the existing SagaNotFoundError,
RevisionIdWasChanged -> SagaConcurrencyError handling and the final return
self._to_domain(doc).
backend/app/services/saga/saga_orchestrator.py (2)

209-249: _compensate_saga reloads the saga after cancel_saga already has the persisted copy — minor redundancy.

cancel_saga (line 301) gets back saved from save_saga, then calls _compensate_saga(saga_id, context) (line 323), which immediately reloads the saga from the repo (line 213). You could pass the already-loaded saga to avoid the extra DB read. That said, reloading ensures the freshest state, so this is a trade-off between correctness and efficiency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/services/saga/saga_orchestrator.py` around lines 209 - 249, The
_compensate_saga function currently always reloads the saga via _repo.get_saga
causing an extra DB read when cancel_saga already has the persisted copy; change
_compensate_saga to accept an optional saga parameter (e.g., saga:
Optional[Saga] = None) and use that passed-in saga if provided, falling back to
_repo.get_saga(saga_id) only when saga is None; update cancel_saga to pass the
saved saga returned from save_saga into _compensate_saga(saga_id, context,
saga=saved) and adjust any type hints and callers accordingly to avoid the
redundant read while preserving fallback for freshness.

280-325: TOCTOU in cancel_saga between state check and state write — acceptable with revision guard, but consider atomic CAS.

Lines 287–292 read the saga state, and lines 301–304 write the CANCELLED state. A concurrent modification between these two points would raise SagaConcurrencyError (thanks to revision checking), which is correctly documented. However, the caller would need to retry — and there's no retry logic here.

If cancellation failures due to concurrent modification are expected to be rare, this is fine. If they could be frequent (e.g., during active step execution), consider either:

  • Adding a retry loop around the read-check-write in cancel_saga, or
  • Documenting that callers should handle SagaConcurrencyError with a retry.
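A minimal sketch of the first option, assuming the cancel flow's read-check-write is exposed as an awaitable and that `SagaConcurrencyError` (named in this PR) is raised on a stale revision; the helper and its parameters are hypothetical:

```python
import asyncio

class SagaConcurrencyError(Exception):
    pass

async def cancel_with_retry(cancel_once, attempts: int = 3, base_delay: float = 0.01):
    """Retry the read-check-write cancel flow when a concurrent revision change is seen."""
    for attempt in range(1, attempts + 1):
        try:
            return await cancel_once()
        except SagaConcurrencyError:
            if attempt == attempts:
                raise  # retries exhausted: surface the error to the caller
            await asyncio.sleep(base_delay * attempt)  # simple linear backoff

# Fake cancel flow that conflicts twice, then succeeds.
calls = {"n": 0}
async def flaky_cancel():
    calls["n"] += 1
    if calls["n"] < 3:
        raise SagaConcurrencyError("revision changed")
    return "CANCELLED"

result = asyncio.run(cancel_with_retry(flaky_cancel))
print(result, calls["n"])  # CANCELLED 3
```

Each retry must re-fetch the saga and re-check the allowed states inside `cancel_once`, since the state may legitimately have moved past a cancellable point between attempts.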
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/services/saga/saga_orchestrator.py` around lines 280 - 325, The
cancel_saga flow has a TOCTOU between reading state via get_saga_status and
writing CANCELLED via _repo.save_saga; implement a small retry loop around the
read-check-write to handle SagaConcurrencyError: in cancel_saga, loop up to N
attempts (e.g., 3) where you re-fetch the saga with get_saga_status, re-check
allowed states (SagaState.RUNNING/CREATED), attempt _repo.save_saga and on
SagaConcurrencyError sleep/backoff briefly then retry; if retries exhausted
re-raise the SagaConcurrencyError. Ensure logs (self.logger) record retry
attempts and include execution_id/saga_id, and keep existing compensation and
event publishing logic unchanged after a successful save.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
backend/app/services/saga/saga_orchestrator.py (2)

176-182: Prefer dataclasses.fields() over __dataclass_fields__ for field introspection.

__dataclass_fields__ is an implementation detail. dataclasses.fields(SagaContextData) is the public API for the same purpose and more resilient to future Python changes.

Proposed fix
                     await self._repo.save_saga(
                         instance.saga_id,
                         completed_steps=[*saved.completed_steps, step.name],
                         context_data=SagaContextData(**{
-                            k: v for k, v in context.data.items() if k in SagaContextData.__dataclass_fields__
+                            k: v for k, v in context.data.items()
+                            if k in {f.name for f in dataclasses.fields(SagaContextData)}
                         }),
                     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/services/saga/saga_orchestrator.py` around lines 176 - 182,
Replace the use of the implementation-detail attribute
SagaContextData.__dataclass_fields__ with the public dataclasses API: import
dataclasses.fields and compute the set of valid field names via {f.name for f in
fields(SagaContextData)}, then use that set in the dict comprehension inside the
await self._repo.save_saga(...) call (the block that currently builds
context_data from context.data). This keeps the logic in the method/class
unchanged but uses dataclasses.fields(SagaContextData) to filter context.data
keys before constructing SagaContextData.
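A runnable illustration of the suggested switch to the public API; the `SagaContextData` fields here are invented for the demo, since the real dataclass is not shown in this PR:

```python
import dataclasses

@dataclasses.dataclass
class SagaContextData:
    # Hypothetical fields; the real SagaContextData shape is not shown in this PR.
    order_id: str = ""
    user_id: str = ""

raw = {"order_id": "o-1", "user_id": "u-1", "debug_flag": True}
# dataclasses.fields() is the public API; __dataclass_fields__ is an implementation detail.
allowed = {f.name for f in dataclasses.fields(SagaContextData)}
filtered = {k: v for k, v in raw.items() if k in allowed}
ctx = SagaContextData(**filtered)
print(sorted(filtered), ctx.order_id)  # ['order_id', 'user_id'] o-1
```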

350-378: Minor: redundant self._producer guard at line 372.

This method is only called from cancel_saga (line 309) which already guards on self._producer. The inner check is harmless but redundant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/services/saga/saga_orchestrator.py` around lines 350 - 378,
Remove the redundant self._producer presence check inside
_publish_saga_cancelled_event: since cancel_saga already guards that
self._producer exists, eliminate the inner "if self._producer" branch and
directly await self._producer.produce(...) in _publish_saga_cancelled_event
(keeping the same event construction, metadata, logging, and exception
handling). This simplifies the method (SagaCancelledEvent creation and
self.logger calls remain unchanged) and relies on the caller cancel_saga to
ensure _producer is set.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/services/saga/saga_orchestrator.py`:
- Around line 303-325: After persisting the CANCELLED state and publishing, wrap
the await self._compensate_saga(saga_id, context) call in a try/except that
catches SagaConcurrencyError and Exception; on exception do not re-raise (so
cancel_saga reports success) but log the failure with full exception details
(use the orchestrator logger, e.g. self._logger.error) and persist a
compensation-failure marker on the saga (call self._repo.save_saga with saga_id
and an error_message or compensation_status field) so compensations can be
retried or inspected; keep the existing flow for save_saga,
_publish_saga_cancelled_event and context preparation unchanged.
- Around line 260-271: The loop in check_timeouts currently lets a failing
_repo.save_saga abort processing remaining sagas; wrap the per-instance work
(the logger.warning and the await self._repo.save_saga(...) call) in a
try/except so one failure doesn't stop the batch—catch SagaConcurrencyError and
other exceptions, log an error via self.logger.error including saga_id and the
exception, and continue to the next instance; keep the same
state/error_message/completed_at values when calling save_saga and ensure
exceptions are handled per-instance so find_timed_out_sagas results are fully
processed.
- Around line 199-207: The current except block in _execute_saga calls
_compensate_saga or _fail_saga but does not guard against exceptions they may
raise, which can bubble up through _execute_saga → _start_saga →
handle_execution_requested; update the except Exception as e handler to wrap the
calls to _compensate_saga(instance.saga_id, context) and
_fail_saga(instance.saga_id, str(e)) in their own try/except blocks (or a single
try that separately logs failures) so any exception raised by those methods is
caught and logged (include saga_id and exc_info) instead of propagating; ensure
you still log the original error before attempting compensation/fail and use the
same logger used in _execute_saga for consistency.


@HardMax71 HardMax71 merged commit 59a6dec into main Feb 17, 2026
17 checks passed
@HardMax71 HardMax71 deleted the fix/deps-and-tests branch February 17, 2026 12:12