[do not merge] feat: Span streaming & new span API #6113
test-integrations-flags.yml
on: pull_request
Matrix: Flags
All Flags tests passed
4s
Annotations
15 errors and 28 warnings
Flags (3.13, ubuntu-22.04)
Canceling since a higher priority waiting request for Test Flags-feat/span-first-2 exists
Flags (3.13, ubuntu-22.04)
The operation was canceled.
Test Flags
Canceling since a higher priority waiting request for Test Flags-feat/span-first-2 exists
All Flags tests passed
Process completed with exit code 1.
NameError when Redis command raises exception - `value` is unbound in finally block:
sentry_sdk/integrations/redis/_sync_common.py#L141
When `old_execute_command` raises an exception, the `finally` block still executes, but `value` was never assigned. Accessing `value` in `_set_cache_data(cache_span, self, cache_properties, value)` on line 148 then raises `NameError: name 'value' is not defined`, and this exception from the `finally` block masks the original Redis exception, making debugging difficult.
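A minimal, self-contained sketch of this failure mode; all names are illustrative stand-ins, not the actual sentry_sdk code. The fixed variant pre-initializes `value` so the `finally` block is always safe:

```python
def run_command(fail):
    """Buggy pattern: `value` is only bound if the command succeeds."""
    try:
        value = _execute(fail)
        return value
    finally:
        # If _execute raised, `value` was never assigned, so this line
        # raises UnboundLocalError (a NameError) and masks the original error.
        _set_cache_data(value)

def run_command_fixed(fail):
    """One possible fix: pre-initialize `value` before the try block."""
    value = None
    try:
        value = _execute(fail)
        return value
    finally:
        _set_cache_data(value)  # safe: always bound, possibly None

def _execute(fail):
    if fail:
        raise ConnectionError("redis down")
    return "OK"

def _set_cache_data(value):
    pass  # stand-in for the cache-span bookkeeping
```

Pre-initializing is one option; another is to guard the cache call in the `finally` block so it only runs when the command succeeded.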
[R7M-PTR] NameError when Redis command raises exception - `value` is unbound in finally block (additional location):
sentry_sdk/integrations/redis/_async_common.py#L135
Same unbound-`value` pattern as the primary location above: the `finally` block reads `value` even when `old_execute_command` raised before it was assigned, masking the original Redis exception.

StreamedSpan never entered as context manager - spans silently lost:
sentry_sdk/integrations/strawberry.py#L192
When span streaming is enabled, `sentry_sdk.traces.start_span()` creates a `StreamedSpan` but it's never entered via context manager or `start()`. When `finish()` is called later, it invokes `__exit__()` which tries to access `_context_manager_state` (only set in `__enter__`). This causes an `AttributeError` that's silently caught by `capture_internal_exceptions()`, preventing `_end()` from executing and causing the span to never be sent to Sentry.
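A self-contained sketch of the lifecycle described above; the class and method names are stand-ins, not the real sentry_sdk implementation:

```python
from contextlib import suppress

class StreamedSpanSketch:
    """Illustrative stand-in for the StreamedSpan lifecycle described above."""

    def __init__(self):
        self.sent = False

    def __enter__(self):
        # Only set when the span is actually entered.
        self._context_manager_state = object()
        return self

    def __exit__(self, *exc):
        _ = self._context_manager_state  # AttributeError if never entered
        self._end()

    def _end(self):
        self.sent = True  # stand-in for sending the span to Sentry

    def finish(self):
        # Mirrors the capture_internal_exceptions() swallow described above.
        with suppress(Exception):
            self.__exit__(None, None, None)


never_entered = StreamedSpanSketch()
never_entered.finish()            # AttributeError swallowed, _end() never runs

entered = StreamedSpanSketch().__enter__()
entered.finish()                  # works once the span has been entered
```

The silent drop is the dangerous part: because the `AttributeError` is swallowed, nothing in logs or telemetry indicates the span was lost.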
[4JU-ML5] StreamedSpan never entered as context manager - spans silently lost (additional location):
sentry_sdk/integrations/strawberry.py#L239
Same pattern as the primary location above: the `StreamedSpan` is created but never entered, so `finish()` fails on the missing `_context_manager_state` and the span is silently dropped.

[4JU-ML5] StreamedSpan never entered as context manager - spans silently lost (additional location):
sentry_sdk/integrations/strawberry.py#L261
Same pattern as the primary location above: the `StreamedSpan` is created but never entered, so `finish()` fails on the missing `_context_manager_state` and the span is silently dropped.

Spans not closed on exception in async Redis command execution:
sentry_sdk/integrations/redis/_async_common.py#L135
The async `_sentry_execute_command` function does not wrap the `await old_execute_command` call in a `try/finally` block. If the Redis command raises an exception, `db_span.__exit__()` and `cache_span.__exit__()` are never called, leaving the spans unclosed. This contrasts with the sync version in `_sync_common.py`, which correctly uses `try/finally`. The consequence is span leaks and potentially incorrect tracing data when Redis operations fail.
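A runnable sketch of the leak and the `try/finally` fix, using stand-in names rather than the actual integration code:

```python
import asyncio

closed = []

class SpanStub:
    """Minimal span whose __exit__ records that it was closed (illustrative)."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        closed.append(self.name)

async def execute_buggy(command):
    db_span = SpanStub("db").__enter__()
    value = await command()                  # if this raises, __exit__ never runs
    db_span.__exit__(None, None, None)
    return value

async def execute_fixed(command):
    db_span = SpanStub("db").__enter__()
    try:
        return await command()
    finally:
        db_span.__exit__(None, None, None)   # always closes, mirroring the sync path

async def failing_command():
    raise ConnectionError("redis down")

async def demo():
    try:
        await execute_buggy(failing_command)
    except ConnectionError:
        pass
    leaked = "db" not in closed              # span leaked on the buggy path
    closed.clear()
    try:
        await execute_fixed(failing_command)
    except ConnectionError:
        pass
    return leaked, "db" in closed

leaked, fixed_closed = asyncio.run(demo())
```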
[2AR-5K3] Spans not closed on exception in async Redis command execution (additional location):
sentry_sdk/integrations/anthropic.py#L572
Same pattern as the primary location above: a span opened around an awaited call is not closed in a `try/finally`, so it leaks when the call raises.

[2AR-5K3] Spans not closed on exception in async Redis command execution (additional location):
sentry_sdk/integrations/graphene.py#L165
Same pattern as the primary location above: a span opened around an awaited call is not closed in a `try/finally`, so it leaks when the call raises.

StreamedSpan never started in on_operation - spans will be silently dropped:
sentry_sdk/integrations/strawberry.py#L192
When span streaming is enabled, the `graphql_span` is created via `sentry_sdk.traces.start_span()` but `.start()` is never called before `.finish()`. The `StreamedSpan.__enter__()` method (called by `.start()`) sets `_context_manager_state` which is required by `__exit__()` (called by `.finish()`). Without calling `.start()`, the span will fail silently inside `capture_internal_exceptions()` when `.finish()` is called, causing spans to be dropped and scope not restored.
[JPL-RPK] StreamedSpan never started in on_operation - spans will be silently dropped (additional location):
sentry_sdk/integrations/strawberry.py#L239
Same pattern as the primary location above: `.start()` is never called before `.finish()`, so `_context_manager_state` is missing and the span fails silently inside `capture_internal_exceptions()`.

[JPL-RPK] StreamedSpan never started in on_operation - spans will be silently dropped (additional location):
sentry_sdk/integrations/strawberry.py#L261
Same pattern as the primary location above: `.start()` is never called before `.finish()`, so `_context_manager_state` is missing and the span fails silently inside `capture_internal_exceptions()`.

Flags (3.10, ubuntu-22.04)
❌ Patch coverage check failed: 11.98% < target 80%
Flags (3.9, ubuntu-22.04)
❌ Patch coverage check failed: 11.90% < target 80%
Flags (3.9, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.9, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.7, ubuntu-22.04)
❌ Patch coverage check failed: 11.90% < target 80%
Flags (3.7, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.7, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.8, ubuntu-22.04)
❌ Patch coverage check failed: 11.90% < target 80%
Flags (3.8, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.8, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.12, ubuntu-22.04)
❌ Patch coverage check failed: 11.98% < target 80%
Flags (3.12, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.12, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14t, ubuntu-22.04)
❌ Patch coverage check failed: 11.98% < target 80%
Flags (3.14t, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14t, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14, ubuntu-22.04)
❌ Patch coverage check failed: 11.98% < target 80%
Flags (3.14, ubuntu-22.04)
Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14, ubuntu-22.04)
Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Size estimation serializes span twice during flush:
sentry_sdk/_span_batcher.py#L78
The `_estimate_size` method calls `_to_transport_format` to convert the span to a dict, then uses `str()` for size estimation. When the span is later flushed, `_to_transport_format` is called again. For high-volume spans, this doubles the serialization work. Additionally, `str(span_dict)` produces Python repr format rather than actual JSON, which gives an inaccurate size estimate.
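A small sketch of the double-serialization pattern and one possible fix that serializes once and reuses the JSON string for sizing. All names here are illustrative stand-ins, not the actual `_span_batcher.py` code:

```python
import json

serializations = 0

def to_transport_format(span):
    """Stand-in for _to_transport_format; counts how often it runs."""
    global serializations
    serializations += 1
    return {"op": span["op"], "description": span["description"]}

def estimate_then_flush_twice(span):
    """Pattern described above: one serialization for sizing, another on flush."""
    size = len(str(to_transport_format(span)))      # repr, not JSON: inaccurate
    payload = json.dumps(to_transport_format(span))  # second serialization
    return size, payload

def estimate_then_flush_once(span):
    """Possible fix: serialize once, reuse the JSON string for sizing and flush."""
    payload = json.dumps(to_transport_format(span))
    return len(payload.encode("utf-8")), payload     # bytes on the wire

span = {"op": "db.redis", "description": "GET key"}

estimate_then_flush_twice(span)
double = serializations
serializations = 0
estimate_then_flush_once(span)
single = serializations
```

Note that `str()` of a dict yields single-quoted Python repr, so its length can diverge from the JSON payload length, especially with non-ASCII strings.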
StreamedSpan always sets ERROR status, ignoring the actual status parameter:
sentry_sdk/integrations/celery/__init__.py#L104
When the span is a `StreamedSpan`, `_set_status()` always sets `SpanStatus.ERROR`, regardless of the `status` parameter passed. This causes incorrect behavior for Celery control flow exceptions (such as `Retry`, `Reject`, and `Ignore`), which call `_set_status("aborted")`: these represent controlled flow, not actual errors, yet they will be marked as ERROR in streaming mode. The old span implementation properly distinguished between the different status strings.
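A minimal sketch of the hardcoded-status behavior and one possible fix that maps the incoming status string. All names are stand-ins, assuming a `SpanStatus` enum keyed by status strings:

```python
from enum import Enum

class SpanStatus(Enum):
    OK = "ok"
    ERROR = "error"
    ABORTED = "aborted"

class SpanStub:
    """Illustrative stand-in for a StreamedSpan (not the real class)."""
    def __init__(self):
        self.status = None
    def set_status(self, status):
        self.status = status

def set_status_buggy(span, status):
    # Behavior described above: the argument is ignored, ERROR is hardcoded.
    span.set_status(SpanStatus.ERROR)

def set_status_fixed(span, status):
    # One possible fix: map the incoming status string instead of hardcoding.
    span.set_status(SpanStatus(status))

span = SpanStub()
set_status_buggy(span, "aborted")    # e.g. the Celery Retry/Ignore/Reject path
buggy_result = span.status
set_status_fixed(span, "aborted")
fixed_result = span.status
```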
[LW6-9QA] StreamedSpan always sets ERROR status, ignoring the actual status parameter (additional location):
sentry_sdk/integrations/celery/__init__.py#L330
Same pattern as the primary location above: `_set_status()` hardcodes `SpanStatus.ERROR` for a `StreamedSpan`, regardless of the `status` argument.

dynamic_sampling_context() not overridden in NoOpStreamedSpan causes AttributeError:
sentry_sdk/traces.py#L528
The `dynamic_sampling_context()` method at line 528 is inherited by `NoOpStreamedSpan` but not overridden. Since `NoOpStreamedSpan.__init__` sets `self.segment = None`, calling `dynamic_sampling_context()` on a `NoOpStreamedSpan` instance will raise `AttributeError: 'NoneType' object has no attribute 'get_baggage'`. This could cause runtime crashes in code paths that use the NoOp span for unsampled traces.
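A runnable sketch of the inheritance problem and a safe override, with stand-in classes rather than the real `traces.py` types:

```python
class Segment:
    """Stand-in segment that carries the trace baggage (illustrative)."""
    def get_baggage(self):
        return {"trace_id": "abc123"}

class StreamedSpanStub:
    def __init__(self, segment):
        self.segment = segment

    def dynamic_sampling_context(self):
        # Assumes self.segment is always set; true for real spans only.
        return self.segment.get_baggage()

class NoOpStreamedSpanStub(StreamedSpanStub):
    def __init__(self):
        self.segment = None  # inherited method now dereferences None

class NoOpStreamedSpanFixed(NoOpStreamedSpanStub):
    def dynamic_sampling_context(self):
        return {}  # one possible fix: override with a safe empty context

crashed = False
try:
    NoOpStreamedSpanStub().dynamic_sampling_context()
except AttributeError:
    crashed = True

safe = NoOpStreamedSpanFixed().dynamic_sampling_context()
```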
[EUR-G3Z] dynamic_sampling_context() not overridden in NoOpStreamedSpan causes AttributeError (additional location):
sentry_sdk/traces.py#L774
Same issue as the primary location above: the inherited `dynamic_sampling_context()` dereferences `self.segment`, which `NoOpStreamedSpan.__init__` sets to `None`.

Control flow exceptions incorrectly marked as errors in streaming mode:
sentry_sdk/integrations/celery/__init__.py#L104
The `_set_status` function now always sets `SpanStatus.ERROR` for `StreamedSpan`, ignoring the status parameter. This means when `_set_status("aborted")` is called for Celery control flow exceptions (Retry, Ignore, Reject), the span is incorrectly marked as an error. These are normal Celery operations, not errors. This will cause misleading error reporting in span streaming mode.
[LG2-4N8] Control flow exceptions incorrectly marked as errors in streaming mode (additional location):
sentry_sdk/integrations/redis/_sync_common.py#L148
Same pattern as the primary location above: the status argument is ignored and `SpanStatus.ERROR` is always set for a `StreamedSpan`.

Transaction source is not set when NoOpStreamedSpan is active:
sentry_sdk/scope.py#L829
In `set_transaction_name()`, when `self._span` is a `NoOpStreamedSpan`, the function returns early on line 829 before reaching lines 841-842, which set `self._transaction_info["source"]`. This means that if `set_transaction_name(name, source)` is called with a source while a `NoOpStreamedSpan` is active, the source is silently dropped, so incorrect transaction source information (`_transaction_info`) can be applied to events sent to Sentry.
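A sketch of the early-return behavior, using stand-in classes; the fixed variant records the source before returning:

```python
class NoOpStreamedSpanStub:
    """Illustrative stand-in for the NoOp span (not the real class)."""

class ScopeStub:
    def __init__(self, span):
        self._span = span
        self._transaction_info = {}

    def set_transaction_name_buggy(self, name, source=None):
        # Early-return pattern described above: the source is never recorded.
        if isinstance(self._span, NoOpStreamedSpanStub):
            return
        if source:
            self._transaction_info["source"] = source

    def set_transaction_name_fixed(self, name, source=None):
        # One possible fix: record the source before the early return.
        if source:
            self._transaction_info["source"] = source
        if isinstance(self._span, NoOpStreamedSpanStub):
            return

scope = ScopeStub(NoOpStreamedSpanStub())
scope.set_transaction_name_buggy("GET /users", source="route")
dropped = "source" not in scope._transaction_info

scope.set_transaction_name_fixed("GET /users", source="route")
recorded = scope._transaction_info.get("source")
```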
NoOpStreamedSpan.finish() fails to restore scope's active span causing span leak:
sentry_sdk/traces.py#L731
In `NoOpStreamedSpan`, the `finish()` method is overridden with `pass` (line 732), unlike the parent `StreamedSpan.finish()` which calls `self.end()`. When a user calls `span.start()` then `span.finish()`, the scope's span is never restored to its previous value because `finish()` doesn't call `__exit__()`. This causes the NoOp span to remain as the active span on the scope, breaking span hierarchy for subsequent spans. The `end()` method (line 729) correctly calls `__exit__` which restores the old span.
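A sketch of the leak: `finish()` never restores the scope's previous active span, while `end()` does. These are stand-in classes, not the real sentry_sdk API:

```python
class ScopeStub:
    """Holds the currently active span (illustrative, not the real Scope)."""
    def __init__(self):
        self.span = None

scope = ScopeStub()

class NoOpSpanStub:
    def start(self):
        self._old_span = scope.span
        scope.span = self            # becomes the active span
        return self

    def end(self):
        scope.span = self._old_span  # restores the previous active span

    def finish(self):
        pass  # behavior described above: never calls end(), so the span leaks

span = NoOpSpanStub().start()
span.finish()
leaked = scope.span is span          # still active: scope was never restored
span.end()
restored = scope.span is None        # end() restores the previous span
```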
Artifacts
Produced during runtime
| Name | Size | Digest |
|---|---|---|
| codecov-coverage-results-feat-span-first-2-test-flags | 110 KB | sha256:c6688679437020d0d14108f0c63c7a41f98bd3c89aeef1d114a8ec86b754a991 |
| codecov-test-results-feat-span-first-2-test-flags | 230 Bytes | sha256:91df87cece228bf8b467a2eed2e37a23980ceb5f049f3a9cdd786206b3e0da97 |