fix(openai): Attach response model with streamed Completions API #5557
base: webb/openai/add-response-model
`sentry_sdk/integrations/openai.py` (sync streaming iterator):

```diff
@@ -613,6 +613,8 @@
         nonlocal ttft
         count_tokens_manually = True
         for x in old_iterator:
+            span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)
+
```
Check warning on line 616 in `sentry_sdk/integrations/openai.py`
Warden (Contributor):

Unguarded streamed model access can raise — Medium Severity

Additional Locations (1)

Author (Contributor):

Spec says it's required: https://developers.openai.com/api/reference/resources/chat/subresources/completions/streaming-events
```diff
             with capture_internal_exceptions():
                 if hasattr(x, "choices"):
                     choice_index = 0
```
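The guard the warning asks for can be sketched as follows. Note this is a simplified illustration, not Sentry's actual code: `Span` and the `GEN_AI_RESPONSE_MODEL` key are local stand-ins for the SDK's span API and `SPANDATA` constant.

```python
class Span:
    """Minimal stand-in for a Sentry span: records key/value data."""

    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value


# Assumed key name, mirroring SPANDATA.GEN_AI_RESPONSE_MODEL.
GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"


def record_chunk_model(span, chunk):
    """Attach the chunk's model to the span, tolerating chunks that
    lack a `model` attribute (no AttributeError escapes to the user)."""
    model = getattr(chunk, "model", None)
    if model is not None:
        span.set_data(GEN_AI_RESPONSE_MODEL, model)
```

With this shape, a malformed chunk simply skips the `set_data` call instead of breaking the user's streaming loop.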
`sentry_sdk/integrations/openai.py` (async streaming iterator):

```diff
@@ -657,6 +659,8 @@
         nonlocal ttft
         count_tokens_manually = True
         async for x in old_iterator:
+            span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)
+
```
Warden (Contributor):

Accessing `x.model` without a defensive check could raise `AttributeError` and break streaming. The new code accesses `x.model` outside the `capture_internal_exceptions()` block.

Verification: read the full function context at lines 658-686. Verified that the synchronous version at line 616 has the same pattern (also potentially problematic).

Suggested fix: wrap the model attribute access inside the existing `capture_internal_exceptions()` block, or add a `hasattr` check.

Also found at 1 additional location

Identified by Warden
Warden (Contributor):

Fix attempt detected (commit f12cf5d). The commit added unprotected `x.model` access outside the `capture_internal_exceptions()` block at both lines 616 and 662, directly introducing the exact issue reported: an `AttributeError` could propagate if a streaming chunk lacks the `model` attribute. The original issue appears unresolved. Please review and try again.

Evaluated by Warden
Author (Contributor):

Spec says it's required: https://developers.openai.com/api/reference/resources/chat/subresources/completions/streaming-events
```diff
             with capture_internal_exceptions():
                 if hasattr(x, "choices"):
                     choice_index = 0
```
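The reviewer's preferred alternative is to wrap the access in `capture_internal_exceptions()`. Its effect can be illustrated with a simplified stand-in for that helper; the real one lives in `sentry_sdk.utils` and reports the error through Sentry's internal logging rather than merely suppressing it.

```python
import contextlib
import logging

logger = logging.getLogger(__name__)


@contextlib.contextmanager
def capture_internal_exceptions():
    """Simplified stand-in: swallow and log any exception, so SDK
    instrumentation errors never propagate into user code."""
    try:
        yield
    except Exception:
        logger.warning("suppressed internal exception", exc_info=True)


def record_chunk_model(span_data, chunk):
    with capture_internal_exceptions():
        # If `chunk` has no `model`, the AttributeError is swallowed
        # by the context manager instead of breaking the caller's
        # streaming loop.
        span_data["gen_ai.response.model"] = chunk.model
```

This is the pattern the file already uses for the other model accesses the review cites, which is why moving the new line inside the existing block is the smaller change.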
Warden (Contributor):

Unprotected attribute access on streaming chunk may cause runtime error

The `x.model` access on line 616 is placed outside the `capture_internal_exceptions()` block, unlike other model accesses in this file (e.g., lines 761 and 810, which are inside the block, or line 472, which uses a `hasattr` guard). If a streaming chunk lacks the `model` attribute or it's malformed, an `AttributeError` will propagate to the user's application instead of being silently logged by Sentry's internal error handler.

Verification: read openai.py lines 590-670 and 740-815. Verified the pattern: line 472 uses an `if hasattr(response, 'model')` guard; lines 761 and 810 place model access inside `capture_internal_exceptions()`. The new code at line 616 has neither protection and could raise `AttributeError` to user code.

Suggested fix: move the `set_data` call inside the existing `capture_internal_exceptions()` block.

Identified by Warden
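For context on the author's counterpoint: the OpenAI streaming spec marks `model` as required on every chunk, so in the expected case each event carries it. A simulated stream illustrates the shape; `ChatCompletionChunk` here is a local stand-in, not the OpenAI SDK type, and no live API call is made.

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class ChatCompletionChunk:
    """Local stand-in for a streaming event; per the spec, every
    real chunk carries a `model` field."""
    model: str
    content: str


def fake_stream() -> Iterator[ChatCompletionChunk]:
    """Yield a short simulated stream of completion chunks."""
    for piece in ("Hel", "lo"):
        yield ChatCompletionChunk(model="gpt-4o-mini", content=piece)


def consume(chunks):
    """Collect the streamed text and the model the chunks advertise."""
    model, parts = None, []
    for x in chunks:
        model = x.model  # required on every event per the spec
        parts.append(x.content)
    return model, "".join(parts)
```

The spec guarantee covers well-formed responses from OpenAI itself; the review's concern is about chunks outside that happy path, which the guarantee does not rule out.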
code-review · 26J-GUA

Author (Contributor):

Spec says it's required: https://developers.openai.com/api/reference/resources/chat/subresources/completions/streaming-events