Conversation
Pull Request Overview
This pull request fixes OpenAI token extraction issues by adjusting the response handling logic. Key changes include adding support for Pydantic models via model_dump in both the embeddings and chat wrappers, and adding explicit checks so that tool_calls and function_call values are only handled when present.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| agentops/instrumentation/openai/wrappers/embeddings.py | Adds fallback conditions to handle Pydantic models and dict responses |
| agentops/instrumentation/openai/wrappers/chat.py | Introduces explicit checks for tool_calls and function_call, ensuring proper extraction and logging |
Comments suppressed due to low confidence (3)
agentops/instrumentation/openai/wrappers/embeddings.py:65
- The new branch for handling Pydantic models using model_dump is correctly placed before the dict check, which is good. Consider adding unit tests to confirm that the precedence between model_dump and dict conversions behaves as intended.
elif hasattr(return_value, "model_dump"):
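A minimal sketch of the conversion precedence under review, assuming a return_value that may be a Pydantic model or a plain dict; the helper and test names below are illustrative, not the wrapper's actual code:

```python
def _to_dict(return_value):
    """Illustrative fallback chain: prefer Pydantic's model_dump, then dict."""
    if hasattr(return_value, "model_dump"):
        # Pydantic v2 models (e.g. OpenAI SDK response objects) expose model_dump()
        return return_value.model_dump()
    if isinstance(return_value, dict):
        return dict(return_value)
    # Unknown type: leave unconverted for the caller to handle
    return return_value


class _FakePydanticResponse:
    """Stand-in for a Pydantic response object in a precedence test."""

    def model_dump(self):
        return {"usage": {"prompt_tokens": 3}}


def test_model_dump_takes_precedence():
    assert _to_dict(_FakePydanticResponse())["usage"]["prompt_tokens"] == 3


def test_plain_dict_passes_through():
    assert _to_dict({"usage": {"prompt_tokens": 5}})["usage"]["prompt_tokens"] == 5
```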
agentops/instrumentation/openai/wrappers/chat.py:86
- The explicit check for tool_calls helps avoid errors when the value is None; however, note that an empty list also evaluates to false, so verify that case is handled appropriately.
if tool_calls: # Check if tool_calls is not None
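For context on that note, here is a small illustration (not code from the PR) of how the truthiness check treats the possible values, and the explicit alternative if empty lists should still be recorded:

```python
# Truthiness treats None and an empty list the same way.
for tool_calls in (None, [], [{"id": "call_1", "type": "function"}]):
    if tool_calls:  # skipped for both None and []
        print(f"extracting {len(tool_calls)} tool call(s)")
    else:
        print(f"nothing to extract for {tool_calls!r}")

# If an empty list should still produce an attribute, an explicit
# None check keeps the two cases apart:
#   if tool_calls is not None:
#       ...
```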
agentops/instrumentation/openai/wrappers/chat.py:186
- The guard for function_call adds a useful safeguard; ensure that an empty dictionary is handled as intended in downstream processing.
if function_call: # Check if function_call is not None
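To make the "downstream processing" concern concrete, here is a hypothetical downstream helper (not part of this PR) showing why the guard matters: an empty dict would be tolerated, but None would raise:

```python
import json


def _record_function_call(function_call):
    """Hypothetical downstream step that parses the function-call arguments."""
    name = function_call.get("name", "")
    arguments = json.loads(function_call.get("arguments") or "{}")
    return name, arguments


# With the guard in place, neither None nor {} reaches this helper.
# Without it, {} would still work (returning ("", {})), but None would
# raise AttributeError on .get().
```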
dot-agi approved these changes on Jun 16, 2025.
📥 Pull Request
Issue: Token usage wasn't being extracted from ChatCompletion and Responses API responses.
The response object wasn't being properly converted to a dictionary due to an incorrect condition check. This PR adds proper handling for Pydantic models and improves the fallback logic.
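A hedged sketch of what the improved fallback logic amounts to for token usage; the helper name and the field fallbacks are assumptions for illustration, not the wrapper's exact code:

```python
def _extract_usage(response):
    """Illustrative usage extraction for ChatCompletion / Responses API objects."""
    if hasattr(response, "model_dump"):
        data = response.model_dump()  # Pydantic model (OpenAI SDK v1+)
    elif isinstance(response, dict):
        data = response  # already a plain dict
    else:
        return {}

    usage = data.get("usage") or {}
    return {
        # Chat Completions reports prompt_tokens/completion_tokens;
        # the Responses API reports input_tokens/output_tokens.
        "prompt_tokens": usage.get("prompt_tokens", usage.get("input_tokens")),
        "completion_tokens": usage.get("completion_tokens", usage.get("output_tokens")),
        "total_tokens": usage.get("total_tokens"),
    }
```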