
Conversation

@stanleychu2

This pull request fixes the tracing usage payloads for Chat Completions and LiteLLM models so that generation spans include `requests`, `total_tokens`, and the token detail fields (`input_tokens_details`, `output_tokens_details`), aligning trace data with the `Usage` model and improving cost-tracking accuracy. The change updates both the non-streaming and streaming paths in `openai_chatcompletions.py` and `litellm_model.py`.
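
For illustration, here is a minimal sketch of how a generation span's usage payload could be flattened from a `Usage` object so the trace carries the full token breakdown. The stand-in dataclasses and the helper name `usage_to_span_payload` are assumptions for this example, not the PR's literal code; the real types live in the agents package and may differ in shape.

```python
from dataclasses import asdict, dataclass, field


# Minimal stand-ins for the SDK's Usage types, for illustration only.
@dataclass
class InputTokensDetails:
    cached_tokens: int = 0


@dataclass
class OutputTokensDetails:
    reasoning_tokens: int = 0


@dataclass
class Usage:
    requests: int = 0
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0
    input_tokens_details: InputTokensDetails = field(default_factory=InputTokensDetails)
    output_tokens_details: OutputTokensDetails = field(default_factory=OutputTokensDetails)


def usage_to_span_payload(usage: Usage) -> dict:
    """Flatten Usage into the dict attached to a generation span.

    Includes requests, total_tokens, and both token-detail breakdowns,
    not just the input/output counts.
    """
    return {
        "requests": usage.requests,
        "input_tokens": usage.input_tokens,
        "output_tokens": usage.output_tokens,
        "total_tokens": usage.total_tokens,
        "input_tokens_details": asdict(usage.input_tokens_details),
        "output_tokens_details": asdict(usage.output_tokens_details),
    }


if __name__ == "__main__":
    u = Usage(
        requests=1,
        input_tokens=120,
        output_tokens=80,
        total_tokens=200,
        input_tokens_details=InputTokensDetails(cached_tokens=64),
        output_tokens_details=OutputTokensDetails(reasoning_tokens=32),
    )
    print(usage_to_span_payload(u))
```

In both the non-streaming and streaming paths, a payload shaped like this would be recorded on the span once the final usage is known, which is why the fix touches both code paths.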


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6198542f3a

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@seratch added this to the 0.8.x milestone on Feb 2, 2026
@seratch marked this pull request as draft on February 2, 2026 at 01:34
@stanleychu2 marked this pull request as ready for review on February 2, 2026 at 04:52
@stanleychu2 requested a review from seratch on February 2, 2026 at 04:55
