
fix(mistral): remove accidental tuple wrapping of CompletionUsage in metadata#13676

Open
frankgoldfish wants to merge 1 commit into microsoft:main from frankgoldfish:fix/mistral-usage-tuple-metadata

Conversation

@frankgoldfish

Summary

In MistralAIChatCompletion._get_metadata_from_response, the CompletionUsage object is accidentally wrapped in a single-element tuple due to a trailing comma after the closing parenthesis of the constructor call.

Bug

# Before (buggy): metadata["usage"] is a tuple, not a CompletionUsage instance
metadata["usage"] = (
    CompletionUsage(
        prompt_tokens=response.usage.prompt_tokens,
        completion_tokens=response.usage.completion_tokens,
    ),   # <-- trailing comma creates a tuple
)

Any code that reads metadata["usage"].prompt_tokens or metadata["usage"].completion_tokens will get an AttributeError at runtime because tuples don't have those attributes.
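The pitfall is easy to reproduce in isolation. A minimal sketch using a stand-in dataclass (the real CompletionUsage is the connector's response-model class, not defined here):

```python
from dataclasses import dataclass


@dataclass
class CompletionUsage:  # stand-in for the real response-model class
    prompt_tokens: int
    completion_tokens: int


metadata = {}
metadata["usage"] = (
    CompletionUsage(
        prompt_tokens=10,
        completion_tokens=5,
    ),  # trailing comma after the call makes the assigned value a 1-tuple
)

# The parentheses alone would be harmless grouping; it is the comma that
# turns the expression into a tuple literal.
print(type(metadata["usage"]).__name__)  # tuple, not CompletionUsage
```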

Fix

# After (fixed): metadata["usage"] is a CompletionUsage instance
metadata["usage"] = CompletionUsage(
    prompt_tokens=response.usage.prompt_tokens,
    completion_tokens=response.usage.completion_tokens,
)

Files Changed

  • python/semantic_kernel/connectors/ai/mistral_ai/services/mistral_ai_chat_completion.py

Test plan

  • Verify that metadata["usage"] is a CompletionUsage instance (not a tuple) after a MistralAI chat completion response
  • Confirm metadata["usage"].prompt_tokens and metadata["usage"].completion_tokens are accessible without error
  • Run existing MistralAI unit tests
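A regression test along the lines of the first two bullets could look like the following sketch. It re-creates the fixed logic with stand-ins rather than importing the real connector (the actual method is a private helper on MistralAIChatCompletion and takes a Mistral SDK response object, so names here are illustrative):

```python
from types import SimpleNamespace


class CompletionUsage:  # stand-in for the real response-model class
    def __init__(self, prompt_tokens: int, completion_tokens: int):
        self.prompt_tokens = prompt_tokens
        self.completion_tokens = completion_tokens


def get_metadata_from_response(response):
    # Mirrors the fixed logic from the PR: assign the instance directly,
    # with no trailing comma after the constructor call.
    metadata = {}
    if hasattr(response, "usage") and response.usage is not None:
        metadata["usage"] = CompletionUsage(
            prompt_tokens=response.usage.prompt_tokens,
            completion_tokens=response.usage.completion_tokens,
        )
    return metadata


def test_usage_is_completion_usage_not_tuple():
    response = SimpleNamespace(
        usage=SimpleNamespace(prompt_tokens=7, completion_tokens=3)
    )
    metadata = get_metadata_from_response(response)
    assert not isinstance(metadata["usage"], tuple)
    assert isinstance(metadata["usage"], CompletionUsage)
    assert metadata["usage"].prompt_tokens == 7
    assert metadata["usage"].completion_tokens == 3


test_usage_is_completion_usage_not_tuple()
```

A real test in the repository would instead mock the Mistral client and assert on the metadata of the returned ChatMessageContent, so the assertion survives refactors of the private helper.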

🤖 Generated with Claude Code

… metadata

In `_get_metadata_from_response`, the `CompletionUsage` object was wrapped in
a single-element tuple due to a trailing comma after the closing parenthesis:

    metadata["usage"] = (
        CompletionUsage(...),   # <-- trailing comma made this a tuple
    )

This caused `metadata["usage"]` to be a `tuple` instead of a `CompletionUsage`
instance, breaking any code that expected to access token counts via
`metadata["usage"].prompt_tokens` or `metadata["usage"].completion_tokens`.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@frankgoldfish frankgoldfish requested a review from a team as a code owner March 18, 2026 01:12
Collaborator

@moonbox3 moonbox3 left a comment


Any unit test that needs to be added/updated for this?

Contributor

Copilot AI left a comment


Pull request overview

Fixes token usage metadata in the Mistral chat completion connector so metadata["usage"] is a CompletionUsage instance (not an accidental single-element tuple), preventing a runtime AttributeError when accessing token counts.

Changes:

  • Remove the trailing comma that wrapped CompletionUsage(...) in a single-element tuple.
  • Ensure metadata["usage"] is assigned directly to a CompletionUsage instance when response.usage is present.


Comment on lines 253 to 257
    if hasattr(response, "usage") and response.usage is not None:
-       metadata["usage"] = (
-           CompletionUsage(
-               prompt_tokens=response.usage.prompt_tokens,
-               completion_tokens=response.usage.completion_tokens,
-           ),
+       metadata["usage"] = CompletionUsage(
+           prompt_tokens=response.usage.prompt_tokens,
+           completion_tokens=response.usage.completion_tokens,
+       )

Copilot AI Mar 20, 2026


This bug fix changes the shape of metadata["usage"] (tuple -> CompletionUsage), but there is no regression test asserting the type/attributes of metadata["usage"] on returned ChatMessageContent or streaming chunks. Add a unit test that verifies response.metadata["usage"] is a CompletionUsage instance and that prompt_tokens/completion_tokens are accessible, to prevent the tuple-wrapping bug from reappearing.

