fix: preserve Anthropic thinking blocks and signatures in LiteLLM round-trip#4811

Open
giulio-leone wants to merge 1 commit into google:main from giulio-leone:fix/litellm-anthropic-thinking-roundtrip
Conversation

@giulio-leone

Summary

Fixes #4801 — Adaptive thinking is broken when using Claude models through LiteLLM.

Root Cause

When Claude produces extended thinking with thinking_blocks (each containing a type, thinking text, and signature), the round-trip through ADK's LiteLLM integration silently loses them:

  1. _extract_reasoning_value() only read reasoning_content (a flattened string without signatures), ignoring the richer thinking_blocks field
  2. _content_to_message_param() set reasoning_content on the outgoing ChatCompletionAssistantMessage, but LiteLLM's anthropic_messages_pt() prompt template silently drops the reasoning_content field entirely
  3. Result: thinking blocks vanish from conversation history after turn 1; Claude stops producing them
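The mismatch between the two fields can be sketched as follows. The field names (`reasoning_content`, `thinking_blocks`, `type`, `thinking`, `signature`) come from the description above; the concrete values are illustrative placeholders, not real model output:

```python
# Illustrative sketch of the two shapes described above.

# Flattened form: a single string; per-block signatures are already gone.
reasoning_content = "Let me reason about the user's request..."

# Richer form carried in thinking_blocks: one dict per block, including
# the model-signed signature that must survive the round-trip.
thinking_blocks = [
    {
        "type": "thinking",
        "thinking": "Let me reason about the user's request...",
        "signature": "sig-placeholder-123",
    }
]

def signatures_of(blocks):
    """Collect the signatures that flattening into a string would drop."""
    return [b["signature"] for b in blocks if b.get("type") == "thinking"]
```

Reading only `reasoning_content` keeps the text but discards every signature, which is exactly the loss described in step 1.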

Fix

Three coordinated changes in lite_llm.py:

| Change | What it does |
| --- | --- |
| `_is_anthropic_provider()` helper | Detects `anthropic`, `bedrock`, and `vertex_ai` providers |
| `_extract_reasoning_value()` | Now prefers `thinking_blocks` (with per-block signatures) over `reasoning_content` |
| `_convert_reasoning_value_to_parts()` | Handles `ChatCompletionThinkingBlock` dicts, preserving `thought_signature` |
| `_content_to_message_param()` | For Anthropic providers, embeds thinking blocks directly in the message content list as `{"type": "thinking", ...}` dicts, a format that passes through LiteLLM's `anthropic_messages_pt()` correctly |

For non-Anthropic providers (OpenAI, etc.), behavior is unchanged — reasoning_content is still used.

Verification

  • LiteLLM's anthropic_messages_pt() was tested to confirm:
  • `reasoning_content` field → DROPPED (existing LiteLLM bug)
  • content as a list with `{"type": "thinking", ...}` → PRESERVED ✅
  • Signatures in thinking blocks → PRESERVED when in the content list ✅

Tests

Added 7 targeted tests covering:

  • _is_anthropic_provider() — provider detection
  • _extract_reasoning_value() — prefers thinking_blocks over reasoning_content
  • _convert_reasoning_value_to_parts() — signature preservation from block dicts
  • _convert_reasoning_value_to_parts() — plain string fallback (no signature)
  • _content_to_message_param() — Anthropic: thinking blocks embedded in content list
  • _content_to_message_param() — OpenAI: reasoning_content field used (unchanged)
  • _content_to_message_param() — Anthropic thinking + tool calls combined

Full test suite: 4732 passed, 0 failures
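As a sketch, the second test in the list might look like this. The function body and names are hypothetical stand-ins, not the ADK suite's actual code:

```python
def extract_reasoning_value(message: dict):
    """Hypothetical stand-in for _extract_reasoning_value(): prefer the
    signature-bearing thinking_blocks over flattened reasoning_content."""
    return message.get("thinking_blocks") or message.get("reasoning_content")

def test_prefers_thinking_blocks_over_reasoning_content():
    msg = {
        "reasoning_content": "flattened text, no signatures",
        "thinking_blocks": [
            {"type": "thinking", "thinking": "t", "signature": "s"}
        ],
    }
    assert extract_reasoning_value(msg) == msg["thinking_blocks"]

def test_falls_back_to_reasoning_content():
    # When no thinking_blocks are present, the old behavior is kept.
    assert extract_reasoning_value({"reasoning_content": "only text"}) == "only text"
```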

@google-cla

google-cla bot commented Mar 13, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@adk-bot adk-bot added the models [Component] Issues related to model support label Mar 13, 2026
@adk-bot
Collaborator

adk-bot commented Mar 13, 2026

Response from ADK Triaging Agent

Hello @giulio-leone, thank you for your contribution!

Before we can merge this PR, we need you to sign our Contributor License Agreement (CLA). You can find more information and sign the CLA at https://cla.developers.google.com/.

Thanks!

@rohityan rohityan self-assigned this Mar 13, 2026
@rohityan
Collaborator

Hi @giulio-leone, thank you for your contribution! It appears you haven't yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to complete the signing process. Once the CLA is signed, we'll be able to proceed with the review of your PR. Thank you!

@rohityan rohityan added the request clarification [Status] The maintainer need clarification or more information from the author label Mar 13, 2026
@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from 9b92452 to 4b10202 on March 14, 2026 at 20:08
@giulio-leone
Author

@rohityan Thanks for the heads-up! I'll get the Google CLA signed. Will follow up once it's done.

@giulio-leone
Author

I have signed the Google CLA. Could you please re-run the CLA check? Thank you!

@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from 4b10202 to b3c35fc on March 15, 2026 at 15:56
…nd-trip

When using Claude models through LiteLLM, extended thinking blocks
(with signatures) were lost after the first turn because:

1. _extract_reasoning_value() only read reasoning_content (flattened
   string without signatures), ignoring thinking_blocks
2. _content_to_message_param() set reasoning_content on the outgoing
   message, which LiteLLM's anthropic_messages_pt() template silently
   drops

This fix:
- Adds _is_anthropic_provider() helper to detect anthropic/bedrock/
  vertex_ai providers
- Updates _extract_reasoning_value() to prefer thinking_blocks (with
  per-block signatures) over reasoning_content
- Updates _convert_reasoning_value_to_parts() to handle
  ChatCompletionThinkingBlock dicts, preserving thought_signature
- Updates _content_to_message_param() to embed thinking blocks
  directly in the message content list for Anthropic providers,
  bypassing the broken reasoning_content path

Fixes google#4801
@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from b3c35fc to 59e6e04 on March 15, 2026 at 16:03
@giulio-leone
Author

Hi @rohityan — the CLA is now signed and passing ✅ (it was a Co-authored-by trailer issue that has been resolved). Ready for review!
