Conversation

@daniel-lxs daniel-lxs commented Jan 15, 2026

Problem

When users switch mid-task from Claude (or other models) to Gemini 3/2.5 via LiteLLM, they encounter "Corrupted thought signature" errors. This happens because Gemini 3 validates thought signatures for tool/function calling steps, and conversation history from other models lacks these signatures.

Solution

Implemented thought signature injection using provider_specific_fields.thought_signature on tool calls, following LiteLLM's official documentation (a sketch follows the list below):

  • Thought signatures are stored in provider_specific_fields.thought_signature on each tool call
  • The dummy signature is base64("skip_thought_signature_validator"), which bypasses Gemini's thought signature validation
  • For parallel function calls, only the first tool call needs the signature
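
A minimal sketch of this injection, assuming an OpenAI-style tool call shape; the constant, interface, and function names here are illustrative and not the actual lite-llm.ts implementation:

```typescript
// base64("skip_thought_signature_validator"), the dummy value LiteLLM documents
// for bypassing Gemini's thought signature validation (requires Node's Buffer).
const SKIP_THOUGHT_SIGNATURE = Buffer.from("skip_thought_signature_validator").toString("base64")

interface ToolCall {
    id: string
    type: "function"
    function: { name: string; arguments: string }
    provider_specific_fields?: { thought_signature?: string }
}

function injectThoughtSignature(toolCalls: ToolCall[]): ToolCall[] {
    const [first, ...rest] = toolCalls
    if (!first) return toolCalls
    // Respect an existing signature; only the first call in a parallel group needs one.
    if (first.provider_specific_fields?.thought_signature) return toolCalls
    return [
        {
            ...first,
            provider_specific_fields: {
                // Preserve any other provider-specific fields already on the call.
                ...first.provider_specific_fields,
                thought_signature: SKIP_THOUGHT_SIGNATURE,
            },
        },
        ...rest,
    ]
}
```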

Changes

  • Added isGeminiModel() method to detect Gemini 3.x and 2.5.x models, including provider-prefixed variants (see the detection sketch after this list)
  • Added injectThoughtSignatureForGemini() method to inject dummy signatures
  • Updated createMessage() to apply injection when targeting Gemini with native tool protocol
  • Added 8 new tests covering model detection, injection behavior, and integration
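
For reference, one possible shape for the detection, assuming provider prefixes are slash-delimited; the actual isGeminiModel() in lite-llm.ts may match differently:

```typescript
// Illustrative only: strips any provider prefix (e.g. "vertex_ai/" or
// "openrouter/google/") before matching Gemini 3.x and 2.5.x model IDs.
function isGeminiModel(modelId: string): boolean {
    const bareId = modelId.split("/").pop() ?? modelId
    return /^gemini-(3(\.\d+)?|2\.5)/i.test(bareId)
}

// isGeminiModel("gemini-2.5-pro")            // true
// isGeminiModel("vertex_ai/gemini-3-pro")    // true (assumed ID format)
// isGeminiModel("claude-sonnet-4")           // false
```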

Test Results

All 5246 tests pass.

Fixes: COM-489


Important

Injects thought signatures for Gemini models in LiteLLM to prevent errors when switching models, with new detection and injection methods and comprehensive tests.

  • Behavior:
    • Adds isGeminiModel() to detect Gemini 3.x and 2.5.x models in lite-llm.ts.
    • Adds injectThoughtSignatureForGemini() to inject dummy thought signatures for Gemini models.
    • Updates createMessage() to apply thought signature injection for Gemini models with native tool protocol.
  • Tests:
    • Adds tests in lite-llm.spec.ts for Gemini model detection and thought signature injection.
    • Tests cover model detection, injection behavior, and integration with createMessage().
  • Misc:
    • Updates createMessage() to handle tool protocol resolution and Gemini-specific processing.

This description was created by Ellipsis for d705365.

… LiteLLM

When users switch mid-task from Claude (or other models) to Gemini 3/2.5
via LiteLLM, the API returns 'Corrupted thought signature' errors because
conversation history contains tool calls without the required signatures.

This fix injects dummy thought signatures into tool calls when targeting
Gemini models, following LiteLLM's official documentation:

- Detect Gemini 3.x and 2.5.x models (including provider-prefixed variants)
- Inject base64('skip_thought_signature_validator') into first tool call
- Preserve existing provider_specific_fields on tool calls
- Skip injection if signature already exists

Added 8 new tests covering model detection, injection behavior, and integration.

Fixes: COM-489
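
As a rough illustration of the test coverage described in the commit message, here is a hypothetical vitest-style pair of cases reusing the injectThoughtSignature sketch from the Solution section above; the real suite lives in lite-llm.spec.ts and is organized differently:

```typescript
import { describe, expect, it } from "vitest"

const SKIP_SIGNATURE = Buffer.from("skip_thought_signature_validator").toString("base64")

describe("thought signature injection", () => {
    it("signs only the first tool call of a parallel group", () => {
        const calls = [
            { id: "1", type: "function" as const, function: { name: "read_file", arguments: "{}" } },
            { id: "2", type: "function" as const, function: { name: "search_files", arguments: "{}" } },
        ]
        const result = injectThoughtSignature(calls)
        expect(result[0].provider_specific_fields?.thought_signature).toBe(SKIP_SIGNATURE)
        expect(result[1].provider_specific_fields).toBeUndefined()
    })

    it("does not overwrite an existing signature", () => {
        const calls = [
            {
                id: "1",
                type: "function" as const,
                function: { name: "read_file", arguments: "{}" },
                provider_specific_fields: { thought_signature: "existing" },
            },
        ]
        expect(injectThoughtSignature(calls)[0].provider_specific_fields?.thought_signature).toBe("existing")
    })
})
```
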
@daniel-lxs daniel-lxs requested a review from mrubens as a code owner January 15, 2026 02:11
@daniel-lxs daniel-lxs requested review from cte and jr as code owners January 15, 2026 02:11
@dosubot dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files) and bug (Something isn't working) labels Jan 15, 2026
roomote bot commented Jan 15, 2026


Reviewed the thought signature injection implementation for Gemini models. The core logic is sound and test coverage is comprehensive. Found one minor documentation issue.

  • Fix misleading comment at line 203: says "reasoning.encrypted block" but should say "provider_specific_fields.thought_signature"

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

Comment on lines +203 to +204
// For Gemini models with native protocol: inject fake reasoning.encrypted block for tool calls
// This is required when switching from other models to Gemini to satisfy API validation.

The comment mentions "inject fake reasoning.encrypted block" but the code actually injects provider_specific_fields.thought_signature. This inconsistency could confuse future developers trying to understand or modify this code.

Suggested change:
- // For Gemini models with native protocol: inject fake reasoning.encrypted block for tool calls
- // This is required when switching from other models to Gemini to satisfy API validation.
+ // For Gemini models with native protocol: inject thought signatures via provider_specific_fields
+ // This is required when switching from other models to Gemini to satisfy API validation.

Fix it with Roo Code or mention @roomote and request a fix.

@daniel-lxs daniel-lxs closed this Jan 15, 2026
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Jan 15, 2026
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Jan 15, 2026
@daniel-lxs daniel-lxs deleted the feature/com-489-fix-corrupted-thought-signature-error-when-switching-to branch January 15, 2026 02:53