Conversation
- #313: Add installation.md docs page with framework-specific sections and anchors to fix 404 links from the frameworks page
- #314: Add snake_case to camelCase conversion for OpenRouter modelOptions so options like `tool_choice` are not silently discarded by the SDK
- #330: Parse usageMetadata from Gemini image adapter responses instead of hardcoding usage as undefined
- #341: Cache a shared EventTarget on the server via `globalThis.__TANSTACK_EVENT_TARGET__` so emit() and on() operate on the same target in Node/Bun/Workers environments
- #363: Preserve array tool results (multimodal content parts) instead of always stringifying, enabling image/multimodal tool responses

https://claude.ai/code/session_01GsXhpJTjCnyZifLHTJj9gT
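The #341 fix can be sketched in isolation. This is an illustrative reconstruction, not the actual `@tanstack/ai-event-client` source; only the global key name comes from the summary above, and the function names are assumed:

```typescript
// Illustrative sketch of #341 (function names assumed; only the global key
// is taken from the PR summary). Caching the EventTarget on globalThis means
// emit() and on() resolve the same instance even if the module is evaluated
// more than once (separate bundles, workers, hot reload).
const KEY = '__TANSTACK_EVENT_TARGET__'

function getSharedEventTarget(): EventTarget {
  const g = globalThis as unknown as Record<string, EventTarget | undefined>
  let target = g[KEY]
  if (!target) {
    target = new EventTarget()
    g[KEY] = target
  }
  return target
}

// Both sides resolve the target lazily, so they always agree.
function on(type: string, handler: () => void): () => void {
  const target = getSharedEventTarget()
  target.addEventListener(type, handler)
  return () => target.removeEventListener(type, handler)
}

function emit(type: string): void {
  getSharedEventTarget().dispatchEvent(new Event(type))
}
```

Without the globalThis cache, a second copy of the module would create its own EventTarget, and on() in one copy would never see events from emit() in the other.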
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:

📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

Adds multimodal tool-result support (string or array) across types, processors, adapters, and devtools; introduces an installation doc and nav entry; and ensures a shared EventTarget is created in non-browser runtimes for the event client.
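The walkthrough's "preserve string/array, stringify others" rule can be sketched as a small helper. The names and the ContentPart shape below are simplified stand-ins, not the package's real types:

```typescript
// Simplified stand-in for the package's ContentPart union.
type ContentPart = { type: string; [key: string]: unknown }

// Hypothetical helper mirroring the processor rule described above:
// strings and content-part arrays pass through untouched; any other
// value (object, number, boolean) is JSON-stringified.
function normalizeToolResult(output: unknown): string | Array<ContentPart> {
  if (typeof output === 'string') return output
  if (Array.isArray(output)) return output as Array<ContentPart>
  return JSON.stringify(output)
}
```

This is the behavior several of the review comments below converge on: stringifying only as a fallback keeps string results raw and lets multimodal arrays survive end to end.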
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Tool as Tool
  participant Processor as Chat Processor
  participant Adapter as Adapter (OpenAI/Gemini/OpenRouter)
  participant EventClient as EventClient
  participant Devtools as Devtools UI
  Tool->>Processor: return result (string | Array | other)
  Processor->>Processor: preserve string/array, stringify others
  Processor->>Adapter: package tool result into message
  Adapter->>EventClient: emit tool call end (result string or Array)
  EventClient->>EventClient: ensure global EventTarget in non-browser
  EventClient->>Devtools: deliver emitted result (array may be stringified for UI)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 3 passed
View your CI Pipeline Execution ↗ for commit 423883f

☁️ Nx Cloud last updated this comment
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 5
🧹 Nitpick comments (4)
docs/getting-started/installation.md (1)
`78-88`: **Vue and Svelte sections lack usage examples and API doc links.** The React, Solid, and Preact sections include code examples and links to API documentation, but Vue and Svelte only have install commands. Consider adding equivalent content for consistency, or clarifying if these integrations are still in development.
📝 Suggested additions

```diff
 ## Vue

 npm install @tanstack/ai-vue

+The Vue integration provides composables for managing chat state. See the
+@tanstack/ai-vue API docs for full details.
+
 ## Svelte

 npm install @tanstack/ai-svelte

+The Svelte integration provides stores for managing chat state. See the
+@tanstack/ai-svelte API docs for full details.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@docs/getting-started/installation.md` around lines 78 - 88: Add usage examples and links to the Vue and Svelte API documentation in the installation.md file to match the existing React, Solid, and Preact sections. For Vue, mention that it provides composables for managing chat state and link to the `@tanstack/ai-vue` API docs; for Svelte, note it offers stores for chat state management and link to the `@tanstack/ai-svelte` API docs. Place these additions immediately below the respective installation commands.

packages/typescript/ai/src/types.ts (1)

`873-874`: **Consider updating `RealtimeToolResultPart` to support multimodal content for consistency with other tool result types.** The `ToolResultPart` and `ToolCallEndEvent` types both support `string | Array<ContentPart>` for tool results, enabling multimodal responses. However, `RealtimeToolResultPart` only accepts `string`. While this likely reflects the simpler requirements of the realtime audio-first protocol, aligning the type signature would improve consistency across the codebase. If realtime tool execution scenarios do require multimodal content (e.g., a tool returning images), update `RealtimeToolResultPart.content` to `string | Array<ContentPart>` to match.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/src/types.ts` around lines 873 - 874: Update the RealtimeToolResultPart type so its content property accepts multimodal data like the other tool result types: change RealtimeToolResultPart.content from string to `string | Array<ContentPart>` (matching ToolResultPart and ToolCallEndEvent) and ensure any related usages of RealtimeToolResultPart (e.g., serialization/consumers) handle an `Array<ContentPart>` alongside string.

packages/typescript/ai/src/activities/chat/stream/message-updaters.ts (1)

`100-100`: **Tighten tool-result content typing to avoid `any`.** Use the canonical `ToolResultPart['content']` type here so future content-shape changes stay synchronized and type-safe.

♻️ Suggested refactor

```diff
 export function updateToolResultPart(
   messages: Array<UIMessage>,
   messageId: string,
   toolCallId: string,
-  content: string | Array<any>,
+  content: ToolResultPart['content'],
   state: ToolResultState,
   error?: string,
 ): Array<UIMessage> {
```

Based on learnings: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/src/activities/chat/stream/message-updaters.ts` at line 100: The property currently typed as `content: string | Array<any>` is too loose; replace it with the canonical `ToolResultPart['content']` type to keep multimodal tool-result shapes synchronized and type-safe. Update the declaration that currently uses `content: string | Array<any>` in message-updaters.ts to reference `ToolResultPart['content']`, and ensure any imports or type aliases include `ToolResultPart` so the compiler enforces the correct shape for images/audio/video/documents as models evolve.

packages/typescript/ai-gemini/src/adapters/image.ts (1)
`205-224`: **Remove `as any` and properly type forward-compatibility for future SDK versions.** The `usageMetadata` property does not exist on `GenerateImagesResponse` in `@google/genai` v1.43.0 (the version used). The defensive code at line 206 uses `as any` to handle a future SDK update, but this approach weakens type safety. Either remove this forward-compatibility code if it's not immediately needed, or implement it with a type-safe narrowing check:

```ts
const usageMeta =
  'usageMetadata' in response
    ? (response as GenerateImagesResponse & {
        usageMetadata?: {
          promptTokenCount?: number
          candidatesTokenCount?: number
          totalTokenCount?: number
        }
      }).usageMetadata
    : undefined
```

This preserves type safety without `as any` and prepares for the property if it's added to the SDK in the future.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-gemini/src/adapters/image.ts` around lines 205 - 224: The current code casts response to any to access a non-existent usageMetadata, weakening type safety; replace that cast by using a type guard: check `'usageMetadata' in response` and if present narrow response to `GenerateImagesResponse & { usageMetadata?: { promptTokenCount?: number; candidatesTokenCount?: number; totalTokenCount?: number } }` and read its usageMetadata into the usageMeta variable, otherwise set usageMeta to undefined; update the usage construction that references usageMeta (inputTokens/outputTokens/totalTokens) to remain unchanged.
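The `'usageMetadata' in response` narrowing suggested above can be exercised standalone. The types below are stand-ins; the real `GenerateImagesResponse` comes from `@google/genai` and, per the review, has no `usageMetadata` field in v1.43.0:

```typescript
// Stand-in for the SDK response type; field names mirror the review comment.
interface ImagesResponse {
  generatedImages?: Array<unknown>
}

type WithUsage = ImagesResponse & {
  usageMetadata?: {
    promptTokenCount?: number
    candidatesTokenCount?: number
    totalTokenCount?: number
  }
}

// The 'in' check narrows safely: current responses yield undefined, a future
// SDK that adds the field yields its metadata, and no `as any` is needed.
function readUsageMeta(response: ImagesResponse): WithUsage['usageMetadata'] {
  return 'usageMetadata' in response
    ? (response as WithUsage).usageMetadata
    : undefined
}
```

The design point is that the cast only widens to a type that still extends the SDK's response, so the compiler keeps checking every other property access.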
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/getting-started/installation.md`:
- Around line 100-127: The Adapters installation list is missing the Fal adapter
referenced in the docs navigation; update the "Adapters" section in
installation.md to include the Fal adapter by adding an entry for the Fal
package (npm install `@tanstack/ai-fal`) alongside the other providers and ensure
the text still points readers to the Adapters section for provider-specific
setup; keep naming consistent with existing entries (e.g., "Fal") so it matches
the navigation config.
In `@packages/typescript/ai-client/src/types.ts`:
- Around line 146-147: The ToolResultPart.content property currently uses a
loose Array<any>; update it to use the specific ContentPart type for consistency
and stronger typing by changing the union from string | Array<any> to string |
Array<ContentPart> (modify the ToolResultPart.content declaration to reference
ContentPart which is already imported).
In `@packages/typescript/ai-event-client/src/devtools-middleware.ts`:
- Around line 198-201: The current serialization for the result field uses
JSON.stringify(chunk.result ?? ''), which turns null/undefined into the
two-character string '""' instead of an empty string; update the logic around
the result key (the expression using typeof chunk.result === 'string' ? ... :
...) to short-circuit null/undefined explicitly so that when chunk.result is
null or undefined you return '' (empty string), otherwise call JSON.stringify on
the non-string value; reference the existing chunk.result check and the result
property so the branch becomes: if it's a string return it, else if chunk.result
== null return '' else return JSON.stringify(chunk.result).
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 709-716: The function_call_output.output assignment incorrectly
allows arrays for message.content which the OpenAI Responses API does not
accept; update the logic in the adapter where function_call_output.output is
built (referencing message.content and function_call_output.output) to always
pass a string: if message.content is already a string use it, otherwise
JSON.stringify(message.content) so arrays and objects become valid string
payloads; ensure any downstream consumers parse the JSON string if they expect
structured multimodal data.
In `@packages/typescript/ai/src/activities/chat/messages.ts`:
- Around line 251-253: The code currently JSON.stringify's non-array outputs
which turns string tool outputs into quoted strings; change the content
assignment for part.output so strings are preserved: replace the ternary that
sets content using Array.isArray(part.output) ? part.output :
JSON.stringify(part.output) with logic that returns part.output when it's an
array, returns part.output as-is when typeof part.output === 'string', and only
calls JSON.stringify for other types (e.g., objects), referencing the same
part.output expression in messages.ts.
---
Nitpick comments:
In `@docs/getting-started/installation.md`:
- Around line 78-88: Add usage examples and links to the Vue and Svelte API
documentation in the installation.md file to match the existing React, Solid,
and Preact sections. For Vue, mention that it provides composables for managing
chat state and link to the `@tanstack/ai-vue` API docs; for Svelte, note it offers
stores for chat state management and link to the `@tanstack/ai-svelte` API docs.
Place these additions immediately below the respective installation commands.
In `@packages/typescript/ai-gemini/src/adapters/image.ts`:
- Around line 205-224: The current code casts response to any to access a
non-existent usageMetadata, weakening type safety; replace that cast by using a
type guard: check "'usageMetadata' in response" and if present narrow response
to GenerateImagesResponse & { usageMetadata?: { promptTokenCount?: number;
candidatesTokenCount?: number; totalTokenCount?: number } } and read its
usageMetadata into the usageMeta variable, otherwise set usageMeta to undefined;
update the usage construction that references usageMeta
(inputTokens/outputTokens/totalTokens) to remain unchanged.
In `@packages/typescript/ai/src/activities/chat/stream/message-updaters.ts`:
- Line 100: The property currently typed as "content: string | Array<any>" is
too loose; replace it with the canonical ToolResultPart['content'] type to keep
multimodal tool-result shapes synchronized and type-safe. Update the declaration
that currently uses "content: string | Array<any>" in message-updaters.ts to
reference ToolResultPart['content'], and ensure any imports or type aliases
include ToolResultPart so the compiler enforces the correct shape for
images/audio/video/documents as models evolve.
In `@packages/typescript/ai/src/types.ts`:
- Around line 873-874: Update the RealtimeToolResultPart type so its content
property accepts multimodal data like the other tool result types: change
RealtimeToolResultPart.content from string to string | Array<ContentPart>
(matching ToolResultPart and ToolCallEndEvent) and ensure any related usages of
RealtimeToolResultPart (e.g., serialization/consumers) handle an
Array<ContentPart> alongside string.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e1795af3-dc4d-489d-b6ec-8d72d5fb5fcf
📒 Files selected for processing (15)
docs/config.json
docs/getting-started/installation.md
docs/getting-started/quick-start.md
packages/typescript/ai-client/src/types.ts
packages/typescript/ai-event-client/src/devtools-middleware.ts
packages/typescript/ai-event-client/src/index.ts
packages/typescript/ai-gemini/src/adapters/image.ts
packages/typescript/ai-openai/src/adapters/text.ts
packages/typescript/ai-openrouter/src/adapters/text.ts
packages/typescript/ai/src/activities/chat/index.ts
packages/typescript/ai/src/activities/chat/messages.ts
packages/typescript/ai/src/activities/chat/stream/message-updaters.ts
packages/typescript/ai/src/activities/chat/stream/processor.ts
packages/typescript/ai/src/activities/chat/tools/tool-calls.ts
packages/typescript/ai/src/types.ts
## Adapters

You also need an adapter for your LLM provider. Install one (or more) of the following:

```bash
# OpenRouter (recommended — 300+ models with one API key)
npm install @tanstack/ai-openrouter

# OpenAI
npm install @tanstack/ai-openai

# Anthropic
npm install @tanstack/ai-anthropic

# Google Gemini
npm install @tanstack/ai-gemini

# Ollama (local models)
npm install @tanstack/ai-ollama

# Groq
npm install @tanstack/ai-groq

# Grok (xAI)
npm install @tanstack/ai-grok
```

See the [Adapters section](../adapters/openai) for provider-specific setup guides.
**Missing Fal adapter from the adapters list.**

The Fal adapter is listed in the navigation config (docs/config.json lines 176-178) under Adapters, but is not included in the installation instructions here.

📝 Suggested addition

```diff
 # Grok (xAI)
 npm install @tanstack/ai-grok
+
+# Fal (image/video generation)
+npm install @tanstack/ai-fal
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@docs/getting-started/installation.md` around lines 100 - 127: The Adapters installation list is missing the Fal adapter referenced in the docs navigation; update the "Adapters" section in installation.md to include the Fal adapter by adding an entry for the Fal package (npm install @tanstack/ai-fal) alongside the other providers and ensure the text still points readers to the Adapters section for provider-specific setup; keep naming consistent with existing entries (e.g., "Fal") so it matches the navigation config.
```ts
/** Tool result content. String for text results, or an array for multimodal results. */
content: string | Array<any>
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Search for direct string method calls on ToolResultPart.content without type guards
rg -n --type=ts 'content\.(trim|split|substring|slice|replace|match|toLowerCase|toUpperCase)' packages/typescript/ai-client/
```

Repository: TanStack/ai
Length of output: 179

🏁 Script executed:

```bash
#!/bin/bash
# Read the types.ts file around lines 146-147 to see the actual definition
echo "=== ai-client/src/types.ts (lines 140-155) ==="
sed -n '140,155p' packages/typescript/ai-client/src/types.ts
echo -e "\n=== Check ai/src/types.ts for ToolResultPart definition ==="
grep -n -A 3 "ToolResultPart" packages/typescript/ai/src/types.ts | head -20
```

Length of output: 835

🏁 Script executed:

```bash
#!/bin/bash
# Search for other potential unsafe content usage patterns more broadly
echo "=== Search for .content usage in ai-client ==="
rg -n '\.content' packages/typescript/ai-client/src/ -A 2 -B 2 --type=ts | head -100
```

Length of output: 1953

🏁 Script executed:

```bash
#!/bin/bash
# Get the complete ToolResultPart definition from ai/src/types.ts
sed -n '298,310p' packages/typescript/ai/src/types.ts
```

Length of output: 428

🏁 Script executed:

```bash
#!/bin/bash
# Also search for ContentPart type definition to understand the type hierarchy
rg -n "interface ContentPart" packages/typescript/ -A 5
```

Length of output: 1586

🏁 Script executed:

```bash
#!/bin/bash
# Check the context around chat-client.ts:352 to see the full usage
sed -n '345,360p' packages/typescript/ai-client/src/chat-client.ts
```

Length of output: 548

🏁 Script executed:

```bash
#!/bin/bash
# Check if ContentPart is properly exported/imported in ai-client
rg -n "ContentPart|import.*types" packages/typescript/ai-client/src/types.ts | head -20
```

Length of output: 113

🏁 Script executed:

```bash
#!/bin/bash
# Also check what ContentPart actually is
rg -n "type ContentPart|interface ContentPart" packages/typescript/ai/src/types.ts -A 5
```

Length of output: 1049
**Use `Array<ContentPart>` for type consistency with the main ai package.**

The `ToolResultPart.content` type in ai-client uses `Array<any>`, while packages/typescript/ai/src/types.ts defines it as `string | Array<ContentPart>`. Since `ContentPart` is already imported in this file, use the properly-typed alternative to maintain consistency and provide better type safety across the codebase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-client/src/types.ts` around lines 146 - 147, The
ToolResultPart.content property currently uses a loose Array<any>; update it to
use the specific ContentPart type for consistency and stronger typing by
changing the union from string | Array<any> to string | Array<ContentPart>
(modify the ToolResultPart.content declaration to reference ContentPart which is
already imported).
```ts
result:
  typeof chunk.result === 'string'
    ? chunk.result
    : JSON.stringify(chunk.result ?? ''),
```
**Bug: Undefined results produce `'""'` instead of empty string.**

When `chunk.result` is undefined or null, this evaluates to:

```ts
JSON.stringify('' /* from ?? '' */) // produces '""' (a 2-character string)
```

This changes the behavior from producing `''` (falsy, empty) to `'""'` (truthy, parseable as empty string). Downstream code checking truthiness or parsing will behave differently:

- `if (chunk.result)` will now be true for `'""'`
- `JSON.parse('""')` yields `""` instead of failing
Consider preserving the original empty string behavior for undefined/null:
🐛 Proposed fix

```diff
 result:
   typeof chunk.result === 'string'
     ? chunk.result
-    : JSON.stringify(chunk.result ?? ''),
+    : chunk.result != null
+      ? JSON.stringify(chunk.result)
+      : '',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-event-client/src/devtools-middleware.ts` around lines
198 - 201, The current serialization for the result field uses
JSON.stringify(chunk.result ?? ''), which turns null/undefined into the
two-character string '""' instead of an empty string; update the logic around
the result key (the expression using typeof chunk.result === 'string' ? ... :
...) to short-circuit null/undefined explicitly so that when chunk.result is
null or undefined you return '' (empty string), otherwise call JSON.stringify on
the non-string value; reference the existing chunk.result check and the result
property so the branch becomes: if it's a string return it, else if chunk.result
== null return '' else return JSON.stringify(chunk.result).
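The behavioral difference described in this comment is easy to check directly. The helper names here are illustrative, not the middleware's actual code:

```typescript
// Reproduces the reviewer's point: for a missing (undefined/null) result,
// JSON.stringify(x ?? '') yields the two-character string '""', while the
// suggested fix yields a genuinely empty string.
function serializeBuggy(result: unknown): string {
  return typeof result === 'string' ? result : JSON.stringify(result ?? '')
}

function serializeFixed(result: unknown): string {
  if (typeof result === 'string') return result
  return result != null ? JSON.stringify(result) : ''
}
```

For every non-null value the two versions agree; they only diverge on the missing-result case, which is exactly the truthiness regression the review flags.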
```diff
+          // Support multimodal tool outputs (OpenAI Responses API accepts
+          // string or array of content parts for function_call_output).
           output:
-            typeof message.content === 'string'
+            typeof message.content === 'string' ||
+            Array.isArray(message.content)
               ? message.content
               : JSON.stringify(message.content),
-        })
+        } as any)
```
🧩 Analysis chain

🌐 Web query:

> Does OpenAI Responses API function_call_output support array content multimodal?

💡 Result:

No. In the Responses API, a function_call_output item's payload is the output string (free text—often a JSON-encoded string). It does not support the "array of content parts" multimodal `content: [...]` structure used by message items. [1], [2]

If you need multimodal data downstream, return structured JSON in output (e.g., with file/image URLs or IDs) and then include those as proper multimodal inputs via a subsequent message item with content parts. [1], [2]

Sources: [1], [2]
**The multimodal tool output logic is incorrect and will fail at runtime.**

The OpenAI Responses API `function_call_output.output` field only accepts strings, not arrays. The current code at lines 712–716 passes `message.content` directly if it's an array, which violates the API contract:

```ts
output:
  typeof message.content === 'string' ||
  Array.isArray(message.content)
    ? message.content // ❌ Arrays not supported by output field
    : JSON.stringify(message.content),
```

Fix: always stringify the output, including arrays:

```ts
output: typeof message.content === 'string'
  ? message.content
  : JSON.stringify(message.content),
```

If multimodal data is needed, encode it as structured JSON (e.g., with URLs or references) in the string and let downstream code parse it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 709 - 716,
The function_call_output.output assignment incorrectly allows arrays for
message.content which the OpenAI Responses API does not accept; update the logic
in the adapter where function_call_output.output is built (referencing
message.content and function_call_output.output) to always pass a string: if
message.content is already a string use it, otherwise
JSON.stringify(message.content) so arrays and objects become valid string
payloads; ensure any downstream consumers parse the JSON string if they expect
structured multimodal data.
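A minimal sketch of the always-stringify rule this review settles on. The helper name is assumed, not the adapter's actual code:

```typescript
// function_call_output.output must be a string per the Responses API, so
// anything non-string (including multimodal content-part arrays) is
// JSON-encoded for transport; downstream consumers can JSON.parse it back.
function toFunctionCallOutput(content: unknown): string {
  return typeof content === 'string' ? content : JSON.stringify(content)
}
```

Note the asymmetry with the chat-message path: message items may carry content-part arrays, but tool outputs in this API are flattened to strings.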
```ts
content: Array.isArray(part.output)
  ? part.output
  : JSON.stringify(part.output),
```
**Preserve string tool outputs here as well (not only arrays).**

This path still JSON.stringify's string outputs, so client tool results become quoted strings while server tool results stay raw strings. That mismatch can change how the model interprets its context.
🐛 Proposed fix

```diff
       role: 'tool',
-      content: Array.isArray(part.output)
-        ? part.output
-        : JSON.stringify(part.output),
+      content:
+        typeof part.output === 'string' || Array.isArray(part.output)
+          ? part.output
+          : JSON.stringify(part.output),
       toolCallId: part.id,
     })
```
📝 Committable suggestion

> ‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
role: 'tool',
content:
  typeof part.output === 'string' || Array.isArray(part.output)
    ? part.output
    : JSON.stringify(part.output),
toolCallId: part.id,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/src/activities/chat/messages.ts` around lines 251 -
253, The code currently JSON.stringify's non-array outputs which turns string
tool outputs into quoted strings; change the content assignment for part.output
so strings are preserved: replace the ternary that sets content using
Array.isArray(part.output) ? part.output : JSON.stringify(part.output) with
logic that returns part.output when it's an array, returns part.output as-is
when typeof part.output === 'string', and only calls JSON.stringify for other
types (e.g., objects), referencing the same part.output expression in
messages.ts.
ToolResultPart.content was changed to `string | Array<ContentPart>` in 7d7878f but the devtools consumer still expected `string`. Stringify array content so the MessagePart type is satisfied.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
`tool_choice` are not silently discarded by the SDK

https://claude.ai/code/session_01GsXhpJTjCnyZifLHTJj9gT
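The snake_case to camelCase conversion from #314 can be sketched as follows. Helper names are assumed; the real adapter maps OpenRouter modelOptions before handing them to the SDK:

```typescript
// Convert one snake_case key to camelCase, e.g. tool_choice -> toolChoice.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_match, ch: string) => ch.toUpperCase())
}

// Remap every option key so an SDK that expects camelCase does not
// silently drop snake_case options like tool_choice.
function camelizeOptions(
  options: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(options).map(([k, v]) => [snakeToCamel(k), v]),
  )
}
```

The failure mode this guards against is quiet: unknown keys are typically ignored rather than rejected, so a misspelled-case option simply has no effect.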
🎯 Changes
✅ Checklist
`pnpm run test:pr`.

🚀 Release Impact
Summary by CodeRabbit
New Features
Improvements