Chat is just the beginning. Your AI-powered app probably needs to generate images, convert text to speech, transcribe audio, summarize documents, or create videos. Until now, wiring up each of these activities meant writing custom fetch logic, managing loading states, handling errors, and juggling streaming protocols for every single one.
Not anymore.
## One Pattern to Rule Them All
TanStack AI now ships **generation hooks**: a unified set of React hooks (with Solid, Vue, and Svelte support) that give you first-class primitives for every non-chat AI activity:
- `useGenerateImage()` for image generation
- `useGenerateSpeech()` for text-to-speech
- `useTranscription()` for audio transcription
- `useSummarize()` for text summarization
- `useGenerateVideo()` for video generation
Every hook follows the exact same API surface. Learn one, and you know them all:
```
const { generate, result, isLoading, error } = useGenerateImage()

generate({ prompt: 'A neon-lit cyberpunk cityscape at sunset' })
```
The `result` is fully typed. The `error` is handled. Loading state is tracked. Abort is built in. No boilerplate, no `useEffect` spaghetti, no manual state management.
## Three Ways to Connect
Every generation hook supports three transport modes, so you can pick the one that fits your architecture:
The server function runs, returns JSON, and the hook updates your UI. Simple, synchronous from the user's perspective, and fully type-safe.
### 3. Server Function Streaming (NEW)
This is the one we're most excited about. It combines the **type safety of server functions** with the **real-time feedback of streaming**, and it works beautifully with TanStack Start.
Here is the problem we solved: the `connection` approach uses a generic `Record<string, any>` for its data payload. Great for flexibility, but your input loses all type information. The `fetcher` approach is fully typed, but it waits for the entire result before updating the UI.
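To make the contrast concrete, here's a minimal sketch (the `SpeechInput` type is a hypothetical example, not the library's actual type): a `Record<string, any>` payload accepts typos silently, while a typed input surfaces them at compile time.

```typescript
// Hypothetical input type for illustration - not TanStack AI's actual type.
type SpeechInput = { text: string; voice?: string }

// Untyped payload: a misspelled key compiles without complaint.
const untyped: Record<string, any> = { txet: "hello world" }

// Typed input: the same misspelling here would be a compile-time error.
const typed: SpeechInput = { text: "hello world", voice: "alloy" }
```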
Server Function Streaming gives you both. Your fetcher returns a `Response` object (an SSE stream), and the client automatically detects it and parses the stream in real-time:
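For a feel of what such a streaming fetcher might hand back, here's a hand-rolled sketch (the helper names are assumptions, not the library's API) that serializes chunks as SSE `data:` lines into a `Response`:

```typescript
// Sketch: build an SSE body from a list of chunks (helper names are illustrative).
function sseBody(chunks: Array<object>): string {
  return chunks.map((chunk) => `data: ${JSON.stringify(chunk)}\n\n`).join("")
}

// Wrap the body in a Response with the standard SSE content type.
function sseResponse(chunks: Array<object>): Response {
  return new Response(sseBody(chunks), {
    headers: { "Content-Type": "text/event-stream" },
  })
}
```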
From the client's perspective, the API is identical to a direct fetcher call. But behind the scenes, TanStack AI detects the `Response` object, reads the SSE stream, and feeds chunks through the same event pipeline used by the connection adapter. Progress events fire in real-time. Errors are reported as they happen. And your `input` parameter stays fully typed throughout.
The detection is simple and zero-config: if your fetcher returns a `Response`, it's treated as an SSE stream. If it returns anything else, it's treated as a direct result. No flags, no configuration, no separate hook.
## How It Works Under the Hood
When a fetcher returns a `Response`, the `GenerationClient` runs a simple check:

```
const result = await this.fetcher(input, { signal })

if (result instanceof Response) {
  // Parse as SSE stream - same pipeline as ConnectionAdapter
}
```
The `parseSSEResponse` utility reads the response body as a stream of newline-delimited SSE events, parses each `data:` line into a `StreamChunk`, and yields them into the same `processStream` method that the ConnectionAdapter uses. Same event types, same state transitions, same callbacks.
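The parsing idea can be sketched in a few lines (this is an illustrative stand-in, not the actual `parseSSEResponse`):

```typescript
// Illustrative: split an SSE body on newlines and parse each "data:" line
// into a chunk object, skipping blank separator lines.
function parseSSELines(body: string): Array<Record<string, unknown>> {
  return body
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice("data:".length).trim()))
}
```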
This means every feature that works with streaming connections also works with server function streaming: progress reporting, chunk callbacks, abort signals, error handling. All of it.
## Result Transforms
Sometimes the raw result from the server isn't what you want to store in state. Every generation hook accepts an `onResult` callback that can transform the result before it's stored:
```
const { result } = useGenerateSpeech({
  // ...onResult transform here...
})

// result is typed as { audioUrl: string; format?: string; duration?: number } | null
```
TypeScript infers the output type from your transform function. No explicit generics needed.
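The inference trick is ordinary TypeScript generics; a minimal stand-alone sketch (hypothetical names, not the hook's real types) looks like this:

```typescript
// The options are generic over the transform's return type, so whatever
// shape onResult returns becomes the stored result type.
type GenerationOptions<Raw, Out> = { onResult: (raw: Raw) => Out }

function applyTransform<Raw, Out>(raw: Raw, options: GenerationOptions<Raw, Out>): Out {
  return options.onResult(raw)
}

const transformed = applyTransform(
  { url: "https://example.com/audio.mp3", seconds: 3 },
  { onResult: (raw) => ({ audioUrl: raw.url, duration: raw.seconds }) },
)
// `transformed` is inferred as { audioUrl: string; duration: number }
```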
## Video Generation: A First-Class Citizen
Video generation is a different beast. Unlike image or speech generation, video providers like OpenAI's Sora use a jobs-based architecture: you submit a prompt, receive a job ID, then poll for status until the video is ready. This can take minutes.
`useGenerateVideo()` handles all of this transparently:
The hook exposes `jobId` and `videoStatus` as reactive state that updates in real-time as the server streams polling updates. Your users see "pending", "processing", progress percentages, and finally the completed video URL, all without you writing a single polling loop.
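One way to picture the state the hook manages is as a tiny reducer over streamed job events (a simplified sketch with assumed event shapes, not the actual implementation):

```typescript
// Simplified model of the reactive video-generation state: a job id plus
// the latest polled status, updated as streamed events arrive.
type VideoStatus = "pending" | "processing" | "completed" | "failed"

type VideoState = { jobId: string | null; videoStatus: VideoStatus | null }

type VideoEvent =
  | { type: "job-created"; jobId: string }
  | { type: "status"; status: VideoStatus }

function reduceVideoState(state: VideoState, event: VideoEvent): VideoState {
  switch (event.type) {
    case "job-created":
      return { ...state, jobId: event.jobId }
    case "status":
      return { ...state, videoStatus: event.status }
  }
}
```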
## Every Activity, Same API
Here's what makes this design special: the API is identical across all five generation types. Once you've built an image generation page, building a speech generation page is a matter of swapping the hook name and adjusting the input:
Same `generate()`. Same `result`. Same `isLoading`. Same `error`. Same `stop()` and `reset()`. The consistency is intentional: we want AI features to be as easy to add to your app as a form submission.
Three lines of hook setup. Type-safe input. Streaming progress. Error handling. Abort support. That's it.
## What's Next
Generation hooks are available now in `@tanstack/ai-client` and `@tanstack/ai-react`. Support for Solid, Vue, and Svelte is coming soon with the same API surface.
We're also working on expanding the adapter ecosystem so you can use these hooks with providers beyond OpenAI. The generation functions are provider-agnostic by design, so swapping from OpenAI to Anthropic or a local model will be a single line change.
Build something cool and let us know. We can't wait to see what you create.