# AI SDK Chat Transport & Chat Task System

Run AI chat completions as durable Trigger.dev tasks — with built-in realtime streaming, multi-turn conversations in a single run, typed per-run state, cancellation from the frontend, and tool support. No API routes needed.
| 4 | + |
| 5 | +## How it works |
| 6 | + |
| 7 | +1. Frontend sends messages via AI SDK's `useChat` hook through `TriggerChatTransport` |
| 8 | +2. Transport triggers a Trigger.dev task with the conversation as payload |
| 9 | +3. Task streams `UIMessageChunk` events back via realtime streams |
| 10 | +4. AI SDK processes the stream natively — text, tool calls, reasoning, everything |
| 11 | +5. Frontend can cancel generation mid-stream — the transport sends a cancel signal via input streams and `chat.task` aborts `streamText` automatically |
| 12 | + |
| 13 | +``` |
| 14 | +useChat → TriggerChatTransport → Trigger.dev Task → streamText → realtime stream → useChat |
| 15 | + ↑ cancel ↓ abort |
| 16 | + └──── input stream ("cancel") ─────────────┘ |
| 17 | +``` |
| 18 | + |
| 19 | +## Backend: `chat.task` |
| 20 | + |
| 21 | +Define a chat task in one function. Return a `streamText` result and it's automatically piped to the frontend. |
| 22 | + |
| 23 | +```ts |
| 24 | +import { chat } from "@trigger.dev/sdk/ai"; |
| 25 | +import { streamText } from "ai"; |
| 26 | +import { openai } from "@ai-sdk/openai"; |
| 27 | + |
| 28 | +export const myChat = chat.task({ |
| 29 | + id: "my-chat", |
| 30 | + run: async ({ modelMessages, signal }) => { |
| 31 | + return streamText({ |
| 32 | + model: openai("gpt-4o"), |
| 33 | + messages: modelMessages, |
| 34 | + abortSignal: signal, // enables frontend cancellation |
| 35 | + }); |
| 36 | + }, |
| 37 | +}); |
| 38 | +``` |
| 39 | + |
| 40 | +No `convertToModelMessages` call needed — `chat.task` handles the conversion and passes both `modelMessages` (for the model) and `messages` (raw `UIMessage[]`) in the payload. The `signal` is an `AbortSignal` that fires when the frontend cancels generation. |
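Conceptually, the conversion `chat.task` performs before calling `run` looks like the sketch below. The shapes here are simplified stand-ins, not the real AI SDK types — the SDK's `convertToModelMessages` handles many more part types (tool calls, files, reasoning):

```typescript
// Simplified sketch of UIMessage -> ModelMessage conversion. Assumed,
// minimal shapes; the real AI SDK types are richer.
type SketchUIMessage = {
  role: "user" | "assistant";
  parts: Array<{ type: "text"; text: string }>;
};

type SketchModelMessage = {
  role: "user" | "assistant";
  content: string;
};

function toModelMessages(messages: SketchUIMessage[]): SketchModelMessage[] {
  return messages.map((message) => ({
    role: message.role,
    // Collapse the message's text parts into a single content string
    content: message.parts
      .filter((part) => part.type === "text")
      .map((part) => part.text)
      .join(""),
  }));
}
```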
| 41 | + |
| 42 | +## Frontend: `useTriggerChatTransport` |
| 43 | + |
| 44 | +A React hook that creates a type-safe transport for `useChat`. No `useMemo` needed — the hook handles memoization internally. |
| 45 | + |
| 46 | +```tsx |
| 47 | +import { useChat } from "@ai-sdk/react"; |
| 48 | +import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react"; |
| 49 | +import type { myChat } from "@/trigger/chat"; |
| 50 | + |
| 51 | +function Chat() { |
| 52 | + const transport = useTriggerChatTransport<typeof myChat>({ |
| 53 | + task: "my-chat", |
| 54 | + accessToken: getChatToken, // async function for token refresh |
| 55 | + }); |
| 56 | + |
| 57 | + const { messages, sendMessage, stop, status } = useChat({ transport }); |
| 58 | + |
| 59 | + // stop() cancels the in-flight generation — chat.task aborts streamText automatically |
| 60 | +} |
| 61 | +``` |
| 62 | + |
| 63 | +The `<typeof myChat>` generic gives compile-time validation of the task ID string. |
| 64 | + |
| 65 | +Cancellation just works — calling `stop()` from `useChat` sends a cancel signal via an input stream to the running task. `chat.task` listens for it and aborts the `streamText` call. No extra wiring needed. |
| 66 | + |
| 67 | +## Single-run mode (multi-turn conversations) |
| 68 | + |
| 69 | +`chat.task` keeps the entire conversation inside a single run using waitpoint tokens. After each AI response, the run pauses until the next message arrives — then resumes in the same process. |
| 70 | + |
| 71 | +- All turns share the same run ID, logs, and metadata |
| 72 | +- In-memory state persists across turns without external storage |
| 73 | +- The full conversation is observable as one run in the dashboard |
| 74 | + |
| 75 | +```ts |
| 76 | +export const myChat = chat.task({ |
| 77 | + id: "my-chat", |
| 78 | + maxTurns: 50, // default: 100 |
| 79 | + turnTimeout: "30m", // default: "1h" |
| 80 | + run: async ({ modelMessages, signal }) => { ... }, |
| 81 | +}); |
| 82 | +``` |
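The waitpoint-driven loop can be pictured with a small sketch. Here a plain async iterable of incoming turns stands in for the waitpoint token, and `respond` stands in for your `run` function — `runConversation` and both parameter names are illustrative, not SDK APIs:

```typescript
// Sketch of the multi-turn loop a single run executes. A real run pauses
// on a waitpoint token between turns; an async iterable of incoming
// messages models that here.
type Turn = { text: string };

async function runConversation(
  incoming: AsyncIterable<Turn>,
  respond: (turn: Turn) => Promise<string>,
  maxTurns = 100,
): Promise<string[]> {
  const replies: string[] = [];
  let turns = 0;
  for await (const turn of incoming) {
    if (++turns > maxTurns) break; // maxTurns guard
    replies.push(await respond(turn)); // same process, so in-memory state survives
  }
  return replies;
}
```

Because every iteration runs in the same process, anything held in memory before the loop is still there on the next turn — which is what makes `chat.state` work without external storage.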
| 83 | + |
| 84 | +## Per-run state with `chat.state` |
| 85 | + |
| 86 | +Define typed, per-run state that's accessible from anywhere during task execution — tools, the run function, nested helpers. Each conversation gets its own isolated copy. |
| 87 | + |
| 88 | +```ts |
| 89 | +import { chat } from "@trigger.dev/sdk/ai"; |
| 90 | +import { streamText, tool } from "ai"; |
| 91 | +import { openai } from "@ai-sdk/openai"; |
| 92 | +import { z } from "zod"; |
| 93 | + |
| 94 | +const state = chat.state({ |
| 95 | + init: () => ({ score: 0, questionsAsked: 0, streak: 0 }), |
| 96 | +}); |
| 97 | + |
| 98 | +// Tools at module level — access state directly |
| 99 | +const checkAnswer = tool({ |
| 100 | + description: "Check the user's answer", |
| 101 | + inputSchema: z.object({ correct: z.boolean() }), |
| 102 | + execute: async ({ correct }) => { |
| 103 | + state.questionsAsked++; |
| 104 | + if (correct) { state.score++; state.streak++; } |
| 105 | + else { state.streak = 0; } |
| 106 | + return { score: state.score, total: state.questionsAsked }; |
| 107 | + }, |
| 108 | +}); |
| 109 | + |
| 110 | +export const quiz = chat.task({ |
| 111 | + id: "quiz-bot", |
| 112 | + state, |
| 113 | + run: async ({ modelMessages, signal }) => { |
| 114 | + return streamText({ |
| 115 | + model: openai("gpt-4o-mini"), |
| 116 | + system: `Score: ${state.score}/${state.questionsAsked}`, |
| 117 | + messages: modelMessages, |
| 118 | + tools: { checkAnswer }, |
| 119 | + maxSteps: 5, |
| 120 | + abortSignal: signal, |
| 121 | + }); |
| 122 | + }, |
| 123 | +}); |
| 124 | +``` |
| 125 | + |
| 126 | +State is backed by a Proxy over locals — no globals, fully isolated per run. Supports optional `persist` callback for external storage: |
| 127 | + |
| 128 | +```ts |
| 129 | +const state = chat.state({ |
| 130 | + init: () => ({ preferences: [] }), |
| 131 | + persist: async ({ state, chatId }) => { |
| 132 | + await db.sessions.upsert({ where: { chatId }, data: state }); |
| 133 | + }, |
| 134 | + persistDebounceMs: 1000, // debounce rapid mutations |
| 135 | +}); |
| 136 | +``` |
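The isolation mechanism can be sketched in a few lines: a Proxy whose get/set traps resolve against an `AsyncLocalStorage` store, so each run sees its own copy. This is illustrative only — `createState` and `withState` are hypothetical names, not the SDK's internals:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Proxy over per-run locals: reads and writes resolve against whichever
// store the current async context carries, so runs never share state.
function createState<T extends object>(init: () => T) {
  const storage = new AsyncLocalStorage<T>();
  const state = new Proxy({} as T, {
    get: (_target, key) => Reflect.get(storage.getStore() ?? init(), key),
    set: (_target, key, value) =>
      Reflect.set(storage.getStore() ?? init(), key, value),
  });
  // Run `fn` with a fresh, isolated store for this "run"
  const withState = <R>(fn: () => R): R => storage.run(init(), fn);
  return { state, withState };
}
```

Because `AsyncLocalStorage` follows the async call chain, any helper or tool invoked during the run resolves against the same store — no arguments need to be threaded through.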
| 137 | + |
| 138 | +## AI SDK tools |
| 139 | + |
| 140 | +Tools work through the full pipeline. Tool calls and results stream to the frontend and appear in `message.parts`. |
| 141 | + |
| 142 | +```ts |
| 143 | +export const myChat = chat.task({ |
| 144 | + id: "my-chat", |
| 145 | + run: async ({ modelMessages, signal }) => { |
| 146 | + return streamText({ |
| 147 | + model: openai("gpt-4o"), |
| 148 | + messages: modelMessages, |
| 149 | + tools: { weather: weatherTool }, |
| 150 | + maxSteps: 5, |
| 151 | + abortSignal: signal, |
| 152 | + }); |
| 153 | + }, |
| 154 | +}); |
| 155 | +``` |
| 156 | + |
| 157 | +## `chat.pipe` for complex flows |
| 158 | + |
| 159 | +For agent loops where `streamText` is called deep in your code, use `chat.pipe` instead of returning the result: |
| 160 | + |
| 161 | +```ts |
| 162 | +import { chat } from "@trigger.dev/sdk/ai"; |
| 163 | + |
| 164 | +export const agent = chat.task({ |
| 165 | + id: "agent-chat", |
| 166 | + run: async ({ modelMessages, signal }) => { |
| 167 | + await runAgentLoop(modelMessages, signal); |
| 168 | + // Don't return — chat.pipe handles streaming |
| 169 | + }, |
| 170 | +}); |
| 171 | + |
| 172 | +async function runAgentLoop(messages: ModelMessage[], signal: AbortSignal) { |
| 173 | + const result = streamText({ model, messages, abortSignal: signal }); |
| 174 | + await chat.pipe(result); // works from anywhere inside the task |
| 175 | +} |
| 176 | +``` |
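Conceptually, `chat.pipe` drains the result's UI message chunk stream and forwards each chunk to the run's realtime stream. A stand-in sketch — the `Writer` interface and `pipeChunks` are hypothetical, not the SDK's types:

```typescript
// Forward every chunk from a UI message stream to a writer. In the real
// SDK the writer is the run's realtime stream that useChat consumes.
type Chunk = { type: string };

interface Writer {
  write(chunk: Chunk): Promise<void>;
}

async function pipeChunks(chunks: AsyncIterable<Chunk>, writer: Writer): Promise<number> {
  let forwarded = 0;
  for await (const chunk of chunks) {
    await writer.write(chunk); // streamed to the frontend as it arrives
    forwarded++;
  }
  return forwarded;
}
```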
| 177 | + |
| 178 | +## Cancellation |
| 179 | + |
| 180 | +Frontend cancellation flows through input streams to an `AbortSignal` provided in the run payload: |
| 181 | + |
| 182 | +1. User clicks stop (or calls `stop()` from `useChat`) |
| 183 | +2. `TriggerChatTransport` sends a cancel signal via an input stream to the running task |
| 184 | +3. `chat.task` receives the signal and aborts the `signal` passed to your `run` function |
| 185 | +4. `streamText` stops generating — `useChat` shows the partial response |
| 186 | + |
| 187 | +Just pass `signal` to `abortSignal` on `streamText` and cancellation works end-to-end. No manual abort controller wiring. |
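The whole cancel path reduces to an `AbortController` tripped by a message on the input stream. A minimal sketch, modeling the input stream as an async iterable of strings (not the SDK's actual stream type):

```typescript
// Watch the run's input stream; a "cancel" message trips the controller,
// and the returned signal is what `run` receives and hands to streamText.
function watchForCancel(input: AsyncIterable<string>): AbortSignal {
  const controller = new AbortController();
  void (async () => {
    for await (const message of input) {
      if (message === "cancel") {
        controller.abort(); // streamText observes this via abortSignal
        break;
      }
    }
  })();
  return controller.signal;
}
```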
| 188 | + |
| 189 | +## Type-safe access tokens |
| 190 | + |
| 191 | +```ts |
| 192 | +// Server action |
| 193 | +import { chat } from "@trigger.dev/sdk/ai"; |
| 194 | +import type { myChat } from "@/trigger/chat"; |
| 195 | + |
| 196 | +export const getChatToken = () => chat.createAccessToken<typeof myChat>("my-chat"); |
| 197 | +``` |
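How the generic pins the task ID at the type level can be sketched with minimal, hypothetical shapes (the real SDK types carry much more than `id`):

```typescript
// Minimal sketch: the task type carries its id as a literal string type,
// so the token helper only accepts that exact string.
interface ChatTaskShape<Id extends string> {
  id: Id;
}

declare const myChatSketch: ChatTaskShape<"my-chat">;

function createToken<T extends ChatTaskShape<string>>(id: T["id"]): string {
  return `token:${id}`; // stand-in; the real helper mints a scoped token
}

const token = createToken<typeof myChatSketch>("my-chat");
// createToken<typeof myChatSketch>("other-id") would fail to compile
```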
| 198 | + |
| 199 | +## Package imports |
| 200 | + |
| 201 | +| Import | Package | |
| 202 | +|--------|---------| |
| 203 | +| `chat.task`, `chat.state`, `chat.pipe`, `chat.createAccessToken` | `@trigger.dev/sdk/ai` | |
| 204 | +| `TriggerChatTransport` | `@trigger.dev/sdk/chat` | |
| 205 | +| `useTriggerChatTransport` | `@trigger.dev/sdk/chat/react` | |
| 206 | + |
| 207 | +Requires `ai` package v6+. |