diff --git a/apps/docs/ai-sdk/overview.mdx b/apps/docs/ai-sdk/overview.mdx
index 168be519a..b4d6aad99 100644
--- a/apps/docs/ai-sdk/overview.mdx
+++ b/apps/docs/ai-sdk/overview.mdx
@@ -39,13 +39,13 @@ const result = await generateText({
```
- **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, enable it explicitly:
-
+ **Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:
+
```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), {
containerTag: "user-123",
customId: "conversation-456",
- addMemory: "always",
+ addMemory: "never",
})
```
diff --git a/apps/docs/ai-sdk/user-profiles.mdx b/apps/docs/ai-sdk/user-profiles.mdx
index 6d85772e3..c0027aa21 100644
--- a/apps/docs/ai-sdk/user-profiles.mdx
+++ b/apps/docs/ai-sdk/user-profiles.mdx
@@ -46,13 +46,13 @@ The `withSupermemory` middleware:
All of this happens transparently - you write code as if using a normal model, but get personalized responses.
- **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:
-
+ **Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:
+
```typescript
const model = withSupermemory(openai("gpt-5"), {
containerTag: "user-123",
customId: "conversation-456",
- addMemory: "always",
+ addMemory: "never",
})
```
diff --git a/apps/docs/docs.json b/apps/docs/docs.json
index 5731c9452..0626e5639 100644
--- a/apps/docs/docs.json
+++ b/apps/docs/docs.json
@@ -170,7 +170,12 @@
"integrations/pipecat",
"integrations/n8n",
"integrations/viasocket",
- "integrations/zapier"
+ "integrations/zapier",
+ {
+ "group": "Migration Guides",
+ "icon": "arrow-up-right",
+ "pages": ["migration/tools-v2-upgrade"]
+ }
]
}
],
diff --git a/apps/docs/integrations/ai-sdk.mdx b/apps/docs/integrations/ai-sdk.mdx
index c71a1ec19..bdf73fd34 100644
--- a/apps/docs/integrations/ai-sdk.mdx
+++ b/apps/docs/integrations/ai-sdk.mdx
@@ -7,6 +7,10 @@ icon: "triangle"
The Supermemory AI SDK provides native integration with Vercel's AI SDK through two approaches: **User Profiles** for automatic personalization and **Memory Tools** for agent-based interactions.
+
+ Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
+
+
Check out the NPM page for more details
@@ -46,14 +50,21 @@ const result = await generateText({
})
```
+### Required fields
+
+Both `containerTag` and `customId` are required.
+
+- **`containerTag`** — *who* the memories belong to. Use a stable identifier per user, workspace, or tenant (e.g. `"user-123"`, `"acme-workspace"`). Memory search and writes are scoped to this tag.
+- **`customId`** — *which conversation* this turn belongs to. Use it to group messages from the same chat session into a single document (e.g. `"chat-2026-04-25"`, a thread ID, or a UUID per session).
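
As a sketch, picking stable values for these fields might look like the helper below; the identifier formats are illustrative, not required by the SDK:

```typescript
// Illustrative helper: one containerTag per user, one customId per chat
// session. The formats shown are examples, not required by the SDK.
function memoryScope(userId: string, threadId: string) {
  return {
    containerTag: `user-${userId}`, // who the memories belong to
    customId: `thread-${threadId}`, // which conversation this turn is part of
  }
}

const scope = memoryScope("123", "456")
```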
+
- **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories:
+ **Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:
```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), {
containerTag: "user-123",
customId: "conversation-456",
- addMemory: "always",
+ addMemory: "never",
})
```
diff --git a/apps/docs/integrations/mastra.mdx b/apps/docs/integrations/mastra.mdx
index b74626988..db526de42 100644
--- a/apps/docs/integrations/mastra.mdx
+++ b/apps/docs/integrations/mastra.mdx
@@ -7,6 +7,10 @@ icon: "/images/mastra-icon.svg"
Integrate Supermemory with [Mastra](https://mastra.ai) to give your AI agents persistent memory. Use the `withSupermemory` wrapper for zero-config setup or processors for fine-grained control.
+
+ Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
+
+
Check out the NPM page for more details
diff --git a/apps/docs/integrations/openai.mdx b/apps/docs/integrations/openai.mdx
index 13cbb58fb..80751d74e 100644
--- a/apps/docs/integrations/openai.mdx
+++ b/apps/docs/integrations/openai.mdx
@@ -10,6 +10,10 @@ Add memory capabilities to the official OpenAI SDKs using Supermemory. Two appro
1. **`withSupermemory` wrapper** - Automatic memory injection into system prompts (zero-config)
2. **Function calling tools** - Explicit tool calls for search/add memory operations
+
+ Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
+
+
**New to Supermemory?** Start with `withSupermemory` for the simplest integration. It automatically injects relevant memories into your prompts.
diff --git a/apps/docs/integrations/voltagent.mdx b/apps/docs/integrations/voltagent.mdx
index 66f12826a..b54479389 100644
--- a/apps/docs/integrations/voltagent.mdx
+++ b/apps/docs/integrations/voltagent.mdx
@@ -7,6 +7,10 @@ icon: "bolt"
Supermemory integrates with [VoltAgent](https://github.com/VoltAgent/voltagent), providing long-term memory capabilities for AI agents. Your VoltAgent applications will remember past conversations and provide personalized responses based on user history.
+
+ Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
+
+
Check out the NPM page for more details
diff --git a/apps/docs/migration/tools-v2-upgrade.mdx b/apps/docs/migration/tools-v2-upgrade.mdx
new file mode 100644
index 000000000..a948da52d
--- /dev/null
+++ b/apps/docs/migration/tools-v2-upgrade.mdx
@@ -0,0 +1,188 @@
+---
+title: 'Upgrading @supermemory/tools to v2.0.0'
+description: 'Migrate your code from @supermemory/tools 1.4.x to 2.0.0 — config-object signature, customId, and new defaults'
+sidebarTitle: 'Tools: v1.4 → 2.0'
+---
+
+`@supermemory/tools` v2.0.0 unifies the API across all four integrations (Vercel AI SDK, OpenAI, Mastra, VoltAgent) around a single config-object signature and a consistent conversation-grouping concept. This guide walks you through the breaking changes.
+
+
+ This release is **breaking**. Update your call sites and re-test before bumping the version in production.
+
+
+## What changed at a glance
+
+| Area | v1.4.x | v2.0.0 |
+| --------------------- | ----------------------------------------------------- | ----------------------------------------------------------- |
+| Signature | `withSupermemory(model, "user-123", { ... })` | `withSupermemory(model, { containerTag: "user-123", ... })` |
+| Conversation grouping | `conversationId` (Vercel/OpenAI), `threadId` (Mastra) | **`customId`** everywhere |
+| `customId` | Optional | **Required** — throws if missing or empty |
+| `containerTag` | Positional argument | **Required** field on options object |
+| `addMemory` default | `"never"` | `"always"` |
+| VoltAgent `verbose` | Hardcoded to `false` | Honored from options |
+
+## Install
+
+```bash
+npm install @supermemory/tools@^2.0.0
+```
+
+## 1. Vercel AI SDK
+
+```typescript
+// v1.4.x
+import { withSupermemory } from '@supermemory/tools/ai-sdk';
+
+const model = withSupermemory(openai('gpt-4'), 'user-123', {
+ conversationId: 'conv-456',
+ mode: 'full',
+});
+```
+
+```typescript
+// v2.0.0
+import { withSupermemory } from '@supermemory/tools/ai-sdk';
+
+const model = withSupermemory(openai('gpt-4'), {
+ containerTag: 'user-123',
+ customId: 'conv-456',
+ mode: 'full',
+});
+```
+
+
+ `customId` is now **required**. Passing an empty string or omitting it throws
+ at construction time.
+
+
+## 2. OpenAI SDK
+
+```typescript
+// v1.4.x
+import { withSupermemory } from '@supermemory/tools/openai';
+
+const client = withSupermemory(openai, 'user-123', {
+ conversationId: 'conv-456',
+});
+```
+
+```typescript
+// v2.0.0
+import { withSupermemory } from '@supermemory/tools/openai';
+
+const client = withSupermemory(openai, {
+ containerTag: 'user-123',
+ customId: 'conv-456',
+});
+```
+
+Both `containerTag` and `customId` are validated at construction time; if either is missing or empty, an explicit error is thrown.
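
The validation contract can be mirrored in a few lines. This is a sketch of the documented behavior, not the library's actual internal check:

```typescript
// Sketch of the documented validation contract: both fields must be
// present and non-empty, otherwise construction throws. This mirrors the
// behavior described above; it is not the library's internal code.
interface MemoryOptions {
  containerTag: string
  customId: string
}

function assertMemoryOptions(
  opts: Partial<MemoryOptions>,
): asserts opts is MemoryOptions {
  if (!opts.containerTag)
    throw new Error("containerTag is required and must be non-empty")
  if (!opts.customId)
    throw new Error("customId is required and must be non-empty")
}
```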
+
+## 3. Mastra
+
+Processor constructors and factory functions have both moved to a single options argument. `threadId` is gone — use `customId` instead.
+
+```typescript
+// v1.4.x
+import {
+ SupermemoryInputProcessor,
+ createSupermemoryOutputProcessor,
+} from '@supermemory/tools/mastra';
+
+const input = new SupermemoryInputProcessor('user-123', {
+ mode: 'full',
+});
+
+const output = createSupermemoryOutputProcessor('user-123', {
+ threadId: 'conv-456',
+ addMemory: 'always',
+});
+```
+
+```typescript
+// v2.0.0
+import {
+ SupermemoryInputProcessor,
+ createSupermemoryOutputProcessor,
+} from '@supermemory/tools/mastra';
+
+const input = new SupermemoryInputProcessor({
+ containerTag: 'user-123',
+ customId: 'conv-456',
+ mode: 'full',
+});
+
+const output = createSupermemoryOutputProcessor({
+ containerTag: 'user-123',
+ customId: 'conv-456',
+});
+```
+
+
+ In server setups, Mastra's `RequestContext` thread ID still takes precedence
+ over the construction-time `customId` — the option now acts as the fallback
+ when no per-request thread ID is provided.
+
+
+## 4. VoltAgent
+
+VoltAgent already used a config-object signature, so the call shape is unchanged. Two behavior fixes ship in v2.0.0:
+
+- `verbose: true` is now honored (was hardcoded to `false` in v1.4.x).
+- A runtime warning is logged when advanced search params (`threshold`, `limit`, `rerank`, `rewriteQuery`, `filters`, `include`, `searchMode`) are set together with `mode: "profile"`; those parameters are ignored in profile mode.
+
+If you passed `verbose: true` in v1.4.x, it was silently ignored; in v2.0.0 it is honored, so you will start seeing logs. Adjust as needed.
+
+## 5. New default: `addMemory: "always"`
+
+Across all four integrations, `addMemory` now defaults to `"always"`. If your v1.4.x code relied on the old default of `"never"`, set it explicitly:
+
+```typescript
+const model = withSupermemory(openai('gpt-4'), {
+ containerTag: 'user-123',
+ customId: 'conv-456',
+ addMemory: 'never', // preserve v1.4.x behavior
+});
+```
+
+## Conversation persistence
+
+In v1.4.x the Vercel middleware fell back to `client.add` with a synthesized `customId` when no `conversationId` was passed. In v2.0.0, because `customId` is required, all conversation persistence goes through the `/v4/conversations` endpoint via `addConversation`. There is no fallback path.
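
Since every call now needs a non-empty `customId`, one approach is to mint a stable ID per session and reuse it for each turn. The helper below is hypothetical, not part of the package:

```typescript
import { randomUUID } from "node:crypto"

// Hypothetical helper: mint one customId per session key and reuse it, so
// every turn of a conversation is grouped under the same document.
const sessionIds = new Map<string, string>()

function customIdFor(sessionKey: string): string {
  let id = sessionIds.get(sessionKey)
  if (!id) {
    id = `chat-${randomUUID()}`
    sessionIds.set(sessionKey, id)
  }
  return id
}
```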
+
+## Migration checklist
+
+
+
+ `npm install @supermemory/tools@^2.0.0`
+
+
+ Grep your codebase for `withSupermemory(`, `SupermemoryInputProcessor`,
+ `SupermemoryOutputProcessor`, `createSupermemoryProcessor`,
+ `createSupermemoryOutputProcessor`.
+
+
+ Drop the positional `containerTag` argument and add it to the options
+ object.
+
+
+ Make sure every call site provides a non-empty `customId`.
+
+
+ If you depended on the old `"never"` default, pass `addMemory: "never"`
+ explicitly.
+
+
+ Validation errors are thrown at construction time, so missing fields surface immediately.
+
+
+
+## Need help?
+
+- [Vercel AI SDK integration](/integrations/ai-sdk)
+- [OpenAI integration](/integrations/openai)
+- [Mastra integration](/integrations/mastra)
+- [VoltAgent integration](/integrations/voltagent)
+
+If you hit something this guide does not cover, open an issue on [GitHub](https://github.com/supermemoryai/supermemory).
diff --git a/packages/tools/package.json b/packages/tools/package.json
index b69ac16f7..f65a48b86 100644
--- a/packages/tools/package.json
+++ b/packages/tools/package.json
@@ -1,7 +1,7 @@
{
"name": "@supermemory/tools",
"type": "module",
- "version": "1.4.7",
+ "version": "2.0.0",
"description": "Memory tools for AI SDK, OpenAI, Voltagent and Mastra with supermemory",
"scripts": {
"build": "tsdown",