6 changes: 3 additions & 3 deletions apps/docs/ai-sdk/overview.mdx
@@ -39,13 +39,13 @@ const result = await generateText({
```

<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, enable it explicitly:
**Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:

```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conversation-456",
  addMemory: "always",
  addMemory: "never",
})
```
</Note>
6 changes: 3 additions & 3 deletions apps/docs/ai-sdk/user-profiles.mdx
@@ -46,13 +46,13 @@ The `withSupermemory` middleware:
All of this happens transparently - you write code as if using a normal model, but get personalized responses.

<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:
**Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:

```typescript
const model = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conversation-456",
  addMemory: "always",
  addMemory: "never",
})
```
</Note>
7 changes: 6 additions & 1 deletion apps/docs/docs.json
@@ -170,7 +170,12 @@
"integrations/pipecat",
"integrations/n8n",
"integrations/viasocket",
"integrations/zapier"
"integrations/zapier",
{
  "group": "Migration Guides",
  "icon": "arrow-up-right",
  "pages": ["migration/tools-v2-upgrade"]
}
]
}
],
15 changes: 13 additions & 2 deletions apps/docs/integrations/ai-sdk.mdx
@@ -7,6 +7,10 @@ icon: "triangle"

The Supermemory AI SDK provides native integration with Vercel's AI SDK through two approaches: **User Profiles** for automatic personalization and **Memory Tools** for agent-based interactions.

<Note>
Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
</Note>

<Card title="@supermemory/tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details
</Card>
@@ -46,14 +50,21 @@ const result = await generateText({
})
```

### Required fields

Both `containerTag` and `customId` are required.

- **`containerTag`** — *who* the memories belong to. Use a stable identifier per user, workspace, or tenant (e.g. `"user-123"`, `"acme-workspace"`). Memory search and writes are scoped to this tag.
- **`customId`** — *which conversation* this turn belongs to. Use it to group messages from the same chat session into a single document (e.g. `"chat-2026-04-25"`, a thread ID, or a UUID per session).

<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories:
**Memory saving is enabled by default** (`addMemory: "always"`). New conversations are persisted automatically. To opt out, set `addMemory: "never"`:

```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), {
  containerTag: "user-123",
  customId: "conversation-456",
  addMemory: "always",
  addMemory: "never",
})
```
</Note>
4 changes: 4 additions & 0 deletions apps/docs/integrations/mastra.mdx
@@ -7,6 +7,10 @@ icon: "/images/mastra-icon.svg"

Integrate Supermemory with [Mastra](https://mastra.ai) to give your AI agents persistent memory. Use the `withSupermemory` wrapper for zero-config setup or processors for fine-grained control.

<Note>
Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
</Note>

<Card title="@supermemory/tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details
</Card>
4 changes: 4 additions & 0 deletions apps/docs/integrations/openai.mdx
@@ -10,6 +10,10 @@ Add memory capabilities to the official OpenAI SDKs using Supermemory. Two appro
1. **`withSupermemory` wrapper** - Automatic memory injection into system prompts (zero-config)
2. **Function calling tools** - Explicit tool calls for search/add memory operations

<Note>
Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
</Note>

<Tip>
**New to Supermemory?** Start with `withSupermemory` for the simplest integration. It automatically injects relevant memories into your prompts.
</Tip>
4 changes: 4 additions & 0 deletions apps/docs/integrations/voltagent.mdx
@@ -7,6 +7,10 @@ icon: "bolt"

Supermemory integrates with [VoltAgent](https://github.com/VoltAgent/voltagent), providing long-term memory capabilities for AI agents. Your VoltAgent applications will remember past conversations and provide personalized responses based on user history.

<Note>
Migrating to v2 from 1.4.x? Check the [migration guide](/migration/tools-v2-upgrade).
</Note>

<Card title="@supermemory/tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details
</Card>
188 changes: 188 additions & 0 deletions apps/docs/migration/tools-v2-upgrade.mdx
@@ -0,0 +1,188 @@
---
title: 'Upgrading @supermemory/tools to v2.0.0'
description: 'Migrate your code from @supermemory/tools 1.4.x to 2.0.0 — config-object signature, customId, and new defaults'
sidebarTitle: 'Tools: v1.4 → 2.0'
---

`@supermemory/tools` v2.0.0 unifies the API across all four integrations (Vercel AI SDK, OpenAI, Mastra, VoltAgent) around a single config-object signature and a consistent conversation-grouping concept. This guide walks you through the breaking changes.

<Note>
This release is **breaking**. Update calls and re-test before bumping in
production.
</Note>

## What changed at a glance

| Area | v1.4.x | v2.0.0 |
| --------------------- | ----------------------------------------------------- | ----------------------------------------------------------- |
| Signature | `withSupermemory(model, "user-123", { ... })` | `withSupermemory(model, { containerTag: "user-123", ... })` |
| Conversation grouping | `conversationId` (Vercel/OpenAI), `threadId` (Mastra) | **`customId`** everywhere |
| `customId` | Optional | **Required** — throws if missing or empty |
| `containerTag` | Positional argument | **Required** field on options object |
| `addMemory` default | `"never"` | `"always"` |
| VoltAgent `verbose` | Hardcoded to `false` | Honored from options |

## Install

```bash
npm install @supermemory/tools@^2.0.0
```

## 1. Vercel AI SDK

```typescript
// v1.4.x
import { withSupermemory } from '@supermemory/tools/ai-sdk';

const model = withSupermemory(openai('gpt-4'), 'user-123', {
  conversationId: 'conv-456',
  mode: 'full',
});
```

```typescript
// v2.0.0
import { withSupermemory } from '@supermemory/tools/ai-sdk';

const model = withSupermemory(openai('gpt-4'), {
  containerTag: 'user-123',
  customId: 'conv-456',
  mode: 'full',
});
```

<Warning>
`customId` is now **required**. Passing an empty string or omitting it throws
at construction time.
</Warning>

## 2. OpenAI SDK

```typescript
// v1.4.x
import { withSupermemory } from '@supermemory/tools/openai';

const client = withSupermemory(openai, 'user-123', {
  conversationId: 'conv-456',
});
```

```typescript
// v2.0.0
import { withSupermemory } from '@supermemory/tools/openai';

const client = withSupermemory(openai, {
  containerTag: 'user-123',
  customId: 'conv-456',
});
```

Both `containerTag` and `customId` are validated and throw with explicit error messages if missing.
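
What that validation amounts to can be sketched as follows. This is a minimal illustration with an assumed function shape and assumed error messages, not the library's actual source:

```typescript
// Hypothetical sketch of the v2 option validation. The field names come
// from this guide; the helper name and messages are illustrative only.
interface SupermemoryOptions {
  containerTag: string;
  customId: string;
}

function validateOptions(opts: Partial<SupermemoryOptions>): SupermemoryOptions {
  if (!opts.containerTag?.trim()) {
    throw new Error("containerTag is required and must be a non-empty string");
  }
  if (!opts.customId?.trim()) {
    throw new Error("customId is required and must be a non-empty string");
  }
  return opts as SupermemoryOptions;
}
```

Because the check runs at construction time, a missing field fails fast instead of silently writing memories to the wrong scope.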

## 3. Mastra

Processor constructors and factory functions both moved to a single options argument. `threadId` is gone — use `customId` instead.

```typescript
// v1.4.x
import {
SupermemoryInputProcessor,
createSupermemoryOutputProcessor,
} from '@supermemory/tools/mastra';

const input = new SupermemoryInputProcessor('user-123', {
  mode: 'full',
});

const output = createSupermemoryOutputProcessor('user-123', {
  threadId: 'conv-456',
  addMemory: 'always',
});
```

```typescript
// v2.0.0
import {
SupermemoryInputProcessor,
createSupermemoryOutputProcessor,
} from '@supermemory/tools/mastra';

const input = new SupermemoryInputProcessor({
  containerTag: 'user-123',
  customId: 'conv-456',
  mode: 'full',
});

const output = createSupermemoryOutputProcessor({
  containerTag: 'user-123',
  customId: 'conv-456',
});
```

<Note>
In server setups, Mastra's `RequestContext` thread ID still takes precedence
over the construction-time `customId` — the option now acts as the fallback
when no per-request thread ID is provided.
</Note>
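
The precedence reduces to a one-line fallback, sketched here with an assumed helper name rather than the library's actual source:

```typescript
// Hypothetical sketch: the per-request thread ID from Mastra's
// RequestContext wins; the construction-time customId is used only
// when no per-request thread ID is provided.
function resolveConversationId(
  requestThreadId: string | undefined,
  customId: string,
): string {
  return requestThreadId ?? customId;
}
```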

## 4. VoltAgent

VoltAgent already used a config-object signature, so the call shape is unchanged. Two behavior fixes ship in v2.0.0:

- `verbose: true` is now honored (was hardcoded to `false` in v1.4.x).
- A runtime warning is logged when advanced search params (`threshold`, `limit`, `rerank`, `rewriteQuery`, `filters`, `include`, `searchMode`) are set while `mode: "profile"` — those parameters are ignored in profile mode.

If you were passing `verbose: true` and implicitly relying on it being ignored, you will now see logs. Adjust as needed.
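
The second fix can be sketched roughly as below. The parameter list comes from this guide; the function name and warning text are assumptions, not the library's source:

```typescript
// Hypothetical sketch of the profile-mode warning: advanced search
// params have no effect when mode is "profile", so any that are set
// get flagged once at construction time.
const PROFILE_IGNORED_PARAMS: readonly string[] = [
  "threshold", "limit", "rerank", "rewriteQuery",
  "filters", "include", "searchMode",
];

function findIgnoredSearchParams(options: Record<string, unknown>): string[] {
  if (options.mode !== "profile") return [];
  const ignored = PROFILE_IGNORED_PARAMS.filter((key) => options[key] !== undefined);
  if (ignored.length > 0) {
    console.warn(`Ignored in profile mode: ${ignored.join(", ")}`);
  }
  return ignored;
}
```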

## 5. New default: `addMemory: "always"`

Across all four integrations, `addMemory` now defaults to `"always"`. If your v1.4.x code relied on the old default of `"never"`, set it explicitly:

```typescript
const model = withSupermemory(openai('gpt-4'), {
  containerTag: 'user-123',
  customId: 'conv-456',
  addMemory: 'never', // preserve v1.4.x behavior
});
```

## Conversation persistence

In v1.4.x the Vercel middleware fell back to `client.add` with a synthesized `customId` when no `conversationId` was passed. In v2.0.0, because `customId` is required, all conversation persistence goes through the `/v4/conversations` endpoint via `addConversation`. There is no fallback path.

## Migration checklist

<Steps>
<Step title="Bump the dependency">
`npm install @supermemory/tools@^2.0.0`
</Step>
<Step title="Find every withSupermemory / processor call">
Grep your codebase for `withSupermemory(`, `SupermemoryInputProcessor`,
`SupermemoryOutputProcessor`, `createSupermemoryProcessor`,
`createSupermemoryOutputProcessor`.
</Step>
<Step title="Move containerTag into the options object">
Drop the positional `containerTag` argument and add it to the options
object.
</Step>
<Step title="Rename conversationId / threadId to customId">
Make sure every call site provides a non-empty `customId`.
</Step>
<Step title="Audit addMemory">
If you depended on the old `"never"` default, pass `addMemory: "never"`
explicitly.
</Step>
<Step title="Run your test suite">
Validation throws happen at construction time, so missing fields surface
immediately.
</Step>
</Steps>
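
The call-site audit from the steps above can be done with a single grep; adjust `src/` to your own source root:

```shell
# Surface every call site that needs the v2 signature update.
grep -rnE 'withSupermemory\(|SupermemoryInputProcessor|SupermemoryOutputProcessor|createSupermemoryProcessor|createSupermemoryOutputProcessor' src/
```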

## Need help?

- [Vercel AI SDK integration](/integrations/ai-sdk)
- [OpenAI integration](/integrations/openai)
- [Mastra integration](/integrations/mastra)
- [VoltAgent integration](/integrations/voltagent)

If you hit something this guide does not cover, open an issue on [GitHub](https://github.com/supermemoryai/supermemory).
2 changes: 1 addition & 1 deletion packages/tools/package.json
@@ -1,7 +1,7 @@
{
"name": "@supermemory/tools",
"type": "module",
"version": "1.4.7",
"version": "2.0.0",
"description": "Memory tools for AI SDK, OpenAI, Voltagent and Mastra with supermemory",
"scripts": {
"build": "tsdown",