Last Updated February 10, 2026
Cloudflare Durable Object Manager: Full-featured, self-hosted web app to manage Durable Object namespaces, instances, and storage. Supports automatic namespace discovery, instance inspection, key/value editing, a rich SQL console for SQLite-backed DOs, batch operations, alarms, R2 backups, analytics, global search, and job history, with optional GitHub SSO.
Live Demo • Wiki • Changelog • Release Article
Frontend: React 19.2.4 | Vite 7.3.1 | TypeScript 5.9.3 | Tailwind CSS 4.1.17 | shadcn/ui
Backend: Cloudflare Workers + D1 + R2 + Zero Trust
- Auto-discover DO namespaces from Cloudflare API
- Manual configuration for custom setups
- List/Grid toggle - Switch between compact List view (default) and card-based Grid view; preference saved to localStorage
- Clone namespace - Configuration only (fast) or Deep Clone with all instances and storage (requires admin hooks)
- Download config - Export namespace settings as JSON
- System namespace filtering - Internal DOs (kv-manager, d1-manager, do-manager) are hidden to prevent accidental deletion
- Search & filter - Real-time filtering by name, class name, or script name
- Support for SQLite and KV storage backends
- Track DO instances by name or hex ID
- Create new instances with custom names
- List/Grid toggle - Switch between compact List view (default) and card-based Grid view; preference saved to localStorage
- Rename instance - Change the display name of tracked instances
- Clone instance - Copy all storage data to a new instance
- Download instance - Export instance storage as JSON
- Search & filter - Real-time filtering by instance name or object ID
- Color tags - Color-code instances for visual organization (9 preset colors)
- Instance diff - Compare storage between two instances to see differences
- Instance migration - Migrate instances between namespaces with 3 cutover modes (Copy Only, Copy + Freeze, Copy + Delete)
- View storage contents (keys/values)
- Enhanced SQL Editor with Prism.js syntax highlighting and line numbers
- Real-time validation with inline error indicators
- Context-aware autocomplete for SQL keywords, table names, and columns
- Hover documentation for SQL keywords and functions
- Smart indentation with bracket/quote auto-pairing
- Format button for one-click SQL formatting
- Copy button with clipboard feedback
- Word wrap toggle and suggestions toggle (persisted to localStorage)
- Quick Queries dropdown with grouped SQL templates (Information, Select Data, Modify Data, Table Management)
- Saved queries - Store frequently used queries per namespace
- Query history - Quick access to recent queries
- Results displayed in sortable table format
- Always-visible checkboxes - Select namespaces, instances, and storage keys directly from lists
- Batch download (namespaces) - Export multiple namespace configs as a ZIP file with manifest
- Batch download (instances) - Export multiple instance storage as a ZIP file with manifest
- Batch download (keys) - Export selected storage keys as JSON with metadata
- Batch delete - Delete multiple namespaces, instances, or storage keys with confirmation
- Batch backup - Backup multiple instances to R2 with progress tracking
- Compare instances - Select exactly 2 instances to compare storage differences
- Selection toolbar - Floating toolbar with count, Select All, and Clear actions
- Job history integration - All batch operations are tracked in job history
- Key search & filter - Real-time filtering to find keys quickly
- Rename keys - Edit key names directly in the Edit Key dialog
- Multi-select keys - Select multiple keys with checkboxes for batch operations
- Batch export keys - Export selected keys as JSON with instance/namespace metadata
- Batch delete keys - Delete multiple keys at once with confirmation
- Import keys from JSON - Upload JSON files or paste JSON directly to bulk import keys into instance storage
- View/edit storage values with JSON support
- Clickable key rows for easy editing
- NPM package (`do-manager-admin-hooks`) for easy integration - Copy-paste template also available for custom setups
- Support for both SQLite and KV backends
- View current alarm state
- Set new alarms with date/time picker
- Delete existing alarms
- Snapshot DO storage to R2
- Browse backup history
- Restore from any backup with auto-refresh
- Request volume over time
- Storage usage
- CPU time metrics (average and total)
- Cross-namespace key search - Search for storage keys by name across all instances
- Value search - Search within JSON values to find data across instances
- Namespace filtering - Filter search to specific namespaces
- Result grouping - Results grouped by namespace for easy navigation
- Match highlighting - Search terms highlighted in results
- Value previews - Shows matching portion of values for value searches
- Job tracking - All search operations logged to job history
- Comprehensive tracking - Records all operations including:
- Namespace: create, delete, clone, download (single & batch)
- Instance: create, delete, clone, download (single & batch)
- Storage keys: create/update/delete (single), batch delete, batch export, import
- Alarms: set, delete
- Backup/restore operations
- Batch operations: delete, backup, download (namespaces, instances, keys)
- Search operations: key search, value search
- View status, progress, and timing
- Error details for failed operations
- Filter by status or namespace
- Event-driven webhooks - Send HTTP notifications on key events (13 event types)
- Configurable events:
  - Storage: `storage_create`, `storage_update`, `storage_delete`
  - Instance: `instance_create`, `instance_delete`
  - Backup/Restore: `backup_complete`, `restore_complete`
  - Alarms: `alarm_set`, `alarm_deleted`
  - Import/Export: `import_complete`, `export_complete`
  - System: `job_failed`, `batch_complete`
- HMAC signatures - Optional secret-based request signing for security (see the verification sketch after this feature list)
- Test webhooks - Verify endpoint connectivity before going live
- Structured error payloads - Consistent format with module, operation, context, and metadata
- Module-prefixed error codes - e.g., `NS_CREATE_FAILED`, `INST_DELETE_FAILED`, `BKP_RESTORE_FAILED`
- Severity levels - error, warning, info
- Webhook integration - Automatic webhook triggers for job failures
- Stack trace capture - Full stack traces logged for debugging
- System overview - Total namespaces, instances, and alarms at a glance
- Stale instance detection - Identify instances not accessed in 7+ days
- Storage quota alerts - Warn when instances approach 10GB DO storage limit (80% warning, 90% critical)
- Active alarms list - See all pending alarms with countdown timers
- Storage summary - Aggregate storage usage across all instances
- Recent activity - Timeline of operations in last 24h/7d
- Dark/Light/System themes
- Responsive design
- Enterprise auth via Cloudflare Access
- Accessible UI - Proper form labels and ARIA attributes
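As a receiving-side illustration of the HMAC signature option above, here is a minimal verification sketch for a webhook consumer running on Workers. The `X-Webhook-Signature` header name and the hex-encoded HMAC-SHA256 scheme are assumptions made for this sketch; check your webhook configuration for the exact format.

```typescript
// Minimal sketch: verify an HMAC-signed webhook in a receiving Worker.
// Assumes the signature arrives as hex-encoded HMAC-SHA256 of the raw body
// in an "X-Webhook-Signature" header - adjust to match your actual setup.
async function verifyWebhook(request: Request, secret: string): Promise<boolean> {
  const signature = request.headers.get("X-Webhook-Signature");
  if (!signature) return false;

  const body = await request.clone().text();
  const key = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );
  const mac = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(body));
  const expected = [...new Uint8Array(mac)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  return signature === expected;
}
```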
DO Manager supports integration with external observability platforms via Cloudflare's native OpenTelemetry export. This allows you to send traces and logs to services like Grafana Cloud, Datadog, Honeycomb, Sentry, and Axiom.
Cloudflare Workers natively supports exporting OpenTelemetry-compliant traces and logs to any OTLP endpoint.
Step 1: Create a destination in Cloudflare Dashboard
- Go to Workers Observability
- Click Add destination
- Configure your provider's OTLP endpoint and authentication headers
Common OTLP Endpoints:
| Provider | Traces Endpoint | Logs Endpoint |
|---|---|---|
| Grafana Cloud | `https://otlp-gateway-{region}.grafana.net/otlp/v1/traces` | `https://otlp-gateway-{region}.grafana.net/otlp/v1/logs` |
| Honeycomb | `https://api.honeycomb.io/v1/traces` | `https://api.honeycomb.io/v1/logs` |
| Axiom | `https://api.axiom.co/v1/traces` | `https://api.axiom.co/v1/logs` |
| Sentry | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces` | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs` |
| Datadog | Coming soon | `https://otlp.{SITE}.datadoghq.com/v1/logs` |
Step 2: Update wrangler.toml
```toml
[observability]
enabled = true

[observability.traces]
enabled = true
destinations = ["your-traces-destination"]

[observability.logs]
enabled = true
destinations = ["your-logs-destination"]
```

For custom metrics and usage-based analytics, use Workers Analytics Engine:
Step 1: Add binding to wrangler.toml
```toml
[[analytics_engine_datasets]]
binding = "DO_METRICS"
dataset = "do_manager_metrics"
```

Step 2: Write data points from your Worker
```typescript
// Example: Track backup operations
env.DO_METRICS.writeDataPoint({
  blobs: [namespaceId, instanceId, "backup"],
  doubles: [backupSizeBytes, durationMs],
  indexes: [userId],
});
```

Step 3: Query via SQL API or Grafana
```sql
SELECT
  blob1 AS namespace_id,
  SUM(double1) AS total_backup_bytes,
  COUNT(*) AS backup_count
FROM do_manager_metrics
WHERE timestamp > NOW() - INTERVAL '7' DAY
GROUP BY blob1
```

For real-time log processing, create a Tail Worker:
```typescript
export default {
  async tail(events: TraceItem[]) {
    for (const event of events) {
      // Forward to your logging service
      await fetch("https://your-logging-service.com/ingest", {
        method: "POST",
        body: JSON.stringify(event),
      });
    }
  },
};
```

Configure in wrangler.toml:
```toml
[[tail_consumers]]
service = "my-tail-worker"
```

For structured log export to storage destinations (R2, S3, etc.):
- Go to Cloudflare Dashboard > Analytics > Logs
- Create a Logpush job for Workers Trace Events
- Select your destination (R2, S3, Azure, GCS, etc.)
🔒 Hidden System Namespaces
DO Manager automatically hides internal system Durable Objects to prevent accidental deletion:
| Pattern | Description |
|---|---|
| `kv-manager_*` | KV Manager internal DOs (ImportExportDO, BulkOperationDO) |
| `d1-manager_*` | D1 Manager internal DOs |
| `do-manager_*` | DO Manager internal DOs |
These namespaces are filtered during auto-discovery. To modify the filter list, edit worker/routes/namespaces.ts:
```typescript
const SYSTEM_DO_PATTERNS = [
  "kv-manager_ImportExportDO",
  "kv-manager_BulkOperationDO",
  "d1-manager_",
  "do-manager_",
  // Add your own patterns here
];
```

- Node.js 24+ (LTS)
- Cloudflare account
Clone the repository:
```bash
git clone https://github.com/neverinfamous/do-manager.git
cd do-manager
```

Install dependencies:

```bash
npm install
```

Initialize local D1 database:

```bash
npx wrangler d1 execute do-manager-metadata-dev --local --file=worker/schema.sql
```

Start both servers in separate terminals:

Terminal 1 - Frontend (Vite):

```bash
npm run dev
```

Terminal 2 - Worker (Wrangler):

```bash
npx wrangler dev --config wrangler.dev.toml --local
```

Open http://localhost:5173 - no auth required, mock data included.
For production deployment:

```bash
npx wrangler login
npx wrangler d1 create do-manager-metadata
npx wrangler d1 execute do-manager-metadata --remote --file=worker/schema.sql
npx wrangler r2 bucket create do-manager-backups
cp wrangler.toml.example wrangler.toml
```

Edit `wrangler.toml` with your `database_id` from step 2.
- Go to Cloudflare Zero Trust
- Configure authentication (GitHub OAuth, etc.)
- Create an Access Application for your domain
- Copy the Application Audience (AUD) tag
- Go to Cloudflare API Tokens
- Create Custom Token with:
- Account → Workers Scripts → Read
- Account → D1 → Edit (if managing D1-backed DOs)
Note: Both API Tokens (Bearer auth) and Global API Keys (X-Auth-Key auth) are supported.
```bash
npx wrangler secret put ACCOUNT_ID
npx wrangler secret put API_KEY
npx wrangler secret put TEAM_DOMAIN
npx wrangler secret put POLICY_AUD
```

Build and deploy:

```bash
npm run build
npx wrangler deploy
```

To manage a Durable Object's storage, you need to add admin hook methods to your DO class. There are two options:
Install the admin hooks package:
```bash
npm install do-manager-admin-hooks
```

Extend your Durable Object class:

```typescript
import { withAdminHooks } from "do-manager-admin-hooks";

export class MyDurableObject extends withAdminHooks() {
  async fetch(request: Request): Promise<Response> {
    // Handle admin requests first (required for DO Manager)
    const adminResponse = await this.handleAdminRequest(request);
    if (adminResponse) return adminResponse;

    // Your custom logic here
    return new Response("Hello from my Durable Object!");
  }
}
```

That's it! The package handles all admin endpoints automatically.
Configuration options:
```typescript
export class SecureDO extends withAdminHooks({
  basePath: "/admin", // Change admin endpoint path (default: '/admin')
  requireAuth: true, // Require authentication
  adminKey: "secret-key", // Admin key for auth
}) {
  // ...
}
```

📦 NPM Package • GitHub
Click "Get Admin Hook Code" in the namespace view to generate copy-paste TypeScript code for your DO class.
- Deploy your Worker with admin hooks installed
- In DO Manager, add your namespace with the Admin Hook Endpoint URL (e.g., `https://my-worker.workers.dev`)
- Admin hooks are automatically enabled when you save with a URL
- The green "Admin Hook Enabled" badge confirms it's working
Tip: You can set the endpoint URL when creating a namespace, or add it later via Settings.
| Endpoint | Method | Description |
|---|---|---|
| `/admin/list` | GET | List storage keys (KV) or tables (SQLite) |
| `/admin/get?key=X` | GET | Get value for a key |
| `/admin/put` | POST | Set key-value pair |
| `/admin/delete` | POST | Delete a key |
| `/admin/sql` | POST | Execute SQL (SQLite only) |
| `/admin/alarm` | GET/PUT/DELETE | Manage alarms |
| `/admin/export` | GET | Export all storage |
| `/admin/import` | POST | Import data |
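For orientation, here is a minimal sketch of calling these admin hook endpoints directly with `fetch`. The worker URL and key name are placeholders, the `/admin/put` body shape is an assumption (check the `do-manager-admin-hooks` package for the exact format), and any authentication configured via `requireAuth`/`adminKey` is omitted:

```typescript
// Minimal sketch: calling the admin hook endpoints directly.
// The URL and key name are placeholders; the /admin/put body shape
// ({ key, value }) is an assumption - verify against your hook implementation.
const base = "https://my-worker.workers.dev";

// Write a key-value pair
await fetch(`${base}/admin/put`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ key: "greeting", value: { text: "hello" } }),
});

// Read it back
const res = await fetch(`${base}/admin/get?key=greeting`);
console.log(await res.json());
```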
| Endpoint | Description |
|---|---|
| `GET /api/namespaces` | List tracked namespaces |
| `GET /api/namespaces/discover` | Auto-discover from Cloudflare API |
| `POST /api/namespaces` | Add namespace manually |
| `POST /api/namespaces/:id/clone` | Clone namespace with new name |
| `DELETE /api/namespaces/:id` | Remove namespace |
| `GET /api/namespaces/:id/instances` | List instances |
| `POST /api/namespaces/:id/instances` | Track new instance |
| `GET /api/instances/:id/storage` | Get storage contents |
| `PUT /api/instances/:id/storage` | Update storage |
| `POST /api/instances/:id/import` | Import keys from JSON |
| `POST /api/instances/:id/sql` | Execute SQL query |
| `GET /api/instances/:id/export` | Export instance storage as JSON |
| `POST /api/instances/:id/clone` | Clone instance with new name |
| `GET /api/instances/:id/alarm` | Get alarm state |
| `PUT /api/instances/:id/alarm` | Set alarm |
| `DELETE /api/instances/:id/alarm` | Delete alarm |
| `GET /api/instances/:id/backups` | List backups |
| `POST /api/instances/:id/backups` | Create backup |
| `POST /api/instances/:id/restore` | Restore from backup |
| `GET /api/metrics` | Get account metrics |
| `GET /api/jobs` | List job history |
| `GET /api/namespaces/:id/export` | Export namespace config as JSON |
| `POST /api/batch/namespaces/delete` | Batch delete namespaces |
| `POST /api/batch/instances/delete` | Batch delete instances |
| `POST /api/batch/instances/backup` | Batch backup instances to R2 |
| `POST /api/batch/keys/delete` | Log batch delete keys job |
| `POST /api/batch/keys/export` | Log batch export keys job |
| `POST /api/search/keys` | Search for keys across all instances |
| `POST /api/search/values` | Search within storage values |
| `GET /api/health` | Get system health summary |
| `GET /api/webhooks` | List configured webhooks |
| `POST /api/webhooks` | Create a new webhook |
| `PUT /api/webhooks/:id` | Update a webhook |
| `DELETE /api/webhooks/:id` | Delete a webhook |
| `POST /api/webhooks/:id/test` | Send a test webhook |
| `PUT /api/instances/:id/color` | Update instance color tag |
| `POST /api/instances/diff` | Compare storage between two instances |
| `GET /api/namespaces/:id/queries` | List saved SQL queries for namespace |
| `POST /api/namespaces/:id/queries` | Create a saved SQL query |
| `PUT /api/queries/:id` | Update a saved SQL query |
| `DELETE /api/queries/:id` | Delete a saved SQL query |
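As a usage illustration, here is a sketch of calling the REST API from a script. The base URL and credentials are placeholders for your own deployment; because the API sits behind Cloudflare Access, non-browser clients typically authenticate with an Access service token. The SQL request body shape is an assumption - check `worker/routes/` for the exact contract:

```typescript
// Minimal sketch: calling the DO Manager API from a script.
// BASE and the service token values are placeholders; the /sql body shape is an assumption.
const BASE = "https://do-manager.example.workers.dev";
const headers = {
  "Content-Type": "application/json",
  // Cloudflare Access service token headers for non-browser clients
  "CF-Access-Client-Id": "YOUR_SERVICE_TOKEN_ID",
  "CF-Access-Client-Secret": "YOUR_SERVICE_TOKEN_SECRET",
};

// List tracked namespaces
const namespaces = await (await fetch(`${BASE}/api/namespaces`, { headers })).json();
console.log(namespaces);

// Run a read-only SQL query against a SQLite-backed instance (hypothetical id)
const result = await fetch(`${BASE}/api/instances/abc123/sql`, {
  method: "POST",
  headers,
  body: JSON.stringify({ query: "SELECT name FROM sqlite_master WHERE type='table'" }),
});
console.log(await result.json());
```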
```
do-manager/
├── src/
│   ├── components/
│   │   ├── ui/            # shadcn components
│   │   ├── layout/        # Header, navigation
│   │   └── features/      # Feature components
│   ├── contexts/          # React contexts (theme)
│   ├── hooks/             # Custom hooks
│   ├── services/          # API clients
│   ├── types/             # TypeScript types
│   └── App.tsx
├── worker/
│   ├── routes/            # API route handlers
│   │   ├── namespaces.ts  # Namespace discovery & management
│   │   ├── instances.ts   # Instance tracking & cloning
│   │   ├── storage.ts     # Storage operations
│   │   ├── export.ts      # Instance export/download
│   │   ├── alarms.ts      # Alarm management
│   │   ├── backup.ts      # R2 backup/restore
│   │   ├── batch.ts       # Batch operations
│   │   ├── search.ts      # Cross-namespace search
│   │   ├── metrics.ts     # GraphQL analytics
│   │   ├── jobs.ts        # Job history
│   │   ├── webhooks.ts    # Webhook management
│   │   ├── health.ts      # Health dashboard API
│   │   ├── queries.ts     # Saved SQL queries
│   │   └── diff.ts        # Instance comparison
│   ├── types/             # Worker types
│   ├── utils/             # Utilities (CORS, auth, helpers, webhooks)
│   ├── schema.sql         # D1 schema
│   └── index.ts           # Worker entry
└── ...config files
```
"Failed to fetch from Cloudflare API"
- Verify `ACCOUNT_ID` is correct
- Ensure API token has Workers Scripts Read permission
- If using Global API Key, ensure email is correct in `worker/routes/namespaces.ts`
"Admin hook not configured"
- Add admin hook methods to your DO class
- Set the endpoint URL in namespace settings
- Ensure your Worker is deployed
"No namespaces discovered"
- You may not have any Durable Objects deployed
- System namespaces are filtered by default (see Hidden System Namespaces section)
"Clone instance creates empty instance"
- Ensure your admin hooks use the correct export/import format
- Export should return: `{ data: {...}, exportedAt: "...", keyCount: N }`
- Import should accept: `{ data: {...} }`
- The `do-manager-admin-hooks` NPM package handles this automatically
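If you maintain custom hooks instead of using the package, here is a sketch of the expected payload shapes. Field names follow the bullets above; the value types are illustrative assumptions:

```typescript
// Sketch of the export/import payload shapes described above.
// Field names come from the troubleshooting note; value types are assumptions.
interface AdminExportPayload {
  data: Record<string, unknown>; // key -> stored value
  exportedAt: string;            // ISO 8601 timestamp
  keyCount: number;              // number of entries in `data`
}

interface AdminImportPayload {
  data: Record<string, unknown>; // keys to write into instance storage
}

// Example export response body
const example: AdminExportPayload = {
  data: { greeting: { text: "hello" } },
  exportedAt: new Date().toISOString(),
  keyCount: 1,
};
```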
Authentication loop
- Check `TEAM_DOMAIN` includes `https://`
- Verify `POLICY_AUD` matches your Access application's AUD tag
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
MIT License - see LICENSE for details.
- 🐛 Bug Reports: GitHub Issues
- 📧 Email: admin@adamic.tech
Made with ❤️ for the Cloudflare community