Know who's burning through your AI budget before the invoice tells you.
Engineering costs used to be two things: headcount and cloud infrastructure. You had tools for both. Then AI coding assistants showed up, and suddenly there's a third cost center that nobody has good tooling for.
A single developer on Cursor can burn through hundreds of dollars a day just by switching to an expensive model or letting an agent loop run wild. Now multiply that by 50, 100, 500 developers. The bill gets big fast, and there's nothing like Datadog or CloudHealth for this category yet.
Cursor's admin dashboard shows you the raw numbers, but it won't tell you when something is off. No anomaly detection. No alerts. No incident tracking. You find out about cost spikes when the invoice lands, weeks after the damage is done.
I built cursor-usage-tracker to fix that. It sits on top of Cursor's Enterprise APIs and gives engineering managers, finance, and platform teams actual visibility into AI spend before it becomes a surprise.
Your company has 50+ developers on Cursor. Do you know who's spending $200/day on Claude Opus while everyone else uses Sonnet?
You're about to find out.
Demo animation created with remotion-readme-kit
It connects to Cursor's Enterprise APIs, collects usage data, and automatically detects anomalies across three layers. When something looks off, you get a Slack message or email within the hour, not next month.
Developer uses Cursor → API collects data hourly → Engine detects anomaly → You get a Slack alert
| What happens | Example |
|---|---|
| A developer exceeds the spend limit | Bob spent $82 this cycle (limit: $50) → Slack alert |
| Someone's daily spend spikes | Alice: daily spend spiked to $214 (4.2x her 7-day avg of $51) → Slack alert |
| A user's cycle spend is far above the team | Bob: cycle spend $957 is 5.1x the team median ($188) → Slack alert |
| A user is statistically far from the team | Bob: daily spend $214 is 3.2σ above team mean ($42) → Slack alert |
Every alert includes who, what model, how much, and a link to their dashboard page so you can investigate immediately.
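The trend rules behind these examples are simple arithmetic. As a rough sketch (names and types are illustrative, not the project's actual internals), the daily-spike check compares today's spend to the user's own recent average:

```ts
// Illustrative daily spend-spike check: flag a user whose spend today
// far exceeds their own recent daily average. Names are assumptions,
// not the project's actual internals.
function isSpendSpike(
  todayUsd: number,
  recentDailyUsd: number[], // e.g. the last 7 days (the default lookback)
  multiplier = 5.0          // the default "Spend spike multiplier"
): boolean {
  if (recentDailyUsd.length === 0) return false;
  const avg =
    recentDailyUsd.reduce((sum, x) => sum + x, 0) / recentDailyUsd.length;
  return avg > 0 && todayUsd > multiplier * avg;
}

// isSpendSpike(214, [40, 55, 60, 45, 50, 52, 55]) compares $214 to a ~$51/day average
```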
| Layer | Method | What it catches |
|---|---|---|
| Thresholds | Static limits | Optional hard caps on spend, requests, or tokens (disabled by default) |
| Z-Score | Statistical | User daily spend 2.5+ standard deviations above team mean (active users only) |
| Trends | Spend-based | Daily spend spikes vs personal average, cycle spend outliers vs team median |
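The z-score layer is similarly small: compute the team's mean and standard deviation over active users, then flag anyone far above. A hedged sketch (types and names are assumptions, not the project's internals):

```ts
// Illustrative z-score layer: flag active users whose daily spend sits
// far above the team mean. Types and names are assumptions.
interface DailySpend {
  email: string;
  spendUsd: number;
}

function zScoreAnomalies(
  team: DailySpend[],
  multiplier = 2.5 // the default "Z-score multiplier" setting
): { email: string; z: number }[] {
  // Active users only, as the table above notes.
  const active = team.filter((u) => u.spendUsd > 0);
  if (active.length < 2) return [];

  const mean = active.reduce((s, u) => s + u.spendUsd, 0) / active.length;
  const variance =
    active.reduce((s, u) => s + (u.spendUsd - mean) ** 2, 0) / active.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return [];

  return active
    .map((u) => ({ email: u.email, z: (u.spendUsd - mean) / stdDev }))
    .filter((r) => r.z >= multiplier);
}
```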
Every anomaly becomes a tracked incident with full lifecycle metrics:
```
Anomaly Detected ──→ Alert Sent ──→ Acknowledged ──→ Resolved
│                    │              │                │
└─────── MTTD ───────┘              │                │
                     └──── MTTI ────┘                │
└─────────────────────── MTTR ───────────────────────┘
```
- MTTD (Mean Time to Detect): how fast the system catches it
- MTTI (Mean Time to Identify): how fast a human acknowledges it
- MTTR (Mean Time to Resolve): how fast it gets fixed
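In code, these metrics are just averages over incident timestamps. A minimal sketch, following the diagram above (field names are hypothetical, not the project's actual schema):

```ts
// Illustrative lifecycle math. Field names are hypothetical.
interface Incident {
  detectedAt: number;      // anomaly detected (ms epoch)
  alertSentAt?: number;    // alert delivered
  acknowledgedAt?: number; // a human picked it up
  resolvedAt?: number;     // incident closed
}

const mean = (xs: number[]) =>
  xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;

// Collect (end - start) durations for incidents where both stamps exist.
const spans = (
  incidents: Incident[],
  start: (i: Incident) => number | undefined,
  end: (i: Incident) => number | undefined
) =>
  incidents.flatMap((i) => {
    const s = start(i);
    const e = end(i);
    return s !== undefined && e !== undefined ? [e - s] : [];
  });

function lifecycleMetrics(incidents: Incident[]) {
  return {
    mttdMs: mean(spans(incidents, (i) => i.detectedAt, (i) => i.alertSentAt)),
    mttiMs: mean(spans(incidents, (i) => i.alertSentAt, (i) => i.acknowledgedAt)),
    mttrMs: mean(spans(incidents, (i) => i.detectedAt, (i) => i.resolvedAt)),
  };
}
```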
- Slack: Block Kit messages via bot token (`chat.postMessage`) with severity, user, model, value vs threshold, and dashboard links. Alerts are batched automatically: individual messages for 1-3 anomalies, a single summary for 4+.
- Email: HTML-formatted alerts via Resend (one API key, no SMTP config)
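If you're curious what the Slack delivery involves, `chat.postMessage` with a bot token is a single HTTP call. A sketch (the block layout is illustrative, not the project's exact payload):

```ts
// Minimal sketch: post a Block Kit alert via Slack's chat.postMessage.
// The block layout here is illustrative, not the project's exact payload.
async function sendSlackAlert(text: string, detail: string) {
  const res = await fetch("https://slack.com/api/chat.postMessage", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      channel: process.env.SLACK_CHANNEL_ID,
      text, // plain-text fallback for notifications
      blocks: [
        { type: "header", text: { type: "plain_text", text } },
        { type: "section", text: { type: "mrkdwn", text: detail } },
      ],
    }),
  });
  const body = await res.json();
  if (!body.ok) throw new Error(`Slack API error: ${body.error}`);
}

// Example (values mirror the anomaly examples above):
// await sendSlackAlert(
//   "Spend anomaly: bob@company.com",
//   "*$214* today is *4.2x* the 7-day average ($51). <http://localhost:3000|Open dashboard>"
// );
```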
| Page | What you see |
|---|---|
| Team Overview | Stat cards, spend by user, daily spend trend, spend breakdown, members table with search/sort, group filter dropdown, billing cycle progress, time range picker |
| Insights | DAU chart, model adoption trends, model efficiency rankings (cost/precision), MCP tool usage, file extensions, client versions |
| User Drilldown | Per-user token timeline, model breakdown, feature usage, activity profile, anomaly history |
| Anomalies | Open incidents, MTTD/MTTI/MTTR metrics, full anomaly timeline |
| Settings | Detection thresholds, billing group management (rename, assign, create), HiBob CSV import with change preview |
| What | Where to get it |
|---|---|
| Cursor Enterprise plan | Required for API access |
| Admin API key | Cursor dashboard → Settings → Advanced → Admin API Keys |
| Node.js 18+ | nodejs.org |
Option A: One command

```bash
npx cursor-usage-tracker my-tracker
cd my-tracker
```

Option B: Manual clone

```bash
git clone https://github.com/ofershap/cursor-usage-tracker.git
cd cursor-usage-tracker
npm install
```

Then create your config file:

```bash
cp .env.example .env
```

Edit `.env`:
```bash
# Required
CURSOR_ADMIN_API_KEY=your_admin_api_key

# Alerting — Slack (at least one alerting channel recommended)
SLACK_BOT_TOKEN=xoxb-your-bot-token   # bot token with chat:write scope
SLACK_CHANNEL_ID=C0123456789          # channel to post alerts to

# Dashboard URL (used in alert links)
DASHBOARD_URL=http://localhost:3000

# Optional
CRON_SECRET=your_secret_here          # protects the cron endpoint
DASHBOARD_PASSWORD=your_password      # optional basic auth for the dashboard

# Email alerts via Resend (optional)
RESEND_API_KEY=re_xxxxxxxxxxxx
ALERT_EMAIL_TO=team-lead@company.com
```

Start the dev server:

```bash
npm run dev
# Open http://localhost:3000
```

Then run your first collection:

```bash
npm run collect
```

You should see:

```
[collect] Done in 4.2s
  Members: 87
  Daily usage: 30
  Spending: 87
  Usage events: 12,847
```
After collecting data, run detection separately:
```bash
npm run detect
```

This runs the stored data through all three detection layers and sends alerts for anything it finds.

`npm run collect` only fetches data; `npm run detect` only runs detection. The cron endpoint (`POST /api/cron`) does both in one call.
Trigger the cron endpoint hourly (via crontab, GitHub Actions, or any scheduler):
```bash
curl -X POST http://localhost:3000/api/cron -H "x-cron-secret: YOUR_SECRET"
```

This collects data, runs anomaly detection, and sends alerts in one call.
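For example, a crontab entry for an hourly run (the host is a placeholder for wherever you deploy the tracker):

```
# m h dom mon dow   command — collect + detect + alert every hour
0 * * * * curl -s -X POST https://tracker.example.com/api/cron -H "x-cron-secret: YOUR_SECRET"
```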
For production deployment:
```bash
cp .env.example .env   # configure your keys
docker compose up -d
# Dashboard at http://localhost:3000
```

The Docker image uses multi-stage builds for a minimal production image. Data persists in a Docker volume.
Docker Compose details
```yaml
services:
  tracker:
    build: .
    ports:
      - "3000:3000"
    env_file: .env
    volumes:
      - tracker-data:/app/data

volumes:
  tracker-data:
```

The overall architecture:

```mermaid
flowchart TB
  APIs["Cursor Enterprise APIs\n/teams/members · /teams/spend · /teams/daily-usage-data\n/teams/filtered-usage-events · /teams/groups · /analytics/team/*"]
  C["Collector (hourly)"]
  DB[("SQLite (local)")]
  D["Detection Engine, 3 layers"]
  AL["Alerts: Slack / Email"]
  DA["Dashboard: Next.js"]
  APIs --> C --> DB --> D
  DB --> DA
  D --> AL
```
No external infrastructure. SQLite stores everything locally. No Postgres, no Redis, no cloud database. Clone, configure, run.
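To illustrate that choice: better-sqlite3 opens a plain file and queries it synchronously, with no server process involved. The table and query below are assumptions for illustration, not the project's actual schema:

```ts
import Database from "better-sqlite3";

// Open (or create) the local file — no server, no connection string.
const db = new Database("data/tracker.db");

// Table and query are assumptions for illustration, not the real schema.
db.exec(`
  CREATE TABLE IF NOT EXISTS daily_spend (
    email     TEXT NOT NULL,
    day       TEXT NOT NULL,
    spend_usd REAL NOT NULL,
    PRIMARY KEY (email, day)
  )
`);

const topSpenders = db
  .prepare(
    `SELECT email, SUM(spend_usd) AS total
     FROM daily_spend GROUP BY email
     ORDER BY total DESC LIMIT 5`
  )
  .all();
console.log(topSpenders);
```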
All detection thresholds are configurable via the Settings page or the API:
| Setting | Default | What it does |
|---|---|---|
| Max spend per cycle | 0 (off) | Alert when a user exceeds this in a billing cycle |
| Max requests per day | 0 (off) | Alert on excessive daily request count |
| Max tokens per day | 0 (off) | Alert on excessive daily token consumption |
| Z-score multiplier | 2.5 | How many standard deviations above the team mean to flag (applies to both spend and requests) |
| Z-score window | 7 days | Historical window for statistical comparison |
| Spend spike multiplier | 5.0x | Alert when today's spend > N× user's personal daily average |
| Spend spike lookback | 7 days | How many days of history to compare against |
| Cycle outlier multiplier | 10.0x | Alert when cycle spend > N× team median (active users only) |
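These can also be changed programmatically. A sketch against `PUT /api/settings` (the payload field name is an assumption; check the Settings page or the source for the actual schema):

```ts
// Sketch: tighten the spend-spike rule via the settings endpoint.
// The payload field name is an assumption, not a documented schema.
await fetch("http://localhost:3000/api/settings", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ spendSpikeMultiplier: 3.0 }), // default is 5.0
});
```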
Billing Groups: organize teams by department, group, or custom structure
Billing groups let you organize team members by department, team, or any structure that fits your org.
Dashboard Filtering
The Team Overview page includes a group filter dropdown next to the search bar. Select a group to instantly filter all stats, charts, and the members table to that subset. Groups are displayed in a hierarchical Parent > Team format.
Settings Page
From the Settings page you can:
- View all groups with member counts and per-group spend
- Rename groups to match your org structure
- Reassign members between groups
- Create new groups manually
- Search across all members to find who's in which group
HiBob Import: sync your org structure from HiBob's People Directory
For teams using HiBob as their HR platform, the Settings page includes an Import from HiBob feature:
1. Download a CSV export from HiBob's People Directory
2. Upload it to the import modal in Settings
3. Review the preview: see which members will be moved, which groups will be created, and which members weren't matched
4. Selectively approve or reject individual changes before applying
The import uses HiBob's Group and Team columns (falling back to Department) to build a Group > Team hierarchy. Small teams (fewer than 3 members) are automatically consolidated into broader groups to avoid excessive granularity.
The HiBob import updates your local billing groups only. It does not push changes back to HiBob or to Cursor's billing API.
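The mapping is roughly this shape. A sketch under the rules described above (the column handling and types are illustrative; the real import also does member matching and the change preview):

```ts
// Sketch of the Group > Team mapping with small-team consolidation.
// Column names mirror the HiBob CSV described above; types are assumptions.
interface HiBobRow {
  email: string;
  group?: string;
  team?: string;
  department?: string;
}

const MIN_TEAM_SIZE = 3;

function buildGroups(rows: HiBobRow[]): Map<string, string[]> {
  // First pass: full "Group > Team" labels, falling back to Department.
  const byLabel = new Map<string, string[]>();
  for (const row of rows) {
    const group = row.group ?? row.department ?? "Unassigned";
    const label = row.team ? `${group} > ${row.team}` : group;
    byLabel.set(label, [...(byLabel.get(label) ?? []), row.email]);
  }

  // Second pass: fold teams under the size threshold into their parent group.
  const result = new Map<string, string[]>();
  for (const [label, members] of byLabel) {
    const target =
      members.length < MIN_TEAM_SIZE && label.includes(" > ")
        ? label.split(" > ")[0]
        : label;
    result.set(target, [...(result.get(target) ?? []), ...members]);
  }
  return result;
}
```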
| Endpoint | Method | Description |
|---|---|---|
| `/api/cron` | POST | Collect + detect + alert (use with scheduler) |
| `/api/stats` | GET | Dashboard statistics (`?days=7`) |
| `/api/analytics` | GET | Analytics data: DAU, models, MCP, etc. (`?days=30`) |
| `/api/team-spend` | GET | Daily team spend breakdown |
| `/api/model-costs` | GET | Model cost breakdown by users and spend |
| `/api/groups` | GET | Billing groups with members and spend |
| `/api/groups` | PATCH | Rename group, assign member, or create group |
| `/api/groups/import` | POST | HiBob CSV import (preview + apply) |
| `/api/anomalies` | GET | Anomaly timeline (`?days=30`) |
| `/api/users/[email]` | GET | Per-user statistics (`?days=30`) |
| `/api/incidents/[id]` | PATCH | Acknowledge or resolve incident |
| `/api/settings` | GET/PUT | Detection configuration |
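For example, acknowledging an incident from a script is one PATCH call (the request body shape is an assumption, not a documented contract):

```ts
// Sketch: acknowledge incident 42 via the incidents endpoint.
// The body shape is an assumption, not a documented contract.
await fetch("http://localhost:3000/api/incidents/42", {
  method: "PATCH",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ status: "acknowledged" }),
});
```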
| Component | Technology |
|---|---|
| Framework | Next.js (App Router) |
| Language | TypeScript (strict mode) |
| Database | SQLite (better-sqlite3), zero config |
| Charts | Recharts |
| Styling | Tailwind CSS |
| Testing | Vitest |
| Deployment | Docker (multi-stage) |
```bash
npm run dev        # Start dev server
npm run collect    # Manual data collection
npm run detect     # Manual anomaly detection + alerting
npm run typecheck  # Type checking
npm test           # Run tests
npm run lint       # Lint + format check
```

Requires a Cursor Enterprise plan. The tool uses these endpoints:
| Endpoint | Auth | What it provides |
|---|---|---|
| `GET /teams/members` | Admin API key | Team member list |
| `POST /teams/spend` | Admin API key | Per-user spending data |
| `POST /teams/daily-usage-data` | Admin API key | Daily usage metrics |
| `POST /teams/filtered-usage-events` | Admin API key | Detailed usage events with model/token info |
| `POST /teams/groups` | Admin API key | Billing groups + cycle dates |
| `GET /analytics/team/*` | Analytics API key | DAU, model usage, MCP, tabs, etc. (optional) |
Rate limits: 20 requests/minute for the Admin API and 100 requests/minute for the Analytics API. The collector handles both with automatic retry.
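A retry loop for limits like these can be as small as backing off on HTTP 429. A sketch of the idea, not the collector's exact code:

```ts
// Sketch: retry with exponential backoff when the API answers 429.
// This mirrors the idea, not the collector's exact implementation.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Honor Retry-After if present, otherwise back off exponentially.
    const retryAfter = Number(res.headers.get("retry-after"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise((r) => setTimeout(r, delayMs));
  }
}
```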
This project handles sensitive usage and spending data, so security matters here more than most.
- Vulnerability reporting: See SECURITY.md for the disclosure policy. Report vulnerabilities privately via GitHub Security Advisories, not public issues.
- Automated scanning: Every push and PR goes through CodeQL (SQL injection, XSS, CSRF, etc.) and Dependabot for dependency vulnerabilities.
- OpenSSF Scorecard: Continuously evaluated against OpenSSF Scorecard security benchmarks.
- OpenSSF Best Practices: Passing badge earned.
- Data stays local: Everything is stored in a local SQLite file. Nothing leaves your infrastructure. No external databases, no cloud services, no telemetry.
- Small dependency tree: Fewer dependencies = smaller attack surface.
- Signed releases: Automated via semantic-release with GitHub-verified provenance.
See CONTRIBUTING.md for setup and guidelines. Bug reports, feature requests, docs improvements, and code are all welcome. Use conventional commits and make sure CI is green before opening a PR.
This project uses the Contributor Covenant Code of Conduct.
Ofer Shapira
MIT © Ofer Shapira