TuvixRSS Deployment Guide

This guide covers development and release processes for both Docker Compose and Cloudflare (Workers + Pages) deployments.

Table of Contents
- Overview
- Docker Compose Deployment
- Cloudflare Deployment
- Shared Topics
- CI/CD Integration
- Security Checklist
- Performance Optimization
- Next Steps
TuvixRSS supports two deployment targets:
- Docker Compose - Traditional container-based deployment with Node.js runtime
- Cloudflare - Serverless edge deployment:
  - API: Cloudflare Workers (serverless edge runtime) - typically deployed to api.example.com
  - Frontend: Cloudflare Pages (static site hosting) - typically deployed to feed.example.com
Both deployments share the same codebase with runtime-specific adapters.
Example Domain Structure:
- example.com - Static blog (optional, separate from TuvixRSS)
- feed.example.com - Frontend Pages app (TuvixRSS UI)
- api.example.com - Worker API (TuvixRSS backend)
| Feature | Docker Compose | Cloudflare Workers |
|---|---|---|
| Runtime | Node.js 20+ | Cloudflare Workers |
| Database | SQLite (better-sqlite3) | D1 (Cloudflare's SQLite) |
| Cron | node-cron | Workers Scheduled Events |
| Rate Limiting | Disabled | Cloudflare Workers rate limit bindings |
TuvixRSS uses Better Auth for authentication, which manages user sessions via HTTP-only cookies. The BETTER_AUTH_SECRET environment variable (minimum 32 characters) is used to sign and verify session cookies securely. No JWT tokens are used.
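The 32-character minimum can be sanity-checked before deployment with a small shell helper (a sketch; `check_secret` is a hypothetical name, not part of TuvixRSS):

```shell
# Returns success only when the secret meets the 32-character minimum.
check_secret() {
  [ "${#1}" -ge 32 ]
}

# Example: generate a compliant secret and verify it
SECRET="$(openssl rand -base64 32)"
check_secret "$SECRET" && echo "secret length OK"
```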
Local Development: Better Auth works perfectly with localhost setups:
- Frontend and API can run on different ports (e.g., localhost:5173 and localhost:3001)
- Cookies are handled automatically (the localhost domain works for both ports)
- CORS is configured with credentials: true to allow cookies
Cross-Subdomain: If your frontend and API are on different subdomains (e.g., feed.example.com and api.example.com), configure the COOKIE_DOMAIN secret to the root domain (e.g., example.com) to enable cross-subdomain cookies.
- Docker 20.10+
- Docker Compose 2.0+
- Git
# Clone repository
git clone https://github.com/TechSquidTV/Tuvix-RSS.git
cd Tuvix-RSS
# Copy environment file
cp .env.example .env
# Generate secure Better Auth secret
openssl rand -base64 32
# Edit .env file with your values
vim .env

Required Environment Variables:
BETTER_AUTH_SECRET=your-generated-secret-here # Min 32 chars
CORS_ORIGIN=http://localhost:5173
DATABASE_PATH=/app/data/tuvix.db
PORT=3001
NODE_ENV=production
BASE_URL=http://localhost:5173 # Frontend URL for Better Auth callbacks

Admin User Setup:
TuvixRSS supports two methods for creating an admin user:
Option 1: Admin Bootstrap on Startup (RECOMMENDED - Secure) Provide admin credentials to create an admin user automatically when the container starts:
ADMIN_USERNAME=admin
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=your-secure-password-here

Note: All three ADMIN_* variables must be set for bootstrap to work. If an admin already exists, these will be ignored.
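The all-or-nothing precondition for bootstrap can be sketched as a shell check (hedged; `bootstrap_ready` is an illustrative helper, not code from the repo):

```shell
# True only when all three ADMIN_* variables are non-empty,
# mirroring the documented bootstrap precondition.
bootstrap_ready() {
  [ -n "${ADMIN_USERNAME:-}" ] && [ -n "${ADMIN_EMAIL:-}" ] && [ -n "${ADMIN_PASSWORD:-}" ]
}
```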
Option 2: First User Auto-Promotion
ALLOW_FIRST_USER_ADMIN=true

If enabled, the first person to register becomes admin. This is convenient for quick setup.
Timing consideration: If your deployment is publicly accessible, ensure you register first before others discover the instance. For internet-exposed production deployments, Option 1 (bootstrap) is more deterministic.
Recommended for:
- Local development/testing
- Private networks or VPN-only access
- Deployments where you control registration access
# Build Docker images
pnpm run docker:build
# Start containers
pnpm run docker:up
# View logs
pnpm run docker:logs
# Stop containers
pnpm run docker:down

Services:
- api - tRPC API server (port 3001)
  - Health check: http://localhost:3001/health
  - tRPC endpoint: http://localhost:3001/trpc
- app - React frontend (port 5173)
  - Health check: http://localhost:5173/health
For active development without Docker:
# Install dependencies
pnpm install
# Run database migrations
pnpm run db:migrate
# Start both API and frontend
pnpm run dev
# Or start separately:
pnpm run dev:api # API on :3001
pnpm run dev:app # Frontend on :5173

# Run all tests
pnpm run test
# Run tests with coverage
pnpm run test:coverage
# Type checking
pnpm run type-check
# Linting
pnpm run lint
# Pre-release checks
pnpm run pre-check

The docker-compose.yml supports both pre-built images and source builds. Choose your preferred method:
Advantages:
- ✅ No build step required (faster deployment)
- ✅ Version tag embedded in image (shows in settings)
- ✅ Multi-arch support (amd64 & arm64)
- ✅ Consistent builds across environments
Setup:
# On your production server
mkdir Tuvix-RSS && cd Tuvix-RSS
# Download docker-compose and env files
curl -O https://raw.githubusercontent.com/TechSquidTV/Tuvix-RSS/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/TechSquidTV/Tuvix-RSS/main/env.example
cp env.example .env
vim .env # Configure your environment

Pin to a specific version (recommended for production):
# Set version in .env file
echo "VERSION=v0.7.0" >> .env
# Or export temporarily
export VERSION=v0.7.0
# Create data directory with proper permissions
# The container runs as uid 1001, so the directory must be writable
mkdir data
chmod 777 data # Or: chown 1001:1001 data
docker compose pull
docker compose up -d

Deploy:
# Verify health
curl http://localhost:3001/health
curl http://localhost:5173/health # app container listens on 8080, exposed on host as 5173
# Monitor logs
docker compose logs -f

Updates:
# Update to new version
export VERSION=v0.7.0 # Or update in .env
docker compose pull
docker compose up -d

Advantages:
- ✅ Full control over build process
- ✅ Can modify code before deployment
- ✅ No external registry dependencies
Setup:
# On your production server
git clone https://github.com/yourusername/TuvixRSS.git
cd TuvixRSS
# Create production environment file
cp .env.example .env
vim .env

Production Environment Variables:
# SECURITY: Use strong secrets in production
BETTER_AUTH_SECRET=<generate-with-openssl-rand-base64-32> # Min 32 chars
CORS_ORIGIN=https://feed.example.com # Frontend URL (or multiple origins comma-separated)
DATABASE_PATH=/app/data/tuvix.db
PORT=3001
NODE_ENV=production
BASE_URL=https://feed.example.com # Frontend URL for Better Auth callbacks
# Admin Setup (choose one option)
# Option 1: Bootstrap admin on startup (recommended for production)
ADMIN_USERNAME=admin
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=<secure-password>
# Option 2: First user auto-promotion (convenient for dev/testing)
# ALLOW_FIRST_USER_ADMIN=true
# Optional: Email service (for verification emails, password resets)
# RESEND_API_KEY=re_xxxxxxxxx
# EMAIL_FROM=noreply@yourdomain.com
# Optional: AI Features (requires Pro or Enterprise plan)
# OpenAI API key for AI-powered category suggestions
# Get your API key from: https://platform.openai.com/api-keys
# OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
# Optional: Customize fetch behavior
FETCH_INTERVAL_MINUTES=60 # How often to fetch RSS feeds

Deploy:
# Build and start
docker compose build
docker compose up -d
# Verify health
curl http://localhost:3001/health
curl http://localhost:5173/health # app container listens on 8080, exposed on host as 5173
# Monitor logs
docker compose logs -f

Note: When building from source, the version displayed in settings will default to "docker". To show the git version, set VITE_APP_VERSION before building:
# Set version to git tag
export VITE_APP_VERSION=$(git describe --tags --always)
docker compose build

# Backup database
docker compose exec api cp /app/data/tuvix.db /app/data/backup-$(date +%Y%m%d).db
# Or from host (if volume is mounted)
cp ./data/tuvix.db ./data/backup-$(date +%Y%m%d).db

For Pre-built Images:
# Update version
export VERSION=v0.7.0 # Or update in .env
# Pull and restart
docker compose pull
docker compose up -d

For Source Builds:
# Pull latest code
git pull origin main
# Or checkout specific release
git fetch --tags
git checkout v0.7.0
# Rebuild and restart
docker compose down
export VITE_APP_VERSION=$(git describe --tags --always)
docker compose build
docker compose up -d
# Database migrations run automatically on startup

Important: Both Dockerfiles use the monorepo root as the build context and copy workspace files. This ensures the correct pnpm-lock.yaml from the workspace root is used.
API Dockerfile (packages/api/Dockerfile):
- Multi-stage build (builder + production)
- Build context: monorepo root (not packages/api)
- Copies workspace files (pnpm-workspace.yaml, root pnpm-lock.yaml)
- Installs pnpm 10.19.0
- Installs dependencies for all needed packages (api + tricorder)
- Runs migrations on startup
- Exposes port 3001
- Health check on /health endpoint
App Dockerfile (packages/app/Dockerfile):
- Multi-stage build with nginx
- Build context: monorepo root (not packages/app)
- Copies workspace files (pnpm-workspace.yaml, root pnpm-lock.yaml)
pnpm-workspace.yaml, rootpnpm-lock.yaml) - Accepts VITE_API_URL build arg (API endpoint for frontend)
- Accepts VITE_APP_VERSION build arg (version displayed in settings, defaults to "docker")
- SPA routing support
- Static asset caching
- Health check on /health endpoint
services:
api:
build:
context: .
dockerfile: ./packages/api/Dockerfile
ports:
- "3001:3001"
volumes:
- ./data:/app/data
environment:
- DATABASE_PATH=/app/data/tuvix.db
- BETTER_AUTH_SECRET=${BETTER_AUTH_SECRET}
- CORS_ORIGIN=${CORS_ORIGIN}
healthcheck:
test: ["CMD", "wget", "--spider", "http://localhost:3001/health"]
interval: 30s
timeout: 3s
retries: 3
app:
build:
context: .
dockerfile: ./packages/app/Dockerfile
args:
- VITE_API_URL=${VITE_API_URL:-http://localhost:3001/trpc}
- VITE_APP_VERSION=${VITE_APP_VERSION:-docker}
ports:
- "5173:8080"
depends_on:
api:
condition: service_healthy

Build Arguments:
- VITE_API_URL: The API endpoint (defaults to http://localhost:3001/trpc)
- VITE_APP_VERSION: Version string shown in settings page (defaults to docker)
  - Can be set to a git commit SHA: VITE_APP_VERSION=$(git rev-parse --short HEAD)
  - Or a version tag: VITE_APP_VERSION=v1.2.3
# Check container health
docker compose ps
docker inspect tuvix-api | grep -A 5 Health
# Health check endpoints
curl http://localhost:3001/health
# Response: {"status":"ok","runtime":"nodejs"}
curl http://localhost:5173/health
# Response: ok

# All services
docker compose logs -f
# Specific service
docker compose logs -f api
# Last 100 lines
docker compose logs --tail=100 api
# Filter cron logs
docker compose logs -f api | grep "RSS fetch\|Prune"

Port Already in Use:
# Check what's using the port
lsof -i :3001
# Change port in .env
PORT=3002

Database Locked:
# Stop all containers
docker compose down
# Remove stale lock
rm -f ./data/tuvix.db-shm ./data/tuvix.db-wal
# Restart
docker compose up -d

- Node.js 20+
- pnpm
- Cloudflare account (Sign up)
- Wrangler CLI (npm install -g wrangler, or use npx wrangler)
# 1. Authenticate
npx wrangler login
# 2. Setup API (Workers)
cd packages/api
# Follow "API Setup" section below
# 3. Setup Frontend (Pages)
cd packages/app
# Follow "Frontend Setup" section below

Authenticate:
npx wrangler login

Create D1 Database:
cd packages/api
# Create database
npx wrangler d1 create tuvix
# Output will show:
# ✅ Successfully created DB 'tuvix'!
# database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # Copy this ID
#
# For local development: Create wrangler.toml.local with this ID
# For CI/CD: Add this ID as D1_DATABASE_ID GitHub secret

The wrangler.example.toml file serves as a template with environment variable placeholders. You'll create your own wrangler.toml from this template:
name = "tuvix-api"
main = "src/adapters/cloudflare.ts"
compatibility_date = "2024-11-10"
compatibility_flags = ["nodejs_als"]
# D1 Database binding
[[d1_databases]]
binding = "DB"
database_name = "tuvix"
database_id = "${D1_DATABASE_ID}" # Substituted by envsubst in CI/CD
# Plan-specific API Rate Limit Bindings
# Each plan has its own binding with the plan's rate limit
[[ratelimits]]
name = "FREE_API_RATE_LIMIT"
namespace_id = "1003"
simple = { limit = 60, period = 60 } # Free plan: 60 requests per minute
[[ratelimits]]
name = "PRO_API_RATE_LIMIT"
namespace_id = "1004"
simple = { limit = 180, period = 60 } # Pro plan: 180 requests per minute
[[ratelimits]]
name = "ENTERPRISE_API_RATE_LIMIT"
namespace_id = "1005"
simple = { limit = 600, period = 60 } # Enterprise/admin plan: 600 requests per minute
# Public Feed Rate Limiting (unchanged)
[[ratelimits]]
name = "FEED_RATE_LIMIT"
namespace_id = "1001" # User-defined positive integer
simple = { limit = 10000, period = 60 }
[vars]
RUNTIME = "cloudflare"
[triggers]
crons = ["*/5 * * * *"] # Every 5 minutes

Security Notes:
- ✅ Safe to commit: wrangler.example.toml with environment variable placeholders
- ❌ Never commit: wrangler.toml with filled-in values (now gitignored)
- 🔒 Use GitHub Secrets (not Variables) for sensitive data
Local Development Setup:
For local development, you have two options:
Option 1: Create wrangler.toml.local (Recommended for quick setup):
cd packages/api
# Copy the example file
cp wrangler.toml.local.example wrangler.toml.local
# Edit wrangler.toml.local and replace "your-database-id-here" with your actual D1 database ID
# Example: database_id = "7078240d-69e3-46fb-bb21-aa8e5208de9b"

Note: wrangler.toml.local is gitignored and will override values when scripts read configuration.
Option 2: Create wrangler.toml directly (Alternative):
cd packages/api
# Copy the example to wrangler.toml
cp wrangler.example.toml wrangler.toml
# Edit wrangler.toml and replace ${D1_DATABASE_ID} with your actual database ID
# Note: wrangler.toml is gitignored, so your values won't be committed

Deployment Scripts:
The deployment scripts (deploy.sh, migrate-d1.sh) automatically create wrangler.toml from wrangler.example.toml and substitute the database ID from either:
- D1_DATABASE_ID environment variable, OR
- wrangler.toml.local file
CI/CD Setup:
The GitHub Actions workflow automatically creates wrangler.toml from wrangler.example.toml and substitutes ${D1_DATABASE_ID} before deployment.
To configure:
- Go to your GitHub repository → Settings → Secrets and variables → Actions
- Click the "Secrets" tab (not "Variables")
- Click "New repository secret"
- Name: D1_DATABASE_ID (must match exactly)
- Value: Your D1 database ID (from wrangler d1 create tuvix)
- Click "Add secret"
Why Secrets instead of Variables?
- Secrets are encrypted and masked in logs (use for sensitive data)
- Variables are plain text and visible in logs (use for non-sensitive configuration)
Required Secrets:
cd packages/api
# Better Auth secret (min 32 chars)
npx wrangler secret put BETTER_AUTH_SECRET
# Generate with: openssl rand -base64 32
# First user auto-promotion to admin (optional - enabled by default if not set)
# Only set this secret if you want to explicitly control the behavior
npx wrangler secret put ALLOW_FIRST_USER_ADMIN
# Enter: true (to enable, default) or false (to disable)
# CORS origin (frontend URL) - Set BEFORE deploying API
npx wrangler secret put CORS_ORIGIN
# Enter: https://feed.example.com (if using custom domain)
# Or: https://your-pages-project.pages.dev (if using Pages default)
# Multiple origins: https://feed.example.com,https://your-pages-project.pages.dev
# Base URL for Better Auth (REQUIRED for production)
# Better Auth uses this for generating callback URLs and session management
# Must be your production API URL, NOT localhost
npx wrangler secret put BASE_URL
# Enter: https://api.example.com (if using custom domain)
# Or: https://your-worker.workers.dev (if using Workers default domain)
# Example: https://api.tuvix.app

Optional Secrets:
# Email service (Resend)
# See docs/developer/email-system.md for complete email setup guide
npx wrangler secret put RESEND_API_KEY
npx wrangler secret put EMAIL_FROM
npx wrangler secret put BASE_URL
# AI Features (requires Pro or Enterprise plan)
# OpenAI API key for AI-powered category suggestions
# Get your API key from: https://platform.openai.com/api-keys
npx wrangler secret put OPENAI_API_KEY
# Enter: sk-proj-xxxxxxxxxxxxx
# Cross-subdomain cookies (if frontend/API on different subdomains)
npx wrangler secret put COOKIE_DOMAIN
# Enter: example.com (root domain, not subdomain like api.example.com)
# Sentry Error Tracking (Optional but recommended)
# Get DSN from: https://techsquidtv.sentry.io/settings/projects/tuvix-api/keys/
npx wrangler secret put SENTRY_DSN
# Enter: https://xxx@xxx.ingest.sentry.io/xxx
npx wrangler secret put SENTRY_ENVIRONMENT
# Enter: production
# Optional: Release tracking (git commit SHA or version)
npx wrangler secret put SENTRY_RELEASE
# Enter: v1.0.0 or git commit SHA

Note: For first deployment, run migrations BEFORE deploying. For subsequent deployments, migrations can run before or after deployment (CI/CD runs them after deployment).
cd packages/api
# Generate migrations from schema changes (if schema was modified)
pnpm run db:generate
# Apply migrations to production D1
pnpm run db:migrate:d1
# Verify migrations
npx wrangler d1 execute tuvix --remote \
--command "SELECT name FROM sqlite_master WHERE type='table';"

cd packages/api
# Pre-deployment checks
pnpm run type-check
pnpm run test
pnpm run build
# Deploy to Workers
pnpm run deploy
# Or: npx wrangler deploy
# Monitor deployment
npx wrangler tail

Cloudflare Workers free tier does not support password authentication due to CPU time limits:
- Free tier: 10ms CPU limit
- Password hashing (scrypt): Requires 3-4 seconds of CPU time
- Paid tier ($5/month): 30 seconds CPU limit (required)
See GitHub Issue #969 for details.
Prerequisites:
- ✅ Cloudflare Workers Paid plan active ($5/month)
- ✅ ALLOW_FIRST_USER_ADMIN enabled (enabled by default, or set the secret to "true" in Step 3)
- ✅ Email service configured (optional but recommended)
- ✅ CPU limits configured in wrangler.toml (already set to 30 seconds)
Admin User Creation:
The first user to sign up automatically becomes admin:
- Navigate to your frontend URL (e.g., https://feed.example.com/sign-up)
- Sign up with your email and password
- You'll be assigned user ID 1 and admin role automatically
- Email verification is disabled by default - you can log in immediately
Why this approach:
- ✅ No manual database manipulation needed
- ✅ Uses Better Auth's standard signup flow
- ✅ Automatic role assignment via ALLOW_FIRST_USER_ADMIN logic
- ✅ Works with all authentication methods (email, username)
Verify Admin User Created:
# Check if admin user exists
npx wrangler d1 execute tuvix --remote \
--command "SELECT id, email, email_verified, role FROM user WHERE id = 1;"
# Expected output:
# id: 1
# email: your@email.com
# email_verified: 1
# role: admin

Configuration Notes:
- Email verification is disabled by default (can be enabled in admin settings)
- First user automatically gets admin role and free plan
- Subsequent users get user role and must be promoted by admin
- Admin can enable email verification requirement in settings after initial setup
Option A: Via Wrangler CLI (Recommended)
# Create the Pages project (first time only)
npx wrangler pages project create tuvix-app
# Build and deploy
cd packages/app
export VITE_API_URL=https://api.example.com/trpc
# Or if not using custom domain: https://your-worker.workers.dev/trpc
pnpm run build
npx wrangler pages deploy dist --project-name=tuvix-app

Note: The project name (tuvix-app in this example) must match the CLOUDFLARE_PAGES_PROJECT_NAME GitHub secret used in CI/CD. This is the internal Cloudflare project name, not your custom domain.
Option B: Via Cloudflare Dashboard
- Go to Cloudflare Dashboard → Pages
- Click "Create a project"
- Connect your Git repository (GitHub/GitLab)
- Configure build settings:
  - Build command: cd packages/app && pnpm install && pnpm build
  - Build output directory: packages/app/dist
  - Root directory: / (project root)
- Add environment variable:
  - Variable: VITE_API_URL
  - Value: https://api.example.com/trpc (or https://your-worker.workers.dev/trpc if not using a custom domain)
- In Cloudflare Dashboard → Pages → Your Project → Custom domains
- Click "Set up a custom domain"
- Enter your domain (e.g., feed.example.com)
- Cloudflare will automatically configure DNS
Update CORS: After adding a custom domain, update the CORS_ORIGIN secret in your Worker to include the frontend URL:
npx wrangler secret put CORS_ORIGIN
# Enter: https://feed.example.com

Example: For feed.tuvix.app, set CORS_ORIGIN to https://feed.tuvix.app
When to configure: After both API and Frontend are deployed and working.
If your frontend and API are on different subdomains (e.g., feed.example.com and api.example.com), configure cross-subdomain cookies:
When You Need This:
- ✅ Frontend on feed.example.com, API on api.example.com
- ✅ Frontend on www.example.com, API on api.example.com
- ❌ Both on same domain (e.g., example.com/feed and example.com/api)
- ❌ Both on same subdomain (e.g., feed.example.com/feed and feed.example.com/api)
Configuration:
# Set cookie domain to root domain (not subdomain)
npx wrangler secret put COOKIE_DOMAIN
# Enter: example.com (NOT api.example.com or feed.example.com)

Example: For feed.tuvix.app and api.tuvix.app, set COOKIE_DOMAIN to tuvix.app
Security Note: Setting COOKIE_DOMAIN makes cookies accessible across all subdomains. Only enable if necessary and ensure all subdomains are trusted.
Local Development:
# API (Workers)
cd packages/api
pnpm run dev:workers
# Starts: Local Workers runtime (Miniflare), Local D1 database, Auto-reload
# Frontend
cd packages/app
pnpm run dev
# Frontend runs on http://localhost:5173
# Points to API at VITE_API_URL (default: http://localhost:3001/trpc)
# Testing
pnpm run test
npx wrangler dev # Test Workers locally
npx wrangler dev --test-scheduled # Test cron trigger locally

Production Deployment:
# Pre-Deployment
pnpm run type-check
pnpm run test
# Deploy API
cd packages/api
pnpm run db:migrate:d1 # Run migrations
pnpm run build
pnpm run deploy
# Deploy Frontend
cd packages/app
export VITE_API_URL=https://api.example.com/trpc
pnpm run build
npx wrangler pages deploy dist --project-name=tuvix-app
# Verify
curl https://api.example.com/health
curl https://feed.example.com/health

Worker Settings (Cloudflare Dashboard → Workers → Your Worker → Settings):
- CPU Limit: 50ms (sufficient for most operations)
- Memory: 128MB
- Cron Triggers: Configured via wrangler.toml (*/5 * * * *)
Pages Settings (Cloudflare Dashboard → Pages → Your Project → Settings):
- Build command: cd packages/app && pnpm install && pnpm build
- Build output directory: packages/app/dist
- Environment variables: VITE_API_URL (set to your Worker URL)
Custom Domains:
# Add custom domain to Worker
npx wrangler domains add api.example.com
# Update CORS_ORIGIN secret to include frontend domain
npx wrangler secret put CORS_ORIGIN
# Enter: https://feed.example.com

Example: npx wrangler domains add api.tuvix.app
# API (Workers) logs
npx wrangler tail
# Filter by status
npx wrangler tail --status error
npx wrangler tail --status ok
# Search logs
npx wrangler tail --search "RSS fetch"
npx wrangler tail --search "Cron triggered"
# Frontend (Pages) logs
# View in Cloudflare Dashboard → Pages → Your Project → Deployments → View logs

- Workers: Cloudflare Dashboard → Workers → Your Worker → Metrics
- Pages: Cloudflare Dashboard → Pages → Your Project → Analytics
# Check API health
curl https://api.example.com/health
# Or: curl https://your-worker.workers.dev/health
# Response: {"status":"ok","runtime":"cloudflare"}
# Check frontend
curl https://feed.example.com/health
# Or: curl https://your-pages-project.pages.dev/health

CORS Errors:
# Ensure CORS_ORIGIN includes your frontend URL
npx wrangler secret put CORS_ORIGIN
# Enter: https://feed.example.com
# Or if using Pages default: https://your-pages-project.pages.dev

Authentication Cookies Not Working:
- If frontend/API are on different subdomains: set the COOKIE_DOMAIN secret
- Verify CORS_ORIGIN includes the frontend URL
- Ensure the frontend uses credentials: "include" in fetch requests
Database Migration Failed:
# Check migration status
npx wrangler d1 migrations list tuvix
# Re-run migrations
pnpm run db:migrate:d1
# Check D1 status
npx wrangler d1 execute tuvix --remote \
--command "SELECT * FROM __drizzle_migrations;"

Rate Limit Namespaces Not Found:
- Verify wrangler.toml has the correct format:
  - Uses name (not binding)
  - Uses namespace_id as a string integer (e.g., "1001")
  - Uses a simple object with limit and period
- Ensure namespace_id values are unique positive integers
- Check that bindings match the names used in code (FREE_API_RATE_LIMIT, PRO_API_RATE_LIMIT, ENTERPRISE_API_RATE_LIMIT, FEED_RATE_LIMIT)
Rate Limiting:
- API Rate Limiting: Per-user, per-minute limits based on subscription plan
- Public Feed Rate Limiting: Per-feed owner, per-minute limits
- Monitor: npx wrangler tail --search "Rate limit"
Admin Initialization Failed:
Error: "Admin credentials not provided in environment variables"
# Ensure all three admin secrets are set
npx wrangler secret list
# Should show: ADMIN_USERNAME, ADMIN_EMAIL, ADMIN_PASSWORD
# If missing, set them:
npx wrangler secret put ADMIN_USERNAME
npx wrangler secret put ADMIN_EMAIL
npx wrangler secret put ADMIN_PASSWORD
# Then retry initialization
curl -X POST https://api.example.com/_admin/init

Error: "Admin user already exists"
- This is normal if admin was already created
- You can skip initialization and proceed to login
- To verify admin exists: npx wrangler d1 execute tuvix --remote --command "SELECT id, email, username, role FROM user WHERE role = 'admin';"
Error: "Failed to create admin user via Better Auth"
- Check Worker logs: npx wrangler tail --status error
- Verify database migrations completed successfully
- Ensure BETTER_AUTH_SECRET is set correctly
- Check that email/username don't already exist: npx wrangler d1 execute tuvix --remote --command "SELECT email, username FROM user;"
Cannot Login After Initialization:
- Verify admin was created: Check database (see above)
- Ensure you're using the correct credentials (from secrets you set)
- Try both email and username login endpoints:
  - /api/auth/sign-in/email (with email)
  - /api/auth/sign-in/username (with username)
- Check CORS_ORIGIN includes your frontend URL
- Verify cookies are being set (check browser DevTools → Application → Cookies)
"CPU Time Limit Exceeded" Error During Login:
Common causes that manifest as CPU exceeded errors:
- Missing BASE_URL Secret:

  # Better Auth needs the production BASE_URL, not localhost
  npx wrangler secret put BASE_URL
  # Enter: https://api.example.com (your API domain)
  # Or: https://your-worker.workers.dev (if using Workers default domain)

- CORS Configuration Issues:
  - Ensure the CORS_ORIGIN secret includes your frontend URL
  - Frontend must allow requests to /api/auth/* endpoints
  - Check the browser console for CORS errors (they may be masked by the CPU error)

  # Verify CORS_ORIGIN is set correctly
  npx wrangler secret put CORS_ORIGIN
  # Enter: https://feed.example.com (your frontend domain)

- Better Auth Base URL Mismatch:
  - Better Auth uses BASE_URL or BETTER_AUTH_URL for generating callback URLs
  - If not set, it defaults to http://localhost:5173, which breaks in production
  - Set the BASE_URL secret to your production API URL
Debugging Steps:
- Check Worker logs: npx wrangler tail --status error
- Verify all required secrets are set: npx wrangler secret list
- Test CORS by checking the browser Network tab for preflight OPTIONS requests
- Verify BASE_URL matches your actual API domain (not localhost)
Free Plan Optimization:
- Free plan has 50ms CPU limit (cannot be increased)
- Ensure Better Auth is properly configured to avoid unnecessary CPU usage
- Set BASE_URL and CORS_ORIGIN correctly to prevent retry loops
- Monitor CPU usage: npx wrangler tail and look for patterns
| Variable | Required | Default | Description |
|---|---|---|---|
| BETTER_AUTH_SECRET | Yes | - | Secret for Better Auth session management (min 32 chars) |
| CORS_ORIGIN | Yes | - | Allowed CORS origins (comma-separated) |
| NODE_ENV | No | development | Environment mode |
| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_PATH | No | ./data/tuvix.db | Path to SQLite database |
| PORT | No | 3001 | API server port |
| BASE_URL | No | - | Frontend URL for Better Auth callbacks (e.g., http://localhost:5173) |
| ADMIN_USERNAME | No | - | Admin username for bootstrap (requires ADMIN_EMAIL and ADMIN_PASSWORD) - Recommended for production |
| ADMIN_EMAIL | No | - | Admin email for bootstrap (requires ADMIN_USERNAME and ADMIN_PASSWORD) - Recommended for production |
| ADMIN_PASSWORD | No | - | Admin password for bootstrap (requires ADMIN_USERNAME and ADMIN_EMAIL) - Recommended for production |
| ALLOW_FIRST_USER_ADMIN | No | false | Enable first-user auto-promotion to admin. Convenient for dev/testing; for public production deployments, bootstrap (ADMIN_* vars) is more deterministic. |
| RESEND_API_KEY | No | - | Resend API key for email service |
| EMAIL_FROM | No | - | Email sender address (must match verified domain in Resend) |
| COOKIE_DOMAIN | No | - | Root domain for cross-subdomain cookies (e.g., "example.com") |
Bindings (configured in wrangler.toml):
| Binding | Type | Description |
|---|---|---|
| DB | D1 | Database binding |
| FREE_API_RATE_LIMIT | RateLimit | Free plan API rate limiting (60/min) |
| PRO_API_RATE_LIMIT | RateLimit | Pro plan API rate limiting (180/min) |
| ENTERPRISE_API_RATE_LIMIT | RateLimit | Enterprise/admin plan API rate limiting (600/min) |
| FEED_RATE_LIMIT | RateLimit | Public feed rate limiting binding |
Secrets (set via wrangler secret put - not in wrangler.toml):
| Secret | Required | Description |
|---|---|---|
| BETTER_AUTH_SECRET | Yes | Secret for Better Auth session management (min 32 chars) |
| ALLOW_FIRST_USER_ADMIN | No | Enable first-user auto-promotion to admin (defaults to enabled; set to "false" to disable) |
| CORS_ORIGIN | Yes | Allowed CORS origins (comma-separated) |
| BASE_URL | Yes | Base URL for Better Auth (production API URL, NOT localhost). Used for callback URLs and session management. |
| RESEND_API_KEY | No | Resend API key for email service (see Email System Guide) |
| EMAIL_FROM | No | Email sender address (must match verified domain in Resend) |
| COOKIE_DOMAIN | No | Root domain for cross-subdomain cookies (e.g., "example.com") |
Never put secrets in wrangler.toml. Use wrangler secret put for all sensitive values. Only wrangler.example.toml should be committed (with placeholders); wrangler.toml is gitignored.
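To double-check the gitignore rule before committing, git's own ignore check can be wrapped in a tiny helper (a sketch; run from the repo root, and note `is_ignored` is an illustrative name):

```shell
# Succeeds when the given path is covered by a .gitignore rule.
is_ignored() {
  git check-ignore -q -- "$1"
}

# Example: is_ignored packages/api/wrangler.toml && echo "safe: gitignored"
```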
Migrations run automatically on container startup (packages/api/Dockerfile:45):
CMD ["sh", "-c", "node dist/db/migrate-local.js && node dist/adapters/express.js"]

Manual migrations:
# From host
pnpm run db:migrate
# From container
docker compose exec api node dist/db/migrate-local.js

D1 migrations must be run manually before deployment:
cd packages/api
# Generate migration from schema changes
pnpm run db:generate
# Apply to local D1
pnpm run db:migrate:d1:local
# Apply to remote D1
pnpm run db:migrate:d1
# Verify migrations
npx wrangler d1 execute tuvix --remote \
--command "SELECT name FROM sqlite_master WHERE type='table';"
# Check migration status
npx wrangler d1 migrations list tuvix

- Modify Schema - Edit packages/api/src/db/schema.ts
- Generate Migration - pnpm run db:generate
- Test Locally - Run on local database
- Deploy:
  - Docker: Restart containers (auto-migrates)
  - Workers: Run pnpm run db:migrate:d1, then deploy
TuvixRSS runs two scheduled tasks:
- RSS Feed Fetching - Fetches new articles from subscribed feeds
- Article Pruning - Removes old articles based on retention policy
Uses node-cron (scheduler.ts:44):
// RSS fetch - dynamic interval from global_settings
cron.schedule(fetchCronExpression, async () => {
await handleRSSFetch(env);
});
// Article prune - daily at 2 AM
cron.schedule("0 2 * * *", async () => {
await handleArticlePrune(env);
});

Configuration:
- Fetch interval: Configurable via
global_settings.fetchIntervalMinutes - Default: 60 minutes
- Minimum: 5 minutes
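For illustration, the dynamic fetch expression might be derived from the configured interval roughly like this. This is a hypothetical helper; TuvixRSS's actual derivation in `scheduler.ts` may differ.

```typescript
// Hypothetical helper showing how fetchIntervalMinutes could map to the
// dynamic fetchCronExpression used above; not TuvixRSS's actual code.
function minutesToCronExpression(minutes: number): string {
  const clamped = Math.max(5, Math.floor(minutes)); // enforce the 5-minute floor
  if (clamped < 60) return `*/${clamped} * * * *`;  // every N minutes
  const hours = Math.round(clamped / 60);
  return hours === 1 ? "0 * * * *" : `0 */${hours} * * *`; // hourly or every N hours
}

console.log(minutesToCronExpression(60)); // "0 * * * *"
console.log(minutesToCronExpression(30)); // "*/30 * * * *"
```

Clamping to the 5-minute minimum means a bad settings value degrades gracefully instead of hammering upstream feeds.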
Logs:

```bash
docker compose logs -f api | grep "RSS fetch\|Prune"
```

Uses Workers Scheduled Events (`cloudflare.ts:284`):

```toml
# wrangler.toml
[triggers]
crons = ["*/5 * * * *"] # Every 5 minutes
```

How it works:
- Cron triggers every 5 minutes
- Checks `global_settings.lastRssFetchAt` and `fetchIntervalMinutes`
- Runs RSS fetch if the interval has elapsed
- Checks `global_settings.lastPruneAt`
- Runs prune if 24 hours have elapsed
Configuration:

```bash
# View cron triggers
npx wrangler deployments list

# Test cron locally
npx wrangler dev --test-scheduled

# Monitor cron execution
npx wrangler tail --search "Cron triggered"
```

Cron Interval Limits:

- Cloudflare: minimum 1-minute intervals
- Recommended: 5-15 minutes (a balance between freshness and cost)
Both deployments read from the `global_settings` table:

```sql
-- Update via SQL
UPDATE global_settings SET fetchIntervalMinutes = 30 WHERE id = 1;
```

Or via the admin UI:
- Navigate to Settings
- Update "Fetch Interval (minutes)"
- Save
TuvixRSS uses GitHub Actions for automated CI/CD with a trunk-based workflow.
feature branch → PR → main → [Manual Deploy to Production]
Triggers: Pull requests targeting main
Validates:
- Lint & format checks
- TypeScript type checking
- API and App tests (with coverage)
- Build verification
- Coverage tracking and reporting
Purpose: Ensure code quality before merging to main.
Triggers:
- Published GitHub releases (automatic)
- Manual workflow dispatch
Process:

- Checks out release tag (from release or manual input)
- Runs type checks and tests for API
- Builds API
- Creates `wrangler.toml` from `wrangler.example.toml` and substitutes `D1_DATABASE_ID`
- Deploys API to Cloudflare Workers
- Runs database migrations (after successful API deployment)
- Runs type checks and tests for App
- Builds App (with `VITE_API_URL` from secrets)
- Deploys App to Cloudflare Pages (after API deployment succeeds)
- Outputs deployment summary with URLs
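The substitution step works the way `envsubst` does: placeholders in the template are replaced with values from the environment. A minimal sketch, assuming a `${D1_DATABASE_ID}`-style placeholder in `wrangler.example.toml`; the helper name and error handling are illustrative, not the workflow's actual implementation.

```typescript
// Replace ${VAR} placeholders in a template with environment values.
function substitutePlaceholders(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\$\{(\w+)\}/g, (match, name: string) => {
    if (!(name in vars)) {
      throw new Error(`Missing value for placeholder ${match}`);
    }
    return vars[name];
  });
}

const example = 'database_id = "${D1_DATABASE_ID}"';
console.log(substitutePlaceholders(example, { D1_DATABASE_ID: "abc-123" }));
// database_id = "abc-123"
```

Unlike plain `envsubst`, this sketch fails loudly when a variable is missing rather than silently substituting an empty string.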
Purpose: Automated production deployment on releases.
Environment: Uses production GitHub environment
TuvixRSS uses GitHub Environments for production deployment:

- `production` - used by the `deploy-cloudflare.yml` workflow
  - Deploys on published releases
  - Uses production Cloudflare resources

Setting up Environments:

1. Go to Settings → Environments
2. Create the `production` environment (if it doesn't exist)
3. Add production secrets to the environment
Configure these in Settings → Environments → production → Secrets:
| Secret | Required | Description |
|---|---|---|
| `CLOUDFLARE_API_TOKEN` | Yes | Cloudflare API token with Workers, Pages, and D1 permissions (see below) |
| `CLOUDFLARE_ACCOUNT_ID` | Yes | Your Cloudflare account ID |
| `D1_DATABASE_ID` | Yes | Your D1 database ID (from `wrangler d1 create tuvix`) - used for `envsubst` substitution |
| `CLOUDFLARE_PAGES_PROJECT_NAME` | Yes | Cloudflare Pages project name (production) |
| `VITE_API_URL` | Yes | API URL for frontend builds (e.g., `https://api.example.com/trpc` or `https://your-worker.workers.dev/trpc`) |
| `SENTRY_DSN` | No | Backend Sentry DSN (for automatic release tracking) |
| `VITE_SENTRY_DSN` | No | Frontend Sentry DSN (for error tracking) - get it from Sentry project settings |
| `VITE_SENTRY_ENVIRONMENT` | No | Frontend Sentry environment (e.g., `production`) |
| `VITE_APP_VERSION` | No | App version (e.g., git commit SHA or version tag) - used for Sentry release tracking and UI display |
Getting Cloudflare Credentials:

- API Token: Cloudflare Dashboard → My Profile → API Tokens → Create a token with:
  - `Account.Cloudflare Workers:Edit` (for deploying Workers)
  - `Account.Cloudflare Pages:Edit` (for deploying Pages)
  - `Account.Cloudflare D1:Edit` (for running D1 migrations) ⚠️ required for migrations
- Account ID:
  - Via Wrangler (recommended): run `npx wrangler whoami` - it displays your account ID
  - Via Dashboard: Cloudflare Dashboard → right sidebar (under your account name)
- D1 Database ID: run `npx wrangler d1 create tuvix` locally, copy the `database_id` from the output, and add it as the `D1_DATABASE_ID` secret
- Pages Project:
  - List existing projects: run `npx wrangler pages project list` to see all your Pages projects
  - Create a new project: run `npx wrangler pages project create tuvix-app` (or create one via the Dashboard)
  - Add the project name: use it as the `CLOUDFLARE_PAGES_PROJECT_NAME` secret (must match exactly, case-sensitive)
- Worker Name: read automatically from the `name` field in `packages/api/wrangler.toml` (no secret needed)
Important:

- Pushing directly to `main` will NOT trigger a deployment (or any CI checks)
- CI workflows only run on pull requests, not on direct pushes
- Deployments only happen when:
  - A GitHub release is published (automatic)
  - The workflow is manually triggered via the GitHub Actions UI
1. Create a Release:

   ```bash
   # Tag and push
   git tag v1.0.0
   git push origin v1.0.0

   # Or create a release in the GitHub UI
   # GitHub → Releases → Draft a new release
   ```

2. The workflow automatically:
   - Checks out the release tag
   - Runs type checks and tests for API
   - Builds API
   - Substitutes `D1_DATABASE_ID` in `wrangler.toml` using `envsubst`
   - Deploys API to Cloudflare Workers
   - Runs database migrations (after API deployment succeeds)
   - Runs type checks and tests for App
   - Builds App with `VITE_API_URL` from secrets
   - Deploys App to Cloudflare Pages (only after API deployment succeeds)
   - Outputs deployment summary with URLs
1. Go to Actions → Deploy to Cloudflare Workers
2. Click "Run workflow"
3. Select the branch and enter a version tag (e.g., `v1.0.0`)
4. Click "Run workflow"
- ✅ Sequential Deployment: API deploys first, then App (ensures the API is ready)
- ✅ Wrangler Config Creation: creates `wrangler.toml` from `wrangler.example.toml` with substituted values
- ✅ Validation: type checks and tests run before deployment
- ✅ Database Migrations: run automatically after a successful API deployment
- ✅ Concurrency Control: prevents duplicate runs
- ✅ Caching: optimized dependency caching
- ✅ Environment Protection: uses the `production` GitHub environment
- ✅ Release Tag Checkout: ensures the correct code version is deployed
- ✅ Deployment URLs: displayed in the workflow summary
- ✅ Automatic Sentry Release Tracking: the release version is passed to Sentry automatically for both backend and frontend
Purpose: Monitor errors and performance across frontend and backend with distributed tracing.
Projects:
- Backend: `tuvix-api` (Cloudflare Workers)
- Frontend: `tuvix-app` (Cloudflare Pages)
Setup Steps:
1. Get Sentry DSNs:
   - Go to https://techsquidtv.sentry.io/settings/projects/
   - Click on `tuvix-api` → Settings → Client Keys (DSN)
   - Copy the DSN (format: `https://xxx@xxx.ingest.sentry.io/xxx`)
   - Repeat for `tuvix-app`

2. Set Backend Secrets (Cloudflare Workers):

   ```bash
   cd packages/api

   # Required: Backend DSN
   npx wrangler secret put SENTRY_DSN
   # Enter: https://xxx@xxx.ingest.sentry.io/xxx (from the tuvix-api project)

   # Required: Environment name
   npx wrangler secret put SENTRY_ENVIRONMENT
   # Enter: production

   # Optional: Release tracking (set automatically during deployment)
   # The deployment workflow sets SENTRY_RELEASE from the release tag.
   # You can set it manually if needed:
   # npx wrangler secret put SENTRY_RELEASE
   # Enter: v1.0.0 or a git commit SHA
   ```

3. Set Frontend Secrets (GitHub Actions):
   - Go to GitHub → Settings → Secrets and variables → Actions
   - Add `VITE_SENTRY_DSN` (from the `tuvix-app` project)
   - Add `VITE_SENTRY_ENVIRONMENT` (e.g., `production`)
   - Note: `VITE_APP_VERSION` is set automatically during deployment from the release tag; you can set it manually if needed (optional, e.g., a git commit SHA)

4. Verify Setup:

   ```bash
   # Test backend Sentry by triggering an error
   # Check the Sentry dashboard for events
   # https://techsquidtv.sentry.io/issues/

   # Monitor backend logs
   npx wrangler tail
   ```
Distributed Tracing:

- ✅ Automatic: the frontend automatically propagates trace headers to the backend
- ✅ Trace Propagation: configured in `packages/app/src/main.tsx` → `tracePropagationTargets`
- ✅ Backend Handling: Cloudflare Workers automatically accept trace headers via `Sentry.withSentry()`
- ✅ View Traces: in Sentry, click on an error → "View Trace" to see the full request flow
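The frontend wiring referenced above might look roughly like the following. This is a hedged sketch, not the contents of `packages/app/src/main.tsx`: option names follow recent `@sentry/react` releases (v8-style), and the target values are assumptions; check the actual file and your SDK version.

```typescript
import * as Sentry from "@sentry/react";

Sentry.init({
  dsn: import.meta.env.VITE_SENTRY_DSN,
  environment: import.meta.env.VITE_SENTRY_ENVIRONMENT,
  release: import.meta.env.VITE_APP_VERSION,
  integrations: [Sentry.browserTracingIntegration()],
  // Trace headers are attached only to requests matching these targets.
  // If your API origin is missing here, distributed traces will not connect.
  tracePropagationTargets: ["localhost", /^https:\/\/api\.example\.com/],
});
```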
Release Tracking:

- ✅ Automatic: the release version is extracted automatically from the GitHub release tag or manual input
- ✅ Backend: the `SENTRY_RELEASE` secret is updated automatically during the deployment workflow
- ✅ Frontend: `VITE_APP_VERSION` is passed automatically as an environment variable during the build
- ✅ Fallback: if no release tag is provided, the git commit SHA is used
What Gets Tracked:
- Frontend: JavaScript errors, unhandled promise rejections, React errors, performance metrics
- Backend: API errors, D1 database queries, rate limit errors, performance metrics
- Distributed: Full request flow from frontend → backend with trace context
- Release: All errors are tagged with the release version for easy tracking
Troubleshooting:

- No events in Sentry: check that the DSNs are set correctly, and check the browser console for Sentry initialization logs
- No distributed traces: verify that `tracePropagationTargets` includes your API URL (e.g., `api.tuvix.app`)
- Backend not logging: check the Worker logs (`npx wrangler tail`) for Sentry initialization messages
Workflow Fails:

- Check the Actions tab for specific error messages
- Verify all required secrets are configured (see Required GitHub Secrets above)
- Run the checks locally: `pnpm run pre-check`
- Check that the `D1_DATABASE_ID` substitution succeeded (look for a "Successfully substituted" message)
Deployment Fails:

- Verify the Cloudflare API token permissions (Workers:Edit, Pages:Edit, D1:Edit)
- Check that the Worker and Pages projects exist
- Verify the `D1_DATABASE_ID` secret is set correctly (the workflow fails if it is missing)
- If migrations fail with error code 7403, the API token lacks D1 permissions or the database belongs to a different account
- Review the Cloudflare dashboard for errors
- Ensure database migrations completed successfully (they run after the API deployment)
- Check that the worker name in `wrangler.toml` matches your Cloudflare Worker
Coverage Not Showing:

- Coverage is generated automatically during test runs
- Check that the `coverage/lcov.info` files exist
- Private repos may need a `CODECOV_TOKEN` secret
See .github/workflows/README.md for detailed setup instructions.
- Generate a strong `BETTER_AUTH_SECRET` (minimum 32 characters) for Better Auth session management
- Set a restrictive `CORS_ORIGIN`
- Use HTTPS in production (required for secure cookies)
- Update dependencies regularly (`pnpm update`)
- Monitor security advisories
- Take regular database backups
- Verify the Better Auth endpoints are accessible (`/api/auth/*`)
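One way to generate a qualifying secret is with Node's built-in crypto module. This is an illustrative snippet, not the project's official tooling; any cryptographically random string of at least 32 characters works.

```typescript
// Generate a random 256-bit secret and print it as base64 (44 chars),
// comfortably above BETTER_AUTH_SECRET's 32-character minimum.
import { randomBytes } from "node:crypto";

const secret = randomBytes(32).toString("base64");
console.log(secret);
```

The equivalent shell one-liner is `openssl rand -base64 32`.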
- Don't commit `.env` to git
- Use Docker secrets for sensitive data
- Limit container resource usage
- Run containers as a non-root user
- Run regular security scans (`docker scan`)
- Use Wrangler secrets (not vars) for sensitive data
- Enable Cloudflare security features (WAF, DDoS)
- Restrict Worker routes
- Monitor usage for cost control
- Review KV/D1 access logs
- Use multi-stage builds (already implemented)
- Mount volumes for data persistence
- Adjust health check intervals
- Use Docker build cache
- Consider using Alpine base images
- Minimize Worker script size
- Use Smart Placement for reduced latency
- Configure appropriate CPU limits
- Use caching for static responses
- Monitor edge location performance
After deployment:
- Create admin user (see deployment sections above)
- Configure global settings via admin UI
- Set up monitoring and alerting
- Configure backups
- Test public RSS feeds
- Add custom domain (optional)
- Configure reverse proxy (Docker) or custom domain (Workers)
For more information, see:
- README.md - Project overview
- API README - API documentation
- App README - Frontend documentation
Last Updated: 2025-01-15