This guide explains how to test the AI provider system and verify that Transformers.js is working correctly.
```bash
npm run dev
# Open http://localhost:3000
# Open browser console (F12 or Cmd+Option+I)
```

## Default Provider: Mock Mode 🎭
- Instant responses
- No setup required
- Pre-coded examples
- Open http://localhost:3000
- Look at bottom-right of chat input
- Should say: "🎭 Mock Mode (Press ⌘+K for AI)"
- Type: "Write a Python function to reverse a linked list"
- Check browser console - you should see:
```
💬 [AI API] Generating response with provider: mock
🎭 [AI API] Routing to Mock API...
✅ [AI API] Mock response received
```
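The routing those logs describe can be sketched as a simple dispatch. This is a hypothetical reconstruction, not the actual `lib/aiApi.ts` — the function and provider names are assumptions:

```typescript
type Provider = 'mock' | 'transformers' | 'backend';

// Hypothetical sketch of the provider dispatch in lib/aiApi.ts.
async function generateResponse(provider: Provider, message: string): Promise<string> {
  console.log(`💬 [AI API] Generating response with provider: ${provider}`);
  if (provider === 'mock') {
    console.log('🎭 [AI API] Routing to Mock API...');
    // Mock mode returns a pre-coded snippet instantly; no model needed.
    return `// mock snippet for: ${message}`;
  }
  // The 'transformers' and 'backend' routes are exercised later in this guide.
  throw new Error(`provider ${provider} is not wired up in this sketch`);
}
```

Because the mock path never touches the network, it is the fastest way to confirm the chat UI and logging pipeline work end to end.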
- Press `⌘+K` (Mac) or `Ctrl+K` (Windows/Linux); the Settings modal opens
- Click on "Transformers" provider button
- Watch the browser console for:
```
🤖 Initializing Transformers.js model...
```
What Happens Next:
- Model download begins (~250MB)
- This can take 2-5 minutes on first load
- Status indicator updates:
  - 🔄 Loading model... (during download)
  - 🤖 AI Ready (when complete)
Open Browser DevTools → Network Tab:
- Look for downloads from `huggingface.co`
- Model files (`.onnx`, `.json`) downloading
- Total size: ~250MB
Console logs to watch for:
```
🤖 Initializing Transformers.js model...
✅ Model initialized successfully
```
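Because the download only happens once, initialization like this is typically memoized: repeated or concurrent calls share the same in-flight promise instead of starting a second download. A minimal sketch, where `loadModel` is a stand-in for the real Transformers.js pipeline call:

```typescript
let modelPromise: Promise<string> | null = null;
let loadCount = 0;

// Placeholder for the real Transformers.js pipeline() call,
// which downloads ~250MB of model files on first use.
async function loadModel(): Promise<string> {
  loadCount += 1;
  console.log('🤖 Initializing Transformers.js model...');
  return 'pipeline-ready';
}

// Memoize the in-flight promise so repeated calls reuse one load.
function getModel(): Promise<string> {
  if (!modelPromise) modelPromise = loadModel();
  return modelPromise;
}
```

This is why only the first message after switching providers is slow: every later call resolves against the already-loaded model.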
- Wait for status: "🤖 AI Ready"
- Type: "Write a fibonacci function"
- Press Enter or ⌘+Enter
- Check console logs:
```
💬 [AI API] Generating response with provider: transformers
📝 [AI API] User message: "Write a fibonacci function"
🤖 [AI API] Routing to Transformers.js...
✅ [AI API] Transformers.js response received
```
Cause: Still in mock mode; you need to switch to the Transformers provider manually
Solution:
- Press ⌘+K
- Click "Transformers" provider
- Wait for download
Causes:
- Network issues
- Browser doesn't support WebAssembly
- Insufficient disk space
Solutions:
- Check internet connection
- Try Chrome/Edge (best WebAssembly support)
- Free up ~500MB disk space
- Check console for specific errors
Debugging:
- Open DevTools → Network tab
- Filter by `huggingface.co`
- Check if files are downloading
- If stuck, refresh page and try again
This is normal for the CodeT5-small model:
- It's a smaller model (~250MB)
- Not as powerful as GPT-4 or Claude
- Best for simple code snippets
- For production, use Backend API with GPT-4
In the browser console:

1. Note current provider (should show 🎭)
2. Send message: "hello"
3. Should get instant response with sample code

Console output:

```
💬 [AI API] Generating response with provider: mock
🎭 [AI API] Routing to Mock API...
✅ [AI API] Mock response received
```
In the browser console, after switching to transformers:

1. Switch to transformers (⌘+K → click Transformers)
2. Wait for "🤖 AI Ready"
3. Send message: "fibonacci"
4. Should take ~2-5 seconds to generate

Console output:

```
💬 [AI API] Generating response with provider: transformers
🤖 [AI API] Routing to Transformers.js...
✅ [AI API] Transformers.js response received
```
Clicking "Backend" without configuration:

1. Switch to backend (⌘+K → click Backend)
2. Send message
3. Should fail and fall back to mock

Console output:

```
💬 [AI API] Generating response with provider: backend
❌ [AI API] Error with backend provider: Backend URL not configured
⚠️ [AI API] Falling back to mock provider
```
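The fallback behavior in Test 3 amounts to a try/catch around the provider call that retries once with mock. A sketch under assumed names (the real implementation in `lib/aiApi.ts` may differ):

```typescript
type Provider = 'mock' | 'transformers' | 'backend';

// Stand-in for the real per-provider calls.
async function callProvider(provider: Provider, message: string): Promise<string> {
  if (provider === 'backend') throw new Error('Backend URL not configured');
  return `// ${provider} snippet for: ${message}`;
}

// On any provider error, log it and retry once with the mock provider,
// so the user always gets a response instead of a dead chat.
async function generateWithFallback(
  provider: Provider,
  message: string,
): Promise<{ provider: Provider; text: string }> {
  try {
    return { provider, text: await callProvider(provider, message) };
  } catch (err) {
    console.error(`❌ [AI API] Error with ${provider} provider:`, (err as Error).message);
    console.warn('⚠️ [AI API] Falling back to mock provider');
    return { provider: 'mock', text: await callProvider('mock', message) };
  }
}
```

Note that the fallback is silent apart from the console warnings, which is why checking the logs is the only way to confirm which provider actually answered.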
| Provider | First Response | Subsequent | Model Size | Quality |
|---|---|---|---|---|
| Mock | <100ms | <100ms | 0 | Pre-coded |
| Transformers | 2-5 sec | 2-5 sec | 250MB | Good |
| Backend (GPT-4) | 1-3 sec | 1-3 sec | N/A | Excellent |
All AI operations log to console with prefixes:
- `💬 [AI API]` - Main API calls
- `🤖 [AI API]` - Transformers.js routing
- `🎭 [AI API]` - Mock API routing
- `✅ [AI API]` - Success messages
- `❌ [AI API]` - Errors
- Open DevTools → Network
- Filter by: `huggingface.co`
- Look for model downloads
- Check status codes (200 = success)
The status indicator in the chat input updates every second:
- Mock: 🎭 Mock Mode (Press ⌘+K for AI)
- Transformers (loading): 🔄 Loading model...
- Transformers (ready): 🤖 AI Ready
- Transformers (not loaded): ⏳ Click ⌘+K to enable AI
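That mapping from provider and model state to indicator text can be written as a single pure function (a sketch; the names are assumptions, not the app's actual code):

```typescript
type ModelStatus = 'idle' | 'loading' | 'ready';

// Maps the active provider and model state to the indicator text
// shown at the bottom-right of the chat input.
function statusLabel(provider: 'mock' | 'transformers', status: ModelStatus): string {
  if (provider === 'mock') return '🎭 Mock Mode (Press ⌘+K for AI)';
  if (status === 'loading') return '🔄 Loading model...';
  if (status === 'ready') return '🤖 AI Ready';
  return '⏳ Click ⌘+K to enable AI';
}
```

Keeping it pure makes the once-per-second refresh trivial: the UI just re-renders whatever the function returns for the current state.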
- Page loads with Mock mode active
- Status shows: 🎭 Mock Mode (Press ⌘+K for AI)
- Can immediately chat and get responses
- All responses use mock data
- Click Transformers in Settings (⌘+K)
- Status changes to: 🔄 Loading model...
- Network tab shows downloads from huggingface.co
- After 2-5 minutes: 🤖 AI Ready
- Subsequent messages use real AI model
- Provider choice is NOT persisted (resets to mock on refresh)
- Model is cached by browser (no re-download needed)
- To persist provider, add to localStorage
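Persisting the provider could look like the sketch below. The storage key and function names are assumptions; the storage object is injectable (pass `window.localStorage` in the browser) so the logic can be exercised outside a browser:

```typescript
const PROVIDER_KEY = 'ai-provider'; // hypothetical storage key

// Minimal subset of the Web Storage API, injectable for testing.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveProvider(provider: string, storage: StorageLike): void {
  storage.setItem(PROVIDER_KEY, provider);
}

// Fall back to mock when nothing has been saved, matching the
// current default-on-refresh behavior.
function loadProvider(storage: StorageLike): string {
  return storage.getItem(PROVIDER_KEY) ?? 'mock';
}
```

On startup, the app would call `loadProvider(window.localStorage)` before rendering the status indicator, and `saveProvider` whenever the user switches providers in the ⌘+K modal.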
```typescript
// In lib/aiApi.ts
console.log('🔍 [DEBUG] Your message here')
```

```typescript
// In browser console:
import { switchProvider } from '/lib/aiApi'

// Switch to transformers
await switchProvider('transformers')

// Switch back to mock
await switchProvider('mock')
```

```typescript
// In browser console:
import { getAIStatus } from '/lib/aiApi'

console.log(getAIStatus())
// Returns: { provider: 'mock', status: 'ready', ready: true }
```

- Page loads in mock mode
- Can send messages and get instant responses
- ⌘+K opens settings modal
- Can switch to Transformers provider
- Console shows "Initializing Transformers.js model..."
- Network tab shows downloads from huggingface.co
- Status changes from "Loading" to "AI Ready"
- Can send message and get AI-generated code
- Console shows "Routing to Transformers.js..."
- AI-generated code appears in editor
Potential enhancements:
- Implement provider persistence (localStorage)
- Add progress bar for model download
- Set up backend API
- Add model selection UI (CodeT5-small vs CodeLlama)
For issues, check:
- Browser console for errors
- Network tab for failed downloads
- Backend API documentation for setup