This project provides a local AI chatbot interface that works offline and is completely free, private, and open source. Chat with AI models locally using either Ollama (server-based) or WebLLM (browser-based) without requiring internet connectivity.
The previous implementation focused solely on the Ollama ecosystem and API, with no support for WebLLM. Some may find that version more suitable if they have no need for browser-based models. You can find the original version here: Offline Chatbot v1
- Dual Provider System: Choose between Ollama (powerful server models) or WebLLM (instant browser models)
- Fully Offline: Works completely without internet after initial setup
- Real-time Streaming: Get responses as they're generated
- Multi-Model Support: Switch between different AI models easily
- Text Files: Upload and chat about TXT, JSON, CSV files
- Code Files: Analyze code across multiple languages
- PDF Documents: Extract and discuss PDF content
- Image Analysis: Vision-capable models can analyze uploaded images
- Screen Capture: Take screenshots directly within the app
- Clean Modern UI: Built with shadcn/ui and Tailwind CSS v4
- Dark/Light Mode: System-aware theme switching
- Keyboard Shortcuts: Ctrl+/ to focus input, Esc to stop generation
- PWA Support: Install as a desktop app
- Local Storage: Conversations persist across sessions
- Attachment Previews: Visual feedback for uploaded files
- TypeScript: Full type safety throughout the application
- Modular: Context-based architecture for easy extension
- Comprehensive Testing: Unit and integration tests with Vitest
- Optimized Performance: Memoization, streaming, and efficient rendering
No installation required! WebLLM models run entirely in your browser:
- Visit WebLLM to learn more
- Start the app and select a WebLLM model from the dropdown
- Models download automatically on first use
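The download-on-first-use behavior comes from WebLLM caching model weights in the browser. Below is a minimal sketch of how a WebLLM engine is created with a progress callback, assuming the @mlc-ai/web-llm package and an illustrative model ID (not necessarily one this app's dropdown lists):

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function demoWebLLM() {
  // Illustrative model ID; the app's model list may differ.
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
    // Weights are downloaded and cached in the browser on first use;
    // later loads reuse the cache, so this resolves quickly even offline.
    initProgressCallback: (report) =>
      console.log(`${Math.round(report.progress * 100)}% - ${report.text}`),
  });

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);
}

demoWebLLM();
```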
For larger, more capable models, install Ollama:
- Visit Ollama's website and download for your OS
- Install and verify:
ollama --version
- Pull your first model:
ollama pull llama2
ollama run llama2
- Read our blog post for detailed Ollama setup
Note: The Ollama server automatically starts with the app when using npm start.
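If the backend ever reports that it cannot reach Ollama, you can check the server yourself. A quick sketch, assuming Ollama's default API on port 11434 (this talks to Ollama directly, not to this app's backend):

```typescript
// Quick reachability check against the local Ollama server.
// GET /api/tags lists installed models (Ollama's default API).
async function listOllamaModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama responded with ${res.status}`);
  }
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}

listOllamaModels()
  .then((models) => console.log("Installed models:", models))
  .catch((err) => console.error("Ollama is not reachable:", err));
```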
- Clone the repository
git clone https://github.com/yourusername/offline-chatbot.git
cd offline-chatbot
- Install dependencies
npm install
- Configure ports (optional)
Create a .env file in the root directory:
VITE_PORT=8080 # Frontend port (default: 8080)
VITE_API_PORT=8081 # Backend port for Ollama (default: 8081)
- Start the application
npm start
This automatically:
- Clears any processes using ports 8080 and 8081
- Starts both frontend (http://localhost:8080) and backend servers
- Finds available ports if defaults are in use
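The actual startup scripts live in the repo; the following is only a rough sketch of how the VITE_PORT and VITE_API_PORT values above could be wired into Vite (the /api proxy prefix is an assumption, not taken from this project):

```typescript
// vite.config.ts (sketch)
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  // Load .env so VITE_PORT / VITE_API_PORT are available at config time.
  const env = loadEnv(mode, process.cwd(), "");
  const port = Number(env.VITE_PORT) || 8080;
  const apiPort = Number(env.VITE_API_PORT) || 8081;

  return {
    server: {
      port,
      // Forward API calls to the Express/Ollama backend (assumed /api prefix).
      proxy: {
        "/api": `http://localhost:${apiPort}`,
      },
    },
  };
});
```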
- Click the model selector in the top navigation
- Choose between:
  - Ollama Models: Installed locally via Ollama server
  - WebLLM Models: Browser-based models (downloads on first use)
- Wait for model initialization (WebLLM shows download progress)
| Category | Formats |
|---|---|
| Text | .txt, .md, .csv |
| Code | .js, .ts, .py, .java, .cpp, .html, .css, and more |
| Documents | .pdf |
| Images | .png, .jpg, .jpeg, .webp (vision-capable models only) |
| Shortcut | Action |
|---|---|
| Ctrl + / | Focus message input |
| Esc | Stop message generation |
| Enter | Send message (if not generating) |
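These shortcuts are straightforward to implement with a global keydown listener. A simplified sketch, not the app's actual hook (focusInput and stopGeneration are hypothetical callbacks):

```typescript
import { useEffect } from "react";

// Hypothetical props: focusInput focuses the message box,
// stopGeneration aborts an in-flight response.
function useChatShortcuts(focusInput: () => void, stopGeneration: () => void) {
  useEffect(() => {
    const onKeyDown = (e: KeyboardEvent) => {
      if (e.ctrlKey && e.key === "/") {
        e.preventDefault();
        focusInput(); // Ctrl+/ focuses the message input
      } else if (e.key === "Escape") {
        stopGeneration(); // Esc stops generation
      }
    };
    window.addEventListener("keydown", onKeyDown);
    return () => window.removeEventListener("keydown", onKeyDown);
  }, [focusInput, stopGeneration]);
}
```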
- Click the paperclip icon in the message input
- Select files to attach
- Models will analyze file contents along with your prompt
- For images, ensure your selected model supports vision (e.g., LLaVA, Llama 3.2 Vision)
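For image attachments sent to an Ollama vision model, the image travels as a base64 string alongside the prompt. A minimal sketch using the ollama JS client (the llava model name and the data-URL helper are illustrative):

```typescript
import ollama from "ollama";

// Strip the "data:image/png;base64," prefix that a FileReader data URL
// carries, since Ollama expects raw base64.
function dataUrlToBase64(dataUrl: string): string {
  return dataUrl.split(",")[1];
}

async function describeImage(dataUrl: string): Promise<string> {
  const response = await ollama.chat({
    model: "llava", // any vision-capable model pulled via `ollama pull`
    messages: [
      {
        role: "user",
        content: "What is in this image?",
        images: [dataUrlToBase64(dataUrl)],
      },
    ],
  });
  return response.message.content;
}
```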
Frontend only:
npm run dev
Backend only:
npm run server
Run the test suite:
npm test
Run tests in watch mode:
npm test -- --watch
Create production build:
npm run build
Preview production build:
npm run preview
offline-chatbot/
├── src/ # Frontend source
│ ├── components/
│ │ ├── OfflineChatbot/ # Main chatbot module
│ │ │ ├── components/ # UI components
│ │ │ │ ├── chat/ # Chat-specific components
│ │ │ │ ├── layout/ # Layout components
│ │ │ │ └── attachments/ # File handling components
│ │ │ ├── contexts/ # React contexts
│ │ │ ├── hooks/ # Custom hooks
│ │ │ ├── services/ # API services
│ │ │ └── types/ # TypeScript types
│ │ ├── layout/ # App layout
│ │ └── ui/ # shadcn/ui components
│ ├── contexts/ # Global contexts
│ ├── hooks/ # Global hooks
│ └── lib/ # Utilities
├── server/ # Express backend (Ollama)
│ ├── routes/ # API routes
│ ├── types/ # Server types
│ └── utils/ # Server utilities
├── tests/ # Test files
│ ├── components/ # Component tests
│ ├── contexts/ # Context tests
│ ├── hooks/ # Hook tests
│ └── utils/ # Utility tests
└── public/ # Static assets
Core Technologies:
- React 19 with TypeScript
- Vite for blazing-fast builds
- Tailwind CSS v4 with shadcn/ui components
- Context API for centralized state management
- Custom Hooks for reusable logic
Context Providers:
- ApplicationContext: App-level state (sidebar, theme)
- ModelContext: Model management and selection
- ChatContext: Chat state and message handling
- AttachmentContext: File upload and processing
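The real providers expose more state than this, but the shape follows the standard React pattern: a provider owning state plus a typed accessor hook. A simplified ChatContext-style sketch (field names are assumptions, not the project's actual interface):

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

// Assumed shape; the real ChatContext exposes more (streaming state, etc.).
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface ChatContextValue {
  messages: ChatMessage[];
  addMessage: (message: ChatMessage) => void;
}

const ChatContext = createContext<ChatContextValue | null>(null);

export function ChatProvider({ children }: { children: ReactNode }) {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const addMessage = (message: ChatMessage) =>
    setMessages((prev) => [...prev, message]);

  return (
    <ChatContext.Provider value={{ messages, addMessage }}>
      {children}
    </ChatContext.Provider>
  );
}

// Typed accessor so consumers never see a null context.
export function useChat(): ChatContextValue {
  const ctx = useContext(ChatContext);
  if (!ctx) throw new Error("useChat must be used inside ChatProvider");
  return ctx;
}
```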
Key Services:
- model.service.ts: Ollama/WebLLM model interactions
- provider.service.ts: WebLLM initialization and streaming
- message.service.ts: Document processing
- chat.service.ts: Message formatting
Technologies:
- Express.js with TypeScript
- Ollama API for local AI inference
- Streaming responses for real-time chat (see the sketch below)
- Error handling and health checks
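The project's routes live in server/routes/; the sketch below only illustrates the general pattern of streaming Ollama output through Express with the ollama client (the /api/chat path and request shape are assumptions):

```typescript
import express from "express";
import ollama from "ollama";

const app = express();
app.use(express.json());

// Hypothetical route: streams the model's reply as plain text chunks.
app.post("/api/chat", async (req, res) => {
  const { model, messages } = req.body;
  try {
    const stream = await ollama.chat({ model, messages, stream: true });
    res.setHeader("Content-Type", "text/plain; charset=utf-8");
    for await (const part of stream) {
      res.write(part.message.content); // forward each chunk as it arrives
    }
    res.end();
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});

app.listen(8081, () => console.log("Ollama backend listening on 8081"));
```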
- Separation of Concerns: Clear boundaries between UI, logic, and data
- DRY Principle: Reusable utilities and hooks
- Performance Optimization: Memoization, streaming, RAF-based rendering (see the sketch after this list)
- Type Safety: Comprehensive TypeScript coverage
- Modularity: Easy to extend and maintain
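"RAF-based rendering" refers to batching streamed tokens so React re-renders at most once per animation frame rather than once per chunk. A simplified sketch of the idea, not the app's exact hook:

```typescript
import { useEffect, useRef, useState } from "react";

// Collects streamed text chunks and flushes them to React state at most
// once per animation frame, keeping re-renders cheap during fast streams.
function useRafBuffer(): { text: string; push: (chunk: string) => void } {
  const [text, setText] = useState("");
  const buffer = useRef("");
  const frame = useRef<number | null>(null);

  const push = (chunk: string) => {
    buffer.current += chunk;
    if (frame.current === null) {
      frame.current = requestAnimationFrame(() => {
        setText((prev) => prev + buffer.current);
        buffer.current = "";
        frame.current = null;
      });
    }
  };

  // Cancel any pending frame on unmount.
  useEffect(() => {
    return () => {
      if (frame.current !== null) cancelAnimationFrame(frame.current);
    };
  }, []);

  return { text, push };
}
```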
| Variable | Description | Default | Required |
|---|---|---|---|
| VITE_PORT | Frontend Vite server port | 8080 | No |
| VITE_API_PORT | Backend API server port | 8081 | No |
- React 19 - UI framework
- TypeScript - Type safety
- Vite - Build tool and dev server
- Tailwind CSS v4 - Styling
- shadcn/ui - UI components
- Radix UI - Primitive components
- Framer Motion - Animations
- Sonner - Toast notifications
- Express.js - Web framework
- TypeScript - Type safety
- ollama - Ollama client library
- Vitest - Testing framework
- React Testing Library - Component testing
- ESLint - Code linting
- TypeScript - Type checking
- Getting Started with Ollama - Blog Post
- Ollama Official Website
- Ollama Model Library
- Ollama JS Documentation
- WebLLM Official Website
- WebLLM GitHub Repository
MIT License - see LICENSE file for details