
Commit 2f77fe4

fix: updated docker command

1 parent aa8dd16 · commit 2f77fe4

File tree

3 files changed: +26 −16 lines

README.md

Lines changed: 16 additions & 12 deletions

````diff
@@ -46,7 +46,7 @@ Get AI-driven analysis of your code covering:
 Fix bugs with confidence. The **Diff View** shows exactly what the AI changed, side-by-side comparison so you understand every modification before accepting.
 
 #### 🔒 Air-Gapped Privacy
-Powered by **Phi-3 Mini** (2.3GB model) running locally through **Ollama**. Your code never touches the internet.
+Powered by **Qwen2.5-Coder** (1.5GB model) running locally through **Ollama**. Your code never touches the internet.
 
 ---
 
@@ -69,17 +69,21 @@ cd codepapi-ai
 docker-compose up -d
 ```
 
-That's it! Docker will automatically:
-1. Pull and run the Ollama AI engine
-2. Download the Phi-3 Mini model (first run only, ~2.3GB)
-3. Start the NestJS backend API
-4. Launch the React frontend UI
+### First Launch Setup
 
-### First Launch
+> ⚠️ **Important:** The first startup requires downloading AI models. Ensure you have a stable internet connection.
 
-> ⚠️ **Note:** The first startup will download the Phi-3 Mini model (~2.3GB). This is a one-time operation. Ensure you have a stable internet connection.
+After starting the containers, pull the required models:
 
-Once the containers are running:
+```bash
+# Pull Qwen2.5 Coder (primary model, ~1.5GB)
+docker exec ollama ollama pull qwen2.5-coder:1.5b
+
+# Pull Phi-3 Mini (optional, ~2.3GB alternative model)
+docker exec ollama ollama pull phi3:mini
+```
+
+Once the models are downloaded and containers are running:
 - **🖥️ Frontend:** Open http://localhost in your browser
 - **🔌 API:** Backend runs at http://localhost:3000
 - **🤖 AI Engine:** Ollama API available at http://localhost:11434
@@ -103,7 +107,7 @@ Once the containers are running:
 
 | Component | Technology | Purpose |
 | --- | --- | --- |
-| **AI Engine** | [Ollama](https://ollama.ai/) + Phi-3 Mini | Local LLM inference |
+| **AI Engine** | [Ollama](https://ollama.ai/) + Qwen2.5-Coder | Local LLM inference |
 | **Orchestration** | LangChain.js | AI workflow management |
 | **Backend** | NestJS (Node.js) | REST API & business logic |
 | **Frontend** | React + TailwindCSS + Lucide | Modern, responsive UI |
@@ -255,7 +259,7 @@ Before submitting a PR, ensure:
 While formal unit tests are encouraged:
 - **Manual testing** is acceptable for UI changes
 - **Test in Docker** to ensure consistency across environments
-- **Test with the Phi-3 Mini model** (not a different LLM)
+- **Test with the Qwen2.5-Coder model** (not a different LLM)
 - **Document test steps** in your PR
 
 ### Review Process
@@ -345,7 +349,7 @@ See `frontend/README.md` for detailed customization guides.
 
 - **Docker & Docker Compose** (recommended) or
 - **Node.js 18+** + **Ollama** (for local development)
-- **Minimum 4GB RAM** recommended (Phi-3 Mini model size)
+- **Minimum 2GB RAM** recommended (Qwen2.5-Coder model size)
 - **Stable internet** for initial model download
 - **macOS, Linux, or Windows** (with WSL2)
````
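The README points clients at the Ollama API on `http://localhost:11434`. As a minimal sketch, this is roughly the request body a caller could POST to Ollama's documented `/api/generate` endpoint once the containers are up; the prompt text is a made-up example, and the interface here is illustrative, not a library type.

```typescript
// Sketch: the shape of a request to the local Ollama engine the README
// exposes at http://localhost:11434 (Ollama's /api/generate endpoint).
interface GenerateRequest {
  model: string;   // must name a model already pulled via `ollama pull`
  prompt: string;
  stream: boolean; // false = return one complete response instead of chunks
}

const body: GenerateRequest = {
  model: "qwen2.5-coder:1.5b", // the primary model from the updated README
  prompt: "Explain what this function does in one sentence.", // hypothetical
  stream: false,
};

const payload: string = JSON.stringify(body);
console.log(payload);

// With the stack running, this payload could be POSTed to
// http://localhost:11434/api/generate (e.g. via fetch or curl).
```

Keeping `stream: false` is the simpler starting point; streaming returns newline-delimited JSON chunks that need extra handling.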

backend/src/converter/converter.service.ts

Lines changed: 8 additions & 1 deletion
```diff
@@ -8,7 +8,14 @@ export class ConverterService {
   constructor() {
     this.model = new ChatOllama({
       baseUrl: process.env.OLLAMA_URL || 'http://localhost:11434',
-      model: 'phi3:mini',
+      model: 'qwen2.5-coder:1.5b',
+      temperature: 0.1,
+      numPredict: 2048,
+      numCtx: 4096,
+      topK: 40,
+      topP: 0.9,
+      repeatPenalty: 1.1,
+
     });
   }
 
```
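Beyond swapping the model, this change pins down sampling behavior. As a gloss on the diff above, here are the same settings as a plain annotated object; the field names mirror the ChatOllama options in the diff, but this object is illustrative, not the langchain class itself.

```typescript
// The generation settings added in this commit, annotated.
// Plain object for illustration only, not a ChatOllama instance.
const generationOptions = {
  model: "qwen2.5-coder:1.5b", // 1.5B-parameter coding model served by Ollama
  temperature: 0.1,    // near-deterministic sampling, sensible for code edits
  numPredict: 2048,    // upper bound on tokens generated per response
  numCtx: 4096,        // context window (prompt + history) given to the model
  topK: 40,            // restrict sampling to the 40 most likely tokens...
  topP: 0.9,           // ...then to the smallest set covering 90% probability
  repeatPenalty: 1.1,  // mildly penalize tokens that already appeared
};

console.log(Object.keys(generationOptions).length);
```

A low temperature plus top-k/top-p trimming biases the model toward its single most plausible completion, which suits mechanical code conversion better than creative generation.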

docker-compose.yml

Lines changed: 2 additions & 3 deletions
```diff
@@ -9,8 +9,7 @@ services:
       - "11434:11434"
     volumes:
       - ollama_data:/root/.ollama
-    entrypoint: /bin/sh
-    command: -c "ollama serve & sleep 5 && ollama pull phi3:mini && wait"
+
 
   # 2. NestJS Backend
   backend:
@@ -34,7 +33,7 @@ services:
     ports:
       - "80:80"
     depends_on:
-      - backend
+      - backend
 
 volumes:
   ollama_data:
```
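With the custom entrypoint gone, the ollama container falls back to the image's default (`ollama serve` alone) and models are pulled manually after startup. A hedged sketch, not part of this commit: one way to let dependents wait for a ready engine is a Compose healthcheck, assuming the image ships the `ollama` CLI (which `ollama list` uses to query the running server).

```yaml
# Sketch only (not in this commit): gate dependents on a ready Ollama server.
services:
  ollama:
    healthcheck:
      test: ["CMD", "ollama", "list"]  # succeeds once the server responds
      interval: 10s
      timeout: 5s
      retries: 5
  backend:
    depends_on:
      ollama:
        condition: service_healthy
```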
