Fix bugs with confidence. The **Diff View** shows exactly what the AI changed in a side-by-side comparison, so you understand every modification before accepting it.
#### 🔒 Air-Gapped Privacy
Powered by **Qwen2.5-Coder** (1.5GB model) running locally through **Ollama**. Your code never touches the internet.
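
Because everything runs against the local Ollama server, you can confirm inference stays on your machine by hitting Ollama's standard `/api/generate` endpoint directly (a minimal sketch; the prompt is just an illustration):

```bash
# Query the local model directly; the request never leaves localhost
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:1.5b",
  "prompt": "Review this JavaScript line for bugs: const n = arr.lenght;",
  "stream": false
}'
```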
---
```bash
cd codepapi-ai
docker-compose up -d
```
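
Before moving on, you can confirm the containers came up with standard Docker Compose commands (the `ollama` service name is an assumption based on the commands later in this section):

```bash
docker-compose ps              # all services should show "Up"
docker-compose logs -f ollama  # follow the AI engine's startup logs
```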
### First Launch Setup
> ⚠️ **Important:** The first startup requires downloading AI models. Ensure you have a stable internet connection.
After starting the containers, pull the required models:
```bash
# Pull Qwen2.5 Coder (primary model, ~1.5GB)
docker exec ollama ollama pull qwen2.5-coder:1.5b

# Pull Phi-3 Mini (optional, ~2.3GB alternative model)
docker exec ollama ollama pull phi3:mini
```
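
To verify the models were pulled successfully, list what the Ollama container has available (assuming the container is named `ollama`, as in the commands above):

```bash
docker exec ollama ollama list
```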
Once the models are downloaded and containers are running:
- **🖥️ Frontend:** Open http://localhost in your browser
- **🔌 API:** Backend runs at http://localhost:3000
- **🤖 AI Engine:** Ollama API available at http://localhost:11434
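
As a quick smoke test, you can probe the services from the command line; `/api/tags` is Ollama's standard model-listing route, while the frontend check is just a generic HTTP probe:

```bash
curl -s http://localhost:11434/api/tags  # installed models as JSON
curl -I http://localhost                 # frontend should answer HTTP 200
```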
| Component | Technology | Purpose |
| --- | --- | --- |
| **AI Engine** | [Ollama](https://ollama.ai/) + Qwen2.5-Coder | Local LLM inference |
| **Orchestration** | LangChain.js | AI workflow management |
| **Backend** | NestJS (Node.js) | REST API & business logic |