|
13 | 13 |
|
14 | 14 | # PyVizAST |
15 | 15 |
|
16 | | -[](https://github.com/ChidcGithub/PyVizAST) |
| 16 | +[](https://github.com/ChidcGithub/PyVizAST) |
17 | 17 | [](https://www.python.org/) |
18 | 18 | [](LICENSE) |
19 | 19 | [](https://github.com/ChidcGithub/PyVizAST) |
20 | | -[](https://github.com/ChidcGithub/PyVizAST) |
| 20 | +[](https://github.com/ChidcGithub/PyVizAST) |
22 | 22 |
|
23 | 23 | A Python AST Visualizer & Static Analyzer that transforms code into interactive graphs. Detect complexity, performance bottlenecks, and code smells with actionable refactoring suggestions. |
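The graphs are built from Python's own syntax tree. As a quick illustration of the underlying primitive (the stdlib `ast` module, not PyVizAST's internals), this sketch parses a snippet and walks the nodes a visualizer would lay out:

```python
import ast

source = "def greet(name):\n    return f'Hello, {name}!'"
tree = ast.parse(source)

# Walk every node and print its type and source line, roughly the
# structure a visualizer traverses when laying out the graph.
for node in ast.walk(tree):
    print(f"{type(node).__name__:<12} line {getattr(node, 'lineno', '-')}")
```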
@@ -48,6 +48,28 @@ A Python AST Visualizer & Static Analyzer that transforms code into interactive |
48 | 48 | - **Beginner Mode**: Display Python documentation when hovering over AST nodes |
49 | 49 | - **Challenge Mode**: Identify performance issues in provided code samples |
50 | 50 |
|
| 51 | +### LLM AI Features (v1.0.0-alpha) |
| 52 | +- **Local LLM Integration**: Powered by Ollama for privacy-first AI features (see the request sketch after this list) |
| 53 | +- **Auto Install Ollama**: One-click automatic Ollama installation and configuration |
| 54 | +- **AI Node Explanations**: Get intelligent explanations for any AST node |
| 55 | + - Code context-aware explanations |
| 56 | + - Python documentation snippets |
| 57 | + - Practical code examples |
| 58 | + - Related concepts |
| 59 | + - Fullscreen view for detailed reading |
| 60 | + - Auto-retry on failure (up to 2 times) |
| 61 | +- **AI Challenge Generation**: Generate custom coding challenges with LLM |
| 62 | +- **AI Hints**: Get contextual hints during challenge solving |
| 63 | +- **Model Management**: |
| 64 | + - Recommended models for code analysis (CodeLlama, DeepSeek Coder, etc.) |
| 65 | + - One-click model download with aria2 acceleration |
| 66 | + - Auto-select best model for code analysis |
| 67 | + - Model status monitoring (installed/running) |
| 68 | +- **Settings Panel**: Configure LLM features: |
| 69 | + - Enable/disable AI explanations, challenges, hints |
| 70 | + - Temperature and token limits |
| 71 | + - Model selection with "Load" button to enable LLM |
| 72 | + |
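Under the hood these features talk to a locally running Ollama server over HTTP. A minimal sketch of that round trip, assuming Ollama is on its default port and `codellama:7b` has been pulled (both assumptions; this is illustrative code, not PyVizAST's internals):

```python
import httpx

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address (assumed)

def explain_node(code_snippet: str, node_type: str) -> str:
    """Ask the local model to explain one AST node in context."""
    resp = httpx.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": "codellama:7b",  # assumed to be installed
            "prompt": f"Explain the {node_type} node in this Python code:\n{code_snippet}",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120.0,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(explain_node("for x in range(10):\n    print(x)", "For"))
```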
51 | 73 | ### Easter Egg |
52 | 74 | - Just explore the project and you'll find it :) |
53 | 75 |
|
@@ -213,6 +235,101 @@ Contributions are welcome. Please submit pull requests to the main repository. |
213 | 235 |
|
214 | 236 | <summary>Version History</summary> |
215 | 237 |
|
| 238 | +<details> |
| 239 | +<summary>v1.0.0-alpha (2026-03-16)</summary> |
| 240 | + |
| 241 | +**Major Release - LLM AI Integration** |
| 242 | + |
| 243 | +**New Features:** |
| 244 | + |
| 245 | +**LLM Service Module (`backend/llm/`):** |
| 246 | +- `models.py`: Data models for LLM configuration, status, and responses |
| 247 | +- `ollama_client.py`: Ollama API client for local LLM communication (see the sketch after this list) |
| 248 | +- `prompts.py`: Prompt templates for node explanations, challenges, and hints |
| 249 | +- `service.py`: Core LLM service with explanation/challenge/hint generation |
| 250 | +- `downloader.py`: Ollama auto-install and model download with aria2 acceleration |
| 251 | + |
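A hypothetical sketch of the shape of `ollama_client.py`; the real module may differ. It only illustrates an async client built on `httpx` against Ollama's documented `/api/tags` and `/api/generate` endpoints:

```python
import httpx

class OllamaClient:
    """Minimal async client for a local Ollama server (illustrative only)."""

    def __init__(self, base_url: str = "http://localhost:11434") -> None:
        self.base_url = base_url

    async def list_models(self) -> list[str]:
        """Names of locally installed models, via GET /api/tags."""
        async with httpx.AsyncClient() as client:
            resp = await client.get(f"{self.base_url}/api/tags")
            resp.raise_for_status()
            return [m["name"] for m in resp.json().get("models", [])]

    async def generate(self, model: str, prompt: str,
                       temperature: float = 0.7, max_tokens: int = 1024) -> str:
        """One non-streaming completion, via POST /api/generate."""
        async with httpx.AsyncClient(timeout=120.0) as client:
            resp = await client.post(
                f"{self.base_url}/api/generate",
                json={
                    "model": model,
                    "prompt": prompt,
                    "stream": False,
                    "options": {"temperature": temperature, "num_predict": max_tokens},
                },
            )
            resp.raise_for_status()
            return resp.json()["response"]
```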
| 252 | +**LLM API Endpoints (`backend/routers/llm.py`)** (usage sketch after the list): |
| 253 | +- `GET /api/llm/status` - Get LLM service status |
| 254 | +- `GET/POST /api/llm/config` - LLM configuration management |
| 255 | +- `GET /api/llm/models` - List installed models |
| 256 | +- `POST /api/llm/models/pull` - Download model with progress streaming |
| 257 | +- `DELETE /api/llm/models/{name}` - Delete model |
| 258 | +- `GET /api/llm/ollama/status` - Get Ollama installation status |
| 259 | +- `POST /api/llm/ollama/install` - Auto-install Ollama |
| 260 | +- `POST /api/llm/ollama/start` - Start Ollama server |
| 261 | +- `POST /api/llm/generate/explanation` - Generate node explanation |
| 262 | +- `POST /api/llm/generate/challenge` - Generate coding challenge |
| 263 | +- `POST /api/llm/generate/hint` - Generate contextual hint |
| 264 | + |
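Exercising these endpoints from a script; the URL paths come from the list above, while the backend address and the JSON payload fields are assumptions about the request schema:

```python
import httpx

BASE = "http://localhost:8000"  # assumed default backend address

# Check whether the LLM service is up and which model is loaded.
status = httpx.get(f"{BASE}/api/llm/status").json()
print("LLM status:", status)

# Ask for an AI explanation of a list comprehension node.
explanation = httpx.post(
    f"{BASE}/api/llm/generate/explanation",
    json={"node_type": "ListComp", "code": "[x * x for x in range(10)]"},  # assumed schema
    timeout=120.0,
).json()
print(explanation)
```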
| 265 | +**Frontend Components:** |
| 266 | +- `LLMSettings.js`: Settings panel with model management and configuration |
| 267 | +- `LLMDownloader.js`: Quick setup wizard for Ollama installation |
| 268 | +- `LLMSettings.css`: Black/white minimalist design styles |
| 269 | + |
| 270 | +**AI Node Explanations (2D & 3D):** |
| 271 | +- AI explanations display in node detail panel when LLM is enabled |
| 272 | +- Code context snippet shown with explanations |
| 273 | +- Fullscreen modal for detailed reading |
| 274 | +- Auto-retry on failure (up to 2 times; sketched below) |
| 275 | +- Error display with manual retry button |
| 276 | + |
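The retry behavior lives in the frontend JavaScript, but the idea is simple to sketch in Python: try the request, retry up to twice on failure, then surface the error so the UI can offer a manual retry. Function and payload names here are illustrative:

```python
import httpx

def fetch_explanation_with_retry(url: str, payload: dict, retries: int = 2) -> dict:
    """POST a request, retrying up to `retries` times before giving up."""
    last_error: Exception | None = None
    for _ in range(retries + 1):  # initial attempt plus up to 2 retries
        try:
            resp = httpx.post(url, json=payload, timeout=120.0)
            resp.raise_for_status()
            return resp.json()
        except (httpx.HTTPError, ValueError) as exc:
            last_error = exc
    # All attempts failed; the caller shows the error and a retry button.
    raise RuntimeError("explanation failed after retries") from last_error
```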
| 277 | +**Model Management:** |
| 278 | +- Recommended models: CodeLlama 7B/13B, Llama 3.2 3B, Mistral 7B, DeepSeek Coder, Qwen 2.5 Coder |
| 279 | +- One-click download with progress tracking (streaming sketch after this list) |
| 280 | +- Auto-select best model for code analysis |
| 281 | +- "Use" button for installed models, "In Use" indicator for current model |
| 282 | + |
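Ollama itself streams pull progress as JSON lines on `/api/pull`; PyVizAST's downloader layers aria2 acceleration on top, which is not shown here. A sketch of the base streaming protocol:

```python
import json
import httpx

def pull_model(name: str) -> None:
    """Stream download progress for an Ollama model (plain pull, no aria2)."""
    with httpx.stream("POST", "http://localhost:11434/api/pull",
                      json={"name": name}, timeout=None) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            event = json.loads(line)  # one progress event per line
            done, total = event.get("completed"), event.get("total")
            if done and total:
                print(f"{event['status']}: {100 * done // total}%")
            else:
                print(event.get("status", ""))

pull_model("codellama:7b")  # assumed model choice
```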
| 283 | +**Ollama Auto-Install:** |
| 284 | +- Automatic platform detection (Windows/macOS/Linux; see the sketch below) |
| 285 | +- One-click Ollama installation |
| 286 | +- Automatic server startup |
| 287 | +- Status monitoring (installed/running) |
| 288 | + |
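A rough sketch of the platform detection step; the real `downloader.py` may differ, and only the Linux one-liner is Ollama's official scripted install (the macOS and Windows branches point at the download page):

```python
import platform
import shutil

def ollama_install_hint() -> str:
    """Detect the platform and suggest an install route for Ollama."""
    if shutil.which("ollama"):
        return "already installed"
    system = platform.system()
    if system == "Linux":
        return "curl -fsSL https://ollama.com/install.sh | sh"
    if system == "Darwin":
        return "download the macOS app from https://ollama.com/download"
    if system == "Windows":
        return "download the installer from https://ollama.com/download"
    return f"unsupported platform: {system}"

print(ollama_install_hint())
```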
| 289 | +**Configuration Options** (illustrative shape sketched after the list): |
| 290 | +- Enable/disable LLM features |
| 291 | +- Toggle AI explanations, challenges, hints separately |
| 292 | +- Temperature and max tokens settings |
| 293 | +- Model selection with "Load" button |
| 294 | + |
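An illustrative shape for this configuration; the field names are guesses, not necessarily what `backend/llm/models.py` uses:

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    enabled: bool = False              # master switch for all LLM features
    explanations_enabled: bool = True  # per-feature toggles
    challenges_enabled: bool = True
    hints_enabled: bool = True
    model: str = ""                    # set via the "Load" button in the UI
    temperature: float = 0.7           # sampling temperature
    max_tokens: int = 1024             # generation cap per request
```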
| 295 | +**Bug Fixes:** |
| 296 | +- Fixed case-sensitive model name matching (`codeLlama:7b` vs `codellama:7b`; see the snippet after this list) |
| 297 | +- Fixed an async state update issue in the "Load" button |
| 298 | +- Fixed LLM explanation status checks |
| 299 | +- Fixed the `pullModel` API call to use POST with streaming |
| 300 | + |
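The case-sensitivity fix boils down to comparing normalized names; a minimal illustration, not the project's exact code:

```python
def same_model(a: str, b: str) -> bool:
    """Compare Ollama model names ignoring case and stray whitespace."""
    return a.strip().lower() == b.strip().lower()

assert same_model("codeLlama:7b", "codellama:7b")
```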
| 301 | +**Dependencies Added:** |
| 302 | +- `httpx>=0.27.0` for LLM API calls |
| 303 | + |
| 304 | +**Files Added:** |
| 305 | +- `backend/llm/__init__.py` |
| 306 | +- `backend/llm/models.py` |
| 307 | +- `backend/llm/ollama_client.py` |
| 308 | +- `backend/llm/prompts.py` |
| 309 | +- `backend/llm/service.py` |
| 310 | +- `backend/llm/downloader.py` |
| 311 | +- `backend/routers/llm.py` |
| 312 | +- `frontend/src/components/LLMSettings.js` |
| 313 | +- `frontend/src/components/LLMDownloader.js` |
| 314 | +- `frontend/src/components/LLMSettings.css` |
| 315 | + |
| 316 | +**Files Modified:** |
| 317 | +- `backend/config.py` - Version bump |
| 318 | +- `backend/main.py` - LLM router registration |
| 319 | +- `backend/routers/__init__.py` - Export LLM router |
| 320 | +- `backend/routers/learning.py` - LLM-enhanced explanations |
| 321 | +- `backend/routers/challenges.py` - LLM challenge generation |
| 322 | +- `frontend/src/App.js` - LLM settings modal |
| 323 | +- `frontend/src/api.js` - LLM API functions |
| 324 | +- `frontend/src/components/Header.js` - AI button |
| 325 | +- `frontend/src/components/ASTVisualizer.js` - AI explanations |
| 326 | +- `frontend/src/components/ASTVisualizer3D.js` - AI explanations |
| 327 | +- `frontend/src/components/components.css` - LLM explanation styles |
| 328 | +- `frontend/src/components/LearnChallenge.css` - LLM toggle styles |
| 329 | +- `requirements.txt` - Added httpx |
| 330 | + |
| 331 | +</details> |
| 332 | + |
216 | 333 | <details> |
217 | 334 | <summary>v0.7.2 (2026-03-15)</summary> |
218 | 335 |
|
|