Commit e1b0c2b

Author: ChidcGithub committed
feat: v1.0.0-alpha - LLM Integration with Ollama
- Add local LLM integration powered by Ollama
- Auto-install Ollama with one click
- AI-powered AST node explanations (2D/3D)
- Model management (download, delete, select)
- aria2 support for faster downloads
- Fullscreen AI explanation modal
- Auto-retry mechanism for explanations
- New LLM settings panel and download wizard
- Black/white minimalist design

New files:
- backend/llm/ module (models, service, client, prompts, downloader)
- backend/routers/llm.py
- frontend/src/components/LLMSettings.js
- frontend/src/components/LLMDownloader.js
- frontend/src/components/LLMSettings.css
1 parent 3e12d02 commit e1b0c2b

29 files changed

Lines changed: 6830 additions & 33 deletions

README.md

Lines changed: 119 additions & 2 deletions
@@ -13,11 +13,11 @@
 
 # PyVizAST
 
-[![Version](https://img.shields.io/badge/Version-0.7.2-blue.svg)](https://github.com/ChidcGithub/PyVizAST)
+[![Version](https://img.shields.io/badge/Version-1.0.0--alpha-blue.svg)](https://github.com/ChidcGithub/PyVizAST)
 [![Python](https://img.shields.io/badge/Python-3.9%2B-brightgreen.svg)](https://www.python.org/)
 [![License](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](LICENSE)
 [![Platform](https://img.shields.io/badge/Platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)](https://github.com/ChidcGithub/PyVizAST)
-[![Status](https://img.shields.io/badge/Status-stable-brightgreen.svg)](https://github.com/ChidcGithub/PyVizAST)
+[![Status](https://img.shields.io/badge/Status-alpha-red.svg)](https://github.com/ChidcGithub/PyVizAST)
 ![CI Build Status](https://github.com/ChidcGithub/PyVizAST/actions/workflows/ci.yml/badge.svg)
 
 A Python AST Visualizer & Static Analyzer that transforms code into interactive graphs. Detect complexity, performance bottlenecks, and code smells with actionable refactoring suggestions.
@@ -48,6 +48,28 @@ A Python AST Visualizer & Static Analyzer that transforms code into interactive
 - **Beginner Mode**: Display Python documentation when hovering over AST nodes
 - **Challenge Mode**: Identify performance issues in provided code samples
 
+### LLM AI Features (v1.0.0-alpha)
+- **Local LLM Integration**: Powered by Ollama for privacy-first AI features
+- **Auto Install Ollama**: One-click automatic Ollama installation and configuration
+- **AI Node Explanations**: Get intelligent explanations for any AST node
+  - Code context-aware explanations
+  - Python documentation snippets
+  - Practical code examples
+  - Related concepts
+  - Fullscreen view for detailed reading
+  - Auto-retry on failure (up to 2 times)
+- **AI Challenge Generation**: Generate custom coding challenges with LLM
+- **AI Hints**: Get contextual hints during challenge solving
+- **Model Management**:
+  - Recommended models for code analysis (CodeLlama, DeepSeek Coder, etc.)
+  - One-click model download with aria2 acceleration
+  - Auto-select best model for code analysis
+  - Model status monitoring (installed/running)
+- **Settings Panel**: Configure LLM features:
+  - Enable/disable AI explanations, challenges, hints
+  - Temperature and token limits
+  - Model selection with "Load" button to enable LLM
+
 ### Easter Egg
 - Just explore the project and you'll find it :)
 
@@ -213,6 +235,101 @@ Contributions are welcome. Please submit pull requests to the main repository.
 
 <summary>Version History</summary>
 
+<details>
+<summary>v1.0.0-alpha (2026-03-16)</summary>
+
+**Major Release - LLM AI Integration**
+
+**New Features:**
+
+**LLM Service Module (`backend/llm/`):**
+- `models.py`: Data models for LLM configuration, status, and responses
+- `ollama_client.py`: Ollama API client for local LLM communication
+- `prompts.py`: Prompt templates for node explanations, challenges, and hints
+- `service.py`: Core LLM service with explanation/challenge/hint generation
+- `downloader.py`: Ollama auto-install and model download with aria2 acceleration
+
+**LLM API Endpoints (`backend/routers/llm.py`):**
+- `GET /api/llm/status` - Get LLM service status
+- `GET/POST /api/llm/config` - LLM configuration management
+- `GET /api/llm/models` - List installed models
+- `POST /api/llm/models/pull` - Download model with progress streaming
+- `DELETE /api/llm/models/{name}` - Delete model
+- `GET /api/llm/ollama/status` - Get Ollama installation status
+- `POST /api/llm/ollama/install` - Auto-install Ollama
+- `POST /api/llm/ollama/start` - Start Ollama server
+- `POST /api/llm/generate/explanation` - Generate node explanation
+- `POST /api/llm/generate/challenge` - Generate coding challenge
+- `POST /api/llm/generate/hint` - Generate contextual hint
+
+**Frontend Components:**
+- `LLMSettings.js`: Settings panel with model management and configuration
+- `LLMDownloader.js`: Quick setup wizard for Ollama installation
+- `LLMSettings.css`: Black/white minimalist design styles
+
+**AI Node Explanations (2D & 3D):**
+- AI explanations display in the node detail panel when LLM is enabled
+- Code context snippet shown with explanations
+- Fullscreen modal for detailed reading
+- Auto-retry on failure (up to 2 times)
+- Error display with manual retry button
+
+**Model Management:**
+- Recommended models: CodeLlama 7B/13B, Llama 3.2 3B, Mistral 7B, DeepSeek Coder, Qwen 2.5 Coder
+- One-click download with progress tracking
+- Auto-select best model for code analysis
+- "Use" button for installed models, "In Use" indicator for the current model
+
+**Ollama Auto-Install:**
+- Automatic platform detection (Windows/macOS/Linux)
+- One-click Ollama installation
+- Automatic server startup
+- Status monitoring (installed/running)
+
+**Configuration Options:**
+- Enable/disable LLM features
+- Toggle AI explanations, challenges, and hints separately
+- Temperature and max tokens settings
+- Model selection with "Load" button
+
+**Bug Fixes:**
+- Fixed model name case-sensitivity matching (codeLlama:7b vs codellama:7b)
+- Fixed async state update issue in the Load button
+- Fixed LLM explanation status checks
+- Fixed pullModel API to use POST with streaming
+
+**Dependencies Added:**
+- `httpx>=0.27.0` for LLM API calls
+
+**Files Added:**
+- `backend/llm/__init__.py`
+- `backend/llm/models.py`
+- `backend/llm/ollama_client.py`
+- `backend/llm/prompts.py`
+- `backend/llm/service.py`
+- `backend/llm/downloader.py`
+- `backend/routers/llm.py`
+- `frontend/src/components/LLMSettings.js`
+- `frontend/src/components/LLMDownloader.js`
+- `frontend/src/components/LLMSettings.css`
+
+**Files Modified:**
+- `backend/config.py` - Version bump
+- `backend/main.py` - LLM router registration
+- `backend/routers/__init__.py` - Export LLM router
+- `backend/routers/learning.py` - LLM-enhanced explanations
+- `backend/routers/challenges.py` - LLM challenge generation
+- `frontend/src/App.js` - LLM settings modal
+- `frontend/src/api.js` - LLM API functions
+- `frontend/src/components/Header.js` - AI button
+- `frontend/src/components/ASTVisualizer.js` - AI explanations
+- `frontend/src/components/ASTVisualizer3D.js` - AI explanations
+- `frontend/src/components/components.css` - LLM explanation styles
+- `frontend/src/components/LearnChallenge.css` - LLM toggle styles
+- `requirements.txt` - Added httpx
+
+</details>
+
 <details>
 <summary>v0.7.2 (2026-03-15)</summary>
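For orientation, the snippet below sketches how the endpoints listed above could be exercised with `httpx`, the dependency this commit adds to `requirements.txt`. Only the routes and HTTP methods come from the changelog; the base URL and the request/response field names (`temperature`, `max_tokens`, `node_type`, `code`) are illustrative assumptions, not part of this diff.

```python
# Hypothetical smoke test for the new LLM endpoints; payload field names are assumed.
import httpx

BASE_URL = "http://localhost:8000"  # assumed local backend address

with httpx.Client(base_url=BASE_URL, timeout=60.0) as client:
    # Check whether the LLM service and Ollama are available.
    print(client.get("/api/llm/status").json())

    # Adjust generation settings (field names are illustrative).
    client.post("/api/llm/config", json={"temperature": 0.2, "max_tokens": 512})

    # Request an explanation for a single AST node.
    resp = client.post(
        "/api/llm/generate/explanation",
        json={"node_type": "ListComp", "code": "[x * x for x in range(10)]"},
    )
    resp.raise_for_status()
    print(resp.json())
```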

backend/config.py

Lines changed: 2 additions & 2 deletions
@@ -3,6 +3,6 @@
 All version numbers are managed in this single file for easy updates.
 """
 
-VERSION = "0.7.2"
-BUILD = "2596"
+VERSION = "1.0.0-alpha"
+BUILD = "3230"
 FULL_VERSION = f"v{VERSION}"
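For reference, the f-string in the hunk above resolves the bumped constant as follows:

```python
# Values taken from the hunk above.
VERSION = "1.0.0-alpha"
FULL_VERSION = f"v{VERSION}"  # -> "v1.0.0-alpha"
```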

backend/llm/__init__.py

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+"""
+LLM Integration Module for PyVizAST
+
+This module provides optional LLM (Large Language Model) integration
+for enhanced code analysis, learning explanations, and challenge generation.
+
+Uses Ollama for local LLM inference with aria2 for fast model downloads.
+
+@author: Chidc
+@link: github.com/chidcGithub
+"""
+
+from .service import LLMService, get_llm_service
+from .ollama_client import OllamaClient
+from .models import (
+    LLMConfig,
+    LLMStatus,
+    ModelInfo,
+    DownloadProgress,
+    GeneratedExplanation,
+    GeneratedChallenge,
+)
+from .downloader import Aria2Downloader
+
+__all__ = [
+    "LLMService",
+    "get_llm_service",
+    "OllamaClient",
+    "LLMConfig",
+    "LLMStatus",
+    "ModelInfo",
+    "DownloadProgress",
+    "GeneratedExplanation",
+    "GeneratedChallenge",
+    "Aria2Downloader",
+]
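A minimal sketch of consuming the new package from elsewhere in the backend. The exported names come from the `__all__` above, but the constructors' required arguments and the service's methods are not shown in this diff, so the calls below are assumptions:

```python
# Illustrative only: the exports are real (see __all__ above); call signatures are assumed.
from backend.llm import LLMConfig, get_llm_service

service = get_llm_service()  # assumed zero-argument accessor for the shared service
config = LLMConfig()         # assumed to construct with default field values
print(type(service).__name__, type(config).__name__)
```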
