Your AI coding assistant — run locally or in the cloud with Ollama.
No API keys required. Just you and your code.
MandoCode is an AI coding assistant powered by Semantic Kernel and Ollama. Run locally or connect to Ollama cloud — no API keys required. It gives you Claude-Code-style project awareness — reading, writing, searching, and planning across your entire codebase — without ever leaving your terminal.
It understands any file type: C#, JavaScript, TypeScript, Python, CSS, HTML, JSON, config files, and more.
```bash
# Prerequisites: .NET 8.0 SDK + Ollama installed and running
dotnet tool install -g MandoCode
mandocode
```

Or build from source:

```bash
git clone https://github.com/DevMando/MandoCode.git
cd MandoCode
dotnet build src/MandoCode/MandoCode.csproj
dotnet run --project src/MandoCode/MandoCode.csproj -- /path/to/your/project
```

On first run, MandoCode uses minimax-m2.5:cloud by default. Run /config inside the app or configure from the command line:
```bash
mandocode config set model qwen2.5-coder:14b
mandocode config set endpoint http://localhost:11434
```

| Area | Feature | Description |
|---|---|---|
| AI | Project-aware assistant | Reads, writes, deletes, and searches your entire codebase |
| AI | Streaming responses | Real-time output with animated spinners |
| AI | Task planner | Auto-detects complex requests and breaks them into steps |
| AI | Fallback function parsing | Handles models that output tool calls as raw JSON |
| UI | Diff approvals | Color-coded diffs with approve / deny / redirect |
| UI | Markdown rendering | Rich terminal output — headers, tables, code blocks, quotes |
| UI | Syntax highlighting | C#, Python, JavaScript/TypeScript, Bash |
| UI | Clickable file links | OSC 8 hyperlinks for file paths |
| UI | Terminal theme detection | Auto-adapts colors for light and dark terminals |
| UI | Taskbar progress | Windows Terminal integration during task execution |
| Input | `/` command autocomplete | Slash commands with dropdown navigation |
| Input | `@` file references | Attach file content to any prompt |
| Input | `!` shell escape | Run shell commands inline (`!git status`, `!ls`) |
| Input | `/copy` and `/copy-code` | Copy responses or code blocks to clipboard |
| Music | Lofi + synthwave | Bundled tracks with volume, genre switching, waveform visualizer |
| Config | Configuration wizard | Guided setup with model selection and connection testing |
| Reliability | Retry + deduplication | Exponential backoff and duplicate call prevention |
| Education | `/learn` command | LLM education guide with optional AI educator chat |
Type / to see the autocomplete dropdown, or ! to run a shell command.
| Command | What it does |
|---|---|
| `/help` | Show commands and usage examples |
| `/config` | Open configuration (wizard or view settings) |
| `/learn` | Interactive guide to LLMs and local AI |
| `/copy` | Copy last AI response to clipboard |
| `/copy-code` | Copy code blocks from last response |
| `/command <cmd>` | Run a shell command |
| `/music` | Start playing music |
| `/music-stop` | Stop playback |
| `/music-pause` | Pause / resume |
| `/music-next` | Next track |
| `/music-vol <0-100>` | Set volume |
| `/music-lofi` | Switch to lofi |
| `/music-synthwave` | Switch to synthwave |
| `/music-list` | List available tracks |
| `/clear` | Clear conversation history |
| `/exit` | Exit MandoCode |
| `!<cmd>` | Shell escape (e.g., `!git status`) |
| `!cd <path>` | Change project root directory |
```
You type a prompt
        |
MandoCode adds project context (@files, system prompt)
        |
Semantic Kernel sends to Ollama (local or cloud model)
        |
AI responds with text + function calls
        |
File operations go through diff approval
        |
Rich markdown rendered in your terminal
```
The AI has sandboxed access to your project through a FileSystemPlugin with 9 functions: list files, glob search, read, write, delete files/folders, text search, and path resolution. All operations are locked to your project root — path traversal is blocked.
Models with tool/function calling support work best with MandoCode.
Cloud models (no GPU required — run remotely via Ollama):
| Model | Notes |
|---|---|
| `minimax-m2.5:cloud` | Default — excellent tool support |
| `kimi-k2.5:cloud` | Strong general-purpose |
| `qwen3-coder:480b-cloud` | Code-focused |
Local models (fully offline, runs on your hardware):
| Model | VRAM | Notes |
|---|---|---|
| `qwen3:8b` | ~5-6 GB | Recommended — good speed/quality balance |
| `qwen2.5-coder:7b` | ~5-6 GB | Code-focused |
| `qwen2.5-coder:14b` | ~10-12 GB | Stronger coding model |
| `mistral` | ~5 GB | General purpose |
| `llama3.1` | ~5-6 GB | Meta's model |
MandoCode validates model compatibility on startup. Run /learn for a detailed guide on model sizes and hardware requirements.
Located at `~/.mandocode/config.json`:

```json
{
  "ollamaEndpoint": "http://localhost:11434",
  "modelName": "minimax-m2.5:cloud",
  "modelPath": null,
  "temperature": 0.7,
  "maxTokens": 4096,
  "ignoreDirectories": [],
  "enableDiffApprovals": true,
  "enableTaskPlanning": true,
  "enableTokenTracking": true,
  "enableThemeCustomization": true,
  "enableFallbackFunctionParsing": true,
  "functionDeduplicationWindowSeconds": 5,
  "maxRetryAttempts": 2,
  "music": {
    "volume": 0.5,
    "genre": "lofi",
    "autoPlay": false
  }
}
```

| Key | Default | Description |
|---|---|---|
| `ollamaEndpoint` | `http://localhost:11434` | Ollama server URL |
| `modelName` | `minimax-m2.5:cloud` | Model to use |
| `modelPath` | `null` | Optional path to a local GGUF model file |
| `temperature` | `0.7` | Response creativity (0.0 = focused, 1.0 = creative) |
| `maxTokens` | `4096` | Maximum response token length |
| `ignoreDirectories` | `[]` | Additional directories to exclude from file scanning |
| `enableDiffApprovals` | `true` | Show diffs and prompt for approval before file writes/deletes |
| `enableTaskPlanning` | `true` | Enable automatic task planning for complex requests |
| `enableTokenTracking` | `true` | Show session token totals and per-response token costs |
| `enableThemeCustomization` | `true` | Detect terminal theme and apply a curated ANSI palette |
| `enableFallbackFunctionParsing` | `true` | Parse function calls from text output |
| `functionDeduplicationWindowSeconds` | `5` | Time window to prevent duplicate function calls |
| `maxRetryAttempts` | `2` | Max retry attempts for transient errors |
| `music.volume` | `0.5` | Music volume (0.0 - 1.0) |
| `music.genre` | `lofi` | Default genre (lofi or synthwave) |
| `music.autoPlay` | `false` | Auto-start music on launch |
```bash
mandocode config show                # Display current configuration
mandocode config init                # Create default configuration file
mandocode config set <key> <value>   # Set a configuration value
mandocode config path                # Show configuration file location
mandocode config --help              # Show help
```

| Variable | Overrides |
|---|---|
| `OLLAMA_ENDPOINT` | `ollamaEndpoint` in config |
| `OLLAMA_MODEL` | `modelName` in config |
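The precedence rule in the table above can be sketched as follows (illustrative Python; MandoCode itself is C#, and `resolve_endpoint` is a hypothetical helper, not part of its API):

```python
import os

def resolve_endpoint(config: dict) -> str:
    # The environment variable, when set, wins over the config file value;
    # otherwise fall back to the config entry, then the documented default.
    return os.environ.get(
        "OLLAMA_ENDPOINT",
        config.get("ollamaEndpoint", "http://localhost:11434"),
    )
```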
When the AI writes or deletes a file, MandoCode intercepts the operation and shows a color-coded diff before applying changes.
- Red lines — content being removed
- Light blue lines — content being added
- Dim lines — unchanged context (3 lines around each change)
- Long unchanged sections are collapsed with a summary
| Option | Behavior |
|---|---|
| Approve | Apply this change |
| Approve - Don't ask again | Auto-approve future changes to this file (per-file), or all files (global) |
| Deny | Reject the change; the AI is told it was denied |
| Provide new instructions | Redirect the AI with custom feedback |
For new files, "don't ask again" sets a global bypass — all future writes and deletes are auto-approved for the session. For existing files, the bypass is per-file.
Even when auto-approved, diffs are still rendered so you can follow along.
File deletions show all existing content as red removals with a deletion warning. The same approval options apply.
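A unified diff with three lines of context, like the one MandoCode shows, can be produced in a few lines of Python (a sketch only; the real renderer is MandoCode's own C# implementation with color coding and collapsing):

```python
import difflib

def render_diff(old: str, new: str, context: int = 3) -> str:
    # Unified diff: '-' lines are removals, '+' lines are additions,
    # with `context` unchanged lines around each change.
    lines = difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="before", tofile="after",
        lineterm="", n=context,
    )
    return "\n".join(lines)
```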
To disable diff approvals:

```bash
mandocode config set diffApprovals false
```

Type @ anywhere in your input (after a space or at position 0) to trigger file autocomplete. A dropdown appears showing your project files, filtered as you type.
- Type your prompt and hit `@` — a file dropdown appears
- Type a partial name to filter (e.g., `Conf`) — matches narrow down
- Use arrow keys to navigate, Tab or Enter to select
- The selected path is inserted (e.g., `@src/MandoCode/Models/MandoCodeConfig.cs`)
- Continue typing and press Enter to submit
- MandoCode reads the referenced file(s) and injects the content as context for the AI
```
explain @src/MandoCode/Services/AIService.cs to me
what does the ProcessFileReferences method do in @src/MandoCode/Components/App.razor
refactor @src/MandoCode/Models/LoadingMessages.cs to use fewer spinners
```
Multiple @ references in one prompt are supported. Files over 10,000 characters are automatically truncated.
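The truncation step can be sketched like this (illustrative Python assuming the 10,000-character limit above; `attach_file` and the `[truncated]` marker are hypothetical, not MandoCode's actual format):

```python
MAX_CHARS = 10_000  # documented limit for @file attachments

def attach_file(path: str, content: str) -> str:
    # Oversized files are cut down before being injected as prompt context
    if len(content) > MAX_CHARS:
        content = content[:MAX_CHARS] + "\n... [truncated]"
    return f"--- {path} ---\n{content}"
```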
| Key | Action |
|---|---|
| `@` | Open file dropdown |
| Type | Filter files by name |
| Up/Down | Navigate dropdown |
| Tab/Enter | Insert selected file path (does not submit) |
| Escape | Close dropdown, keep text |
| Backspace | Re-filter, or close if you delete past @ |
MandoCode automatically detects complex requests and offers to break them into a step-by-step plan before execution.
The planner activates for requests like:
- `Create a REST API service with authentication and rate limiting for the user module` (12+ words with an imperative verb and a scope indicator)
- `Build an application that handles user registration and sends email confirmations`
- Numbered lists with 3+ items
- Requests over 400 characters
Simple questions, short prompts, and single-action operations (delete, remove, read, show, list, find, search, rename) bypass planning automatically.
- Detection — heuristics identify complex requests
- Plan generation — AI creates numbered steps
- User approval — review the plan table, then choose: execute, skip planning, or cancel
- Step-by-step execution — each step runs with progress tracking
- Error handling — skip failed steps or cancel the entire plan
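The detection heuristics above can be approximated in a few lines (a rough sketch using the thresholds and bypass verbs stated in this section; the actual C# logic also checks for scope indicators and other signals):

```python
SINGLE_ACTION_VERBS = {"delete", "remove", "read", "show",
                       "list", "find", "search", "rename"}

def looks_complex(prompt: str) -> bool:
    # Single-action operations bypass planning entirely
    words = prompt.split()
    if words and words[0].lower() in SINGLE_ACTION_VERBS:
        return False
    # Very long requests always trigger the planner
    if len(prompt) > 400:
        return True
    # 12+ word requests are planning candidates
    return len(words) >= 12
```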
See Task Planner Documentation for full technical details.
The /learn command helps new users understand local LLMs and get set up.
| Scenario | What happens |
|---|---|
| Startup, no Ollama detected | Automatically displays the educational guide instead of a bare error |
| `/learn` typed, no model running | Displays the static educational guide |
| `/learn` typed, model is running | Shows the guide, then offers to enter AI educator chat mode |
- What are Open-Weight LLMs? — Free, private, offline models vs. cloud AI
- Model Sizes & Hardware — Parameters, quantization, VRAM requirements
- Cloud vs Local Models — Ollama cloud models (no GPU) vs local models
- Recommended Models — Table of cloud and local options
- Getting Started — Step-by-step setup instructions
When Ollama is running, /learn offers an interactive chat mode where the AI explains LLM concepts using beginner-friendly language. Type /clear to return to normal mode.
The AI has sandboxed access to your project directory through these functions:
| Function | Description |
|---|---|
| `list_all_project_files()` | Recursively lists all project files, excluding ignored directories |
| `list_files_match_glob_pattern(pattern)` | Lists files matching a glob pattern (`*.cs`, `src/**/*.ts`) |
| `read_file_contents(relativePath)` | Reads complete file content with line count |
| `write_file(relativePath, content)` | Writes/creates a file (creates directories as needed) |
| `delete_file(relativePath)` | Deletes a file |
| `create_folder(relativePath)` | Creates a new directory |
| `delete_folder(relativePath)` | Deletes a directory and all its contents |
| `search_text_in_files(pattern, searchText)` | Searches file contents for text, returns paths and line numbers |
| `get_absolute_path(relativePath)` | Converts a relative path to absolute |
Security: All operations are sandboxed to the project root. Path traversal is blocked with a separator-boundary check.
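A separator-boundary check of this kind looks roughly like the following (illustrative Python; MandoCode's actual check lives in its C# FileSystemPlugin):

```python
import os

def is_inside_root(root: str, relative_path: str) -> bool:
    root = os.path.abspath(root)
    target = os.path.abspath(os.path.join(root, relative_path))
    # Requiring the separator after the root stops a sibling like
    # /project-evil from passing a naive prefix check against /project.
    return target == root or target.startswith(root + os.sep)
```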
Ignored directories: .git, node_modules, bin, obj, .vs, .vscode, packages, dist, build, __pycache__, .idea — plus any custom directories from your config.
Transient errors (HTTP failures, timeouts, socket errors) are retried with exponential backoff:
```
Attempt 1 -> fail -> wait 500ms
Attempt 2 -> fail -> wait 1000ms
Attempt 3 -> fail -> throw
```
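That backoff schedule can be sketched as (illustrative Python; MandoCode's retry logic is in its C# services):

```python
import time

def with_retry(operation, max_attempts=3, base_delay=0.5):
    # Exponential backoff: 0.5s after attempt 1, 1.0s after attempt 2,
    # then the final failure is re-raised to the caller.
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```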
| Operation | Window | Matching |
|---|---|---|
| Read operations | 2 seconds | Function name + arguments |
| Write operations | 5 seconds (configurable) | Function name + path + content hash (SHA256) |
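The write-side matching described above (function name + path + SHA256 content hash within a time window) could be sketched as (illustrative only; `WriteDeduplicator` is a hypothetical name, not MandoCode's class):

```python
import hashlib
import time

class WriteDeduplicator:
    def __init__(self, window_seconds: float = 5.0):
        self.window = window_seconds
        self._seen = {}  # (name, path, content hash) -> last-seen timestamp

    def is_duplicate(self, name: str, path: str, content: str) -> bool:
        # Identical write calls within the window are flagged as duplicates
        key = (name, path, hashlib.sha256(content.encode()).hexdigest())
        now = time.monotonic()
        last = self._seen.get(key)
        self._seen[key] = now
        return last is not None and now - last < self.window
```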
Some local models output function calls as JSON text instead of proper tool calls. MandoCode detects and parses:
- Standard: `{"name": "func", "parameters": {...}}`
- OpenAI-style: `{"function_call": {"name": "func", "arguments": {...}}}`
- Tool calls: `{"tool_calls": [{"function": {"name": "func", "arguments": {...}}}]}`
AI responses are rendered as rich terminal output:
| Markdown | Rendered as |
|---|---|
| `**bold**` | Bold text |
| `*italic*` | Italic text |
| `` `code` `` | Cyan highlighted |
| Fenced code blocks | Bordered panels with syntax highlighting |
| Tables | Spectre.Console table widgets |
| `# Headers` | Bold yellow with horizontal rules |
| `- lists` | Indented bullet points |
| `> quotes` | Grey-bordered block quotes |
| URLs | Clickable OSC 8 hyperlinks |
Syntax highlighting supports C#, Python, JavaScript/TypeScript, and Bash with language-specific keyword coloring.
- Per-response: `[~1.2k in, 847 out]` after each AI response
- Session total: `Total [4.2k tokens]` above the prompt
- File estimates: `@file` attachments show estimated token cost (chars/4)
Function executions use semaphore-based signaling, ensuring each task plan step fully completes before the next begins.
```
src/MandoCode/
  Components/   Razor UI (App, Banner, HelpDisplay, ConfigMenu, Prompt)
  Services/     Core logic (AI, markdown, syntax, tokens, music, diffs)
  Models/       Data models, config, system prompts, educational content
  Plugins/      Semantic Kernel plugins (FileSystem)
  Audio/        Bundled lofi and synthwave MP3 tracks
docs/           Feature documentation
Program.cs      Entry point and DI registration
```
| Package | Purpose |
|---|---|
| Microsoft.SemanticKernel 1.72.0 | LLM orchestration and plugin system |
| Ollama Connector 1.72.0-alpha | Ollama model integration |
| RazorConsole.Core 0.5.0-alpha | Terminal UI with Razor components |
| Markdig 1.0.0 | Markdown parsing |
| NAudio 2.2.1 | Audio playback |
| FileSystemGlobbing 10.0.3 | Glob pattern matching |