A comprehensive AI-powered development toolkit featuring live session logging, real-time constraint monitoring, semantic knowledge management, and multi-agent analysis, supporting Claude Code, GitHub Copilot CLI, OpenCode, and Mastracode. Zero-cost LLM routing via existing Claude Code and GitHub Copilot subscriptions.
```bash
# Install the system (safe - prompts before any system changes)
./install.sh

# Start Claude Code with all features
coding

# Or use a specific agent
coding --claude
coding --copilot
coding --opencode
coding --mastra

# Clean start (kills all orphaned processes, frees ports)
coding --force

# Query the local LLM from the command line (Docker Model Runner)
llm "Explain this error message"
cat file.js | llm "Review this code"
```

The coding stack runs in Docker; there is no native fallback.
```bash
# Start Claude or Copilot - Docker services start automatically
coding --claude
coding --copilot
```

Benefits: persistent MCP servers, shared browser automation across sessions, isolated database containers, and no duplicate containers when switching agents.
MCP Configuration: claude-mcp-launcher.sh wires the stdio-proxy → SSE bridge so the agent talks to the containerized MCP servers.
Unified Agent Launching: All agents are wrapped in tmux sessions via the shared scripts/tmux-session-wrapper.sh, providing a consistent status bar across Claude, Copilot, OpenCode, and Mastracode. The shared orchestrator (scripts/launch-agent-common.sh) handles service startup, monitoring, session management, and auto-installation of missing agent CLIs; adding a new agent requires only a single config file in config/agents/. The service orchestrator (start-services-robust.js) treats Redis, Qdrant, and Memgraph as built-ins of the coding-services container, so it never spawns duplicates.
Multi-Agent Support: While Claude Code is the primary and default agent (coding or coding --claude), the system is fully agent-agnostic. Any coding agent can be integrated with a single config file in config/agents/. Currently supported:
| Agent | Launch Command | Detection |
|---|---|---|
| Claude Code (default) | `coding` or `coding --claude` | Native transcript support |
| GitHub Copilot CLI | `coding --copilot` | Pipe-pane I/O capture |
| OpenCode | `coding --opencode` | Pipe-pane I/O capture |
| Mastracode | `coding --mastra` | Lifecycle hook transcripts |
All agents get the same infrastructure: tmux session wrapping, status line, health monitoring, LSL session logging, knowledge management, constraint enforcement, and shared skills (see Skills System). Missing agent CLIs are auto-installed on first launch (with user confirmation).
See Agent Integration Guide for adding new agents.
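For a feel of how small such a config can be, here is a hypothetical example; the field names and schema below are illustrative assumptions, not the actual config/agents/ format:

```json
{
  "name": "myagent",
  "launchFlag": "--myagent",
  "cli": {
    "command": "myagent",
    "install": "npm install -g myagent-cli"
  },
  "capture": "pipe-pane"
}
```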
Health System: The health verifier reads cached commit info from cache-metadata.json (no .git inside the container) and uses supervisorctl for service restarts.
The Docker stack runs 4 containers (coding-services, Qdrant, Memgraph, Redis) with 10 internal services managed by supervisord, using ~1.75 GB memory total. The only host-side service is the LLM CLI Proxy (port 12435), which bridges to host-local CLI tools like Claude Code and GitHub Copilot.
See Architecture Report for full system overview, and the Docker Deployment Guide for container configuration.
The launcher automatically adapts to your network environment:
- Corporate network detection: 3-layer detection (environment variable, SSH probe, HTTPS fallback) with 5-second timeouts
- Proxy auto-configuration: detects local proxy services (proxydetox) and configures environment variables automatically
- Docker auto-start: launches Docker Desktop on demand, with hung-process recovery and a 45-second timeout
- Tested in all combinations: corporate/public network, with/without proxy, Claude/Copilot, validated by 17 end-to-end tests
No manual network configuration needed for most environments. See Getting Started - Network Setup for details.
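The detection order described above can be sketched as follows; the environment variable name and the injected probe functions are assumptions for illustration, not the launcher's actual implementation:

```javascript
// Sketch of 3-layer corporate-network detection with a per-probe timeout.
// CORPORATE_NETWORK and the probe callbacks are hypothetical names.
async function detectCorporateNetwork({ env, sshProbe, httpsProbe }, timeoutMs = 5000) {
  // Layer 1: an explicit environment variable wins immediately.
  if (env.CORPORATE_NETWORK === '1') return { corporate: true, layer: 'env' };

  // Layers 2-3: network probes, each capped at timeoutMs.
  const withTimeout = (p) =>
    Promise.race([p, new Promise((resolve) => setTimeout(() => resolve(false), timeoutMs))]);
  if (await withTimeout(sshProbe())) return { corporate: true, layer: 'ssh' };
  if (await withTimeout(httpsProbe())) return { corporate: true, layer: 'https' };
  return { corporate: false, layer: null };
}
```

A caller would pass real SSH/HTTPS probes; here the layers and timeout handling are the point, not the probe internals.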
The installer follows a non-intrusive policy: it will NEVER modify system tools without explicit consent.
- Confirmation prompts before installing any system packages (Node.js, Python, jq)
- Skip options: `y` (approve), `N` (skip), `skip-all` (skip all system changes)
- Shell config backup with timestamped files before any modifications
- Syntax verification after shell config changes
Next Steps: Getting Started Guide
- Health System - Real-time monitoring, auto-healing, and status line indicators
- Live Session Logging - Real-time conversation classification and routing
- Constraint Monitoring - PreToolUse hook enforcement for code quality
- Knowledge Management - Capture, visualize, and share development insights
- Trajectory Generation - Automated project analysis and documentation
- Observational Memory - Per-exchange LLM observations from live sessions, browsable dashboard
- Multi-Agent Analysis - 11 specialized AI agents for comprehensive code analysis
The unified LLM layer (lib/llm/) intelligently routes requests to maximize cost savings:
- Subscription-First: Claude Code → GitHub Copilot → Groq → Anthropic → OpenAI
- 10 Providers: 2 subscription (CLI), 5 cloud API, 2 local, 1 mock
- Automatic Fallback: Quota exhausted? Seamlessly fall back to paid APIs
- Quota Tracking: Persistent usage tracking with exponential backoff
- Cost Savings: ~$50-100/month for active development (all UKB/LSL analysis is $0)
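The subscription-first policy can be illustrated with a small sketch; the provider objects and the quota field are simplified assumptions, not the real lib/llm/ router:

```javascript
// Subscription-first routing sketch: prefer subscription providers with
// remaining quota, then fall back to paid/cloud providers.
const providers = [
  { name: 'claude-code', subscription: true, quotaLeft: 0 },      // quota exhausted
  { name: 'github-copilot', subscription: true, quotaLeft: 5 },
  { name: 'groq', subscription: false, quotaLeft: Infinity },     // paid API fallback
];

function pickProvider(list) {
  // Drop providers with no quota left, then sort subscription providers first.
  const usable = list.filter((p) => p.quotaLeft > 0);
  usable.sort((a, b) => Number(b.subscription) - Number(a.subscription));
  return usable.length ? usable[0].name : null;
}

console.log(pickProvider(providers)); // github-copilot
```

With Claude Code's quota exhausted, the chain falls through to Copilot; only when all subscription quota is gone does a paid provider get picked.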
Provider Status:
- ✅ Claude Code (sonnet/opus) - Zero cost via subscription
- ✅ GitHub Copilot (gpt-4o-mini/gpt-4o) - Zero cost via subscription
- ✅ Groq (llama-3.1/3.3) - Fast, low-cost API fallback
- ✅ Anthropic, OpenAI, Gemini, GitHub Models - Cloud API fallback
- ✅ DMR, Ollama - Local fallback (no API costs)
See LLM Architecture for details.
- Claude Code - Full MCP server integration (default agent)
- GitHub Copilot CLI - Pipe-pane capture with session logging
- OpenCode - Pipe-pane capture with session logging
- Agent Abstraction API - Unified adapter system for any coding agent
- Docker Support - Containerized deployment with HTTP/SSE transport for MCP servers
The system uses a unified Agent Abstraction API (lib/agent-api/) that enables consistent features across different coding agents:
- BaseAdapter - Common interface for all agent adapters
- StatuslineProvider - Unified status display (rendered via tmux status bar)
- HooksManager - Bridge between native hook systems and unified hooks
- TranscriptAdapter - Unified session log format (LSL)
See Agent Abstraction API for details.
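A rough sketch of the adapter pattern described above; the class and method names are illustrative assumptions, and the real BaseAdapter interface in lib/agent-api/ may differ:

```javascript
// Hypothetical adapter shapes: a common base class plus one concrete adapter.
class BaseAdapter {
  constructor(name) { this.name = name; }
  start() { throw new Error('not implemented'); }          // launch the agent CLI
  transcriptPath() { throw new Error('not implemented'); } // where LSL logs land
}

class CopilotAdapter extends BaseAdapter {
  constructor() { super('copilot'); }
  start() { return `tmux session for ${this.name}`; }
  transcriptPath() { return '.specstory/history/'; }
}

const adapter = new CopilotAdapter();
console.log(adapter.start()); // tmux session for copilot
```

The base class fixes the contract; each agent only fills in how it launches and where its transcript lives.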
Automatic health monitoring and self-healing with real-time dashboard
- Pre-prompt health verification with 3-layer resilience
- Auto-healing failed services (Docker-aware)
- Dashboard at http://localhost:3032
- Service supervision hierarchy ensures services stay running
- Status Line System - Real-time indicators via unified tmux status bar (all agents)
Real-time conversation classification and routing with security redaction
- 5-layer classification system
- Multi-project support with foreign session tracking
- 98.3% security effectiveness
- Zero data loss architecture
Real-time development state tracking and comprehensive project analysis
- AI-powered activity classification (exploring, implementing, verifying, etc.)
- Status line integration
- Automated project capability documentation
Real-time code quality enforcement through PreToolUse hooks
- 18 active constraints (security, architecture, code quality, PlantUML, documentation)
- Severity-based enforcement (CRITICAL/ERROR blocks, WARNING/INFO allows)
- Dashboard monitoring at http://localhost:3030
- Compliance scoring (0-10 scale)
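The severity policy amounts to a small decision function; this is a hedged sketch of the behavior described above, not the monitor's actual code:

```javascript
// Severity-based enforcement sketch: CRITICAL/ERROR block the tool call,
// WARNING/INFO allow it but still surface feedback.
function enforce(violations) {
  const blocking = violations.filter(
    (v) => v.severity === 'CRITICAL' || v.severity === 'ERROR'
  );
  return {
    allow: blocking.length === 0,
    feedback: violations.map((v) => `${v.severity}: ${v.message}`),
  };
}

const result = enforce([
  { severity: 'WARNING', message: 'function exceeds length guideline' },
  { severity: 'ERROR', message: 'hardcoded credential detected' },
]);
console.log(result.allow); // false
```

A session with only WARNING/INFO violations proceeds normally; the feedback list is what a PreToolUse hook would return to the agent either way.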
Two Complementary Approaches for knowledge capture and retrieval:
- Manual/Batch (UKB): Git analysis and interactive capture for team sharing
- Online (Continuous Learning): Real-time session learning with semantic search
- Visualization (VKB): Web-based graph visualization at http://localhost:8080
- Ontology Classification: 4-layer classification pipeline
Real-time per-exchange observations from live coding sessions, inspired by the observational memory concepts in the Mastra codebase:
- Structured LLM summaries: Each exchange summarized into Intent/Approach/Artifacts/Result via subscription providers
- Multi-agent capture: All four agents (Claude, Copilot, OpenCode, Mastracode) generate observations
- Dashboard: Browsable at http://localhost:3032/observations with filters, search, and a compact view
- Auto-fallback: LLM proxy automatically tries the next provider on failure (health tracking with cooldowns)
- Transcript converters: Batch-convert historical Claude JSONL, Copilot events, and .specstory files
- Zero-cost summarization: Routes through subscription providers (Claude Max, Copilot Enterprise)
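An observation produced by this pipeline might look roughly like the following; the field names mirror the Intent/Approach/Artifacts/Result structure above, but the exact schema is an assumption:

```json
{
  "intent": "Fix flaky Redis health check",
  "approach": "Added retry with backoff before marking the service unhealthy",
  "artifacts": ["scripts/start-services-robust.js"],
  "result": "Health check passes consistently across container restarts"
}
```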
- System Health Dashboard - Real-time health visualization
- MCP Constraint Monitor - PreToolUse hook enforcement
- MCP Semantic Analysis - 11-agent AI analysis system
- VKB Visualizer - Knowledge graph visualization
- All Integrations - Complete integration list
Reusable workflow instructions shared across all agents: drop a .md file into .claude/commands/ and it propagates to Claude, Copilot, and OpenCode automatically.
- Installation & Setup - Complete installation guide
- Provider Configuration - LLM provider setup
- Troubleshooting - Common issues and solutions
Real-time conversation classification and routing with enterprise-grade security:
- 3-Layer Classification: Path analysis → Keyword matching → Semantic analysis
- 98.3% Security Effectiveness: Enhanced redaction with bypass protection
- Multi-User Support: Secure user isolation with SHA-256 hash generation
- Zero Data Loss: Every exchange properly classified and preserved
- 200x Performance: Optimized bulk processing with sub-millisecond tracking
Status: ✅ Production Ready
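The 3-layer cascade can be sketched as follows; the category labels and the keyword pattern are invented for illustration, and the real classifier's layers are richer:

```javascript
// Illustrative cascade: each layer runs only if the previous one is inconclusive.
// 'local', 'infra', and 'foreign' are example categories, not the real taxonomy.
function classifyExchange(exchange) {
  // Layer 1: path analysis - files under the project root imply a local session.
  if (exchange.paths.some((p) => p.startsWith(exchange.projectRoot))) return 'local';
  // Layer 2: keyword matching on the exchange text.
  if (/deploy|terraform/i.test(exchange.text)) return 'infra';
  // Layer 3: semantic analysis would normally call an LLM; stubbed here.
  return 'foreign';
}

console.log(classifyExchange({
  projectRoot: '/work/coding',
  paths: ['/work/coding/lib/llm/router.js'],
  text: 'refactor the router',
})); // local
```

Cheap deterministic layers resolve most exchanges; only the leftovers pay for a semantic (LLM) pass.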
PreToolUse hook integration for real-time code quality enforcement:
- 18 Active Constraints: Security, architecture, code quality, PlantUML, documentation
- Severity-Based: CRITICAL/ERROR blocks, WARNING/INFO allows with feedback
- Dashboard Monitoring: Live violation feed (port 3030)
- REST API: Programmatic access (port 3031)
- Testing Framework: Automated and interactive constraint testing
Status: ✅ Production Ready
Capture, organize, and visualize development insights with git-based team collaboration:
- UKB (Update Knowledge Base): Auto git analysis + interactive capture
- VKB (Visualize Knowledge Base): Web-based graph visualization
- Graph Database: Agent-agnostic persistent storage (Graphology + Level)
- Git-Tracked JSON: Team collaboration via pretty JSON exports
- graph-sync CLI: Manual export/import/status operations
- Auto-Sync: Import on startup, export on changes (5s debounce)
- Team Isolation: Multi-team support with conflict resolution
- Domain-Specific: Automatic domain knowledge bases per team
Status: ✅ Production Ready
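The 5-second auto-sync delay behaves like a standard trailing debounce; this is a minimal sketch under that assumption, not the actual graph-sync code:

```javascript
// Trailing debounce: rapid successive graph changes collapse into one export.
function makeDebouncedExport(exportFn, delayMs = 5000) {
  let timer = null;
  return function onGraphChange() {
    if (timer) clearTimeout(timer);                         // reset the quiet period
    timer = setTimeout(() => { timer = null; exportFn(); }, delayMs);
  };
}

// Three rapid changes trigger exactly one export after the quiet period.
let exportCount = 0;
const onChange = makeDebouncedExport(() => { exportCount += 1; }, 50);
onChange(); onChange(); onChange();
setTimeout(() => console.log(exportCount), 150); // 1
```

This is why a burst of knowledge-base edits produces a single git-tracked JSON export rather than one per change.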
11 specialized agents for comprehensive code analysis:
- CoordinatorAgent - Workflow orchestration
- GitHistoryAgent - Git commits and architectural decisions
- VibeHistoryAgent - Conversation file processing
- SemanticAnalysisAgent - Deep code analysis (uses LLM)
- WebSearchAgent - External pattern research
- InsightGenerationAgent - Insight generation with PlantUML (uses LLM)
- ObservationGenerationAgent - Structured UKB-compatible observations
- QualityAssuranceAgent - Output validation with auto-correction (uses LLM)
- ContentValidationAgent - Stale entity detection and knowledge refresh
- PersistenceAgent - Knowledge base persistence
- DeduplicationAgent - Semantic duplicate detection
Debug Mode: Full debugging support with single-step execution, substep inspection, and mock LLM mode for cost-free testing. See UKB Workflow System.
Status: ✅ Production Ready
```bash
# Start visualization server
vkb
# View at http://localhost:8080

# Manual sync operations
graph-sync status   # View sync status
graph-sync export   # Export all teams to JSON
graph-sync import   # Import all teams from JSON
graph-sync sync     # Full bidirectional sync
```

```bash
# Start dashboard (automatic with install)
cd integrations/mcp-constraint-monitor
npm run dashboard   # http://localhost:3030

# API access
curl http://localhost:3031/api/status
curl http://localhost:3031/api/violations
```

```bash
# Automatic during Claude Code sessions
# Session files in .specstory/history/
# Status line shows, e.g.:
#   2130-2230 (3min) coding
# with indicators for active logging, imminent window close, and the detected activity
```

Claude Code:
```
# Repository analysis workflow
start_workflow {
  "workflowType": "repository-analysis",
  "parameters": {
    "repository": ".",
    "depth": 25,
    "significanceThreshold": 6
  }
}
```
VSCode Copilot:

```bash
# Via HTTP API
curl -X POST http://localhost:8765/api/semantic/analyze-repository \
  -H "Content-Type: application/json" \
  -d '{"repository": ".", "depth": 25}'
```

```bash
# Set API keys
export ANTHROPIC_API_KEY="your-key-here"
export OPENAI_API_KEY="optional-fallback"

# Configure preferred agent
export CODING_AGENT="claude"   # or "copilot"
```

See Getting Started for:
- API key setup
- MCP configuration
- Network setup (proxies/firewalls)
- Verification steps
```bash
# Test all components (check-only mode - safe, no modifications)
./scripts/test-coding.sh

# Interactive mode - prompts before each repair
./scripts/test-coding.sh --interactive

# Auto-repair mode - fixes coding-internal issues only
./scripts/test-coding.sh --auto-repair

# Check MCP servers
cd integrations/mcp-server-semantic-analysis && npm test

# Check constraint monitor
cd integrations/mcp-constraint-monitor && npm test
```

Note: The test script defaults to --check-only mode and will NEVER auto-install system packages.
- ✅ Health System - 4-layer monitoring with auto-healing
- ✅ Live Session Logging - Real-time classification with 98.3% security
- ✅ Constraint Monitoring - 18 active constraints with PreToolUse hooks
- ✅ Knowledge Management - UKB/VKB with MCP integration
- ✅ Multi-Agent Analysis - 11 agents with workflow orchestration
- ✅ Observational Memory - Per-exchange LLM observations with dashboard
- ✅ Status Line System - Real-time indicators via unified tmux status bar
- ✅ Cross-Platform - macOS, Linux, Windows support
This is a personal development toolkit. For issues or suggestions:
- Check Troubleshooting
- Review Architecture Documentation
- Create an issue with detailed information
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright © 2025 Frank Wornle
- Documentation Hub: docs/README.md
- Installation Guide: docs/getting-started.md
- LLM Providers & Local Models: docs/provider-configuration.md
- Agent Abstraction API: docs/architecture/agent-abstraction-api.md
- Observational Memory: docs-content/core-systems/observational-memory.md
- Skills System: docs/skills-system.md
- Adding Agents: docs/agent-integration-guide.md
- Docker Architecture: docs/architecture-report.md
- Docker Deployment: docker/README.md
- System Overview: docs/system-overview.md
- Core Systems: docs/core-systems/
- Integrations: docs/integrations/
- Knowledge Management: docs/knowledge-management/