Fix audit logging #27
Conversation
Problem: PR Fast-Editor#20 introduced new code but unintentionally left it in an inconsistent state.

Changes implemented:
1. Files cleaned up
2. Missing dns-logger.js added
3. New code added

Testing:
- success: npm run test:unit
- success: Claude connects to Lynkr correctly
---
Thanks @MichaelAnders, I will take a look at this one today.
Integrates the audit logging infrastructure with the orchestrator's runAgentLoop function to capture all LLM request/response pairs for compliance and debugging purposes.

Changes:
- Add crypto module import for correlation ID generation
- Add getDestinationUrl() helper to resolve provider endpoints
- Instantiate audit logger at the start of runAgentLoop
- Log LLM requests before invokeModel() with correlation IDs
- Log LLM responses after invokeModel() with token usage and latency
- Handle streaming, success, and error response cases
- Add logs/ directory to .gitignore

The audit logger is a no-op when disabled, ensuring zero overhead when not in use. All logs use structured JSON format with correlation IDs that link request/response pairs (a sketch follows below).

Implements the plan from PR Fast-Editor#27 for intelligent audit logging.
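A minimal sketch of what this correlation-ID logging could look like. The class shape, `runAgentLoopStep` signature, and log entry fields here are illustrative assumptions, not the actual Lynkr implementation:

```js
// Hypothetical sketch of correlation-ID audit logging (names and log shape
// are assumptions, not the code from this PR).
const crypto = require("crypto");

class AuditLogger {
  constructor({ enabled }) {
    this.enabled = enabled;
  }

  // Emits one structured JSON line; a no-op when disabled, so zero overhead.
  log(entry) {
    if (!this.enabled) return;
    process.stdout.write(JSON.stringify({ ts: new Date().toISOString(), ...entry }) + "\n");
  }
}

async function runAgentLoopStep(auditLogger, payload, invokeModel) {
  // One correlation ID links the request entry to its response (or error) entry.
  const correlationId = crypto.randomUUID();

  auditLogger.log({ type: "llm_request", correlationId, payload });

  const started = Date.now();
  try {
    const response = await invokeModel(payload);
    auditLogger.log({
      type: "llm_response",
      correlationId,
      latencyMs: Date.now() - started,
      usage: response.usage, // token usage, when the provider reports it
    });
    return response;
  } catch (err) {
    auditLogger.log({ type: "llm_error", correlationId, message: err.message });
    throw err;
  }
}
```

Logging the request before `invokeModel()` and the response after it means even a hung or failed call leaves a request entry behind, which is what makes the correlation ID useful for debugging.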
---
I have added the missing code, as my plans for the next 3 days changed this morning. Testing:

---

I have another PR coming up based on these changes (Ollama-specific and "correct" this time). I'll create the new PR once you've accepted this one; I don't want to mix the two, as it would get too big and complicated. IGNORE 9ae1d30 - it is a local branch, WIP
This PR builds on PR Fast-Editor#27's audit logging infrastructure to fix critical issues when using Ollama as the LLM provider.

## Problems Fixed

### 1. Ollama JSON String Tool Call Parsing

**Problem**: Ollama returns tool calls as JSON strings in the content field instead of structured tool_calls arrays, causing malformed displays and execution failures.

**Example**: `{"name": "Bash", "parameters": {"command": "ls"}}` displayed as raw JSON instead of executing.

**Solution**: Enhanced ollama-utils.js to detect and parse JSON strings, converting them to proper Anthropic tool_use format (a sketch follows at the end of this description).

### 2. File Path Extraction Endless Loop

**Problem**: Commands like "ls" created an endless loop. The CLI sends two parallel requests: the main command AND a file path extraction request expecting XML format. Ollama kept returning Grep tool calls instead of the requested `<filepaths></filepaths>` XML, creating an infinite loop.

**Solution**: Added detection for file path extraction requests and strip tools from the payload, forcing Ollama to return plain XML text (see the second sketch below).

### 3. Enhanced Audit Logging

Builds on PR Fast-Editor#27's fixes for:

- **Missing Response Content**: Ollama's `message.message` format now captured
- **Model Name Mismatch**: Both requested and actual model logged correctly
- **User Messages Buried**: Actual user input extracted from wrapped content

**New enhancements**:

- Query-response pairing with correlation IDs for an easy audit trail
- Application log mirroring to the audit file for debugging (configurable levels)
- Oversized error logging integration

## Changes

**Core Fixes**:

- src/clients/ollama-utils.js: Add JSON string detection and parsing
  - New: isToolCallJsonString() helper
  - New: parseToolCallFromJsonString() helper
  - Enhanced: buildAnthropicResponseFromOllama() for centralized conversion
- src/orchestrator/index.js: Add file path extraction detection
  - Detects XML format requests from the CLI
  - Strips tools to force text-only responses
  - Prevents endless tool call loops

**Audit Logging Enhancements**:

- src/logger/audit-logger.js: Add query-response pairing
  - New: logQueryResponsePair() method
  - Correlates requests with responses using correlation IDs
- src/logger/index.js: Add application log mirroring
  - Mirrors app logs to the audit file (configurable: none/error/warn/info/debug)
  - Provides comprehensive debugging visibility
- src/config/index.js: Add appLogLevel configuration
- .env.example: Document all audit logging options (+51 lines)

**Testing**:

- test/ollama-json-string-parsing.test.js: 10 comprehensive tests (all passing)

## Configuration

New environment variables:

```bash
# Mirror application logs to audit file for debugging
LLM_AUDIT_APP_LOG_LEVEL=info  # none|error|warn|info|debug
```

## Verification

1. Start Lynkr: `~/start_lynkr`
2. Test the ls command: works without the endless loop
3. Test tool calls: properly parsed and executed
4. Check audit logs: `tail -50 logs/llm-audit.log | jq`

Builds on PR Fast-Editor#27 audit logging infrastructure.
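A minimal sketch of how the two helpers named above might behave. The helper names come from this PR's change list, but the implementation details (validation heuristic, synthetic tool_use ID) are assumptions for illustration:

```js
// Hedged sketch of isToolCallJsonString()/parseToolCallFromJsonString();
// the actual PR code may differ in validation and ID generation.
const crypto = require("crypto");

// Heuristic check: does the content string look like a serialized tool call?
function isToolCallJsonString(content) {
  if (typeof content !== "string") return false;
  const trimmed = content.trim();
  if (!trimmed.startsWith("{") || !trimmed.endsWith("}")) return false;
  try {
    const parsed = JSON.parse(trimmed);
    return typeof parsed.name === "string" && typeof parsed.parameters === "object";
  } catch {
    return false;
  }
}

// Convert the JSON string into an Anthropic-style tool_use content block.
function parseToolCallFromJsonString(content) {
  const parsed = JSON.parse(content.trim());
  return {
    type: "tool_use",
    id: `toolu_${crypto.randomBytes(12).toString("hex")}`, // synthetic ID (assumption)
    name: parsed.name,
    input: parsed.parameters,
  };
}

// The raw Ollama content from the example above:
const raw = '{"name": "Bash", "parameters": {"command": "ls"}}';
if (isToolCallJsonString(raw)) {
  console.log(parseToolCallFromJsonString(raw));
  // -> { type: "tool_use", id: "toolu_...", name: "Bash", input: { command: "ls" } }
}
```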
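For the endless-loop fix, a hedged sketch of the tool-stripping idea. The PR does not show its detection criteria, so `isFilePathExtractionRequest()` and the `<filepaths>` marker check are hypothetical illustrations of the approach:

```js
// Hedged sketch: detect the CLI's file-path-extraction side request and
// strip tools so Ollama must answer with plain XML text. The detection
// heuristic here is an assumption, not the code in src/orchestrator/index.js.
function isFilePathExtractionRequest(payload) {
  // The side-channel request asks for <filepaths></filepaths> XML,
  // so look for that marker anywhere in the prompt text.
  const text = JSON.stringify(payload.messages || []);
  return text.includes("<filepaths>");
}

function prepareOllamaPayload(payload) {
  if (isFilePathExtractionRequest(payload)) {
    // Without tools in the payload, the model cannot reply with a Grep
    // tool call, which is what kept the loop alive.
    const { tools, tool_choice, ...rest } = payload;
    return rest;
  }
  return payload;
}
```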
---
Thank you @MichaelAnders |
After adding more comprehensive logging to track context flow through the request pipeline, I discovered a bug in Ollama system prompt handling.

BUG FIX: Ollama System Prompt Handling

- Problem: System prompts were being flattened into the first user message for Ollama
- Impact: Models received instructions as conversational context instead of system directives, causing meta-commentary responses instead of tool execution
- Fix: Keep the system prompt separate for Ollama, the same pattern as other providers (a sketch follows below)
- Files: src/orchestrator/index.js, src/clients/databricks.js

Diagnostic Logging Added:

- MEMORY_DEBUG: Track memory retrieval, formatting, and injection (src/memory/retriever.js)
- SESSION_DEBUG: Track session reuse vs creation, age, and history size (src/sessions/store.js)
- CONTEXT_FLOW: Track system prompt transformations through the pipeline (src/orchestrator/index.js)

The diagnostic logging revealed how system prompts flow through the request pipeline and helped identify where the flattening was occurring.
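A hedged sketch of the shape of this fix. The function names and message shapes are illustrative assumptions, not the exact code in src/clients/databricks.js:

```js
// Illustrative before/after of the system prompt handling fix.

// Before (buggy): the system prompt was flattened into the first user turn,
// so the model read its instructions as conversation and replied with
// meta-commentary instead of executing tools.
function buildOllamaMessagesBuggy(systemPrompt, history) {
  const [first, ...rest] = history;
  return [{ role: "user", content: `${systemPrompt}\n\n${first.content}` }, ...rest];
}

// After (fixed): keep the system prompt as a dedicated system-role message,
// the same pattern used for the other providers.
function buildOllamaMessagesFixed(systemPrompt, history) {
  return [{ role: "system", content: systemPrompt }, ...history];
}

const history = [{ role: "user", content: "ls" }];
console.log(buildOllamaMessagesFixed("You are a coding agent with tools.", history));
// -> [ { role: "system", ... }, { role: "user", content: "ls" } ]
```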
---
Thank you Michael for all your contributions to the Lynkr project.
## Summary
This PR addresses issues with the audit logging system and fixes a critical bug in Ollama system prompt handling that was causing models to produce meta-commentary instead of executing commands.
## Changes
### 1. Fix Audit Logging Infrastructure (bd9c1a6)

### 2. Integrate Audit Logging with Orchestrator (e69d6f4)

### 3. Fix Ollama System Prompt Handling (8b2b7cf) 🐛
Critical Bug Fix: System prompts were being flattened into user message content for Ollama, causing models to see instructions as conversational context instead of system directives.
Impact: Models produced meta-commentary like "It seems like you're ready to start a conversation..." instead of executing commands like "ls" or "pwd".
Fix: Keep the system prompt separate as the first message with role: "system", consistent with all other providers (OpenRouter, Azure OpenAI, llama.cpp, LM Studio).

Diagnostic Logging Added (a sketch of the tag format follows below):

- MEMORY_DEBUG: Track memory retrieval, formatting, and injection
- SESSION_DEBUG: Track session reuse vs creation, age, and history size
- CONTEXT_FLOW: Track system prompt transformations through the pipeline

5 files changed, +344/-28 lines
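A small sketch of what tag-prefixed diagnostic lines like these might look like; the exact log format used in the PR is an assumption here:

```js
// Hedged sketch of tag-prefixed diagnostic logging; format is illustrative.
function debugLog(tag, fields) {
  console.log(`[${tag}] ${JSON.stringify(fields)}`);
}

// For example, around the system prompt transformation and session lookup:
debugLog("CONTEXT_FLOW", { stage: "before_provider", systemPromptChars: 1832 });
debugLog("SESSION_DEBUG", { reused: true, ageMs: 42150, historySize: 12 });
debugLog("MEMORY_DEBUG", { retrieved: 3, injected: 2 });
```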
## Testing
npm run test:unit

## Files Changed
Total: 18 files, ~3,444 additions, ~55 deletions
Key files:
- src/orchestrator/index.js - Audit integration + Ollama system prompt fix
- src/clients/databricks.js - Ollama system prompt handling
- src/logger/audit-logger.js - New audit logging infrastructure
- src/logger/deduplicator.js - Deduplication for audit logs
- src/memory/retriever.js - Memory debug logging
- src/sessions/store.js - Session debug logging
- docs/memory-stale-context-fix.md - Comprehensive documentation

## Related Issues
Fixes system prompt flattening causing meta-commentary responses from Ollama models.