
Fix audit logging #27

Merged
vishalveerareddy123 merged 4 commits into Fast-Editor:main from MichaelAnders:fix/intelligent_audit_logging
Jan 31, 2026

Conversation


@MichaelAnders (Contributor) commented Jan 27, 2026

Summary

This PR addresses issues with the audit logging system and fixes a critical bug in Ollama system prompt handling that was causing models to produce meta-commentary instead of executing commands.

Changes

1. Fix Audit Logging Infrastructure (bd9c1a6)

  • Add intelligent audit logging with deduplication
  • Implement oversized error logging stream
  • Add audit log reader and compaction scripts
  • Add DNS logging for destination tracking
  • 11 files changed, +2991 lines

2. Integrate Audit Logging with Orchestrator (e69d6f4)

  • Add correlation IDs to link request/response pairs
  • Log all LLM requests before invokeModel()
  • Log all LLM responses with token usage and latency
  • Handle streaming, success, and error cases
  • Zero overhead when disabled
  • 2 files changed, +109 lines

3. Fix Ollama System Prompt Handling (8b2b7cf) 🐛

Critical Bug Fix: System prompts were being flattened into user message content for Ollama, causing models to see instructions as conversational context instead of system directives.

Impact: Models produced meta-commentary like "It seems like you're ready to start a conversation..." instead of executing commands like "ls" or "pwd".

Fix: Keep system prompt separate as first message with role: "system", consistent with all other providers (OpenRouter, Azure OpenAI, llama.cpp, LM Studio).
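In sketch form, the fix amounts to emitting the system prompt as its own leading message rather than folding it into the user's content. The function name and message shapes below are illustrative, not the project's actual code:

```javascript
// Sketch of the fix: keep the system prompt as a separate first message
// with role "system", matching the behavior of the other providers.
function buildOllamaMessages(systemPrompt, history) {
  const messages = [];
  if (systemPrompt) {
    // Before the fix, systemPrompt was flattened into the first user
    // message, so Ollama treated it as conversation, not as directives.
    messages.push({ role: "system", content: systemPrompt });
  }
  return messages.concat(history);
}
```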

Diagnostic Logging Added:

  • MEMORY_DEBUG: Track memory retrieval, formatting, and injection
  • SESSION_DEBUG: Track session reuse vs creation, age, and history size
  • CONTEXT_FLOW: Track system prompt transformations through the pipeline

5 files changed, +344/-28 lines

Testing

  • ✅ Unit tests pass: npm run test:unit
  • ✅ Claude Code connects to Lynkr correctly
  • ✅ Ollama system prompts now sent as separate messages
  • ✅ Audit logging captures all request/response pairs
  • ✅ Correlation IDs link requests to responses

Files Changed

Total: 18 files, ~3,444 additions, ~55 deletions

Key files:

  • src/orchestrator/index.js - Audit integration + Ollama system prompt fix
  • src/clients/databricks.js - Ollama system prompt handling
  • src/logger/audit-logger.js - New audit logging infrastructure
  • src/logger/deduplicator.js - Deduplication for audit logs
  • src/memory/retriever.js - Memory debug logging
  • src/sessions/store.js - Session debug logging
  • docs/memory-stale-context-fix.md - Comprehensive documentation

Related Issues

Fixes system prompt flattening causing meta-commentary responses from Ollama models.

Problem: PR Fast-Editor#20 introduced new code but also unintentionally introduced inconsistent code.

Changes implemented:

1. Files cleaned up

2. Missing dns-logger.js added

3. New code added

Testing:

success: npm run test:unit

success: Claude connects to Lynkr correctly
@vishalveerareddy123
Collaborator

Thanks @MichaelAnders I will take a look at this one today

Integrates the audit logging infrastructure with the orchestrator's
runAgentLoop function to capture all LLM request/response pairs
for compliance and debugging purposes.

Changes:
- Add crypto module import for correlation ID generation
- Add getDestinationUrl() helper to resolve provider endpoints
- Instantiate audit logger at the start of runAgentLoop
- Log LLM requests before invokeModel() with correlation IDs
- Log LLM responses after invokeModel() with token usage and latency
- Handle streaming, success, and error response cases
- Add logs/ directory to .gitignore

The audit logger is a no-op when disabled, ensuring zero overhead
when not in use. All logs use structured JSON format with correlation
IDs to link request/response pairs.
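The no-op-when-disabled pattern described above might be sketched as follows; class and method names are illustrative assumptions, not the project's actual implementation:

```javascript
// Illustrative sketch: every log method checks an `enabled` flag first,
// so a disabled logger does no serialization and no I/O at all.
class AuditLogger {
  constructor({ enabled = false, write = () => {} } = {}) {
    this.enabled = enabled;
    this.write = write; // e.g. an append to the audit log file
  }
  logRequest(entry) {
    if (!this.enabled) return; // zero overhead when disabled
    this.write(JSON.stringify({ type: "llm_request", ...entry }));
  }
  logResponse(entry) {
    if (!this.enabled) return;
    this.write(JSON.stringify({ type: "llm_response", ...entry }));
  }
}
```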

Implements plan from PR Fast-Editor#27 for intelligent audit logging.
@MichaelAnders
Contributor Author

I have added the missing code, as my plans for the next three days changed this morning.

Testing:
success: npm run test:unit
success: Claude connects to Lynkr correctly
success: Logging works. I can see logs being created after setting:
export LLM_AUDIT_ENABLED=true
export LLM_AUDIT_LOG_FILE="./logs/llm-audit.log"

@MichaelAnders
Contributor Author

MichaelAnders commented Jan 28, 2026

> Thanks @MichaelAnders I will take a look at this one today

I have another PR coming up based on these changes (Ollama specific and "correct" this time).

I'll create the new PR once you've accepted this one; I don't want to mix the two, or it'll get too big and complicated.

IGNORE 9ae1d30 - it is a local branch, WIP

MichaelAnders added a commit to MichaelAnders/Lynkr that referenced this pull request Jan 28, 2026
This PR builds on PR Fast-Editor#27's audit logging infrastructure to fix critical
issues when using Ollama as the LLM provider.

## Problems Fixed

### 1. Ollama JSON String Tool Call Parsing
**Problem**: Ollama returns tool calls as JSON strings in the content field
instead of structured tool_calls arrays, causing malformed displays and
execution failures.

**Example**: `{"name": "Bash", "parameters": {"command": "ls"}}`
displayed as raw JSON instead of executing.

**Solution**: Enhanced ollama-utils.js to detect and parse JSON strings,
converting them to proper Anthropic tool_use format.
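A minimal sketch of that parsing step follows; the helper name matches one mentioned in this commit message, but its body here is a guess at the approach, not the project's actual code:

```javascript
// Illustrative sketch: detect a tool call that Ollama returned as a JSON
// string in the content field and convert it to Anthropic tool_use shape.
function parseToolCallFromJsonString(content) {
  let parsed;
  try {
    parsed = JSON.parse(content.trim());
  } catch {
    return null; // plain text, not a JSON-encoded tool call
  }
  if (!parsed || typeof parsed.name !== "string") return null;
  return {
    type: "tool_use",
    name: parsed.name,
    input: parsed.parameters || {},
  };
}
```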

### 2. File Path Extraction Endless Loop
**Problem**: Commands like "ls" created an endless loop. The CLI sends
two parallel requests: the main command AND a file path extraction request
expecting XML format. Ollama kept returning Grep tool calls instead of the
requested `<filepaths></filepaths>` XML, creating an infinite loop.

**Solution**: Added detection for file path extraction requests and strip
tools from the payload, forcing Ollama to return plain XML text.

### 3. Enhanced Audit Logging
Builds on PR Fast-Editor#27's fixes for:
- **Missing Response Content**: Ollama's `message.message` format now captured
- **Model Name Mismatch**: Both requested and actual model logged correctly
- **User Messages Buried**: Actual user input extracted from wrapped content

**New enhancements**:
- Query-response pairing with correlation IDs for easy audit trail
- Application log mirroring to audit file for debugging (configurable levels)
- Oversized error logging integration

## Changes

**Core Fixes**:
- src/clients/ollama-utils.js: Add JSON string detection and parsing
  - New: isToolCallJsonString() helper
  - New: parseToolCallFromJsonString() helper
  - Enhanced: buildAnthropicResponseFromOllama() for centralized conversion

- src/orchestrator/index.js: Add file path extraction detection
  - Detects XML format requests from CLI
  - Strips tools to force text-only responses
  - Prevents endless tool call loops

**Audit Logging Enhancements**:
- src/logger/audit-logger.js: Add query-response pairing
  - New: logQueryResponsePair() method
  - Correlates requests with responses using correlation IDs

- src/logger/index.js: Add application log mirroring
  - Mirrors app logs to audit file (configurable: none/error/warn/info/debug)
  - Provides comprehensive debugging visibility

- src/config/index.js: Add appLogLevel configuration
- .env.example: Document all audit logging options (+51 lines)
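The level-filtered mirroring described above might reduce to a comparison like the following; the level table and function name are illustrative assumptions:

```javascript
// Illustrative sketch: decide whether an application log entry should be
// mirrored into the audit file, given the configured
// LLM_AUDIT_APP_LOG_LEVEL (none|error|warn|info|debug).
const LEVELS = { none: -1, error: 0, warn: 1, info: 2, debug: 3 };

function shouldMirror(configuredLevel, entryLevel) {
  const max = LEVELS[configuredLevel] ?? -1; // unknown config -> mirror nothing
  const lvl = LEVELS[entryLevel];
  return lvl !== undefined && lvl <= max;
}
```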

**Testing**:
- test/ollama-json-string-parsing.test.js: 10 comprehensive tests (all passing)

## Configuration

New environment variables:
```bash
# Mirror application logs to audit file for debugging
LLM_AUDIT_APP_LOG_LEVEL=info  # none|error|warn|info|debug
```

## Verification

1. Start Lynkr: `~/start_lynkr`
2. Test ls command: Works without endless loop
3. Test tool calls: Properly parsed and executed
4. Check audit logs: `tail -50 logs/llm-audit.log | jq`

Builds on PR Fast-Editor#27 audit logging infrastructure.
@vishalveerareddy123
Collaborator

Thank you @MichaelAnders
I will definitely look into this
I am traveling overseas this week, so I am a bit caught up. I will have a look.

After adding more comprehensive logging to track context flow through
the request pipeline, discovered a bug in Ollama system prompt handling.

BUG FIX: Ollama System Prompt Handling
- Problem: System prompts were being flattened into the first user message
  for Ollama
- Impact: Models received instructions as conversational context instead of
  system directives, causing meta-commentary responses instead of tool execution
- Fix: Keep system prompt separate for Ollama (same pattern as other providers)
- Files: src/orchestrator/index.js, src/clients/databricks.js

Diagnostic Logging Added:
- MEMORY_DEBUG: Track memory retrieval, formatting, and injection
  (src/memory/retriever.js)
- SESSION_DEBUG: Track session reuse vs creation, age, and history size
  (src/sessions/store.js)
- CONTEXT_FLOW: Track system prompt transformations through the pipeline
  (src/orchestrator/index.js)

The diagnostic logging revealed how system prompts flow through the request
pipeline and helped identify where the flattening was occurring.
MichaelAnders added a commit to MichaelAnders/Lynkr that referenced this pull request Jan 30, 2026
@vishalveerareddy123
Collaborator

Thank you Michael for all your contributions to the Lynkr project

@vishalveerareddy123 vishalveerareddy123 merged commit 3ca0f59 into Fast-Editor:main Jan 31, 2026
6 checks passed
