
Automatic Memory System - Persistent Knowledge Management with MCP Memory


Table of Contents

  1. Overview
  2. Implementation Status
  3. Project-Specific Memory Storage
  4. How Auto-Memory Works
  5. What Gets Stored
  6. Memory Graph Structure
  7. Automatic Triggers
  8. Manual Memory Management
  9. Memory-Driven Context Injection
  10. Configuration
  11. Best Practices
  12. Troubleshooting

Overview

LAZY_DEV integrates with MCP (Model Context Protocol) Memory to create a persistent, project-isolated knowledge graph that grows and evolves across sessions.

The Problem

Traditional AI-assisted development suffers from:

  • ❌ Lost context between sessions
  • ❌ Constant re-prompting of team conventions
  • ❌ Repeated explanations of architecture decisions
  • ❌ No memory of service ownership or dependencies
  • ❌ Global memory pollution across multiple projects

The LAZY_DEV Solution

Session 1: User mentions "service:payment owned_by:alice"
              ↓
           Hook detects durable fact
              ↓
           Suggests: Use mcp__memory__create_entities
              ↓
           You use MCP tools to store
              ↓
Session 2: Working on payment service (same project)
              ↓
           Hook injects memory skill guidance
              ↓
           You query: mcp__memory__search_nodes
              ↓
           Context retrieved: "Alice owns payment service"

Different Project:
              ↓
           Memory isolated per project
              ↓
           No cross-project pollution

Project-Specific Memory Storage

How It Works

LAZY_DEV stores memory in a project-specific location to avoid conflicts across different projects.

Storage Location:

<PROJECT_ROOT>/.claude/memory/memory.jsonl

Environment Variable:

MEMORY_FILE_PATH=.claude/memory/memory.jsonl

Key Features:

  1. Project Isolation

    • Each project has its own memory database
    • Switching projects automatically uses project-specific memory
    • No cross-project knowledge pollution
  2. Persistence

    • Memory persists across sessions within the same project
    • Knowledge graph grows over time
    • Survives restarts and editor sessions
  3. Portability

    • Git-compatible (can be tracked in version control)
    • Shared with team via repository
    • Easy to backup and restore
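The JSONL file itself is easy to inspect. The sketch below assumes the one-JSON-object-per-line layout (with a `type` field distinguishing entities from relations) that `@modelcontextprotocol/server-memory` uses in practice; treat the exact shape as an implementation detail that may change:

```python
import json
import os
import tempfile

# Illustrative records mirroring the examples in this guide.
records = [
    {"type": "entity", "name": "service:payment", "entityType": "service",
     "observations": ["Owned by Alice"]},
    {"type": "relation", "from": "service:payment", "to": "person:alice",
     "relationType": "owned_by"},
]

# One JSON object per line -- append-friendly and git-diffable.
path = os.path.join(tempfile.gettempdir(), "memory_demo.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Reading it back is plain line-by-line JSON parsing.
with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

print(len(loaded))  # 2
```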

MCP Configuration

File: .claude/.mcp.json

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": ".claude/memory/memory.jsonl"
      }
    }
  }
}

How It Works:

  1. Startup: Claude Code reads .mcp.json
  2. Environment: Sets MEMORY_FILE_PATH for the MCP server
  3. Storage: MCP Memory stores to .claude/memory/memory.jsonl
  4. Isolation: Each project's .claude/ directory has its own memory
  5. Persistence: Knowledge survives across sessions

Directory Structure

project-root/
├── .claude/
│   ├── memory/
│   │   └── memory.jsonl       # Project-specific knowledge graph
│   ├── .mcp.json              # MCP config with MEMORY_FILE_PATH
│   ├── agents/
│   ├── commands/
│   └── hooks/
└── ...

Implementation Status

✅ What's Implemented

MCP Memory Server Integration:

  • .claude/.mcp.json configured for @modelcontextprotocol/server-memory
  • MEMORY_FILE_PATH environment variable for project isolation
  • ✅ All MCP memory tools available: mcp__memory__*

Auto-Detection Hooks:

  • user_prompt_submit.py - Detects memory intents and entity mentions
  • memory_suggestions.py (PostToolUse) - Suggests memory storage after tools

Memory Graph Skill:

  • .claude/skills/memory-graph/ - Complete skill documentation
  • ✅ Operations guide, playbooks, examples

Project Isolation:

  • ✅ Per-project memory storage in .claude/memory/memory.jsonl
  • ✅ Environment variable configuration
  • ✅ No global memory pollution

What This Means:

  • Memory storage is available via MCP tools
  • Hooks detect durable facts and suggest actions
  • Storage and retrieval happen through explicit MCP tool calls (Claude Code makes those calls when prompted by the hook's guidance)
  • Memory is isolated per project

⚠️ What's Semi-Automated

Detection → Suggestion (Working):

# user_prompt_submit.py detects:
"service:payment owned_by:alice"Injects Memory Graph skill block with guidanceClaude Code sees: "Use mcp__memory__create_entities"

Manual Execution (Required):

# You (or Claude Code) must then actually call:
mcp__memory__create_entities({
    "entities": [{
        "name": "service:payment",
        "entityType": "service",
        "observations": ["Owned by Alice"]
    }]
})

❌ What's NOT Fully Automatic (Yet)

Automatic Storage with Countdown:

  • ❌ No 5-second countdown auto-store (this remains a planned feature)
  • ❌ No "auto-store unless cancelled" feature
  • ⚠️ This would require additional hook logic

Automatic Context Injection:

  • ❌ No automatic retrieval based on keywords
  • ❌ No auto-injection of related entities into prompts
  • ✅ But: Skill guidance helps Claude Code query memory when relevant

Why It Still Works:

  • Hook detects → Suggests → Claude Code invokes MCP tools
  • It's AI-assisted memory rather than fully automatic
  • Effective in practice, just requires Claude Code to execute

Summary: How It Actually Works

Current State:

You: "service:payment owned_by:alice"
     ↓
Hook: [Detects entities, injects skill guidance]
     ↓
Claude Code: [Sees guidance, decides to store]
     ↓
Claude Code: mcp__memory__create_entities(...)
     ↓
MCP Memory: [Stores to .claude/memory/memory.jsonl]
     ✅ Persisted (project-isolated)!

Next Session (same project):
You: "Update payment service"
     ↓
Hook: [Detects entity mention, injects skill guidance]
     ↓
Claude Code: mcp__memory__search_nodes("payment")
     ↓
MCP Memory: [Returns: owned by Alice, uses Stripe API]
     ↓
Claude Code: [Uses context in implementation]

It's Semi-Automatic:

  • ✅ Detection is automatic (hooks)
  • ✅ Guidance is automatic (skill injection)
  • ⚠️ Storage/retrieval requires Claude Code to invoke MCP tools
  • ✅ Works well in practice (Claude Code is smart about using tools)
  • ✅ Project-isolated (no cross-project pollution)

How Auto-Memory Works

The Three-Phase System (Current Implementation)

Phase 1: Detection (UserPromptSubmit Hook)

Trigger: Every user prompt

Actual Detection Logic:

# From .claude/hooks/user_prompt_submit.py

# Hard triggers for explicit memory intents
hard_triggers = [
    "save to memory",
    "add to memory",
    "memory graph",
    "knowledge graph",
    "persist this",
    "remember this",
    "create entity",
    "link entities",
    "search memory",
]

# Entity mention detection (regex)
ENTITY_MENTION_PATTERN = re.compile(
    r"\b(person|service|repo|dataset|api|team):[\w\-/\.]+",
    re.IGNORECASE
)
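To see what this pattern does and does not catch, here is a self-contained run of the same regex. Note that `owned_by:alice` is not reported, because `owned_by` is not one of the listed entity prefixes:

```python
import re

# Same pattern as in user_prompt_submit.py, copied here to be self-contained.
ENTITY_MENTION_PATTERN = re.compile(
    r"\b(person|service|repo|dataset|api|team):[\w\-/\.]+",
    re.IGNORECASE,
)

prompt = "service:payment owned_by:alice talks to api:stripe/v1"
mentions = [m.group(0) for m in ENTITY_MENTION_PATTERN.finditer(prompt)]
print(mentions)  # ['service:payment', 'api:stripe/v1']
```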

What Actually Happens:

User: "service:payment owned_by:alice"
   ↓
Hook detects entity mentions
   ↓
Injects Memory Graph skill block into prompt
   ↓
Claude Code sees guidance to use mcp__memory__* tools

Output (Injected into prompt):

[MEMORY GRAPH SKILL]

Detected potential memory entities:
- service:payment
- person:alice

Memory MCP Server available as `memory`.
Tools: mcp__memory__create_entities, mcp__memory__search_nodes, etc.

Suggested workflow:
1) mcp__memory__search_nodes to avoid duplicates
2) mcp__memory__create_entities (when missing)
3) mcp__memory__add_observations with small, dated facts
4) mcp__memory__create_relations to link entities

Phase 2: Assisted Storage (Not Fully Automatic)

Current Behavior:

Hook provides guidance
   ↓
Claude Code decides to store
   ↓
Claude Code calls: mcp__memory__create_entities
   ↓
MCP Memory server stores to .claude/memory/memory.jsonl
   ↓
✅ Memory persisted (project-isolated)

Example Tool Call:

# Claude Code invokes (based on hook guidance):
mcp__memory__create_entities({
    "entities": [
        {
            "name": "service:payment",
            "entityType": "service",
            "observations": ["Payment processing service", "Owned by Alice"]
        },
        {
            "name": "person:alice",
            "entityType": "person",
            "observations": ["Team lead for backend", "Owns payment service"]
        }
    ]
})

mcp__memory__create_relations({
    "relations": [
        {
            "from": "service:payment",
            "to": "person:alice",
            "relationType": "owned_by"
        }
    ]
})

Phase 3: Assisted Retrieval (Skill-Guided)

Current Behavior:

User: "Update payment service"
   ↓
Hook detects entity mention
   ↓
Injects Memory Graph skill guidance
   ↓
Claude Code queries: mcp__memory__search_nodes
   ↓
MCP returns relevant entities from .claude/memory/memory.jsonl
   ↓
Claude Code uses context in implementation

Example Retrieval:

# Claude Code invokes:
mcp__memory__search_nodes({
    "query": "payment"
})

# MCP returns:
{
    "nodes": [
        {
            "name": "service:payment",
            "entityType": "service",
            "observations": ["Owned by Alice", "Uses Stripe API"],
            "relations": [
                {"to": "person:alice", "type": "owned_by"},
                {"to": "service:stripe-api", "type": "depends_on"}
            ]
        }
    ]
}

# Claude Code then knows:
# - Payment service owned by Alice
# - Depends on Stripe API
# - Can coordinate with Alice if needed

What Gets Stored

Entity Types

LAZY_DEV auto-detects and stores these entity types:

1. Service Ownership

Pattern: "service:<name> owned_by:<owner>"

Examples:
- "service:payment owned_by:alice"
- "service:auth owned_by:bob"
- "service:api owned_by:team:backend"

2. Repository Information

Pattern: "repo:<org/name> endpoint:<url>"

Examples:
- "repo:org/api endpoint:https://api.example.com"
- "repo:org/frontend uses:react"
- "repo:org/backend uses:fastapi"

3. Architecture Patterns

Pattern: "project uses:<pattern>"

Examples:
- "project uses:repository-pattern"
- "project uses:event-driven-architecture"
- "api uses:rest"

4. Team Conventions

Pattern: "team:<name> prefers:<convention>"

Examples:
- "team:backend prefers:async-patterns"
- "team:frontend prefers:functional-components"
- "team:all requires:type-hints"

5. Dependency Relationships

Pattern: "<service> depends_on:<dependency>"

Examples:
- "payment depends_on:stripe-api"
- "checkout depends_on:payment"
- "billing depends_on:payment,checkout"

6. Configuration Values

Pattern: "config:<key> value:<value>"

Examples:
- "config:api-timeout value:30s"
- "config:max-retries value:3"
- "config:cache-ttl value:3600"
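These statement patterns could be split into an entity plus relations with a small parser. The helper below is a hypothetical sketch for illustration, not LAZY_DEV's actual implementation:

```python
def parse_statement(statement: str) -> dict:
    """Split a 'subject rel:obj rel:obj ...' statement into entity + relations.

    Hypothetical sketch: assumes the first token is the entity and every
    later 'key:value' token names a relation to a target.
    """
    tokens = statement.split()
    subject = tokens[0]
    relations = []
    for token in tokens[1:]:
        if ":" in token:
            rel_type, target = token.split(":", 1)
            relations.append(
                {"from": subject, "to": target, "relationType": rel_type}
            )
    return {"entity": subject, "relations": relations}

parsed = parse_statement("service:payment owned_by:alice depends_on:stripe-api")
print(parsed["relations"][0])
# {'from': 'service:payment', 'to': 'alice', 'relationType': 'owned_by'}
```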

Memory Graph Structure

Graph Model

MCP Memory uses a graph database model:

┌─────────────┐
│   ENTITY    │
├─────────────┤
│ name        │
│ entityType  │
│ observations│
└─────────────┘
       │
       │ RELATION
       ↓
┌─────────────┐
│   ENTITY    │
├─────────────┤
│ name        │
│ entityType  │
│ observations│
└─────────────┘

Example Graph

service:payment (SERVICE)
  observations:
    - "Handles payment processing with Stripe"
    - "Owned by Alice"
    - "Uses async patterns"

  relations:
    - owned_by → person:alice
    - depends_on → service:stripe-api
    - uses → pattern:async
    - relates_to → service:billing

person:alice (PERSON)
  observations:
    - "Backend team lead"
    - "Owns payment and billing services"

  relations:
    - owns → service:payment
    - owns → service:billing

pattern:async (PATTERN)
  observations:
    - "Async/await pattern for I/O operations"
    - "Preferred by backend team"

  relations:
    - used_by → service:payment

Automatic Triggers

UserPromptSubmit Hook Logic

Hook File: .claude/hooks/user_prompt_submit.py

Trigger Conditions:

import re

def should_auto_store(prompt: str) -> bool:
    """Determine if prompt contains durable facts."""

    # Triggers:
    triggers = [
        # Ownership statements
        r"owned by",
        r"maintained by",
        r"belongs to",

        # Architecture decisions
        r"we use",
        r"pattern:",
        r"architecture:",

        # Configuration
        r"config:",
        r"setting:",
        r"endpoint:",

        # Dependencies
        r"depends on",
        r"requires",
        r"integrates with",
    ]

    return any(re.search(trigger, prompt, re.IGNORECASE) for trigger in triggers)
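A condensed, runnable version of the check above shows which prompts trip the triggers:

```python
import re

# Condensed from the hook's trigger list above.
TRIGGERS = [
    r"owned by", r"maintained by", r"we use", r"pattern:",
    r"config:", r"endpoint:", r"depends on", r"requires",
]

def should_auto_store(prompt: str) -> bool:
    """True when the prompt matches any durable-fact trigger."""
    return any(re.search(t, prompt, re.IGNORECASE) for t in TRIGGERS)

print(should_auto_store("The billing service is owned by Alice"))  # True
print(should_auto_store("Fix the typo in README"))                 # False
```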

Planned Auto-Store Workflow (not yet implemented; see Implementation Status):

User Prompt
    ↓
UserPromptSubmit Hook
    ↓
Detect durable facts?
    ↓ Yes
Show suggestion + countdown
    ↓
5 seconds elapsed
    ↓
Store to MCP Memory (.claude/memory/memory.jsonl)
    ↓
Continue with command

Manual Memory Management

Store Memory Manually

# Basic syntax
/lazy memory-graph "<statement>"

# Examples
/lazy memory-graph "service:payment owned_by:alice"
/lazy memory-graph "repo:org/api endpoint:https://api.example.com"
/lazy memory-graph "team:backend prefers:async-patterns"

Complex Statements:

# Multiple relations
/lazy memory-graph "service:payment owned_by:alice depends_on:stripe-api uses:async-pattern"

# With observations
/lazy memory-graph "service:payment owned_by:alice observation:'Handles all payment processing with Stripe API integration'"

Query Memory

# Check what's stored
/lazy memory-check

# Output:
✅ MCP Memory Connected

Entities: 15
Relations: 23

Recent entries:
  - service:payment → person:alice (owned_by)
  - service:payment → service:stripe-api (depends_on)
  - team:backend → pattern:async (prefers)
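The counts above can be reproduced straight from the JSONL file. This sketch assumes the one-object-per-line format with a `type` field used by `server-memory` (an implementation detail, so treat it as illustrative):

```python
import json
import os
import tempfile

def count_memory(path: str) -> tuple:
    """Count entity and relation records in a memory.jsonl file."""
    entities = relations = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            if record.get("type") == "entity":
                entities += 1
            elif record.get("type") == "relation":
                relations += 1
    return entities, relations

# Demo with a tiny synthetic file; in a real project, pass
# ".claude/memory/memory.jsonl" instead.
demo = os.path.join(tempfile.gettempdir(), "memory_count_demo.jsonl")
with open(demo, "w", encoding="utf-8") as f:
    f.write(json.dumps({"type": "entity", "name": "service:payment"}) + "\n")
    f.write(json.dumps({"type": "relation", "from": "service:payment",
                        "to": "person:alice", "relationType": "owned_by"}) + "\n")

print(count_memory(demo))  # (1, 1)
```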

Search Memory:

# Via MCP tools (suggested during prompts)
# The hook recommends memory queries when relevant keywords are detected

# Manual query via skill
Use the mcp-memory-router skill to search the graph

Update Memory

# Add new observation to existing entity
/lazy memory-graph "service:payment observation:'Now supports PayPal integration'"

# Add new relation
/lazy memory-graph "service:payment depends_on:paypal-api"

Delete Memory

Currently: No dedicated /lazy delete command, although the MCP server does expose delete tools (mcp__memory__delete_entities, mcp__memory__delete_observations, mcp__memory__delete_relations) that can be invoked directly.

Workaround: Update observations to mark entities as deprecated

/lazy memory-graph "service:legacy observation:'DEPRECATED - No longer in use'"

Memory-Driven Context Injection

How Context Injection Works (Planned Design)

Note: As described under Implementation Status, automatic injection is not yet implemented; today Claude Code performs these steps itself via MCP tool calls.

Step 1: Keyword Detection

# User: "Update payment service"
prompt_keywords = extract_keywords(prompt)
# → ["payment", "service", "update"]

Step 2: Graph Query

# Query MCP Memory for related entities
results = mcp_memory_search(keywords=["payment", "service"])

# Results:
# - service:payment (direct match)
# - person:alice (related via owned_by)
# - service:stripe-api (related via depends_on)
# - pattern:async (related via uses)

Step 3: Context Building

context = build_context(results)

# Context:
"""
[MEMORY CONTEXT]
Service: payment
- Owner: Alice (team:backend)
- Dependencies: stripe-api, database
- Patterns: async, repository-pattern
- Endpoint: https://api.payment.example.com
- Recent changes: Added PayPal integration (2024-10-15)
"""

Step 4: Prompt Enhancement

enhanced_prompt = f"{original_prompt}\n\n{context}"

# Final Prompt:
"""
Update payment service

[MEMORY CONTEXT]
Service: payment
- Owner: Alice (team:backend)
- Dependencies: stripe-api, database
- Patterns: async, repository-pattern
...
"""

Configuration

MCP Memory Server Setup

Configuration File: .claude/.mcp.json

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": ".claude/memory/memory.jsonl"
      }
    }
  }
}

Key Points:

  • MEMORY_FILE_PATH points to project-specific location
  • Each project has its own .claude/memory/ directory
  • Memory is automatically isolated per project

Install Node.js (Required):

# Check installation
node --version  # Should be v18+

# Install if missing
# Windows: Download from nodejs.org
# macOS: brew install node
# Linux: sudo apt install nodejs npm

Test MCP Server:

# Manual test
npx -y @modelcontextprotocol/server-memory

# Should start without errors

Environment Variables

# Optional: Disable auto-memory
export LAZYDEV_DISABLE_MEMORY_SKILL=1

# Optional: Project-specific memory path
export MEMORY_FILE_PATH=.claude/memory/memory.jsonl

# Optional: Adjust auto-store countdown (planned feature; not yet implemented)
export LAZYDEV_MEMORY_COUNTDOWN=10  # seconds (default: 5)

Skills Configuration

Memory Router Skill: .claude/skills/mcp-memory-router.md

Controls how memory is queried and injected.

Key Settings:

  • search_depth: How many relation hops to traverse (default: 2)
  • max_results: Maximum entities to return (default: 10)
  • relevance_threshold: Minimum relevance score (default: 0.5)
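To make `search_depth` concrete, here is a sketch of a depth-limited breadth-first traversal over a toy adjacency map; the skill's real query logic may differ:

```python
from collections import deque

# Toy adjacency map; the real graph lives in MCP Memory.
EDGES = {
    "service:payment": ["person:alice", "service:stripe-api"],
    "person:alice": ["service:billing"],
    "service:billing": ["service:invoicing"],
}

def related(start: str, search_depth: int = 2) -> set:
    """Collect entities within `search_depth` relation hops of `start`."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == search_depth:
            continue  # do not expand past the hop limit
        for neighbor in EDGES.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return seen - {start}

print(sorted(related("service:payment", search_depth=2)))
# ['person:alice', 'service:billing', 'service:stripe-api']
```

With the default of 2 hops, `service:invoicing` (3 hops away) is excluded; raising `search_depth` to 3 would pull it in.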

Best Practices

1. Be Specific with Entity Names

Good:

/lazy memory-graph "service:payment-api owned_by:alice"
/lazy memory-graph "repo:org/payment-service endpoint:https://api.payment.com"

Bad:

/lazy memory-graph "api owned_by:alice"  # Too vague
/lazy memory-graph "service endpoint:url"  # Not descriptive

2. Use Consistent Naming Conventions

Good:

service:payment-api     # kebab-case
service:billing-api
service:user-auth

Bad:

service:PaymentAPI      # Mixed case
service:billing_api     # Mixed conventions
service:UserAuth

3. Add Rich Observations

Good:

/lazy memory-graph "service:payment observation:'Handles payment processing with Stripe and PayPal. Supports subscriptions and one-time payments. Rate limited at 1000 req/min.'"

Bad:

/lazy memory-graph "service:payment observation:'payment stuff'"

4. Model Dependencies Explicitly

Good:

/lazy memory-graph "service:checkout depends_on:payment,inventory,shipping"

Better:

/lazy memory-graph "service:checkout depends_on:payment"
/lazy memory-graph "service:checkout depends_on:inventory"
/lazy memory-graph "service:checkout depends_on:shipping"

5. Update Memory as Project Evolves

# When ownership changes
/lazy memory-graph "service:payment owned_by:bob"  # Updates relation

# When architecture changes
/lazy memory-graph "service:payment uses:event-sourcing"  # Adds pattern

# When deprecating
/lazy memory-graph "service:legacy observation:'DEPRECATED - Replaced by service:payment-v2'"

Troubleshooting

Memory Not Auto-Storing

Symptoms: No auto-store suggestions appear

Checks:

# 1. Verify MCP server running
npx -y @modelcontextprotocol/server-memory

# 2. Check MCP configuration
cat .claude/.mcp.json

# 3. Verify memory directory exists
ls -la .claude/memory/

# 4. Verify hook exists and is executable
ls -la .claude/hooks/user_prompt_submit.py
chmod +x .claude/hooks/user_prompt_submit.py

# 5. Check if disabled
echo $LAZYDEV_DISABLE_MEMORY_SKILL
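The configuration checks above can also be scripted. This is an illustrative validator for the fields this guide relies on, not an official tool:

```python
import json

def validate_mcp_config(raw: str) -> list:
    """Return a list of problems found in a .mcp.json document."""
    try:
        config = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    memory = config.get("mcpServers", {}).get("memory")
    if memory is None:
        problems.append("no 'memory' entry under mcpServers")
    elif not memory.get("env", {}).get("MEMORY_FILE_PATH"):
        problems.append("MEMORY_FILE_PATH not set for the memory server")
    return problems

good = ('{"mcpServers": {"memory": {"env": '
        '{"MEMORY_FILE_PATH": ".claude/memory/memory.jsonl"}}}}')
print(validate_mcp_config(good))  # []
```

Run it against the real file with `validate_mcp_config(open(".claude/.mcp.json").read())`.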

Memory Not Injecting Context

Symptoms: Stored memories not appearing in prompts

Checks:

# 1. Verify memory stored
/lazy memory-check

# 2. Check search keywords
# Ensure prompt contains relevant keywords

# 3. Test manual query
# Use mcp-memory-router skill explicitly

# 4. Check search_depth setting
# May need to increase relation hops

MCP Server Connection Failed

Symptoms: Error: "MCP Memory server not responding"

Solutions:

# 1. Check Node.js version
node --version  # Need v18+

# 2. Reinstall MCP memory server
npm install -g @modelcontextprotocol/server-memory

# 3. Test direct connection
npx -y @modelcontextprotocol/server-memory

# 4. Check firewall/antivirus
# May block local server connections

# 5. Verify memory file path
# Check MEMORY_FILE_PATH environment variable
echo $MEMORY_FILE_PATH

# 6. Check memory directory permissions
chmod -R 755 .claude/memory/

Memory Growing Too Large

Symptoms: Slow queries, context overload

Solutions:

# 1. Audit stored entities
/lazy memory-check

# 2. Mark deprecated entities
/lazy memory-graph "service:old observation:'DEPRECATED'"

# 3. Reduce search_depth
# Edit .claude/skills/mcp-memory-router.md

# 4. Clear and rebuild (last resort)
# Backup, delete .claude/memory/memory.jsonl, rebuild

Project-Specific Memory Issues

Symptoms: Memory from other projects appearing, or memories not accessible in current project

Checks:

# 1. Verify correct .mcp.json location
cat .claude/.mcp.json

# 2. Check MEMORY_FILE_PATH in .mcp.json
# Should point to .claude/memory/memory.jsonl

# 3. Verify memory file location
ls -la .claude/memory/memory.jsonl

# 4. Ensure Claude Code is using correct .claude directory
# Each project should have its own .claude/ folder

Advanced Usage

Multi-Project Memory (Isolated)

Scenario: Work on multiple projects with separate memories

How It Works:

# Project A
# .claude/.mcp.json → MEMORY_FILE_PATH=.claude/memory/memory.jsonl
# Memory stored in: /ProjectA/.claude/memory/memory.jsonl

/lazy memory-graph "service:payment owned_by:alice"

# Switch to Project B
# .claude/.mcp.json → MEMORY_FILE_PATH=.claude/memory/memory.jsonl
# Memory stored in: /ProjectB/.claude/memory/memory.jsonl

/lazy memory-graph "service:payment owned_by:bob"

# Each project has isolated memory - no conflicts!

Team Shared Memory

Scenario: Team needs shared knowledge graph

Solution: Git-Tracked Memory

# Commit memory to version control
git add .claude/memory/memory.jsonl
git commit -m "docs: update team memory snapshot"

# Other team members automatically get memory
# via git clone or pull

# Memory persists across the team

Benefits:

  • Knowledge graph tracked in git
  • Team can collaborate on shared memory
  • Historical version tracking
  • Easy onboarding of new team members

Memory-Driven Code Generation

Scenario: Use memory to guide implementation

Example:

# User: "Create new payment endpoint"

# Auto-injected context from memory:
# - service:payment uses:fastapi
# - service:payment requires:authentication
# - team:backend prefers:async-patterns
# - config:api-timeout value:30s

# Generated code automatically follows stored conventions
# (require_auth, PaymentRequest, etc. are illustrative names):
@router.post("/payment", dependencies=[Depends(require_auth)])
async def create_payment(request: PaymentRequest, timeout: int = 30):
    """Create payment (async per team convention)."""
    # Implementation follows team patterns

Summary

Memory System Workflow

  1. Auto-Detection → UserPromptSubmit hook detects durable facts
  2. Assisted Storage → Hook suggests; Claude Code stores via MCP tools
  3. Project-Isolated → Memory stored in .claude/memory/memory.jsonl
  4. Assisted Retrieval → Claude Code queries memory when entities are mentioned
  5. Context Use → Retrieved facts inform the implementation
  6. No Re-Prompting → Knowledge persists across sessions
  7. No Cross-Project Pollution → Each project has isolated memory

Key Benefits

  • Persistent Context - Never lose important facts
  • Project-Isolated - Multiple projects with separate knowledge graphs
  • Low Overhead - Automatic detection, one suggested tool call to store
  • Smart Retrieval - Only relevant context is pulled into the conversation
  • Team Shared - Knowledge shared via git
  • Growing Intelligence - System gets smarter over time

LAZY_DEV Memory - Knowledge that persists, context that grows, projects that stay isolated.