This task list breaks down the implementation steps outlined in PLANNING.md.
- Module Setup:
  - Create directory `ipc/src/internal_messages/`.
  - Create file `ipc/src/internal_messages/mod.rs`.
  - Create file `ipc/src/internal_messages/types.rs`.
  - Declare the `internal_messages` module in `ipc/src/lib.rs` (`pub mod internal_messages;`).
- Define Structs & Enums (HAPPE <-> IDA):
  - Define `MemoryItem`, `ConversationTurn` in `internal_messages/types.rs`.
  - Define the `InternalMessage` enum (`GetMemoriesRequest`, `GetMemoriesResponse`, `StoreTurnRequest`) in `internal_messages/mod.rs`.
  - Add `#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]` to all.
  - Re-export types from `types.rs` in `mod.rs`.
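The message definitions above might look like the following sketch. Field names beyond those the tasks name (`key`, `content`, etc.) are illustrative assumptions, and the serde `Serialize`/`Deserialize` derives called for in the task are omitted so the snippet stands alone without external crates:

```rust
// Sketch of ipc/src/internal_messages. Per the task list, each type would
// also carry #[derive(Serialize, Deserialize, ...)] from serde's `derive`
// feature; only the std derives are shown here.

#[derive(Debug, Clone, PartialEq)]
pub struct MemoryItem {
    pub key: String,     // assumed field
    pub content: String, // assumed field
}

#[derive(Debug, Clone, PartialEq)]
pub struct ConversationTurn {
    pub user_query: String,         // assumed field
    pub assistant_response: String, // assumed field
}

#[derive(Debug, Clone, PartialEq)]
pub enum InternalMessage {
    GetMemoriesRequest { query: String },
    GetMemoriesResponse { memories: Vec<MemoryItem> },
    StoreTurnRequest { turn: ConversationTurn },
}
```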
- Define Structs & Enums (Client <-> HAPPE):
  - Create `ipc/src/happe_request/mod.rs` and `types.rs`.
  - Define the `HappeQueryRequest { query: String }` struct.
  - Define the `HappeQueryResponse { response: String, error: Option<String> }` struct.
  - Add `#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]` to all.
  - Add the `happe_request` module to `ipc/src/lib.rs`.
- Dependencies:
  - Ensure `serde` (with the `derive` feature) is listed under `[dependencies]` in `ipc/Cargo.toml`.
  - Ensure `gemini_core` is added if shared types are used.
- Verification:
  - Run `cargo check -p gemini-ipc` to ensure the crate compiles.
- Milestone: All IPC message definitions complete and build successfully.
- Binary Skeleton & Config: (Basic setup)
- IPC Server (`ida/src/ipc_server.rs`): (Listens for HAPPE)
- Connection Handler (`ida/src/ipc_server.rs`): (Handles `InternalMessage`)
- Background Storage Task (`ida/src/storage.rs`): (Placeholder logic)
- MCP Client Placeholder (`ida/src/memory_mcp_client.rs`): (Placeholder functions)
- Dependencies (`ida/Cargo.toml`): (Initial dependencies)
- Verification: (Basic build)
- Milestone: `ida-daemon` builds and runs, successfully accepting IPC connections from HAPPE and processing known message types (using placeholders for external calls).
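The placeholder connection handler could dispatch on the decoded message roughly as below. The enum here is a simplified stand-in (its payloads are plain strings rather than the real types), and the stub behaviour (empty memory list, fire-and-forget storage) is an assumption for the skeleton phase:

```rust
// Illustrative placeholder dispatch for ida/src/ipc_server.rs.
// The variant names come from the task list; everything else is a stand-in.

#[derive(Debug, Clone, PartialEq)]
pub enum InternalMessage {
    GetMemoriesRequest { query: String },
    GetMemoriesResponse { memories: Vec<String> },
    StoreTurnRequest { turn: String },
}

/// Handle one decoded message; `None` means no reply is sent
/// (StoreTurnRequest is treated as a fire-and-forget notification).
pub fn handle_message(msg: InternalMessage) -> Option<InternalMessage> {
    match msg {
        InternalMessage::GetMemoriesRequest { query } => {
            // Placeholder: real retrieval will query the memory backend.
            eprintln!("retrieving memories for: {query}");
            Some(InternalMessage::GetMemoriesResponse { memories: Vec::new() })
        }
        InternalMessage::StoreTurnRequest { turn } => {
            // Placeholder: real logic hands the turn to the storage task.
            eprintln!("queued turn for storage: {turn}");
            None
        }
        // The server side never receives responses; ignore them.
        InternalMessage::GetMemoriesResponse { .. } => None,
    }
}
```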
- Binary Skeleton & Config: (Basic setup)
- IDA IPC Client (`happe/src/ida_client.rs`): (Connects to IDA)
- Core Interaction Loop (`happe/src/coordinator.rs`): (Placeholder loop calling IDA client and placeholder LLM/MCP)
- LLM Client Placeholder (`happe/src/llm_client.rs`):
- MCP Client Placeholder (`happe/src/mcp_client.rs`):
- Dependencies (`happe/Cargo.toml`): (Initial dependencies)
- Verification: (Basic build)
- Milestone: `happe-daemon` builds and runs, successfully connecting to IDA, sending requests, receiving responses, and sending async notifications (using placeholders for external calls).
- Build Workspace:
- Direct IDA Test (Optional):
- Concurrent Daemon Test:
- Interaction Flow Test (via stdin): (Verify logs for HAPPE <-> IDA placeholder flow)
- Repeat:
- Milestone: Successful execution of the basic HAPPE <-> IDA communication flow verified via logs using placeholder logic.
- Core Interaction Implementation (`happe/src/coordinator.rs`):
  - Define the core `process_query(config: &AppConfig, mcp_provider: &mcp::host::McpHost, gemini_client: &core::client::GeminiClient, query: String) -> Result<String, Error>` function.
    - Adapt structure from `cli/src/app.rs::process_prompt`.
  - Implement real prompt construction (using `gemini_core::prompt`, memories from IDA, MCP info).
    - Reference `cli/src/app.rs` lines ~107-121, ~115 for structure.
    - Use `mcp::gemini::build_mcp_system_prompt`.
  - Integrate the real `llm_client::generate_response` call (using the implementation from below).
  - Implement logic to handle LLM function calls.
    - Use `mcp_provider` to get capabilities (`core::rpc_types::ServerCapabilities`).
    - Use `mcp::gemini::generate_gemini_function_declarations` to create the `core::types::Tool` list.
    - Use `llm_client` to parse function calls from the response (`mcp::gemini::FunctionCall`).
    - Call `happe/src/mcp_client.rs::execute_tool`.
    - Reference `cli/src/app.rs` lines ~81-112, ~211, ~232-247.
  - Call `IdaClient::store_turn_async` with `ipc::internal_messages::ConversationTurn`.
    - Reference `cli/src/app.rs` lines ~218-227, ~250-270.
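The control flow those bullets describe can be sketched with the daemon clients replaced by plain function parameters, so the shape is readable in isolation. Every type and helper below is a stand-in, not the crates' actual API:

```rust
// High-level shape of coordinator::process_query. The real version takes
// AppConfig, McpHost, GeminiClient, and an IdaClient; here each dependency
// is an injected closure so the flow compiles with no external crates.

pub struct Turn {
    pub query: String,
    pub response: String,
}

pub fn process_query(
    query: &str,
    get_memories: impl Fn(&str) -> Vec<String>,         // stands in for IdaClient::get_memories
    generate_response: impl Fn(&str) -> String,         // stands in for llm_client::generate_response
    parse_function_calls: impl Fn(&str) -> Vec<String>, // stands in for function-call parsing
    execute_tool: impl Fn(&str) -> String,              // stands in for mcp_client::execute_tool
    store_turn_async: impl Fn(Turn),                    // stands in for IdaClient::store_turn_async
) -> String {
    // 1. Prompt construction: memories from IDA plus the user query.
    let memories = get_memories(query);
    let prompt = format!("memories: {:?}\nuser: {}", memories, query);

    // 2. First LLM call.
    let mut response = generate_response(&prompt);

    // 3. If the LLM requested tools, execute them via MCP and ask again
    //    with the tool results appended.
    let calls = parse_function_calls(&response);
    if !calls.is_empty() {
        let results: Vec<String> = calls.iter().map(|c| execute_tool(c)).collect();
        response = generate_response(&format!("{prompt}\ntool results: {:?}", results));
    }

    // 4. Fire-and-forget: hand the finished turn to IDA for storage.
    store_turn_async(Turn { query: query.to_string(), response: response.clone() });
    response
}
```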
- LLM Client Implementation (`happe/src/llm_client.rs`):
  - Implement the API call logic (using `reqwest`, `core::client::GeminiClient`, `core::types::GenerateContentRequest`).
    - Adapt from `gemini-core::client` & `cli/src/app.rs` lines ~189-211.
  - Implement response parsing (`extract_text_from_response`, `extract_function_calls_from_response` using `mcp::gemini::parse_function_calls`).
  - Define proper error types (consider using/extending `core::errors`).
- MCP Client Implementation (`happe/src/mcp_client.rs` - if needed):
  - Implement the `get_capabilities` function (likely a wrapper around `mcp::host::McpHost::get_all_capabilities`).
    - Reference `cli/src/app.rs::get_capabilities` (~line 1235).
  - Implement the `execute_tool` function (using `mcp::host::McpHost::execute_tool`).
    - Reference `cli/src/app.rs::execute_tool` (~line 1209).
  - Ensure `HAPPE` has an `McpProvider` instance (`mcp::host::McpHost`).
- HAPPE Input API & Servers:
  - Implement the IPC server (`happe/src/ipc_server.rs`) using `ipc::happe_request` types.
  - Implement the HTTP server (`happe/src/http_server.rs`) using `axum`/`warp`.
  - Update `happe-daemon.rs` to initialize `core::client::GeminiClient` and `mcp::host::McpHost` and pass them to handlers.
- Binary Implementation (`happe/src/bin/happe-daemon.rs`):
  - Implement real configuration loading (using `core::config::GeminiConfig` as a base, loading MCP config with `mcp::config::load_mcp_servers`).
    - Migrate from `cli/src/config.rs` & `cli/src/main.rs`.
  - Define comprehensive CLI arguments (`clap`).
- Dependencies Update (`happe/Cargo.toml`):
  - Add `@core`, `@mcp`, `@ipc`.
  - Add `reqwest`, `axum`/`warp`, `serde_json`.
  - Ensure features match usage.
- Verification (Full):
  - Run `cargo check --all-targets` / `cargo build --all-targets`.
- Milestone: `happe-daemon` fully functional: processes queries via IPC/HTTP, interacts with real LLM and MCP servers, and uses placeholder IDA for memory.
- Storage Logic (`ida/src/storage.rs`):
  - Replace placeholder logic in `handle_storage`.
  - Implement analysis of `ipc::internal_messages::ConversationTurn`.
  - Implement memory summarization/formatting.
    - Consider adapting `cli/src/history.rs::summarize_conversation`.
  - Implement interaction with the Memory Backend:
    - Option A (Direct Access): Use `gemini_memory::store::MemoryStore`.
      - Initialize `MemoryStore` in `ida-daemon.rs`.
      - Call `MemoryStore` methods (add, check duplicates, etc.).
      - Handle conversion between `gemini_memory::memory::Memory` and `ipc::internal_messages::MemoryItem`.
      - Replaces `cli/src/memory/mod.rs`, `cli/src/memory_broker.rs`.
      - Include async queue/workers if needed, migrating from `cli/src/memory/mod.rs`.
    - Option B (MCP Access): Use `@mcp::host::McpHost` as a client.
      - Initialize `McpHost` in `ida-daemon.rs`, configured for the Memory MCP server.
      - Define and call MCP methods for `store_memory`, `check_duplicates`, etc.
  - Implement duplicate-checking logic (either via `MemoryStore` or an MCP call).
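The Option A conversion step might be expressed as a `From` impl. Both structs below are stand-ins for the real `gemini_memory::memory::Memory` and `ipc::internal_messages::MemoryItem`, whose actual fields may differ:

```rust
// Illustrative conversion between the memory crate's record type and the
// IPC transfer type. All field names here are assumptions for the sketch.

pub struct Memory {
    pub key: String,
    pub value: String,
    pub timestamp: u64, // backend-only metadata
}

pub struct MemoryItem {
    pub key: String,
    pub content: String,
}

impl From<Memory> for MemoryItem {
    fn from(m: Memory) -> Self {
        // The IPC type carries only what HAPPE needs for prompting;
        // backend-only fields (timestamp, embeddings, ...) are dropped.
        MemoryItem { key: m.key, content: m.value }
    }
}
```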
- Retrieval Logic (`ida/src/memory_mcp_client.rs` or direct):
  - Replace the placeholder `retrieve_memories`.
  - Implement query logic against the Memory Backend:
    - Option A (Direct Access): Use `gemini_memory::store::MemoryStore::query_memories`.
      - Handle conversion from `gemini_memory::memory::Memory` to `ipc::internal_messages::MemoryItem`.
    - Option B (MCP Access): Call an MCP method for `query_memories`.
      - Replaces calls like `enhance_prompt` in `cli/src/app.rs`.
- Configuration (`ida/src/bin/ida-daemon.rs`):
  - Add config based on the chosen backend access (MemoryStore path/config OR Memory MCP server address).
  - Add config for async workers, etc.
    - Migrate from `cli/src/config.rs::AsyncMemoryConfigExt`.
- Dependencies (`ida/Cargo.toml`):
  - Add `@ipc`.
  - Option A: Add `@memory`.
  - Option B: Add `@mcp`.
  - Maybe add `@core` (for errors/config base).
- Verification:
  - Run `cargo check`/`cargo build`.
  - Unit tests for storage/retrieval logic.
- Milestone: `ida-daemon` fully functional: stores, summarizes, and retrieves memories based on real logic, interacting with the memory backend.
- Modify Main Logic (`cli/src/main.rs`, `cli/src/app.rs`):
  - Remove LLM/MCP/Memory/History logic.
  - Keep the input loop (`run_interactive_chat`, etc.).
  - Add an IPC client (`cli/src/happe_client.rs`?) to connect to the `happe-daemon` query socket.
  - Send the query via IPC, receive the response, and display it using `output.rs`.
- Update CLI Arguments (`cli/src/cli.rs::Args`):
  - Remove LLM/memory/etc. args. Add a HAPPE connection arg.
- Update Configuration (`cli/src/config.rs`):
  - Remove old config logic. Add HAPPE connection config.
- Remove Unused Code:
  - Gut/delete `history.rs`, `memory/`, `memory_broker.rs`.
  - Remove unused dependencies (`gemini-core`, `gemini-memory`, `@mcp`) from `cli/Cargo.toml`.
  - Clean up `app.rs`.
- Dependencies (`cli/Cargo.toml`):
  - Add `@ipc` (for `HappeQueryRequest`/`Response`).
  - Add the necessary IPC client library (`tokio`?).
- Verification:
  - Run `cargo check`/`cargo build`.
- Milestone: The `cli` executable successfully acts as a front-end to `happe-daemon` via IPC, with core logic removed.
- Setup: Run `ida-daemon`, `happe-daemon`, and any required MCP servers.
- CLI Test: Use the refactored `@cli` to interact with `happe-daemon`.
  - Verify queries are processed correctly.
  - Verify responses are displayed.
  - Verify memory retrieval influences responses over time (check `IDA` logs).
  - Verify turns are stored (check `IDA` logs).
- HTTP Test: Use `curl` or similar to send queries to `happe-daemon`'s HTTP endpoint.
  - Verify responses are correct.
  - Verify interaction affects memory via `IDA` logs.
- Tool Call Test: If MCP tools are implemented, trigger them via queries.
  - Verify `HAPPE` logs show function-call parsing and execution via MCP.
  - Verify tool results are sent back to the LLM and influence the final response.
- Milestone: Full system (CLI/HTTP -> HAPPE -> LLM/MCP & IDA -> Memory Backend) functions correctly for multiple interaction turns.
- Code Formatting: Run `cargo fmt` regularly across the workspace.
- Linting: Run `cargo clippy --all-targets --all-features` regularly and address warnings.
- Unit Tests: Add unit tests for specific functions with complex logic.
- Documentation: Add doc comments (`///`) for public functions, structs, and enums.
- Error Handling: Ensure errors are propagated or handled gracefully.
- Configuration: Refine configuration loading (e.g., use environment variables, config files).
- Logging: Improve logging messages for clarity and debugging.
- Version Control: Commit changes frequently with clear messages.
This phase implements the plan to integrate a "broker" LLM directly within IDA to refine memory retrieval results before sending them to HAPPE.
- Broker LLM Configuration Loading:
  - Verify `MemoryBrokerConfig` fields (`provider`, `api_key`, `model_name`, `base_url`) are loaded correctly into `IdaConfig`.
  - Add the loaded `IdaConfig` to the shared `DaemonState` struct.
  - Update `DaemonState` instantiation in `ida/src/bin/ida-daemon.rs`.
  - Ensure `handle_message` in `ida/src/ipc_server.rs` can access the `IdaConfig`.
- Ensure MCP Access Reuse in IDA:
  - Identify where `MemoryStore` is initialized in `ida/src/bin/ida-daemon.rs`.
  - Store the `Arc<dyn McpHostInterface + Send + Sync>` used for `MemoryStore` in the `DaemonState`.
  - Update the `DaemonState` definition and instantiation.
- Integrate Broker LLM Client into IDA:
  - Configuration (`core/src/config.rs`):
    - Modify the `MemoryBrokerConfig` struct (add `provider`, `base_url`, etc.).
    - Update the corresponding `IdaConfig` in `ida/src/config.rs` and its `From` impl.
    - Update default config generation (`install` or `tools`) for `[ida.memory_broker]`.
  - Implementation (IDA Crate):
    - Add dependencies (`reqwest`, `serde`, `serde_json`, `async-trait`) to `ida/Cargo.toml`.
    - Create `ida/src/llm_clients.rs`.
    - Define `trait LLMClient { async fn generate(&self, prompt: &str) -> Result<String>; }`.
    - Implement the trait for supported providers (e.g., `GeminiClient`, `OllamaClient`).
    - Create the factory function `create_llm_client(config: &CoreMemoryBrokerConfig) -> Result<Option<Arc<dyn LLMClient>>>`.
    - Add an `Option<Arc<dyn LLMClient>>` field to `DaemonState`.
    - Initialize the client in `ida/src/bin/ida-daemon.rs` using the factory.
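A synchronous sketch of the trait and factory could look like this. The task list specifies an async trait via the `async-trait` crate and a config-driven factory; here `generate` blocks and the factory takes a bare provider string so the sketch has no external dependencies, and both client structs are placeholders:

```rust
use std::sync::Arc;

// Sketch of ida/src/llm_clients.rs. The real trait is async (#[async_trait])
// and the real factory takes &CoreMemoryBrokerConfig; both are simplified.

pub trait LlmClient: Send + Sync {
    fn generate(&self, prompt: &str) -> Result<String, String>;
}

struct GeminiClient { model: String }
struct OllamaClient { base_url: String }

impl LlmClient for GeminiClient {
    fn generate(&self, prompt: &str) -> Result<String, String> {
        // Placeholder: the real impl issues a reqwest call to the Gemini API.
        Ok(format!("[{} would answer: {}]", self.model, prompt))
    }
}

impl LlmClient for OllamaClient {
    fn generate(&self, prompt: &str) -> Result<String, String> {
        // Placeholder: the real impl POSTs to {base_url} (Ollama HTTP API).
        Ok(format!("[ollama@{} would answer: {}]", self.base_url, prompt))
    }
}

/// Factory: `Ok(None)` means the broker is disabled (no provider configured).
pub fn create_llm_client(provider: &str) -> Result<Option<Arc<dyn LlmClient>>, String> {
    match provider {
        "gemini" => Ok(Some(Arc::new(GeminiClient { model: "gemini-pro".into() }))),
        "ollama" => Ok(Some(Arc::new(OllamaClient { base_url: "http://localhost:11434".into() }))),
        "" => Ok(None),
        other => Err(format!("unknown broker provider: {other}")),
    }
}
```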
- Modify `ida::memory_mcp_client::retrieve_memories`:
  - Change the signature to accept `broker_llm_client: &Option<Arc<dyn LLMClient>>` and `conversation_context: Option<String>`.
  - After semantic search, if results exist and a client exists:
    - Construct the broker prompt (query, context, candidates, instructions).
    - Call `client.generate(&broker_prompt).await`.
    - Parse the response (e.g., comma-separated keys).
    - Filter the semantic search results based on the broker response.
  - Implement fallback logic on broker error.
  - Update the call site in `ida/src/ipc_server.rs` (`handle_message`) to pass the client and context.
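The parse-and-filter step, with the fallback for malformed broker output, can be sketched as a pure function (the candidate representation as `(key, content)` pairs is an assumption):

```rust
// Sketch of the broker filtering step: parse the broker LLM's
// comma-separated key list and keep only the matching candidates, falling
// back to the unfiltered semantic-search results if nothing matches.

pub fn filter_by_broker_response(
    candidates: Vec<(String, String)>, // (key, content) pairs from semantic search
    broker_response: &str,             // e.g. "key1, key3"
) -> Vec<(String, String)> {
    let selected: Vec<&str> = broker_response
        .split(',')
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .collect();

    let filtered: Vec<(String, String)> = candidates
        .iter()
        .filter(|(key, _)| selected.contains(&key.as_str()))
        .cloned()
        .collect();

    // Fallback: if the broker's answer matched nothing (malformed output,
    // hallucinated keys), return the original semantic-search results.
    if filtered.is_empty() { candidates } else { filtered }
}
```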
- Modify IPC Message (`ipc/src/internal_messages.rs`):
  - Add `conversation_context: Option<String>` to `GetMemoriesRequest`.
- Update HAPPE Client (`happe/src/ida_client.rs`):
  - Update the `get_memories` signature and request construction.
- Update HAPPE Orchestrator (`happe/src/coordinator.rs` or similar):
  - Gather context and pass it to `ida_client::get_memories`.
- Update IDA Server (`ida/src/ipc_server.rs`):
  - Extract the context and pass it to `memory_mcp_client::retrieve_memories`.
- Testing: Unit, Integration, End-to-End tests.
- Refinement: Prompt engineering, latency analysis, fallback logic, configuration.
This phase adds a stateful session management system to HAPPE, allowing it to maintain conversation history across multiple requests within a session. This is crucial for providing proper context to IDA for memory retrieval.
- Module Setup:
  - Create directory `happe/src/session/`.
  - Create directory `happe/src/session/adapters/`.
  - Create file `happe/src/session/mod.rs`.
  - Create file `happe/src/session/store.rs`.
  - Create file `happe/src/session/adapters/in_memory.rs`.
  - Declare the `session` module in `happe/src/lib.rs` (`pub mod session;`).
- Define `SessionStore` Trait (`happe/src/session/store.rs`):
  - Define `#[async_trait] pub trait SessionStore: Send + Sync`.
  - Add methods for session management:
    - `create_session`
    - `get_session`
    - `save_session`
    - `delete_session`
    - `cleanup_expired_sessions`
- Implement `InMemorySessionStore` (`happe/src/session/adapters/in_memory.rs`):
  - Define `struct InMemorySessionStore`.
  - Add thread-safe storage using `Arc<RwLock<HashMap<String, Session>>>`.
  - Implement the `SessionStore` trait.
  - Add expiration and cleanup logic.
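A minimal sketch of the store, with the async `SessionStore` trait plumbing omitted so it compiles with std alone. `Session` here is a stand-in (the real struct's fields are defined in `store.rs`), and the TTL-based cleanup is one plausible expiration policy:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};

// Synchronous sketch of InMemorySessionStore; the real version implements
// the async SessionStore trait via async-trait.

#[derive(Clone)]
pub struct Session {
    pub id: String,
    pub history: Vec<String>, // conversation turns, simplified to strings
    pub last_used: Instant,
}

#[derive(Default, Clone)]
pub struct InMemorySessionStore {
    // Arc<RwLock<...>> so the store can be shared across handler tasks.
    sessions: Arc<RwLock<HashMap<String, Session>>>,
}

impl InMemorySessionStore {
    pub fn create_session(&self, id: &str) -> Session {
        let session = Session { id: id.to_string(), history: Vec::new(), last_used: Instant::now() };
        self.sessions.write().unwrap().insert(id.to_string(), session.clone());
        session
    }

    pub fn get_session(&self, id: &str) -> Option<Session> {
        self.sessions.read().unwrap().get(id).cloned()
    }

    pub fn save_session(&self, mut session: Session) {
        session.last_used = Instant::now();
        self.sessions.write().unwrap().insert(session.id.clone(), session);
    }

    pub fn delete_session(&self, id: &str) {
        self.sessions.write().unwrap().remove(id);
    }

    /// Drop sessions idle longer than `ttl` (run periodically from a task).
    pub fn cleanup_expired_sessions(&self, ttl: Duration) {
        self.sessions.write().unwrap().retain(|_, s| s.last_used.elapsed() <= ttl);
    }
}
```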
- Dependencies (`happe/Cargo.toml`):
  - Add `async-trait = "0.1"`.
  - Add `uuid = { version = "1", features = ["v4"] }` (for session ID generation).
  - Ensure `tokio` features include `sync`.
- State (`happe/src/http_server.rs`, `happe/src/ipc_server.rs`):
  - Add `session_store: SessionStoreRef` to the `AppState` struct (`http_server.rs`).
  - Add `session_store: SessionStoreRef` to the `IpcServerState` struct (`ipc_server.rs`).
  - Initialize the session store in the HTTP and IPC servers.
  - Add a session cleanup task to the IPC server.
- Coordinator (`happe/src/coordinator.rs`):
  - Modify the `process_query` signature to accept `session: &Session`.
  - Add a helper function `get_conversation_history` to extract history from the session.
  - Add a helper function `update_session_history` to store turns in the session.
- IPC Handler (`happe/src/ipc_server.rs`):
  - Modify `handle_connection`:
    - Get or create a session for the request.
    - Pass the session to `coordinator::process_query`.
    - Update the session with the new conversation turn.
    - Save the session back to the store.
- HTTP Handler (`happe/src/http_server.rs`):
  - Modify `handle_query`:
    - Extract the session ID from the request or create a new one.
    - Get or create the session.
    - Pass the session to `coordinator::process_query`.
    - Update and save the session.
- IPC Request (`ipc/src/happe_request/types.rs` or `mod.rs`):
  - Add a `session_id: Option<String>` field to the `HappeQueryRequest` struct.
  - Add a `session_id: Option<String>` field to the `HappeQueryResponse` struct.
- CLI (`@cli` Crate):
  - Identify where the main interaction loop/HAPPE client logic resides (e.g., `cli/src/app.rs` or `cli/src/happe_client.rs`).
  - On CLI startup, generate a persistent `session_id` for the duration of the run (e.g., `let session_id = uuid::Uuid::new_v4().to_string();`).
  - Modify the code that creates and sends `HappeQueryRequest` to include this `session_id`.
- Unit Tests:
  - Add tests for the `Session` struct in `store.rs`.
  - Add tests for `InMemorySessionStore`.
- Integration Tests: Test IPC handler with session history.
- End-to-End Tests (`@cli` -> `happe-daemon`): Verify context is maintained across multiple turns within a single CLI run.
- Refinement: Assess performance, history pruning, error handling, and security of session IDs.
This phase replaces `filesystem-mcp` and `command-mcp` with Python implementations. `memory-store-mcp` becomes a Python facade process, while its core logic (including the vector store via `gemini-memory`) is embedded and executed directly within `mcp-hostd.rs`.
- Analyze `mcp-hostd.rs` & the `gemini_mcp` Crate: (Config loading/launching confirmed).
- Analyze the `install` Crate: (Binary/config handling understood).
- Analyze the `gemini-memory` Dependency: (Will be embedded in `mcp-hostd`).
- Refine Design: Confirm the hybrid strategy:
  - `mcp-hostd` embeds `gemini_memory::MemoryStore`.
  - `mcp-hostd` intercepts & handles `memory-store-mcp/*` tool calls internally.
  - A minimal `memory-store-mcp.py` facade is launched via stdio for the handshake.
  - Standard `filesystem-mcp.py` and `command-mcp.py` servers.
- Modify `mcp-hostd.rs` (Initialization):
  - Ensure the `gemini-memory` dependency is available to `mcp-hostd`.
  - Initialize a `MemoryStore` instance during startup.
  - Load the necessary `MemoryStore` config (DB path, etc.) from the main `config.toml`.
  - Store `Arc<MemoryStore>` where `process_request` can access it.
- Modify `mcp-hostd.rs` (`process_request`):
  - Add logic to intercept `ExecuteTool` for `server == "memory-store-mcp"`.
  - Implement internal calls to the embedded `MemoryStore` for intercepted requests.
  - Bypass the standard `host.execute_tool` path for these intercepted requests.
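The interception branch might be shaped like this; the request struct and both handlers are simplified stand-ins for the real `mcp-hostd` types and the embedded `MemoryStore` calls:

```rust
// Sketch of the interception branch in mcp-hostd's process_request:
// ExecuteTool calls aimed at "memory-store-mcp" are answered by the embedded
// store instead of being forwarded to a child process.

pub struct ExecuteTool {
    pub server: String,
    pub tool: String,
    pub args: String,
}

fn handle_memory_tool(req: &ExecuteTool) -> String {
    // Placeholder for calls into the embedded gemini_memory::MemoryStore.
    format!("handled {} internally", req.tool)
}

fn forward_to_child_server(req: &ExecuteTool) -> String {
    // Placeholder for the standard host.execute_tool path (stdio to Python).
    format!("forwarded {} to {}", req.tool, req.server)
}

pub fn process_execute_tool(req: &ExecuteTool) -> String {
    if req.server == "memory-store-mcp" {
        // Intercept: the request never reaches the Python facade process.
        handle_memory_tool(req)
    } else {
        forward_to_child_server(req)
    }
}
```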
- Modify `mcp-hostd.rs` (Capabilities):
  - Manually add tool definitions for the internal memory operations under the `memory-store-mcp` server name during capability aggregation.
- Implement `python_mcp/servers/base_server.py` (as before).
- Implement `python_mcp/servers/filesystem_mcp.py` (as before).
- Implement `python_mcp/servers/command_mcp.py` (as before).
- Simplify `python_mcp/servers/memory_store_mcp.py`:
  - Remove the `MemoryStore` class and tool handlers.
  - `main` should only create `McpBaseServer`, register no tools, and call `run()`.
- Project Setup (`python_mcp`, `requirements.txt` - as before).
- Modify `install/src/main.rs`:
  - Remove the Rust MCP binary installation steps.
  - Add a step to copy the `python_mcp` directory to the installation target.
  - Modify `install_unified_config`:
    - Generate `mcp_servers.json` entries pointing all 3 servers (`memory-store-mcp`, `filesystem-mcp`, `command-mcp`) to their Python scripts.
    - Ensure the main `config.toml` generation includes the necessary `[memory]`/`[ida]` sections for the embedded `MemoryStore` config.
- Build & install the modified `install` crate.
- Verify placement of the Python scripts and config files.
- Install Python dependencies (`pip install -r requirements.txt`).
- Start `mcp-hostd`. Check logs for the embedded `MemoryStore` init & Python process handshakes.
- IPC Client Testing:
  - Test `GetCapabilities` (should show memory tools listed under `memory-store-mcp`).
  - Test `filesystem-mcp` tools (check Python logs).
  - Test `command-mcp` tools (check Python logs).
  - Test `memory-store-mcp` tools (check `mcp-hostd` logs & DB state).
- Shutdown Testing.
- Deployment.