fix(api): support delete_memory by user_id/conversation_id (Fixes #1103) #1104
fancyboi999 wants to merge 3 commits into MemTensor:main
Conversation
Pull request overview
This PR implements fast memory cleanup functionality by adding support for deleting memories directly by user_id and conversation_id/session_id, addressing issue #1103. Previously, users had to fetch all memories first via get_memory() and then delete them one by one using memory_ids, which was inefficient for large datasets (e.g., 142 seconds to delete 793 memories).
Changes:
- Added quick-delete fields (`user_id`, `session_id`, `conversation_id`) to `DeleteMemoryRequest`, with automatic aliasing of `conversation_id` to `session_id`
- Implemented filter-merging logic in the memory handler to combine quick-delete constraints with existing filters, using AND semantics for `and` filters and distribution for `or` filters
- Fixed simple dict filter handling (e.g. `{"user_id": "..."}`) in neo4j-community's `get_by_metadata` method and made `writable_cube_ids` optional for non-`file_ids` deletion paths
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/memos/api/product_models.py | Extended DeleteMemoryRequest with quick-delete fields and conversation_id aliasing validator |
| src/memos/api/handlers/memory_handler.py | Added filter merging logic and validation for quick-delete parameters |
| src/memos/graph_dbs/neo4j_community.py | Fixed simple dict filter support and relaxed writable_cube_ids requirement for filter-based deletion |
| tests/api/test_memory_handler_delete.py | Added comprehensive unit tests for quick-delete functionality and filter merging |
| {"memory_type": "UserMemory", "user_id": "u_1"}, | ||
| ] | ||
| }, | ||
| ) |
The test for OR filter distribution doesn't verify that pref_mem.delete_by_filter is called with the same merged filter. This is inconsistent with the other tests (e.g., test_delete_memories_quick_by_user_id lines 25-27) which verify both text_mem and pref_mem calls. The handler code on lines 512-513 does call delete_by_filter on pref_mem when it's not None, so this test should verify that behavior.
```python
        )
        naive_mem_cube.pref_mem.delete_by_filter.assert_called_once_with(
            filter={
                "or": [
                    {"memory_type": "WorkingMemory", "user_id": "u_1"},
                    {"memory_type": "UserMemory", "user_id": "u_1"},
                ]
            }
        )
```
```python
    )
    # Validate that only one of memory_ids, file_ids, or filter is provided
    quick_constraints = _build_quick_delete_constraints(delete_mem_req)
    has_filter_mode = delete_mem_req.filter is not None or bool(quick_constraints)
```
The validation logic doesn't properly handle the edge case where `filter` is set to an empty dict. If a user sends `{"filter": {}}`, the check `delete_mem_req.filter is not None` on line 480 will be `True` (an empty dict is not `None`), but when passed to the database layer, an empty filter dict would match ALL memories in the system, which is dangerous.
Consider adding validation to reject empty filter dicts, or explicitly check for non-empty filters in the `has_filter_mode` calculation.
```python
def test_delete_memories_quick_by_user_id():
    naive_mem_cube = _build_naive_mem_cube()
    req = DeleteMemoryRequest(user_id="u_1")

    resp = handle_delete_memories(req, naive_mem_cube)

    assert resp.data["status"] == "success"
    naive_mem_cube.text_mem.delete_by_filter.assert_called_once_with(
        writable_cube_ids=None,
        filter={"and": [{"user_id": "u_1"}]},
    )
    naive_mem_cube.pref_mem.delete_by_filter.assert_called_once_with(
        filter={"and": [{"user_id": "u_1"}]}
    )
```
The test suite doesn't cover the case where naive_mem_cube.pref_mem is None. In the actual handler code (line 512-513), there's a None check before calling delete_by_filter on pref_mem, but this code path isn't tested. Consider adding a test case to verify the handler works correctly when preference memory is disabled.
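A sketch of such a test, using a simplified stand-in for the handler so the example is self-contained. `_handle_delete` here is a hypothetical reduction of `handle_delete_memories` to just the `pref_mem` None guard; the real handler in `src/memos/api/handlers/memory_handler.py` does more.

```python
from unittest.mock import MagicMock

def _handle_delete(mem_cube, filter_):
    # Stand-in for the relevant slice of handle_delete_memories.
    mem_cube.text_mem.delete_by_filter(writable_cube_ids=None, filter=filter_)
    # Guard mirroring the handler's None check before touching pref_mem.
    if mem_cube.pref_mem is not None:
        mem_cube.pref_mem.delete_by_filter(filter=filter_)

def test_delete_memories_quick_when_pref_mem_disabled():
    mem_cube = MagicMock()
    mem_cube.pref_mem = None  # preference memory disabled

    _handle_delete(mem_cube, {"and": [{"user_id": "u_1"}]})

    # text_mem is still called; no AttributeError from a None pref_mem.
    mem_cube.text_mem.delete_by_filter.assert_called_once_with(
        writable_cube_ids=None,
        filter={"and": [{"user_id": "u_1"}]},
    )
```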
Description
This PR implements fast memory cleanup for `delete_memory` by request-level `user_id` and `conversation_id` (alias of `session_id`) to address issue #1103.

Problem solved:
- Previously, users had to call `get_memory` first and then delete by `memory_ids`, which is inefficient for large datasets.
- In neo4j-community, filter-based delete had an implementation gap for simple dict filters (e.g. `{"user_id": "..."}`), which could lead to incorrect matching behavior.

Implementation approach:
- Extend `DeleteMemoryRequest` with quick-delete fields: `user_id`, `session_id`, `conversation_id`.
- Automatically alias `conversation_id` -> `session_id`.
- In `memory_handler`, merge quick-delete fields into filter-mode deletion with explicit AND semantics.
- Validate that only one deletion mode is provided (`memory_ids` / `file_ids` / filter-or-quick-delete).
- In neo4j-community, fix simple dict filter handling in `get_by_metadata` and relax the `writable_cube_ids` requirement for non-`file_ids` deletion.

Dependencies:
Related Issue (Required): Fixes #1103
Type of change
How Has This Been Tested?
Test commands:
- `pytest -q tests/api/test_memory_handler_delete.py`
- `pytest -q tests/api/test_server_router.py`

Runtime smoke test (docker-backed env: neo4j + qdrant + redis):
- `POST /product/delete_memory` with payloads: `{"user_id":"..."}`, `{"conversation_id":"..."}`

Checklist
Reviewer Checklist