Merged
102 commits
65d5649
update reader and search strategy
Oct 28, 2025
6cad866
set strategy reader and search config
Oct 29, 2025
f040110
fix all reader conflicts
Oct 29, 2025
c389367
fix install problem
Oct 29, 2025
499502d
fix
Oct 29, 2025
e1bb223
fix test
Oct 29, 2025
72b7466
Merge branch 'dev' into dev_test
CaralHsi Oct 29, 2025
74585e8
Merge branch 'dev' into dev_test
fridayL Oct 30, 2025
790e99f
turn off graph recall
Oct 30, 2025
15b63a7
Merge branch 'dev' into dev_test
Oct 30, 2025
390ba29
turn off graph recall
Oct 30, 2025
9615282
turn off graph recall
Oct 30, 2025
2fb8ce0
Merge branch 'dev' into dev_test
fridayL Oct 30, 2025
6035522
Merge branch 'dev' into dev_test
Oct 30, 2025
04f412b
fix Searcher input bug
Oct 30, 2025
9716274
fix Searcher
Oct 30, 2025
c455a4e
Merge branch 'dev_test' of github.com:whipser030/MemOS into dev_test
Oct 30, 2025
f8b9b4a
fix Search
Oct 30, 2025
c840ad4
Merge branch 'dev' into dev_test
Oct 30, 2025
b9dbecd
fix bug
Nov 4, 2025
1798f60
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 4, 2025
6db95e7
Merge branch 'dev' into dev_test
Nov 4, 2025
1173c07
adjust strategy reader
Nov 4, 2025
7ab465b
Merge branch 'dev' into dev_test
Nov 4, 2025
744d227
adjust strategy reader
Nov 4, 2025
a9a98fa
adjust search config input
Nov 4, 2025
900f5e6
reformat code
Nov 4, 2025
ac7aff5
Merge branch 'dev' into dev_test
CaralHsi Nov 4, 2025
144c446
re pr
Nov 5, 2025
a2b55c7
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 5, 2025
441c52b
Merge branch 'dev' into dev_test
Nov 5, 2025
6f272db
Merge branch 'dev_test' of github.com:whipser030/MemOS into dev_test
Nov 5, 2025
f506d3e
format repair
Nov 5, 2025
db9041c
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 5, 2025
d921284
Merge branch 'dev' into dev_test
CaralHsi Nov 5, 2025
d036c53
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 11, 2025
5a3f0db
Merge branch 'dev' into dev_test
Nov 11, 2025
dc67413
fix time issue
Nov 11, 2025
7699b9a
Merge branch 'dev_test' of github.com:whipser030/MemOS into dev_test
Nov 11, 2025
8bfbf94
develop feedback process
Nov 19, 2025
875c551
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 19, 2025
7f20f8b
Resolve merge conflicts
Nov 19, 2025
4d712eb
feedback handler configuration
Nov 20, 2025
36b93eb
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 25, 2025
adec73e
merged
Nov 25, 2025
aef3aad
upgrade feedback using
Nov 26, 2025
81ec520
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 26, 2025
55c9d89
fix
Nov 26, 2025
b4fbfde
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 27, 2025
ee64719
Merge branch 'dev' into dev_test
Nov 27, 2025
0fa9be7
add threshold
Nov 27, 2025
4a4746e
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Nov 27, 2025
16de8da
Merge branch 'dev' into dev_test
Nov 27, 2025
facb7b3
update prompt
Nov 27, 2025
eab5fe6
update prompt
Nov 27, 2025
7577aac
fix handler
Nov 27, 2025
cc4069d
add feedback scheduler
Nov 29, 2025
2529db2
add handler change node update
Dec 1, 2025
898ccac
add handler change node update
Dec 1, 2025
faec340
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 1, 2025
913c24d
add handler change node update
Dec 1, 2025
91d063d
add handler change node update
Dec 1, 2025
2a47880
add handler change node update
Dec 1, 2025
c5618c6
Merge branch 'dev' into dev_test
whipser030 Dec 2, 2025
b9737f1
Merge branch 'dev' into dev_test
CaralHsi Dec 2, 2025
ad9c2e7
fix interface input
Dec 2, 2025
c0c32b1
Merge branch 'dev_test' of github.com:whipser030/MemOS into dev_test
Dec 2, 2025
d906f0d
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 2, 2025
696708e
fix interface input
Dec 2, 2025
6ad8dae
add chunk and ratio filter
Dec 3, 2025
6298c64
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 3, 2025
47acd7a
Merge branch 'dev' into dev_test
Dec 3, 2025
0727c25
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 3, 2025
0b0342d
Merge branch 'dev' into dev_test
Dec 3, 2025
294c1e6
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 3, 2025
d9158e4
Merge branch 'dev' into dev_test
Dec 3, 2025
699cdf7
update stopwords
Dec 3, 2025
8ca03c0
Merge branch 'dev' into dev_test
fridayL Dec 3, 2025
6076935
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 4, 2025
b2b0f6e
Merge branch 'dev' into dev_test
Dec 4, 2025
343eeb3
fix messages queue
Dec 4, 2025
1bb9396
Merge branch 'dev_test' of github.com:whipser030/MemOS into dev_test
Dec 4, 2025
045196c
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 4, 2025
7131c35
Merge branch 'dev' into dev_test
Dec 4, 2025
d66e8ce
add seach_by_keywords_LIKE
Dec 7, 2025
d081aaa
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 7, 2025
405658f
Merge branch 'dev' into dev_test
Dec 7, 2025
ae60994
add doc filter
Dec 9, 2025
70efbf3
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 9, 2025
a613c7e
merge dev
Dec 9, 2025
7b0f2f4
add retrieve query
Dec 9, 2025
c6768b6
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 9, 2025
005a5bb
add retrieve queies
Dec 10, 2025
d69e7f4
patch info filter
Dec 10, 2025
d4f18e8
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 10, 2025
3c5199a
add strict info filter
Dec 11, 2025
365e0b6
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 12, 2025
9bc942d
Merge branch 'dev' into dev_test
Dec 12, 2025
eab3d80
add log and make embedding safety net
Dec 12, 2025
9519f5e
Merge branch 'dev' of github.com:MemTensor/MemOS into dev
Dec 12, 2025
f21a885
Merge branch 'dev' into dev_test
Dec 12, 2025
7f146e1
add log and make embedding safety net
Dec 12, 2025
130 changes: 80 additions & 50 deletions src/memos/mem_feedback/feedback.py
@@ -5,15 +5,15 @@
from datetime import datetime
from typing import TYPE_CHECKING, Any

from tenacity import retry, stop_after_attempt, wait_exponential
from tenacity import retry, stop_after_attempt, wait_random_exponential

from memos import log
from memos.configs.memory import MemFeedbackConfig
from memos.context.context import ContextThreadPoolExecutor
from memos.dependency import require_python_package
from memos.embedders.factory import EmbedderFactory, OllamaEmbedder
from memos.graph_dbs.factory import GraphStoreFactory, PolarDBGraphDB
from memos.llms.factory import AzureLLM, LLMFactory, OllamaLLM, OpenAILLM
from memos.log import get_logger
from memos.mem_feedback.base import BaseMemFeedback
from memos.mem_feedback.utils import make_mem_item, should_keep_update, split_into_chunks
from memos.mem_reader.factory import MemReaderFactory
@@ -48,7 +48,7 @@
"generation": {"en": FEEDBACK_ANSWER_PROMPT, "zh": FEEDBACK_ANSWER_PROMPT_ZH},
}

logger = log.get_logger(__name__)
logger = get_logger(__name__)


class MemFeedback(BaseMemFeedback):
@@ -83,19 +83,47 @@ def __init__(self, config: MemFeedbackConfig):
self.reranker = None
self.DB_IDX_READY = False

@require_python_package(
import_name="jieba",
install_command="pip install jieba",
install_link="https://github.com/fxsjy/jieba",
)
def _tokenize_chinese(self, text):
"""split zh jieba"""
import jieba

tokens = jieba.lcut(text)
tokens = [token.strip() for token in tokens if token.strip()]
return self.stopword_manager.filter_words(tokens)

@retry(stop=stop_after_attempt(4), wait=wait_random_exponential(multiplier=1, max=10))
def _embed_once(self, texts):
return self.embedder.embed(texts)

@retry(stop=stop_after_attempt(3), wait=wait_random_exponential(multiplier=1, min=4, max=10))
def _retry_db_operation(self, operation):
try:
return operation()
except Exception as e:
logger.error(
f"[Feedback Core: _retry_db_operation] DB operation failed: {e}", exc_info=True
)
raise

def _batch_embed(self, texts: list[str], embed_bs: int = 5):
embed_bs = 5
texts_embeddings = []
results = []
dim = self.embedder.config.embedding_dims

for i in range(0, len(texts), embed_bs):
batch = texts[i : i + embed_bs]
try:
texts_embeddings.extend(self.embedder.embed(batch))
results.extend(self._embed_once(batch))
except Exception as e:
logger.error(
f"[Feedback Core: process_feedback_core] Embedding batch failed: {e}",
exc_info=True,
f"[Feedback Core: process_feedback_core] Embedding batch failed, Cover with all zeros: {len(batch)} entries: {e}"
)
return texts_embeddings
results.extend([[0.0] * dim for _ in range(len(batch))])
return results

def _pure_add(self, user_name: str, feedback_content: str, feedback_time: str, info: dict):
"""
Expand All @@ -108,7 +136,7 @@ def _pure_add(self, user_name: str, feedback_content: str, feedback_time: str, i
lambda: self.memory_manager.add(to_add_memories, user_name=user_name)
)
logger.info(
f"[Feedback Core: _pure_add] Added {len(added_ids)} memories for user {user_name}."
f"[Feedback Core: _pure_add] Pure added {len(added_ids)} memories for user {user_name}."
)
return {
"record": {
@@ -199,7 +227,7 @@ def _single_add_operation(
lambda: self.memory_manager.add([to_add_memory], user_name=user_name, mode=async_mode)
)

logger.info(f"[Memory Feedback ADD] {added_ids[0]}")
logger.info(f"[Memory Feedback ADD] memory id: {added_ids[0]}")
return {"id": added_ids[0], "text": to_add_memory.memory}

def _single_update_operation(
@@ -305,17 +333,22 @@ def semantics_feedback(

if not current_memories:
operations = [{"operation": "ADD"}]
logger.warning(
"[Feedback Core]: There was no recall of the relevant memory, so it was added directly."
)
else:
memory_chunks = split_into_chunks(current_memories, max_tokens_per_chunk=500)

all_operations = []
now_time = datetime.now().isoformat()
with ContextThreadPoolExecutor(max_workers=10) as executor:
future_to_chunk_idx = {}
for chunk in memory_chunks:
current_memories_str = "\n".join(
[f"{item.id}: {item.memory}" for item in chunk]
)
prompt = template.format(
now_time=now_time,
current_memories=current_memories_str,
new_facts=memory_item.memory,
chat_history=history_str,
@@ -337,7 +370,7 @@

operations = self.standard_operations(all_operations, current_memories)

logger.info(f"[Feedback memory operations]: {operations!s}")
logger.info(f"[Feedback Core Operations]: {operations!s}")

if not operations:
return {"record": {"add": [], "update": []}}
@@ -453,6 +486,7 @@ def _feedback_memory(
}

def _info_comparison(self, memory: TextualMemoryItem, _info: dict, include_keys: list) -> bool:
"""Filter the relevant memory items based on info"""
if not _info and not memory.metadata.info:
return True

@@ -463,10 +497,10 @@ def _info_comparison(self, memory: TextualMemoryItem, _info: dict, include_keys:
record.append(info_v == mem_v)
return all(record)

def _retrieve(self, query: str, info=None, user_name=None):
def _retrieve(self, query: str, info=None, top_k=100, user_name=None):
"""Retrieve memory items"""
retrieved_mems = self.searcher.search(
query, info=info, user_name=user_name, topk=50, full_recall=True
query, info=info, user_name=user_name, top_k=top_k, full_recall=True
)
retrieved_mems = [item[0] for item in retrieved_mems]
return retrieved_mems
@@ -524,11 +558,19 @@ def _get_llm_response(self, prompt: str, dsl: bool = True) -> dict:
else:
return response_text
except Exception as e:
logger.error(f"[Feedback Core LLM] Exception during chat generation: {e}")
logger.error(
f"[Feedback Core LLM Error] Exception during chat generation: {e} | response_text: {response_text}"
)
response_json = None
return response_json

def standard_operations(self, operations, current_memories):
"""
Regularize the operation design
1. Map the id to the correct original memory id
2. If there is an update, skip the memory object of add
3. If the modified text is too long, skip the update
"""
right_ids = [item.id for item in current_memories]
right_lower_map = {x.lower(): x for x in right_ids}

@@ -582,9 +624,16 @@ def correct_item(data):
has_update = any(item.get("operation").lower() == "update" for item in llm_operations)
if has_update:
filtered_items = [
item for item in llm_operations if item.get("operation").lower() == "add"
]
update_items = [
item for item in llm_operations if item.get("operation").lower() != "add"
]
return filtered_items
if filtered_items:
logger.info(
f"[Feedback Core: semantics_feedback] Due to have update objects, skip add: {filtered_items}"
)
return update_items
else:
return llm_operations

@@ -683,6 +732,10 @@ def process_keyword_replace(
if doc_scope != "NONE":
retrieved_memories = self._doc_filter(doc_scope, retrieved_memories)

logger.info(
f"[Feedback Core: process_keyword_replace] Keywords recalled memory for user {user_name}: {len(retrieved_ids)} memories | After filtering: {len(retrieved_memories)} memories."
)

if not retrieved_memories:
return {"record": {"add": [], "update": []}}

@@ -693,14 +746,14 @@
if original_word in old_mem.memory:
mem = old_mem.model_copy(deep=True)
mem.memory = mem.memory.replace(original_word, target_word)
if original_word in mem.metadata.tags:
mem.metadata.tags.remove(original_word)
if target_word not in mem.metadata.tags:
mem.metadata.tags.append(target_word)
pick_index.append(i)
update_memories.append(mem)
update_memories_embed = self._batch_embed([mem.memory for mem in update_memories])

update_memories_embed = self._retry_db_operation(
lambda: self._batch_embed([mem.memory for mem in update_memories])
)
for _i, embed in zip(range(len(update_memories)), update_memories_embed, strict=False):
update_memories[_i].metadata.embedding = embed

@@ -805,9 +858,7 @@ def check_validity(item):
feedback_memories = []

corrected_infos = [item["corrected_info"] for item in valid_feedback]
feedback_memories_embeddings = self._retry_db_operation(
lambda: self._batch_embed(corrected_infos)
)
feedback_memories_embeddings = self._batch_embed(corrected_infos)

for item, embedding in zip(
valid_feedback, feedback_memories_embeddings, strict=False
@@ -845,8 +896,10 @@ def check_validity(item):
info=info,
**kwargs,
)
add_memories = mem_record["record"]["add"]
update_memories = mem_record["record"]["update"]
logger.info(
f"[Feedback Core: process_feedback_core] Processed {len(feedback_memories)} feedback memories for user {user_name}."
f"[Feedback Core: process_feedback_core] Processed {len(feedback_memories)} feedback | add {len(add_memories)} memories | update {len(update_memories)} memories for user {user_name}."
)
return mem_record

@@ -902,42 +955,19 @@ def process_feedback(
task_id = kwargs.get("task_id", "default")

logger.info(
f"[MemFeedback process] Feedback Completed : user {user_name} | task_id {task_id} | record {record}."
f"[Feedback Core MemFeedback process] Feedback Completed : user {user_name} | task_id {task_id} | record {record}."
)

return {"answer": answer, "record": record["record"]}
except concurrent.futures.TimeoutError:
logger.error(
f"[MemFeedback process] Timeout in sync mode for {user_name}", exc_info=True
f"[Feedback Core MemFeedback process] Timeout in sync mode for {user_name}",
exc_info=True,
)
return {"answer": "", "record": {"add": [], "update": []}}
except Exception as e:
logger.error(
f"[MemFeedback process] Error in concurrent tasks for {user_name}: {e}",
f"[Feedback Core MemFeedback process] Error in concurrent tasks for {user_name}: {e}",
exc_info=True,
)
return {"answer": "", "record": {"add": [], "update": []}}

# Helper for DB operations with retry
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def _retry_db_operation(self, operation):
try:
return operation()
except Exception as e:
logger.error(
f"[MemFeedback: _retry_db_operation] DB operation failed: {e}", exc_info=True
)
raise

@require_python_package(
import_name="jieba",
install_command="pip install jieba",
install_link="https://github.com/fxsjy/jieba",
)
def _tokenize_chinese(self, text):
"""split zh jieba"""
import jieba

tokens = jieba.lcut(text)
tokens = [token.strip() for token in tokens if token.strip()]
return self.stopword_manager.filter_words(tokens)
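The core resilience pattern this PR adds to `_embed_once` / `_batch_embed` — retry each embedding batch, then cover a batch that still fails with zero vectors — can be sketched in isolation. This is a minimal stand-alone approximation, not the PR's actual code: `retry` is a plain-loop stand-in for tenacity's `wait_random_exponential` policy, and `embed` and `dim` are hypothetical stand-ins for the embedder call and its configured embedding dimension.

```python
import random
import time


def retry(fn, attempts=4, base=1.0, cap=10.0):
    # Plain-loop approximation of tenacity's
    # retry(stop=stop_after_attempt(4), wait=wait_random_exponential(...)).
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(min(cap, random.uniform(0, base * (2**i))))


def batch_embed(embed, texts, dim, batch_size=5, attempts=4, base=1.0):
    # Embed in small batches; if a batch still fails after all retries,
    # cover it with zero vectors so the output stays aligned with the input.
    results = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i : i + batch_size]
        try:
            results.extend(retry(lambda: embed(batch), attempts=attempts, base=base))
        except Exception:
            results.extend([[0.0] * dim for _ in batch])
    return results
```

A dead batch thus degrades to inert zero vectors instead of aborting the whole feedback write — the "embedding safety net" named in the Dec 12 commits.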
2 changes: 1 addition & 1 deletion src/memos/multi_mem_cube/single_cube.py
@@ -185,7 +185,7 @@ def feedback_memories(self, feedback_req: APIFeedbackRequest) -> dict[str, Any]:
task_id=feedback_req.task_id,
info=feedback_req.info,
)
self.logger.info(f"Feedback memories result: {feedback_result}")
self.logger.info(f"[Feedback memories result:] {feedback_result}")
return feedback_result

def _get_search_mode(self, mode: str) -> str:
5 changes: 5 additions & 0 deletions src/memos/templates/mem_feedback_prompts.py
@@ -441,6 +441,8 @@
]
}}

**Current time**
{now_time}

**Current Memories**
{current_memories}
@@ -581,6 +583,9 @@
]
}}

**当前时间:**
{now_time}

**当前记忆:**
{current_memories}

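The prompt change above adds a `{now_time}` placeholder to both the English and Chinese templates, which is why `semantics_feedback` now computes `now_time` and passes it to `template.format(...)`. A minimal sketch of the coupling, using a hypothetical cut-down template (the real ones live in `src/memos/templates/mem_feedback_prompts.py`):

```python
from datetime import datetime

# Hypothetical cut-down version of the feedback prompt after this PR.
TEMPLATE = (
    "**Current time**\n{now_time}\n\n"
    "**Current Memories**\n{current_memories}\n"
)


def build_prompt(current_memories: str) -> str:
    # Every call site must now supply now_time, mirroring semantics_feedback.
    return TEMPLATE.format(
        now_time=datetime.now().isoformat(),
        current_memories=current_memories,
    )
```

Any call site that is not updated alongside the template raises `KeyError: 'now_time'`, so the template edit and the `semantics_feedback` edit have to land together.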