Merged
80 commits
70373f9
fix playground bug, internet search judge
Dec 4, 2025
d181339
Merge branch 'dev' into feat/fix_palyground_bug
Dec 4, 2025
11cf00a
fix playground internet bug
Dec 4, 2025
6b10ce1
merge dev
Dec 4, 2025
c861f61
modify delete mem
Dec 4, 2025
e638039
modify tool resp bug in multi cube
Dec 4, 2025
dcd3d50
Merge branch 'dev' into feat/fix_palyground_bug
Dec 4, 2025
0c0eff8
Merge branch 'dev' into feat/fix_palyground_bug
Dec 5, 2025
8765dc4
fix bug in playground chat handle and search inter
Dec 5, 2025
1a335db
modify prompt
Dec 5, 2025
18320ff
fix bug in playground
Dec 6, 2025
666b897
fix playground bug
Dec 6, 2025
275b9b6
Merge branch 'dev' into feat/fix_palyground_bug
Dec 7, 2025
0d22512
fix bug
Dec 7, 2025
d38f55f
Merge branch 'dev' into feat/fix_palyground_bug
Dec 7, 2025
a9eb1f6
fix code
Dec 7, 2025
94ad709
Merge branch 'dev' into feat/fix_palyground_bug
Dec 7, 2025
723a14f
fix model bug in playground
Dec 7, 2025
6f06a23
Merge branch 'dev' into feat/fix_palyground_bug
Dec 7, 2025
a300670
Merge branch 'dev' into feat/fix_palyground_bug
Dec 8, 2025
7ee13b1
Merge branch 'dev' into feat/fix_palyground_bug
Dec 8, 2025
5ab6e92
modify plan b
Dec 8, 2025
1bb0bcd
llm param modify
Dec 8, 2025
1b607e7
Merge branch 'dev' into feat/fix_palyground_bug
Dec 8, 2025
f5bc426
add logger in playground
Dec 8, 2025
a9fa309
modify code
Dec 9, 2025
d2efa24
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
9ebfbe1
Merge branch 'dev' into feat/fix_palyground_bug
fridayL Dec 9, 2025
4c055d0
fix bug
Dec 9, 2025
27b4fc4
modify code
Dec 9, 2025
cefeefb
modify code
Dec 9, 2025
7e05fa7
fix bug
Dec 9, 2025
a4f66b1
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
9b47647
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
05da172
fix search bug in playground
Dec 9, 2025
e410ec2
fix bug
Dec 9, 2025
0324588
move scheduler to back
Dec 9, 2025
a834028
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
4084954
modify pref location
Dec 9, 2025
de5e372
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
87861ab
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
8b547b8
modify fast net search
Dec 9, 2025
c915867
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
2f238fd
Merge branch 'dev' into feat/fix_palyground_bug
Dec 9, 2025
4543332
add tags and new package
Dec 10, 2025
c51ef0d
merge dev
Dec 10, 2025
033e8bd
modify prompt fix bug
Dec 10, 2025
e300112
Merge branch 'dev' into feat/fix_palyground_bug
Dec 10, 2025
da498fc
Merge branch 'dev' into feat/fix_palyground_bug
Dec 10, 2025
4057f5d
remove nltk due to image problem
Dec 10, 2025
479d74e
Merge branch 'dev' into feat/fix_palyground_bug
Dec 10, 2025
ecff6e5
prompt modify
Dec 11, 2025
1b4ef23
Merge branch 'dev' into feat/fix_palyground_bug
Dec 11, 2025
7e18cae
modify bug remove redundant field
Dec 11, 2025
a70ffa3
modify bug
Dec 11, 2025
e06eff2
merge dev
Dec 11, 2025
7a149e3
fix playground bug
Dec 11, 2025
0c2d132
merge dev
Dec 11, 2025
d69fd88
fix bug
Dec 11, 2025
a9a7613
merge dev
Dec 11, 2025
dad4ca6
bump internet topk
Dec 11, 2025
f49fad6
Merge branch 'dev' into feat/fix_palyground_bug
Dec 11, 2025
393a7f5
bump to 50
Dec 11, 2025
b691b05
Merge branch 'dev' into feat/fix_palyground_bug
Dec 11, 2025
2bba2c2
fix bug cite
Dec 11, 2025
571770b
modify search
Dec 12, 2025
f5e032c
merge dev
Dec 12, 2025
d7f5c0d
Merge branch 'dev' into feat/fix_palyground_bug
Dec 15, 2025
a570450
remote query add in playground
Dec 15, 2025
14a21c4
modify bug
Dec 15, 2025
2d84ae5
Merge branch 'dev' into feat/fix_palyground_bug
Dec 15, 2025
42591c8
modify pref bug
Dec 16, 2025
c4c3a87
Merge branch 'dev' into feat/fix_palyground_bug
CaralHsi Dec 16, 2025
289debd
move add position
Dec 16, 2025
9c855a8
Merge branch 'dev' into feat/fix_palyground_bug
Dec 16, 2025
705ed47
Merge branch 'dev' into feat/fix_palyground_bug
Dec 16, 2025
e654465
modify chat prompt
Dec 16, 2025
7b01f84
modify overthinking
Dec 16, 2025
a751823
Merge branch 'dev' into feat/fix_palyground_bug
Dec 17, 2025
002f990
add logger in playground chat
Dec 17, 2025
34 changes: 19 additions & 15 deletions src/memos/api/handlers/chat_handler.py
@@ -37,6 +37,7 @@
     ANSWER_TASK_LABEL,
     QUERY_TASK_LABEL,
 )
+from memos.templates.cloud_service_prompt import get_cloud_chat_prompt
 from memos.templates.mos_prompts import (
     FURTHER_SUGGESTION_PROMPT,
     get_memos_prompt,
@@ -145,9 +146,10 @@ def handle_chat_complete(self, chat_req: APIChatCompleteRequest) -> dict[str, An
 
         # Step 2: Build system prompt
         system_prompt = self._build_system_prompt(
-            filtered_memories,
-            search_response.data.get("pref_string", ""),
-            chat_req.system_prompt,
+            query=chat_req.query,
+            memories=filtered_memories,
+            pref_string=search_response.data.get("pref_string", ""),
+            base_prompt=chat_req.system_prompt,
         )
 
         # Prepare message history
@@ -263,9 +265,10 @@ def generate_chat_response() -> Generator[str, None, None]:
 
             # Step 2: Build system prompt with memories
             system_prompt = self._build_system_prompt(
-                filtered_memories,
-                search_response.data.get("pref_string", ""),
-                chat_req.system_prompt,
+                query=chat_req.query,
+                memories=filtered_memories,
+                pref_string=search_response.data.get("pref_string", ""),
+                base_prompt=chat_req.system_prompt,
             )
 
             # Prepare messages
@@ -462,6 +465,7 @@ def generate_chat_response() -> Generator[str, None, None]:
                 conversation=chat_req.history,
                 mode="fine",
             )
+            self.logger.info(f"[PLAYGROUND chat parsed_goal]: {parsed_goal}")
 
             if chat_req.beginner_guide_step == "first":
                 chat_req.internet_search = False
@@ -476,8 +480,8 @@
 
             # ====== second deep search ======
             search_req = APISearchRequest(
-                query=parsed_goal.rephrased_query
-                or chat_req.query + (f"{parsed_goal.tags}" if parsed_goal.tags else ""),
+                query=(parsed_goal.rephrased_query or chat_req.query)
+                + (f"{parsed_goal.tags}" if parsed_goal.tags else ""),
                 user_id=chat_req.user_id,
                 readable_cube_ids=readable_cube_ids,
                 mode="fast",
@@ -491,6 +495,9 @@
                 search_memory_type="All",
                 search_tool_memory=False,
             )
+
+            self.logger.info(f"[PLAYGROUND second search query]: {search_req.query}")
+
             start_time = time.time()
             search_response = self.search_handler.handle_search_memories(search_req)
             end_time = time.time()
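The regrouped `query=` expression in this hunk is an operator-precedence fix, not a style change: `+` binds tighter than `or`, so the old form appended the tags only to the fallback `chat_req.query` and silently dropped them whenever a rephrased query existed. A minimal sketch of the difference, with illustrative values (not taken from the PR):

```python
# `+` binds tighter than `or`, so the old expression parsed as
# `rephrased or (query + tags)`: tags were appended only to the fallback
# query, and lost whenever `rephrased` was truthy.
rephrased = "best mango desserts"  # illustrative stand-ins
query = "mango"
tags = "[food]"

old_query = rephrased or query + tags    # buggy: tags dropped
new_query = (rephrased or query) + tags  # fixed: tags always appended

assert old_query == "best mango desserts"
assert new_query == "best mango desserts[food]"
```

The parenthesized form matches the apparent intent: the tags qualify whichever query string ends up being searched.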
@@ -762,19 +769,16 @@ def _build_pref_md_string_for_playground(self, pref_mem_list: list[any]) -> str:
 
     def _build_system_prompt(
         self,
+        query: str,
         memories: list | None = None,
         pref_string: str | None = None,
         base_prompt: str | None = None,
         **kwargs,
     ) -> str:
         """Build system prompt with optional memories context."""
         if base_prompt is None:
-            base_prompt = (
-                "You are a knowledgeable and helpful AI assistant. "
-                "You have access to conversation memories that help you provide more personalized responses. "
-                "Use the memories to understand the user's context, preferences, and past interactions. "
-                "If memories are provided, reference them naturally when relevant, but don't explicitly mention having memories."
-            )
+            lang = detect_lang(query)
+            base_prompt = get_cloud_chat_prompt(lang=lang)
 
         memory_context = ""
         if memories:
@@ -790,7 +794,7 @@ def _build_system_prompt(
             return base_prompt.format(memories=memory_context)
         elif base_prompt and memories:
             # For backward compatibility, append memories if no placeholder is found
-            memory_context_with_header = "\n\n## Memories:\n" + memory_context
+            memory_context_with_header = "\n\n## Fact Memories:\n" + memory_context
             return base_prompt + memory_context_with_header
         return base_prompt
 
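The hard-coded English default prompt is replaced by a language-aware lookup: `detect_lang` (imported elsewhere in the handler, not shown in this diff) classifies the query, and `get_cloud_chat_prompt` returns the matching template. A rough sketch of the selection step, using a naive stand-in detector rather than the real helper:

```python
# Naive stand-in for the real detect_lang helper (not shown in this diff):
# treat any CJK character as Chinese, everything else as English.
def detect_lang(text: str) -> str:
    return "zh" if any("\u4e00" <= ch <= "\u9fff" for ch in text) else "en"

assert detect_lang("推荐一些芒果甜品") == "zh"
assert detect_lang("recommend some mango desserts") == "en"
```

Whatever the real detector does, the handler only needs it to return `"zh"` or `"en"`, since `get_cloud_chat_prompt` raises `ValueError` for any other value.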
107 changes: 107 additions & 0 deletions src/memos/templates/cloud_service_prompt.py
@@ -0,0 +1,107 @@
from datetime import datetime


CLOUD_CHAT_PROMPT_ZH = """
# Role
你是一个拥有长期记忆能力的智能助手 (MemOS Assistant)。你的目标是结合检索到的记忆片段,为用户提供高度个性化、准确且逻辑严密的回答。

# System Context
- 当前时间: {current_time} (请以此作为判断记忆时效性的基准)

# Memory Data
以下是 MemOS 检索到的相关信息,分为“事实”和“偏好”。
- **事实 (Facts)**:可能包含用户属性、历史对话记录或第三方信息。
- **特别注意**:其中标记为 `[assistant观点]`、`[模型总结]` 的内容代表 **AI 过去的推断**,**并非**用户的原话。
- **偏好 (Preferences)**:用户对回答风格、格式或逻辑的显式/隐式要求。

<memories>
{memories}
</memories>

# Critical Protocol: Memory Safety (记忆安全协议)
检索到的记忆可能包含**AI 自身的推测**、**无关噪音**或**主体错误**。你必须严格执行以下**“四步判决”**,只要有一步不通过,就**丢弃**该条记忆:

1. **来源真值检查 (Source Verification)**:
- **核心**:区分“用户原话”与“AI 推测”。
- 如果记忆带有 `[assistant观点]` 等标签,这仅代表AI过去的**假设**,**不可**将其视为用户的绝对事实。
- *反例*:记忆显示 `[assistant观点] 用户酷爱芒果`。如果用户没提,不要主动假设用户喜欢芒果,防止循环幻觉。
- **原则:AI 的总结仅供参考,权重大幅低于用户的直接陈述。**

2. **主语归因检查 (Attribution Check)**:
- 记忆中的行为主体是“用户本人”吗?
- 如果记忆描述的是**第三方**(如“候选人”、“面试者”、“虚构角色”、“案例数据”),**严禁**将其属性归因于用户。

3. **强相关性检查 (Relevance Check)**:
- 记忆是否直接有助于回答当前的 `Original Query`?
- 如果记忆仅仅是关键词匹配(如:都提到了“代码”)但语境完全不同,**必须忽略**。

4. **时效性检查 (Freshness Check)**:
- 记忆内容是否与用户的最新意图冲突?以当前的 `Original Query` 为最高事实标准。

# Instructions
1. **审视**:先阅读 `facts memories`,执行“四步判决”,剔除噪音和不可靠的 AI 观点。
2. **执行**:
- 仅使用通过筛选的记忆补充背景。
- 严格遵守 `preferences` 中的风格要求。
3. **输出**:直接回答问题,**严禁**提及“记忆库”、“检索”或“AI 观点”等系统内部术语。
4. **语言**:回答语言应与用户查询语言一致。
"""


CLOUD_CHAT_PROMPT_EN = """
# Role
You are an intelligent assistant powered by MemOS. Your goal is to provide personalized and accurate responses by leveraging retrieved memory fragments, while strictly avoiding hallucinations caused by past AI inferences.

# System Context
- Current Time: {current_time} (Baseline for freshness)

# Memory Data
Below is the information retrieved by MemOS, categorized into "Facts" and "Preferences".
- **Facts**: May contain user attributes, historical logs, or third-party details.
- **Warning**: Content tagged with `[assistant观点]` or `[summary]` represents **past AI inferences**, NOT direct user quotes.
- **Preferences**: Explicit or implicit user requirements regarding response style and format.

<memories>
{memories}
</memories>

# Critical Protocol: Memory Safety
You must strictly execute the following **"Four-Step Verdict"**. If a memory fails any step, **DISCARD IT**:

1. **Source Verification (CRITICAL)**:
- **Core**: Distinguish between "User's Input" and "AI's Inference".
- If a memory is tagged as `[assistant观点]`, treat it as a **hypothesis**, not a hard fact.
- *Example*: Memory says `[assistant观点] User loves mango`. Do not treat this as absolute truth unless the user reaffirms it.
- **Principle: AI summaries have much lower authority than direct user statements.**

2. **Attribution Check**:
- Is the "Subject" of the memory definitely the User?
- If the memory describes a **Third Party** (e.g., Candidate, Fictional Character), **NEVER** attribute these traits to the User.

3. **Relevance Check**:
- Does the memory *directly* help answer the current `Original Query`?
- If it is merely a keyword match with different context, **IGNORE IT**.

4. **Freshness Check**:
- Does the memory conflict with the user's current intent? The current `Original Query` is always the supreme Source of Truth.

# Instructions
1. **Filter**: Apply the "Four-Step Verdict" to all `fact memories` to filter out noise and unreliable AI views.
2. **Synthesize**: Use only validated memories for context.
3. **Style**: Strictly adhere to `preferences`.
4. **Output**: Answer directly. **NEVER** mention "retrieved memories," "database," or "AI views" in your response.
5. **Language**: Respond in the same language as the user's query.
"""


def get_cloud_chat_prompt(lang: str = "en") -> str:
if lang == "zh":
return CLOUD_CHAT_PROMPT_ZH.replace(
"{current_time}", datetime.now().strftime("%Y-%m-%d %H:%M (%A)")
)
elif lang == "en":
return CLOUD_CHAT_PROMPT_EN.replace(
"{current_time}", datetime.now().strftime("%Y-%m-%d %H:%M (%A)")
)
else:
raise ValueError(f"Invalid language: {lang}")
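`get_cloud_chat_prompt` deliberately fills `{current_time}` with `str.replace` rather than `str.format`: the template still carries a second placeholder, `{memories}`, which `_build_system_prompt` fills later, and a premature `format` call would raise `KeyError` on the unfilled field. A condensed sketch of this two-stage fill (template text abbreviated to the two placeholders):

```python
# Condensed template: two placeholders filled at different times.
TEMPLATE = "Current Time: {current_time}\n<memories>\n{memories}\n</memories>"

# Stage 1: replace() touches only the literal "{current_time}" token and
# leaves {memories} intact. TEMPLATE.format(current_time=...) would
# instead raise KeyError because {memories} is still unfilled.
prompt = TEMPLATE.replace("{current_time}", "2025-12-17 10:00 (Wednesday)")
assert "{memories}" in prompt

# Stage 2: the chat handler later fills the remaining placeholder.
final = prompt.format(memories="- user prefers concise answers")
assert "{memories}" not in final and "{current_time}" not in final
```

This is why `_build_system_prompt` can still hit its `"{memories}" in base_prompt` branch after the template has already been stamped with the current time.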
2 changes: 2 additions & 0 deletions src/memos/templates/mos_prompts.py
@@ -158,6 +158,7 @@
 - For preferences, do not mention the source in the response, do not appear `[Explicit preference]`, `[Implicit preference]`, `(Explicit preference)` or `(Implicit preference)` in the response
 - The last part of the response should not contain `(Note: ...)` or `(According to ...)` etc.
 - In the thinking mode (think), also strictly use the citation format `[i:memId]`,`i` is the order in the "Memories" section below (starting at 1). `memId` is the given short memory ID. The same as the response format.
+- Do not repeat the thinking too much, use the correct reasoning
 
 ## Key Principles
 - Reference only relevant memories to avoid information overload
@@ -267,6 +268,7 @@
 - 对于偏好,不要在回答中标注来源,不要出现`[显式偏好]`或`[隐式偏好]`或`(显式偏好)`或`(隐式偏好)`的字样
 - 回复内容的结尾不要出现`(注: ...)`或`(根据...)`等解释
 - 在思考模式下(think),也需要严格采用引用格式`[i:memId]`,`i`是下面"记忆"部分中的顺序(从1开始)。`memId`是给定的短记忆ID。与回答要求一致
+- 不要过度重复的思考,使用正确的推理
 
 ## 核心原则
 - 仅引用相关记忆以避免信息过载