diff --git a/README.md b/README.md
index 29a50c1da..edd4e8905 100644
--- a/README.md
+++ b/README.md
@@ -118,89 +118,31 @@ showcasing its capabilities in **information extraction**, **temporal and cross-
 - **🔌 Extensible**: Easily extend and customize memory modules, data sources, and LLM integrations.
 
-## 📦 Installation
+## 🚀 Quickstart Guide
 
-### Install via pip
-
-```bash
-pip install MemoryOS
-```
-
-### Optional Dependencies
-
-MemOS provides several optional dependency groups for different features. You can install them based on your needs.
-
-| Feature          | Package Name              |
-| ---------------- | ------------------------- |
-| Tree Memory      | `MemoryOS[tree-mem]`      |
-| Memory Reader    | `MemoryOS[mem-reader]`    |
-| Memory Scheduler | `MemoryOS[mem-scheduler]` |
-
-Example installation commands:
-
-```bash
-pip install MemoryOS[tree-mem]
-pip install MemoryOS[tree-mem,mem-reader]
-pip install MemoryOS[mem-scheduler]
-pip install MemoryOS[tree-mem,mem-reader,mem-scheduler]
-```
-
-### External Dependencies
-
-#### Ollama Support
-
-To use MemOS with [Ollama](https://ollama.com/), first install the Ollama CLI:
-
-```bash
-curl -fsSL https://ollama.com/install.sh | sh
-```
-
-#### Transformers Support
-
-To use functionalities based on the `transformers` library, ensure you have [PyTorch](https://pytorch.org/get-started/locally/) installed (CUDA version recommended for GPU acceleration).
-
-#### Download Examples
+### Get API Key
+ - Sign up and get started on the [`MemOS dashboard`](https://memos-dashboard.openmem.net/cn/quickstart/?source=landing).
+ - Open the API Keys Console in the MemOS dashboard and copy the API Key into the initialization code.
-To download example code, data and configurations, run the following command:
-
-```bash
-memos download_examples
-```
-
-
-## 🚀 Getting Started
-
-### ⭐️ MemOS online API
-The easiest way to use MemOS. Equip your agent with memory **in minutes**!
-
-Sign up and get started on[`MemOS dashboard`](https://memos-dashboard.openmem.net/cn/quickstart/?source=landing).
-
-
-### Self-Hosted Server
-1. Get the repository.
-```bash
-git clone https://github.com/MemTensor/MemOS.git
-cd MemOS
-pip install -r ./docker/requirements.txt
-```
+### Install via pip
 
-2. Configure `docker/.env.example` and copy to `MemOS/.env`
-3. Start the service.
 ```bash
-uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8001 --workers 8
+pip install MemoryOS -U
 ```
 
-### Interface SDK
-#### Here is a quick example showing how to create all interface SDK
+### Basic Usage
 
-This interface is used to add messages, supporting multiple types of content and batch additions. MemOS will automatically parse the messages and handle memory for reference in subsequent conversations.
+- Initialize the MemOS client with your API Key to start sending requests.
 ```python
 # Please make sure MemoS is installed (pip install MemoryOS -U)
 from memos.api.client import MemOSClient
 
 # Initialize the client using the API Key
 client = MemOSClient(api_key="YOUR_API_KEY")
+```
+- This API allows you to add one or more messages to a specific conversation. As illustrated in the examples below, you can add messages in real time during a user-assistant interaction, import historical messages in bulk, or enrich the conversation with user preferences and behavior data. All added messages are transformed into memories by MemOS, enabling their retrieval in future conversations to support chat history management, user behavior tracking, and personalized interactions.
+```python
 messages = [
     {"role": "user", "content": "I have planned to travel to Guangzhou during the summer vacation. What chain hotels are available for accommodation?"},
     {"role": "assistant", "content": "You can consider [7 Days, All Seasons, Hilton], and so on."},
@@ -214,79 +156,90 @@ res = client.add_message(messages=messages, user_id=user_id, conversation_id=con
 print(f"result: {res}")
 ```
 
-This interface is used to retrieve the memories of a specified user, returning the memory fragments most relevant to the input query for Agent use. The recalled memory fragments include 'factual memory', 'preference memory', and 'tool memory'.
+- This API allows you to query a user’s memory and returns the fragments most relevant to the input. These can serve as references for the model when generating responses. As shown in the examples below, you can retrieve memory in real time during a user’s conversation with the AI, or perform a global search across their entire memory to create user profiles or support personalized recommendations, improving both dialogue coherence and personalization.
+In the latest update, in addition to “Fact Memory”, the system now supports “Preference Memory”, enabling the LLM to respond in a way that better understands the user.
 ```python
-# Please make sure MemoS is installed (pip install MemoryOS -U)
-from memos.api.client import MemOSClient
-
-# Initialize the client using the API Key
-client = MemOSClient(api_key="YOUR_API_KEY")
-
 query = "I want to go out to play during National Day. Can you recommend a city I haven't been to and a hotel brand I haven't stayed at?"
 user_id = "memos_user_123"
-conversation_id = "0928"
+conversation_id = "0610"
 res = client.search_memory(query=query, user_id=user_id, conversation_id=conversation_id)
 print(f"result: {res}")
 ```
 
-This interface is used to delete the memory of specified users and supports batch deletion.
-```python -# Please make sure MemoS is installed (pip install MemoryOS -U) -from memos.api.client import MemOSClient - -# Initialize the client using the API Key -client = MemOSClient(api_key="YOUR_API_KEY") - -user_ids = ["memos_user_123"] -# Replace with the memory ID -memory_ids = ["6b23b583-f4c4-4a8f-b345-58d0c48fea04"] -res = client.delete_memory(user_ids=user_ids, memory_ids=memory_ids) - -print(f"result: {res}") -``` - -This interface is used to add feedback to messages in the current session, allowing MemOS to correct its memory based on user feedback. -```python -# Please make sure MemoS is installed (pip install MemoryOS -U) -from memos.api.client import MemOSClient - -# Initialize the client using the API Key -client = MemOSClient(api_key="YOUR_API_KEY") - -user_id = "memos_user_123" -conversation_id = "memos_feedback_conv" -feedback_content = "No, let's change it now to a meal allowance of 150 yuan per day and a lodging subsidy of 700 yuan per day for first-tier cities; for second- and third-tier cities, it remains the same as before." -# Replace with the knowledgebase ID -allow_knowledgebase_ids = ["basee5ec9050-c964-484f-abf1-ce3e8e2aa5b7"] - -res = client.add_feedback( - user_id=user_id, - conversation_id=conversation_id, - feedback_content=feedback_content, - allow_knowledgebase_ids=allow_knowledgebase_ids -) - -print(f"result: {res}") -``` - -This interface is used to create a knowledgebase associated with a project -```python -# Please make sure MemoS is installed (pip install MemoryOS -U) -from memos.api.client import MemOSClient - -# Initialize the client using the API Key -client = MemOSClient(api_key="YOUR_API_KEY") -knowledgebase_name = "Financial Reimbursement Knowledge Base" -knowledgebase_description = "A compilation of all knowledge related to the company's financial reimbursements." +### Self-Hosted Server +1. Get the repository. 
+   ```bash
+   git clone https://github.com/MemTensor/MemOS.git
+   cd MemOS
+   pip install -r ./docker/requirements.txt
+   ```
+2. Configure `docker/.env.example` and copy to `MemOS/.env`
+   - The `OPENAI_API_KEY`, `MOS_EMBEDDER_API_KEY`, `MEMRADER_API_KEY`, and others can be obtained from [`BaiLian`](https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2f2165b08fRk4l&tab=api#/api).
+   - Fill in the corresponding configuration in the `MemOS/.env` file.
+3. Start the service.
+
+- Launch via Docker
+  ###### Tips: Please ensure that Docker Compose is installed successfully and that you have navigated to the docker directory (via `cd docker`) before executing the following command.
+  ```bash
+  # Enter docker directory
+  docker compose up
+  ```
+  ##### For more details on deploying with Docker, please refer to the [`Docker Reference`](https://docs.openmem.net/open_source/getting_started/rest_api_server/#method-1-docker-use-repository-dependency-package-imagestart-recommended-use).
+
+- Launch via the uvicorn command line interface (CLI)
+  ###### Tips: Please ensure that Neo4j and Qdrant are running before executing the following command.
+  ```bash
+  uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8001 --workers 1
+  ```
+  ##### For detailed integration steps, see the [`CLI Reference`](https://docs.openmem.net/open_source/getting_started/rest_api_server/#method-3client-install-with-CLI).
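+As a sketch, a minimal `MemOS/.env` for step 2 might look like the following. Values are placeholders; the variable names come from `docker/.env.example`, and which keys are actually required depends on the backends you enable.
+```bash
+# Chat LLM (required when provider=openai)
+OPENAI_API_KEY=sk-xxx
+OPENAI_API_BASE=https://api.openai.com/v1
+# MemReader LLM
+MEMRADER_API_KEY=sk-xxx
+# Embedder (required when backend=universal_api)
+MOS_EMBEDDER_API_KEY=sk-xxx
+# Graph / vector stores
+NEO4J_URI=bolt://localhost:7687
+NEO4J_PASSWORD=12345678
+QDRANT_HOST=localhost
+```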
+ + + +Example + - Add User Message + ```python + import requests + import json + + data = { + "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc", + "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca", + "messages": [ + { + "role": "user", + "content": "I like strawberry" + } + ], + "async_mode": "sync" + } + headers = { + "Content-Type": "application/json" + } + url = "http://localhost:8000/product/add" + + res = requests.post(url=url, headers=headers, data=json.dumps(data)) + print(f"result: {res.json()}") + ``` + - Search User Memory + ```python + import requests + import json + + data = { + "query": "What do I like", + "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc", + "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca" + } + headers = { + "Content-Type": "application/json" + } + url = "http://localhost:8000/product/search" + + res = requests.post(url=url, headers=headers, data=json.dumps(data)) + print(f"result: {res.json()}") + ``` ## 💬 Community & Support diff --git a/docker/.env.example b/docker/.env.example index dc4252133..ee26c7bcd 100644 --- a/docker/.env.example +++ b/docker/.env.example @@ -3,32 +3,31 @@ ## Base TZ=Asia/Shanghai -ENV_NAME=PLAYGROUND_OFFLINE # Tag shown in DingTalk notifications (e.g., PROD_ONLINE/TEST); no runtime effect unless ENABLE_DINGDING_BOT=true MOS_CUBE_PATH=/tmp/data_test # local data path MEMOS_BASE_PATH=. 
# CLI/SDK cache path MOS_ENABLE_DEFAULT_CUBE_CONFIG=true # enable default cube config MOS_ENABLE_REORGANIZE=false # enable memory reorg +# MOS Text Memory Type MOS_TEXT_MEM_TYPE=general_text # general_text | tree_text ASYNC_MODE=sync # async/sync, used in default cube config ## User/session defaults -MOS_USER_ID=root -MOS_SESSION_ID=default_session -MOS_MAX_TURNS_WINDOW=20 +# Top-K for LLM in the Product API(old version) MOS_TOP_K=50 ## Chat LLM (main dialogue) +# LLM model name for the Product API MOS_CHAT_MODEL=gpt-4o-mini +# Temperature for LLM in the Product API MOS_CHAT_TEMPERATURE=0.8 +# Max tokens for LLM in the Product API MOS_MAX_TOKENS=2048 +# Top-P for LLM in the Product API MOS_TOP_P=0.9 +# LLM for the Product API backend MOS_CHAT_MODEL_PROVIDER=openai # openai | huggingface | vllm -MOS_MODEL_SCHEMA=memos.configs.llm.VLLMLLMConfig # vllm only: config class path; keep default unless you extend it OPENAI_API_KEY=sk-xxx # [required] when provider=openai OPENAI_API_BASE=https://api.openai.com/v1 # [required] base for the key -OPENAI_BASE_URL= # compatibility for eval/scheduler -VLLM_API_KEY= # required when provider=vllm -VLLM_API_BASE=http://localhost:8088/v1 # required when provider=vllm ## MemReader / retrieval LLM MEMRADER_MODEL=gpt-4o-mini @@ -37,40 +36,61 @@ MEMRADER_API_BASE=http://localhost:3000/v1 # [required] base for the key MEMRADER_MAX_TOKENS=5000 ## Embedding & rerank +# embedding dim EMBEDDING_DIMENSION=1024 +# set default embedding backend MOS_EMBEDDER_BACKEND=universal_api # universal_api | ollama +# set openai style MOS_EMBEDDER_PROVIDER=openai # required when universal_api +# embedding model MOS_EMBEDDER_MODEL=bge-m3 # siliconflow → use BAAI/bge-m3 +# embedding url MOS_EMBEDDER_API_BASE=http://localhost:8000/v1 # required when universal_api +# embedding model key MOS_EMBEDDER_API_KEY=EMPTY # required when universal_api OLLAMA_API_BASE=http://localhost:11434 # required when backend=ollama +# reranker config MOS_RERANKER_BACKEND=http_bge # 
http_bge | http_bge_strategy | cosine_local
+# reranker url
 MOS_RERANKER_URL=http://localhost:8001           # required when backend=http_bge*
+# reranker model
 MOS_RERANKER_MODEL=bge-reranker-v2-m3            # siliconflow → use BAAI/bge-reranker-v2-m3
 MOS_RERANKER_HEADERS_EXTRA=                      # extra headers, JSON string, e.g. {"Authorization":"Bearer your_token"}
+# reranker strategy
 MOS_RERANKER_STRATEGY=single_turn
-MOS_RERANK_SOURCE=                               # optional rerank scope, e.g., history/stream/custom
 
 # External Services (for evaluation scripts)
+# API key for reproducing the Zep (competitor product) evaluation
 ZEP_API_KEY=your_zep_api_key_here
+# API key for reproducing the Mem0 (competitor product) evaluation
 MEM0_API_KEY=your_mem0_api_key_here
+# API key for reproducing the MemU (competitor product) evaluation
+MEMU_API_KEY=your_memu_api_key_here
+# API key for reproducing the MEMOBASE (competitor product) evaluation
+MEMOBASE_API_KEY=your_memobase_api_key_here
+# Project URL for reproducing the MEMOBASE (competitor product) evaluation
+MEMOBASE_PROJECT_URL=your_memobase_project_url_here
+# LLM for evaluation
 MODEL=gpt-4o-mini
+# embedding model for evaluation
 EMBEDDING_MODEL=nomic-embed-text:latest
+
 ## Internet search & preference memory
+# Enable web search
 ENABLE_INTERNET=false
+# API key for BOCHA Search
 BOCHA_API_KEY=                                   # required if ENABLE_INTERNET=true
-XINYU_API_KEY=
-XINYU_SEARCH_ENGINE_ID=
+# default search mode
 SEARCH_MODE=fast                                 # fast | fine | mixture
-FAST_GRAPH=false
-BM25_CALL=false
-VEC_COT_CALL=false
+# Strategy for the fine (slow) retrieval path; "rewrite" reformulates the query
 FINE_STRATEGY=rewrite                            # rewrite | recreate | deep_search
-ENABLE_ACTIVATION_MEMORY=false
+# Whether to enable preference memory
 ENABLE_PREFERENCE_MEMORY=true
+# Preference Memory Add Mode
 PREFERENCE_ADDER_MODE=fast                       # fast | safe
+# Whether to deduplicate explicit preferences based on factual memory
 DEDUP_PREF_EXP_BY_TEXTUAL=false
 
 ## Reader chunking
@@ -81,66 +101,71 @@ MEM_READER_CHAT_CHUNK_SESS_SIZE=10 # sessions per chunk (default mode)
MEM_READER_CHAT_CHUNK_OVERLAP=2 # overlap between chunks ## Scheduler (MemScheduler / API) +# Enable or disable the main switch for configuring the memory scheduler during MemOS class initialization MOS_ENABLE_SCHEDULER=false +# Determine the number of most relevant memory entries that the scheduler retrieves or processes during runtime (such as reordering or updating working memory) MOS_SCHEDULER_TOP_K=10 +# The time interval (in seconds) for updating "Activation Memory" (usually referring to caching or short-term memory mechanisms) MOS_SCHEDULER_ACT_MEM_UPDATE_INTERVAL=300 +# The size of the context window considered by the scheduler when processing tasks (such as the number of recent messages or conversation rounds) MOS_SCHEDULER_CONTEXT_WINDOW_SIZE=5 +# The maximum number of working threads allowed in the scheduler thread pool for concurrent task execution MOS_SCHEDULER_THREAD_POOL_MAX_WORKERS=10000 +# The polling interval (in seconds) for the scheduler to consume new messages/tasks from the queue. The smaller the value, the faster the response, but the CPU usage may be higher MOS_SCHEDULER_CONSUME_INTERVAL_SECONDS=0.01 +# Whether to enable the parallel distribution function of the scheduler to improve the throughput of concurrent operations MOS_SCHEDULER_ENABLE_PARALLEL_DISPATCH=true +# The specific switch to enable or disable the "Activate Memory" function in the scheduler logic MOS_SCHEDULER_ENABLE_ACTIVATION_MEMORY=false +# Control whether the scheduler instance is actually started during server initialization. If false, the scheduler object may be created but its background loop will not be started API_SCHEDULER_ON=true +# Specifically define the window size for API search operations in OptimizedScheduler. 
It is passed to the SchedulerAPIModule to control the scope of the search context
 API_SEARCH_WINDOW_SIZE=5
+# Specify how many rounds of previous conversation (history) to retrieve and consider during the 'hybrid search' (fast search + asynchronous fine search). This helps provide context-aware search results
 API_SEARCH_HISTORY_TURNS=5
 
 ## Graph / vector stores
+# Neo4j database selection mode
 NEO4J_BACKEND=neo4j-community      # neo4j-community | neo4j | nebular | polardb
+# Neo4j database URL
 NEO4J_URI=bolt://localhost:7687    # required when backend=neo4j*
+# Neo4j database user
 NEO4J_USER=neo4j                   # required when backend=neo4j*
+# Neo4j database password
 NEO4J_PASSWORD=12345678            # required when backend=neo4j*
+# Neo4j database name
 NEO4J_DB_NAME=neo4j                # required for shared-db mode
-MOS_NEO4J_SHARED_DB=true           # if true, all users share one DB; if false, each user gets their own DB
-NEO4J_AUTO_CREATE=false            # [IMPORTANT] set to false for Neo4j Community Edition
-NEO4J_USE_MULTI_DB=false           # alternative to MOS_NEO4J_SHARED_DB (logic is inverse)
+# If true, all users share one Neo4j database; if false, each user gets their own database
+MOS_NEO4J_SHARED_DB=false
 
 QDRANT_HOST=localhost
 QDRANT_PORT=6333
 # For Qdrant Cloud / remote endpoint (takes priority if set):
 QDRANT_URL=your_qdrant_url
 QDRANT_API_KEY=your_qdrant_key
+# Milvus server URI
 MILVUS_URI=http://localhost:19530  # required when ENABLE_PREFERENCE_MEMORY=true
 MILVUS_USER_NAME=root              # same as above
 MILVUS_PASSWORD=12345678           # same as above
-NEBULAR_HOSTS=["localhost"]
-NEBULAR_USER=root
-NEBULAR_PASSWORD=xxxxxx
-NEBULAR_SPACE=shared-tree-textual-memory
-NEBULAR_WORKING_MEMORY=20
-NEBULAR_LONGTERM_MEMORY=1000000
-NEBULAR_USER_MEMORY=1000000
-
-## Relational DB (user manager / PolarDB)
-MOS_USER_MANAGER_BACKEND=sqlite    # sqlite | mysql
-MYSQL_HOST=localhost               # required when backend=mysql
-MYSQL_PORT=3306
-MYSQL_USERNAME=root
-MYSQL_PASSWORD=12345678
-MYSQL_DATABASE=memos_users
-MYSQL_CHARSET=utf8mb4
+
+# PolarDB endpoint/host
 POLAR_DB_HOST=localhost
+# PolarDB port
 POLAR_DB_PORT=5432
+# PolarDB username
 POLAR_DB_USER=root
+# PolarDB password
 POLAR_DB_PASSWORD=123456
+# PolarDB database name
 POLAR_DB_DB_NAME=shared_memos_db
+# PolarDB Server Mode:
+# If set to true, use Multi-Database Mode where each user has their own independent database (physical isolation).
+# If set to false (default), use Shared Database Mode where all users share one database with logical isolation via username.
 POLAR_DB_USE_MULTI_DB=false
+# PolarDB connection pool size
 POLARDB_POOL_MAX_CONN=100
 
-## Redis (scheduler queue) — fill only if you want scheduler queues in Redis; otherwise in-memory queue is used
-REDIS_HOST=localhost               # global Redis endpoint (preferred over MEMSCHEDULER_*)
-REDIS_PORT=6379
-REDIS_DB=0
-REDIS_PASSWORD=
-REDIS_SOCKET_TIMEOUT=
-REDIS_SOCKET_CONNECT_TIMEOUT=
+## Redis configuration
+# Redis message queue used to send scheduling messages and to synchronize some variables
 MEMSCHEDULER_REDIS_HOST=           # fallback keys if not using the global ones
 MEMSCHEDULER_REDIS_PORT=
 MEMSCHEDULER_REDIS_DB=
@@ -148,41 +173,26 @@ MEMSCHEDULER_REDIS_PASSWORD=
 MEMSCHEDULER_REDIS_TIMEOUT=
 MEMSCHEDULER_REDIS_CONNECT_TIMEOUT=
 
-## MemScheduler LLM
-MEMSCHEDULER_OPENAI_API_KEY=       # LLM key for scheduler’s own calls (OpenAI-compatible); leave empty if scheduler not using LLM
-MEMSCHEDULER_OPENAI_BASE_URL=      # Base URL for the above; can reuse OPENAI_API_BASE
-MEMSCHEDULER_OPENAI_DEFAULT_MODEL=gpt-4o-mini
 
 ## Nacos (optional config center)
+# Whether to enable Nacos long-polling watch for config changes (defaults to true)
 NACOS_ENABLE_WATCH=false
+# Long-polling watch interval in seconds (60 here; the default of 30 applies if left unset)
 NACOS_WATCH_INTERVAL=60
+# Nacos server address
 NACOS_SERVER_ADDR=
+# Nacos data id
 NACOS_DATA_ID=
+# Nacos group
 NACOS_GROUP=DEFAULT_GROUP
+# Nacos namespace
 NACOS_NAMESPACE=
+# Nacos access key (AK)
 AK=
+# Nacos secret key (SK)
 SK=
 
-## DingTalk bot & OSS upload
-ENABLE_DINGDING_BOT=false          # set true -> fields below required
-DINGDING_ACCESS_TOKEN_USER= -DINGDING_SECRET_USER= -DINGDING_ACCESS_TOKEN_ERROR= -DINGDING_SECRET_ERROR= -DINGDING_ROBOT_CODE= -DINGDING_APP_KEY= -DINGDING_APP_SECRET= -OSS_ENDPOINT= # bot image upload depends on OSS -OSS_REGION= -OSS_BUCKET_NAME= -OSS_ACCESS_KEY_ID= -OSS_ACCESS_KEY_SECRET= -OSS_PUBLIC_BASE_URL= - -## SDK / external client -MEMOS_API_KEY= -MEMOS_BASE_URL=https://memos.memtensor.cn/api/openmem/v1 - +# chat model for chat api CHAT_MODEL_LIST='[{ "backend": "deepseek", "api_base": "http://localhost:1234", @@ -190,3 +200,16 @@ CHAT_MODEL_LIST='[{ "model_name_or_path": "deepseek-r1", "support_models": ["deepseek-r1"] }]' + +# RabbitMQ host name for message-log pipeline +MEMSCHEDULER_RABBITMQ_HOST_NAME= +# RabbitMQ user name for message-log pipeline +MEMSCHEDULER_RABBITMQ_USER_NAME= +# RabbitMQ password for message-log pipeline +MEMSCHEDULER_RABBITMQ_PASSWORD= +# RabbitMQ virtual host for message-log pipeline +MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST=memos +# Erase connection state on connect for message-log pipeline +MEMSCHEDULER_RABBITMQ_ERASE_ON_CONNECT=true +# RabbitMQ port for message-log pipeline +MEMSCHEDULER_RABBITMQ_PORT=5672 diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml index 0f680505f..0a8e2c634 100644 --- a/docker/docker-compose.yml +++ b/docker/docker-compose.yml @@ -53,7 +53,7 @@ services: - "6333:6333" # REST API - "6334:6334" # gRPC API volumes: - - ./qdrant_data:/qdrant/storage + - qdrant_data:/qdrant/storage environment: QDRANT__SERVICE__GRPC_PORT: 6334 QDRANT__SERVICE__HTTP_PORT: 6333 @@ -64,6 +64,7 @@ services: volumes: neo4j_data: neo4j_logs: + qdrant_data: networks: memos_network: diff --git a/docker/requirements-full.txt b/docker/requirements-full.txt index 57c26067f..be9ed2068 100644 --- a/docker/requirements-full.txt +++ b/docker/requirements-full.txt @@ -159,7 +159,7 @@ tzdata==2025.2 ujson==5.10.0 urllib3==2.5.0 uvicorn==0.35.0 -uvloop==0.21.0 +uvloop==0.22.1; sys_platform != 'win32' volcengine-python-sdk==4.0.6 
watchfiles==1.1.0 websockets==15.0.1 @@ -179,7 +179,7 @@ pathable==0.4.4 pathvalidate==3.3.1 platformdirs==4.5.0 pluggy==1.6.0 -psycopg2-binary==2.9.9 +psycopg2-binary==2.9.11 py-key-value-aio==0.2.8 py-key-value-shared==0.2.8 PyJWT==2.10.1 diff --git a/docker/requirements.txt b/docker/requirements.txt index aa01fa626..f89617c10 100644 --- a/docker/requirements.txt +++ b/docker/requirements.txt @@ -1,21 +1,18 @@ annotated-types==0.7.0 -anyio==4.9.0 -async-timeout==5.0.1 -attrs==25.3.0 -authlib==1.6.0 -beautifulsoup4==4.13.4 -certifi==2025.7.14 -cffi==1.17.1 -charset-normalizer==3.4.2 -chonkie==1.1.1 -click==8.2.1 -cobble==0.1.4 -colorama==0.4.6 -coloredlogs==15.0.1 +anyio==4.11.0 +attrs==25.4.0 +Authlib==1.6.5 +beartype==0.22.5 +cachetools==6.2.2 +certifi==2025.11.12 +cffi==2.0.0 +charset-normalizer==3.4.4 +chonkie==1.1.0 +click==8.3.0 concurrent-log-handler==0.9.28 -cryptography==45.0.5 -cyclopts==3.22.2 -defusedxml==0.7.1 +cryptography==46.0.3 +cyclopts==4.2.3 +diskcache==5.6.3 distro==1.9.0 dnspython==2.8.0 docstring_parser==0.17.0 @@ -29,7 +26,6 @@ fastmcp==2.13.0.2 filelock==3.20.0 fsspec==2025.10.0 grpcio==1.76.0 -neo4j==5.28.1 h11==0.16.0 hf-xet==1.2.0 httpcore==1.0.9 @@ -56,6 +52,7 @@ MarkupSafe==3.0.3 mcp==1.21.1 mdurl==0.1.2 more-itertools==10.8.0 +neo4j==5.28.1 numpy==2.3.4 ollama==0.4.9 openai==1.109.1 @@ -68,10 +65,10 @@ pathvalidate==3.3.1 pika==1.3.2 platformdirs==4.5.0 pluggy==1.6.0 -portalocker==3.2.0 +portalocker==2.8.0 prometheus_client==0.23.1 protobuf==6.33.1 -psycopg2-binary==2.9.9 +psycopg2-binary==2.9.11 py-key-value-aio==0.2.8 py-key-value-shared==0.2.8 pycparser==2.23 @@ -90,7 +87,7 @@ python-dotenv==1.2.1 python-multipart==0.0.20 pytz==2025.2 PyYAML==6.0.3 -qdrant-client +qdrant-client==1.14.3 redis==6.4.0 referencing==0.36.2 regex==2025.11.3 @@ -123,6 +120,6 @@ tzdata==2025.2 ujson==5.11.0 urllib3==2.5.0 uvicorn==0.38.0 -uvloop==0.22.1 +uvloop==0.22.1; sys_platform != 'win32' watchfiles==1.1.1 websockets==15.0.1 diff --git 
a/src/memos/api/config.py b/src/memos/api/config.py index 7298658ff..daf9b6cfe 100644 --- a/src/memos/api/config.py +++ b/src/memos/api/config.py @@ -204,7 +204,7 @@ def init(cls) -> None: sk = os.getenv("SK") if not (server_addr and data_id and ak and sk): - logger.warning("❌ missing NACOS_SERVER_ADDR / AK / SK / DATA_ID") + logger.warning("missing NACOS_SERVER_ADDR / AK / SK / DATA_ID") return base_url = f"http://{server_addr}/nacos/v1/cs/configs" diff --git a/src/memos/api/server_api.py b/src/memos/api/server_api.py index 0dfef99d9..ac9ed8d88 100644 --- a/src/memos/api/server_api.py +++ b/src/memos/api/server_api.py @@ -13,8 +13,8 @@ logger = logging.getLogger(__name__) app = FastAPI( - title="MemOS Product REST APIs", - description="A REST API for managing multiple users with MemOS Product.", + title="MemOS Server REST APIs", + description="A REST API for managing multiple users with MemOS Server.", version="1.0.1", ) diff --git a/src/memos/mem_scheduler/task_schedule_modules/dispatcher.py b/src/memos/mem_scheduler/task_schedule_modules/dispatcher.py index e2c1621d4..cdd491183 100644 --- a/src/memos/mem_scheduler/task_schedule_modules/dispatcher.py +++ b/src/memos/mem_scheduler/task_schedule_modules/dispatcher.py @@ -128,6 +128,7 @@ def status_tracker(self) -> TaskStatusTracker | None: if self._status_tracker is None: try: self._status_tracker = TaskStatusTracker(self.redis) + # Propagate to submodules when created lazily if self.memos_message_queue: self.memos_message_queue.set_status_tracker(self._status_tracker) except Exception as e: diff --git a/src/memos/mem_scheduler/task_schedule_modules/redis_queue.py b/src/memos/mem_scheduler/task_schedule_modules/redis_queue.py index 941c52164..557a45466 100644 --- a/src/memos/mem_scheduler/task_schedule_modules/redis_queue.py +++ b/src/memos/mem_scheduler/task_schedule_modules/redis_queue.py @@ -1216,7 +1216,7 @@ def _update_stream_cache_with_log( self._stream_keys_cache = active_stream_keys 
self._stream_keys_last_refresh = time.time() cache_count = len(self._stream_keys_cache) - logger.info( - f"Refreshed stream keys cache: {cache_count} active keys, " - f"{deleted_count} deleted, {len(candidate_keys)} candidates examined." - ) + logger.info( + f"Refreshed stream keys cache: {cache_count} active keys, " + f"{deleted_count} deleted, {len(candidate_keys)} candidates examined." + ) diff --git a/src/memos/mem_scheduler/utils/status_tracker.py b/src/memos/mem_scheduler/utils/status_tracker.py index 2a995b239..4977cfc3c 100644 --- a/src/memos/mem_scheduler/utils/status_tracker.py +++ b/src/memos/mem_scheduler/utils/status_tracker.py @@ -17,6 +17,9 @@ def __init__(self, redis_client: "redis.Redis | None"): self.redis = redis_client def _get_key(self, user_id: str) -> str: + if not self.redis: + return + return f"memos:task_meta:{user_id}" def _get_task_items_key(self, user_id: str, task_id: str) -> str: diff --git a/src/memos/memories/textual/tree_text_memory/retrieve/searcher.py b/src/memos/memories/textual/tree_text_memory/retrieve/searcher.py index 7e28c174b..3612d37eb 100644 --- a/src/memos/memories/textual/tree_text_memory/retrieve/searcher.py +++ b/src/memos/memories/textual/tree_text_memory/retrieve/searcher.py @@ -290,6 +290,51 @@ def _parse_task( return parsed_goal, query_embedding, context, query + @timed + def _retrieve_simple( + self, + query: str, + top_k: int, + search_filter: dict | None = None, + user_name: str | None = None, + **kwargs, + ): + """Retrieve from by keywords and embedding""" + query_words = [] + if self.tokenizer: + query_words = self.tokenizer.tokenize_mixed(query) + else: + query_words = query.strip().split() + query_words = [query, *query_words] + logger.info(f"[SIMPLESEARCH] Query words: {query_words}") + query_embeddings = self.embedder.embed(query_words) + + items = self.graph_retriever.retrieve_from_mixed( + top_k=top_k * 2, + memory_scope=None, + query_embedding=query_embeddings, + search_filter=search_filter, + 
user_name=user_name, + use_fast_graph=self.use_fast_graph, + ) + logger.info(f"[SIMPLESEARCH] Items count: {len(items)}") + documents = [getattr(item, "memory", "") for item in items] + if not documents: + return [] + documents_embeddings = self.embedder.embed(documents) + similarity_matrix = cosine_similarity_matrix(documents_embeddings) + selected_indices, _ = find_best_unrelated_subgroup(documents, similarity_matrix) + selected_items = [items[i] for i in selected_indices] + logger.info( + f"[SIMPLESEARCH] after unrelated subgroup selection items count: {len(selected_items)}" + ) + return self.reranker.rerank( + query=query, + query_embedding=query_embeddings[0], + graph_results=selected_items, + top_k=top_k, + ) + @timed def _retrieve_paths( self, diff --git a/src/memos/utils.py b/src/memos/utils.py index 4f2666efd..594180e8f 100644 --- a/src/memos/utils.py +++ b/src/memos/utils.py @@ -79,7 +79,6 @@ def wrapper(*args, **kwargs): status = "SUCCESS" if success_flag else "FAILED" status_info = f", status: {status}" - if not success_flag and exc_type is not None: status_info += ( f", error_type: {exc_type.__name__}, error_message: {exc_message}" @@ -88,6 +87,7 @@ def wrapper(*args, **kwargs): msg = ( f"[TIMER_WITH_STATUS] {log_prefix or fn.__name__} " f"took {elapsed_ms:.0f} ms{status_info}, args: {ctx_str}" + f", result: {result}" ) logger.info(msg)
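The `_retrieve_simple` path added to `searcher.py` above embeds the query plus its tokens, pulls roughly 2x `top_k` candidates, drops near-duplicate documents via a cosine-similarity matrix, and reranks the survivors. A self-contained sketch of that dedup step follows; the greedy `greedy_unrelated_subgroup` helper and the 0.9 threshold are illustrative stand-ins for MemOS's `find_best_unrelated_subgroup`, not the actual implementation.

```python
import numpy as np

def cosine_similarity_matrix(embs: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    norms = np.linalg.norm(embs, axis=1, keepdims=True)
    unit = embs / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def greedy_unrelated_subgroup(sim: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Greedily keep a candidate only if it is dissimilar to everything kept so far."""
    kept: list[int] = []
    for i in range(sim.shape[0]):
        if all(sim[i, j] < threshold for j in kept):
            kept.append(i)
    return kept

# Toy "embeddings": docs 0 and 1 are near-duplicates, doc 2 is distinct.
embs = np.array([
    [1.0, 0.0, 0.0],
    [0.99, 0.01, 0.0],
    [0.0, 1.0, 0.0],
])
sim = cosine_similarity_matrix(embs)
selected = greedy_unrelated_subgroup(sim)
print(selected)  # [0, 2] -> the near-duplicate second doc is dropped before reranking
```

In the real code path the surviving items are then passed to `self.reranker.rerank(...)` with the original query embedding, so the dedup only prunes the candidate pool rather than producing the final order.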