MemOS NEO Version
MOS.simple() is the fastest way to start building memory-enhanced applications.
Quick Setup
Environment Variables
Set your API credentials:
export OPENAI_API_KEY="sk-your-api-key-here"
export OPENAI_API_BASE="https://api.openai.com/v1" # Optional
export MOS_TEXT_MEM_TYPE="general_text"  # or "tree_text" for advanced usage
# Tip: general_text supports only a single user when initializing MOS
One-Line Setup
from memos.mem_os.main import MOS
# Auto-configured instance
memory = MOS.simple()
MOS.simple() uses the default embedding model text-embedding-3-large with an embedding dimension of 3072. If you have used an earlier version of MemOS, delete the ~/.memos directory so a fresh Qdrant store is created, or drop the Neo4j database.
Basic Usage
#!/usr/bin/env python3
import os
from memos.mem_os.main import MOS
# Set environment variables
os.environ["OPENAI_API_KEY"] = "sk-your-api-key"
os.environ["MOS_TEXT_MEM_TYPE"] = "general_text"
# Create memory system
memory = MOS.simple()
# Add memories
memory.add("My favorite color is blue")
memory.add("I work as a software engineer")
memory.add("I live in San Francisco")
# Chat with memory context
response = memory.chat("What is user favorite color?")
print(response) # "favorite color is blue!"
response = memory.chat("Tell me about user job and location")
print(response) # Uses stored memories to respond
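Because general_text stores data locally under ~/.memos (see Memory Types below), memories added in one run are available in later runs. The following sketch assumes that persistence and that MOS.simple() reconnects to the same local store:
import os
from memos.mem_os.main import MOS

os.environ["OPENAI_API_KEY"] = "sk-your-api-key"
os.environ["MOS_TEXT_MEM_TYPE"] = "general_text"

# A new process picks up the memories stored by the previous script
memory = MOS.simple()
print(memory.chat("Where does the user live?"))  # e.g. "San Francisco"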
Memory Types
General Text Memory (Recommended for Beginners)
- Storage: Local JSON files + Qdrant vector database
- Setup: No external dependencies
- Best for: Most use cases, quick prototyping
export MOS_TEXT_MEM_TYPE="general_text"
Tree Text Memory (Advanced)
- Storage: Neo4j graph database
- Setup: Requires Neo4j server
- Best for: Complex relationship reasoning
export MOS_TEXT_MEM_TYPE="tree_text"
export NEO4J_URI="bolt://localhost:7687" # Optional
export NEO4J_PASSWORD="your-password" # Optional
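A minimal sketch of starting in tree_text mode, assuming a Neo4j server is already running with the credentials shown above:
import os
from memos.mem_os.main import MOS

# Point MemOS at the Neo4j server before creating the instance
os.environ["OPENAI_API_KEY"] = "sk-your-api-key"
os.environ["MOS_TEXT_MEM_TYPE"] = "tree_text"
os.environ["NEO4J_URI"] = "bolt://localhost:7687"
os.environ["NEO4J_PASSWORD"] = "your-password"

memory = MOS.simple()
memory.add("Alice works with Bob on the MemOS project")
print(memory.chat("Who does Alice work with?"))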
NEO Version Overview
MOS.simple() automatically creates a complete configuration using sensible defaults:
Default Settings
- LLM: GPT-4o-mini with temperature 0.8
- Embedder: OpenAI text-embedding-3-large
- Chunking: 512 tokens with 128 overlap
- Graph DB: Neo4j (used when the memory type is tree_text)
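As an illustration, the sketch below spells out these defaults as explicit get_default_config arguments; it should produce roughly the same configuration that MOS.simple() builds for you (the chunk overlap is not exposed in the examples in this guide, so it is omitted here):
from memos.mem_os.utils.default_config import get_default_config

# Equivalent to the defaults listed above, written out explicitly
config = get_default_config(
    openai_api_key="sk-your-key",
    text_mem_type="general_text",
    model_name="gpt-4o-mini",  # default LLM
    temperature=0.8,           # default temperature
    chunk_size=512             # default chunk size in tokens
)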
Default Configuration Utilities
MemOS provides three main configuration utilities in default_config.py:
- get_default_config(): Creates a complete MOS configuration with sensible defaults
- get_default_cube_config(): Creates a MemCube configuration for memory storage
- get_default(): Returns both the MOS config and a MemCube instance together
from memos.mem_os.utils.default_config import get_default, get_default_cube_config
# Get both MOS config and MemCube instance
mos_config, default_cube = get_default(
    openai_api_key="sk-your-key",
    text_mem_type="general_text"
)
# Or create just MemCube config
cube_config = get_default_cube_config(
    openai_api_key="sk-your-key",
    text_mem_type="general_text"
)
Manual Configuration (Optional)
If you need more control, use the configuration utilities:
from memos.mem_os.main import MOS
from memos.mem_os.utils.default_config import get_default_config
# Custom configuration
config = get_default_config(
    openai_api_key="sk-your-key",
    text_mem_type="general_text",
    user_id="my_user",
    model_name="gpt-4",  # Different model
    temperature=0.5,     # Lower creativity
    chunk_size=256,      # Smaller chunks
    top_k=10             # More search results
)
memory = MOS(config)
Advanced Features
Enable additional capabilities:
config = get_default_config(
    openai_api_key="sk-your-key",
    enable_activation_memory=True,  # KV-cache memory
    enable_mem_scheduler=True,      # Background processing
)
Other Tips
- Start Simple: Use the general_text memory type initially
- Environment Setup: Keep API keys in environment variables
- Memory Quality: Add specific, factual information for best results
- Batch Operations: Add multiple related memories together
- User Context: Use the user_id parameter for multi-user scenarios (supported only with tree_text); see the sketch below
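A sketch of the multi-user pattern in tree_text mode; the user_id keyword below follows the tip above, and whether users must be created beforehand depends on your MemOS version:
# Multi-user usage is only supported with tree_text memory (see the tip above);
# passing user_id per call is an assumption about the exact signature.
memory.add("Alice prefers tea over coffee", user_id="alice")
memory.add("Bob is learning Rust", user_id="bob")

print(memory.chat("What does Alice like to drink?", user_id="alice"))
print(memory.chat("What is Bob learning?", user_id="bob"))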
Troubleshooting
Common Issues
Missing API Key Error:
# Ensure environment variable is set
echo $OPENAI_API_KEY
Neo4j Connection Error (tree_text mode):
# Verify that Neo4j is running (Neo4j Desktop for local use, or an enterprise Neo4j server)
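One way to verify connectivity independently of MemOS is the official Neo4j Python driver; the URI and password below mirror the environment variables shown earlier:
from neo4j import GraphDatabase

# Raises an exception if the server is unreachable or the credentials are wrong
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "your-password"))
driver.verify_connectivity()
driver.close()
print("Neo4j connection OK")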
Overview
The MOS (Memory Operating System) is a core component of MemOS that acts as an orchestration layer, managing multiple memory modules (MemCubes) and providing a unified interface for memory-augmented applications.
MemOS MCP
The Model Context Protocol (MCP) is a standard protocol that enables AI assistants to securely access and interact with local and remote resources. In the MemOS project, MCP provides a standardized interface for memory operations, allowing external applications to interact with the memory system through well-defined tools and resources.
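As a rough illustration of that idea, the sketch below exposes two memory operations as MCP tools using the MCP Python SDK's FastMCP helper; the tool names and the wiring to MOS.simple() are assumptions for illustration, not MemOS's actual MCP server implementation:
from mcp.server.fastmcp import FastMCP
from memos.mem_os.main import MOS

mcp = FastMCP("memos-demo")  # hypothetical server name
memory = MOS.simple()

@mcp.tool()
def add_memory(text: str) -> str:
    """Store a piece of text in the memory system."""
    memory.add(text)
    return "stored"

@mcp.tool()
def chat_with_memory(query: str) -> str:
    """Answer a query using stored memories as context."""
    return memory.chat(query)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default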