NaiveTextMemory: Simple Plain Text Memory

The most lightweight memory module in MemOS, designed for rapid prototyping and simple scenarios. No vector database is required; memories are retrieved quickly using keyword matching.

Let's start using the MemOS memory system in the simplest way possible!

NaiveTextMemory is a lightweight, in-memory plain text memory module that stores memories in a memory list and uses keyword matching for retrieval. It's the best starting point for learning MemOS and is suitable for demos, testing, and small-scale applications.

What You'll Learn

By the end of this guide, you will be able to:

  • Automatically extract structured memories from conversations using an LLM
  • Store and manage memories in memory (no database required)
  • Search memories using keyword matching
  • Persist and restore memory data
  • Understand when to use NaiveTextMemory and when to upgrade to other modules

Why Choose NaiveTextMemory

Key Advantages

  • Zero Dependencies: No vector database or embedding model required
  • Fast Startup: Up and running in just a few lines of code
  • Lightweight & Efficient: Low resource footprint, fast execution
  • Simple & Intuitive: Keyword matching with predictable results
  • Easy to Debug: All memories in memory, easy to inspect
  • Perfect Starting Point: The best entry point for learning MemOS

Suitable Scenarios

  • Rapid prototyping and proof of concept
  • Simple conversational agents (< 1000 memories)
  • Testing and demo scenarios
  • Resource-constrained environments (cannot run embedding models)
  • Keyword search scenarios (queries directly match memories)

Performance Tip
When memory count exceeds 1000, it's recommended to upgrade to GeneralTextMemory, which uses vector search for better performance.

Core Concepts

Memory Structure

Each memory is represented as a TextualMemoryItem object with the following fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| id | str | No | Unique identifier (auto-generated UUID) |
| memory | str | Yes | Main text content of the memory |
| metadata | TextualMemoryMetadata | No | Metadata (for categorization, filtering, and retrieval) |

Metadata Fields (TextualMemoryMetadata)

Metadata provides rich contextual information for categorization, filtering, and organizing memories:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| type | "procedure" / "fact" / "event" / "opinion" | "fact" | Memory type classification |
| memory_time | str (YYYY-MM-DD) | Current date | Time associated with the memory |
| source | "conversation" / "retrieved" / "web" / "file" | - | Source of the memory |
| confidence | float (0-100) | 80.0 | Certainty/confidence score |
| entities | list[str] | [] | Mentioned entities or concepts |
| tags | list[str] | [] | Topic tags |
| visibility | "private" / "public" / "session" | "private" | Access control scope |
| updated_at | str | Auto-generated | Last update timestamp (ISO 8601) |
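To make the field layout concrete, here is an illustrative record built as a plain Python dict that mirrors the two tables above. This is only a sketch: in MemOS these are TextualMemoryItem / TextualMemoryMetadata objects that validate the fields for you, and all values below are made up.

```python
import uuid
from datetime import date, datetime, timezone

# Illustrative sketch: a plain dict mirroring the documented field layout.
memory_item = {
    "id": str(uuid.uuid4()),                      # auto-generated UUID
    "memory": "User loves tomatoes.",             # main text content
    "metadata": {
        "type": "opinion",                        # default would be "fact"
        "memory_time": date.today().isoformat(),  # defaults to the current date
        "source": "conversation",
        "confidence": 95.0,                       # default would be 80.0
        "entities": ["user", "tomatoes"],
        "tags": ["food", "preference"],
        "visibility": "private",                  # default
        "updated_at": datetime.now(timezone.utc).isoformat(),
    },
}

print(sorted(memory_item["metadata"].keys()))
```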

API Reference

Initialization

from memos.memories.textual.naive import NaiveTextMemory
from memos.configs.memory import NaiveTextMemoryConfig

config = NaiveTextMemoryConfig(...)  # see Configuration Parameters below
memory = NaiveTextMemory(config)

Core Methods

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| extract(messages) | messages: list[dict] | list[TextualMemoryItem] | Extract structured memories from conversation using LLM |
| add(memories) | memories: list / dict / TextualMemoryItem | None | Add one or more memories |
| search(query, top_k) | query: str, top_k: int | list[TextualMemoryItem] | Retrieve top-k memories using keyword matching |
| get(memory_id) | memory_id: str | TextualMemoryItem | Get a single memory by ID |
| get_by_ids(ids) | ids: list[str] | list[TextualMemoryItem] | Batch retrieve memories by ID list |
| get_all() | - | list[TextualMemoryItem] | Return all memories |
| update(memory_id, new) | memory_id: str, new: dict | None | Update content or metadata of specified memory |
| delete(ids) | ids: list[str] | None | Delete one or more memories |
| delete_all() | - | None | Clear all memories |
| dump(dir) | dir: str | None | Serialize memories to JSON file |
| load(dir) | dir: str | None | Load memories from JSON file |

Search Mechanism

Unlike GeneralTextMemory's vector semantic search, NaiveTextMemory uses a keyword matching algorithm:

Step 1: Tokenization

Break down the query and each memory content into lists of tokens

Step 2: Calculate Match Score

Count the number of overlapping tokens between query and memory

Step 3: Sort

Sort all memories by match count in descending order

Step 4: Return Results

Return the top-k memories as search results
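The four steps above can be sketched in self-contained Python. This is a simplification for illustration: the helper names are made up, and the real NaiveTextMemory operates on TextualMemoryItem objects rather than raw strings, but the scoring idea is the same.

```python
import re

def tokenize(text: str) -> set[str]:
    # Step 1: break text into a set of lowercase word tokens
    return set(re.findall(r"\w+", text.lower()))

def naive_search(query: str, memories: list[str], top_k: int) -> list[str]:
    query_tokens = tokenize(query)
    # Step 2: score each memory by the number of overlapping tokens
    scored = [(len(query_tokens & tokenize(mem)), mem) for mem in memories]
    # Step 3: sort by match count, descending (Python's sort is stable)
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Step 4: return the top-k memories
    return [mem for _, mem in scored[:top_k]]

memories = [
    "User loves tomatoes.",
    "User prefers dark mode.",
    "Tomatoes are rich in vitamin C.",
]
print(naive_search("tomatoes", memories, top_k=2))
# → ['User loves tomatoes.', 'Tomatoes are rich in vitamin C.']
```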

Algorithm Comparison

| Feature | Keyword Matching (NaiveTextMemory) | Vector Semantic Search (GeneralTextMemory) |
| --- | --- | --- |
| Semantic Understanding | ❌ Doesn't understand synonyms | ✅ Understands similar concepts |
| Resource Usage | ✅ Extremely low | ⚠️ Requires embedding model and vector DB |
| Execution Speed | ✅ Fast (O(n)) | ⚠️ Slower (indexing + querying) |
| Suitable Scale | < 1K memories | 10K - 100K memories |
| Predictability | ✅ Intuitive results | ⚠️ Black-box model |

Example Comparison
Query: "cat"

  • Keyword Matching: Only matches memories containing "cat"
  • Semantic Search: Also matches memories about "pet", "kitten", "feline", etc.

Configuration Parameters

NaiveTextMemoryConfig

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| extractor_llm | LLMConfigFactory | Yes | - | LLM configuration for extracting memories from conversations |
| memory_filename | str | No | textual_memory.json | Filename for persistent storage |

Configuration Example

{
  "backend": "naive_text",
  "config": {
    "extractor_llm": {
      "backend": "openai",
      "config": {
        "model_name_or_path": "gpt-4o-mini",
        "temperature": 0.8,
        "max_tokens": 1024,
        "api_base": "xxx",
        "api_key": "sk-xxx"
      }
    },
    "memory_filename": "my_memories.json"
  }
}

Hands-On Practice

Quick Start

Get started with NaiveTextMemory in just 3 steps:

Step 1: Create Configuration

from memos.configs.memory import MemoryConfigFactory

config = MemoryConfigFactory(
    backend="naive_text",
    config={
        "extractor_llm": {
            "backend": "openai",
            "config": {
                "model_name_or_path": "gpt-4o-mini",
                "api_key": "your-api-key",
                "api_base": "your-api-base"
            },
        },
    },
)

Step 2: Initialize Memory Module

from memos.memories.factory import MemoryFactory

memory = MemoryFactory.from_config(config)

Step 3: Extract and Add Memories

# Automatically extract memories from conversation
memories = memory.extract([
    {"role": "user", "content": "I love tomatoes."},
    {"role": "assistant", "content": "Great! Tomatoes are delicious."},
])

# Add to memory store
memory.add(memories)
print(f"✓ Added {len(memories)} memories")

Advanced: Using MultiModal Reader
If you need to process multimodal content such as images, URLs, or files, use MultiModalStructMemReader.
View complete example: Using MultiModalStructMemReader (Advanced)

Complete Example

Here's a complete end-to-end example demonstrating all core functionality:

from memos.configs.memory import MemoryConfigFactory
from memos.memories.factory import MemoryFactory

# ========================================
# 1. Initialization
# ========================================
config = MemoryConfigFactory(
    backend="naive_text",
    config={
        "extractor_llm": {
            "backend": "openai",
            "config": {
                "model_name_or_path": "gpt-4o-mini",
                "api_key": "your-api-key",
            },
        },
    },
)
memory = MemoryFactory.from_config(config)

# ========================================
# 2. Extract and Add Memories
# ========================================
memories = memory.extract([
    {"role": "user", "content": "I love tomatoes."},
    {"role": "assistant", "content": "Great! Tomatoes are delicious."},
])
memory.add(memories)
print(f"✓ Added {len(memories)} memories")

# ========================================
# 3. Search Memories
# ========================================
results = memory.search("tomatoes", top_k=2)
print(f"\n🔍 Found {len(results)} relevant memories:")
for i, item in enumerate(results, 1):
    print(f"  {i}. {item.memory}")

# ========================================
# 4. Get All Memories
# ========================================
all_memories = memory.get_all()
print(f"\n📊 Total {len(all_memories)} memories")

# ========================================
# 5. Update Memory
# ========================================
if memories:
    memory_id = memories[0].id
    memory.update(
        memory_id, 
        {
            "memory": "User loves tomatoes.",
            "metadata": {"type": "opinion", "confidence": 95.0}
        }
    )
    print(f"\n✓ Updated memory: {memory_id}")

# ========================================
# 6. Persist to Storage
# ========================================
memory.dump("tmp/mem")
print("\n💾 Memories saved to tmp/mem/textual_memory.json")

# ========================================
# 7. Load Memories
# ========================================
memory.load("tmp/mem")
print("✓ Memories loaded from file")

# ========================================
# 8. Delete Memories
# ========================================
if memories:
    memory.delete([memories[0].id])
    print("\n🗑️ Deleted 1 memory")

# Delete all memories
# memory.delete_all()

Extension: Internet Retrieval
NaiveTextMemory focuses on local memory management. For retrieving information from the internet and adding it to your memory store, see:
Retrieve Memories from the Internet (Optional)

File Storage

When calling dump(dir), the system saves memories to:

<dir>/<config.memory_filename>

Default File Structure

[
  {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "memory": "User loves tomatoes.",
    "metadata": {
      "type": "opinion",
      "confidence": 95.0,
      "entities": ["user", "tomatoes"],
      "tags": ["food", "preference"],
      "updated_at": "2026-01-14T10:30:00Z"
    }
  },
  ...
]

Use load(dir) to fully restore all memory data.
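As a rough sketch of what dump() and load() amount to with the file layout above, the round trip can be reproduced with the stdlib json module. The helper names below are hypothetical, not the MemOS implementation.

```python
import json
import os
import tempfile

MEMORY_FILENAME = "textual_memory.json"  # matches the default memory_filename

def dump_memories(memories: list[dict], dir: str) -> str:
    # Equivalent in spirit to memory.dump(dir): write <dir>/<memory_filename>
    os.makedirs(dir, exist_ok=True)
    path = os.path.join(dir, MEMORY_FILENAME)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(memories, f, ensure_ascii=False, indent=2)
    return path

def load_memories(dir: str) -> list[dict]:
    # Equivalent in spirit to memory.load(dir): read the same file back
    with open(os.path.join(dir, MEMORY_FILENAME), encoding="utf-8") as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as tmp:
    saved = [{"id": "demo-1", "memory": "User loves tomatoes.",
              "metadata": {"type": "opinion"}}]
    dump_memories(saved, tmp)
    restored = load_memories(tmp)
    print(restored == saved)  # → True
```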

Important Note
Memories are stored in memory and will be lost after process restart. Remember to call dump() regularly to save data!

Use Case Guide

Best Suited For

  • Rapid Prototyping: No need to configure vector databases, get started in minutes
  • Simple Conversational Agents: Small-scale applications with < 1000 memories
  • Testing and Demos: Quickly validate memory extraction and retrieval logic
  • Resource-Constrained Environments: Scenarios where embedding models or vector databases cannot run
  • Keyword Search: Scenarios where query content directly matches memory text
  • Learning and Teaching: The best starting point for understanding the MemOS memory system

Not Suited For

  • Large-Scale Applications: More than 10,000 memories (search performance degrades)
  • Semantic Search Needs: Need to understand synonyms (e.g., "cat" and "pet")
  • Production Environments: Strict performance and accuracy requirements
  • Multilingual Scenarios: Need cross-language semantic understanding
  • Complex Relationship Reasoning: Need to understand relationships between memories

Upgrade Path
For the scenarios not recommended above, consider upgrading to:

  • GeneralTextMemory - Vector semantic search, suitable for 10K-100K memories
  • TreeTextMemory - Graph structure storage, supports relationship reasoning and multi-hop queries

Comparison with Other Memory Modules

Choosing the right memory module is crucial for project success. This comparison helps you make the decision:

| Feature | NaiveTextMemory | GeneralTextMemory | TreeTextMemory |
| --- | --- | --- | --- |
| Search Method | Keyword matching | Vector semantic search | Graph structure + vector search |
| Dependencies | LLM only | LLM + Embedder + Vector DB | LLM + Embedder + Graph DB |
| Suitable Scale | < 1K | 1K - 100K | 10K - 1M |
| Query Complexity | O(n) linear scan | O(log n) approximate NN | O(log n) + graph traversal |
| Semantic Understanding | ❌ | ✅ | ✅ |
| Relationship Reasoning | ❌ | ❌ | ✅ |
| Multi-Hop Queries | ❌ | ❌ | ✅ |
| Storage Backend | In-memory list | Vector DB (Qdrant, etc.) | Graph DB (Neo4j/PolarDB) |
| Configuration Complexity | Low ⭐ | Medium ⭐⭐ | High ⭐⭐⭐ |
| Learning Curve | Minimal | Moderate | Steep |
| Production Ready | ❌ Prototype/demo only | ✅ Suitable for most cases | ✅ Suitable for complex apps |

Selection Guide

  • Just getting started? → Start with NaiveTextMemory
  • Need semantic search? → Use GeneralTextMemory
  • Need relationship reasoning? → Choose TreeTextMemory

Best Practices

Follow these recommendations to make the most of NaiveTextMemory:

1. Persist Data Regularly

# Save immediately after critical operations
memory.add(new_memories)
memory.dump("tmp/mem")  # ✓ Persist immediately

# Regular automatic backups (requires the third-party "schedule" package)
import schedule
schedule.every(10).minutes.do(lambda: memory.dump("tmp/mem"))
# Note: jobs only fire when schedule.run_pending() is called in your main loop

2. Control Memory Scale

# Regularly clean out the oldest memories
all_memories = memory.get_all()
if len(all_memories) > 1000:
    old_memories = sorted(
        all_memories,
        key=lambda m: m.metadata.updated_at,
    )[:100]  # the 100 oldest

    memory.delete([m.id for m in old_memories])
    print("✓ Cleaned 100 old memories")

3. Optimize Search Queries

# ❌ Poor: Vague query
results = memory.search("thing", top_k=5)

# ✅ Good: Use specific keywords
results = memory.search("tomato", top_k=5)

4. Use Metadata Wisely

# Set clear metadata when adding memories
memory.add({
    "memory": "User prefers dark mode",
    "metadata": {
        "type": "opinion",          # ✓ Clear classification
        "tags": ["UI", "preference"],  # ✓ Easy filtering
        "confidence": 90.0,         # ✓ Mark confidence
        "entities": ["user", "dark mode"]  # ✓ Entity annotation
    }
})

5. Plan Upgrade Path

# Monitor memory count and upgrade timely
memory_count = len(memory.get_all())
if memory_count > 800:
    print("⚠️ Memory count approaching limit, consider upgrading to GeneralTextMemory")
    # Migration code reference:
    # 1. Export existing memories: memory.dump("backup")
    # 2. Create GeneralTextMemory configuration
    # 3. Import memories to new module

Next Steps

Congratulations! You've mastered the core usage of NaiveTextMemory. Next, you can:

  • Upgrade to Vector Search: Learn about GeneralTextMemory's semantic retrieval capabilities
  • Explore Graph Structure: Understand TreeTextMemory's relationship reasoning features
  • Integrate into Applications: Check Complete API Documentation to build production-grade applications
  • Run Example Code: Browse the /examples/ directory for more practical cases
  • Learn Graph Databases: If you need advanced features, learn about Neo4j or PolarDB

Tip
NaiveTextMemory is the perfect starting point for learning MemOS. When your application needs more powerful features, you can seamlessly migrate to other memory modules!