# Use Case 2: Monitor AI Agent Memory Drift

## What It Is

AI agents with memory systems store user interactions, preferences, and context as embeddings. Over time:

* **Memory grows** → Agent accumulates thousands of memories
* **Old memories interfere** → Outdated information conflicts with new data
* **Retrieval slows** → Searching through too many memories degrades performance
* **Accuracy drops** → Agent retrieves wrong or conflicting memories

**The Problem:** You can't see when memory degrades, which memories are redundant, or when to archive old data. You're flying blind until users complain.

***

## How It Solves the Problem

VectorBoard visualizes your agent's memory space, making drift and redundancy visible:

1. **Real-Time Memory Visualization**
   * Watch your agent's memory evolve as new memories are added
   * See how memories cluster by topic, user, or time period
2. **Detect Memory Clusters**
   * Identify if memories about similar topics cluster together (good)
   * Spot redundant memories (many memories about the same thing)
   * Find dead zones (memories never retrieved)
3. **Time-Series Analysis**
   * Color-code by timestamp to see memory evolution
   * Detect if old and new memories are drifting apart
   * Identify when to update embeddings or archive old data

**Impact:** Prevent memory degradation before users notice

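The "drifting apart" check can also be quantified outside the dashboard. A minimal sketch, assuming you can pull old and new memory embeddings as arrays (the `centroid_drift` helper and the toy 2-D vectors below are illustrative, not part of VectorBoard's API):

```python
import numpy as np

def centroid_drift(old_vecs: np.ndarray, new_vecs: np.ndarray) -> float:
    """Cosine distance between the mean embeddings of two memory windows.

    0.0 means the windows point the same way; values approaching 1.0
    (or above, for opposing vectors) suggest old and new memories
    have drifted apart.
    """
    old_c = old_vecs.mean(axis=0)
    new_c = new_vecs.mean(axis=0)
    cos = np.dot(old_c, new_c) / (np.linalg.norm(old_c) * np.linalg.norm(new_c))
    return 1.0 - float(cos)

# Toy 2-D "embeddings": month-1 memories vs. month-3 memories
month_1 = np.array([[1.0, 0.0], [0.9, 0.1]])
month_3 = np.array([[0.0, 1.0], [0.1, 0.9]])
print(f"drift: {centroid_drift(month_1, month_3):.3f}")  # high score → investigate
```

Tracked weekly, a drift score like this gives you a single number to alert on before clusters visibly separate in the dashboard.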
***

## Example Scenario

### The Situation

Your customer support AI agent has been running for 3 months:

```
50,000 memories stored:
- User preferences (8,000)
- Common questions (35,000)
- Edge cases (5,000)
- System errors (2,000)
```

**Problems:**

* Agent responses getting slower (3+ seconds instead of <1 second)
* Agent confusing customers (retrieving old, conflicting memories)
* No idea which memories to archive
* Manual SQL queries + Excel → can't visualize memory relationships

### The Solution (With VectorBoard)

1. **Push all 50K memories to VectorBoard:**

   ```python
   # See "How to Use" section below for full code
   ```
2. **Visualize and analyze:**
   * Color-code by `timestamp` → see that 80% of memories date from the first month (common questions)
   * Color-code by `type` → identify redundant memories
   * Spot dead zones → memories never retrieved (safe to archive)
3. **Take action:**
   * Archive 35% redundant memories (old, duplicate common questions)
   * Keep 5% outliers (important edge cases)
   * Update stale embeddings for frequently accessed memories

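The archiving pass above can be scripted once you know which memories go unused. A sketch assuming each record carries a `last_retrieved_at` timestamp (a hypothetical field — substitute whatever retrieval bookkeeping your storage actually keeps):

```python
from datetime import datetime, timedelta

def stale_memory_ids(memories: list[dict], now: datetime,
                     max_age_days: int = 60) -> list[str]:
    """Return IDs of memories not retrieved within the last `max_age_days`."""
    cutoff = now - timedelta(days=max_age_days)
    return [
        m["id"] for m in memories
        if datetime.fromisoformat(m["last_retrieved_at"]) < cutoff
    ]

# `last_retrieved_at` is an assumed field, shown here for illustration
memories = [
    {"id": "mem_1", "last_retrieved_at": "2024-01-05T09:00:00"},  # stale
    {"id": "mem_2", "last_retrieved_at": "2024-03-20T14:00:00"},  # recent
]
print(stale_memory_ids(memories, now=datetime(2024, 3, 25)))  # → ['mem_1']
```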
**Result:**

* Agent back to <1 second response time
* No more confusion (removed conflicting memories)
* Clear visibility into memory health

***

## How to Use

### Prerequisites

* VectorBoard running:
  * **Local:** `docker compose up -d`
  * **NodeOps:** Deploy from NodeOps marketplace (you'll get a deployment URL)
* Access to your agent's memory storage system
* Python 3.10+ with `httpx` and `sentence-transformers`

### Step-by-Step Instructions

#### Step 1: Export Your Agent Memories

Create a script `export_memories.py` to get memories from your storage:

```python
# This is pseudocode - adapt to your memory storage system

# Option A: From a SQL database
import your_db_connector
db = your_db_connector.connect()  # connection details depend on your setup
memories = db.query("SELECT * FROM agent_memories")

# Option B: From vector database
import your_vector_db
memories = your_vector_db.get_all_memories()

# Option C: From memory system API
import requests
memories = requests.get("http://your-agent-api/memories").json()
```

#### Step 2: Push to VectorBoard

Create `push_memories.py`:

```python
import httpx
from sentence_transformers import SentenceTransformer
from datetime import datetime

# 1. Initialize embedding model (SAME as your agent uses)
model = SentenceTransformer('all-MiniLM-L6-v2')  # Replace with your model

# 2. Connect to VectorBoard
# For local deployment:
VECTORBOARD_URL = "http://localhost:8501"
# For NodeOps deployment, use your deployment URL:
# VECTORBOARD_URL = "https://your-deployment.nodeops.network"
client = httpx.Client(timeout=30.0)

# 3. Load your memories (replace with your actual export code)
memories = [
    {
        "id": "mem_1",
        "text": "User prefers concise, technical explanations",
        "created_at": "2024-01-15T10:30:00",
        "type": "preference",
        "user_id": "user_123"
    },
    # ... more memories from your export
]

print(f"Pushing {len(memories)} memories...")

for i, memory in enumerate(memories):
    # If you already have embeddings, use them; otherwise generate
    embedding = memory.get('embedding') or model.encode(memory['text']).tolist()
    
    # Prepare metadata
    metadata = {
        "timestamp": memory.get('created_at', datetime.now().isoformat()),
        "type": memory.get('type', 'memory'),
        "user_id": memory.get('user_id', 'unknown')
    }
    # Add any other metadata fields you have
    
    # Push to VectorBoard
    response = client.post(
        f"{VECTORBOARD_URL}/api/embeddings",
        json={
            "id": f"mem_{memory.get('id', i)}",
            "vector": embedding,
            "text": memory['text'],
            "metadata": metadata,
            "collection": "agent_memory"
        }
    )
    
    if response.status_code == 200:
        if (i + 1) % 1000 == 0:
            print(f"✅ {i+1}/{len(memories)}")
    else:
        print(f"❌ Failed {i+1}: {response.status_code}")

print(f"\n🎉 Done! View at {VECTORBOARD_URL}")
```

#### Step 3: Run the Script

```bash
python push_memories.py
```

**Note:** For large datasets (10K+ memories), this may take several minutes. Consider batching or using the batch endpoint.

#### Step 4: Visualize in VectorBoard

1. **Open dashboard:**
   * Local: <http://localhost:8501>
   * NodeOps: <https://your-deployment.nodeops.network>
2. **Select collection:** `agent_memory`
3. **Visualize with time-based coloring:**
   * Choose visualization method (UMAP recommended)
   * Color-code by `timestamp` metadata
   * See how memories cluster over time
4. **Analyze memory evolution:**
   * **Tight clusters by timestamp** → Memories from same period cluster together
   * **Drifting clusters** → Old and new memories separating (may need cleanup)
   * **Isolated points** → Dead zones, memories never retrieved
5. **Color-code by type:**
   * Switch to `type` metadata
   * See if preference memories cluster together
   * Identify if different types overlap (potential conflicts)
6. **Identify redundant memories:**
   * Look for very tight clusters (many memories about the same thing)
   * These are candidates for archiving

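Tight clusters can also be confirmed programmatically: pairwise cosine similarity above a high threshold flags near-duplicate memories. A minimal sketch (the threshold and toy vectors are illustrative — tune against your own data, and note the O(n²) comparison only suits modest collections):

```python
import numpy as np

def find_redundant_pairs(ids: list[str], vecs: list[list[float]],
                         threshold: float = 0.95) -> list[tuple[str, str]]:
    """Flag memory pairs whose embeddings are near-duplicates."""
    v = np.asarray(vecs, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit-normalize rows
    sims = v @ v.T                                 # pairwise cosine similarity
    return [
        (ids[i], ids[j])
        for i in range(len(ids))
        for j in range(i + 1, len(ids))
        if sims[i, j] >= threshold
    ]

ids = ["mem_1", "mem_2", "mem_3"]
vecs = [[1.0, 0.0], [0.999, 0.04], [0.0, 1.0]]  # mem_1 ≈ mem_2
print(find_redundant_pairs(ids, vecs))  # → [('mem_1', 'mem_2')]
```

Of each flagged pair, keep the newer (or more frequently retrieved) memory and archive the other.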
#### Step 5: Real-Time Monitoring (Optional)

For ongoing monitoring, integrate VectorBoard logging into your agent:

```python
import httpx
from datetime import datetime

class VectorBoardMemoryLogger:
    """Add this to your agent to log memories in real-time"""
    
    def __init__(self, vectorboard_url="http://localhost:8501", collection="agent_memory"):
        # For NodeOps: vectorboard_url="https://your-deployment.nodeops.network"
        self.url = vectorboard_url
        self.collection = collection
        self.client = httpx.Client()
    
    def log_memory(self, text: str, embedding: list, metadata: dict = None):
        """Call this whenever your agent stores a memory"""
        
        if metadata is None:
            metadata = {}
        metadata['timestamp'] = datetime.now().isoformat()
        
        try:
            self.client.post(
                f"{self.url}/api/embeddings",
                json={
                    "id": f"mem_{hash(text)}_{datetime.now().timestamp()}",
                    "vector": embedding,
                    "text": text,
                    "metadata": metadata,
                    "collection": self.collection
                }
            )
        except httpx.HTTPError:
            pass  # Don't break the agent if VectorBoard is down

# In your agent code:
memory_logger = VectorBoardMemoryLogger()

def store_agent_memory(content, embedding, user_id):
    # Your existing memory storage
    your_memory_db.store(content, embedding)
    
    # Also log to VectorBoard (non-blocking)
    memory_logger.log_memory(
        text=content,
        embedding=embedding,
        metadata={"user_id": user_id, "type": "conversation"}
    )
```
