Slate provides the cognitive substrate for building agents that learn from experience, maintain context, and execute skills. It turns stateless LLMs into intelligent, adaptive systems.
To build a "Cognitive Agent", weave Slate's memory systems into your LLM loop:
- `client.reminisce(input)` → add relevant past experiences to the prompt (few-shot learning).
- `client.drift()` → add current context/state to the prompt.
- `client.commit(input, action, outcome)` → save the result.
- `client.focus(new_state)` → update working memory.

This loop turns a stateless LLM into an agent that learns from every interaction and maintains coherence over long conversations.
| Feature | Working Memory | Episodic Memory |
|---|---|---|
| Purpose | Current task state, active context | Past experiences, learning history |
| Retention | Short-term (decays if not accessed) | Long-term (permanent) |
| Retrieval | Attention-sorted (drift) | Semantic similarity (reminisce) |
| Use Case | Conversation history, scratchpad, current task | Few-shot learning, preference recall |
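To see how the two memory types differ before wiring a real client, they can be mocked in a few lines. This is a toy stand-in, not the Slate implementation: real `reminisce` uses semantic similarity, and the "similarity" here is just shared-word overlap.

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemory:
    """Toy stand-in for Slate's two memory types (illustration only)."""
    episodic: list = field(default_factory=list)  # (input, outcome) traces
    working: list = field(default_factory=list)   # context strings, newest first

    def commit(self, input_text: str, outcome: str) -> None:
        # Episodic memory: append-only log of experiences.
        self.episodic.append((input_text, outcome))

    def reminisce(self, query: str, limit: int = 3) -> list:
        # Crude similarity: count shared words (real systems use embeddings).
        q = set(query.lower().split())
        scored = sorted(
            self.episodic,
            key=lambda t: len(q & set(t[0].lower().split())),
            reverse=True,
        )
        return scored[:limit]

    def focus(self, state: str) -> None:
        # Working memory: most recently focused item first.
        self.working.insert(0, state)

mem = ToyMemory()
mem.commit("refund for order 123", "issued refund")
mem.commit("weather in Paris", "sunny")
mem.focus("Handled: refund for order 123")
print(mem.reminisce("refund order 456", limit=1))
```

Even this toy version shows the split the table describes: `commit`/`reminisce` accumulate and retrieve experiences by similarity, while `focus` keeps a small, recency-ordered view of the current task.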
```typescript
import { CortexClient } from "slate-client";

const client = new CortexClient(
  "grpc.your-instance-id.slate.tryrice.com:80",
  "your-token",
  "my-agent"
);

async function agentLoop(userInput: string) {
  // 1. RECALL: Check if we've handled something similar before
  const pastExperiences = await client.reminisce(userInput, 3);

  // 2. ORIENT: Get current context from working memory
  const currentContext = await client.drift();

  // 3. DECIDE & ACT: Build prompt with context and call LLM
  const prompt = buildPrompt(userInput, currentContext, pastExperiences);
  const response = await llm.generate(prompt);

  // 4. LEARN: Record this experience for future reference
  await client.commit(
    userInput, // input
    response,  // outcome
    { action: "respond", reasoning: "User query" }
  );

  // 5. UPDATE CONTEXT: Add new information to working memory
  await client.focus(`Handled: ${userInput}`);

  return response;
}
```
```python
from slate_client import CortexClient

client = CortexClient(
    address="grpc.your-instance-id.slate.tryrice.com:80",
    token="your-token",
    run_id="my-agent",
)

def agent_loop(user_input: str) -> str:
    # 1. RECALL: Check if we've handled something similar before
    past_experiences = client.reminisce(user_input, limit=3).traces

    # 2. ORIENT: Get current context from working memory
    current_context = client.drift().items

    # 3. DECIDE & ACT: Build prompt with context and call LLM
    prompt = build_prompt(user_input, current_context, past_experiences)
    response = llm.generate(prompt)

    # 4. LEARN: Record this experience for future reference
    client.commit(
        user_input,  # input
        response,    # outcome
        action="respond",
        reasoning="User query",
    )

    # 5. UPDATE CONTEXT: Add new information to working memory
    client.focus(f"Handled: {user_input}")

    return response
```
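The examples above leave `build_prompt` undefined, since prompt assembly is application-specific. A minimal sketch is below; it treats the retrieved context items and past traces as plain strings, so adjust the field access to match the actual shapes your client returns.

```python
def build_prompt(user_input, current_context, past_experiences):
    """Assemble an LLM prompt from working and episodic memory.

    Sketch only: context items and traces are rendered with str(); swap in
    the real field access for your client's return types.
    """
    sections = []
    if past_experiences:
        examples = "\n".join(f"- {t}" for t in past_experiences)
        sections.append(f"Relevant past experiences:\n{examples}")
    if current_context:
        context = "\n".join(f"- {c}" for c in current_context)
        sections.append(f"Current context:\n{context}")
    sections.append(f"User: {user_input}")
    return "\n\n".join(sections)
```

Keeping episodic examples ahead of the working-memory context mirrors few-shot prompting: the model sees how similar inputs were handled before it sees the current state.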
When your agent needs to execute deterministic operations, use Procedural Memory:
```javascript
// Node.js
const taxResult = await client.trigger("calculate_tax");
```

```python
# Python
tax_result = client.trigger("calculate_tax")
```
This keeps complex calculations server-side, sandboxed, and consistent.
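To illustrate why this matters, a skill like `calculate_tax` could be an ordinary deterministic function on the server. The body below is purely hypothetical (the rate and rounding rule are made up), but it shows the property you get: every agent receives the same auditable result instead of asking an LLM to do arithmetic.

```python
from decimal import Decimal, ROUND_HALF_UP

def calculate_tax(amount: str, rate: str = "0.0825") -> str:
    """Hypothetical body of a "calculate_tax" skill (illustration only)."""
    total = Decimal(amount) * Decimal(rate)
    # Round to cents deterministically, avoiding binary-float drift.
    return str(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

print(calculate_tax("100.00"))  # → 8.25
```

Because the computation is pure and uses exact decimal arithmetic, repeated triggers with the same input always return the same output, which is what makes the result safe to commit to memory.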