#atom

Comparing memory-less and memory-enabled LLM agent architectures

Core Idea: The distinction between stateless and stateful agents comes down to memory persistence: a stateless agent treats each interaction as independent, while a stateful agent maintains conversation history and context across multiple interactions.

Key Elements

  1. Stateless: no checkpointer is attached, so every invoke call starts from an empty message history.
  2. Stateful: a checkpointer (e.g. MemorySaver) persists graph state between calls.
  3. A thread_id in the config scopes persisted state to a single conversation, so different threads do not share history.

Implementation Example

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# Stateless agent - no checkpointer
model = ChatAnthropic(model_name="claude-3-sonnet-20240229")
stateless_agent = create_react_agent(model, tools=[])

# Each call is independent and retains no history
stateless_agent.invoke({
    "messages": [HumanMessage(content="My name is Jamie")]
})
stateless_agent.invoke({
    "messages": [HumanMessage(content="What's my name?")]
})  # Will not know the name is Jamie

# Stateful agent - with checkpointer
memory = MemorySaver()
stateful_agent = create_react_agent(model, tools=[], checkpointer=memory)

# Configure with thread ID for state persistence
config = {"configurable": {"thread_id": "user123"}}

# First interaction
stateful_agent.invoke({
    "messages": [HumanMessage(content="My name is Jamie")]
}, config)

# Later interaction - will remember the name
stateful_agent.invoke({
    "messages": [HumanMessage(content="What's my name?")]
}, config)  # Will know the name is Jamie
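To make the checkpointer mechanics concrete, here is a minimal, dependency-free sketch of the same pattern: a store keyed by thread_id that restores and accumulates message history around each call. The names (ThreadMemory, invoke, respond) are illustrative, not LangGraph APIs.

```python
class ThreadMemory:
    """Maps a thread_id to its accumulated message history."""
    def __init__(self):
        self._threads = {}

    def load(self, thread_id):
        # Return a copy of the saved history, or an empty one for new threads
        return list(self._threads.get(thread_id, []))

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)


def invoke(memory, thread_id, user_message, respond):
    # Restore prior turns for this thread, append the new user turn,
    # call the model (here a stub function), and persist the updated history.
    history = memory.load(thread_id)
    history.append({"role": "user", "content": user_message})
    reply = respond(history)
    history.append({"role": "assistant", "content": reply})
    memory.save(thread_id, history)
    return reply
```

Because each thread_id keys its own history, separate conversations stay isolated, which mirrors how the `{"configurable": {"thread_id": ...}}` config routes state above. Removing the save/load steps reduces this to the stateless case.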

Practical Applications

Stateless agents suit one-shot tasks such as classification or extraction, where no prior context is needed; stateful agents suit multi-turn assistants that must recall earlier turns within a conversation.

Connections

References

  1. LangChain documentation on agent state management
  2. LangGraph documentation on checkpointing

#agents #stateful #stateless #memory-systems #conversation-history #llm
