Subtitle:
Maintaining and utilizing user-specific information across agent interactions without exposing it to LLMs
Core Idea:
Agent context management provides a structured way to maintain user-specific information (preferences, history, metadata) that persists across interactions and can be utilized by tools without being directly exposed in LLM prompts.
Key Principles:
- Metadata Separation:
- Keeps contextual information separate from LLM prompts
- Passes context directly to tools rather than through the LLM (see the sketch after this list)
- Preference Persistence:
- Maintains user preferences across multiple interactions
- Enables consistent personalization without repetition
- Secure Information Handling:
- Avoids exposing sensitive data to the LLM directly
- Provides tools with only the information they need
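- Sketch (minimal and framework-agnostic; the names `build_prompt`, `book_hotel`, and `ToolContext` are hypothetical and exist only to show the pattern of passing context to the tool rather than into the prompt):
from dataclasses import dataclass

@dataclass
class ToolContext:
    user_id: str
    budget_level: str = "mid-range"

def build_prompt(user_message: str) -> str:
    # Only the conversation goes to the LLM; no user metadata is embedded here.
    return f"User: {user_message}\nAssistant:"

def book_hotel(city: str, *, context: ToolContext) -> str:
    # The tool receives the context out-of-band and adapts its behavior to it.
    return f"Searching {context.budget_level} hotels in {city} for user {context.user_id}"

# The LLM decides which tool to call and with what arguments;
# the application supplies the context object when it executes the call.
ctx = ToolContext(user_id="user123", budget_level="luxury")
print(book_hotel("Paris", context=ctx))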
Why It Matters:
- Enhanced Personalization:
- Tools can account for user preferences without the user mentioning them explicitly
- Context Window Optimization:
- Reduces prompt size by keeping metadata outside the LLM context
- Privacy Improvement:
- Minimizes exposure of sensitive information to LLMs
How to Implement:
- Define Context Structure:
- Create data classes or schemas for user context
- Determine which fields are needed for personalization
- Integrate with Tools:
- Design tools to accept and utilize context information
- Implement logic that adapts tool behavior based on context
- Manage Context Persistence:
- Store context between interactions in an appropriate database (a minimal persistence sketch follows this list)
- Update context based on new information from conversations
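- Persistence sketch (assumes the `UserContext` dataclass from the example below; the `ContextStore` class and its JSON-file backing are illustrative assumptions, not part of the Agents SDK):
import json
from dataclasses import asdict
from pathlib import Path

class ContextStore:
    """Illustrative file-backed store; a real deployment would use a database."""

    def __init__(self, path: str = "user_context.json"):
        self.path = Path(path)

    def load(self, user_id: str) -> "UserContext":
        # Return the stored context for this user, or a fresh one if none exists.
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        record = data.get(user_id)
        return UserContext(**record) if record else UserContext(user_id=user_id)

    def save(self, context: "UserContext") -> None:
        # Upsert the user's context record.
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        data[context.user_id] = asdict(context)
        self.path.write_text(json.dumps(data, indent=2))

# Load before a run, update with what the conversation revealed, save afterwards.
store = ContextStore()
context = store.load("user123")
context.budget_level = "luxury"  # e.g., the user mentioned upgrading their budget
store.save(context)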
Example:
- Scenario:
- Implementing user preferences for a travel planning agent
- Application:
from dataclasses import dataclass, field
from typing import List, Optional

# OpenAI Agents SDK (installed with `pip install openai-agents`)
from agents import Agent, RunContextWrapper, Runner, function_tool

# Define user context structure
@dataclass
class UserContext:
    user_id: str
    preferred_airlines: List[str] = field(default_factory=list)
    preferred_hotel_amenities: List[str] = field(default_factory=list)
    budget_level: Optional[str] = None  # "budget", "mid-range", "luxury"
    dietary_restrictions: List[str] = field(default_factory=list)

# Tool that uses context
@function_tool
def search_flights(
    wrapper: RunContextWrapper[UserContext], origin: str, destination: str, date: str
) -> str:
    """
    Search for flights based on origin, destination, and date.

    Parameters:
    - origin: Departure city
    - destination: Arrival city
    - date: Date in YYYY-MM-DD format

    Returns:
    - List of available flights
    """
    # Access user context (not exposed to the LLM)
    user_context: UserContext = wrapper.context

    # Mock flight data
    flight_options = [
        {"airline": "Ocean Air", "departure": "08:00", "arrival": "10:30", "price": "$350"},
        {"airline": "Mountain Air", "departure": "11:00", "arrival": "13:30", "price": "$420"},
        {"airline": "Desert Air", "departure": "14:00", "arrival": "16:30", "price": "$380"},
    ]

    # Sort based on the user's preferred airlines, if any
    if user_context and user_context.preferred_airlines:
        # Move preferred airlines to the top
        flight_options.sort(key=lambda f: f["airline"] not in user_context.preferred_airlines)

    # Format results
    result = f"Found {len(flight_options)} flights from {origin} to {destination} on {date}:\n\n"
    for flight in flight_options:
        result += f"- {flight['airline']}: {flight['departure']} - {flight['arrival']}, {flight['price']}"
        # Note if this matches preferences
        if (user_context and user_context.preferred_airlines and
                flight["airline"] in user_context.preferred_airlines):
            result += " (matches your preferred airline)"
        result += "\n"
    return result

# Using context with the runner
travel_agent = Agent[UserContext](
    name="TravelPlanner",
    instructions="Help users plan trips...",
    tools=[search_flights],
)

# User context is passed to the runner but NOT to the LLM prompt
user_context = UserContext(
    user_id="user123",
    preferred_airlines=["Ocean Air", "Mountain Air"],
    preferred_hotel_amenities=["gym", "pool", "wifi"],
    budget_level="mid-range",
)

response = Runner.run_sync(
    travel_agent,
    "Find me a flight from New York to Chicago tomorrow",
    context=user_context,
)
print(response.final_output)
- Result:
- Agent searches for flights and prioritizes the user's preferred airlines
- The LLM never sees the preferred airlines in its context, but the tool uses this information
- Response notes which flights match the user's preferred airlines without the user having to restate them
Connections:
- Related Concepts:
- Agents SDK Overview: Framework that supports context management
- AI Agent Tools: Functions that can utilize context information
- Broader Concepts:
- Personalization in AI Systems: General approaches to customizing AI responses
- Privacy-Preserving AI: Methods to protect user information in AI systems
References:
- Primary Source:
- OpenAI Agents SDK documentation on user context
- Additional Resources:
- Best practices for user preference management in conversational AI
- Privacy considerations for agent systems
Tags:
#ai #agents #context-management #personalization #user-preferences #privacy #metadata