#atom

Methods for providing external information to language models during inference

Core Idea: Different approaches exist for adding context to LLMs, each with trade-offs in efficiency, cost, and implementation complexity.

Key Elements

Context Stuffing

Place all relevant documents directly in the prompt. Simplest to implement and requires no retrieval infrastructure, but it is bounded by the context window and incurs high token costs, since the full corpus is sent on every request.
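A minimal sketch of context stuffing: concatenate the documents into one prompt under a size budget. `stuff_context` and `max_chars` are hypothetical names; a real implementation would count tokens with the model's tokenizer rather than characters.

```python
def stuff_context(docs, question, max_chars=8000):
    # Join all documents and truncate to a crude character budget;
    # real code would measure tokens, not characters.
    context = "\n\n---\n\n".join(docs)[:max_chars]
    return (
        "Use the following context to answer.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = stuff_context(
    ["Doc A: the sky is blue.", "Doc B: grass is green."],
    "What color is the sky?",
)
```

The trade-off is visible in the truncation step: whatever does not fit the budget is silently dropped, which is why this approach stops scaling as the corpus grows.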

Vector Indexing

Split documents into chunks, embed them, and store the embeddings in a vector database (e.g. Pinecone, Chroma). At query time, retrieve only the top-k most similar chunks and include them in the prompt; this is the core of RAG. Scales to large corpora, at the cost of an indexing pipeline and retrieval-quality tuning.
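A toy sketch of the retrieval step: embed the query and each chunk, score by cosine similarity, return the top-k matches. The bag-of-words `embed` below is a deliberate stand-in for a real embedding model; only the ranking logic is the point.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system calls an embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(index, query, k=2):
    # Rank every indexed chunk by similarity to the query vector.
    q = embed(query)
    ranked = sorted(index, key=lambda chunk: cosine(embed(chunk), q), reverse=True)
    return ranked[:k]

index = ["the sky is blue", "grass is green", "paris is in france"]
hits = top_k(index, "what color is the sky", k=1)
```

Only the retrieved `hits` are then stuffed into the prompt, keeping token cost roughly constant regardless of corpus size.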

LLMs.txt + Tool Calling

Publish a compact index of available documentation (an llms.txt file) and let the model fetch specific pages on demand via tool calls. Avoids pre-indexing and keeps the base prompt small, but adds latency for each tool round-trip.
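A sketch of the two tools this pattern exposes: one that lists the index and one that fetches a page. The `LLMS_TXT` contents and the keyword-matching `answer` function are hypothetical; in practice an LLM decides which tool to call, and `fetch_page` would perform an HTTP GET.

```python
# Hypothetical llms.txt-style index: page name -> page content.
LLMS_TXT = {
    "installation": "Run `pip install mytool` to install.",
    "configuration": "Set MYTOOL_HOME before first use.",
}

def list_pages():
    # Tool 1: the compact index the model sees up front.
    return sorted(LLMS_TXT)

def fetch_page(name):
    # Tool 2: fetch one page on demand (stand-in for an HTTP GET).
    return LLMS_TXT.get(name, "page not found")

def answer(question):
    # A real LLM would choose the tool call; keyword matching stands in here.
    for page in list_pages():
        if page in question.lower():
            return fetch_page(page)
    return "no matching page"

result = answer("How do I handle installation?")
```

Each `fetch_page` call is an extra round-trip, which is where this approach pays in latency what it saves in indexing effort.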

Connections

References

  1. RAG (Retrieval-Augmented Generation) literature
  2. Vector database documentation (Pinecone, Chroma, etc.)
  3. LLM context window optimization research

#context-loading #rag #vector-indexing #llm #knowledge-management

