#atom

Subtitle:

Learning from demonstrations within the prompt


Core Idea:

Few-shot prompting presents the model with a small set of high-quality input-output demonstrations before the new task, so it can infer the expected pattern and the criteria for a good answer.


Key Principles:

  1. Demonstration Learning:
    • Showing the model what good performance looks like through examples
  2. Pattern Recognition:
    • Enabling the model to recognize and apply patterns from examples to new inputs
  3. Example Selection Impact:
    • Choice of examples significantly affects performance, from near-random to state-of-the-art

Why It Matters:

  • Few-shot prompting improves task performance without any fine-tuning: the model learns the format and success criteria directly from the prompt
  • Because example choice alone can swing results from near-random to state-of-the-art, careful selection and ordering is one of the highest-leverage prompting decisions

How to Implement:

  1. Select Diverse Examples:
    • Choose examples that represent different aspects of the task
  2. Order Strategically:
    • Arrange examples so the model is not skewed by recency (over-weighting the last examples) or by a majority label
  3. Match Test Sample:
    • Include examples semantically similar to the test case when possible
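The selection and ordering steps above can be sketched as follows. The sentiment example pool is invented for illustration, and the stdlib SequenceMatcher is a crude stand-in for a real embedding-based semantic similarity:

```python
from difflib import SequenceMatcher

# Hypothetical demonstration pool of (input, label) pairs.
POOL = [
    ("The movie was a masterpiece.", "positive"),
    ("I want my money back.", "negative"),
    ("An instant classic, loved it.", "positive"),
    ("Terrible pacing and a weak plot.", "negative"),
]

def balance_order(pairs):
    """Round-robin over labels so no label dominates the start or end
    of the prompt (mitigates majority-label and recency bias)."""
    by_label = {}
    for inp, label in pairs:
        by_label.setdefault(label, []).append((inp, label))
    ordered = []
    while any(by_label.values()):
        for label in by_label:
            if by_label[label]:
                ordered.append(by_label[label].pop(0))
    return ordered

def select_examples(test_input, pool, k=2):
    """Pick the k demonstrations most similar to the test input,
    then reorder them to balance labels."""
    scored = sorted(
        pool,
        key=lambda pair: SequenceMatcher(None, test_input, pair[0]).ratio(),
        reverse=True,
    )
    return balance_order(scored[:k])

examples = select_examples("A weak script but great acting.", POOL, k=3)
```

In practice, the string-overlap ratio would be replaced by cosine similarity over sentence embeddings; the selection-then-reorder structure stays the same.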

Example:

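A minimal sketch of assembling the final few-shot prompt; the sentiment task and the "Review:/Sentiment:" template are illustrative assumptions, not a fixed format:

```python
def build_few_shot_prompt(examples, test_input):
    """Format (input, label) demonstrations followed by the new input,
    leaving the final label blank for the model to complete."""
    lines = []
    for inp, label in examples:
        lines.append(f"Review: {inp}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between demonstrations
    lines.append(f"Review: {test_input}")
    lines.append("Sentiment:")
    return "\n".join(lines)

demos = [
    ("A delightful film from start to finish.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
print(build_few_shot_prompt(demos, "The soundtrack alone is worth it."))
```

Any consistent input-output template works; what matters is that every demonstration and the test case share the exact same format.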

Connections:


References:

  1. Primary Source:
    • Weng, Lilian. (Mar 2023). "Prompt Engineering". Lil'Log.
  2. Additional Resources:
    • Liu et al. (2021). "What Makes Good In-Context Examples for GPT-3?"
    • Lu et al. (2022). "Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity"

Tags:

#few-shot #in-context-learning #examples #demonstrations #prompting

