#atom

Methodologies for optimizing language model inputs to achieve desired outputs

Core Idea: Prompt engineering is the practice of crafting, refining, and structuring inputs to language models so they produce more accurate, relevant, and useful outputs without changing the underlying model weights; in effect, it programs LLMs through carefully designed natural-language instructions.
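
To make the "programming through natural language" framing concrete, the sketch below composes a zero-shot prompt and a few-shot chain-of-thought prompt for the same question. It is a minimal illustration, not a definitive implementation: the helper names, the instruction wording, and the arithmetic demonstration are assumptions introduced here, and the resulting strings could be sent to any LLM API.

```python
# Minimal illustration of prompt construction: the model weights stay fixed;
# only the input text changes between the two variants.

def zero_shot_prompt(question: str) -> str:
    """Plain instruction plus question, with no demonstrations."""
    return (
        "Answer the question concisely.\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

def few_shot_cot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot chain-of-thought: each demonstration shows worked reasoning
    before its answer, nudging the model to reason step by step."""
    parts = ["Answer each question, reasoning step by step before the final answer.\n"]
    for demo_question, worked_reasoning in examples:
        parts.append(f"Question: {demo_question}\nReasoning: {worked_reasoning}\n")
    parts.append(f"Question: {question}\nReasoning:")
    return "\n".join(parts)

if __name__ == "__main__":
    # Hypothetical demonstration pair (simple arithmetic word problem).
    demos = [(
        "A pack has 12 pens and 3 are used. How many remain?",
        "12 pens minus 3 used leaves 9. The answer is 9.",
    )]
    query = "A shelf holds 24 books and 7 are borrowed. How many remain?"
    print(zero_shot_prompt(query))
    print("---")
    print(few_shot_cot_prompt(query, demos))
```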

Key Elements

Fundamental Techniques

Advanced Strategies

Design Principles

Implementation Patterns

Specialized Applications

Connections

References

  1. Wei, J., et al. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." NeurIPS.
  2. Kojima, T., et al. (2022). "Large Language Models are Zero-Shot Reasoners." NeurIPS.
  3. Yao, S., et al. (2023). "ReAct: Synergizing Reasoning and Acting in Language Models." ICLR.
  4. White, J., et al. (2023). "Prompt Engineering: A Comprehensive Guide." arXiv.

#PromptEngineering #LLMTechniques #NLP #FewShotLearning #ZeroShotPrompting #ChainOfThought #AIInteraction

