Step-Back Prompting: A technique where the model is first prompted to ask and answer a high-level abstract question before addressing the specific original question.
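A minimal two-stage sketch of this idea in Python. The `ask` callable is a hypothetical stand-in for whatever LLM API you use (not named in the source); the demo wires in a stub instead of a real model.

```python
def step_back_answer(question, ask):
    """Two-stage step-back prompting; `ask` is any prompt -> text callable."""
    # Stage 1: step back to a high-level abstract question and answer it first.
    principle = ask(
        "What high-level concept or principle is needed to answer this?\n"
        f"Question: {question}"
    )
    # Stage 2: answer the original question, grounded in that principle.
    return ask(
        f"Principle: {principle}\n"
        f"Using the principle above, answer: {question}"
    )

# Demo with a stub model in place of a real LLM API:
stub_replies = iter(["<principle>", "<final answer>"])
answer = step_back_answer("Why does ice float?", lambda p: next(stub_replies))
```

The key design point is that the model's answer to the abstract question is fed back into the second prompt, so the final answer is conditioned on the retrieved principle rather than the bare question.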
Chain-of-Thought (CoT): A prompting method that encourages LLMs to generate intermediate reasoning steps before producing a final answer.
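As a concrete few-shot sketch: a CoT prompt prepends one or more worked exemplars whose answers spell out the intermediate steps. The exemplar below is illustrative, not taken from the source.

```python
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question):
    # Prepend the worked exemplar so the model imitates step-wise reasoning,
    # then leave the answer slot open for the model to complete.
    return COT_EXEMPLAR + f"Q: {question}\nA:"
```

The trailing `A:` leaves the completion point at the answer, nudging the model to continue in the same step-by-step style as the exemplar.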
RAG: Retrieval-Augmented Generation—enhancing model responses by retrieving relevant external documents.
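A toy sketch of the RAG pattern. The lexical word-overlap retriever below is a deliberate simplification for illustration; production systems typically use dense embeddings and a vector index instead.

```python
def retrieve(query, documents, k=2):
    # Toy lexical retriever: rank documents by word overlap with the query.
    # Real RAG pipelines use embedding similarity, not bag-of-words overlap.
    q = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(q & set(d.lower().split())))[:k]

def rag_prompt(question, documents):
    # Stuff the top-k retrieved documents into the prompt as grounding context.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nUsing the context above, answer: {question}"
```

The prompt that reaches the model thus contains the retrieved passages verbatim, so its answer can cite external knowledge it was never trained on.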
MMLU: Massive Multitask Language Understanding—a benchmark of 57 subjects spanning STEM, the humanities, the social sciences, and more.
TimeQA: A question-answering dataset requiring temporal reasoning and time-sensitive knowledge.
MuSiQue: A multi-hop question-answering dataset whose questions require composing multiple supporting facts.
Abstraction: The cognitive process of deriving general principles or high-level concepts from specific instances.
PaLM-2L: The large (L) size variant of Google's PaLM 2 (Pathways Language Model 2).
Take a Deep Breath (TDB): A zero-shot prompting technique that instructs the model to "Take a deep breath and work on this problem step-by-step."
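Unlike few-shot CoT, TDB needs no exemplars—just the cue phrase attached to the question, as this minimal sketch shows:

```python
TDB_CUE = "Take a deep breath and work on this problem step-by-step."

def tdb_prompt(question):
    # Zero-shot: no worked exemplars, only the cue appended to the question.
    return f"{question}\n{TDB_CUE}"
```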