CoT: Chain-of-Thought—a prompting method where the model generates intermediate reasoning steps before the final answer
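The idea can be sketched as prompt construction: a worked exemplar whose rationale precedes the final answer is prepended to the new question, so the model is nudged to emit its own reasoning steps first. The exemplar problem and the `Q:`/`A:` format below are illustrative assumptions, not a prescribed template.

```python
def build_cot_prompt(question: str) -> str:
    """Build a one-shot chain-of-thought prompt (illustrative format)."""
    exemplar = (
        "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n"
    )
    # The trailing "A:" invites the model to continue with its own rationale.
    return exemplar + f"Q: {question}\nA:"

print(build_cot_prompt("A farm has 3 pens with 4 sheep each. How many sheep?"))
```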
Rationale: The text explanation or reasoning path generated by the model to justify its answer
Instruction Tuning: Fine-tuning language models on datasets formatted as natural language instructions (e.g., 'Translate this sentence:')
Zero-shot: Evaluating a model on a task it has not explicitly seen during training, without providing examples in the prompt
Few-shot: Evaluating or adapting a model using a small number of examples (e.g., 64) provided in the prompt or used for lightweight fine-tuning
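A minimal sketch of the in-prompt variant of few-shot adaptation: a handful of labeled examples are concatenated before the query so the model can infer the task in context. The `Input:`/`Output:` formatting convention is an assumption for illustration.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) demonstrations ahead of the query."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # Leave the final Output: blank for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

shots = [("cheese", "fromage"), ("dog", "chien")]
print(build_few_shot_prompt(shots, "cat"))
```

Zero-shot is the degenerate case of the same construction with an empty example list.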
LoRA: Low-Rank Adaptation—a parameter-efficient fine-tuning technique that freezes the pretrained weights and trains only small low-rank matrices added to selected weight matrices
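A NumPy sketch of the core idea, assuming the standard formulation: the frozen weight W is augmented with a low-rank update scaled by alpha/r, and only the small matrices A and B are trained. The dimensions and initialization below are illustrative.

```python
import numpy as np

d_out, d_in, r, alpha = 8, 16, 2, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # Adapted layer: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer equals the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

Note that A and B together hold r*(d_in + d_out) parameters, far fewer than the d_out*d_in in W, which is where the parameter efficiency comes from.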
Flan: Finetuned Language Net—a series of T5 models instruction-tuned on a large collection of tasks
BBH: BIG-Bench Hard—a challenging subset of the BIG-Bench benchmark, consisting of tasks that typically require multi-step reasoning and on which prior models failed to beat average human-rater performance