CoT: Chain-of-Thought—a technique prompting LLMs to generate intermediate reasoning steps before the final answer
Zero-Shot-CoT: Eliciting reasoning without examples, typically by appending the prompt "Let's think step by step"
Few-Shot-CoT: Eliciting reasoning by providing input-output examples that include the reasoning steps (rationales)
Self-Consistency: A decoding strategy that samples multiple reasoning paths and selects the most consistent answer via majority vote
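Self-Consistency is mechanically simple: sample several reasoning paths at nonzero temperature, extract each path's final answer, and majority-vote. A minimal sketch, where `sample_answer` is a hypothetical stand-in for one sampled LLM reasoning path (here a scripted stub, not a real model call):

```python
import itertools
from collections import Counter

def self_consistency(sample_answer, n=5):
    """Sample n reasoning paths and return the majority-vote final answer."""
    answers = [sample_answer() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub standing in for an LLM sampled with temperature > 0: each call
# returns the final answer of one sampled reasoning path.
_paths = itertools.cycle(["18", "18", "26", "18", "18"])
answer = self_consistency(lambda: next(_paths), n=5)
print(answer)  # -> 18 (the most consistent answer across the 5 paths)
```

The vote is over final answers only, so paths with different intermediate reasoning but the same conclusion reinforce each other.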
Atomic Knowledge: Task-relevant pieces of knowledge within an LLM that are strongly interconnected with one another; their presence is held to be essential for CoT to function
ToT: Tree-of-Thoughts—a framework allowing LLMs to explore multiple reasoning paths in a tree structure, enabling backtracking and lookahead
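The tree structure can be sketched as a beam search over partial thought sequences. Here `propose` and `score` are hypothetical stand-ins for the LLM calls that generate candidate next thoughts and evaluate partial paths (toy deterministic functions below); pruning low-scoring branches is what enables backtracking:

```python
# Minimal breadth-first ToT sketch with beam pruning.
def tree_of_thoughts(root, propose, score, depth=2, beam=2):
    frontier = [[root]]
    for _ in range(depth):
        # Expand every surviving path with candidate next thoughts.
        candidates = [path + [t] for path in frontier for t in propose(path)]
        # Keep only the `beam` best partial paths (prune the rest).
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

propose = lambda path: [path[-1] + "a", path[-1] + "b"]  # toy thought generator
score = lambda path: path[-1].count("b")                 # toy path evaluator
best = tree_of_thoughts("t", propose, score)
print(best)  # -> ['t', 'tb', 'tbb'], the highest-scoring path
```

In the real framework, `score` is itself an LLM call ("how promising is this partial solution?"), and the search can be BFS or DFS with lookahead.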
PoT: Program-of-Thoughts—decoupling computation from reasoning by generating executable code as the rationale
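The decoupling in PoT means the model writes code as its rationale while an external interpreter performs the actual computation. A minimal sketch, where `rationale` is a hypothetical model output hard-coded for illustration:

```python
# PoT sketch: the LLM emits executable Python as its rationale; a separate
# interpreter runs it, so arithmetic is offloaded from the model.
rationale = """
loaves_baked = 200
loaves_sold = 93 + 39
answer = loaves_baked - loaves_sold
"""

scope = {}
exec(rationale, scope)   # the interpreter, not the LLM, does the computation
print(scope["answer"])   # -> 68
```

Because the final value comes from execution rather than token prediction, PoT avoids the arithmetic slips that plain CoT is prone to.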
ICL: In-Context Learning—the ability of a model to learn from examples provided in the prompt without parameter updates
ReAct: Reason+Act—a paradigm where agents generate reasoning traces and task-specific actions in an interleaved manner
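The interleaving in ReAct is a loop: the agent emits a thought and an action, the environment returns an observation, and the transcript grows until the agent finishes. A minimal sketch, where `agent_step` and the `lookup` tool are hypothetical scripted stand-ins for the LLM and a real tool:

```python
# Minimal ReAct loop sketch with one toy tool.
tools = {"lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}

def react(question, agent_step, max_steps=5):
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        kind, text = agent_step(transcript)  # "act" or "finish"
        if kind == "finish":
            return text
        transcript.append(f"Thought/Action: {text}")          # reasoning trace
        transcript.append(f"Observation: {tools['lookup'](text)}")  # env feedback
    return None

# Scripted agent: acts once, then answers from the observation.
def agent_step(transcript):
    if transcript[-1].startswith("Observation:"):
        return "finish", transcript[-1].split(": ", 1)[1]
    return "act", "capital of France"

print(react("What is the capital of France?", agent_step))  # -> Paris
```

In a real agent, `agent_step` is an LLM prompted with the full transcript, and `tools` maps action names to search, calculators, or other APIs.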