CoT: Chain-of-Thought—a prompting technique where models generate step-by-step reasoning rationales before the final answer
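A minimal sketch of how a few-shot CoT prompt is assembled; the exemplar question and rationale are illustrative, and the resulting string would be sent to any LLM completion API:

```python
# Illustrative CoT exemplar: a worked rationale precedes the final answer,
# so the model imitates step-by-step reasoning for the new question.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar; the trailing 'A:' cues the model
    to produce its own rationale before the answer."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt("A farm has 3 pens of 4 sheep each. How many sheep in total?")
```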
XoT: Generalized Chain of Thought—the survey's proposed umbrella term covering all variations of reasoning paths (linear, tree, graph) and construction methods
ToT: Tree of Thoughts—a reasoning topology that allows models to explore multiple branching reasoning paths and backtrack if necessary
GoT: Graph of Thoughts—a reasoning topology allowing arbitrary graph structures, including aggregation of multiple thoughts and loops
PAL: Program-Aided Language models—methods that decouple reasoning from computation by generating code (e.g., Python) and offloading calculation and logic steps to an interpreter
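A PAL-style sketch under simple assumptions: `generated` is a hand-written stand-in for actual model output, and the convention that the program stores its result in a variable named `answer` is illustrative, not part of any fixed API:

```python
# Stand-in for code a model would generate when asked to solve the
# problem programmatically instead of in natural language.
generated = """
balls = 5            # Roger starts with 5 tennis balls
balls += 2 * 3       # buys 2 cans of 3 balls each
answer = balls
"""

def run_pal_program(code: str):
    """Execute model-generated code in a scratch namespace and read out
    the `answer` variable. A real system should sandbox this exec call."""
    namespace: dict = {}
    exec(code, namespace)
    return namespace["answer"]

result = run_pal_program(generated)  # → 11
```

The arithmetic is done by the interpreter, not the model, which is the point of the decoupling.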
L2M: Least-to-Most prompting—a technique that decomposes complex problems into simpler sub-questions to be solved sequentially
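The sequential-solving loop can be sketched as follows; the `llm` stub and its canned answers are hypothetical placeholders for real model calls, and the sub-questions are assumed to come from a prior decomposition step:

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a model call: answers the last line of the prompt.
    canned = {
        "How many balls are in 2 cans of 3?": "6",
        "What is 5 + 6?": "11",
    }
    return canned.get(prompt.splitlines()[-1], "?")

# Sub-questions ordered from simplest to hardest (the "least-to-most" order).
subquestions = ["How many balls are in 2 cans of 3?", "What is 5 + 6?"]

context = ""
for sq in subquestions:
    answer = llm(context + sq)        # earlier Q/A pairs stay in the prompt
    context += f"{sq}\n{answer}\n"    # append so later steps can build on it

final_answer = answer  # answer to the last, hardest sub-question
```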
Socratic Questioning: A divide-and-conquer method that recursively raises and answers sub-questions, decomposing a complex problem top-down and combining the sub-answers to resolve it
Self-Consistency: An ensemble method that samples multiple reasoning paths and selects the final answer via majority vote
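The majority-vote step can be sketched as below; the sampled outputs are canned strings standing in for multiple temperature-sampled model completions, and the answer-extraction heuristic is an assumption for illustration:

```python
from collections import Counter

# Canned stand-ins for several independently sampled reasoning paths.
sampled_outputs = [
    "5 + 6 = 11. The answer is 11.",
    "2 * 3 = 6, and 6 + 5 = 11. The answer is 11.",
    "5 + 2 = 7. The answer is 7.",   # one faulty reasoning path
]

def extract_answer(text: str) -> str:
    """Pull the final answer out of a completion (illustrative heuristic)."""
    return text.rsplit("The answer is ", 1)[-1].rstrip(".")

votes = Counter(extract_answer(o) for o in sampled_outputs)
final, count = votes.most_common(1)[0]  # → ("11", 2)
```

Marginalizing over reasoning paths this way lets one bad rationale be outvoted by the consistent majority.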
In-context Learning: The ability of a model to learn a task from a few examples provided in the prompt at inference time, without weight updates