CoT: Chain-of-Thought—a prompting technique where the model is guided to generate intermediate logical steps before producing a final answer
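A minimal sketch of what CoT changes in practice: the prompt, not the model. `build_cot_prompt` below is a hypothetical helper (the worked example and its wording are invented for illustration); a real system would send the resulting string to an LLM.

```python
# Sketch of Chain-of-Thought prompting: a few-shot example demonstrates
# step-by-step reasoning, then the new question is appended with a cue
# that nudges the model to produce intermediate steps before the answer.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in a one-shot step-by-step demonstration."""
    example = (
        "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\n"
        "A: Each pen holds 4 sheep and there are 3 pens, so 3 * 4 = 12.\n"
        "The answer is 12.\n\n"
    )
    return example + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If a train travels 60 km/h for 2 hours, how far does it go?")
print(prompt)
```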
ToT: Tree-of-Thought—a framework allowing models to explore multiple reasoning paths in a tree structure, evaluating and pruning branches to find the best solution
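The explore-evaluate-prune loop can be sketched as a beam search over partial reasoning paths. Everything model-specific here is stubbed: `expand` stands in for sampling candidate "thoughts" from an LLM, and `score` for a value prompt that rates each branch.

```python
# Toy Tree-of-Thought search: expand each partial path, score the branches,
# and keep only the best few (pruning) before going one level deeper.
import heapq

def expand(state):
    # Hypothetical stand-in: propose next reasoning steps for a partial path.
    return [state + [c] for c in ("a", "b")]

def score(state):
    # Hypothetical value function: here, simply prefer paths with more 'a' steps.
    return state.count("a")

def tree_of_thought(depth=3, beam=2):
    frontier = [[]]  # start from the empty reasoning path
    for _ in range(depth):
        candidates = [s for st in frontier for s in expand(st)]
        # Prune: keep only the `beam` highest-scoring branches.
        frontier = heapq.nlargest(beam, candidates, key=score)
    return max(frontier, key=score)

print(tree_of_thought())  # → ['a', 'a', 'a'] under the toy scorer
```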
RAG: Retrieval-Augmented Generation—enhancing model responses by retrieving relevant documents from an external knowledge base to ground the output
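The retrieve-then-generate flow can be shown end to end with a toy retriever. The three-document corpus and the word-overlap ranking are illustrative assumptions; production systems use dense embeddings and a vector store, and would send the final prompt to an LLM.

```python
# Minimal RAG sketch: rank documents against the query, then prepend the
# best match as grounding context for the model's answer.

CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Great Wall of China stretches for thousands of kilometres.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nAnswer using only the context.\nQ: {query}"

print(build_rag_prompt("When was the Eiffel Tower completed?"))
```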
Neuro-symbolic AI: Hybrid systems combining neural networks (pattern recognition) with symbolic logic (explicit rules) to improve reasoning and explainability
Self-Consistency: A technique that generates multiple reasoning paths for a single prompt and selects the final answer via majority voting
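The voting step is simple enough to show directly. The sampled answers below are a stubbed list standing in for the final answers extracted from several independently sampled chains of thought.

```python
# Self-consistency sketch: sample several reasoning paths for one prompt,
# extract each path's final answer, and return the majority vote.
from collections import Counter

def self_consistency(sampled_answers):
    """Return the most common final answer across sampled reasoning paths."""
    return Counter(sampled_answers).most_common(1)[0][0]

# Hypothetical final answers from 5 sampled chains of thought:
samples = ["12", "12", "11", "12", "13"]
print(self_consistency(samples))  # → 12
```

The intuition: individual chains may derail, but errors tend to be scattered while correct reasoning converges on the same answer.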
Hallucinations: Instances where an LLM generates plausible-sounding but factually incorrect or nonsensical information
Abductive Reasoning: Inferring the most likely explanation for a given set of observations (often used in diagnostics)
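A diagnostics-flavoured toy makes "most likely explanation" concrete: score each hypothesis by prior times likelihood of the observations and keep the best. All hypotheses and probabilities below are invented for illustration.

```python
# Abductive-inference sketch: choose the hypothesis that best explains
# the observations, scored as prior * product of per-observation likelihoods.

priors = {"flu": 0.10, "cold": 0.30, "allergy": 0.20}
likelihood = {  # P(observation | hypothesis), illustrative values only
    "flu":     {"fever": 0.90, "cough": 0.80},
    "cold":    {"fever": 0.30, "cough": 0.70},
    "allergy": {"fever": 0.05, "cough": 0.40},
}

def best_explanation(observations):
    def score(h):
        p = priors[h]
        for obs in observations:
            p *= likelihood[h][obs]
        return p
    return max(priors, key=score)

print(best_explanation(["fever", "cough"]))  # → flu
```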
MANNs: Memory-Augmented Neural Networks—models equipped with external memory modules to store and retrieve information for long-term consistency
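The read/write pattern can be sketched without any learning: real MANNs address a differentiable memory matrix with learned keys, but a cosine-similarity lookup over stored key vectors shows the same content-based addressing idea. The keys and stored values below are invented.

```python
# MANN-style external memory, toy version: write (key vector, value) pairs,
# then read by returning the value whose key is most similar to the query.
import math

memory = {}  # key vector (tuple) -> stored value

def write(key, value):
    memory[tuple(key)] = value

def read(query):
    """Content-based read: most cosine-similar key wins."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    best = max(memory, key=lambda k: cos(k, query))
    return memory[best]

write([1.0, 0.0], "fact about Paris")
write([0.0, 1.0], "fact about GNNs")
print(read([0.9, 0.1]))  # → fact about Paris
```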
PAL: Program-Aided Language Models—frameworks where the LLM generates code (e.g., Python) to perform calculations or logic verification, offloading exact computation that free-form text generation often gets wrong
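The offloading step can be sketched directly: the string `generated` below stands in for real model output, and we execute it to recover the answer instead of trusting the model's in-text arithmetic.

```python
# PAL sketch: the model emits a small program; the runtime, not the model,
# does the arithmetic. `generated` is a hypothetical model completion.

generated = """
apples_per_crate = 24
crates = 17
answer = apples_per_crate * crates
"""

def run_program(src: str) -> object:
    """Execute model-generated code in a fresh namespace and read `answer`."""
    ns: dict = {}
    exec(src, ns)  # caution: real systems must sandbox untrusted model code
    return ns["answer"]

print(run_program(generated))  # → 408
```

The convention that the result lands in a variable named `answer` is part of the prompt contract, not anything special in Python.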
GNNs: Graph Neural Networks—models designed to process graph-structured data, useful for reasoning over knowledge graphs
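The core aggregation pattern of a GNN layer can be shown without learned weights: each node updates its feature from its neighbours' features. The graph, features, and mean-then-average update rule below are toy choices; real layers use learned transforms.

```python
# One toy message-passing step: each node's new feature is the average of
# its own feature and the mean of its neighbours' features.

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}  # adjacency lists
feats = {"a": 1.0, "b": 2.0, "c": 4.0}             # per-node features

def message_pass(graph, feats):
    new = {}
    for node, nbrs in graph.items():
        agg = sum(feats[n] for n in nbrs) / len(nbrs)  # mean of neighbours
        new[node] = (feats[node] + agg) / 2            # combine with self
    return new

print(message_pass(graph, feats))  # → {'a': 2.0, 'b': 1.5, 'c': 2.5}
```

Stacking such layers lets information flow across multi-hop paths, which is what makes GNNs useful for reasoning over knowledge graphs.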