Neurosymbolic AI: A field of AI that combines neural networks (for learning and generalization) with symbolic logic (for reasoning and verifiability)
Model Grounding: The idea that symbols (words) derive meaning from their mapping to regions or directions within the neural network's internal vector space, rather than to objects in the physical world
Symbol Grounding Problem: The fundamental challenge in AI of how words (symbols) get their meaning; classically, how they connect to the real world
Instruction Tuning: Training language models using pairs of instructions and outputs to improve their ability to follow tasks
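A minimal sketch of what an instruction/output pair might look like; the field names and prompt template below are illustrative assumptions, not from any specific dataset or framework:

```python
# Hypothetical instruction-tuning pairs. The keys "instruction" and
# "output" and the "### Instruction / ### Response" template are
# illustrative conventions, not a standard.
examples = [
    {"instruction": "Translate to French: 'Good morning'",
     "output": "Bonjour"},
    {"instruction": "Summarize: 'The cat sat on the mat.'",
     "output": "A cat sat on a mat."},
]

def to_training_text(ex):
    # Flatten a pair into a single training string; during fine-tuning
    # the model learns to produce the response given the instruction.
    return (f"### Instruction:\n{ex['instruction']}\n"
            f"### Response:\n{ex['output']}")
```

The key point is that the model is optimized on many such pairs so that, at inference time, unseen instructions elicit task-following behavior.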
Prompt Tuning: Optimizing the input text (prompt) given to an LLM to guide its behavior, effectively acting as a form of 'learning' without changing the model's weights
Gradient Accumulation: A training technique where updates are aggregated over multiple steps; here used analogously to describe accumulating prompt revisions before finalizing a 'learned' state
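The underlying training technique can be sketched without any deep-learning framework. Below is a toy illustration (my own, not from the source) using a scalar loss L(w) = (w - 3)^2 with analytic gradient 2(w - 3); gradients from several "micro-batches" are summed and the weight is updated only once per accumulation window:

```python
# Toy gradient accumulation: accumulate gradients over several
# micro-batches, then apply one averaged update.

def grad(w, target):
    # Analytic gradient of the loss (w - target)**2.
    return 2 * (w - target)

w = 0.0                  # initial weight
lr = 0.1                 # learning rate
accum_steps = 4          # micro-batches per update
targets = [3.0] * 8      # stand-in for 8 micro-batches of data

acc = 0.0
for step, t in enumerate(targets, start=1):
    acc += grad(w, t)                  # accumulate, don't update yet
    if step % accum_steps == 0:
        w -= lr * (acc / accum_steps)  # one update per window
        acc = 0.0                      # reset the accumulator
```

In the prompt-revision analogy, each accumulated gradient corresponds to a candidate revision, and the single update corresponds to committing the finalized 'learned' prompt.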
Axiomatic Deductive Reasoning: Reasoning from a set of premises or axioms to derive a logically certain conclusion
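As a concrete illustration of deriving conclusions from axioms, here is a tiny forward-chaining sketch (a toy of my own, with hypothetical fact names) that applies rules of the form premise → conclusion until nothing new follows:

```python
# Toy axiomatic deduction via forward chaining: starting facts are
# the axioms; rules map one premise to one conclusion.
facts = {"socrates_is_human"}                       # axiom
rules = [("socrates_is_human", "socrates_is_mortal")]  # humans are mortal

changed = True
while changed:                 # repeat until a fixed point is reached
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)   # the conclusion follows with certainty
            changed = True
```

Because every step applies a rule to established facts, each derived conclusion is logically certain given the axioms, which is what makes this style of reasoning verifiable in contrast to neural generalization.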