PoT: Program of Thought—a prompting strategy in which the model generates executable code (e.g., Python) to carry out reasoning steps, rather than expressing them only in natural language
LoRA: Low-Rank Adaptation—a parameter-efficient fine-tuning technique that freezes the base model and trains only small low-rank matrices injected into selected weight layers
Teacher-Student: A framework where a large, capable model (Teacher) generates training data to supervise a smaller model (Student)
SFT: Supervised Fine-Tuning—training a model on a labeled dataset of inputs and desired outputs
Entity Extraction: Identifying and isolating the specific values (numbers, dates, names) from text or tables that a calculation requires
Concept Accuracy: A proposed metric measuring whether a model correctly identifies the necessary financial formula/logic, independent of calculation errors
SLM: Small Language Model—typically models with under ~15 billion parameters (e.g., Mistral-7B, Phi-3)