CoT: Chain-of-Thought—a prompting strategy that includes intermediate reasoning steps between input and output.
ICL: In-Context Learning—the ability of LLMs to learn tasks from a few examples in the prompt without parameter updates.
BMA: Bayesian Model Averaging—a statistical method that estimates a value by weighting predictions from different models (or task parameters) by their posterior probability.
MLE: Maximum Likelihood Estimation—a method of estimating the parameters of a probability distribution by maximizing a likelihood function.
PAC-Bayes: Probably Approximately Correct-Bayesian—a framework for deriving bounds on the generalization error of learning algorithms.
SC-CoT: Self-Consistency Chain-of-Thought—a variant of CoT that samples multiple reasoning paths and selects the final answer by majority vote across them.
ToT: Tree-of-Thought—a prompting method that explores multiple reasoning paths in a tree structure.
Prompting Error: The statistical error arising from inferring the true task parameter $\theta^*$ from only a finite number of demonstration examples in the prompt.
Pretraining Error: The error arising from the LLM's parameters not perfectly matching the true data distribution due to finite pretraining data.
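The contrast between BMA and MLE above can be made concrete with a small numeric sketch. The posterior probabilities and per-model predictions below are made up for illustration; BMA averages predictions weighted by posterior mass, while an MLE-style estimate commits to the single most probable model.

```python
import numpy as np

# Hypothetical example: three candidate task parameters (models),
# their posterior probabilities, and each model's prediction for a query.
# All numbers are invented for illustration.
posteriors = np.array([0.6, 0.3, 0.1])    # p(theta_i | data)
predictions = np.array([2.0, 3.0, 10.0])  # each model's predicted value

# Bayesian Model Averaging: posterior-weighted average of predictions.
bma_estimate = float(np.sum(posteriors * predictions))

# MLE-style alternative: use only the single most probable model.
mle_estimate = float(predictions[np.argmax(posteriors)])

print(bma_estimate)  # 0.6*2 + 0.3*3 + 0.1*10 = 3.1
print(mle_estimate)  # 2.0, the top model's prediction
```

Note how the low-probability outlier model still nudges the BMA estimate, whereas MLE ignores it entirely.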
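The SC-CoT entry above reduces, mechanically, to a majority vote over sampled answers. The sketch below illustrates that aggregation step only (the sampled answers are made up; in practice each would be the final answer extracted from one sampled reasoning path).

```python
from collections import Counter

# Hypothetical final answers extracted from five sampled
# chain-of-thought completions for the same question.
sampled_answers = ["42", "42", "17", "42", "17"]

# Self-consistency aggregation: return the most frequent answer.
counts = Counter(sampled_answers)
most_consistent, votes = counts.most_common(1)[0]

print(most_consistent)  # "42" (3 of 5 samples agree)
```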