CoT: Chain-of-Thought—a prompting technique where the model generates intermediate reasoning steps before the final answer
DA: Direct Answering—a prompting paradigm where the model generates the final answer immediately without intermediate reasoning
Contextual Distance: The number of tokens separating the in-context demonstrations from the point where the model generates the final answer (usually increased by CoT rationales)
Pattern-based ICL: In-context learning tasks where input-output pairs follow a consistent, explicit, and verbalizable rule (e.g., arithmetic progression, string manipulation)
Dummy Rationale: Semantically neutral text (e.g., Shakespeare sonnets) inserted in place of a genuine rationale, used to isolate the effect of token length (contextual distance) on performance
Explicit-Implicit Hybrid Mechanism: The hypothesis that CoT predictions arise from a mix of explicit reasoning (the rationale) and implicit reasoning (latent pattern matching), with implicit often compensating for flawed explicit logic
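The terms above can be tied together with a small sketch. The toy task, prompt wording, and helper names below are illustrative assumptions, not from the source: it builds the three prompt formats the glossary contrasts (DA, CoT, and a dummy-rationale control) for a pattern-based ICL task, and uses whitespace-token counting as a crude proxy for contextual distance.

```python
# Hypothetical sketch of the three prompting formats contrasted in the
# glossary. The demonstrations follow a verbalizable rule (arithmetic
# progression), i.e., a pattern-based ICL task.
DEMOS = [("2 4 6", "8"), ("1 3 5", "7")]
QUERY = "10 20 30"

def build_prompt(style: str) -> str:
    """Assemble an ICL prompt; `style` is 'da', 'cot', or 'dummy'."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in DEMOS]
    lines.append(f"Input: {QUERY}")
    if style == "cot":
        # Explicit intermediate reasoning emitted before the final answer.
        lines.append("Reasoning: the sequence increases by 10, "
                     "so the next term is 40.")
    elif style == "dummy":
        # Semantically neutral filler of comparable length, isolating the
        # effect of added contextual distance from the reasoning content.
        lines.append("Filler: Shall I compare thee to a summer's day? ...")
    lines.append("Output:")
    return "\n".join(lines)

def contextual_distance(prompt: str) -> int:
    """Whitespace tokens between the last demonstration's answer and the
    final-answer slot (a rough stand-in for the token-level definition)."""
    after_demos = prompt.split(f"Output: {DEMOS[-1][1]}")[-1]
    return len(after_demos.split())

# Both CoT and dummy rationales lengthen the prompt, so they increase
# contextual distance relative to direct answering.
assert contextual_distance(build_prompt("cot")) > contextual_distance(build_prompt("da"))
assert contextual_distance(build_prompt("dummy")) > contextual_distance(build_prompt("da"))
```

Under this setup, comparing model accuracy on the "cot" and "dummy" prompts at matched lengths is what lets the dummy-rationale control attribute any degradation to contextual distance rather than to the reasoning text itself.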