Bidirectional Reasoning: The ability to perform both a forward transformation and its inverse (e.g., obfuscation and deobfuscation) without explicit training on the reverse direction
Cognitive Specialization: A learning pathology where training on a forward task creates a directional bias, improving forward performance while degrading reverse reasoning capabilities
Contrastive Fine-Tuning (CFT): A training method using triplets of examples (an anchor, a positive, and a negative) to force the model to learn semantic distinctions rather than just surface patterns
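A minimal sketch of the triplet objective that underlies contrastive training, assuming a Euclidean distance over embedding vectors; the function names and toy embeddings are hypothetical, not the paper's implementation:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: the loss is zero once the negative is
    at least `margin` farther from the anchor than the positive is."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Toy embeddings: the positive is near the anchor, the negative is far,
# so the margin is already satisfied and the loss is zero.
anchor, positive, negative = [0.0, 0.0], [0.1, 0.0], [3.0, 4.0]
print(triplet_loss(anchor, positive, negative))  # → 0.0
```

Minimizing this loss pulls semantically equivalent pairs together and pushes mismatched pairs apart, which is what prevents the model from keying on surface patterns alone.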
Obfuscation: Transforming code to make it difficult for humans to read while preserving its computational logic (semantics)
Deobfuscation: The reverse process of obfuscation; restoring code to a readable state while maintaining its logic
CodeBLEU: A metric for evaluating code generation that considers syntactic and semantic features (like data flow) alongside n-gram matching
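CodeBLEU's standard formulation is a weighted sum of four component scores (n-gram match, keyword-weighted n-gram match, AST match, and data-flow match), with equal weights by default. The component values below are hypothetical placeholders; computing them for real requires tokenization, parsing, and data-flow extraction:

```python
def codebleu(ngram, weighted_ngram, ast_match, dataflow_match,
             weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four CodeBLEU components into a single score in [0, 1]."""
    a, b, c, d = weights
    return a * ngram + b * weighted_ngram + c * ast_match + d * dataflow_match

# Hypothetical component scores for one candidate/reference pair.
print(codebleu(0.6, 0.5, 0.8, 0.7))  # → 0.65
```

The AST and data-flow terms are what let the metric credit a candidate that is syntactically and semantically equivalent to the reference even when surface n-grams differ.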
LoRA: Low-Rank Adaptation—a parameter-efficient fine-tuning technique that freezes pre-trained weights and injects trainable rank decomposition matrices
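A toy sketch of the LoRA update, using plain Python and small hypothetical dimensions: the frozen weight W is left untouched, and the trainable update is the product of a zero-initialized matrix B (d x r) and a randomly initialized matrix A (r x k), so the effective weight starts out identical to W:

```python
import random

def matmul(X, Y):
    """Naive matrix multiply for small demo matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r = 4, 4, 2  # full dimensions vs. low rank (hypothetical sizes)
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen weight
A = [[random.gauss(0, 0.02) for _ in range(k)] for _ in range(r)]   # trainable, random init
B = [[0.0] * r for _ in range(d)]                                   # trainable, zero init

delta = matmul(B, A)  # rank-r update; exactly zero before any training
W_eff = [[W[i][j] + delta[i][j] for j in range(k)] for i in range(d)]
print(W_eff == W)  # → True: zero-init B means fine-tuning starts from W
```

Only A and B (d*r + r*k parameters) receive gradients, which is why LoRA is far cheaper than updating the full d*k weight matrix.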
Dead Code Insertion: An obfuscation technique where non-functional code (code that doesn't affect the program's output) is added to confuse readers
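A toy illustration of dead code insertion: the obfuscated function adds an unused computation and an unreachable branch, yet returns the same value as the original for every input. The function names and inserted statements are illustrative, not taken from any particular obfuscator:

```python
def original(x):
    return x * 2

def obfuscated(x):
    _junk = (x + 7) * 0   # dead code: computed, then never used
    if False:             # unreachable branch: never executes
        x = x - 999
    return x * 2          # the live logic is unchanged

# The two functions agree on every input: semantics are preserved.
print(all(original(i) == obfuscated(i) for i in range(-5, 6)))  # → True
```

Because the inserted statements cannot affect the return value, the transformation degrades readability without changing the program's observable behavior.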
Chain-of-Thought: A prompting strategy where the model is encouraged to generate intermediate reasoning steps before the final answer