ISA: Implicit Sentiment Analysis—detecting sentiment in text that lacks explicit emotional words (e.g., factual statements implying an opinion)
CoT: Chain-of-Thought—a prompting technique where the model generates intermediate reasoning steps before the final answer
ESA: Explicit Sentiment Analysis—traditional sentiment analysis where texts contain direct emotional words like 'happy' or 'terrible'
THOR: Three-hop Reasoning—the proposed framework that infers Aspect → Opinion → Polarity sequentially
Self-consistency: A decoding strategy where the model samples multiple independent reasoning paths and selects the most frequent final answer via majority voting
Reasoning Revising: A supervised fine-tuning technique where the model is trained to predict the final label using its own generated reasoning steps as input context
LLM: Large Language Model—massive neural networks trained on vast text data, capable of complex tasks via prompting
Flan-T5: An instruction-tuned version of the T5 (Text-to-Text Transfer Transformer) model
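To make the THOR and self-consistency entries concrete, here is a minimal Python sketch of a three-hop prompting chain (Aspect → Opinion → Polarity) with a majority vote over sampled answers. Everything here is illustrative: the `llm` callable, the prompt wording, and the stub model are hypothetical stand-ins, not the paper's exact templates.

```python
from collections import Counter
from typing import Callable

def thor_polarity(context: str, target: str, llm: Callable[[str], str]) -> str:
    """Three-hop reasoning: each hop appends the previous answer to the prompt."""
    # Hop 1: infer the fine-grained aspect the target refers to.
    p1 = f'Given the sentence "{context}", which specific aspect of {target} is mentioned?'
    aspect = llm(p1)
    # Hop 2: infer the latent opinion toward that aspect.
    p2 = f"{p1} The aspect is: {aspect}. What is the underlying opinion about it?"
    opinion = llm(p2)
    # Hop 3: infer the final sentiment polarity from the accumulated reasoning.
    p3 = f"{p2} The opinion is: {opinion}. What is the sentiment polarity toward {target}?"
    return llm(p3)

def self_consistent(answers: list[str]) -> str:
    """Self-consistency: majority vote over final answers from sampled paths."""
    return Counter(answers).most_common(1)[0][0]

# Usage with a deterministic stub standing in for a real LLM:
def stub_llm(prompt: str) -> str:
    if "polarity" in prompt:
        return "negative"
    if "opinion" in prompt:
        return "the battery drains too quickly"
    return "battery life"

polarity = thor_polarity("The phone died before lunch.", "the phone", stub_llm)
vote = self_consistent([polarity, "negative", "positive"])
```

Note how the sentence contains no explicit emotional word, which is exactly the ISA setting: the polarity must be inferred from the implied opinion rather than read off the surface text.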