Supervised Contrastive Learning: A learning paradigm where the contrastive loss pulls together embeddings of samples with the same label and pushes apart those with different labels.
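The pull/push behavior described above can be made concrete with a minimal NumPy sketch of the supervised contrastive (SupCon) loss; this is an illustrative implementation under the standard Khosla et al. formulation, not the paper's own code, and the temperature `tau` and its default are assumptions:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss over embeddings z of shape
    (n, d) with integer labels of shape (n,); tau is the temperature."""
    z = np.asarray(z, dtype=float)
    labels = np.asarray(labels)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize
    n = len(labels)
    sim = z @ z.T / tau                                   # pairwise similarities
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)               # anchor never contrasts with itself
    sim = sim - sim.max(axis=1, keepdims=True)            # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Pull: average log-probability over each anchor's same-label positives;
    # the softmax denominator over all other samples supplies the push.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

With identical embeddings for same-label samples the loss approaches zero; embeddings that cluster by the wrong label yield a large loss.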
Channel Independence: A modeling strategy where multivariate time series are treated as multiple independent univariate series, sharing the same model weights.
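In practice, channel independence is often realized by folding the variates into the batch dimension so one shared univariate model processes every channel. A minimal NumPy sketch (the shapes and the lookback length 96 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# A multivariate batch: (batch, lookback, channels).
x = np.random.randn(8, 96, 7)

# Channel independence: treat each of the 7 channels as its own
# univariate series, sharing one set of model weights across all of them.
x_ci = x.transpose(0, 2, 1).reshape(-1, 96, 1)  # (batch * channels, lookback, 1)
```

Sample `b * channels + c` of `x_ci` is exactly channel `c` of batch element `b`, so the univariate model sees every variate as an independent series.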
MSE: Mean Squared Error—a standard metric for regression tasks measuring the average squared difference between predicted and actual values.
MAE: Mean Absolute Error—a standard metric measuring the average absolute difference between predicted and actual values.
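The two metrics above reduce to one-line NumPy expressions; a small sketch for reference (helper names are our own):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared prediction errors."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute prediction errors."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))
```

MSE penalizes large errors quadratically, while MAE weights all errors linearly, which is why forecasting papers typically report both.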
TimesNet: A state-of-the-art deep learning architecture for general time series analysis, used as a primary baseline in this paper.
ETTh1/ETTm1: Standard benchmark datasets for time series forecasting from the Electricity Transformer Temperature (ETT) collection, recorded at hourly (ETTh1) and 15-minute (ETTm1) intervals.
SupCon: Shorthand for the Supervised Contrastive loss function defined above.
Foundation Models: Large-scale models trained on broad data to be adapted to downstream tasks; here applied to time series.