Evaluation Setup
Long-horizon forecasting on univariate and multivariate tasks
Benchmarks:
- Long-horizon forecasting benchmarks (Forecasting)
Metrics:
- MSE (Mean Squared Error)
- MAE (Mean Absolute Error)
- Statistical methodology: Not explicitly reported in the paper
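The two metrics above are standard pointwise forecast errors. As a quick reference, a minimal sketch of how they are computed over a forecast horizon (generic definitions, not code from the paper):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared pointwise errors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute pointwise errors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy 4-step horizon
actual   = [1.0, 2.0, 3.0, 4.0]
forecast = [1.5, 2.0, 2.5, 4.0]
print(mse(actual, forecast))  # → 0.125
print(mae(actual, forecast))  # → 0.25
```

MSE penalizes large errors more heavily than MAE, which is why benchmarks typically report both.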
Key Results
TimeSqueeze demonstrates superior training efficiency compared to point-token baselines.

| Benchmark | Metric | Baseline | This Paper | Δ |
| --- | --- | --- | --- | --- |
| Pretraining Convergence | Convergence Speed | 1x | 20x | +19x |
| Pretraining Data Efficiency | Data Efficiency | 1x | 8x | +7x |
Main Takeaways
- TimeSqueeze consistently outperforms architectures using point-wise tokenization or fixed-size patching across long-horizon benchmarks.
- The dynamic patching strategy effectively balances information preservation and computational cost.
- The method scales well, showing consistent gains in both zero-shot and full-shot settings.
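The trade-off in the second takeaway can be illustrated with a toy sketch. Note this is a hypothetical heuristic (variance-driven patch sizing), not TimeSqueeze's actual algorithm: the idea is that smooth regions are compressed into a few coarse patches (fewer tokens, less compute) while volatile regions keep fine-grained patches (more tokens, more detail).

```python
import numpy as np

def dynamic_patches(series, base=4, max_patch=16, var_threshold=0.5):
    """Split a 1-D series into variable-length patches.

    Hypothetical heuristic, for illustration only: grow a patch
    while the local window stays smooth (low variance), capped
    at max_patch; stop growing once variance exceeds the threshold.
    """
    series = np.asarray(series, dtype=float)
    patches, start = [], 0
    while start < len(series):
        end = min(start + base, len(series))
        # Extend the patch while the lookahead window stays smooth.
        while end - start < max_patch and end < len(series):
            window = series[start:min(end + base, len(series))]
            if np.var(window) > var_threshold:
                break
            end = min(end + base, len(series))
        patches.append(series[start:end])
        start = end
    return patches

rng = np.random.default_rng(0)
smooth = np.zeros(32)                # flat signal → few large patches
noisy = rng.normal(0.0, 2.0, 32)     # volatile signal → more small patches
print(len(dynamic_patches(smooth)))
print(len(dynamic_patches(noisy)))
```

Fewer patches on the smooth input and more on the noisy one mirrors the claimed balance: token count (and hence compute) adapts to local information content.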