
Efficient Training of Robust Traditional Chinese LLaMA-1B on a Single Consumer GPU: Continual Pre-training, SFT, and DPO

Yu-Cheng Chih, Ming-Tao Duan, Yong-Hao Hou
arXiv (2025)
Tags: Pretraining, RL, Benchmark

This paper hasn't been summarized yet.
