
Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment

Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang
Hong Kong Metropolitan University, Lingnan University
IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)

📝 Paper Summary

Parameter-Efficient Fine-Tuning (PEFT) · Large Language Model Adaptation
The paper provides a comprehensive taxonomy of Parameter-Efficient Fine-Tuning (PEFT) methods—categorizing them into additive, partial, reparameterized, hybrid, and unified approaches—to address the prohibitive computational cost of adapting large pretrained models.
Core Problem
Full fine-tuning of Large Language Models (LLMs) requires updating all parameters, which creates prohibitive computational and memory demands and risks catastrophic forgetting or overfitting on small datasets.
Why it matters:
  • As models scale to billions of parameters (e.g., Falcon-180B), the hardware required for traditional fine-tuning becomes inaccessible to most researchers.
  • Full parameter updates can degrade the general knowledge preserved in the pretrained model (catastrophic forgetting).
  • Existing surveys often lack comprehensive categorization of the latest methods or quantitative comparisons.
Concrete Example: Adapting the Falcon-180B model via full fine-tuning would require at least 5120GB of GPU memory, putting it far beyond standard hardware setups, whereas PEFT methods freeze most parameters and update only a small fraction.
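A rough back-of-the-envelope makes the scale concrete. The per-parameter byte counts below are my own assumption (a common mixed-precision Adam setup: fp16 weights and gradients plus fp32 master weights and two fp32 moment estimates), not the paper's exact accounting; activations, buffers, and framework overhead push the total further toward the quoted 5120GB figure.

```python
# Rough memory estimate for full fine-tuning of a 180B-parameter model.
# Assumption: mixed-precision Adam keeps, per parameter,
#   2 bytes fp16 weights + 2 bytes fp16 gradients
#   + 4 bytes fp32 master weights + 4 + 4 bytes fp32 Adam moments
# = 16 bytes/param, excluding activations and overhead.
params = 180e9
bytes_per_param = 2 + 2 + 4 + 4 + 4
total_gb = params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB for weights, gradients, and optimizer state alone")
```

Even before activations, this already exceeds the memory of any single accelerator by orders of magnitude, which is the gap PEFT methods target.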
Key Novelty
Unified PEFT Taxonomy
  • Classifies PEFT techniques into five distinct categories: Additive (adding new parameters), Partial (tuning a subset of existing parameters), Reparameterized (low-rank transforms), Hybrid (combinations), and Unified.
  • Detailed synthesis of Adapter-based methods (Sequential, Residual, Parallel) and Soft Prompt methods (Prompt-tuning, Prefix-tuning) into a structured framework.
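To make the "reparameterized" category concrete, here is a minimal NumPy sketch of a LoRA-style low-rank update, the best-known method in that family. The layer sizes, rank, and scaling constant are illustrative choices of mine, not values from the paper: the pretrained weight W stays frozen, and only two small factors A and B are trained, so the effective weight is W + (alpha/r)·BA.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 1024, 1024, 8   # hypothetical layer size and low rank
alpha = 16                        # hypothetical scaling hyperparameter

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in)) * 0.02

# Trainable low-rank factors. B starts at zero, so the initial
# effective weight equals the pretrained W exactly.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x):
    # Base projection plus low-rank update: W x + (alpha/r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

trainable = A.size + B.size
full = W.size
print(f"trainable params: {trainable} ({100 * trainable / full:.2f}% of full)")
# trainable params: 16384 (1.56% of full)
```

Only A and B (about 1.6% of the layer's parameters here) receive gradients, which is the mechanism behind the memory savings the survey catalogues across the reparameterized family.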
Breakthrough Assessment
6/10
A systematic survey that organizes a chaotic field (PEFT). While it may not introduce a new SOTA model itself, the taxonomy is highly valuable for understanding the landscape.