LoRA: Low-Rank Adaptation—a parameter-efficient fine-tuning (PEFT) technique that injects trainable rank-decomposition matrices into frozen model layers
SVD: Singular Value Decomposition—factorizing a matrix into singular vectors and values, used here to extract dominant update directions
delta weights: The change in model weights (ΔW = BA) learned during a training round
randomized SVD: An efficient approximation algorithm for SVD that uses random projections to reduce computational cost for large matrices
residual components: The parts of the weight update that fall outside the top-r singular components; usually discarded in LoRA but merged into the backbone here
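The interplay between these terms can be made concrete with a small sketch. This is a minimal illustration, not the paper's implementation: the matrix shapes, client count, and the simple Halko-style randomized SVD below are all assumptions chosen for clarity. Summing several clients' ΔW = BA updates yields a matrix whose rank can exceed r, so truncating to the top-r singular components leaves a nonzero residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: LoRA factors for a 64x32 weight, rank r = 4 per client.
d_out, d_in, r = 64, 32, 4

# Aggregating 3 clients' updates gives a delta_W whose rank can exceed r.
delta_W = sum(
    rng.normal(size=(d_out, r)) @ rng.normal(size=(r, d_in)) for _ in range(3)
)

def randomized_svd(M, k, oversample=10, n_iter=2):
    """Approximate top-k SVD using random projections (illustrative sketch)."""
    P = rng.normal(size=(M.shape[1], k + oversample))
    Y = M @ P                               # sample the range of M
    for _ in range(n_iter):                 # power iterations sharpen the estimate
        Y = M @ (M.T @ Y)
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for range(M)
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

U, s, Vt = randomized_svd(delta_W, k=r)
top_r = (U * s) @ Vt          # dominant rank-r component (kept as LoRA factors)
residual = delta_W - top_r    # residual components (merged into the backbone)

# By construction, the two parts recompose the full update.
print(np.allclose(top_r + residual, delta_W))  # True
```

The residual carries the singular directions beyond rank r; standard LoRA aggregation would drop it, whereas merging it into the backbone preserves the full update.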
FedIT: A baseline federated LoRA method that averages the clients' A and B matrices separately
FLoRA: A baseline that merges updates into the backbone and re-initializes LoRA modules each round
pass@1: Evaluation metric for code generation measuring the percentage of problems where the first generated solution is correct
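As a minimal sketch of the pass@1 definition above (the function name and example data are hypothetical; with one sample per problem the metric reduces to a simple mean):

```python
def pass_at_1(results):
    """pass@1 over a benchmark: the fraction of problems whose first
    generated solution passes its tests.

    results: one list of booleans per problem (True = sample passed).
    """
    return sum(samples[0] for samples in results) / len(results)

# 4 hypothetical problems; the first sample is correct for 3 of them.
print(pass_at_1([[True], [True, False], [False, True], [True]]))  # 0.75
```

When several samples per problem are drawn, the commonly used unbiased pass@k estimator generalizes this first-sample average.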