
Low-Rank Adaptation

Definition

A parameter-efficient fine-tuning technique that freezes the pre-trained model weights and injects trainable low-rank decomposition matrices into transformer layers. For a frozen weight matrix W, the weight update is represented as the product BA of two low-rank factors whose rank r is far smaller than the dimensions of W. Because only B and A are trained, LoRA drastically reduces the number of trainable parameters and the memory required for fine-tuning.
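The idea can be sketched in a few lines of numpy. This is a minimal illustration, not any library's API: the names (W, A, B, alpha, r, lora_forward) and shapes are assumptions chosen for clarity. B is zero-initialized so the adapted layer starts out identical to the frozen one, matching the standard LoRA initialization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 2   # layer dimensions and LoRA rank, r << min(d_out, d_in)
alpha = 4.0                 # scaling hyperparameter (illustrative value)

W = rng.normal(size=(d_out, d_in))      # pre-trained weight: frozen, never updated
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # h = W x + (alpha / r) * B A x; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Note the parameter saving: the trainable factors hold r * (d_in + d_out) = 48 values here, versus d_in * d_out = 128 for the full weight matrix, and the gap widens sharply at realistic model dimensions.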

Defined Term

LoRA