LOFT framework decouples subspace selection from transformation in orthogonal fine-tuning
New arXiv preprint introduces LOFT, a low-rank orthogonal fine-tuning framework that decouples subspace selection from transformation, recovering existing methods while enabling gradient-informed support selection across language and vision tasks.
Orthogonal parameter-efficient fine-tuning methods have treated subspace choice and transformation as a single design decision when the two should be separate levers, according to a preprint posted to arXiv on May 13.
LOFT (Low-Rank Orthogonal Fine-Tuning) frames orthogonal adaptation as a multiplicative subspace rotation, exposing support selection—where the adaptation happens—as a central design axis independent of how the transformation is parameterized. The framework recovers coordinate-based, butterfly, Householder, and principal-subspace orthogonal PEFT variants as special cases, but the real contribution is making support selection an explicit tunable component rather than a byproduct of whichever parameterization a practitioner happened to pick.
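The preprint's exact parameterization isn't reproduced in this brief, but the flavor of a multiplicative low-rank orthogonal update can be sketched. In the snippet below (illustrative only, written in PyTorch), the frozen weight is rotated by Q = I + U(R - I)Uᵀ, where the columns of U fix the support and a small orthogonal matrix R carries the transformation; swapping U between coordinate columns and principal singular directions is the kind of choice that recovers the coordinate-based and principal-subspace variants. The function name `loft_style_update` and all shapes here are assumptions for illustration, not the paper's code.

```python
import math
import torch

def loft_style_update(W, U, R):
    """Multiplicative low-rank orthogonal update (illustrative sketch, not the paper's API).

    W: (n, m) frozen pretrained weight.
    U: (n, r) support with orthonormal columns -- WHERE the rotation acts.
    R: (r, r) orthogonal matrix -- HOW the selected subspace is transformed.

    Q = I + U (R - I) U^T is orthogonal whenever U^T U = I and R^T R = I,
    and it acts as the identity outside span(U), so only r directions move.
    """
    r = R.shape[0]
    correction = U @ ((R - torch.eye(r)) @ (U.T @ W))  # rank-r part of Q @ W
    return W + correction                              # equals (I + U (R - I) U^T) @ W

n, m, r = 8, 6, 2
W = torch.randn(n, m)

# Coordinate support: only rows 0 and 3 of W are rotated (coordinate-based variant).
U_coord = torch.eye(n)[:, [0, 3]]

# Principal support: rotate the top-r left-singular subspace of W (principal-subspace variant).
U_prin = torch.linalg.svd(W, full_matrices=False).U[:, :r]

# A small 2x2 rotation as the transformation R.
theta = 0.1
R = torch.tensor([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])

W_coord = loft_style_update(W, U_coord, R)
W_prin = loft_style_update(W, U_prin, R)
```

The point of the sketch is the separation itself: U and R can be chosen independently, which is the design axis the framework makes explicit.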
The authors develop a first-order analysis showing that useful adaptation supports should be informed by the downstream training signal, not chosen arbitrarily or inherited from the pretrained weight structure. They test gradient-informed support selection strategies across language understanding benchmarks, visual transfer tasks, mathematical reasoning datasets, and multilingual out-of-distribution adaptation. Under matched parameter, memory, and compute budgets, LOFT recovers the performance of principal-subspace orthogonal adaptation, while gradient-informed supports improve the efficiency-performance trade-off: practitioners get more task adaptation per unit of parameter budget when the subspace is chosen based on what the downstream loss actually needs.
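The paper's selection rule isn't spelled out in this brief. As a rough, assumed sketch of what "gradient-informed" can mean in practice, one heuristic consistent with the description is to accumulate the downstream loss gradient at the frozen weight and take its dominant left-singular directions as the support fed to a rotation like the one sketched above; `gradient_informed_support` and the calibration setup here are hypothetical.

```python
import torch

@torch.no_grad()
def gradient_informed_support(grad_W, r):
    """Pick an r-dimensional support from the downstream training signal (assumed heuristic).

    grad_W: gradient of the task loss w.r.t. the frozen weight W, accumulated
            over one or more calibration batches.
    r:      number of adapted directions.

    Returns U with orthonormal columns spanning the directions in which the
    downstream loss is most sensitive; these would replace arbitrary or
    weight-derived supports in the earlier sketch.
    """
    U_full, _, _ = torch.linalg.svd(grad_W, full_matrices=False)
    return U_full[:, :r]

# Hypothetical calibration pass: one forward/backward through a weight that stays
# frozen during fine-tuning; its gradient is used only to choose the support.
W = torch.randn(8, 6, requires_grad=True)
x, y = torch.randn(4, 6), torch.randn(4, 8)
loss = torch.nn.functional.mse_loss(x @ W.T, y)
loss.backward()

U_task = gradient_informed_support(W.grad, r=2)  # support chosen by the task gradient
```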
The unified formulation makes it easier to compare orthogonal PEFT methods that previously looked structurally different, and the task-aware support selection strategies are immediately applicable to existing orthogonal fine-tuning codebases.
