A parameter-efficient fine-tuning technique (Low-Rank Adaptation) that trains small, low-rank adapter matrices attached to a frozen base model rather than updating the full model weights. Because LoRA adapters are common deliverables in custom AI projects, contracts often state ownership, portability to other base models, and confidentiality terms for them explicitly.
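A minimal sketch of the idea in PyTorch, assuming an illustrative `LoRALinear` wrapper and hypothetical rank/alpha values: the base layer's weights are frozen, and only the small low-rank matrices A and B are trained and shipped as the adapter.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch: a frozen linear layer plus a trainable low-rank update W + (alpha/rank) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base model weights stay frozen

        # Low-rank factors: A is small-random, B starts at zero so the adapter
        # initially contributes nothing.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the small trainable adapter path
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Only the adapter parameters are trainable; these (lora_A, lora_B) are what
# gets saved and delivered, not the full base-model weights.
layer = LoRALinear(nn.Linear(768, 768))
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['lora_A', 'lora_B']
```

The adapter file is typically a small fraction of the base model's size, which is why it can be handed over, versioned, and licensed separately from the base model it was trained against.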