Direct Preference Optimization (DPO) is a training technique that aligns models directly on preference data without requiring a separate reward model. It is a simpler alternative to RLHF for alignment training. Not to be confused with Data Protection Officer (DPO), an organizational role.
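A minimal sketch of the DPO objective, assuming per-sequence log-probabilities have already been computed for the chosen and rejected responses; the function name, argument names, and the default `beta` value are illustrative, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO loss: increase the policy's preference for the chosen
    response over the rejected one, measured relative to a frozen reference model."""
    # Log-ratios of policy vs. reference for each response in the preference pair.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin); beta controls how strongly preferences are enforced.
    losses = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio))
    return losses.mean()
```

Because the loss depends only on log-probabilities from the policy and a frozen reference model, no separate reward model needs to be trained, which is the key difference from RLHF pipelines.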