Lichang Chen*, Chen Zhu*, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro
*denotes equal contribution, order is random.
A well-formatted, verbose, but less helpful response from an LLM can often deceive human annotators and LLM judges into assigning high scores. To address this reward-hacking issue in preference optimization, we build a more reliable reward model, ODIN, which disentangles the length feature from the reward. Applying ODIN to PPO and ReMax achieves a significantly higher Pareto front than the baselines.
The overview of our method:
ODIN has two heads that predict two rewards, but only one of them is used for RL. In the RM training stage, ODIN is trained on the same human preference data as a vanilla RM, with a carefully designed loss that disentangles the length signal and the quality signal into the two heads. Only the quality head is used in the RL fine-tuning stage; the length reward is discarded to reduce reward hacking on length.
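For intuition, here is a minimal PyTorch sketch of such a two-head design. It is not the code in this repository: the class and function names, the exact form of the length-correlation and orthogonality terms, and the loss coefficients are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadRewardModel(nn.Module):
    """Illustrative two-head RM: a shared backbone, a quality head, and a length head."""

    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                      # assumed to return one hidden state per sequence
        self.quality_head = nn.Linear(hidden_size, 1, bias=False)
        self.length_head = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, input_ids, attention_mask):
        h = self.backbone(input_ids, attention_mask)  # shape: (batch, hidden_size)
        return self.quality_head(h).squeeze(-1), self.length_head(h).squeeze(-1)


def disentangled_rm_loss(model, rq_chosen, rl_chosen, rq_rejected, rl_rejected,
                         len_chosen, len_rejected, lam_len=1.0, lam_orth=1.0):
    """Ranking loss on the total reward + a length term + an orthogonality penalty."""
    # 1) Bradley-Terry ranking loss on the summed reward of chosen vs. rejected responses.
    rank_loss = -F.logsigmoid((rq_chosen + rl_chosen) - (rq_rejected + rl_rejected)).mean()

    # 2) Push the length head to capture the length signal: maximize the batch-level
    #    correlation between the length reward and the response length.
    rl = torch.cat([rl_chosen, rl_rejected])
    ln = torch.cat([len_chosen, len_rejected]).float()
    rl_centered, ln_centered = rl - rl.mean(), ln - ln.mean()
    corr = (rl_centered * ln_centered).sum() / (rl_centered.norm() * ln_centered.norm() + 1e-8)
    length_loss = -corr

    # 3) Keep the two head weight vectors (nearly) orthogonal so the quality head
    #    carries as little length information as possible.
    wq = F.normalize(model.quality_head.weight, dim=-1)
    wl = F.normalize(model.length_head.weight, dim=-1)
    orth_loss = (wq * wl).sum().abs()

    return rank_loss + lam_len * length_loss + lam_orth * orth_loss
```

At RL time only the first returned value (the quality reward) would be passed to the policy optimizer; the length head exists only to absorb the length signal during RM training.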
We recommend using Docker for all the experiments:
# Docker image link
https://hub.docker.com/r/morningpig/dsp_rlhf
# Reward model training is important to the success of the whole pipeline.
# The details about the parameters are in the script file.
bash scripts/train_rm.sh
# Set up the paths in the script file before submitting the job.
sbatch scripts/ppo.slurm
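Conceptually, the RL stage queries only the quality head. Below is a tiny hypothetical sketch, reusing the TwoHeadRewardModel from the sketch above; the actual PPO script wires this up through its own configuration.

```python
import torch


@torch.no_grad()
def rl_reward(reward_model, input_ids, attention_mask):
    """Return only the quality reward for PPO; the length reward is discarded."""
    quality_reward, _length_reward = reward_model(input_ids, attention_mask)
    return quality_reward
```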
# Please refer to the evaluation folder for more details.
bash scripts/eval.sh
If you find our repo useful, please consider citing our paper:
@inproceedings{chen2024odin,
title={ODIN: Disentangled Reward Mitigates Hacking in RLHF},
author={Lichang Chen and Chen Zhu and Jiuhai Chen and Davit Soselia and Tianyi Zhou and Tom Goldstein and Heng Huang and Mohammad Shoeybi and Bryan Catanzaro},
booktitle={Forty-first International Conference on Machine Learning},
year={2024},
url={https://openreview.net/forum?id=zcIV8OQFVF}
}