eCite Digital Repository

Weak human preference supervision for deep reinforcement learning


Cao, Z and Wong, KC and Lin, C-T, Weak human preference supervision for deep reinforcement learning, IEEE Transactions on Neural Networks and Learning Systems ISSN 2162-237X (In Press) [Refereed Article]

Copyright Statement

Copyright 2021 IEEE

Official URL:


Current reward learning from human preferences can resolve complex reinforcement learning (RL) tasks without access to a reward function by defining a single fixed preference between pairs of trajectory segments. However, the judgement of preferences between trajectories is not dynamic and still requires human input over thousands of iterations. In this study, we propose a weak human preference supervision framework, for which we developed a human preference scaling model that naturally reflects the human perception of the degree of weak choices between trajectories, and we established a human-demonstration estimator via supervised learning that generates predicted preferences to reduce the number of required human inputs. The proposed weak human preference supervision framework effectively solves complex RL tasks and achieves higher cumulative rewards in simulated robot locomotion (MuJoCo games) than single fixed human preferences. Furthermore, our human-demonstration estimator requires human feedback for less than 0.01% of the agent's interactions with the environment and significantly reduces the cost of human inputs, by up to 30% compared with existing approaches. To demonstrate the flexibility of our approach, we released a video showing comparisons of the behaviours of agents trained on different types of human input. We believe that our naturally inspired human preferences with weakly supervised learning are beneficial for precise reward learning and can be applied to state-of-the-art RL systems, such as human-autonomy teaming systems.
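The core idea of replacing a single fixed preference with a scaled one can be sketched as a soft-label variant of the standard pairwise reward-learning objective. The snippet below is a minimal illustration, assuming a Bradley-Terry model over segment returns; the function name and the exact form of the scaling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def preference_loss(r1, r2, mu):
    """Cross-entropy loss for a pair of trajectory segments under a
    Bradley-Terry model, with a scaled preference label mu in [0, 1]
    instead of a hard 0/1 choice (mu = 0.5 encodes indifference).

    r1, r2: summed predicted rewards of segments 1 and 2.
    NOTE: illustrative sketch only, not the authors' code.
    """
    # Probability that segment 1 is preferred, from predicted returns.
    p1 = np.exp(r1) / (np.exp(r1) + np.exp(r2))
    p2 = 1.0 - p1
    # Soft-label cross-entropy: mu and (1 - mu) weight the two outcomes,
    # so weak (graded) human choices shape the reward model directly.
    return -(mu * np.log(p1) + (1.0 - mu) * np.log(p2))
```

With equal predicted returns and an indifferent label (mu = 0.5) the loss is log 2; the loss shrinks as the reward model's ranking agrees more strongly with the human's graded choice.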

Item Details

Item Type:Refereed Article
Keywords:deep reinforcement learning, weak human preferences, scaling, supervised learning
Research Division:Information and Computing Sciences
Research Group:Artificial intelligence
Research Field:Autonomous agents and multiagent systems
Objective Division:Information and Communication Services
Objective Group:Information systems, technologies and services
Objective Field:Artificial intelligence
UTAS Author:Cao, Z (Dr Zehong Cao)
UTAS Author:Wong, KC (Mr Kai Chiu Wong)
ID Code:144663
Year Published:In Press
Deposited By:Information and Communication Technology
Deposited On:2021-06-03
Last Modified:2021-09-28
