A Hybrid PAC Reinforcement Learning Algorithm for Human-Robot Interaction

Date
2022-03-09
Journal Title
Frontiers in Robotics and AI
Abstract
This paper presents a new hybrid probably approximately correct (PAC) reinforcement learning (RL) algorithm for Markov decision processes (MDPs) that retains favorable features of both model-based and model-free methodologies. The algorithm, referred to as Dyna-Delayed Q-learning (DDQ), combines the model-free Delayed Q-learning algorithm with the model-based R-max algorithm and outperforms both in most cases. The paper includes a PAC analysis of the DDQ algorithm and a derivation of its sample complexity. Numerical results support the claim that the new algorithm is more sample-efficient than its parent algorithms, as well as the best-known model-free and model-based PAC algorithms, in application. A real-world experimental implementation of DDQ in the context of pediatric motor rehabilitation facilitated by infant-robot interaction highlights the potential benefits of the reported method.
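To make the hybrid idea concrete, below is a minimal illustrative sketch, in Python, of how a Dyna-style tabular agent might pair Delayed-Q-learning-style batched optimistic updates (model-free) with R-max-style planning over an empirical model (model-based). This is not the authors' implementation: the class name HybridDynaAgent and the parameters m, epsilon1, r_max, and planning_steps are assumptions introduced here for illustration only; the actual DDQ update rules and knownness criteria are specified in the paper.

from collections import defaultdict

class HybridDynaAgent:
    # Hypothetical sketch of a Dyna-style hybrid; not the authors' DDQ code.
    def __init__(self, states, actions, gamma=0.95, m=10, epsilon1=0.1,
                 r_max=1.0, planning_steps=20):
        self.states, self.actions = list(states), list(actions)
        self.gamma, self.m, self.eps1 = gamma, m, epsilon1
        self.planning_steps = planning_steps
        # Optimistic initialization, used by both Delayed Q-learning and R-max.
        v_max = r_max / (1.0 - gamma)
        self.Q = {(s, a): v_max for s in self.states for a in self.actions}
        # Delayed-Q-style accumulators: batch m targets, then attempt an update.
        self.count = defaultdict(int)
        self.acc = defaultdict(float)
        # Empirical model statistics for R-max-style planning.
        self.visits = defaultdict(int)
        self.trans = defaultdict(lambda: defaultdict(int))
        self.rew = defaultdict(float)

    def act(self, state):
        # Greedy in the optimistic value function.
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def observe(self, s, a, r, s2):
        # Model-free path: Delayed-Q-style batched optimistic update.
        self.count[(s, a)] += 1
        self.acc[(s, a)] += r + self.gamma * max(self.Q[(s2, b)] for b in self.actions)
        if self.count[(s, a)] == self.m:
            target = self.acc[(s, a)] / self.m
            # Commit the update only on a sufficiently large decrease.
            if self.Q[(s, a)] - target >= 2 * self.eps1:
                self.Q[(s, a)] = target + self.eps1
            self.count[(s, a)] = 0
            self.acc[(s, a)] = 0.0
        # Model-based path: accumulate statistics; plan once (s, a) is "known".
        self.visits[(s, a)] += 1
        self.trans[(s, a)][s2] += 1
        self.rew[(s, a)] += r
        if self.visits[(s, a)] >= self.m:
            self._plan()

    def _plan(self):
        # Value-iteration sweeps over the empirical model, restricted to known
        # pairs; unknown pairs keep their optimistic (exploratory) value.
        for _ in range(self.planning_steps):
            for (s, a), n in list(self.visits.items()):
                if n < self.m:
                    continue
                r_hat = self.rew[(s, a)] / n
                backup = sum((c / n) * max(self.Q[(s2, b)] for b in self.actions)
                             for s2, c in self.trans[(s, a)].items())
                # The model-based backup may only tighten the optimistic value.
                self.Q[(s, a)] = min(self.Q[(s, a)], r_hat + self.gamma * backup)

The design intent illustrated here is that the model-based backup can only tighten, never inflate, the optimistic values maintained by the model-free path; this is one intuition for why a hybrid of the two PAC approaches can match or improve on the sample efficiency of either parent.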
Description
This article was originally published in Frontiers in Robotics and AI. The version of record is available at: https://doi.org/10.3389/frobt.2022.797213
Keywords
reinforcement learning, probably approximately correct, Markov decision process, human-robot interaction, sample complexity
Citation
Zehfroosh, Ashkan, and Herbert G. Tanner. 2022. “A Hybrid PAC Reinforcement Learning Algorithm for Human-Robot Interaction.” Frontiers in Robotics and AI 9 (March): 797213. https://doi.org/10.3389/frobt.2022.797213.