AI Bibliography
Kauvar, I., Doyle, C., Zhou, L., & Haber, N. (2023). Curious replay for model-based adaptation. International Conference on Machine Learning.
Resource type: Conference Paper. BibTeX citation key: Kauvar2023
Categories: Artificial Intelligence, Cognitive Science, Computer Science, Data Sciences, Decision Theory, General
Subcategories: AI transfer learning, Autonomous systems, Decision making, Deep learning, Machine learning, Q-learning
Creators: Doyle, Haber, Kauvar, Zhou
Collection: International Conference on Machine Learning
Abstract |
Agents must be able to adapt quickly as an environment changes. We find that existing model-based reinforcement learning agents are unable to do this well, in part because of how they use past experiences to train their world model. Here, we present Curious Replay, a form of prioritized experience replay tailored to model-based agents through the use of a curiosity-based priority signal. Agents using Curious Replay exhibit improved performance in an exploration paradigm inspired by animal behavior and on the Crafter benchmark. DreamerV3 with Curious Replay surpasses state-of-the-art performance on Crafter, achieving a mean score of 19.4 that substantially improves on the previous high score of 14.5 by DreamerV3 with uniform replay, while also maintaining similar performance on the DeepMind Control Suite. Code for Curious Replay is available at github.com/AutonomousAgentsLab/curiousreplay.
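The abstract describes prioritized experience replay driven by a curiosity-based priority signal. The following is a minimal sketch of that general idea, not the authors' implementation: it assumes the priority of a stored transition is set from the world model's prediction loss when that transition was last replayed, and all class and parameter names here are hypothetical.

```python
import random

class CuriousReplayBuffer:
    """Hypothetical sketch: prioritized replay where the priority signal
    is a curiosity-style quantity (here, world-model prediction loss)."""

    def __init__(self, eps=1e-3, alpha=0.7):
        self.items = []        # stored transitions
        self.priorities = []   # one priority per stored transition
        self.eps = eps         # keeps every sampling weight strictly positive
        self.alpha = alpha     # how sharply priorities skew sampling

    def add(self, transition, initial_priority=1.0):
        # New experience starts with a nonzero priority so it gets replayed.
        self.items.append(transition)
        self.priorities.append(initial_priority)

    def sample(self, k):
        # Sample indices with probability proportional to priority**alpha.
        weights = [p ** self.alpha + self.eps for p in self.priorities]
        idxs = random.choices(range(len(self.items)), weights=weights, k=k)
        return idxs, [self.items[i] for i in idxs]

    def update(self, idxs, model_losses):
        # After training the world model on the sampled batch, reset each
        # sampled transition's priority from the model's loss on it, so
        # poorly modeled (i.e. "interesting") experience is replayed more.
        for i, loss in zip(idxs, model_losses):
            self.priorities[i] = loss
```

In this sketch, transitions the world model predicts badly keep high priority and are replayed often, while well-modeled transitions fade from the sampling distribution; the paper's actual priority signal and buffer mechanics may differ.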