AI Bibliography

WIKINDX Resources  

Van Hasselt, H., Guez, A., & Silver, D. (2016). Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence.
Resource type: Proceedings Article
BibTeX citation key: VanHasselt2016
Categories: Artificial Intelligence, Computer Science, Data Sciences, General, Mathematics
Subcategories: AI transfer learning, Big data, Machine intelligence, Machine learning, Q-learning
Creators: Guez, Silver, Van Hasselt
Collection: Thirtieth AAAI Conference on Artificial Intelligence
Abstract
The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
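
The change the abstract describes can be illustrated with a short sketch. The Python snippet below is not from the paper; the network functions, state shapes, and discount value are illustrative placeholders. It contrasts the standard DQN target, in which one network both selects and evaluates the greedy next action, with the Double DQN-style target, in which the online network selects the action and the target network evaluates it, decoupling selection from evaluation.

import numpy as np

rng = np.random.default_rng(0)
n_actions = 4

def q_online(next_states):
    # Placeholder for Q(s', a; theta): one row of action values per state.
    return rng.normal(size=(len(next_states), n_actions))

def q_target(next_states):
    # Placeholder for Q(s', a; theta^-): the periodically frozen copy.
    return rng.normal(size=(len(next_states), n_actions))

def dqn_targets(rewards, next_states, dones, gamma=0.99):
    # Standard DQN: the target network both selects and evaluates the greedy
    # action via a single max, which is the source of the overestimation
    # discussed in the abstract.
    max_q = q_target(next_states).max(axis=1)
    return rewards + gamma * (1.0 - dones) * max_q

def double_dqn_targets(rewards, next_states, dones, gamma=0.99):
    # Double DQN-style target: the online network selects the greedy action,
    # the target network evaluates it.
    best_actions = q_online(next_states).argmax(axis=1)
    eval_q = q_target(next_states)[np.arange(len(next_states)), best_actions]
    return rewards + gamma * (1.0 - dones) * eval_q

# Toy usage on a batch of three transitions with dummy 8-dimensional states.
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([0.0, 0.0, 1.0])
next_states = np.zeros((3, 8))
print(dqn_targets(rewards, next_states, dones))
print(double_dqn_targets(rewards, next_states, dones))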