Single-player to Two-player Knowledge Transfer in Atari 2600 Games
Date
2024-11-18
Authors
Saadat, K.
Abstract
Playing two-player games using reinforcement learning and self-play can be challenging due to the complexity of two-player environments and the potential instability of the training process. It is proposed that a reinforcement learning algorithm can train more efficiently and achieve better performance in a two-player game by leveraging knowledge from the single-player version of the same game. This study examines the proposed idea in ten different Atari 2600 environments using the Atari 2600 RAM as the input state. The advantages of transfer learning from a single-player training process over training in a two-player setting from scratch are discussed, and the results are demonstrated across several metrics, such as training time and average total reward. Finally, a method for calculating RAM complexity and its relationship to post-transfer performance is discussed. Results show that in most cases the transferred agent performs better than the agent trained from scratch while requiring less training time. Moreover, it is shown that RAM complexity can serve as a weak predictor of the transfer's effectiveness.
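The core transfer step described above (initializing a two-player agent with weights learned in the single-player game) can be sketched as follows. This is a minimal, hypothetical illustration, not the thesis's actual implementation: the network here is a small NumPy MLP over the 128-byte Atari RAM state, and the function names (`init_qnet`, `transfer`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_qnet(obs_dim, n_actions, hidden=64):
    """A toy Q-network represented as raw weight matrices (illustrative only)."""
    return {
        "w1": rng.normal(0.0, 0.1, (obs_dim, hidden)),
        "b1": np.zeros(hidden),
        "w2": rng.normal(0.0, 0.1, (hidden, n_actions)),
        "b2": np.zeros(n_actions),
    }

def transfer(single_player_net, two_player_net):
    """Copy every parameter whose shape matches from the single-player
    network into the two-player network; any mismatched layer keeps its
    fresh initialization. Returns the initialized two-player network."""
    out = dict(two_player_net)
    for name, weights in single_player_net.items():
        if name in out and out[name].shape == weights.shape:
            out[name] = weights.copy()
    return out

# Atari 2600 RAM is 128 bytes, so the input state has 128 dimensions.
single_net = init_qnet(obs_dim=128, n_actions=6)
fresh_two_player_net = init_qnet(obs_dim=128, n_actions=6)
warm_started_net = transfer(single_net, fresh_two_player_net)
```

In this sketch the two-player agent then continues ordinary DQN training from `warm_started_net` instead of from random weights, which is where the reported savings in training time would come from.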
Keywords
Reinforcement Learning, Transfer Learning, Game-Playing AI, Atari 2600, Deep Q Networks, Multiplayer Games, Deep Reinforcement Learning, Deep Learning
Citation
Saadat, K. (2024). Single-player to two-player knowledge transfer in Atari 2600 games (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.