Recurrent Off-Policy Deep Reinforcement Learning Doesn't Have to be Slow

December 23, 2025
3 authors

Abstract

Recurrent off-policy deep reinforcement learning models achieve state-of-the-art performance but are often sidelined due to their high computational demands. In response, we introduce RISE (Recurrent Integration via Simplified Encodings), a novel approach that leverages recurrent networks in any image-based off-policy RL setting without significant computational overhead by combining learnable and non-learnable encoder layers. Integrating RISE into leading non-recurrent off-policy RL algorithms yields a 35.6% improvement in human-normalized interquartile mean (IQM) performance across the Atari benchmark. We analyze various implementation strategies to highlight the versatility and potential of our proposed framework.
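To make the idea concrete, below is a minimal sketch, not the paper's implementation, of how an image encoder mixing non-learnable and learnable stages might feed a recurrent core: a frozen, randomly initialized convolutional layer followed by a trainable stage and a GRU over per-frame embeddings. All module names, layer sizes, and the PyTorch framing are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a mixed learnable/non-learnable recurrent image
# encoder in the spirit of the abstract; sizes and names are assumptions.
import torch
import torch.nn as nn


class RecurrentImageEncoder(nn.Module):
    def __init__(self, in_channels=4, hidden_dim=256):
        super().__init__()
        # Non-learnable stage: a random convolutional projection of raw
        # pixels, kept frozen throughout training.
        self.frozen = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),
            nn.ReLU(),
        )
        for p in self.frozen.parameters():
            p.requires_grad = False
        # Learnable stage: trained end-to-end with the RL loss.
        self.learned = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(hidden_dim),
            nn.ReLU(),
        )
        # Recurrent core over the sequence of per-frame embeddings.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, frames, h0=None):
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        x = frames.flatten(0, 1)           # (b*t, c, h, w)
        x = self.learned(self.frozen(x))   # (b*t, hidden_dim)
        x = x.view(b, t, -1)               # (b, t, hidden_dim)
        out, hn = self.gru(x, h0)          # recurrent feature per time step
        return out, hn


# Example: encode two 8-step stacks of 84x84 Atari-style frames.
enc = RecurrentImageEncoder()
feats, h = enc(torch.randn(2, 8, 4, 84, 84))
print(feats.shape)  # torch.Size([2, 8, 256])
```

Under this reading, only the learnable stage and the GRU contribute gradients, which is one plausible way a recurrent encoder could be added to an off-policy agent without a large training-cost increase.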
