
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods, including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception correctly in order to perform well. While these tasks are conceptually simple to describe, having all of these challenges simultaneously makes them difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.

Cite this paper (BibTeX):

    @InProceedings{pmlr-v48-oh16,
      title     = {Control of Memory, Active Perception, and Action in Minecraft},
      author    = {Oh, Junhyuk and Chockalingam, Valliappa and Singh, Satinder and Lee, Honglak},
      booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
      pages     = {2790--2799},
      year      = {2016},
      editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
      volume    = {48},
      series    = {Proceedings of Machine Learning Research},
      address   = {New York, New York, USA},
      month     = {20--22 Jun},
      publisher = {PMLR},
      pdf       = {http://proceedings.mlr.press/v48/oh16.pdf},
      url       = {https://proceedings.mlr.press/v48/oh16.html}
    }
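The memory-based architectures the abstract refers to retrieve information from a buffer of past observation encodings via attention. Below is a minimal NumPy sketch of such an attention-based memory read; the function name, shapes, and the inner-product scoring are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(keys, values, context):
    """Attention-based read over a memory of past observation encodings.

    keys:    (M, d) key embeddings of the last M observations
    values:  (M, d) value embeddings of the same observations
    context: (d,)   current context vector (e.g. from a CNN or recurrent net)
    Returns the retrieved memory vector of shape (d,).
    """
    scores = keys @ context    # (M,) inner-product attention scores
    weights = softmax(scores)  # normalize scores into a distribution over memory slots
    return weights @ values    # convex combination of the stored values

# Illustrative usage: a memory of 11 past frames with 256-d encodings.
rng = np.random.default_rng(0)
keys = rng.normal(size=(11, 256))
values = rng.normal(size=(11, 256))
context = rng.normal(size=256)
retrieved = memory_read(keys, values, context)  # shape (256,)
```

The retrieved vector can then be combined with the current context to estimate action values, which is how such a read module lets an agent condition its policy on observations that are no longer visible in the current first-person frame.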