Sydney
10 August 2017
Lifelong Learning: A Reinforcement Learning Approach
ICML WORKSHOP 2017
ACCEPTED PAPERS
-
Transferring Autonomous Driving Knowledge on Simulated and Real Intersections
David Isele and Akansel Cosgun
(pdf)
-
Lifelong Few-Shot Learning
Chelsea Finn, Pieter Abbeel and Sergey Levine
(pdf)
-
Lifelong Learning with Structurally Adaptive CNNs
Thushan Ganegedara, Lionel Ott and Fabio Ramos
(pdf)
-
Transfer in Reinforcement Learning with Successor Features and Generalised Policy Improvement
Andre Barreto, Will Dabney, Remi Munos, Jonathan Hunt, Tom Schaul, David Silver and Hado van Hasselt
(pdf)
-
Atlas Architecture: Constructing and Traversing Generalized Graph Manifolds in Feature Space
Everett Fall, Wen-Hsuan Chu and Liang-Gee Chen
(pdf)
-
Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning
Lucas Lehnert, Stefanie Tellex and Michael L. Littman
(pdf)
-
The Effects of Memory Replay in Reinforcement Learning
Ruishan Liu and James Zou
(pdf)
-
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
Junhyuk Oh, Satinder Singh, Honglak Lee and Pushmeet Kohli
(pdf)
Also accepted at ICML 2017.
-
Regret Minimization in MDPs with Options without Prior Knowledge
Ronan Fruit, Matteo Pirotta, Alessandro Lazaric and Emma Brunskill
(pdf)
-
Generalizing Sensorimotor Policies with Spatial Attention and Weakly Labeled Data
Avi Singh, Larry Yang and Sergey Levine
(pdf)
-
Asynchronous Data Aggregation for Training Visual Navigation Networks
Mathew Monfort, Matthew Johnson, Aude Oliva and Katja Hofmann
(pdf)
Also accepted at AAMAS 2017.
-
Bridging the Gap Between Value and Policy Based Reinforcement Learning
Ofir Nachum, Mohammad Norouzi, Kelvin Xu and Dale Schuurmans
(pdf)
-
Revisiting Dyna for control in continuous-state domains
Andrew Patterson, Yangchen Pan, Adam White and Martha White
(pdf)
-
Benchmark Environments for Multitask Learning in Continuous Domains
Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David Meger and Gregory Dudek
(pdf)
-
Hierarchical Subtask Discovery With Non-Negative Matrix Factorization
Adam Earle, Andrew Saxe and Benjamin Rosman
(pdf)
-
Learning Long-term Dependencies with Deep Memory States
Vitchyr Pong, Shixiang Gu and Sergey Levine
(pdf)
-
A Laplacian Framework for Option Discovery in Reinforcement Learning
Marlos C. Machado, Marc G. Bellemare and Michael Bowling
(pdf)
Also accepted at ICML 2017.
-
Distributed Adaptive Sampling for Kernel Matrix Approximation
Daniele Calandriello, Alessandro Lazaric and Michal Valko
(pdf)
Also accepted at AISTATS 2017.
-
Meta-Learning with Temporal Convolutions
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen and Pieter Abbeel
(pdf)