ACCEPTED PAPERS

  1. Transferring Autonomous Driving Knowledge on Simulated and Real Intersections
    David Isele and Akansel Cosgun
(pdf)

  2. Lifelong Few-Shot Learning
    Chelsea Finn, Pieter Abbeel and Sergey Levine
(pdf)

  3. Lifelong Learning with Structurally Adaptive CNNs
    Thushan Ganegedara, Lionel Ott and Fabio Ramos
(pdf)

  4. Transfer in Reinforcement Learning with Successor Features and Generalised Policy Improvement
    Andre Barreto, Will Dabney, Remi Munos, Jonathan Hunt, Tom Schaul, David Silver and Hado van Hasselt
(pdf)

  5. Atlas Architecture: Constructing and Traversing Generalized Graph Manifolds in Feature Space
    Everett Fall, Wen-Hsuan Chu and Liang-Gee Chen
(pdf)

  6. Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning
    Lucas Lehnert, Stefanie Tellex and Michael L. Littman
(pdf)

  7. The Effects of Memory Replay in Reinforcement Learning
    Ruishan Liu and James Zou
(pdf)

  8. Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
    Junhyuk Oh, Satinder Singh, Honglak Lee and Pushmeet Kohli
    (pdf)
Also accepted at ICML 2017.

  9. Regret Minimization in MDPs with Options without Prior Knowledge
    Ronan Fruit, Matteo Pirotta, Alessandro Lazaric and Emma Brunskill
(pdf)

  10. Generalizing Sensorimotor Policies with Spatial Attention and Weakly Labeled Data
    Avi Singh, Larry Yang and Sergey Levine
(pdf)

  11. Asynchronous Data Aggregation for Training Visual Navigation Networks
    Mathew Monfort, Matthew Johnson, Aude Oliva and Katja Hofmann
    (pdf)
Also accepted at AAMAS 2017.

  12. Bridging the Gap Between Value and Policy Based Reinforcement Learning
    Ofir Nachum, Mohammad Norouzi, Kelvin Xu and Dale Schuurmans
(pdf)

  13. Revisiting Dyna for control in continuous-state domains
    Andrew Patterson, Yangchen Pan, Adam White and Martha White
(pdf)

  14. Benchmark Environments for Multitask Learning in Continuous Domains
    Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David Meger and Gregory Dudek
(pdf)

  15. Hierarchical Subtask Discovery With Non-Negative Matrix Factorization
    Adam Earle, Andrew Saxe and Benjamin Rosman
(pdf)

  16. Learning Long-term Dependencies with Deep Memory States
    Vitchyr Pong, Shixiang Gu and Sergey Levine
(pdf)

  17. A Laplacian Framework for Option Discovery in Reinforcement Learning
    Marlos C. Machado, Marc G. Bellemare and Michael Bowling
    (pdf)
Also accepted at ICML 2017.

  18. Distributed Adaptive Sampling for Kernel Matrix Approximation
    Daniele Calandriello, Alessandro Lazaric and Michal Valko
    (pdf)
Also accepted at AISTATS 2017.

  19. Meta-Learning with Temporal Convolutions
    Nikhil Mishra, Mostafa Rohaninejad, Xi Chen and Pieter Abbeel
    (pdf)
