One of the major challenges in Reinforcement Learning (RL) is scaling it to high-dimensional, real-world applications. Although there have been notable successes in recent years (e.g., video games, robotics, resource allocation), many issues remain. One approach to scaling RL is to model these high-dimensional applications with abstractions.


Abstraction is an important tool that enables an agent to focus less on the lower-level details of a task and more on solving the task at hand. Many real-world domains can be modelled using some form of abstraction. Temporal abstraction (i.e., options or skills) and spatial abstraction (i.e., state space representation) are two important examples. However, designing and learning abstractions is non-trivial and time-consuming, and an incorrect abstraction can be the difference between successfully solving a task and failing completely.


The goal of this workshop is to provide a forum for discussing the current challenges in designing and learning abstractions for real-world RL problems. The workshop also aims to promote interaction between researchers working on different forms of abstraction, in order to find synergies among the various techniques.