Author: Kevin Sebastian Luck
Energy-Efficiency of Reinforcement Learning
This thesis will investigate the energy consumption of reinforcement learning algorithms during both training and inference, using hardware monitoring capabilities. The goal is to determine how different algorithms compare in terms of performance versus energy consumption on practical applications, and whether energy consumption can be reduced by, for example, trading performance for lower computational complexity.
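A minimal sketch of how such measurements could be taken: sample a power meter in a background thread while the training or inference workload runs, then integrate power over time to estimate energy. The `read_power_watts` callback is a placeholder assumption; in practice it might wrap a GPU or CPU power query (e.g. via NVML or RAPL), which is not shown here.

```python
import threading
import time

def measure_energy(workload, read_power_watts, interval_s=0.01):
    """Run `workload` while sampling `read_power_watts` (in watts),
    then estimate consumed energy in joules via trapezoidal integration."""
    samples = []
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append((time.perf_counter(), read_power_watts()))
            time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    t.start()
    result = workload()          # e.g. one training epoch or an inference batch
    stop.set()
    t.join()

    # trapezoidal rule: energy = integral of power over elapsed time
    energy_joules = sum(
        0.5 * (p0 + p1) * (t1 - t0)
        for (t0, p0), (t1, p1) in zip(samples, samples[1:])
    )
    return result, energy_joules

# usage: a stub workload and a constant 10 W "meter" stand in for real hardware
result, joules = measure_energy(lambda: time.sleep(0.2), lambda: 10.0)
```

A constant 10 W draw over roughly 0.2 s should yield an estimate near 2 J; with a real power sensor, the same integration gives per-algorithm energy figures that can be compared against task performance.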
Robustify Behaviour and Morphology of Robots against Future Damage
This project aims to develop new methodologies and frameworks for learning the behaviours and designs of robots, with the goal of making them robust to mechanical failures or drastic environmental changes that impact the robot's performance.
Meta-Learning Embeddings for Reinforcement Learning
Meta-learning, or ‘learning how to learn’, has become immensely popular in the deep learning community in recent years. In this thesis, we will investigate how meta-learning approaches and ideas can be used to learn latent policy embeddings for reinforcement learning. A potential approach is hypernetworks, i.e. networks which generate the weights of other networks (see references). The thesis will investigate the application of these approaches and evaluate their usefulness in robot learning and continuous control tasks. The use of policy embeddings and hypernetworks is currently an active area of research with potential applications to real-world robotics. This is a research-oriented thesis in which the student will have the opportunity to work on state-of-the-art problems and propose novel methods and algorithms for robot learning.
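To illustrate the hypernetwork idea mentioned above, here is a minimal NumPy sketch: a small network maps a latent task embedding z to the flat parameter vector of a linear policy, so that changing z changes the policy without retraining it. All dimensions and the random initialisation are illustrative assumptions, not part of any specific method from the references.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, EMB_DIM, HIDDEN = 4, 2, 8, 32
N_PARAMS = OBS_DIM * ACT_DIM + ACT_DIM   # weights + bias of the target policy

# hypernetwork parameters: embedding -> hidden -> flat policy parameters
W1 = rng.normal(0.0, 0.1, (EMB_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_PARAMS))

def hypernet(z):
    """Generate the flat parameter vector of the policy from embedding z."""
    h = np.tanh(z @ W1)
    return h @ W2

def policy(obs, z):
    """Linear policy whose weights are produced by the hypernetwork."""
    theta = hypernet(z)
    W = theta[: OBS_DIM * ACT_DIM].reshape(OBS_DIM, ACT_DIM)
    b = theta[OBS_DIM * ACT_DIM:]
    return np.tanh(obs @ W + b)   # bounded actions for continuous control

z = rng.normal(size=EMB_DIM)      # one point in the latent policy embedding
obs = rng.normal(size=OBS_DIM)
action = policy(obs, z)
```

In a full method, the hypernetwork weights (and possibly the embeddings z) would be trained with a reinforcement learning objective; this sketch only shows the weight-generation mechanism.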
Creating tool-boxes for the Co-Design of Robots in Simulation
This is a more engineering- and software-development-oriented thesis, aiming to provide open-source implementations for the research community. However, the thesis student will also gain some exposure to deep learning algorithms and their application to practical research problems.
Improving Co-Design with Imitation Learning
This research thesis will investigate how imitation learning methods and algorithms can be used to improve existing Co-Design algorithms. The aim is to develop systems that optimise both the body and the mind of robots.
Evolutionary Imitation Learning for Continuous Control
The thesis will start with an initial literature review to identify the space of potential hypotheses to investigate, after which the developed method will be applied to continuous control tasks. This thesis is well suited for students interested in a research-oriented master's thesis with opportunities to develop their own ideas.
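As a toy illustration of combining evolutionary search with imitation learning, the sketch below uses a simple (1+λ) evolution strategy to fit a linear policy to actions from a synthetic "expert". The expert, the mean-squared imitation loss, and all hyperparameters are illustrative assumptions; a thesis method would operate on richer policies and demonstration data.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic expert: a fixed linear state-to-action mapping to imitate
expert_W = np.array([[1.0, -0.5],
                     [0.3,  0.8]])
states = rng.normal(size=(64, 2))
expert_actions = states @ expert_W

def imitation_loss(flat_w):
    """Mean squared error between policy actions and expert actions."""
    W = flat_w.reshape(2, 2)
    return np.mean((states @ W - expert_actions) ** 2)

# (1+lambda) evolution strategy over the policy parameters
w = rng.normal(size=4)            # current parent solution
sigma = 0.3                       # mutation strength
for _ in range(300):
    candidates = w + sigma * rng.normal(size=(20, 4))   # lambda = 20 offspring
    losses = [imitation_loss(c) for c in candidates]
    best = candidates[int(np.argmin(losses))]
    if imitation_loss(best) < imitation_loss(w):
        w = best                  # keep the improved offspring
    else:
        sigma *= 0.97             # shrink mutations when no offspring improves

final_loss = imitation_loss(w)
```

The point of the sketch is the structure, not the numbers: the imitation loss plays the role of a fitness function, so gradient-free evolutionary search can optimise policies even when the loss is non-differentiable.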
Master Thesis on the Co-Adaptation of Robots
The goal of this Master thesis is to develop simulation tools necessary to evaluate co-adaptation techniques, and to develop new approaches for learning the behaviour and design of robots using deep learning and deep reinforcement learning.