Supervisor: Prof. Ville Kyrki (firstname.lastname@example.org)
Advisors: Dr. Kevin Luck (email@example.com)
Deep reinforcement learning is a promising approach to enable robots to self-adapt and acquire various skills, e.g., for locomotion and manipulation tasks. Our current work aims to develop data-efficient deep learning approaches suitable for the co-adaptation of robot design and behaviour in the real world, in the absence of simulations or known dynamical models.
To co-adapt robots using only real-world data, and thus circumvent the simulation-to-reality gap, we require an approach which (1) can quickly adapt behavioural policies to new designs in a one-shot manner, (2) is able to optimize arbitrary design parameters, and (3) minimizes the number of manufacturing cycles required.
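As a rough illustration of the loop these requirements imply, the sketch below alternates policy adaptation with a search over a design parameter. It is a minimal toy, not an existing algorithm: the one-dimensional design, the hill-climbing "policy", and the analytic reward standing in for real-world rollouts are all hypothetical placeholders.

```python
import random

def evaluate(design, policy_gain):
    # Toy stand-in for a real-world rollout: return peaks when the
    # design is 0.5 and the policy gain matches the design.
    return -(design - 0.5) ** 2 - (policy_gain - design) ** 2

def adapt_policy(design, steps=50, lr=0.1):
    # Cheap policy adaptation to the current design via hill climbing,
    # mimicking requirement (1): quickly fit behaviour to a new design.
    gain = 0.0
    for _ in range(steps):
        if evaluate(design, gain + lr) > evaluate(design, gain):
            gain += lr
        elif evaluate(design, gain - lr) > evaluate(design, gain):
            gain -= lr
    return gain

def co_adapt(iterations=20, seed=0):
    # Outer design loop, requirement (2): propose candidate designs,
    # adapt the policy to each, and keep the best design-policy pair.
    # Fewer iterations means fewer "manufacturing cycles" (requirement 3).
    rng = random.Random(seed)
    best_design, best_return = None, float("-inf")
    for _ in range(iterations):
        design = rng.uniform(0.0, 1.0)   # propose a candidate design
        gain = adapt_policy(design)      # adapt behaviour to this design
        ret = evaluate(design, gain)     # joint evaluation of design + policy
        if ret > best_return:
            best_design, best_return = design, ret
    return best_design, best_return
```

In a real co-adaptation setting, `evaluate` would be a costly physical experiment and `adapt_policy` a deep RL update, which is why data efficiency dominates the design of such algorithms.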
The goal of this Master thesis is twofold: (1) Develop simulation tools necessary to evaluate co-adaptation techniques, and (2) develop new approaches for learning the behaviour and design of robots using deep learning and deep reinforcement learning.
This thesis topic provides you with the opportunity to develop your own ideas and algorithms based on current efforts in the area of deep co-adaptation, and you will benefit from a network of collaborators. The thesis will start with a literature review to bring you up to date on the landscape of co-adaptation algorithms, followed by the development of a new simulation toolbox for your later evaluation studies. During the first meetings, we will determine potential directions for the development of new algorithms based on your strengths, interests and knowledge.
For discussions or questions, please send an email to Kevin Luck (firstname.lastname@example.org).
- Simulation environments & toolboxes for the co-adaptation of simulated agents
- Extension or development of a new variant for the deep co-learning of agent behaviour and morphology
Prerequisites: Basics of machine learning, Python, Linux
Suggested tools: PyTorch, MuJoCo, PyBullet (no prior knowledge required, but it is expected that you will learn to use these tools for your research)
Start: Available immediately
Luck, Kevin Sebastian, Heni Ben Amor, and Roberto Calandra. "Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning." Conference on Robot Learning. PMLR, 2020.