Master thesis topics
To have more information on any of the proposals, please contact either Prof. Ville Kyrki or one of the advisors indicated in the proposal of interest.
At present, the following master thesis proposals are available in the group:
Data collection is a major obstacle to applying deep learning methods to robotics. In this thesis, we propose to perform a quantitative study of the use of synthetic data from 3D software and physics simulators to train machine learning models that can later be deployed on physical systems.
In this thesis, an extensive investigation of constrained DDP (differential dynamic programming) methods will be performed, and the most important ones will be implemented in a simulation environment for trajectory optimization of different robots, such as a simple point robot, a 2D car-like robot, a 3D quadrotor, and a cart-pole system. In this context, the methods will be compared in terms of convergence speed, computational complexity, and sensitivity to initialization and parameter selection.
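As a hedged illustration of the structure these methods share, the sketch below runs the backward (Riccati) and forward (rollout) passes of the linear-quadratic special case of DDP on a point-robot double integrator. All matrices, weights, and the horizon are illustrative choices, not part of the proposal.

```python
import numpy as np

# Point robot as a double integrator: state [position, velocity], control = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])   # state cost weights (illustrative)
R = np.array([[0.01]])    # control cost weight (illustrative)
T = 50                    # horizon length

# Backward pass: Riccati recursion, the LQR special case of DDP's backward pass.
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()           # gains[0] now corresponds to the first time step

# Forward pass: roll the time-varying feedback policy out from the start state.
x = np.array([1.0, 0.0])  # start 1 m from the goal, at rest
for K in gains:
    u = -K @ x
    x = A @ x + B @ u

print(np.round(x, 3))     # state driven close to the origin
```

Full DDP additionally expands the (generally nonlinear) dynamics and cost to second order around a nominal trajectory and re-linearizes at every iteration; constrained variants modify the backward pass or project the forward pass onto the feasible set.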
The goal of this thesis is to develop reinforcement learning-based approaches for dynamic cloth manipulation. In this context, the thesis is expected to include high-fidelity cloth simulation, training of an agent in that simulation environment to perform dynamic manipulation tasks, and transferring the learned policies to a real robot to test their generalization to clothes of different sizes.
Master Thesis on “Trajectory Planning and Tracking for Autonomous Vehicles Using Model Predictive Control Incorporating Vehicle Dynamics”
In this thesis, an MPC-based trajectory planning and tracking method will be developed for an autonomous vehicle in simulation. A high-fidelity driving simulator will be employed to incorporate vehicle dynamics into the MPC constraints. The developed controller must guarantee collision-free, comfortable, and efficient driving in complex urban environments.
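A minimal sketch of the receding-horizon idea behind MPC, on a 1D kinematic model with a box constraint standing in for the vehicle-dynamics constraints mentioned above; the model, weights, and limits are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# 1D kinematic vehicle: position advances by speed * dt; the control is the speed.
dt, N = 0.2, 10          # time step and prediction horizon (illustrative)
v_max = 2.0              # speed limit as a box constraint (stand-in for dynamics limits)

def rollout(x0, u_seq):
    """Predict the position trajectory for a candidate control sequence."""
    xs, x = [], x0
    for u in u_seq:
        x = x + u * dt
        xs.append(x)
    return np.array(xs)

def mpc_step(x0, ref):
    """Solve one receding-horizon problem and return only the first control."""
    def cost(u_seq):
        xs = rollout(x0, u_seq)
        track = np.sum((xs - ref) ** 2)              # tracking error
        smooth = 0.01 * np.sum(np.diff(u_seq) ** 2)  # comfort: penalize jerky speed changes
        return track + smooth
    res = minimize(cost, np.zeros(N), bounds=[(-v_max, v_max)] * N)
    return res.x[0]

# Drive toward a set point at x = 5, re-planning at every step.
x = 0.0
for _ in range(30):
    x = x + mpc_step(x, np.full(N, 5.0)) * dt
print(round(x, 2))
```

The thesis setting replaces the toy model with simulator-identified vehicle dynamics and adds collision constraints, but the plan-apply-replan loop is the same.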
The goal of this thesis is to bridge the gap between slow representation learning and brain-inspired navigation. After a review of the relevant literature from both domains, an experiment will investigate whether slow representations can lead to the self-organization of structures similar to complex cells in the brain. If the results confirm the hypothesis, a second experiment will investigate whether the emerging structures can be used for navigation, similar to the hand-crafted place-cell network in ViTa-SLAM.
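To make the "slow representation" idea concrete, here is a minimal linear slow-feature-analysis sketch: after whitening, the direction whose temporal derivative has the least variance recovers the slow source from a mixed signal. The toy signal and all parameters are invented for illustration.

```python
import numpy as np

# Toy mixed signal: a slow and a fast sinusoid, linearly mixed (illustrative).
t = np.linspace(0, 4 * np.pi, 500)
slow, fast = np.sin(t), np.sin(17 * t)
X = np.column_stack([slow + 0.5 * fast, fast + 0.5 * slow])
X = X - X.mean(axis=0)

# Whiten the data so every direction has unit variance.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U * np.sqrt(len(X))

# Slowest feature: the direction minimizing the variance of the temporal
# derivative, i.e. the eigenvector of dZ'dZ with the smallest eigenvalue.
dZ = np.diff(Z, axis=0)
w = np.linalg.eigh(dZ.T @ dZ)[1][:, 0]
y = Z @ w

# The extracted feature should be strongly (anti-)correlated with the slow source.
corr = abs(np.corrcoef(y, slow)[0, 1])
print(round(corr, 3))
```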
Master thesis on “Hybrid learning and control for human-robot interaction: an imitation learning perspective”
The goal of this project is to exploit more comprehensive information from human demonstrations, in order to learn as many human skill patterns as possible for the tasks at hand.
Master Thesis on “Research and Development of a Decision Making and Control Method for Autonomous Vehicles Combining Advanced Driver Assistance System with Reinforcement Learning”
In this thesis, a method of training an agent with reinforcement learning (RL), in which the safety of traffic participants is ensured by ADAS functions, will be investigated.
The goal of this thesis is to develop a Graph Neural Network that learns to simulate multiple physical systems, such as fluids, soft bodies, and rigid-body systems.
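A hedged sketch of one message-passing step of the kind such learned simulators build on: particles are nodes, neighbor relations are edges, and per-edge messages are aggregated and used to update node features. The random matrices stand in for learned networks; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles as graph nodes, neighbor relations as directed edges (sender -> receiver).
n_nodes, d = 5, 4
h = rng.normal(size=(n_nodes, d))                 # node features, e.g. velocity history
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

# Random matrices stand in for learned edge and node networks (illustrative).
W_msg = rng.normal(size=(2 * d, d))
W_upd = rng.normal(size=(2 * d, d))

# 1. Message: a function of sender and receiver features, computed per edge.
# 2. Aggregation: sum incoming messages at each receiver.
agg = np.zeros_like(h)
for s, r in edges:
    m = np.tanh(np.concatenate([h[s], h[r]]) @ W_msg)
    agg[r] += m

# 3. Update: combine each node's own features with its aggregated messages.
# A learned simulator would decode accelerations from h_new and integrate them.
h_new = np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)
print(h_new.shape)
```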
The goal of this thesis is to increase the efficiency of reinforcement learning when only a limited number of examples is available, by providing a method of obtaining a large number of task-specific trajectories from only a few demonstrations.
The goal of this thesis is to integrate kernelized movement primitives (KMP) with reinforcement learning to provide an automatic adaptation approach that adapts the trajectory and goal in order to optimize a desired task.
The goal of this thesis is to develop a probabilistic approach for building and updating people-flow maps that help the robot navigate in a socially aware manner by predicting people's movements.
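One simple probabilistic representation such a map could use, sketched here with invented names and parameters: each grid cell keeps pseudo-counts over a few motion directions, observations increment the counts, and the normalized counts give a predictive distribution over flow through the cell.

```python
import numpy as np

# People-flow map sketch: each grid cell holds a histogram over 4 motion
# directions, updated from observed person movements (Dirichlet-style counts).
H, W, DIRS = 10, 10, 4            # grid size and direction bins (illustrative)
counts = np.ones((H, W, DIRS))    # uniform prior pseudo-counts

def observe(cell, direction):
    """Record one observed person moving through `cell` in `direction`."""
    counts[cell][direction] += 1

def flow_distribution(cell):
    """Posterior predictive distribution over directions for a cell."""
    c = counts[cell]
    return c / c.sum()

# Repeated eastward motion through cell (2, 3) shifts its predicted flow east.
for _ in range(20):
    observe((2, 3), 1)            # direction 1 = "east" (assumption)
print(np.round(flow_distribution((2, 3)), 2))
```

A socially aware planner could then penalize paths that cut against the dominant predicted flow direction of the cells they cross.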
Master Thesis on “High-Level Decision Making in Self-Driving Cars using Deep Reinforcement Learning”
Decision-making is one of the most important modules in a self-driving car system, and navigating an autonomous car in a dynamic, multi-agent urban scenario requires understanding the scene and the interactions between agents. One way to perform this scene understanding is to use Graph Neural Networks (GNNs) to learn the interactions between agents […]