Data collection is a major obstacle to applying deep learning methods to robotics. In this thesis, we propose a quantitative study of using synthetic data from 3D software and physics simulators to train machine learning models that can later be deployed on physical systems.
In this thesis, an extensive investigation of constrained DDP (differential dynamic programming) methods will be performed, and the major selected methods will be implemented in a simulation environment for trajectory optimization of different robots, such as a simple point robot, a 2D car-like robot, a 3D quadrotor, and a cart-pole system. In this context, the methods will be compared in terms of convergence speed, computational complexity, and sensitivity to initialization and parameter selection.
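As a baseline for the constrained variants, the unconstrained DDP/iLQR recursion can be sketched on the simplest of the listed systems, a point robot modeled as a double integrator. The dynamics, cost weights, and horizon below are illustrative assumptions, not taken from any particular method under study; for this linear-quadratic case a single backward/forward pass recovers the optimal LQR solution.

```python
import numpy as np

# Illustrative point-robot (double integrator) setup; all values are assumptions.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                  # running state cost
R = np.array([[0.01]])                   # control cost
Qf = np.diag([100.0, 10.0])              # terminal cost
T = 50
x_goal = np.array([1.0, 0.0])

def rollout(x0, us, Ks=None, ks=None, xs_ref=None):
    """Simulate forward; optionally apply the DDP feedback policy."""
    xs, new_us = [x0], []
    for t in range(T):
        u = us[t]
        if Ks is not None:
            u = us[t] + ks[t] + Ks[t] @ (xs[-1] - xs_ref[t])
        new_us.append(u)
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs), np.array(new_us)

def backward_pass(xs, us):
    """Quadratic value-function recursion; exact for this LQ problem."""
    Vx = Qf @ (xs[-1] - x_goal)
    Vxx = Qf.copy()
    Ks, ks = [], []
    for t in reversed(range(T)):
        Qx = Q @ (xs[t] - x_goal) + A.T @ Vx
        Qu = R @ us[t] + B.T @ Vx
        Qxx = Q + A.T @ Vxx @ A
        Quu = R + B.T @ Vxx @ B
        Qux = B.T @ Vxx @ A
        K = -np.linalg.solve(Quu, Qux)   # feedback gain
        k = -np.linalg.solve(Quu, Qu)    # feedforward term
        Ks.append(K); ks.append(k)
        Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
        Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
    return Ks[::-1], ks[::-1]

x0 = np.zeros(2)
us = [np.zeros(1) for _ in range(T)]
xs, _ = rollout(x0, us)                   # nominal (zero-control) trajectory
Ks, ks = backward_pass(xs, np.array(us))
xs_new, us_new = rollout(x0, us, Ks, ks, xs)
print(xs_new[-1])  # final state should approach the goal [1, 0]
```

The constrained methods to be studied differ mainly in how they modify the `Quu`/`Qu` terms or the forward pass to respect state and control limits.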
The goal of this thesis is to develop reinforcement learning-based approaches for dynamic cloth manipulation. In this context, the thesis is expected to include high-fidelity cloth simulation, training of an agent in that simulation environment to perform dynamic manipulation tasks, and transfer of the learned policies to a real robot to test their generalization capabilities on clothes of different sizes.
In this thesis, an MPC-based trajectory planning and tracking method will be developed for an autonomous vehicle in simulation. A high-fidelity driving simulator will be employed to incorporate vehicle dynamics into the MPC constraints. The developed controller must guarantee collision-free, comfortable, and efficient driving performance in complex urban driving environments.
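To illustrate the receding-horizon idea behind MPC (without the collision and comfort constraints, which would require a QP solver), here is a minimal sketch on an assumed 1D longitudinal vehicle model: at each step a finite-horizon Riccati recursion is solved and only the first control is applied.

```python
import numpy as np

# Illustrative 1D longitudinal vehicle model; dynamics, horizon, and
# weights are assumptions, not the thesis's actual formulation.
dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, speed]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

def mpc_control(x, x_ref):
    """Finite-horizon Riccati recursion; return only the first control
    (the receding-horizon principle)."""
    P = Q.copy()
    K_first = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        K_first = K   # after the full backward sweep, this is the gain for step 0
    return -K_first @ (x - x_ref)

x = np.array([0.0, 0.0])
x_ref = np.array([5.0, 0.0])
for _ in range(200):
    u = mpc_control(x, x_ref)   # re-solve at every step, apply first input
    x = A @ x + B @ u
print(x)  # converges toward the reference [5, 0]
```

A full implementation would replace the unconstrained Riccati solve with a constrained QP that encodes obstacle, actuator, and comfort (jerk/acceleration) limits.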
The goal of this thesis is to bridge the gap between slow representation learning and brain-inspired navigation. After reviewing the relevant literature from both domains, an experiment will investigate whether slow representations can lead to the self-organization of structures similar to complex cells in the brain. If the results confirm the hypothesis, a second experiment will investigate whether the emerging structures can be used for navigation, similar to the hand-crafted place-cell network in ViTa-SLAM.
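As a starting point for the first experiment, slowness-based learning can be sketched as linear slow feature analysis: whiten the input signal and extract the projection whose temporal derivative has minimal variance. The toy two-channel signal below is an illustrative assumption, not an actual experimental stimulus.

```python
import numpy as np

# Toy signal: two channels mixing a slow and a fast sinusoidal source.
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)          # slowly varying latent source
fast = np.sin(25 * t)     # fast source
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

# 1. Center and whiten the input (unit covariance).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T / S * np.sqrt(len(Xc))

# 2. Minimize the variance of the temporal derivative on whitened data.
dZ = np.diff(Z, axis=0)
eigvals, eigvecs = np.linalg.eigh(dZ.T @ dZ)
slow_feature = Z @ eigvecs[:, 0]   # smallest eigenvalue = slowest direction

# The extracted feature should correlate strongly with the slow source.
print(abs(np.corrcoef(slow_feature, slow)[0, 1]))
```

The thesis experiments would of course use richer, nonlinear expansions and high-dimensional sensory input; this only shows the slowness objective itself.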
The goal of this project is to exploit more comprehensive information from human demonstrations in order to learn as many skill patterns as possible for the tasks at hand.
The goal of this thesis is to develop a Graph Neural Network that learns to simulate multiple physical systems, such as fluids, soft bodies, and rigid-body systems.
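The core computation of such a learned simulator is a message-passing step over the particle graph. The sketch below uses random linear maps as stand-ins for the trained edge and node MLPs (an illustrative assumption), showing only the data flow: edge update, aggregation at the receivers, node update.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, node_dim, edge_dim = 5, 4, 3

nodes = rng.normal(size=(num_nodes, node_dim))             # particle states
senders = np.array([0, 1, 2, 3, 4])
receivers = np.array([1, 2, 3, 4, 0])                      # ring connectivity
W_edge = rng.normal(size=(2 * node_dim, edge_dim))         # stand-in for edge MLP
W_node = rng.normal(size=(node_dim + edge_dim, node_dim))  # stand-in for node MLP

# 1. Edge update: compute a message for each sender-receiver pair.
edge_in = np.concatenate([nodes[senders], nodes[receivers]], axis=1)
messages = np.tanh(edge_in @ W_edge)

# 2. Aggregate messages at each receiver (sum aggregation).
agg = np.zeros((num_nodes, edge_dim))
np.add.at(agg, receivers, messages)

# 3. Node update: new state from old state plus aggregated messages.
new_nodes = np.tanh(np.concatenate([nodes, agg], axis=1) @ W_node)
print(new_nodes.shape)  # (5, 4)
```

In a trained simulator, several such steps are stacked and the node outputs are decoded into accelerations that are integrated to produce the next particle positions.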
The goal of this thesis is to increase the efficiency of reinforcement learning when only a limited number of examples is available, by providing a method for obtaining a large number of task-specific trajectories from only a few demonstrations.
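One simple way to turn a few demonstrations into many task-specific trajectories is to perturb them with smooth, temporally correlated noise, so the augmented trajectories stay dynamically plausible. The demonstrations, covariance kernel, and noise scale below are illustrative assumptions, not the thesis's actual augmentation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
# Two toy "demonstrations" of a 1D trajectory (illustrative assumption).
demos = np.stack([np.sin(np.pi * t), np.sin(np.pi * t) + 0.1])

# Smooth noise: samples from a Gaussian process with an RBF covariance,
# so perturbations are correlated in time rather than jittery.
d = t[:, None] - t[None, :]
Kcov = 0.02 * np.exp(-0.5 * (d / 0.2) ** 2) + 1e-8 * np.eye(len(t))
L = np.linalg.cholesky(Kcov)

augmented = []
for _ in range(100):
    base = demos[rng.integers(len(demos))]   # pick a demonstration at random
    augmented.append(base + L @ rng.normal(size=len(t)))
augmented = np.array(augmented)
print(augmented.shape)  # (100, 50)
```

The augmented set can then be fed to the reinforcement learner as additional task-specific experience, at far lower cost than collecting new demonstrations.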
The goal of this thesis is to integrate kernelized movement primitives (KMP) with reinforcement learning to provide an automatic adaptation approach that adjusts the trajectory and goal in order to optimize a desired task.
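At its core, a KMP-style primitive can be reduced to kernel ridge regression over demonstrated trajectory points; a reinforcement learning layer could then adapt via-points or the goal before refitting. The kernel width, regularization, and single sinusoidal demonstration below are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.1):
    """Gaussian kernel between two sets of 1D time points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# A single demonstrated trajectory (illustrative assumption).
t_demo = np.linspace(0, 1, 30)
y_demo = np.sin(2 * np.pi * t_demo)

# Fit: kernel ridge regression over the demonstration.
lam = 1e-4
K = rbf(t_demo, t_demo)
alpha = np.linalg.solve(K + lam * np.eye(len(t_demo)), y_demo)

# Query the primitive at new time points. An RL agent could instead
# insert or move via-points in (t_demo, y_demo) and refit to adapt
# the trajectory or its goal.
t_query = np.linspace(0, 1, 100)
y_pred = rbf(t_query, t_demo) @ alpha
print(np.max(np.abs(y_pred - np.sin(2 * np.pi * t_query))))
```

The full KMP formulation additionally predicts covariances and blends multiple demonstrations, but the fit-and-query structure shown here is the part that reinforcement learning would drive.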