The goal of this assignment is to understand constrained DDP with safety precautions and to implement it on a real robot, e.g., a Turtlebot 3 Waffle Pi. The experiments are expected to be performed in an engineered environment in which the positions of the robot and the obstacles are measured by an external vision-based system (a motion capture system).
In this thesis, an extensive investigation of constrained DDP methods will be performed, and the major selected ones will be implemented in a simulation environment for trajectory optimization of different robots, such as a simple point robot, a 2D car-like robot, a 3D quadrotor, and a cart-pole system. In this context, the methods will be compared in terms of convergence speed, computational complexity, and sensitivity to initialization and parameter selection.
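To give a flavor of the DDP family on the simplest of the listed systems, the sketch below runs an unconstrained iLQR (the Gauss-Newton variant of DDP) on a 1D double-integrator point robot. All dynamics, costs, and gains here are illustrative assumptions, not a specific constrained-DDP method from the thesis; with linear dynamics and quadratic costs the backward/forward sweeps reduce to LQR.

```python
import numpy as np

# Illustrative iLQR sketch for a 1D double-integrator point robot.
# State x = [position, velocity], control u = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # discrete linear dynamics x' = A x + B u
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                 # running state-cost weight (assumed)
R = np.array([[0.01]])                  # control-cost weight (assumed)
Qf = np.diag([100.0, 10.0])             # terminal-cost weight (assumed)
T = 50                                  # horizon length

def rollout(x0, us):
    xs = [x0]
    for u in us:
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

def backward_pass(xs, us, x_goal):
    # Riccati-like recursion on the quadratic value function.
    Vx, Vxx = Qf @ (xs[-1] - x_goal), Qf
    ks, Ks = [], []
    for t in reversed(range(len(us))):
        Qx = Q @ (xs[t] - x_goal) + A.T @ Vx
        Qu = R @ us[t] + B.T @ Vx
        Qxx = Q + A.T @ Vxx @ A
        Quu = R + B.T @ Vxx @ B
        Qux = B.T @ Vxx @ A
        k = -np.linalg.solve(Quu, Qu)       # feedforward term
        K = -np.linalg.solve(Quu, Qux)      # feedback gain
        ks.append(k); Ks.append(K)
        Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
        Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
    return ks[::-1], Ks[::-1]

def ilqr(x0, x_goal, iters=10):
    us = [np.zeros(1) for _ in range(T)]
    for _ in range(iters):
        xs = rollout(x0, us)
        ks, Ks = backward_pass(xs, us, x_goal)
        # Forward pass with the updated affine feedback policy.
        x_new, new_us = x0, []
        for t in range(T):
            u = us[t] + ks[t] + Ks[t] @ (x_new - xs[t])
            new_us.append(u)
            x_new = A @ x_new + B @ u
        us = new_us
    return rollout(x0, us)

xs = ilqr(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(xs[-1])  # final state should approach the goal [1, 0]
```

A constrained variant would augment this recursion, e.g. with control clamping, active-set treatment of box constraints, or penalty/barrier terms for obstacle avoidance; comparing such choices is exactly the subject of the thesis.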
The goal of this thesis is to develop reinforcement learning-based approaches for dynamic cloth manipulation. In this context, the thesis is expected to include high-fidelity cloth simulations, training of an agent in that simulation environment to perform dynamic manipulation tasks, and transfer of the learned policies to a real robot to test their generalization capabilities with clothes of different sizes.
In this thesis, an MPC-based trajectory planning and tracking method will be developed for an autonomous vehicle in simulation. A high-fidelity driving simulator will be employed to incorporate vehicle dynamics into the MPC constraints. The developed controller must guarantee collision-free, comfortable, and efficient driving performance in complex urban driving environments.
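The receding-horizon idea behind such a controller can be sketched as follows: at every sample, solve a finite-horizon tracking problem and apply only the first control. The snippet below is a minimal, unconstrained illustration on an assumed 1D double-integrator vehicle model with hand-picked weights; a real implementation would add collision and comfort constraints and the simulator's vehicle dynamics.

```python
import numpy as np

# Hedged sketch of linear MPC tracking via a batch quadratic program
# (unconstrained, so it reduces to one linear solve per step).
dt, N = 0.1, 20                          # sample time and prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                 # tracking-error weight (assumed)
R = 0.1 * np.eye(1)                      # control-effort weight (comfort proxy)

def mpc_step(x, x_ref):
    # Build batch prediction matrices: stacked x_k = Phi x + Gam u_{0:N-1}.
    nx, nu = 2, 1
    Phi = np.zeros((N * nx, nx))
    Gam = np.zeros((N * nx, N * nu))
    Ak = np.eye(nx)
    for k in range(N):
        Ak = A @ Ak
        Phi[k*nx:(k+1)*nx] = Ak
        for j in range(k + 1):
            Gam[k*nx:(k+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    ref = np.tile(x_ref, N)
    # Minimize (Phi x + Gam u - ref)' Qbar (.) + u' Rbar u over u.
    H = Gam.T @ Qbar @ Gam + Rbar
    g = Gam.T @ Qbar @ (Phi @ x - ref)
    u = np.linalg.solve(H, -g)
    return u[:nu]                        # receding horizon: apply first control only

# Closed-loop simulation: track the constant reference state [5, 0].
x = np.array([0.0, 0.0])
for _ in range(100):
    x = A @ x + B @ mpc_step(x, np.array([5.0, 0.0]))
print(x)  # should settle near the reference
```

In the thesis setting, the decision variables would be constrained (actuation limits, collision-free corridors, jerk bounds for comfort), which turns the linear solve into a QP solved online at each step.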
The goal of this thesis is to bridge the gap between slow representation learning and brain-inspired navigation. After a review of the relevant literature from both domains, an experiment will investigate whether slow representations can lead to the self-organization of structures similar to complex cells in the brain. If the results confirm the hypothesis, a further experiment will investigate whether the emerging structures can be used for navigation, similar to the hand-crafted place-cell network in ViTa-SLAM.
The goal of this project is to exploit more comprehensive information from human demonstrations in order to learn as many skill patterns as possible, according to the tasks at hand.
The goal of this assignment is to understand kernelized movement primitives (KMP) and to implement the kernelized treatment of orientation data in C++ on a real setup.
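As a rough illustration of the kernel machinery involved: the mean prediction in KMP amounts to a regularized kernel regression over reference points sampled from demonstrations. The toy below (Python for brevity, though the assignment calls for C++) sketches that idea for a scalar signal with an assumed RBF kernel and made-up data; the full method additionally propagates covariances, and the orientation case requires operating on quaternion data rather than scalars.

```python
import numpy as np

# Toy sketch of the kernel-regression core of KMP's mean prediction:
# given reference inputs s_i with means mu_i, predict at a query via
# k(s*, S) (K + lambda I)^{-1} mu.  Kernel, data, and lambda are assumptions.
def rbf(a, b, ell=0.1):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

s = np.linspace(0, 1, 20)            # reference time inputs
mu = np.sin(2 * np.pi * s)           # toy reference-trajectory means
lam = 1e-4                           # regularization weight

K = rbf(s, s)
alpha = np.linalg.solve(K + lam * np.eye(len(s)), mu)

def predict(s_query):
    return rbf(np.atleast_1d(s_query), s) @ alpha

print(predict(0.25))  # close to sin(pi/2) = 1
```

The assignment's contribution lies precisely in what this sketch omits: treating orientation trajectories, whose unit-quaternion states do not live in a Euclidean space, within this kernelized framework.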
The goal of this assignment is to understand dynamic movement primitives (DMPs) in the context of geometry awareness and to provide a C++ implementation of the algorithm.
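For orientation, a minimal scalar DMP looks as follows (sketched in Python for brevity, though the assignment asks for C++): a critically damped spring-damper system driven toward a goal, modulated by a learned forcing term that is phased out by a canonical system. The gains and basis-function layout below are common illustrative choices, not prescribed by the assignment; the geometry-aware extension would replace the Euclidean state with one evolving on a manifold (e.g. unit quaternions for orientation).

```python
import numpy as np

# Minimal discrete DMP in one dimension (illustrative parameter choices).
alpha, beta, alpha_x = 25.0, 25.0 / 4.0, 3.0    # critically damped gains
tau, dt = 1.0, 0.001                            # movement duration, step size
n_basis = 20
c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))      # basis centers in phase
h = 1.0 / np.diff(c, append=c[-1] * 0.5) ** 2          # basis widths

def forcing(x, w):
    # Phase-gated weighted sum of Gaussian basis functions.
    psi = np.exp(-h * (x - c) ** 2)
    return x * (psi @ w) / (psi.sum() + 1e-10)

def rollout(y0, g, w):
    y, yd, x = y0, 0.0, 1.0
    for _ in range(int(tau / dt)):
        ydd = (alpha * (beta * (g - y) - tau * yd) + forcing(x, w)) / tau**2
        x += (-alpha_x * x / tau) * dt           # canonical (phase) system
        yd += ydd * dt
        y += yd * dt
    return y

# With zero forcing weights the DMP is a stable spring-damper system that
# converges to the goal g regardless of the start state.
y_end = rollout(y0=0.0, g=1.0, w=np.zeros(n_basis))
print(y_end)  # approaches the goal 1.0
```

In practice the weights w are fitted to a demonstration by regression on the required forcing term; the geometry-aware question is how the spring-damper and forcing terms must change when y is not a vector-space quantity.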
The goal of this thesis is to increase the efficiency of reinforcement learning when only a limited number of examples is available, by providing a method for obtaining a large number of task-specific trajectories from only a few demonstrations.
The goal of this thesis is to integrate KMP with reinforcement learning to provide an automatic approach for adapting the trajectory and the goal in order to optimize a desired task.