ICRA 2023, here we come!
The Intelligent Robotics group will attend ICRA 2023. Read more!
Welcome to Tsvetomila Mihaylova
We welcome Tsvetomila as a new Postdoc in the group.
Welcome to Shivam Chaubey
We welcome Shivam as a new PhD Candidate in the group.
Master thesis on “Development of data-driven driver model”
Supervisor: Prof. Ville Kyrki (email@example.com). Advisors: Daulet Baimukashev (firstname.lastname@example.org), Shoaib Azam (email@example.com). Keywords: imitation learning, autonomous driving. Data-driven driver models are superior to rule-based models in interactive multi-agent scenarios where it is essential to consider agents’ behavior. For example, humans have diverse driving styles, such as aggressive, neutral, or defensive, and it is challenging to […]
Master thesis on “Visual Action Planning for Complex Object Manipulation”
This thesis addresses consecutive action planning for robot manipulation and will focus on implementing and improving a recently presented visual action planning approach for complex manipulation tasks.
Preventing Mode Collapse When Imitating Latent Policies From Observation
Robotics Seminar Series. Fourth Session 2022 – 25th November 2022. Speaker: Oliver Struckmeier
Semantic map generation in SUMO
This project aims to extend the functionality of the SUMO simulator with software packages that generate semantic representations and control the vehicles using low-level control actions. This enables the integration of data-driven vehicle models.
Welcome to Almas Shintemirov
We welcome Almas as a new Research Fellow in the group.
Research Activities in Robotics and Mechatronics at Nazarbayev University, Kazakhstan
Robotics Seminar Series. Third Session 2022 – 4th November 2022. Speaker: Almas Shintemirov
Master Thesis on “Interactive Bayesian Multiobjective Evolutionary Optimization in Reinforcement Learning Problems with Conflicting Reward Functions”
In many real-world problems, multiple conflicting objective functions need to be optimized simultaneously. For example, an investment company may want to construct an optimal portfolio of stocks that maximizes profit while minimizing risk. However, most reinforcement learning (RL) formulations do not explicitly consider the tradeoff between multiple conflicting reward functions and instead assume a scalarized single-objective reward function to be optimized. Multiobjective evolutionary optimization algorithms (MOEAs) can be used to find Pareto optimal policies by treating the multiple reward functions as separate objectives.
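The core idea above rests on Pareto dominance: a policy is Pareto optimal if no other candidate is at least as good on every objective and strictly better on at least one. As a minimal illustration (not part of the thesis description, and assuming all objectives are to be maximized), the following sketch filters a set of objective-score vectors down to its non-dominated front:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors.

    `points` is an (n, m) array: n candidates scored on m objectives,
    all of which are assumed to be maximized.
    """
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other q is >= p everywhere and > p somewhere.
        dominated = any(
            j != i and np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points)
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# Hypothetical (profit, negated risk) scores for four candidate policies.
scores = [(1.0, 0.2), (0.8, 0.9), (0.5, 0.5), (0.9, 0.95)]
front = pareto_front(scores)
# The front keeps the high-profit and the low-risk candidates;
# the other two are dominated.
```

In a MOEA such as NSGA-II, this dominance check drives selection at every generation, so the population converges toward an approximation of the whole Pareto front rather than a single scalarized optimum.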