As robotics research advances, there is a growing need for people-friendly communication between robots and humans. On the one hand, the decisions of autonomous systems need to be understandable to humans; on the other, humans need to be able to specify commands in a way that is natural to them.
In shared autonomy, where an operator takes control when the autonomous system is uncertain about a situation, our research focuses on automatically generating hypothetical explanations of model decisions and on analyzing which parts of the data a human would need in order to understand the situation and react in a timely manner.
Another research direction is allowing humans to specify commands to autonomous systems in an intuitive way, using natural language.
- Tsvetomila Mihaylova (tsvetomila.mihaylova(at)aalto.fi), Postdoctoral researcher.
- Ville Kyrki (ville.kyrki(at)aalto.fi), Professor, group leader.
The goal of this master's thesis is to integrate a large vision-language model (VLM) with a manipulation policy in order to control a robotic hand in predefined manipulation tasks, such as grasping or pushing.
The goal of this master's thesis is to survey existing approaches, datasets, and models that provide textual explanations of driving situations, to implement a state-of-the-art model, and to validate it on predefined driving conflict situations.