The ability to interact with its surroundings is an essential capability for a robot, especially in unstructured, human-inhabited environments. Knowledge of such an environment is usually obtained through sensors, and the study of acquiring that knowledge from sensor data is called robotic perception. Perception is the first step in many tasks, such as manipulation or human-robot interaction.
What we do
- Object detection for manipulation, where objects are recognized and manipulated in unknown environments;
- 3D object modelling, where models are created using a combination of vision and touch;
- Robot-to-robot interaction, where robots understand each other’s intentions through pointing gestures;
- Human-robot interaction, where the robot infers where users are directing their attention through vision;
- Assistive robotics, where we studied ways for a robot to perceive human respiratory rate.
In this project we investigate how to build maps that capture the robot’s uncertainty about the occupancy of the environment. We have shown that such maps can increase the safety of global navigation by planning trajectories that avoid areas of high uncertainty, enabling greater autonomy for mobile robots in indoor settings.
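The idea of avoiding areas of high uncertainty can be illustrated with a minimal sketch: treat each grid cell's occupancy probability as known, use its Shannon entropy as an uncertainty measure (highest at p = 0.5, i.e. unobserved cells), and penalize uncertain cells in the planner's cost. The grid values, occupancy threshold, and entropy-weighted cost below are illustrative assumptions, not the project's actual planner.

```python
import heapq
import math

def entropy(p):
    """Shannon entropy of an occupancy probability: peaks at p = 0.5 (unknown)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def plan(occ, start, goal, occ_thresh=0.65, risk_weight=4.0):
    """Dijkstra search over a grid of occupancy probabilities.

    Cells above occ_thresh are treated as hard obstacles; each remaining
    step costs 1 plus risk_weight * entropy, so the planner detours
    around uncertain regions whenever a confidently free path exists.
    """
    rows, cols = len(occ), len(occ[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, math.inf):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            p = occ[nr][nc]
            if p > occ_thresh:
                continue  # likely occupied: hard obstacle
            nd = d + 1.0 + risk_weight * entropy(p)
            if nd < dist.get((nr, nc), math.inf):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = cell
                heapq.heappush(frontier, (nd, (nr, nc)))
    return None  # goal unreachable

# Toy map: 0.05 = confidently free, 0.5 = unknown (never observed).
grid = [
    [0.05, 0.05, 0.05, 0.05],
    [0.05, 0.50, 0.50, 0.05],
    [0.05, 0.50, 0.50, 0.05],
    [0.05, 0.05, 0.05, 0.05],
]
path = plan(grid, (0, 0), (3, 3))
```

On this map the shortest route through the unknown centre and the route around the free border have the same number of steps, but the entropy penalty makes the planner pick the border: the trajectory stays entirely in confidently free cells.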
In robotic grasping, knowing the object’s shape allows for better grasp planning. In many environments, however, it is impossible to know a priori the shapes of all possible objects, so the object to be grasped is usually perceived through some sensory input, commonly vision. One of the main problems with this approach […]