Shape-based grasping

In robotic grasping, knowing the object shape allows for better grasp planning. However, in many environments it is impossible to know a priori the shape of all possible objects. For this reason, the object to be grasped is usually perceived through some sensory input, commonly vision. One of the main problems with this approach is that only one side of the object is perceived, since the object occludes its own back side.

To cope with this limitation, this project focuses on shape completion, training a deep network to estimate the complete object shape. In contrast to most recent work in the field, which focuses explicitly on producing more exact point estimates of the shape, this project takes a different viewpoint by also modeling the uncertainty over the completed shape. This uncertainty can then be incorporated into probabilistic grasp planners to enable robust grasp planning over uncertain shapes.
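The page does not spell out how the uncertainty is represented, so the following is only a minimal PyTorch sketch of one common way to set this up: Monte Carlo dropout, where keeping dropout active at test time turns repeated forward passes into samples of plausible completed shapes, and a grasp is scored by its average quality across those samples. The network architecture, `sample_completions`, and `quality_fn` are illustrative placeholders, not the project's actual implementation.

```python
import torch
import torch.nn as nn

class CompletionNet(nn.Module):
    """Toy 3D encoder-decoder: partial voxel grid in, occupancy
    probabilities out. The Dropout3d layers are the key ingredient:
    kept active at test time, they make each forward pass a different
    sample of a plausible completed shape (Monte Carlo dropout)."""
    def __init__(self, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.Dropout3d(p),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(), nn.Dropout3d(p),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def sample_completions(model, partial, n_samples=10):
    """Draw n_samples plausible completions of one partial view."""
    model.train()            # keep dropout active at inference time
    with torch.no_grad():
        return torch.stack([model(partial) for _ in range(n_samples)])

def expected_grasp_quality(grasp, completions, quality_fn):
    """Score a grasp by its mean quality over all sampled shapes, so the
    planner favours grasps that work on every plausible completion."""
    return sum(quality_fn(grasp, shape) for shape in completions) / len(completions)

# Example: a fake 32^3 partial occupancy grid (batch and channel dims first).
partial = (torch.rand(1, 1, 32, 32, 32) > 0.8).float()
shapes = sample_completions(CompletionNet(), partial)  # one stack of 10 samples
```

Averaging over samples is what distinguishes this from planning on a single point estimate: a grasp that fails on even one plausible completion is penalised, steering the planner toward grasps that are robust to the unobserved back side.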

We demonstrated, both in simulation and on real hardware, statistically significant improvements compared to shape completion without uncertainty.

A cup, seen only from the front, is reconstructed into multiple plausible shapes, allowing the planner to choose the grasp most likely to succeed on every shape.


Project updates

Beyond Top-Grasps Through Scene Completion

Current end-to-end grasp planning methods propose, on the order of seconds, grasps that attain high success rates on a diverse set of objects, but often by constraining the workspace to top-grasps. In this work, we present a method that allows end-to-end top-grasp planning methods to generate full six-degree-of-freedom grasps using a single RGB-D view […]
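As a rough sketch of the geometric idea behind this update (the text above is only a teaser): if the completed scene is rendered from virtual viewpoints and a top-grasp planner proposes a grasp in each virtual camera's frame, composing that pose with the camera pose yields a full six-degree-of-freedom grasp in the world frame. The `look_at` helper and the example poses below are hypothetical illustrations, not the paper's API.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Homogeneous world-from-camera transform for a virtual camera at
    `eye` whose z-axis points toward `target`."""
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, eye
    return T

def grasp_to_world(T_world_cam, T_cam_grasp):
    """Composing the camera pose with a grasp proposed in that camera's
    frame yields a full 6-DOF grasp pose in the world frame."""
    return T_world_cam @ T_cam_grasp

# A side-mounted virtual view of the completed scene: what the planner
# sees as a "top" grasp becomes a lateral 6-DOF grasp in the world.
T_world_cam = look_at(eye=np.array([0.5, 0.0, 0.2]), target=np.zeros(3))
T_cam_grasp = np.eye(4)  # placeholder grasp pose in the virtual camera frame
print(grasp_to_world(T_world_cam, T_cam_grasp))
```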