Shape-based grasping
In robotic grasping, knowing the object's shape allows for better grasp planning. However, in many environments it is impossible to know a priori the shapes of all possible objects. For this reason, the object to be grasped is usually perceived through some sensory input, commonly vision. One of the main problems with this approach is that only one side of the object is perceived, because the object occludes its own back side.
To cope with this limitation, in this project we focus on shape completion, training a deep network to estimate the complete object shape. In contrast to most recent work in the field, which focuses explicitly on generating more exact point estimates of the shape, this project takes a different viewpoint by also modeling the uncertainty over the completed shape. This uncertainty can then be incorporated into probabilistic grasp planners to enable robust grasp planning over uncertain shapes.
We demonstrated, both in simulation and on real hardware, statistically significant improvements over shape completion without uncertainty.
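One common way to obtain a distribution over completed shapes, rather than a single point estimate, is Monte Carlo dropout: keeping dropout active at test time and running several stochastic forward passes. The sketch below illustrates this idea; the toy network, its dimensions, and the function names are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of uncertainty-aware shape completion via Monte Carlo
# dropout. The architecture is a placeholder; only the sampling pattern
# (dropout kept active at test time) is the point being illustrated.
import torch
import torch.nn as nn


class ShapeCompletionNet(nn.Module):
    """Toy voxel-in, voxel-out completion network with dropout layers."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout3d(p=0.2),  # kept active at test time for MC sampling
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout3d(p=0.2),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-voxel occupancy probability
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def sample_completions(model, partial_voxels, n_samples=10):
    """Draw several stochastic completions of one partial observation."""
    model.train()  # keep dropout active (Monte Carlo dropout)
    stack = torch.stack([model(partial_voxels) for _ in range(n_samples)])
    mean = stack.mean(dim=0)  # point estimate of the completed shape
    var = stack.var(dim=0)    # per-voxel predictive uncertainty
    return stack, mean, var


if __name__ == "__main__":
    model = ShapeCompletionNet()
    partial = (torch.rand(1, 1, 32, 32, 32) > 0.95).float()  # fake partial scan
    completions, mean, var = sample_completions(model, partial)
    print(mean.shape, var.mean().item())
```

A probabilistic grasp planner can then evaluate each candidate grasp against every sampled completion and prefer grasps that succeed on most of them, which is what makes the planning robust to shape uncertainty.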
People involved
- Jens Lundell (jens.lundell@aalto.fi), doctoral candidate
- Francesco Verdoja (francesco.verdoja@aalto.fi), postdoctoral researcher
- Ville Kyrki (ville.kyrki@aalto.fi), professor
Project updates
Multi-FinGAN: Generative Coarse-To-Fine Sampling of Multi-Finger Grasps
Preprint: https://arxiv.org/pdf/2012.09696.pdf
Beyond Top-Grasps Through Scene Completion
Current end-to-end grasp planning methods propose grasps in a matter of seconds and attain high grasp success rates on a diverse set of objects, but often only by constraining the workspace to top-grasps. In this work, we present a method that allows end-to-end top-grasp planning methods to generate full six-degree-of-freedom grasps using a single RGB-D view […]
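Whatever the exact pipeline, turning a top-grasp into a full six-degree-of-freedom grasp requires plain SE(3) frame bookkeeping: a grasp proposed relative to some (possibly virtual) camera must be expressed in the robot's world frame. A minimal sketch of that step follows; the frame names and example poses are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: map a grasp pose given in a camera frame into the world
# frame by composing 4x4 homogeneous transforms. All numbers are made up.
import numpy as np


def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# World pose of a camera looking along the world x-axis (assumed example).
R_world_cam = np.array([[0.0, 0.0, 1.0],
                        [-1.0, 0.0, 0.0],
                        [0.0, -1.0, 0.0]])
T_world_cam = pose_to_matrix(R_world_cam, np.array([-0.5, 0.0, 0.3]))

# A "top-grasp" proposed by the planner in that camera's frame: straight
# along the viewing direction, 0.4 m in front of the camera.
T_cam_grasp = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.4]))

# Composing the two yields a full 6-DOF grasp in the world frame, even
# though the planner itself only reasoned about top-down grasps.
T_world_grasp = T_world_cam @ T_cam_grasp
print(np.round(T_world_grasp, 3))
```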