Master Thesis on “Imitation Learning from Human Demonstrations for Manipulation of Deformable Objects”

Supervisor: Prof. Ville Kyrki (ville.kyrki(at)aalto.fi)

Advisor: Dr. Almas Shintemirov (almas.shintemirov(at)aalto.fi)

Keywords: dynamic modeling, deep learning, deformable object manipulation.

Figure 1 – Simulation of a plastic bag manipulation by a Franka robotic arm in NVIDIA Isaac Sim.

Project Description

Manipulation of deformable objects such as textiles is a very challenging task due to the complexity and high dimensionality of the objects, which prevent direct use of traditional robot motion planning algorithms [1]. In this thesis, the student will implement 3D joystick teleoperation of a robot manipulator, both on an experimental Franka robot in the Aalto robot lab (TUAS building lobby) and in simulation, to collect demonstration data for manipulation tasks involving deformable objects of complex shape, such as loading rigid objects into a plastic bag [2]. The collected data will be used for training and testing a variety of imitation learning methods [3-5], such as Behavior Cloning and/or Inverse Reinforcement Learning, for autonomous manipulation of deformable objects in similar scenarios. The state-of-the-art NVIDIA Isaac Sim robotics simulator, which offers photo-realistic, physically accurate virtual environments to develop, test, and manage AI-based robots [6], will be used for synthetic data generation, learning algorithm training, and verification.
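
For concreteness, a minimal Behavior Cloning sketch using the listed tools (PyTorch) is given below: a policy network is fit by supervised regression from recorded observations to the demonstrated actions. The file name, state/action dimensions, and network size are illustrative assumptions only, not part of the project specification.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    STATE_DIM, ACTION_DIM = 14, 7  # assumed: robot state in, joint/Cartesian command out

    # Demonstrations collected via teleoperation (hypothetical file containing tensors
    # "states" of shape [N, STATE_DIM] and "actions" of shape [N, ACTION_DIM]).
    data = torch.load("demonstrations.pt")
    loader = DataLoader(TensorDataset(data["states"], data["actions"]),
                        batch_size=256, shuffle=True)

    # Small MLP policy mapping observations to actions.
    policy = nn.Sequential(
        nn.Linear(STATE_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, ACTION_DIM),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for epoch in range(100):
        for states, actions in loader:
            loss = nn.functional.mse_loss(policy(states), actions)  # regress to demonstrated actions
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()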

Deliverables

  • Literature review on imitation learning for robot manipulation tasks;
  • Human demonstration data collection using experimental/simulated robot teleoperation (see the logging sketch after this list);
  • Deploying relevant imitation learning algorithms using the collected human demonstration data;
  • Evaluation of the developed algorithms on virtual/experimental Franka robot.
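
A rough data-logging sketch for the teleoperation deliverable is shown below. It records synchronized (observation, action) pairs at a fixed rate and stores one episode per NumPy archive; read_robot_state and read_joystick_command are hypothetical placeholders for whichever robot and joystick interfaces are actually used.

    import time
    import numpy as np

    def record_episode(read_robot_state, read_joystick_command, rate_hz=20, max_steps=2000):
        """Log synchronized (observation, action) pairs during teleoperation."""
        observations, actions = [], []
        dt = 1.0 / rate_hz
        for _ in range(max_steps):
            obs = read_robot_state()        # e.g. joint angles + gripper width (placeholder)
            act = read_joystick_command()   # e.g. Cartesian velocity command (placeholder)
            observations.append(np.asarray(obs, dtype=np.float32))
            actions.append(np.asarray(act, dtype=np.float32))
            time.sleep(dt)
        return np.stack(observations), np.stack(actions)

    # Example usage: save one demonstration episode for later training.
    # obs, act = record_episode(read_robot_state, read_joystick_command)
    # np.savez("demo_episode_000.npz", observations=obs, actions=act)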

Practical Information

Pre-requisites: Python (high/medium), Machine/Deep Learning (medium), C++ (beginner)

Tools: PyTorch, Robot Operating System (ROS)

Start: Available Immediately

References

  1. H. A. Kadi and K. Terzić. Data-Driven Robotic Manipulation of Cloth-like Deformable Objects: The Present, Challenges and Future Prospects. Sensors, 2023. https://www.mdpi.com/1424-8220/23/5/2389
  2. L. Chen, et al. AutoBag: Learning to Open Plastic Bags and Insert Objects. https://arxiv.org/pdf/2210.17217.pdf
  3. B. Fang, et al. Survey of imitation learning for robotic manipulation. International Journal of Intelligent Robotics and Applications (2019) 3:362–369. https://link.springer.com/content/pdf/10.1007/s41315-019-00103-5.pdf
  4. A. Mandlekar, et al. What Matters in Learning from Offline Human Demonstrations for Robot Manipulation. https://arxiv.org/pdf/2108.03298
  5. C. Wang, et al. MimicPlay: Long-Horizon Imitation Learning by Watching Human Play. https://arxiv.org/pdf/2302.12422
  6. NVIDIA Isaac Sim. https://developer.nvidia.com/isaac-sim