Ryan Diaz

Session: Session 2
Board Number: 65

Large-Scale Object Model Generation for Learning Contact-Rich Robotic Manipulation Tasks

The recent success of data-driven methods in areas such as computer vision and natural language processing has prompted efforts in robotics to harness data at a similar scale. Applying data-driven methods to learning a contact-rich manipulation task typically involves running the task in simulation, which requires 3D models of the objects being manipulated. This project aims to generate object model data to help a dual-arm robotic agent learn to fit together two objects with a specific geometric relationship, mirroring real-world household tasks such as capping a bottle. The object models are procedurally generated in Blender with variations in shape and texture. The generated models are then used in simulation, where the success rate of the dual-arm setup at completing the task is measured. A real-world configuration of the task is also constructed to evaluate the transfer of simulation learning to the real world, with a dual-arm UR5 manipulator interacting with physical replicas of the generated objects. We hope that training the robotic agent on this dataset will allow it to generalize to unseen and potentially more complex geometries, enabling a similar setup to perform the task in everyday household contexts.
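The abstract does not detail the generation pipeline, but as a rough illustration, the sketch below shows how randomized object models might be produced with Blender's Python API (bpy). The cylinder primitive, parameter ranges, output paths, and dataset size are all illustrative assumptions, not the project's actual procedure; the script must be run inside Blender's bundled Python.

```python
# A minimal sketch of procedural model generation in Blender (bpy).
# Assumes Blender 3.2+ (for bpy.ops.wm.obj_export); all parameters are hypothetical.
import random
import bpy

NUM_MODELS = 100  # hypothetical dataset size

for i in range(NUM_MODELS):
    # Randomize shape: a cylinder stands in for a bottle-like body.
    radius = random.uniform(0.02, 0.05)  # meters, illustrative range
    depth = random.uniform(0.08, 0.20)
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=depth)
    obj = bpy.context.active_object

    # Randomize texture: a flat base color on a Principled BSDF material.
    mat = bpy.data.materials.new(name=f"mat_{i}")
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (
        random.random(), random.random(), random.random(), 1.0)
    obj.data.materials.append(mat)

    # Export only the newly created (still selected) object as OBJ.
    bpy.ops.wm.obj_export(filepath=f"/tmp/object_{i:03d}.obj",
                          export_selected_objects=True)

    # Delete the object so the next iteration starts from a clean scene.
    bpy.ops.object.delete()
```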