Publications

GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation

Haoran Lu*, Ruihai Wu*, Yitong Li*, Sijie Li, Ziyu Zhu, Chuanruo Ning, Yan Shen, Longzan Luo, Yuanpei Chen, Hao Dong

Published in NeurIPS 2024

Award: Spotlight Presentation at ICRA 2024 Workshop on Deformable Object Manipulation

We present GarmentLab, a benchmark designed for garment manipulation within realistic 3D indoor scenes. The benchmark encompasses a diverse range of garment types, robotic systems, and manipulators, including dexterous hands. Its wide variety of tasks enables further exploration of the interactions between garments, deformable objects, rigid bodies, fluids, and avatars.

Paper / Code / Project

Broadcasting Support Relations Recursively from Local Dynamics for Object Retrieval in Clutters

Yitong Li*, Ruihai Wu*, Haoran Lu, Chuanruo Ning, Yan Shen, Guanqi Zhan, Hao Dong

Published in RSS 2024

We study retrieving objects from complicated clutters via a novel method that recursively broadcasts accurate local dynamics to build a support relation graph of the whole scene, which greatly reduces the complexity of support relation inference and improves its accuracy.

Paper / Code / Project

Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

Chuanruo Ning*, Ruihai Wu*, Haoran Lu, Kaichun Mo, Hao Dong

Published in NeurIPS 2023

We introduce an affordance learning framework that effectively explores novel categories with minimal interactions on a limited number of instances. Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration, while concurrently transferring affordance knowledge to similar parts of the objects.

Paper / Code / Project

BiAssemble: Learning Collaborative Affordance for Bimanual Geometric Assembly

Yan Shen, Ruihai Wu, Yubin Ke, Xinyuan Song, Zeyi Li, Xiaoqi Li, Hongwei Fan, Haoran Lu, Hao Dong

Under Review

We exploit the geometric generalization capability of point-level affordance, learning affordances that enable both generalization and collaboration in long-horizon geometric assembly tasks.

Neural Dynamics Augmented Diffusion Policy

Ruihai Wu*, Mingtong Zhang*, Haozhe Chen*, Haoran Lu, Yitong Li, Yunzhu Li

Under Review

To reduce the number of demonstrations required for skill learning, we propose a dynamics-guided diffusion policy. The method leverages learned dynamics models, which explicitly model interactions over a much wider space than the regions covered by expert demonstrations alone.

ImageManip: Image-based Robotic Manipulation with Affordance-guided Next View Selection

Xiaoqi Li, Yanzi Wang, Yan Shen, Haoran Lu, Qianxu Wang, Boshi An, Jiaming Liu, Hao Dong

Under Review

We leverage geometric consistency to fuse the views, yielding a refined depth map and a more precise affordance map for robot manipulation decisions. Comparisons with prior works that take point clouds or RGB images as input demonstrate the effectiveness and practicality of our method.

Project