Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Xiaoqi Li, Yanzi Wang, Yan Shen, Haoran Lu, Qianxu Wang, Boshi An, Jiaming Liu, Hao Dong
Under Review
We leverage geometric consistency to fuse the views, resulting in a refined depth map and a more precise affordance map for robot manipulation decisions. Compared with prior works that adopt point clouds or RGB images as inputs, we demonstrate the effectiveness and practicality of our method.
Ruihai Wu*, Mingtong Zhang*, Haozhe Chen*, Haoran Lu, Yitong Li, Yunzhu Li
Under Review
To reduce the number of demonstrations required for skill learning, we propose a dynamics-guided diffusion policy. This method leverages learned dynamics models, which can explicitly model interactions in a much wider space than the regions covered by expert demonstrations alone.
Yan Shen, Ruihai Wu, Yubin Ke, Xinyuan Song, Zeyi Li, Xiaoqi Li, Hongwei Fan, Haoran Lu, Hao Dong
Under Review
We exploit the geometric generalization capability of point-level affordance, learning affordance that enables both generalization and collaboration in long-horizon geometric assembly tasks.
Chuanruo Ning*, Ruihai Wu*, Haoran Lu, Kaichun Mo, Hao Dong
Published in NeurIPS 2023
We introduce an affordance learning framework that effectively explores novel categories with minimal interactions on a limited number of instances. Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration while concurrently transferring affordance knowledge to similar parts of the objects.
Yitong Li*, Ruihai Wu*, Haoran Lu, Chuanruo Ning, Yan Shen, Guanqi Zhan, Hao Dong
Published in RSS 2024
In this paper, we study retrieving objects from complex clutter via a novel method that recursively broadcasts accurate local dynamics to build a support relation graph of the whole scene, which greatly reduces the complexity of support relation inference and improves its accuracy.
Ruihai Wu*, Haoran Lu*, Yiyan Wang, Yubo Wang, Hao Dong
Published in CVPR 2024
Award: Spotlight Presentation at ICRA 2024 Workshop on Deformable Object Manipulation
We propose to learn dense visual correspondence for diverse garment manipulation tasks with category-level generalization using only one- or few-shot human demonstrations.
Haoran Lu*, Ruihai Wu*, Yitong Li*, Sijie Li, Ziyu Zhu, Chuanruo Ning, Yan Shen, Longzan Luo, Yuanpei Chen, Hao Dong
Published in NeurIPS 2024
Award: Spotlight Presentation at ICRA 2024 Workshop on Deformable Object Manipulation
We present GarmentLab, a benchmark designed for garment manipulation within realistic 3D indoor scenes. Our benchmark encompasses a diverse range of garment types, robotic systems, and manipulators, including dexterous hands. The multitude of tasks included in the benchmark enables further exploration of the interactions between garments, deformable objects, rigid bodies, fluids, and avatars.
Published:
This is a description of your talk, which is a markdown file that can be formatted like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.