
RGB-D In-Hand Manipulation Dataset

This dataset contains 13 sequences of in-hand manipulation of objects from the YCB dataset.

 

RGB-D in-hand object manipulation is potentially the fastest and easiest way for novices to construct 3D models of household objects. However, it remains challenging to accurately segment the target object from the user’s hands and the background. To help the computer vision research community benchmark new algorithms on this problem, we are releasing a dataset with dense pixel-level annotations for in-hand scanning of hand-sized objects, intended for video object tracking under hand-object interaction. The dataset contains 13 sequences of in-hand manipulation of objects from the YCB dataset. Each sequence is 300 to 700 frames long (filmed at 30 fps) and shows the object manipulated in-hand so that all sides are revealed. Each sequence provides color images, color-aligned-to-depth images, and depth images. Pixel-wise annotations on the color-aligned-to-depth images are provided every 10 frames.

DOWNLOAD THE DATASET

DOWNLOAD THE COMPLETE DATASET (1.2GB)

 

The complete dataset (.zip) includes pixel-level object segments, color images (color), color-aligned-to-depth images (cad), and depth images for the 13 object sequences. The ground-truth annotations are aligned to the cad images.
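Below is a minimal Python sketch for iterating over one sequence. The folder names ("color", "cad", "depth"), the .png extension, and the example sequence name are assumptions about the archive layout; adjust them to match the extracted zip.

# Minimal loading sketch. Folder names ("color", "cad", "depth"), the .png
# extension, and the sequence name below are assumptions about the zip layout.
import glob
import os
import cv2

def load_sequence(seq_dir):
    """Yield (name, color, cad, depth) for every frame in one sequence."""
    for cad_path in sorted(glob.glob(os.path.join(seq_dir, "cad", "*.png"))):
        name = os.path.basename(cad_path)
        color = cv2.imread(os.path.join(seq_dir, "color", name), cv2.IMREAD_COLOR)
        cad = cv2.imread(cad_path, cv2.IMREAD_COLOR)
        # Depth maps are typically 16-bit; IMREAD_UNCHANGED keeps the raw values.
        depth = cv2.imread(os.path.join(seq_dir, "depth", name), cv2.IMREAD_UNCHANGED)
        yield name, color, cad, depth

for name, color, cad, depth in load_sequence("ycb_in_hand/sugar_box"):
    print(name, color.shape, depth.dtype)
    break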

DOWNLOAD THE SEGMENTATION RESULTS (12MB)

Download the segmentation results obtained by our algorithm; results are provided per frame and are aligned to the cad images.
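A simple way to compare the released results against the ground-truth annotations is per-frame intersection-over-union. The sketch below assumes both files are single-channel masks with nonzero foreground; the paths are placeholders.

# Per-frame IoU between a predicted mask and a ground-truth annotation.
# Assumes both are single-channel images with nonzero foreground; the
# paths below are placeholders.
import cv2
import numpy as np

def mask_iou(pred_path, gt_path):
    pred = cv2.imread(pred_path, cv2.IMREAD_GRAYSCALE) > 0
    gt = cv2.imread(gt_path, cv2.IMREAD_GRAYSCALE) > 0
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union

print(mask_iou("results/sugar_box/0010.png", "annotations/sugar_box/0010.png"))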

DOWNLOAD BACKGROUND SCRIBBLES (2MB)

 

Use this file if you initialize with user scribbles instead of the ground-truth segment. Each scribble image has two pixel values; a value of 0 marks pixels the user scribbled as sure background.
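As one example of scribble-based initialization, the sketch below converts a scribble image into an OpenCV GrabCut mask. The value 0 = sure background follows the description above; treating all remaining pixels as probable foreground is an assumption.

# Example use of the background scribbles to initialize OpenCV GrabCut.
# Value 0 = sure background (per the description above); treating every
# other pixel as "probable foreground" is an assumption.
import cv2
import numpy as np

def grabcut_from_scribbles(color_img, scribble_img, iters=5):
    mask = np.full(scribble_img.shape[:2], cv2.GC_PR_FGD, dtype=np.uint8)
    mask[scribble_img == 0] = cv2.GC_BGD  # user-scribbled sure background
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(color_img, mask, None, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_MASK)
    # Foreground = pixels labeled sure or probable foreground after optimization.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)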

DOWNLOAD THE YCB MESHES (2MB)

 

Download the 13 untextured Poisson meshes from the YCB dataset (http://www.ycbbenchmarks.com/).
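For reference, the meshes load directly with standard mesh libraries; the sketch below uses trimesh, and the filename is a placeholder.

# Load one of the untextured Poisson meshes with trimesh (pip install trimesh).
# The filename is a placeholder.
import trimesh

mesh = trimesh.load("ycb_meshes/006_mustard_bottle.ply")
print(mesh.vertices.shape, mesh.faces.shape, mesh.is_watertight)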

PUBLICATION


Fan Wang and Kris Hauser
In-hand Object Scanning via RGB-D Video Segmentation. International Conference on Robotics and Automation (ICRA), 2019.

[BibTeX] [PDF]

The paper proposes a technique for 3D object scanning via in-hand manipulation, in which an object is reoriented in front of a video camera with multiple grasps and regrasps. In-hand object tracking is a significant challenge under fast movement, rapid appearance changes, and occlusions. Our tracking algorithm, BackFlow, tracks arbitrary in-hand objects more effectively than existing techniques. Experiments show that our method achieves a 6% increase in accuracy compared to top-performing video tracking algorithms.

 

The improved tracking accuracy results in noticeably higher-quality reconstructed models. Moreover, testing with a novice user on a set of 200 objects demonstrates relatively rapid construction of complete 3D object models.

Citation

If you use this dataset or code, please cite the following:

@inproceedings{Wang2019,
   title={In-hand Object Scanning via RGB-D Video Segmentation},
   author={Fan Wang and Kris Hauser},
   booktitle={2019 IEEE International Conference on Robotics and Automation (ICRA)},
   year={2019}
}

 

Download the code HERE

PEOPLE

FAN WANG
PHD STUDENT

fan.wang2 at duke.edu

KRIS HAUSER
ASSOCIATE PROFESSOR

kris.hauser at duke.edu

CONTACT US

If you have any questions regarding the site, please email faninedinburgh at gmail.com or leave a message here.
