FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

External organisations

  • College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China.


In this paper, we focus on category-level 6D pose and size estimation from a monocular RGB-D image. Previous methods suffer from inefficient category-level pose feature extraction, which leads to low accuracy and slow inference. To tackle this problem, we propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation. First, we design an orientation-aware autoencoder with 3D graph convolution for latent feature extraction. Thanks to the shift and scale invariance of 3D graph convolution, the learned latent feature is insensitive to point shift and object size. Then, to efficiently decode category-level rotation information from the latent feature, we propose a novel decoupled rotation mechanism that employs two decoders to complementarily access the rotation information. We estimate translation and size via two residuals: the difference between the mean of the object points and the ground-truth translation, and the difference between the mean size of the category and the ground-truth size, respectively. Finally, to increase the generalization ability of FS-Net, we propose an online box-cage-based 3D deformation mechanism to augment the training data. Extensive experiments on two benchmark datasets show that the proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation. In particular, for category-level pose estimation, our method outperforms existing methods by 6.3% on the NOCS-REAL dataset without using extra synthetic data.
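To make the residual formulation above concrete, the following minimal sketch shows how translation and size would be recovered from network-predicted residuals. This is not code from the paper; the function name, the residual inputs, and the illustrative category mean sizes are all hypothetical, standing in for the network's actual prediction heads.

```python
import numpy as np

# Illustrative per-category mean sizes in metres (hypothetical values,
# not taken from the paper or the NOCS-REAL dataset).
CATEGORY_MEAN_SIZE = {"mug": np.array([0.21, 0.15, 0.18])}

def recover_translation_and_size(points, pred_t_residual, pred_s_residual, category):
    """Recover absolute translation and size from predicted residuals.

    points:          (N, 3) observed object points from the RGB-D input
    pred_t_residual: (3,) network-predicted translation residual
    pred_s_residual: (3,) network-predicted size residual
    """
    # Translation = mean of the observed object points + predicted residual.
    translation = points.mean(axis=0) + pred_t_residual
    # Size = mean size of the category + predicted residual.
    size = CATEGORY_MEAN_SIZE[category] + pred_s_residual
    return translation, size

# Example usage with dummy data: with zero residuals, the estimates fall
# back to the point-cloud mean and the category mean size.
pts = np.random.rand(500, 3)
t, s = recover_translation_and_size(pts, np.zeros(3), np.zeros(3), "mug")
```

Predicting residuals rather than absolute values keeps the regression targets small and centred, which is a common design choice for making such estimators easier to train.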


Original language: English
Title of host publication: Conference on Computer Vision and Pattern Recognition
Publication status: Accepted/In press - 1 Mar 2021