Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss

Jianbo Jiao, Ying Cao, Yibing Song, Rynson W. H. Lau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Monocular depth estimation benefits greatly from learning-based techniques. By studying the training data, we observe that the per-pixel depth values in existing datasets typically exhibit a long-tailed distribution. However, most previous approaches treat all regions in the training data equally regardless of this imbalanced depth distribution, which restricts model performance, particularly in distant depth regions. In this paper, we investigate the long-tail property and delve deeper into the distant depth regions (i.e., the tail part) to propose an attention-driven loss for network supervision. In addition, to better leverage semantic information for monocular depth estimation, we propose a synergy network that automatically learns information-sharing strategies between the two tasks. With the proposed attention-driven loss and synergy network, the depth estimation and semantic labeling tasks are mutually improved. Experiments on the challenging indoor dataset show that the proposed approach achieves state-of-the-art performance on both monocular depth estimation and semantic labeling tasks.
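The core idea of the attention-driven loss is to re-weight the per-pixel regression error so that distant (long-tail) depth regions receive more gradient signal than the over-represented near regions. Below is a minimal PyTorch sketch of this idea; the linear weighting function `alpha`, the `max_depth` parameter, and the L1 base loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def attention_driven_l1_loss(pred, gt, max_depth=10.0):
    """L1 depth loss whose per-pixel weight grows with ground-truth
    depth, so distant (long-tail) regions contribute more gradient.

    pred, gt: (B, 1, H, W) depth maps in metres.
    max_depth: assumed maximum scene depth (hypothetical value).
    """
    # Attention weight: 1.0 for the nearest pixels, up to 2.0 for the
    # farthest. Any monotonically increasing function of gt would do;
    # this linear form is an assumption for illustration.
    alpha = 1.0 + gt / max_depth
    return (alpha * (pred - gt).abs()).mean()

# Usage sketch with random tensors standing in for network output
# and ground truth:
pred = torch.rand(2, 1, 240, 320, requires_grad=True) * 10.0
gt = torch.rand(2, 1, 240, 320) * 10.0
loss = attention_driven_l1_loss(pred, gt)
loss.backward()
```

The design choice here is that re-weighting leaves the loss differentiable everywhere and requires no resampling of the training data, which is why such schemes are a common remedy for long-tailed regression targets.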
Original language: English
Title of host publication: Computer Vision – ECCV 2018
Publication status: Published - Sep 2018

