FormResNet: Formatted Residual Learning for Image Restoration

Jianbo Jiao, Wei-Chih Tu, Shengfeng He, Rynson W. H. Lau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we propose a deep CNN that tackles the image restoration problem by learning the structured residual. Previous deep-learning-based methods directly learn the mapping from corrupted images to clean images and may suffer from the gradient exploding/vanishing problems of deep neural networks. We instead address image restoration by jointly learning the structured details and recovering the latent clean image, exploiting the information shared between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we add a “residual formatting layer” that formats the residual into structured information, which allows the network to converge faster and boosts performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches both quantitatively and qualitatively.
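
Below is a minimal sketch of the residual-learning idea the abstract describes: a CNN predicts the corruption (residual), which is subtracted from the corrupted input to recover the clean image, and training combines a pixel-level term with a coarser term standing in for the semantic-level loss. The paper's actual "residual formatting layer" and cross-level loss net are not specified in the abstract, so the layer sizes, the downsampled-image proxy for the semantic term, and all names here are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualRestorationNet(nn.Module):
    """Predicts the residual (corruption) and subtracts it from the input."""

    def __init__(self, channels=3, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.residual_branch = nn.Sequential(*layers)

    def forward(self, corrupted):
        residual = self.residual_branch(corrupted)   # learn the corruption
        return corrupted - residual                  # recover the latent clean image


def cross_level_loss(restored, clean, semantic_weight=0.1):
    """Pixel-level accuracy plus a coarse stand-in for a semantic-level term."""
    pixel = F.mse_loss(restored, clean)
    # Hypothetical proxy: compare heavily downsampled images instead of a loss net.
    semantic = F.mse_loss(F.avg_pool2d(restored, 8), F.avg_pool2d(clean, 8))
    return pixel + semantic_weight * semantic


if __name__ == "__main__":
    net = ResidualRestorationNet()
    noisy = torch.rand(1, 3, 64, 64)
    clean = torch.rand(1, 3, 64, 64)
    loss = cross_level_loss(net(noisy), clean)
    loss.backward()
    print(float(loss))
```

The design choice being illustrated is that the network only has to model the (typically sparse, structured) corruption rather than the full clean image, which is the residual-learning premise the abstract builds on.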
Original language: English
Title of host publication: Proceedings: 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2017)
DOIs
Publication status: Published - Jul 2017
