TY - GEN
T1 - FormResNet
T2 - Formatted Residual Learning for Image Restoration
AU - Jiao, Jianbo
AU - Tu, Wei-Chih
AU - He, Shengfeng
AU - Lau, Rynson W. H.
PY - 2017/7
Y1 - 2017/7
AB - In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning-based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by jointly learning the structured details and recovering the latent clean image, from the information shared between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a “residual formatting layer” to format the residual into structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches both quantitatively and qualitatively.
UR - https://scholars.cityu.edu.hk/en/publications/formresnet(bbaecf99-dccd-4427-8859-5826574e25d5).html
U2 - 10.1109/CVPRW.2017.140
DO - 10.1109/CVPRW.2017.140
M3 - Conference contribution
SN - 9781538607336
SN - 9781538607343
BT - Proceedings: 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2017)
ER -