Spatially-adaptive filter units for deep neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

External organisations

  • University of Ljubljana

Abstract

Classical deep convolutional networks increase receptive field size either by gradual resolution reduction or by applying hand-crafted dilated convolutions to prevent an increase in the number of parameters. In this paper we propose a novel displaced aggregation unit (DAU) that does not require hand-crafting. In contrast to classical filters, whose units (pixels) are placed on a fixed regular grid, the displacements of the DAUs are learned, which enables filters to spatially adapt their receptive field to a given problem. We extensively demonstrate the strength of DAUs on classification and semantic segmentation tasks. Compared to ConvNets with regular filters, ConvNets with DAUs achieve comparable performance with faster convergence and up to a 3-fold reduction in parameters. Furthermore, DAUs allow us to study deep networks from novel perspectives: we study the spatial distributions of DAU filters and analyze the number of parameters allocated for spatial coverage in a filter.
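For intuition, the sketch below illustrates the core idea of a DAU-style layer in PyTorch: each unit carries a scalar weight and a learned 2-D displacement, so the filter's spatial extent is a trainable quantity rather than a fixed grid. The class name `DAUConv2d` and the parameters `num_units` and `max_disp` are illustrative assumptions, and sub-pixel displacements are approximated here with bilinear sampling, which may differ from the interpolation used in the paper; treat this as a conceptual sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAUConv2d(nn.Module):
    """Illustrative displaced-aggregation layer (conceptual sketch).

    Each output channel aggregates `num_units` units per input channel;
    every unit has a scalar weight and a learned 2-D displacement.
    Sub-pixel shifts are realised with bilinear sampling (grid_sample),
    which stands in for the paper's interpolation scheme.
    """

    def __init__(self, in_ch, out_ch, num_units=4, max_disp=4.0):
        super().__init__()
        self.in_ch, self.out_ch, self.K = in_ch, out_ch, num_units
        self.max_disp = max_disp  # maximum displacement in pixels (assumed)
        # one scalar weight per (output channel, input channel, unit)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, num_units) * 0.1)
        # learned (dx, dy) displacement per (input channel, unit)
        self.disp = nn.Parameter(torch.empty(in_ch, num_units, 2).uniform_(-1, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        # base sampling grid in normalised [-1, 1] coordinates
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack((gx, gy), dim=-1)                      # (h, w, 2)

        out = x.new_zeros(n, self.out_ch, h, w)
        for k in range(self.K):
            # bounded pixel displacement, converted to normalised offsets
            d = torch.tanh(self.disp[:, k]) * self.max_disp       # (in_ch, 2)
            d = d * x.new_tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)])
            # shift each input channel by its own learned displacement
            shifted = []
            for i in range(c):
                grid = (base + d[i]).unsqueeze(0).expand(n, -1, -1, -1)
                shifted.append(F.grid_sample(x[:, i:i + 1], grid,
                                             align_corners=True))
            shifted = torch.cat(shifted, dim=1)                   # (n, in_ch, h, w)
            # weighted aggregation over input channels (a 1x1 convolution)
            out = out + F.conv2d(
                shifted, self.weight[:, :, k].unsqueeze(-1).unsqueeze(-1))
        return out
```

Used as a drop-in replacement for a standard convolution, e.g. `DAUConv2d(64, 128, num_units=4)` instead of a 3x3 convolution, the per-unit displacements are learned jointly with the weights by backpropagation, so the receptive field adapts to the task instead of being hand-crafted.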

Details

Original language: English
Title of host publication: Proceedings of the 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018)
Publication status: Published - 17 Dec 2018
Event: 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018) - Salt Lake City, Utah, United States
Duration: 19 Jun 2018 - 21 Jun 2018

Conference

Conference: 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018)
Country: United States
City: Salt Lake City, Utah
Period: 19/06/18 - 21/06/18