Spatially-adaptive filter units for deep neural networks

Domen Tabernik, Matej Kristan, Ales Leonardis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Classical deep convolutional networks increase receptive field size either by gradual resolution reduction or by applying hand-crafted dilated convolutions to prevent an increase in the number of parameters. In this paper we propose a novel displaced aggregation unit (DAU) that does not require hand-crafting. In contrast to classical filters with units (pixels) placed on a fixed regular grid, the displacements of the DAUs are learned, which enables filters to spatially adapt their receptive field to a given problem. We extensively demonstrate the strength of DAUs on classification and semantic segmentation tasks. Compared to ConvNets with regular filters, ConvNets with DAUs achieve comparable performance with faster convergence and up to a 3-fold reduction in parameters. Furthermore, DAUs allow us to study deep networks from novel perspectives. We study spatial distributions of DAU filters and analyze the number of parameters allocated for spatial coverage in a filter.
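To make the core idea concrete, the following is a minimal NumPy sketch of displaced aggregation, not the authors' implementation: each unit contributes its learned weight times the input sampled at a learned, possibly sub-pixel displacement (here via bilinear interpolation, with border clamping as an assumed boundary policy). The function names and the per-unit (weight, displacement) parameterization are illustrative; the paper's actual DAUs are trained end-to-end with backpropagation.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a 2-D array at fractional (y, x) with bilinear
    interpolation, clamping coordinates to the border."""
    h, w = img.shape
    y = float(np.clip(y, 0, h - 1))
    x = float(np.clip(x, 0, w - 1))
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def dau_response(img, weights, displacements):
    """Response map of one displaced-aggregation filter:
    out(i, j) = sum_k weights[k] * img(i + dy_k, j + dx_k),
    where the displacements (dy_k, dx_k) would be learned."""
    h, w = img.shape
    out = np.zeros((h, w))
    for wk, (dy, dx) in zip(weights, displacements):
        for i in range(h):
            for j in range(w):
                out[i, j] += wk * bilinear_sample(img, i + dy, j + dx)
    return out
```

With a single unit of unit weight and zero displacement this reduces to the identity, and a fractional displacement blends neighbouring pixels; a regular 3x3 convolution corresponds to nine units frozen at integer grid offsets, whereas a DAU-style filter can cover a wide receptive field with far fewer, freely placed units.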
Original language: English
Title of host publication: Proceedings of 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018)
Publisher: IEEE Computer Society Press
Pages: 9388-9396
Number of pages: 9
DOIs
Publication status: Published - 17 Dec 2018
Event: 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018) - Salt Lake City, Utah, United States
Duration: 19 Jun 2018 - 21 Jun 2018

Conference

Conference: 30th IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR 2018)
Country/Territory: United States
City: Salt Lake City, Utah
Period: 19/06/18 - 21/06/18
