Multi-scale adaptive feature fusion network for semantic segmentation in remote sensing images

Ronghua Shang, Jiyu Zhang*, Licheng Jiao, Yangyang Li, Naresh Marturi, Rustam Stolkin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of complicated backgrounds, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on a simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in the target sizes. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation in remote sensing images. It is an encoder-decoder structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds a channel attention mechanism to fuse semantic features: the high- and low-level semantic features are concatenated and passed through global average pooling to generate global features, which are then fed to a fully connected layer to obtain adaptive weights for each channel. To accomplish an efficient fusion, these learned weights are applied to the fused features. The performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed on the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other networks, with overall accuracy reaching 89.4% and 88.2%, respectively, and average F1 scores reaching 90.4% and 86.7%, respectively.
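For illustration only, a minimal PyTorch sketch of the two modules outlined in the abstract might look as follows. This is not the authors' implementation; the number of parallel branches, the dilation rates, the channel sizes, and the reduction ratio of the fully connected layers are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleContextModule(nn.Module):
    """Parallel atrous convolutions plus an image-level (global average pooling)
    branch, loosely following the MCM described in the abstract.
    Branch count and dilation rates here are assumptions."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Image-level context via global average pooling.
        self.gap_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.project = nn.Conv2d(out_ch * (len(dilations) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        gap = F.interpolate(self.gap_branch(x), size=(h, w),
                            mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [gap], dim=1))

class AdaptiveFusionModule(nn.Module):
    """Channel-attention fusion of high- and low-level features,
    loosely following the AFM described in the abstract."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, low, high):
        # Upsample high-level features and concatenate with low-level ones.
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([low, high], dim=1)            # (B, 2C, H, W)
        # Global average pooling -> per-channel weights via fully connected layers.
        weights = self.fc(fused.mean(dim=(2, 3)))        # (B, 2C)
        fused = fused * weights.unsqueeze(-1).unsqueeze(-1)
        return self.project(fused)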

Original language: English
Article number: 872
Journal: Remote Sensing
Volume: 12
Issue number: 5
DOIs
Publication status: Published - 1 Mar 2020

Bibliographical note

Funding Information:
Funding: This research was funded by the National Natural Science Foundation of China under Grants Nos. 61773304, 61836009, 61871306, 61772399 and U1701267, the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) under Grants No. B07048, and the Program for Cheung Kong Scholars and Innovative Research Team in University under Grant IRT1170.

Publisher Copyright:
© 2020 by the author. Licensee MDPI, Basel, Switzerland.


Keywords

  • Adaptive fusion
  • CNN
  • Deep learning
  • Multi-scale context
  • Remote sensing image
  • Semantic segmentation

ASJC Scopus subject areas

  • Earth and Planetary Sciences (all)
