Abstract
The human visual system has limited capacity for processing multiple visual inputs simultaneously. Consequently, humans rely on shifting their attention from one location to another. When viewing images of complex scenes, psychology studies and behavioural observations show that humans prioritise and sequentially shift attention among multiple visual stimuli. In this paper, we propose to predict the saliency rank of multiple objects by inferring human attention shift. We first construct a new large-scale salient object ranking dataset, with the saliency rank of objects defined by the order in which an observer attends to these objects via attention shift. We then propose a new deep learning-based model that leverages both bottom-up and top-down attention mechanisms for saliency rank prediction. Our model includes three novel modules: the Spatial Mask Module (SMM), the Selective Attention Module (SAM) and the Salient Instance Edge Module (SIEM). SMM integrates bottom-up and semantic object properties to enhance contextual object features, from which SAM learns the dependencies between object features and image features for saliency reasoning. SIEM is designed to improve the segmentation of salient objects, which in turn further improves their rank predictions. Experimental results show that our proposed network achieves state-of-the-art performance on the salient object ranking task across multiple datasets. Code and data are available at https://github.com/SirisAvishek/Attention_Shift_Ranks.
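To illustrate the general idea of learning dependencies between per-object features and global image features for saliency-rank reasoning, the following is a minimal, hypothetical PyTorch sketch. It is not the paper's actual SAM architecture: the class name, feature dimensions, use of multi-head cross-attention, and the linear ranking head are all illustrative assumptions.

```python
# Hypothetical sketch: objects attend to image features (cross-attention),
# then a linear head maps each object feature to a saliency-rank score.
# All names and dimensions are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class SelectiveAttentionSketch(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Queries are object features; keys/values are image features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Illustrative head producing one rank score per object.
        self.rank_head = nn.Linear(dim, 1)

    def forward(self, obj_feats, img_feats):
        # obj_feats: (B, N_objects, dim)  -- e.g. contextual object features
        # img_feats: (B, H*W, dim)        -- flattened image feature map
        attended, _ = self.cross_attn(obj_feats, img_feats, img_feats)
        obj_feats = self.norm(obj_feats + attended)      # residual + norm
        return self.rank_head(obj_feats).squeeze(-1)     # (B, N_objects)


if __name__ == "__main__":
    sam = SelectiveAttentionSketch()
    obj = torch.randn(2, 5, 256)        # 5 candidate objects per image
    img = torch.randn(2, 49 * 49, 256)  # 49x49 feature map, flattened
    print(sam(obj, img).shape)          # torch.Size([2, 5])
```

Objects could then be ranked by sorting these per-object scores; the paper's full pipeline additionally uses the SMM-enhanced features as input and the SIEM branch to refine segmentation.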
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 964–986 |
| Number of pages | 23 |
| Journal | International Journal of Computer Vision |
| Volume | 132 |
| Early online date | 18 Oct 2023 |
| DOIs | |
| Publication status | Published - Mar 2024 |
Bibliographical note
Funding Information: This work was funded by a Swansea University Doctoral Training Postgraduate Research Scholarship 0301[164]. For the purpose of Open Access the author has applied a CC BY copyright licence to any Author Accepted Manuscript version arising from this submission. Jianbo Jiao is supported by the Royal Society grant IES\R3\223050, and was supported by the EPSRC Programme Grant Visual AI EP/T028572/1. Gary Tam is supported by the Royal Society grant IEC/NSFC/211159. This work was supported by the Research Grants Council of Hong Kong (Grant No.: 11205620), and a Strategic Research Grant from City University of Hong Kong (Ref.: 7005674).
Publisher Copyright:
© 2023, The Author(s).
Keywords
- Attention shift
- Saliency
- Saliency ranking
- Salient object detection
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
- Artificial Intelligence
Projects
CLRM3D: Continual Large-scale Representation Learning from Multi-Modal Medical Data
Jiao, J. (Principal Investigator)
18/04/23 → 17/04/25
Project: Research Councils