Continuously adaptive data fusion and model re-learning for particle filter tracking with multiple features

Jingjing Xiao, Rustam Stolkin, Mourad Oussalah, Ales Leonardis

Research output: Contribution to journal › Article › peer-review

17 Citations (Scopus)
234 Downloads (Pure)


This paper presents a new method for object tracking in a camera sensor with particle filters. The method enables multiple target and background models, arbitrarily spanning many features or imaging modalities, to be adaptively fused to provide optimal discriminating ability against changing backgrounds, which may present varying degrees of clutter and camouflage for different kinds of features at different times. Furthermore, we show how to continuously and robustly relearn all models for all feature modalities online during tracking and for targets whose appearance may be continually changing. Both the data fusion weightings and model relearning parameters are robustly adapted at each frame, by extracting contextual information to inform the saliency assessments of each part of each model. In addition, we propose a two-step estimation method for improving robustness, by preventing excessive drifting of particles during tracking past challenging, cluttered background scenes. We demonstrate the method by implementing a version of the tracker, which combines both shape and color models, and testing it on a publicly available benchmark data set. Results suggest that the proposed method outperforms a number of well-known state-of-the-art trackers from the literature.
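The core idea described above — fusing per-feature likelihoods with weights that track each feature's current discriminating power against the background, and relearning models online at a rate tied to that reliability — can be sketched in simplified form as follows. This is an illustrative assumption of how such a scheme might look, not the authors' implementation; all function names and constants (e.g. `base_rate`) are hypothetical.

```python
# Hypothetical sketch of adaptive feature fusion and online model relearning
# for a particle filter tracker. Names and values are illustrative, not the
# authors' code.

def discriminability(fg_score, bg_score):
    """Saliency of a feature: how much better it matches the target
    than the surrounding background (clamped to be non-negative)."""
    return max(fg_score - bg_score, 0.0)

def fuse_weights(saliencies):
    """Normalise per-feature saliencies into data-fusion weights."""
    total = sum(saliencies) or 1.0  # guard against all-zero saliencies
    return [s / total for s in saliencies]

def fused_likelihood(feature_likelihoods, weights):
    """Weighted combination of per-feature particle likelihoods."""
    return sum(w * l for w, l in zip(weights, feature_likelihoods))

def relearn(model, observation, saliency, base_rate=0.1):
    """Blend the stored model toward the new observation; the effective
    learning rate grows with the feature's current reliability."""
    rate = base_rate * saliency
    return [(1 - rate) * m + rate * o for m, o in zip(model, observation)]

# Example: shape is currently more discriminative than colour, so it
# dominates the fused particle likelihood.
saliencies = [discriminability(0.9, 0.3), discriminability(0.5, 0.3)]
weights = fuse_weights(saliencies)            # [0.75, 0.25]
score = fused_likelihood([0.8, 0.4], weights)
```

A feature that becomes camouflaged against the background (fg_score ≈ bg_score) receives near-zero saliency, so both its fusion weight and its relearning rate are suppressed until it becomes discriminative again.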
Original language: English
Journal: IEEE Sensors Journal
Issue number: 8
Early online date: 5 Jan 2016
Publication status: Published - 15 Apr 2016


  • HOG feature
  • colour histogram
  • data fusion
  • online model learning
  • particle filter
  • visual object tracking


