Continuously adaptive data fusion and model re-learning for particle filter tracking with multiple features
Research output: Contribution to journal › Article
This paper presents a new method for object tracking in a camera sensor using particle filters. The method enables multiple target and background models, spanning arbitrarily many features or imaging modalities, to be adaptively fused for optimal discrimination against changing backgrounds, which may present varying degrees of clutter and camouflage for different features at different times. Furthermore, we show how to continuously and robustly relearn all models for all feature modalities online during tracking, even for targets whose appearance is continually changing. Both the data-fusion weightings and the model-relearning parameters are robustly adapted at each frame by extracting contextual information to inform the saliency assessments of each part of each model. In addition, we propose a two-step estimation method that improves robustness by preventing excessive drifting of particles when tracking past challenging, cluttered background scenes. We demonstrate the method by implementing a version of the tracker that combines shape and color models, and testing it on a publicly available benchmark data set. Results suggest that the proposed method outperforms a number of well-known state-of-the-art trackers from the literature.
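The core idea described above, fusing per-particle likelihoods from several feature cues with adaptive weights before the estimation and resampling steps of a particle filter, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion weights are passed in as given values, whereas the paper adapts them each frame from contextual saliency assessments; the two cues here are toy Gaussian scores standing in for shape (HOG) and color-histogram likelihoods.

```python
import numpy as np

def fuse_likelihoods(likelihoods, fusion_weights):
    """Combine per-cue particle likelihoods with fusion weights.

    likelihoods:    (n_cues, n_particles) array, one row per feature cue
                    (e.g. a shape/HOG cue and a colour-histogram cue).
    fusion_weights: (n_cues,) weights; in the paper these are adapted per
                    frame from how well each cue separates target from
                    background -- here they are simply inputs (assumption).
    """
    w = np.asarray(fusion_weights, dtype=float)
    w = w / w.sum()  # normalise so the fused score is a convex combination
    return w @ likelihoods

def estimate_and_resample(particles, fused_likelihoods, rng):
    """Posterior-mean state estimate followed by systematic resampling."""
    w = fused_likelihoods / fused_likelihoods.sum()
    estimate = w @ particles  # weighted mean of particle states
    n = len(particles)
    # systematic resampling: one random offset, n evenly spaced positions
    positions = (rng.random() + np.arange(n)) / n
    idx = np.clip(np.searchsorted(np.cumsum(w), positions), 0, n - 1)
    return estimate, particles[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_x = 5.0  # hypothetical 1-D target position
    particles = rng.normal(true_x, 2.0, size=500)
    # Toy cues: both peak at the target, but the "camouflaged" colour cue
    # is much flatter, so it receives a lower fusion weight.
    shape_like = np.exp(-0.5 * ((particles - true_x) / 0.5) ** 2)
    color_like = np.exp(-0.5 * ((particles - true_x) / 3.0) ** 2)
    fused = fuse_likelihoods(np.stack([shape_like, color_like]), [0.7, 0.3])
    estimate, particles = estimate_and_resample(particles, fused, rng)
    print(f"estimated position: {estimate:.2f}")
```

In a full tracker this fuse/estimate/resample cycle would run once per frame, with the fusion weights and the cue models themselves relearned online as the paper describes.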
Journal: IEEE Sensors Journal
Early online date: 5 Jan 2016
Publication status: Published - 15 Apr 2016
- HOG feature, visual object tracking, color histogram, data fusion, online model learning, particle filter