An adaptive coupled-layer visual model for robust visual tracking

L. Čehovin, M. Kristan, A. Leonardis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

82 Citations (Scopus)


This paper addresses the problem of tracking objects that undergo rapid and significant appearance changes. We propose a novel coupled-layer visual model that combines the target's global and local appearance. The local layer is a set of local patches that geometrically constrain changes in the target's appearance; it adapts probabilistically to the target's geometric deformation, while its structure is updated by removing and adding patches. The addition of patches is constrained by the global layer, which probabilistically models the target's global visual properties such as color, shape, and apparent local motion. The global visual properties are in turn updated during tracking using the stable patches from the local layer. Through this coupled constraint between the adaptation of the global and the local layer, we achieve more robust tracking through significant appearance changes. Experimental results on challenging sequences confirm that our tracker outperforms related state-of-the-art trackers, achieving both a lower failure rate and better accuracy.
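The coupled constraint described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: all class and method names are hypothetical, and a single scalar "color" stands in for the paper's probabilistic color, shape, and motion models. It shows only the update cycle — unstable patches are pruned from the local layer, new patches are admitted only if they agree with the global model, and stable patches feed back into the global model.

```python
class CoupledLayerTracker:
    """Illustrative sketch (hypothetical names, not the authors' code) of the
    coupled-layer idea: a local layer of weighted patches and a global
    appearance model that constrain each other's updates."""

    def __init__(self, patches, global_color, stability_threshold=0.3):
        # Local layer: patch id -> stability weight in [0, 1].
        self.patches = dict(patches)
        # Global layer: a scalar mean color, standing in for the paper's
        # color/shape/motion models.
        self.global_color = global_color
        self.tau = stability_threshold

    def prune_unstable(self):
        # Remove patches whose stability weight fell below the threshold.
        self.patches = {p: w for p, w in self.patches.items() if w >= self.tau}

    def add_patch(self, patch, color, weight=0.5, color_tol=0.2):
        # The global layer constrains patch addition: accept a candidate
        # only if its color agrees with the global appearance model.
        if abs(color - self.global_color) <= color_tol:
            self.patches[patch] = weight
            return True
        return False

    def update_global(self, patch_colors, alpha=0.1):
        # Stable patches (high weight) feed back into the global model,
        # closing the coupling loop in the other direction.
        stable = [patch_colors[p] for p, w in self.patches.items()
                  if w >= 2 * self.tau and p in patch_colors]
        if stable:
            mean = sum(stable) / len(stable)
            self.global_color = (1 - alpha) * self.global_color + alpha * mean
```

The point of the sketch is the direction of the two constraints: the global layer gates what enters the local layer, while only the stable part of the local layer is allowed to adapt the global layer, which is what keeps both layers from drifting together during rapid appearance changes.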
Original language: English
Title of host publication: Computer Vision (ICCV), 2011 IEEE International Conference
Publisher: IEEE Computer Society Press
Number of pages: 8
ISBN (Print): 978-1457711015
Publication status: Published - Nov 2011
Event: ICCV 2011: 13th International Conference on Computer Vision - Barcelona, Spain
Duration: 6 Nov 2011 – 13 Nov 2011


