We address the problem of video representation learning without human-annotated labels. While prior work addresses the problem by designing novel self-supervised tasks on video data, the learned features are largely frame-based and thus ill-suited to many video analytics tasks where spatio-temporal features prevail. In this paper we propose a novel self-supervised approach to learn spatio-temporal features for video representation. Inspired by the success of two-stream approaches in video classification, we propose to learn visual features by regressing both motion and appearance statistics along spatial and temporal dimensions, given only the input video data. Specifically, we extract statistical concepts (fast-motion region and the corresponding dominant direction, spatio-temporal color diversity, dominant color, etc.) from simple patterns in both the spatial and temporal domains. Unlike prior puzzle-style pretext tasks that are hard even for humans to solve, the proposed statistics are consistent with inherent human visual habits and therefore easy to learn. We conduct extensive experiments with C3D to validate the effectiveness of our proposed approach. The experiments show that our approach can significantly improve the performance of C3D when applied to video classification tasks. Code is available at https://github.com/laura-wang/video_repres_mas.
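To make the "fast-motion region and dominant direction" labels concrete, the sketch below derives such a pseudo-label from raw frames alone. This is a minimal illustration, not the authors' implementation: the block grid, the frame-difference motion proxy (standing in for optical flow), and the 8-way direction quantization are all illustrative assumptions.

```python
import numpy as np

def motion_statistics(frames, grid=4):
    """Toy pseudo-labels in the spirit of the paper's motion statistics.

    frames: (T, H, W) grayscale clip. Returns (fast_block, direction):
    the index of the grid cell with the largest accumulated motion
    energy, and a coarse dominant motion direction in {0..7}
    (0 = rightward, counter-clockwise). Frame differences are used
    here as a cheap stand-in for optical flow.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    energy = diffs.sum(axis=0)                                  # (H, W)
    h, w = energy.shape
    bh, bw = h // grid, w // grid
    # Sum motion energy inside each grid x grid block; pick the largest.
    blocks = energy[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw)
    fast_block = int(np.argmax(blocks.sum(axis=(1, 3))))

    def centroid(m):
        total = m.sum()
        if total == 0:
            return np.zeros(2)
        ys, xs = np.mgrid[0:m.shape[0], 0:m.shape[1]]
        return np.array([(ys * m).sum() / total, (xs * m).sum() / total])

    # Dominant direction: shift of the motion centroid from the first
    # to the last difference map, quantized to 8 directions.
    shift = centroid(diffs[-1]) - centroid(diffs[0])
    angle = np.arctan2(-shift[0], shift[1])  # image y grows downward
    direction = int(np.round(angle / (np.pi / 4))) % 8
    return fast_block, direction

# Synthetic clip: a bright square in the top-left corner moving right.
frames = np.zeros((4, 16, 16), dtype=np.float32)
for t in range(4):
    frames[t, 0:4, t:t + 4] = 1.0

fast_block, direction = motion_statistics(frames)  # block 0 (top-left), direction 0 (rightward)
```

A network trained on such regression targets needs no human labels; the supervisory signal is computed directly from the video, which is the core idea of the pretext task.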
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Publisher: IEEE Computer Society Press
Number of pages: 10
Publication status: Published - Jun 2019
Event: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019 - Long Beach, United States
Duration: 16 Jun 2019 → 20 Jun 2019
Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Period: 16/06/19 → 20/06/19
Bibliographical note (Funding Information):
Acknowledgements: This work is supported in part by the Natural Science Foundation of China under Grants U1613218 and 61702194, in part by the Hong Kong ITC under Grant ITS/448/16FP, and in part by the VC Fund 4930745 of the CUHK T Stone Robotics Institute. Jianbo Jiao is supported by the EPSRC Programme Grant See-bibyte EP/M013774/1.
© 2019 IEEE.
Keywords
- Representation Learning
- Video Analytics
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition