DiffPose: SpatioTemporal Diffusion Model for Video-Based Human Pose Estimation

Runyang Feng, Yixing Gao*, Tze Ho Elden Tse, Xueqing Ma, Hyung Jin Chang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Denoising diffusion probabilistic models, initially proposed for realistic image generation, have recently shown success in various perception tasks (e.g., object detection and image segmentation) and are increasingly gaining attention in computer vision. However, extending such models to multi-frame human pose estimation is non-trivial due to the additional temporal dimension in videos. More importantly, learning representations that focus on keypoint regions is crucial for accurate localization of human joints, yet it remains unclear how diffusion-based methods can be adapted to achieve this objective. In this paper, we present DiffPose, a novel diffusion architecture that formulates video-based human pose estimation as a conditional heatmap generation problem. First, to better leverage temporal information, we propose a SpatioTemporal Representation Learner that aggregates visual evidence across frames and uses the resulting features as a condition in each denoising step. In addition, we present a mechanism called Lookup-based MultiScale Feature Interaction that determines the correlations between local joints and global contexts across multiple scales, producing delicate representations that focus on keypoint regions. Altogether, by extending diffusion models, we show two unique characteristics of DiffPose for the pose estimation task: (i) the ability to combine multiple sets of pose estimates to improve prediction accuracy, particularly for challenging joints, and (ii) the ability to adjust the number of iterative steps for feature refinement without retraining the model. DiffPose sets new state-of-the-art results on three benchmarks: PoseTrack2017, PoseTrack2018, and PoseTrack21.
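The two properties highlighted in the abstract can be illustrated schematically. The sketch below is a hypothetical toy illustration, not the paper's implementation: `denoise_step` stands in for the learned conditional denoiser, the conditioning array stands in for the spatiotemporal features, and the blending schedule is invented for illustration. It shows (i) averaging multiple independently sampled heatmaps and (ii) choosing the number of reverse-diffusion steps freely at inference time.

```python
import numpy as np

def denoise_step(heatmap, condition, t, total_steps):
    """One toy reverse-diffusion step: blend the noisy heatmap toward the
    conditioning features. A stand-in for the learned denoising network."""
    alpha = (total_steps - t) / total_steps  # trust the condition more as t -> 0
    return (1 - alpha) * heatmap + alpha * condition

def sample_pose_heatmap(condition, steps, rng):
    """Run `steps` reverse iterations starting from pure noise.
    `steps` can be changed at inference time without retraining (property ii)."""
    h = rng.standard_normal(condition.shape)
    for t in range(steps, 0, -1):
        h = denoise_step(h, condition, t, steps)
    return h

def ensemble_estimate(condition, n_samples, steps, seed=0):
    """Average several independently sampled heatmaps to stabilize the
    prediction for ambiguous joints (property i)."""
    rng = np.random.default_rng(seed)
    samples = [sample_pose_heatmap(condition, steps, rng)
               for _ in range(n_samples)]
    return np.mean(samples, axis=0)
```

Under this toy schedule the residual noise shrinks multiplicatively with each step, so the argmax of the averaged heatmap recovers the conditioning peak; in the actual method the denoiser is a trained network and the condition comes from the SpatioTemporal Representation Learner.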
Original language: English
Title of host publication: 2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: IEEE
Pages: 14815-14826
Number of pages: 12
ISBN (Electronic): 9798350307184
ISBN (Print): 9798350307191 (PoD)
DOIs
Publication status: Published - 15 Jan 2024
Event: 2023 International Conference on Computer Vision - Paris Convention Centre, Paris, France
Duration: 2 Oct 2023 - 6 Oct 2023

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Publisher: IEEE
ISSN (Print): 1550-5499
ISSN (Electronic): 2380-7504

Conference

Conference: 2023 International Conference on Computer Vision
Abbreviated title: ICCV 2023
Country/Territory: France
City: Paris
Period: 2/10/23 - 6/10/23

Bibliographical note

Acknowledgments:
This work is supported in part by the National Natural Science Foundation of China under Grant No. 62203184 and the International Cooperation Project under Grant No. 20220402009GH. This work is also supported in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-2020-0-01789), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Keywords

  • Computer vision
  • Visualization
  • Computational modeling
  • Pose estimation
  • Noise reduction
  • Predictive models
  • Probabilistic logic

