PMnet: learning of disentangled pose and movement for unsupervised motion retargeting

Jongin Lim, Hyung Jin Chang, Jin Young Choi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



In this paper, we propose a deep learning framework for unsupervised motion retargeting. In contrast to existing methods, we decouple the motion retargeting process into two parts that explicitly learn the poses and the movement of a character. The first part retargets the pose of the character at each frame, while the second part retargets the character's overall movement. To realize these two processes, we develop a novel architecture, referred to as the pose-movement network (PMnet), which separately learns frame-by-frame poses and overall movement. At each frame, to follow the pose of the input character, PMnet first reproduces the input pose and then adjusts it to fit the target character's kinematic configuration. To handle the overall movement, a normalizing process is introduced that makes the overall movement invariant to the size of the character. Along with this normalizing process, PMnet regresses the overall movement to fit the target character. We then introduce a novel loss function that allows PMnet to properly retarget both the poses and the overall movement. The proposed method is verified via several self-comparisons and outperforms the state-of-the-art (SOTA) method, reducing the motion retargeting error (average joint position error) from 7.68 (SOTA) to 1.95 (ours).
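The size-normalization idea described in the abstract can be illustrated with a minimal sketch: divide the root-joint trajectory by a measure of the source character's size to obtain a size-invariant movement, then rescale by the target character's size. The function name and the use of character height as the size measure are illustrative assumptions for this sketch, not the paper's exact formulation (PMnet learns this mapping with a regression network rather than a fixed scaling).

```python
import numpy as np

def retarget_movement(root_traj, src_height, tgt_height):
    """Rescale a root-joint trajectory (T x 3) to a target character's size.

    A simplified, hand-crafted stand-in for PMnet's normalizing process:
    dividing by src_height yields a size-invariant movement, and
    multiplying by tgt_height maps it onto the target character.
    """
    normalized = root_traj / src_height   # size-invariant overall movement
    return normalized * tgt_height        # rescaled to the target character

# Toy example: a 1.8 m character's walk mapped onto a 0.9 m character;
# the resulting displacements are exactly half as large.
traj = np.array([[0.0, 0.0, 0.0],
                 [0.4, 0.0, 0.0],
                 [0.8, 0.0, 0.0]])
retargeted = retarget_movement(traj, src_height=1.8, tgt_height=0.9)
```

In the actual framework, this fixed geometric scaling is replaced by a learned regression, so the network can account for differences in gait and kinematic configuration beyond a uniform change of scale.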
Original language: English
Title of host publication: Proceedings of the 30th British Machine Vision Conference (BMVC 2019)
Publisher: British Machine Vision Association, BMVA
Number of pages: 13
Publication status: Published - 12 Sept 2019
Event: 30th British Machine Vision Conference (BMVC 2019) - Cardiff, United Kingdom
Duration: 9 Sept 2019 - 12 Sept 2019


Conference: 30th British Machine Vision Conference (BMVC 2019)
Country/Territory: United Kingdom

