Existing GAN-based multi-view face synthesis methods rely heavily on 'creating' faces, and thus they struggle to reproduce faithful facial texture and fail to preserve identity under large-angle rotations. In this paper, we tackle this problem by dividing the challenging large-angle synthesis into a series of easy small-angle rotations, each guided by a face flow to maintain faithful facial details. In particular, we propose a Face Flow-guided Generative Adversarial Network (FFlowGAN) that is specifically trained for small-angle synthesis. The proposed network consists of two modules: a face flow module that computes a dense correspondence between the input and target faces, and a face synthesis module that uses this correspondence as strong guidance for emphasizing salient facial texture. We apply FFlowGAN multiple times to progressively synthesize different views, so that facial features are propagated toward the target view from the very beginning. All these executions are cascaded and trained end-to-end with a unified back-propagation, ensuring that each intermediate step contributes to the final result. Extensive experiments demonstrate that the proposed divide-and-conquer strategy is effective, and our method outperforms the state of the art on four benchmark datasets both qualitatively and quantitatively.
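The divide-and-conquer idea above can be illustrated with a minimal sketch: a large transformation is split into several small steps, and at each step a dense flow field tells the synthesizer where each target pixel should sample from in the current image. The code below is a toy stand-in, not the paper's method: `warp_with_flow` is a simple nearest-neighbour backward warp, and the "flow" is a hypothetical constant horizontal displacement rather than a learned face flow.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp an image with a dense flow field (nearest-neighbour).

    image: (H, W) array; flow: (H, W, 2) array of (dy, dx) offsets giving,
    for each target pixel, where to sample in the source image.
    Out-of-range samples are clamped to the border (edge replication).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def progressive_synthesis(image, total_shift, num_steps):
    """Toy analogue of the cascade: split one large transformation into
    num_steps small ones, warping by a small flow at each step."""
    out = image
    step = total_shift / num_steps
    for _ in range(num_steps):
        flow = np.zeros(out.shape + (2,))
        flow[..., 1] = step  # small per-step displacement (stand-in for face flow)
        out = warp_with_flow(out, flow)
    return out
```

In the actual FFlowGAN, each small step's flow is predicted by the face flow module and the warped features guide a synthesis network; here the cascade structure alone is shown.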
Bibliographical note
Funding Information:
Manuscript received June 23, 2020; revised January 14, 2021; accepted June 9, 2021. Date of publication June 28, 2021; date of current version July 2, 2021. This work was supported in part by the Key-Area Research and Development Program of Guangdong Province, China, under Grant 2020B010166003, Grant 2020B010165004, and Grant 2018B010107003; in part by the National Natural Science Foundation of China under Grant 61772206, Grant U1611461, Grant 61472145, and Grant 61972162; in part by the Guangdong International Science and Technology Cooperation Project under Grant 2021A0505030009; in part by the Guangdong Natural Science Foundation under Grant 2021A1515012625; in part by the Guangzhou Basic and Applied Research Project under Grant 202102021074; and in part by the China Computer Federation (CCF)-Tencent Open Research Fund. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Christophoros Nikou. (Corresponding authors: Xuemiao Xu; Shengfeng He.) Yangyang Xu, Keke Li, Cheng Xu, and Shengfeng He are with the School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China (e-mail: firstname.lastname@example.org; email@example.com; firstname.lastname@example.org; email@example.com).
© 1992-2012 IEEE.
- Multi-view face synthesis
- Pose-invariant face recognition
- Face reconstruction
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design