

Domain-Specific Face Synthesis for Video-based Face Recognition

Fania Mokhayeri
LIVIA Lab, École de technologie supérieure

December 5, 2019 at 2:30 PM
McConnell Engineering Room 437

Designing a robust system for video-based face recognition in surveillance applications has been a long-standing challenge due to the visual domain shift between faces from the source domain and those from the target domain. We present three data augmentation techniques that generate synthetic face images and exploit the variational information of a generic set to overcome this challenge.

As a first approach, a domain-specific face synthesis method is proposed that generates a representative set of face images under the capture conditions of the target domain by integrating an image-based face relighting technique inside a 3D morphable model. The generated synthetic faces are employed to form a cross-domain dictionary that accounts for structured sparsity, where each dictionary block combines the original and synthetic faces of one individual.
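The cross-domain dictionary idea can be illustrated with a small sketch (an illustration under stated assumptions, not the talk's actual implementation): each individual contributes a block of columns holding both an original and a synthetic face, a group-lasso penalty enforces block-structured sparsity, and a probe is classified by the block with the smallest reconstruction residual. The dimensions, solver, and regularization weight below are illustrative choices.

```python
import numpy as np

def block_sparse_code(D, y, blocks, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D x||^2 + lam * sum_b ||x_b||_2
    by proximal gradient descent with group soft-thresholding."""
    x = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L      # gradient step
        for b in blocks:                   # one block per individual
            n = np.linalg.norm(z[b])
            if n > 0:
                z[b] *= max(0.0, 1 - lam / (L * n))  # group shrinkage
        x = z
    return x

# Toy cross-domain dictionary: 2 individuals; each block pairs an
# "original" face with a "synthetic" one (columns are 6-D features).
rng = np.random.default_rng(0)
id1 = rng.normal(size=(6, 1))
id2 = rng.normal(size=(6, 1))
D = np.hstack([id1, id1 + 0.1 * rng.normal(size=(6, 1)),    # block 0
               id2, id2 + 0.1 * rng.normal(size=(6, 1))])   # block 1
D /= np.linalg.norm(D, axis=0)
blocks = [slice(0, 2), slice(2, 4)]
probe = D[:, 0] + 0.05 * rng.normal(size=6)  # probe from individual 1

x = block_sparse_code(D, probe, blocks)
# Residual-based classification: the block that best reconstructs the probe.
resid = [np.linalg.norm(probe - D[:, b] @ x[b]) for b in blocks]
print(int(np.argmin(resid)))               # index of the predicted individual
```

In this toy run the probe is built from block 0, so the group penalty concentrates the code on that block and the residual test recovers the correct identity.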

Another photorealistic face synthesis approach, based on adversarial training, is presented that employs a generative adversarial network conditioned on images sampled from a 3D morphable model. An additional adversarial game, introduced as a third player, provides control over the face generation process. In this way, a set of realistic, identity-preserving synthetic images is generated and used as additional design data within a Siamese network to boost face recognition performance.
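To sketch how synthetic images can serve as additional design data for a Siamese network (the contrastive loss, margin, and toy embeddings below are illustrative assumptions, not the talk's exact formulation): a genuine pair formed by an original still and a synthetic face of the same identity is pulled together in embedding space, while impostor pairs inside the margin are pushed apart.

```python
import numpy as np

def contrastive_loss(z1, z2, same_identity, margin=1.0):
    """Contrastive loss on a pair of Siamese embeddings: pull genuine
    pairs together, push impostor pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)
    if same_identity:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Toy embeddings (hypothetical): an original still, a synthetic face of
# the same identity, and a face of a different identity.
orig = np.array([1.0, 0.0])
synth_same = np.array([0.9, 0.1])   # synthetic image, same identity
other = np.array([0.5, 0.5])        # different identity, inside the margin

genuine = contrastive_loss(orig, synth_same, same_identity=True)
impostor = contrastive_loss(orig, other, same_identity=False)
print(genuine, impostor)
```

Adding synthetic faces enlarges the pool of genuine pairs per identity, which is the sense in which the generated images act as extra design data for training.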

The third approach presents a paired sparse representation model that reconstructs a probe image jointly with a variational dictionary designed from the generic set and a gallery dictionary augmented with synthetic images. The augmented gallery dictionary is encouraged to share the same sparsity pattern as the variational dictionary by solving a simultaneous sparsity-based optimization problem.
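A minimal sketch of the simultaneous-sparsity idea, under illustrative assumptions about dimensions, solver, and regularization weight: the i-th gallery coefficient and the i-th variational coefficient are tied into one group, so a group penalty activates or zeroes them together, which forces both codes onto the same sparsity pattern.

```python
import numpy as np

def paired_sparse_code(Dg, Dv, y, lam=0.1, n_iter=300):
    """Solve min_{a,b} 0.5*||y - Dg a - Dv b||^2 + lam * sum_i ||(a_i, b_i)||_2.
    Grouping (a_i, b_i) makes the gallery and variational codes share
    one sparsity pattern, as in a simultaneous sparse model."""
    D = np.hstack([Dg, Dv])
    k = Dg.shape[1]
    x = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L      # gradient step
        for i in range(k):                 # group = (a_i, b_i)
            g = np.array([z[i], z[k + i]])
            n = np.linalg.norm(g)
            s = max(0.0, 1 - lam / (L * n)) if n > 0 else 0.0
            z[i], z[k + i] = s * g         # joint shrinkage
        x = z
    return x[:k], x[k:]

# Toy data: 3 gallery atoms paired with 3 variational atoms (8-D features).
rng = np.random.default_rng(1)
Dg = rng.normal(size=(8, 3)); Dg /= np.linalg.norm(Dg, axis=0)
Dv = rng.normal(size=(8, 3)); Dv /= np.linalg.norm(Dv, axis=0)
y = Dg[:, 0] + 0.5 * Dv[:, 0]              # probe built from paired atom 0

a, b = paired_sparse_code(Dg, Dv, y, lam=0.05)
group_norms = np.hypot(a, b)               # joint energy per paired atom
print(int(np.argmax(group_norms)))         # most active paired atom
```

Because the probe is synthesized from paired atom 0, the joint penalty concentrates both codes on that atom, illustrating how the shared support couples the two dictionaries.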