

Results of the proposed PSAM on four fine-grained face editing tasks: face makeup editing (1st row, left column), face aging (1st row, right column), illumination editing (2nd row), and expression editing (3rd row). Note that although intra-attribute differences are subtle, PSAM produces clearly distinguishable fine-grained results.

Abstract

Generative adversarial networks have achieved great success in image generation tasks. However, fine-grained face image generation, which aims at modifying face images to synthesize desired attributes, remains challenging. Previous methods directly feed the attribute information into the network, where it is processed through multiple convolution layers together with the input image; this does not exploit the connection between the input image and the target domain. To address this problem, we propose PSAM, a novel Personalized Spatial-aware Affine Modulation. It fully exploits the personalized information of the input image and uses a spatial-aware modulation module to synthesize photorealistic images. Extensive experiments demonstrate that the proposed PSAM is effective on four fine-grained face editing tasks: makeup, expression, illumination, and aging. In addition, we propose a new training strategy and a new makeup dataset. The code and dataset are available at http://xxx.

Dataset

Code to be released
Dataset to be released

Framework


Architecture of the proposed PSAM. An input image I and a target attribute a are concatenated to form the feature M, which is fed into a modulation module to obtain two condition tensors Γ and B that share the same shape as M. Affine modulation is then applied to linearly transform M with Γ and B and produce the output image. The generated images and real images are fed into the discriminator D, which classifies their source (real or fake) and the corresponding attribute.
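For concreteness, the sketch below shows one way the modulation step described in the caption could be implemented in PyTorch. The class name, layer choices, hidden width, and attribute dimensionality are our own assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class PSAMModulation(nn.Module):
    """Minimal sketch of the spatial-aware affine modulation described in the
    caption. Layer sizes and names are assumptions, not the authors' code."""

    def __init__(self, img_channels: int = 3, attr_dim: int = 10, hidden: int = 64):
        super().__init__()
        in_ch = img_channels + attr_dim  # channels of the fused feature M
        # Modulation module: predicts condition tensors Gamma and B,
        # each with the same shape as M.
        self.modulation = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * in_ch, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
        # Broadcast the target attribute a over spatial positions and
        # concatenate it with the input image I to form the fused feature M.
        b, _, h, w = image.shape
        attr_map = attr.view(b, -1, 1, 1).expand(-1, -1, h, w)
        m = torch.cat([image, attr_map], dim=1)
        gamma, beta = self.modulation(m).chunk(2, dim=1)
        # Spatial-aware affine modulation: per-pixel linear transform of M.
        return gamma * m + beta  # passed on to the rest of the generator
```

In this sketch a decoder (not shown) would map the modulated feature to the output image, and the discriminator D would be trained jointly with adversarial and attribute-classification losses, as indicated in the caption.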

Experiment


To fully evaluate the proposed method, we experiment on four face editing tasks: makeup (Makeup-10K dataset), illumination (Multi-PIE dataset), aging (YGA dataset), and expression (CFEE dataset). Both FID and intra-FID scores are used to evaluate the quality of the generated images; the lower the FID score, the better the quality of the generated data. As shown in the table, PSAM achieves the best performance on all datasets. See the paper for more details.
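As background, FID compares the mean and covariance of Inception features extracted from real and generated images, and intra-FID is commonly computed per attribute class and then averaged. The sketch below assumes the features have already been extracted (e.g., with an Inception-v3 network); the function names and the per-class grouping are illustrative assumptions about the standard protocol, not details taken from the paper.

```python
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """FID between two sets of Inception features (rows = samples). Lower is better."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_f = np.cov(fake_feats, rowvar=False)
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_f, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean))

def intra_fid(real_by_class: dict, fake_by_class: dict) -> float:
    """Intra-FID: FID computed separately per attribute class, then averaged."""
    scores = [fid(real_by_class[c], fake_by_class[c]) for c in real_by_class]
    return float(np.mean(scores))
```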

References