Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention

Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020

H5-Index: 58; Citations: 12 (according to Google Scholar, 2023)

Abstract

Facial makeup style transfer is an extremely challenging sub-field of image-to-image translation. Due to this difficulty, state-of-the-art results mostly rely on face parsing algorithms, which segment a face into parts in order to extract makeup features easily. However, such algorithms only work well on high-definition images where facial features can be accurately extracted. Faces in many real-world photos, such as those including a large background or multiple people, are typically of low resolution, which considerably hinders state-of-the-art algorithms. In this paper, we propose an end-to-end holistic approach to effectively transfer makeup styles between two low-resolution images. The idea is built upon a novel weighted multi-scale spatial attention module, which identifies salient pixel regions of low-resolution images at multiple scales, and uses channel attention to determine the most effective attention map. This design provides two benefits: low-resolution images are usually blurry to different extents, so a multi-scale architecture can select the most effective convolution kernel size to implement spatial attention; and makeup is applied at both a macro level (foundation, fake tan) and a micro level (eyeliner, lipstick), so different scales can excel at extracting different makeup features. We develop an Augmented CycleGAN network that embeds our attention modules at selected layers to transfer makeup most effectively. We test our system on the FBD data set, which consists of many low-resolution facial images, and demonstrate that it outperforms state-of-the-art methods, particularly when transferring makeup for blurry and partially occluded images.
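The idea sketched in the abstract can be illustrated in a few lines of NumPy. This is a minimal, hypothetical sketch and not the paper's implementation: the box filters stand in for learned convolutions of different kernel sizes, and the global-mean descriptors with a softmax stand in for the learned channel attention that weights the per-scale spatial attention maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def box_filter(x, k):
    # Naive same-padded k-by-k box filter over a 2-D map; a stand-in
    # for a learned spatial-attention convolution of kernel size k.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def weighted_multiscale_attention(feat, scales=(3, 5, 7)):
    """feat: (C, H, W) feature map.

    1. Channel-pool to one spatial map.
    2. Build a spatial attention map per scale (larger kernels suit
       macro-level makeup such as foundation; smaller ones suit
       micro-level makeup such as eyeliner).
    3. Weight the maps with softmax scores over global descriptors,
       a stand-in for the channel attention that picks the most
       effective scale.
    4. Modulate the features with the combined attention map.
    """
    pooled = feat.mean(axis=0)                          # (H, W)
    maps = [sigmoid(box_filter(pooled, k)) for k in scales]
    desc = np.array([m.mean() for m in maps])           # one score per scale
    w = np.exp(desc) / np.exp(desc).sum()               # softmax weights
    attn = sum(wi * m for wi, m in zip(w, maps))        # (H, W), values in (0, 1)
    return feat * attn[None, :, :], w
```

A trainable version would replace the box filters with convolution layers and learn the scale weights end-to-end inside the Augmented CycleGAN generator.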

Citations

BibTeX

@inproceedings{organisciak20makeup,
 author={Organisciak, Daniel and Ho, Edmond S. L. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2020 International Conference on Pattern Recognition},
 series={ICPR '20},
 title={Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention},
 year={2020},
 month={1},
 pages={6011--6018},
 numpages={8},
 doi={10.1109/ICPR48806.2021.9412604},
 location={Milan, Italy},
}

RIS

TY  - CONF
AU  - Organisciak, Daniel
AU  - Ho, Edmond S. L.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2020 International Conference on Pattern Recognition
TI  - Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention
PY  - 2020
Y1  - 1 2020
SP  - 6011
EP  - 6018
DO  - 10.1109/ICPR48806.2021.9412604
ER  - 

Plain Text

Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum, "Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention," in ICPR '20: Proceedings of the 2020 International Conference on Pattern Recognition, pp. 6011-6018, Milan, Italy, Jan 2020.

Supporting Grants

Northumbria University

Postgraduate Research Scholarship: £65,000, Principal Investigator
Received from Faculty of Engineering and Environment, Northumbria University, UK, 2018-2021
Project Page

Similar Research

Mridula Vijendran, Frederick W. B. Li and Hubert P. H. Shum, "Tackling Data Bias in Painting Classification with Style Transfer", Proceedings of the 2023 International Conference on Computer Vision Theory and Applications (VISAPP), 2023
Mridula Vijendran, Frederick W. B. Li, Jingjing Deng and Hubert P. H. Shum, "ST-SACLF: Style Transfer Informed Self-Attention Classifier for Bias-Aware Painting Classification", Communications in Computer and Information Science (CCIS), 2024
Shanfeng Hu, Hubert P. H. Shum, Xiaohui Liang, Frederick W. B. Li and Nauman Aslam, "Facial Reshaping Operator for Controllable Face Beautification", Expert Systems with Applications (ESWA), 2021
Andreea Stef, Kaveen Perera, Hubert P. H. Shum and Edmond S. L. Ho, "Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control", Proceedings of the 2018 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2018


Last updated on 4 June 2024