Research Publications - Face Modelling



We research different representations of human faces ranging from image features to facial landmarks for applications in healthcare, image processing and artwork analysis.

Interested in our research? Consider joining us.


2024

ST-SACLF: Style Transfer Informed Self-Attention Classifier for Bias-Aware Painting Classification
Book Chapter: Communications in Computer and Information Science (CCIS), 2024
Mridula Vijendran, Frederick W. B. Li, Jingjing Deng and Hubert P. H. Shum
Webpage Cite This Paper GitHub

2023

INCLG: Inpainting for Non-Cleft Lip Generation with a Multi-Task Image Processing Network  Impact Factor: 1.3
Software Impacts (SIMPAC), 2023
Shuang Chen, Amir Atapour-Abarghouei, Edmond S. L. Ho and Hubert P. H. Shum
Webpage Cite This Paper CodeOcean GitHub
Tackling Data Bias in Painting Classification with Style Transfer
Proceedings of the 2023 International Conference on Computer Vision Theory and Applications (VISAPP), 2023
Mridula Vijendran, Frederick W. B. Li and Hubert P. H. Shum
Webpage Cite This Paper GitHub

2022

A Feasibility Study on Image Inpainting for Non-Cleft Lip Generation from Patients with Cleft Lip  Oral Presentation
Proceedings of the 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), 2022
Shuang Chen, Amir Atapour-Abarghouei, Jane Kerby, Edmond S. L. Ho, David C. G. Sainsbury, Sophie Butterworth and Hubert P. H. Shum
Webpage Cite This Paper GitHub

2021

LMZMPM: Local Modified Zernike Moment Per-Unit Mass for Robust Human Face Recognition  Impact Factor: 6.3 Top 25% Journal in Engineering, Electrical & Electronic Citation: 18#
IEEE Transactions on Information Forensics and Security (TIFS), 2021
Arindam Kar, Sourav Pramanik, Arghya Chakraborty, Debotosh Bhattacharjee, Edmond S. L. Ho and Hubert P. H. Shum
Webpage Cite This Paper
Facial Reshaping Operator for Controllable Face Beautification  Impact Factor: 7.5 Top 25% Journal in Computer Science, Artificial Intelligence Citation: 10#
Expert Systems with Applications (ESWA), 2021
Shanfeng Hu, Hubert P. H. Shum, Xiaohui Liang, Frederick W. B. Li and Nauman Aslam
Webpage Cite This Paper Supplementary Material

2020

Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention  H5-Index: 56# Citation: 13#
Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020
Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
Webpage Cite This Paper Supplementary Material GitHub YouTube Presentation Slides

2019

Multiview Discriminative Marginal Metric Learning for Makeup Face Verification  Impact Factor: 5.5 Top 25% Journal in Computer Science, Artificial Intelligence Citation: 20#
Neurocomputing, 2019
Lining Zhang, Hubert P. H. Shum, Li Liu, Guodong Guo and Ling Shao
Webpage Cite This Paper
A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength
Computer Animation and Virtual Worlds (CAVW), 2019
Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho
Webpage Cite This Paper YouTube

2018

Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control
Proceedings of the 2018 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2018
Andreea Stef, Kaveen Perera, Hubert P. H. Shum and Edmond S. L. Ho
Webpage Cite This Paper
Patient Assessment Assistant Using Augmented Reality
Proceedings of the 2018 UK-China Newton Fund Researcher Links Workshop Health and Well-being Through VR and AR, 2018
Edmond S. L. Ho, Kevin D. McCay, Hubert P. H. Shum, Longzhi Yang, David Sainsbury and Peter Hodgkinson
Webpage Cite This Paper

2016

Depth Sensor Based Facial and Body Animation Control
Book Chapter: Handbook of Human Motion, 2016
Yijun Shen, Jingtian Zhang, Longzhi Yang and Hubert P. H. Shum
Webpage Cite This Paper


† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2024

Last updated on 6 October 2024