
A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength

Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho
Computer Animation and Virtual Worlds (CAVW), 2019

Impact Factor: 1.1


Abstract

Emotion is considered a core element in performances [1]. In computer animation, body motions and facial expressions are two popular mediums through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with intuitive control over different levels of emotion strength, which is difficult to model effectively. In this work, we explore a common model for representing emotion that serves both body motion and facial expression synthesis. Unlike previous work, which encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called emotion strength, and present a data-driven approach that uses it to synthesize motions with fine control over emotion. Rather than interpolating motion features to synthesize new motions, as in existing work, our method explicitly learns a model that maps low-level motion features to the emotion strength. Since the motion synthesis model is learned during the training stage, the computation time required to synthesize motions at run time is very low. We further demonstrate the generality of the proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools and virtual reality applications, as well as offline applications such as animation and movie production.
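The central idea above, learning a mapping from low-level motion features to a continuous emotion strength offline, so that run-time prediction is cheap, can be sketched as a least-squares regression. Everything here is illustrative: the feature vectors, their dimensionality, and the linear model are stand-in assumptions, not the paper's actual features or learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a low-level motion feature
# vector (e.g. derived from joint trajectories), and y holds the
# annotated relative emotion strength for that motion clip.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)                      # unknown "ground truth" mapping
y = X @ true_w + 0.05 * rng.normal(size=200)     # annotations with mild noise

# Offline training stage: fit a linear map from features to emotion
# strength by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Run time: predicting the emotion strength of a new motion is a single
# dot product, so the per-query cost is negligible compared with training.
new_motion = rng.normal(size=8)
predicted_strength = new_motion @ w
```

Because the model is fit once offline, an interactive application only pays the cost of the dot product per query, which matches the abstract's claim of very low run-time computation.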

Downloads

YouTube

Citations

BibTeX

@article{chan19generic,
 author={Chan, Jacky C. P. and Shum, Hubert P. H. and Wang, He and Yi, Li and Wei, Wei and Ho, Edmond S. L.},
 journal={Computer Animation and Virtual Worlds},
 title={A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength},
 year={2019},
 volume={30},
 number={6},
 pages={e1871},
 numpages={20},
 doi={10.1002/cav.1871},
 publisher={John Wiley and Sons Ltd.},
 address={Chichester, UK},
}

RIS

TY  - JOUR
AU  - Chan, Jacky C. P.
AU  - Shum, Hubert P. H.
AU  - Wang, He
AU  - Yi, Li
AU  - Wei, Wei
AU  - Ho, Edmond S. L.
T2  - Computer Animation and Virtual Worlds
TI  - A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength
PY  - 2019
VL  - 30
IS  - 6
SP  - e1871
EP  - e1871
DO  - 10.1002/cav.1871
PB  - John Wiley and Sons Ltd.
ER  - 

Plain Text

Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho, "A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength," Computer Animation and Virtual Worlds, vol. 30, no. 6, pp. e1871, John Wiley and Sons Ltd., 2019.

Supporting Grants

Similar Research

Edmond S. L. Ho, Hubert P. H. Shum, He Wang and Li Yi, "Synthesizing Motion with Relative Emotion Strength", Proceedings of the 2017 ACM SIGGRAPH Asia Workshop on Data-Driven Animation Techniques (D2AT), 2017
Andreea Stef, Kaveen Perera, Hubert P. H. Shum and Edmond S. L. Ho, "Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control", Proceedings of the 2018 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2018
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon, "Preparation Behaviour Synthesis with Reinforcement Learning", Proceedings of the 2013 International Conference on Computer Animation and Social Agents (CASA), 2013
He Wang, Edmond S. L. Ho, Hubert P. H. Shum and Zhanxing Zhu, "Spatio-Temporal Manifold Learning for Human Motions via Long-Horizon Modeling", IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021
Liuyang Zhou, Lifeng Shang, Hubert P. H. Shum and Howard Leung, "Human Motion Variation Synthesis with Multivariate Gaussian Processes", Computer Animation and Virtual Worlds (CAVW) - Proceedings of the 2014 International Conference on Computer Animation and Social Agents (CASA), 2014
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon, "Natural Preparation Behavior Synthesis", Computer Animation and Virtual Worlds (CAVW), 2013

Last updated on 24 February 2024