Denoising Diffusion Probabilistic Models for Styled Walking Synthesis

Edmund J. C. Findlay, Haozheng Zhang, Ziyi Chang and Hubert P. H. Shum
Proceedings of the 2022 ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG) Posters, 2022

Abstract

Generating realistic motions for digital humans is time-consuming for many graphics applications. Data-driven motion synthesis approaches have made solid progress in recent years through deep generative models. These methods produce high-quality motions but typically lack diversity in motion style. For the first time, we propose a framework using the denoising diffusion probabilistic model (DDPM) to synthesize styled human motions, integrating two tasks into one pipeline with increased style diversity compared with traditional motion synthesis methods. Experimental results show that our system can generate high-quality and diverse walking motions.
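The DDPM named in the abstract follows the standard diffusion formulation: clean data is progressively corrupted with Gaussian noise, and a network learns to reverse that corruption. The sketch below illustrates only the closed-form forward (noising) process on a motion-shaped array; the schedule values, array shapes, and function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch of the DDPM forward process (standard formulation).
# A "motion" here is assumed to be a (frames, features) array; the paper's
# actual representation and noise schedule are not specified on this page.

T = 1000                                # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # common linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # \bar{alpha}_t, shrinks toward 0 as t grows

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form: a noisy version of x0."""
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return x_t, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((60, 66))      # e.g. 60 frames, 22 joints x 3 (hypothetical)
x_t, eps = q_sample(x0, T - 1, rng)     # near t = T the sample is almost pure noise
```

A denoising network would then be trained to predict `eps` from `x_t` and `t`; sampling runs the learned reverse chain from pure noise to a synthesized motion.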

Citations

BibTeX

@inproceedings{edmund22style,
 author={Findlay, Edmund J. C. and Zhang, Haozheng and Chang, Ziyi and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2022 ACM SIGGRAPH Conference on Motion, Interaction and Games},
 series={MIG '22},
 title={Denoising Diffusion Probabilistic Models for Styled Walking Synthesis},
 year={2022},
 publisher={ACM},
 address={New York, NY, USA},
 location={Guanajuato, Mexico},
}

RIS

TY  - CONF
AU  - Findlay, Edmund J. C.
AU  - Zhang, Haozheng
AU  - Chang, Ziyi
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2022 ACM SIGGRAPH Conference on Motion, Interaction and Games
TI  - Denoising Diffusion Probabilistic Models for Styled Walking Synthesis
PY  - 2022
PB  - ACM
CY  - Guanajuato, Mexico
ER  - 

Plain Text

Edmund J. C. Findlay, Haozheng Zhang, Ziyi Chang and Hubert P. H. Shum, "Denoising Diffusion Probabilistic Models for Styled Walking Synthesis," in MIG '22: Proceedings of the 2022 ACM SIGGRAPH Conference on Motion, Interaction and Games, Guanajuato, Mexico, ACM, 2022.

Similar Research

Ziyi Chang, Edmund J. C. Findlay, Haozheng Zhang and Hubert P. H. Shum, "Unifying Human Motion Synthesis and Style Transfer with Denoising Diffusion Probabilistic Models", Proceedings of the 2023 International Conference on Computer Graphics Theory and Applications (GRAPP), 2023
He Wang, Edmond S. L. Ho, Hubert P. H. Shum and Zhanxing Zhu, "Spatio-Temporal Manifold Learning for Human Motions via Long-Horizon Modeling", IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021
Liuyang Zhou, Lifeng Shang, Hubert P. H. Shum and Howard Leung, "Human Motion Variation Synthesis with Multivariate Gaussian Processes", Computer Animation and Virtual Worlds (CAVW) - Proceedings of the 2014 International Conference on Computer Animation and Social Agents (CASA), 2014
Edmond S. L. Ho, Hubert P. H. Shum, He Wang and Li Yi, "Synthesizing Motion with Relative Emotion Strength", Proceedings of the 2017 ACM SIGGRAPH Asia Workshop on Data-Driven Animation Techniques (D2AT), 2017
Hubert P. H. Shum, Taku Komura and Pranjul Yadav, "Angular Momentum Guided Motion Concatenation", Computer Animation and Virtual Worlds (CAVW) - Proceedings of the 2009 International Conference on Computer Animation and Social Agents (CASA), 2009
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon, "Natural Preparation Behavior Synthesis", Computer Animation and Virtual Worlds (CAVW), 2013
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon, "Preparation Behaviour Synthesis with Reinforcement Learning", Proceedings of the 2013 International Conference on Computer Animation and Social Agents (CASA), 2013
Qianhui Men, Edmond S. L. Ho, Hubert P. H. Shum and Howard Leung, "A Quadruple Diffusion Convolutional Recurrent Network for Human Motion Prediction", IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2021

Last updated on 25 March 2024