Automatic dance synthesis has become increasingly popular due to growing demand in computer games and animation. Existing research generates dance motions with little consideration for the context of the music; in reality, professional dancers choreograph according to the lyrics and musical features. In this research, we focus on a particular genre of dance known as sign dance, which combines gesture-based sign language with full-body dance motion. We propose a system that automatically generates sign dance from a piece of music and its corresponding sign gestures. The core of the system is a Sign Dance Model, trained by multiple regression analysis to represent the correlations between sign dance and sign gesture/music, together with a set of objective functions to evaluate the quality of the synthesized sign dance. Our system can be applied to music visualization, allowing people with hearing difficulties to understand and enjoy music.
Naoya Iwamoto, Hubert P. H. Shum, Wakana Asahina and Shigeo Morishima, "Automatic Sign Dance Synthesis from Gesture-based Sign Language," in MIG '19: Proceedings of the 2019 International Conference on Motion in Games, pp. 18:1-18:9, Newcastle upon Tyne, UK, ACM, Oct 2019.
Last updated on 26 October 2021.