Humans adjust their movements in advance to prepare for a forthcoming action, resulting in efficient and smooth transitions. However, traditional computer animation approaches such as motion graphs simply concatenate a series of actions without taking the following one into account. In this paper, we propose a new method to produce preparation behaviours using reinforcement learning. As an offline process, the system learns the optimal way to approach a target and prepare for interaction. A scalar value called the level of preparation is introduced, which represents the degree of transition from the initial action to the interacting action. To synthesize the movements of preparation, we propose a customized motion blending scheme based on the level of preparation, followed by an optimization framework that adjusts the posture to maintain balance. During run-time, the trained controller drives the character to move to a target with the appropriate level of preparation, resulting in human-like behaviour. We create scenes in which the character has to move through a complex environment and interact with objects, such as crawling under and jumping over obstacles while walking. The method is useful not only for computer animation, but also for real-time applications such as computer games, in which characters need to accomplish a series of tasks in a given environment.
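The core idea of blending by a scalar level of preparation can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual blending scheme: joint rotations are reduced to per-joint angles and interpolated linearly, with p = 0 reproducing the initial action's pose and p = 1 the fully prepared, interacting pose. The function and pose names below are invented for illustration, and the paper's balance-optimization step is not reproduced.

```python
def blend_pose(initial_pose, prepared_pose, p):
    """Interpolate per-joint angles by the level of preparation p in [0, 1].

    p = 0 returns the initial action's pose unchanged; p = 1 returns the
    pose fully prepared for the interacting action.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("level of preparation must be in [0, 1]")
    return {joint: (1.0 - p) * angle + p * prepared_pose[joint]
            for joint, angle in initial_pose.items()}

# Example: halfway between a walking pose and a crouched, crawl-ready pose
# (joint angles in degrees, chosen arbitrarily for illustration).
walk = {"hip": 10.0, "knee": 5.0, "spine": 0.0}
crawl_ready = {"hip": 60.0, "knee": 90.0, "spine": 45.0}
print(blend_pose(walk, crawl_ready, 0.5))
# → {'hip': 35.0, 'knee': 47.5, 'spine': 22.5}
```

In the paper, the run-time controller would supply p, increasing it as the character approaches the obstacle so the transition into the interacting action starts early rather than at the point of contact.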
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon,
"Natural Preparation Behavior Synthesis",
Computer Animation and Virtual Worlds (CAVW), 2013
Impact Factor: 1.020 · Citations: 2
Hubert P. H. Shum, Ludovic Hoyet, Edmond S. L. Ho, Taku Komura and Franck Multon, "Natural Preparation Behavior Synthesis," Computer Animation and Virtual Worlds, vol. 25, no. 5-6, pp. 531-542, John Wiley and Sons Ltd., 2013.
Last update: 04 August 2021