Efficient computation of strategic movements is essential for intelligently controlling virtual avatars in computer games and 3D virtual environments. Such a module is needed to control non-player characters (NPCs) that fight, play team sports, or move through a dense crowd. Reinforcement learning is one approach to achieving real-time optimal control. However, the huge state space of human interactions makes it difficult to apply existing learning methods to control avatars that have dense interactions with other characters. In this research, we propose a new methodology to efficiently plan the movements of an avatar interacting with another. We make use of the fact that the subspace of meaningful interactions is much smaller than the whole state space of two avatars. We efficiently collect samples by exploring the subspace where dense interactions between the avatars occur, favoring samples that have high connectivity with the other samples. From the collected samples, a finite state machine (FSM) called the Interaction Graph is composed. At run-time, we compute the optimal action of each avatar by min-max search or dynamic programming on the Interaction Graph. The methodology is applicable to controlling NPCs in fighting and ball-sports games.
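The run-time planning step described above can be illustrated with a small sketch of depth-limited min-max search over a graph-structured FSM. This is a hypothetical toy example, not the paper's actual data structures: the state names, edges, and reward values are invented for illustration, and the Interaction Graph in the paper encodes sampled joint states of two interacting avatars.

```python
# Hypothetical sketch: depth-limited min-max search over a tiny
# "interaction graph" (an FSM whose nodes are sampled joint states and
# whose edges are actions). All names and numbers are illustrative.

# graph[state] -> list of (action, next_state) edges
GRAPH = {
    "s0": [("punch", "s1"), ("step", "s2")],
    "s1": [("block", "s3"), ("kick", "s0")],
    "s2": [("punch", "s3"), ("step", "s0")],
    "s3": [("step", "s0")],
}

# Per-state score from the maximizing avatar's point of view
# (e.g. damage dealt minus damage received); made-up values.
REWARD = {"s0": 0.0, "s1": 1.0, "s2": 0.5, "s3": 2.0}


def minmax(state, depth, maximizing=True):
    """Best achievable score from `state` with `depth` steps of lookahead,
    assuming the two avatars pick actions in alternation."""
    if depth == 0 or not GRAPH.get(state):
        return REWARD[state]
    scores = [minmax(nxt, depth - 1, not maximizing)
              for _, nxt in GRAPH[state]]
    return max(scores) if maximizing else min(scores)


def best_action(state, depth=3):
    """Pick the action whose successor state has the best min-max value."""
    return max(GRAPH[state],
               key=lambda edge: minmax(edge[1], depth - 1, False))[0]
```

Because the graph is precomputed offline, a search like this (or a dynamic-programming sweep that caches values per node) stays cheap enough for real-time control.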
Hubert P. H. Shum, Taku Komura and Shuntaro Yamazaki, "Simulating Interactions of Avatars in High Dimensional State Space," in I3D '08: Proceedings of the 2008 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 131-138, Redwood City, California, ACM, February 2008.
Last updated on 17 September 2023