Constructing virtual scenes that incorporate human and object interaction has traditionally been a time-consuming process in computer animation, whereby the motion of an actor is first recorded and any objects used in the scene are then intricately added by an animator. The Microsoft Kinect utilizes a synchronized RGBD stream to provide markerless skeletal tracking of humans, enabling efficient motion capture; however, the problem of capturing environment objects remains unsolved. In this paper, we propose a new framework to segment and track three major types of environment objects using Kinect, namely background planes, stationary objects and dynamic objects. We demonstrate that the motion of an actor and their surrounding environment can be obtained at the same time, saving considerable effort for animators. Our proposed system is best suited to applications involving extensive human-object interactions, such as console games and animation design.
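As a rough illustration of the first stage described above, segmenting a background plane from an RGBD point cloud is commonly done with RANSAC plane fitting. The sketch below is a minimal NumPy implementation on a synthetic point cloud; it is an assumed, simplified stand-in for illustration, not the paper's actual algorithm, and all names (`ransac_plane`, thresholds, the synthetic data) are hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane to a 3-D point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0 with the
    most inliers. A toy stand-in for a background-plane segmentation
    step; a real pipeline would then remove the inliers and cluster the
    remaining points into stationary and dynamic objects.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        # Hypothesize a plane from 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        v1, v2 = sample[1] - sample[0], sample[2] - sample[0]
        normal = np.cross(v1, v2)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within `threshold` of the hypothesized plane.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Synthetic depth cloud: a noisy floor plane z = 0 plus scattered objects.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                         rng.normal(0, 0.005, 500)])
objects = rng.uniform(-1, 1, (100, 3)) * [1, 1, 0.5] + [0, 0, 0.5]
cloud = np.vstack([floor, objects])

normal, d, mask = ransac_plane(cloud, rng=1)
print(mask.sum())  # most of the 500 floor points are recovered
```

In a depth-camera pipeline this step would typically run per frame; points not explained by any detected plane are carried forward for object segmentation and tracking.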
Kevin Mackay, Hubert P. H. Shum and Taku Komura, "Environment Capturing with Microsoft Kinect," in SKIMA '12: Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications, Dhaka, Bangladesh, Dec 2012.
Last updated on 24 February 2024