Improving Posture Classification Accuracy for Depth Sensor-Based Human Activity Monitoring in Smart Environments

Edmond S. L. Ho, Jacky C. P. Chan, Donald C. K. Chan, Hubert P. H. Shum, Yiu-ming Cheung and P. C. Yuen
Computer Vision and Image Understanding (CVIU), 2016

Impact Factor: 4.5 | Citations: 90 (according to Google Scholar, 2023)

Abstract

Smart environments and monitoring systems are popular research areas nowadays due to their potential to enhance the quality of life. Applications such as human behaviour analysis and workspace ergonomics monitoring can be automated, thereby improving the well-being of individuals at minimal running cost. The central problem in smart environments is understanding what the user is doing in order to provide the appropriate support. While it was difficult to obtain full-body movement information in the past, depth camera-based motion sensing technology such as Kinect has made it possible to obtain 3D postures without a complex setup. This has fuelled a large number of research projects applying Kinect in smart environments. The common bottleneck of this research is the high number of errors in the detected joint positions, which result in inaccurate analysis and false alarms. In this paper, we propose a framework that accurately classifies the nature of the 3D postures obtained by Kinect using a max-margin classifier. Different from previous work in the area, we integrate information about the reliability of the tracked joints in order to enhance the accuracy and robustness of our framework. As a result, apart from generally classifying activities across different movement contexts, our proposed method can distinguish the subtle differences between correctly and incorrectly performed movements within the same context. We demonstrate how our framework can be applied to evaluate the user's posture and identify postures that may result in musculoskeletal disorders. Such a system can be used in workplaces such as offices and factories to reduce the risk of injury. Experimental results show that our method consistently outperforms existing algorithms in both activity classification and posture healthiness classification. Due to the low cost and easy deployment of depth camera-based motion sensors, our framework can be applied widely in homes and offices to facilitate smart environments.
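The core idea of weighting tracked joints by their reliability before max-margin classification can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes scikit-learn, synthetic posture data, and a simple scheme in which each joint's 3D coordinates are scaled by a per-joint confidence value (as could be derived, e.g., from Kinect's tracked/inferred joint states) before training an SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_SAMPLES, N_JOINTS = 50, 20  # Kinect v1 tracks 20 skeletal joints

# Synthetic 3D joint positions for two posture classes
# (stand-ins for "healthy" vs "unhealthy" postures).
healthy = rng.normal(0.0, 0.1, size=(N_SAMPLES, N_JOINTS, 3))
unhealthy = rng.normal(0.5, 0.1, size=(N_SAMPLES, N_JOINTS, 3))
poses = np.vstack([healthy, unhealthy])
labels = np.array([0] * N_SAMPLES + [1] * N_SAMPLES)

# Hypothetical per-joint tracking reliability in [0.5, 1.0];
# in practice this would come from the sensor's tracking state.
reliability = rng.uniform(0.5, 1.0, size=(2 * N_SAMPLES, N_JOINTS))

# Down-weight unreliable joints so tracking errors contribute
# less to the decision boundary of the max-margin classifier.
features = (poses * reliability[:, :, None]).reshape(2 * N_SAMPLES, -1)

clf = SVC(kernel="rbf", C=1.0).fit(features, labels)
accuracy = clf.score(features, labels)
print(f"training accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the classifier fits easily; the point of the sketch is only the pipeline shape: per-joint confidences modulate the feature vector before the max-margin stage, rather than being discarded.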

Citations

BibTeX

@article{ho16improving,
 author={Ho, Edmond S. L. and Chan, Jacky C. P. and Chan, Donald C. K. and Shum, Hubert P. H. and Cheung, Yiu-ming and Yuen, P. C.},
 journal={Computer Vision and Image Understanding},
 title={Improving Posture Classification Accuracy for Depth Sensor-Based Human Activity Monitoring in Smart Environments},
 year={2016},
 month={7},
 volume={148},
 pages={97--110},
 numpages={14},
 doi={10.1016/j.cviu.2015.12.011},
 issn={1077-3142},
 publisher={Elsevier},
}

RIS

TY  - JOUR
AU  - Ho, Edmond S. L.
AU  - Chan, Jacky C. P.
AU  - Chan, Donald C. K.
AU  - Shum, Hubert P. H.
AU  - Cheung, Yiu-ming
AU  - Yuen, P. C.
T2  - Computer Vision and Image Understanding
TI  - Improving Posture Classification Accuracy for Depth Sensor-Based Human Activity Monitoring in Smart Environments
PY  - 2016
Y1  - 7 2016
VL  - 148
SP  - 97
EP  - 110
DO  - 10.1016/j.cviu.2015.12.011
SN  - 1077-3142
PB  - Elsevier
ER  - 

Plain Text

Edmond S. L. Ho, Jacky C. P. Chan, Donald C. K. Chan, Hubert P. H. Shum, Yiu-ming Cheung and P. C. Yuen, "Improving Posture Classification Accuracy for Depth Sensor-Based Human Activity Monitoring in Smart Environments," Computer Vision and Image Understanding, vol. 148, pp. 97-110, Elsevier, Jul 2016.

Supporting Grants

The Engineering and Physical Sciences Research Council
Interaction-based Human Motion Analysis
EPSRC First Grant Scheme (Ref: EP/M002632/1): £123,819, Principal Investigator
Received from The Engineering and Physical Sciences Research Council, UK, 2014-2016
Project Page

Similar Research

Meng Li, Howard Leung and Hubert P. H. Shum, "Human Action Recognition via Skeletal and Depth Based Feature Fusion", Proceedings of the 2016 ACM International Conference on Motion in Games (MIG), 2016
Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao, "Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data", Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016
Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang, "Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition", Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
Ying Huang, Hubert P. H. Shum, Edmond S. L. Ho and Nauman Aslam, "High-Speed Multi-Person Pose Estimation with Deep Feature Transfer", Computer Vision and Image Understanding (CVIU), 2020

Last updated on 14 April 2024