High-Speed Multi-Person Pose Estimation with Deep Feature Transfer

Ying Huang, Hubert P. H. Shum, Edmond S. L. Ho and Nauman Aslam
Computer Vision and Image Understanding (CVIU), 2020

 Impact Factor: 4.5

Abstract

Recent advancements in deep learning have significantly improved the accuracy of multi-person pose estimation from RGB images. However, these deep learning methods typically rely on a large number of deep refinement modules to refine the features of body joints and limbs, which greatly reduces run-time speed and therefore limits the application domain. In this paper, we propose a feature transfer framework to capture the concurrent correlations between body joint and limb features. The concurrent correlations of these features form a complementary structural relationship, which mutually strengthens the network's inferences and reduces the need for refinement modules. The transfer sub-network is implemented with multiple convolutional layers, and is merged with the body part detection network to form an end-to-end system. The transfer relationship is automatically learned from ground-truth data instead of being manually encoded, resulting in a more general and efficient design. The proposed framework is validated on multiple popular multi-person pose estimation benchmarks: MPII, COCO 2018, and PoseTrack 2017 and 2018. Experimental results show that our method not only significantly increases the inference speed to 73.8 frames per second (FPS), but also attains comparable state-of-the-art performance.
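As an illustration of the idea in the abstract, the sketch below shows what a convolutional transfer sub-network could look like in PyTorch: a small stack of convolutions that maps joint-confidence feature maps into limb (part-affinity) feature maps, so one branch's output reinforces the other without stacking extra refinement stages. This is a hypothetical minimal sketch, not the paper's actual architecture; the channel counts, layer sizes, and the `FeatureTransfer` name are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FeatureTransfer(nn.Module):
    """Hypothetical transfer sub-network: a few convolutional layers that
    map features of one body-part branch (e.g. joint heatmaps) into the
    feature space of the other branch (e.g. limb / part-affinity fields)."""

    def __init__(self, in_channels: int, out_channels: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Illustration: transfer 19 joint heatmap channels into 38 limb channels
# (channel counts chosen for illustration only).
joints_to_limbs = FeatureTransfer(in_channels=19, out_channels=38)
joint_maps = torch.randn(1, 19, 46, 46)   # one image, 46x46 feature grid
limb_maps = joints_to_limbs(joint_maps)   # same spatial size, 38 channels
print(limb_maps.shape)  # torch.Size([1, 38, 46, 46])
```

In an end-to-end system, such a module would be trained jointly with the detection network, so the transfer mapping is learned from ground-truth data rather than hand-designed.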


Citations

BibTeX

@article{huang20highspeed,
 author={Huang, Ying and Shum, Hubert P. H. and Ho, Edmond S. L. and Aslam, Nauman},
 journal={Computer Vision and Image Understanding},
 title={High-Speed Multi-Person Pose Estimation with Deep Feature Transfer},
 year={2020},
 volume={197-198},
 pages={103010},
 numpages={14},
 doi={10.1016/j.cviu.2020.103010},
 issn={1077-3142},
 publisher={Elsevier},
}

RIS

TY  - JOUR
AU  - Huang, Ying
AU  - Shum, Hubert P. H.
AU  - Ho, Edmond S. L.
AU  - Aslam, Nauman
T2  - Computer Vision and Image Understanding
TI  - High-Speed Multi-Person Pose Estimation with Deep Feature Transfer
PY  - 2020
VL  - 197-198
SP  - 103010
EP  - 103010
DO  - 10.1016/j.cviu.2020.103010
SN  - 1077-3142
PB  - Elsevier
ER  - 

Plain Text

Ying Huang, Hubert P. H. Shum, Edmond S. L. Ho and Nauman Aslam, "High-Speed Multi-Person Pose Estimation with Deep Feature Transfer," Computer Vision and Image Understanding, vol. 197-198, pp. 103010, Elsevier, 2020.

Supporting Grants

Erasmus Mundus
Sustainable Green Economies through Learning, Innovation, Networking and Knowledge Exchange (gLink)
Erasmus Mundus Action 2 Programme (Ref: 2014-0861/001-001): €3.03 million, Co-Investigator, Northumbria University Funding Management Leader (PI: Prof. Nauman Aslam)
Received from Erasmus Mundus, 2015-2018
Project Page

Similar Research

Zhengzhi Lu, He Wang, Ziyi Chang, Guoan Yang and Hubert P. H. Shum, "Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient", Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Qianhui Men, Edmond S. L. Ho, Hubert P. H. Shum and Howard Leung, "Focalized Contrastive View-Invariant Learning for Self-Supervised Skeleton-Based Action Recognition", Neurocomputing, 2023
Yang Yang, Huiwen Bian, Hubert P. H. Shum, Nauman Aslam and Lanling Zeng, "Temporal Clustering of Motion Capture Data with Optimal Partitioning", Proceedings of the 2016 International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI), 2016
Meng Li, Howard Leung and Hubert P. H. Shum, "Human Action Recognition via Skeletal and Depth Based Feature Fusion", Proceedings of the 2016 ACM International Conference on Motion in Games (MIG), 2016
Qianhui Men, Howard Leung, Edmond S. L. Ho and Hubert P. H. Shum, "A Two-Stream Recurrent Network for Skeleton-Based Human Interaction Recognition", Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020

Last updated on 28 April 2024