Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data

Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao
Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016

H5-Index: 122# · Core A* Conference‡ · Citations: 16#

‡ According to Core Ranking 2023
# According to Google Scholar 2025

Abstract

Human action recognition is an important problem in robotic vision. Traditional recognition algorithms usually require the knowledge of view angle, which is not always available in robotic applications such as active vision. In this paper, we propose a new framework to recognize actions with arbitrary views. A main feature of our algorithm is that view-invariance is learned from synthetic 2D and 3D training data using transfer dictionary learning. This guarantees the availability of training data, and removes the hassle of obtaining real world video in specific viewing angles. The result of the process is a dictionary that can project real world 2D video into a view-invariant sparse representation. This facilitates the training of a view-invariant classifier. Experimental results on the IXMAS and N-UCLA datasets show significant improvements over existing algorithms.
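The pipeline described above — learn a dictionary, encode 2D video features as sparse codes, then train a classifier on those codes — can be sketched in a few lines. This is a minimal illustration only, assuming generic scikit-learn components and random placeholder features; it does not reproduce the paper's transfer formulation or its synthetic 2D/3D training data.

```python
# Sketch of the abstract's pipeline: dictionary learning -> sparse codes
# (the view-invariant representation) -> classifier on the codes.
# All data below is random placeholder data, not real video descriptors.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for descriptors extracted from (synthetic) training videos.
X_train = rng.normal(size=(60, 32))
y_train = rng.integers(0, 3, size=60)

# Learn a dictionary; the sparse codes play the role of the
# view-invariant representation in the paper.
dico = DictionaryLearning(n_components=16,
                          transform_algorithm="lasso_lars",
                          transform_alpha=0.1,
                          max_iter=10, random_state=0)
codes_train = dico.fit(X_train).transform(X_train)

# Train a classifier on the sparse codes.
clf = LinearSVC().fit(codes_train, y_train)

# A "real world" test video is projected with the same dictionary.
X_test = rng.normal(size=(5, 32))
pred = clf.predict(dico.transform(X_test))
print(pred.shape)  # one label per test sample
```

The key design point is that the dictionary, not the raw features, carries the view invariance: test videos from unseen angles are encoded with the same learned dictionary before classification.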


Cite This Research

Plain Text

Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao, "Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data," in ICRA '16: Proceedings of the 2016 IEEE International Conference on Robotics and Automation, pp. 1678-1684, Stockholm, Sweden, IEEE, May 2016.

BibTeX

@inproceedings{zhang16arbitrary,
 author={Zhang, Jingtian and Zhang, Lining and Shum, Hubert P. H. and Shao, Ling},
 booktitle={Proceedings of the 2016 IEEE International Conference on Robotics and Automation},
 series={ICRA '16},
 title={Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data},
 year={2016},
 month={5},
 pages={1678--1684},
 numpages={8},
 doi={10.1109/ICRA.2016.7487309},
 publisher={IEEE},
 location={Stockholm, Sweden},
}

RIS

TY  - CONF
AU  - Zhang, Jingtian
AU  - Zhang, Lining
AU  - Shum, Hubert P. H.
AU  - Shao, Ling
T2  - Proceedings of the 2016 IEEE International Conference on Robotics and Automation
TI  - Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data
PY  - 2016
Y1  - 5 2016
SP  - 1678
EP  - 1684
DO  - 10.1109/ICRA.2016.7487309
PB  - IEEE
ER  - 


Supporting Grants

Northumbria University

Postgraduate Research Scholarship (Ref: ): £65,000, Principal Investigator ()
Received from Faculty of Engineering and Environment, Northumbria University, UK, 2015-2018

Similar Research

Jingtian Zhang, Hubert P. H. Shum, Jungong Han and Ling Shao, "Action Recognition from Arbitrary Views Using Transferable Dictionary Learning", IEEE Transactions on Image Processing (TIP), 2018
Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang, "Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition", Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
Qianhui Men, Edmond S. L. Ho, Hubert P. H. Shum and Howard Leung, "Focalized Contrastive View-Invariant Learning for Self-Supervised Skeleton-Based Action Recognition", Neurocomputing, 2023
