Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICCV.2013.281
Title: Learning to share latent tasks for action recognition
Authors: Zhou, Q.
Wang, G.
Jia, K.
Zhao, Q. 
Keywords: Action Recognition
Latent Task
Issue Date: 2013
Citation: Zhou, Q., Wang, G., Jia, K., Zhao, Q. (2013). Learning to share latent tasks for action recognition. Proceedings of the IEEE International Conference on Computer Vision : 2264-2271. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2013.281
Abstract: Sharing knowledge across multiple related machine learning tasks is an effective strategy for improving generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related and share common motion patterns (e.g., diving and high jump both involve a jumping motion). We propose a new multi-task learning method that learns latent tasks shared across categories and reconstructs a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) the learned latent tasks correspond to basic motion patterns rather than full actions, which enhances the discriminative power of the classifiers; (2) categories are selected to share information via a sparsity regularizer, which avoids forcing all categories to share knowledge indiscriminately. Experimental results on multiple public data sets show that the proposed approach effectively transfers knowledge between different action categories and improves on conventional single-task learning methods. © 2013 IEEE.
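The following is a minimal sketch of the shared-latent-task idea described in the abstract, not the paper's actual algorithm (which learns the latent tasks jointly from video features rather than factoring pre-trained classifiers). Per-category linear classifiers are stacked as columns of a matrix W and approximated as W ≈ L S, where the columns of L are latent task classifiers and the sparse columns of S select which latent tasks each category shares. The function name learn_latent_tasks, the fixed step size, and the alternating proximal-gradient updates are illustrative assumptions.

import numpy as np

def soft_threshold(x, lam):
    # Element-wise soft-thresholding: proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def learn_latent_tasks(W, num_latent, lam=0.1, step=0.01, iters=200, seed=0):
    # Approximate W (d x C, one column per category classifier) as L @ S,
    # where L (d x num_latent) holds latent task classifiers and
    # S (num_latent x C) holds per-category combination weights, by
    # alternating gradient / proximal steps on
    #   0.5 * ||W - L S||_F^2 + lam * ||S||_1.
    rng = np.random.default_rng(seed)
    d, C = W.shape
    L = 0.01 * rng.standard_normal((d, num_latent))
    S = 0.01 * rng.standard_normal((num_latent, C))
    for _ in range(iters):
        R = L @ S - W                  # residual, d x C
        L = L - step * (R @ S.T)       # gradient step on the latent tasks
        R = L @ S - W
        S = soft_threshold(S - step * (L.T @ R), step * lam)  # sparse update on S
    return L, S

In this sketch, W could come from one-vs-rest linear classifiers trained on bag-of-words video features; category c's classifier is then reconstructed as L @ S[:, c], and the nonzero entries of S[:, c] indicate which latent motion patterns that category shares with others.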
Source Title: Proceedings of the IEEE International Conference on Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/83897
ISBN: 9781479928392
DOI: 10.1109/ICCV.2013.281
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.