|Title:|Harnessing Lab Knowledge for Real-World Action Recognition|
|Keywords:|General Schatten-p norm; Lab to real-world|
|Source:|Ma, Z., Yang, Y., Nie, F., Sebe, N., Yan, S., Hauptmann, A.G. (2014-04-16). Harnessing Lab Knowledge for Real-World Action Recognition. International Journal of Computer Vision. ScholarBank@NUS Repository. https://doi.org/10.1007/s11263-014-0717-5|
|Abstract:|Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, contain more complicated actions, and often only a few of them are precisely labeled. Recognizing actions in such videos is therefore a challenging task. The paucity of labeled real-world videos motivates us to "borrow" strength from other resources. Specifically, considering that many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten p-norm is imposed on the two classifiers to explore the shared knowledge between them. In this way, our framework is able to mine the shared knowledge between two datasets even if the two have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve action recognition in the real-world videos. Extensive experiments are performed on real-world datasets with promising results. © 2014 Springer Science+Business Media New York.|
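For readers unfamiliar with the regularizer named in the abstract: the Schatten p-norm of a matrix W is the l_p norm of its singular values, (∑_i σ_i^p)^{1/p}, with p = 1 giving the nuclear (trace) norm and p = 2 the Frobenius norm. A minimal NumPy sketch of this definition (not the paper's optimization framework, just the norm itself):

```python
import numpy as np

def schatten_p_norm(W, p):
    # Schatten p-norm: the l_p norm of the singular values of W.
    # p = 1 recovers the nuclear (trace) norm; p = 2 the Frobenius norm.
    sigma = np.linalg.svd(W, compute_uv=False)
    return float((sigma ** p).sum() ** (1.0 / p))

# For a diagonal matrix the singular values are the absolute diagonal entries.
W = np.array([[3.0, 0.0],
              [0.0, 4.0]])
print(schatten_p_norm(W, 2))  # 5.0 (Frobenius norm)
print(schatten_p_norm(W, 1))  # 7.0 (nuclear norm)
```

Varying p interpolates between low-rank-inducing (small p) and smooth (p = 2) regularization, which is why the "general" Schatten p-norm can trade off how strongly the two classifiers are coupled.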
|Source Title:|International Journal of Computer Vision|
|Appears in Collections:|Staff Publications|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.