Please use this identifier to cite or link to this item:
https://doi.org/10.1007/s11263-014-0717-5
DC Field | Value
---|---
dc.title | Harnessing Lab Knowledge for Real-World Action Recognition
dc.contributor.author | Ma, Z.
dc.contributor.author | Yang, Y.
dc.contributor.author | Nie, F.
dc.contributor.author | Sebe, N.
dc.contributor.author | Yan, S.
dc.contributor.author | Hauptmann, A.G.
dc.date.accessioned | 2016-06-03T08:08:00Z
dc.date.available | 2016-06-03T08:08:00Z
dc.date.issued | 2014-04-16
dc.identifier.citation | Ma, Z., Yang, Y., Nie, F., Sebe, N., Yan, S., Hauptmann, A.G. (2014-04-16). Harnessing Lab Knowledge for Real-World Action Recognition. International Journal of Computer Vision. ScholarBank@NUS Repository. https://doi.org/10.1007/s11263-014-0717-5
dc.identifier.issn | 0920-5691
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/125099
dc.description.abstract | Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, with more complicated actions, and often only a few of them are precisely labeled. Thus, recognizing actions from these videos is a tough mission. The paucity of labeled real-world videos motivates us to "borrow" strength from other resources. Specifically, considering that many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten p-norm is exerted on the two classifiers to explore the shared knowledge between them. In this way, our framework is able to mine the shared knowledge between two datasets even if the two have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve the action recognition in the real-world videos. Extensive experiments are performed on real-world datasets with promising results. © 2014 Springer Science+Business Media New York.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s11263-014-0717-5
dc.source | Scopus
dc.subject | Action recognition
dc.subject | General Schatten p-norm
dc.subject | Lab to real-world
dc.subject | Transfer learning
dc.type | Article
dc.contributor.department | ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi | 10.1007/s11263-014-0717-5
dc.description.sourcetitle | International Journal of Computer Vision
dc.description.coden | IJCVE
dc.identifier.isiut | 000337091700005
Appears in Collections: Staff Publications
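The regularizer named in the abstract and subject terms, the general Schatten p-norm, is defined over the singular values of a matrix as ||W||_Sp = (Σᵢ σᵢ(W)^p)^(1/p); in the paper's setting it couples the stacked lab and real-world classifier weights so the two tasks share structure. A minimal NumPy sketch of the norm itself (the function name is illustrative, not from the paper):

```python
import numpy as np

def schatten_p_norm(W: np.ndarray, p: float) -> float:
    """Compute the Schatten p-norm (sum_i sigma_i^p)^(1/p) of matrix W."""
    # Singular values of W, in descending order.
    sigma = np.linalg.svd(W, compute_uv=False)
    return float((sigma ** p).sum() ** (1.0 / p))

# Example: for W = diag(3, 4) the singular values are {4, 3}.
W = np.array([[3.0, 0.0],
              [0.0, 4.0]])
print(schatten_p_norm(W, 2))  # Frobenius norm: sqrt(9 + 16) = 5.0
print(schatten_p_norm(W, 1))  # nuclear norm: 4 + 3 = 7.0
```

Setting p = 1 recovers the nuclear norm, which encourages a low-rank (shared) structure across the stacked classifiers, while p = 2 recovers the Frobenius norm; the "general" p interpolates between these behaviors.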
Files in This Item:
There are no files associated with this item.
Scopus citations: 32 (checked on Mar 30, 2023)
Web of Science citations: 30 (checked on Mar 30, 2023)
Page views: 184 (checked on Mar 30, 2023)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.