|Title:||How related exemplars help complex event detection in web videos?|
|Citation:||Yang, Y., Ma, Z., Xu, Z., Yan, S., Hauptmann, A.G. (2013). How related exemplars help complex event detection in web videos? Proceedings of the IEEE International Conference on Computer Vision: 2104-2111. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2013.456|
|Abstract:||Compared to visual concepts such as actions, scenes, and objects, a complex event is a higher-level abstraction over longer video sequences. For example, a 'marriage proposal' event is described by multiple objects (e.g., ring, faces), scenes (e.g., in a restaurant, outdoors), and actions (e.g., kneeling down). Positive exemplars that exactly convey the precise semantics of an event are hard to obtain, so it would be beneficial to utilize related exemplars for complex event detection. However, the semantic correlations between related exemplars and the target event vary substantially, as relatedness assessment is subjective. Two related exemplars can depict completely different events; e.g., in the TRECVID MED dataset, both bicycle riding and equestrianism are labeled as related to the 'attempting a bike trick' event. To tackle the subjectiveness of human assessment, our algorithm automatically evaluates how positive the related exemplars are for the detection of an event and uses them on an exemplar-specific basis. Experiments demonstrate that the algorithm utilizes related exemplars adaptively and achieves good performance for complex event detection. © 2013 IEEE.|
|Source Title:||Proceedings of the IEEE International Conference on Computer Vision|
|Appears in Collections:||Staff Publications|