Please use this identifier to cite or link to this item: https://doi.org/10.1109/CVPR.2008.4587432
DC Field                      Value
dc.title                      Learning class-specific affinities for image labelling
dc.contributor.author         Batra D.
dc.contributor.author         Sukthankar R.
dc.contributor.author         Chen T.
dc.date.accessioned           2018-08-21T05:05:09Z
dc.date.available             2018-08-21T05:05:09Z
dc.date.issued                2008
dc.identifier.citation        Batra D., Sukthankar R., Chen T. (2008). Learning class-specific affinities for image labelling. 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR: 4587432. ScholarBank@NUS Repository. https://doi.org/10.1109/CVPR.2008.4587432
dc.identifier.isbn            9781424422432
dc.identifier.uri             http://scholarbank.nus.edu.sg/handle/10635/146235
dc.description.abstract       Spectral clustering and eigenvector-based methods have become increasingly popular in segmentation and recognition. Although the choice of the pairwise similarity metric (or affinities) greatly influences the quality of the results, this choice is typically specified outside the learning framework. In this paper, we present an algorithm to learn class-specific similarity functions. Mapping our problem in a Conditional Random Fields (CRF) framework enables us to pose the task of learning affinities as parameter learning in undirected graphical models. There are two significant advances over previous work. First, we learn the affinity between a pair of data-points as a function of a pairwise feature and (in contrast with previous approaches) the classes to which these two data-points were mapped, allowing us to work with a richer class of affinities. Second, our formulation provides a principled probabilistic interpretation for learning all of the parameters that define these affinities. Using ground truth segmentations and labellings for training, we learn the parameters with the greatest discriminative power (in an MLE sense) on the training data. We demonstrate the power of this learning algorithm in the setting of joint segmentation and recognition of object classes. Specifically, even with very simple appearance features, the proposed method achieves state-of-the-art performance on standard datasets.
dc.source                     Scopus
dc.type                       Conference Paper
dc.contributor.department     OFFICE OF THE PROVOST
dc.contributor.department     DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi            10.1109/CVPR.2008.4587432
dc.description.sourcetitle    26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
dc.description.page           4587432
dc.published.state            published
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
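
As a minimal sketch of the model form the abstract describes (the notation is assumed here, not taken from the paper: y_i denotes the class label assigned to data point i, f_ij a pairwise feature, and w_kl a learned weight vector for each class pair), a CRF energy with class-specific pairwise affinities can be written as

$$
E(\mathbf{y}\mid\mathbf{x}) \;=\; \sum_{i} \psi_i(y_i;\mathbf{x}) \;+\; \sum_{(i,j)\in\mathcal{E}} \psi_{ij}(y_i,y_j;\mathbf{x}),
\qquad
\psi_{ij}(y_i,y_j;\mathbf{x}) \;=\; w_{y_i y_j}^{\top} f_{ij}(\mathbf{x}).
$$

In this sketch the affinity applied to the pairwise feature $f_{ij}(\mathbf{x})$ depends on the classes of both end points rather than on the feature alone, and all of the weight vectors $w_{kl}$ are fit jointly by maximum-likelihood estimation on ground-truth segmentations and labellings, matching the abstract's framing of affinity learning as parameter learning in an undirected graphical model.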
