Title: Learning CRFs for image parsing with adaptive subgradient descent
Authors: Zhang, H.; Wang, J.; Tan, P.; Wang, J.; Quan, L.
Keywords: Adaptive Subgradient Descent; Conditional Random Field
Issue Date: 2013
Citation: Zhang, H., Wang, J., Tan, P., Wang, J., Quan, L. (2013). Learning CRFs for image parsing with adaptive subgradient descent. Proceedings of the IEEE International Conference on Computer Vision: 3080-3087. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2013.382
Abstract: We propose an adaptive subgradient descent method to efficiently learn the parameters of CRF models for image parsing. To balance learning efficiency against the performance of the learned CRF models, parameter learning is carried out iteratively by solving a convex optimization problem in each iteration; this problem integrates a proximal term, which preserves previously learned information, with a large-margin preference that separates bad labelings from the ground-truth labeling. We derive a solution in subgradient-descent updating form for this convex problem, with an adaptively determined step size. In addition, to handle partially labeled training data, we propose a new objective constraint that models both the labeled and unlabeled parts of such data for CRF parameter learning. Experimental results on two public datasets verify the superior learning efficiency of the proposed method, and we also demonstrate its effectiveness in handling partially labeled training data. © 2013 IEEE.
Source Title: Proceedings of the IEEE International Conference on Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/83893
ISBN: 9781479928392
DOI: 10.1109/ICCV.2013.382
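The abstract describes per-iteration updates that combine an adaptively scaled subgradient step with a proximal pull toward the previous parameters, under a large-margin constraint between the ground-truth labeling and bad labelings. The following is a minimal illustrative sketch of that style of update on a toy linear model with a single competing labeling; the AdaGrad-style step size, the hinge margin, and all data and names are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def adaptive_subgradient_step(w, g, grad_sq_sum, eta=0.5, mu=0.1,
                              w_prev=None, eps=1e-8):
    """One update: per-coordinate adaptive step plus proximal pull toward w_prev."""
    grad_sq_sum += g * g                       # accumulate squared subgradients
    step = eta / (np.sqrt(grad_sq_sum) + eps)  # adaptive per-coordinate step size
    w_new = w - step * g
    if w_prev is not None:
        # closed-form proximal step: argmin_x 0.5||x - w_new||^2 + 0.5*mu*||x - w_prev||^2
        w_new = (w_new + mu * w_prev) / (1.0 + mu)
    return w_new, grad_sq_sum

def hinge_subgradient(w, phi_gt, phi_bad, margin=1.0):
    """Subgradient of a margin-rescaled hinge between ground truth and a bad labeling."""
    if margin - w @ (phi_gt - phi_bad) > 0:    # margin still violated
        return -(phi_gt - phi_bad)
    return np.zeros_like(w)

rng = np.random.default_rng(0)
d = 5
phi_gt = rng.normal(size=d) + 1.0   # toy feature vector of the ground-truth labeling
phi_bad = rng.normal(size=d)        # toy feature vector of a competing bad labeling

w = np.zeros(d)
grad_sq = np.zeros(d)
for t in range(200):
    g = hinge_subgradient(w, phi_gt, phi_bad)
    w, grad_sq = adaptive_subgradient_step(w, g, grad_sq, w_prev=w)

print(w @ (phi_gt - phi_bad))  # score gap should meet or exceed the margin of 1.0
```

Once the margin constraint is satisfied the subgradient vanishes and the proximal term leaves the parameters unchanged, which is the "preserve previously learned information" behavior the abstract refers to.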
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.