|Title:||Text localization in web images using probabilistic candidate selection model|
|Source:||Situ, L., Liu, R., Tan, C.L. (2011). Text localization in web images using probabilistic candidate selection model. Proceedings of the International Conference on Document Analysis and Recognition, ICDAR : 1359-1363. ScholarBank@NUS Repository. https://doi.org/10.1109/ICDAR.2011.273|
|Abstract:||The web has become increasingly oriented to multimedia content, and much of its information is conveyed through images. Text localization in web images therefore plays an important role in web image information extraction and retrieval. Existing approaches to text localization in web images assume that text regions have homogeneous color and high contrast, so they may fail when text is multi-colored or superimposed on a complex background. In this paper, we propose a text extraction algorithm for web images based on a probabilistic candidate selection model. The model first segments text region candidates from the input image using wavelets, a Gaussian mixture model (GMM), and triangulation. The likelihood of a candidate region containing text is then learnt with a Bayesian probabilistic model from two features: the histogram of oriented gradients (HOG) and the local binary pattern histogram Fourier feature (LBP-HF). Finally, the best candidate regions are integrated to form text regions. The algorithm is evaluated on 155 non-homogeneous web images containing around 600 text regions. The results show that the proposed model extracts text regions from non-homogeneous images effectively. © 2011 IEEE.|
|Source Title:||Proceedings of the International Conference on Document Analysis and Recognition, ICDAR|
|Appears in Collections:||Staff Publications|
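The abstract does not detail how the Bayesian model combines the two features, so the following is only a minimal sketch, assuming a naive-Bayes combination in which the HOG and LBP-HF likelihoods of a candidate region are treated as conditionally independent given the text/non-text label; the function name and likelihood inputs are illustrative, not from the paper.

```python
import math

def text_posterior(p_hog_text, p_hog_bg, p_lbp_text, p_lbp_bg, prior_text=0.5):
    """Posterior probability that a candidate region contains text.

    Combines two per-feature likelihoods (hypothetical HOG and LBP-HF
    terms) with a prior under a naive-Bayes independence assumption.
    Computation is done in log space for numerical stability.
    """
    log_text = math.log(prior_text) + math.log(p_hog_text) + math.log(p_lbp_text)
    log_bg = math.log(1.0 - prior_text) + math.log(p_hog_bg) + math.log(p_lbp_bg)
    # Normalize the two hypotheses (log-sum-exp with the max subtracted).
    m = max(log_text, log_bg)
    num = math.exp(log_text - m)
    den = num + math.exp(log_bg - m)
    return num / den
```

Candidate regions could then be ranked by this posterior and the top-scoring ones integrated into text regions, as the abstract describes.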