Please use this identifier to cite or link to this item: https://doi.org/10.1109/DICTA.2012.6411703
Title: A new method for word segmentation from arbitrarily-oriented video text lines
Authors: Sharma, N.
Shivakumara, P. 
Pal, U.
Blumenstein, M.
Tan, C.L. 
Keywords: Video text candidates
Video text line
Video word segmentation
Issue Date: 2012
Source: Sharma, N., Shivakumara, P., Pal, U., Blumenstein, M., Tan, C.L. (2012). A new method for word segmentation from arbitrarily-oriented video text lines. 2012 International Conference on Digital Image Computing Techniques and Applications, DICTA 2012. ScholarBank@NUS Repository. https://doi.org/10.1109/DICTA.2012.6411703
Abstract: Word segmentation has become a research topic for improving OCR accuracy in video text recognition, because a video text line suffers from arbitrary orientation, complex background and low resolution. Therefore, for word segmentation from arbitrarily-oriented video text lines, in this paper, we extract four new gradient directional features for each Canny edge pixel of the input text line image to produce four respective pixel candidate images. The union of the four pixel candidate images is taken to obtain a text candidate image. The sequence of the components in the text candidate image along the text line is determined using nearest neighbor criteria. Then we propose a two-stage method for segmenting words. In the first stage, we apply K-means clustering with K=2 to the distances between the components to obtain probable word and non-word spacing clusters. Words are segmented at probable word spacings, and all remaining components are passed to the second stage for segmenting the correct words. For each segmented and un-segmented word passed to the second stage, the method repeats all steps up to the K-means clustering step to again find probable word and non-word spacing clusters. The method then considers the cluster nature and the height and width of the components to identify the correct word spacing. The method is tested extensively on video curved text lines, non-horizontal straight lines, horizontal straight lines and text lines from the ICDAR-2003 competition data. Experimental results and a comparative study show that the results are encouraging and promising. © 2012 IEEE.
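The first-stage idea in the abstract — clustering inter-component gap distances with K-means (K=2) and cutting the line at the "word spacing" cluster — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the component ordering is assumed to be given (the paper derives it via nearest-neighbor criteria), the gaps are hypothetical pixel distances, and a simple 1-D two-means loop stands in for a library K-means.

```python
# Hypothetical sketch of the paper's first-stage word segmentation:
# cluster gaps between consecutive text components into two groups
# (word spacing vs. non-word spacing) and split the line at the
# gaps assigned to the larger-mean cluster.

def kmeans_1d_two(gaps, iters=20):
    """1-D K-means with K=2; returns labels (0 = small gap, 1 = large gap)."""
    lo, hi = min(gaps), max(gaps)  # initialise centers at the extremes
    labels = [0] * len(gaps)
    for _ in range(iters):
        # assign each gap to the nearer center
        labels = [0 if abs(g - lo) <= abs(g - hi) else 1 for g in gaps]
        c0 = [g for g, lab in zip(gaps, labels) if lab == 0]
        c1 = [g for g, lab in zip(gaps, labels) if lab == 1]
        # recompute centers (keep the old one if a cluster is empty)
        lo = sum(c0) / len(c0) if c0 else lo
        hi = sum(c1) / len(c1) if c1 else hi
    return labels

def segment_words(components, gaps):
    """Split an ordered component list at gaps labelled as word spacing."""
    labels = kmeans_1d_two(gaps)
    words, current = [], [components[0]]
    for comp, label in zip(components[1:], labels):
        if label == 1:          # probable word spacing: start a new word
            words.append(current)
            current = []
        current.append(comp)
    words.append(current)
    return words

# Toy example: six components with two wide gaps yield three words.
comps = ["c1", "c2", "c3", "c4", "c5", "c6"]
gaps = [3, 2, 12, 3, 14]
print(segment_words(comps, gaps))
# → [['c1', 'c2', 'c3'], ['c4', 'c5'], ['c6']]
```

In the paper, components whose spacing this stage cannot resolve are passed to a second stage that re-runs the clustering and additionally checks cluster nature and component height/width; that refinement is omitted here.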
Source Title: 2012 International Conference on Digital Image Computing Techniques and Applications, DICTA 2012
URI: http://scholarbank.nus.edu.sg/handle/10635/42112
ISBN: 9781467321815
DOI: 10.1109/DICTA.2012.6411703
Appears in Collections: Staff Publications
