Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/72162
DC Field: Value
dc.title: Video-text extraction and recognition
dc.contributor.author: Chen, T.B.
dc.contributor.author: Ghosh, D.
dc.contributor.author: Ranganath, S.
dc.date.accessioned: 2014-06-19T03:32:10Z
dc.date.available: 2014-06-19T03:32:10Z
dc.date.issued: 2004
dc.identifier.citation: Chen, T. B., Ghosh, D., & Ranganath, S. (2004). Video-text extraction and recognition. IEEE Region 10 Annual International Conference, Proceedings/TENCON, A, A319-A322. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/72162
dc.description.abstract: The detection and recognition of text from video is an important issue in automated content-based indexing of visual information in video archives. In this paper, we present a comprehensive system for extracting and recognizing artificial text from unconstrained, general-purpose videos. Exploiting the temporal feature of videos, an edge-detection-based text segmentation method is applied only on selective frames for extracting text from a video scene. Subsequently, a combination of techniques including multiple frame integration, gray-scale filtering, entropy-based thresholding and line adjacency graphs is used to enhance the detected text areas. Finally, character recognition is accomplished by using the character side profiles. Results obtained from experiments on uncompressed MPEG-1 video clips demonstrate the effectiveness of our proposed system. © 2004 IEEE.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.sourcetitle: IEEE Region 10 Annual International Conference, Proceedings/TENCON
dc.description.volume: A
dc.description.page: A319-A322
dc.description.coden: 85QXA
dc.identifier.isiut: NOT_IN_WOS
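The abstract mentions entropy-based thresholding as one of the steps used to enhance detected text areas. As an illustrative sketch only (the paper itself is not available here, so the function below is a generic Kapur-style maximum-entropy threshold, and the toy histogram is invented for demonstration), such a binarization step might look like:

```python
import math

def kapur_threshold(hist):
    """Pick the threshold t that maximizes the sum of the
    background and foreground entropies (Kapur-style method)."""
    total = sum(hist)
    p = [h / total for h in hist]          # normalized histogram
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])                    # background probability mass
        w1 = 1.0 - w0                      # foreground probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        # Entropy of each class under its renormalized distribution.
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Hypothetical bimodal histogram over 16 gray levels: dark background
# pixels cluster near bin 2, bright text pixels near bin 12.
hist = [5, 40, 80, 40, 5, 0, 0, 0, 0, 0, 5, 30, 60, 30, 5, 0]
t = kapur_threshold(hist)   # threshold lands in the valley between modes
```

Pixels at or above `t` would then be kept as candidate text, the rest as background; the histogram and 16-level quantization here are assumptions for the sake of a runnable example.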
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.