Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/69637
DC Field: Value
dc.title: Combining visual features for medical image retrieval and annotation
dc.contributor.author: Xiong, W.
dc.contributor.author: Qiu, B.
dc.contributor.author: Tian, Q.
dc.contributor.author: Xu, C.
dc.contributor.author: Ong, S.H.
dc.contributor.author: Foong, K.
dc.date.accessioned: 2014-06-19T03:02:58Z
dc.date.available: 2014-06-19T03:02:58Z
dc.date.issued: 2006
dc.identifier.citation: Xiong, W., Qiu, B., Tian, Q., Xu, C., Ong, S.H., Foong, K. (2006). Combining visual features for medical image retrieval and annotation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 4022 LNCS: 632-641. ScholarBank@NUS Repository.
dc.identifier.isbn: 354045697X
dc.identifier.issn: 03029743
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/69637
dc.description.abstract: In this paper we report our work on visual feature fusion for the tasks of medical image retrieval and annotation in the ImageCLEF 2005 benchmark. In the retrieval task, we use visual features without text information and without relevance feedback. Both local and global features, of both structural and statistical nature, are captured. We first identify visually similar images manually and form templates for each query topic. A pre-filtering process is used for coarse retrieval. In the fine retrieval, two similarity-measuring channels with different visual features are used in parallel and then combined at the decision level to produce a final score for image ranking. Our approach is evaluated over all 25 query topics, each containing example image(s) and a textual topic statement. Over 50,000 images, we achieved a mean average precision of 14.6%, one of the best-performing runs. In the annotation task, visual features are fused at an early stage by concatenation with normalization. We use support vector machines (SVM) with RBF kernels for classification. Our approach is trained on a 9,000-image training set and tested on the given test set of 1,000 images over 57 classes, with a correct classification rate of about 80%. © Springer-Verlag Berlin Heidelberg 2006.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: PREVENTIVE DENTISTRY
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.sourcetitle: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.description.volume: 4022 LNCS
dc.description.page: 632-641
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.