Please use this identifier to cite or link to this item: https://doi.org/10.1155/2020/8861035
DC Field	Value
dc.title	Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images
dc.contributor.author	Nai, Y.-H.
dc.contributor.author	Teo, B.W.
dc.contributor.author	Tan, N.L.
dc.contributor.author	Chua, K.Y.W.
dc.contributor.author	Wong, C.K.
dc.contributor.author	O'Doherty, S.
dc.contributor.author	Stephenson, M.C.
dc.contributor.author	Schaefferkoetter, J.
dc.contributor.author	Thian, Y.L.
dc.contributor.author	Chiong, E.
dc.contributor.author	Reilhac, A.
dc.date.accessioned	2021-08-17T08:45:59Z
dc.date.available	2021-08-17T08:45:59Z
dc.date.issued	2020
dc.identifier.citation	Nai, Y.-H., Teo, B.W., Tan, N.L., Chua, K.Y.W., Wong, C.K., O'Doherty, S., Stephenson, M.C., Schaefferkoetter, J., Thian, Y.L., Chiong, E., Reilhac, A. (2020). Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. Computational and Mathematical Methods in Medicine 2020: 8861035. ScholarBank@NUS Repository. https://doi.org/10.1155/2020/8861035
dc.identifier.issn	1748-670X
dc.identifier.uri	https://scholarbank.nus.edu.sg/handle/10635/197333
dc.description.abstract	Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy. However, manual segmentation of the prostate is subjective and time-consuming. Many monomodal deep learning networks have been developed for automatic whole-prostate segmentation from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice score coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 for the whole gland (WG), respectively. Compared to monomodal DenseVNet, multimodal HighRes3DNet and ScaleNet yielded higher DSC with statistically significant differences in PZ and CG only, indicating that multimodal networks added value by generating better segmentation between the PZ and CG regions but did not improve WG segmentation. No significant difference was observed between monomodal and multimodal networks at the apex and base of the WG, indicating that segmentation in these regions was more affected by the general network architecture. The number of training datasets was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet yielded DSC higher than 0.65 even for special cases, such as transurethral resection of the prostate (TURP) or abnormal prostates, whereas HighRes3DNet's performance fluctuated with no clear trend despite it being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variations in segmentation compared to the matched monomodal network. © 2020 Ying-Hwey Nai et al.
dc.publisher	Hindawi Limited
dc.rights	Attribution 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by/4.0/
dc.source	Scopus OA2020
dc.type	Article
dc.contributor.department	DEAN'S OFFICE (MEDICINE)
dc.contributor.department	DIAGNOSTIC RADIOLOGY
dc.contributor.department	SURGERY
dc.description.doi	10.1155/2020/8861035
dc.description.sourcetitle	Computational and Mathematical Methods in Medicine
dc.description.volume	2020
dc.description.page	8861035
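
The abstract above reports segmentation accuracy as Dice score coefficients (DSC). For readers unfamiliar with the metric, the following is a minimal Python sketch, not taken from the paper, showing how a DSC can be computed between a predicted binary mask and a reference mask; the function name dice_score and the random example volumes are illustrative assumptions.

    import numpy as np

    def dice_score(pred, ref):
        """Dice score coefficient: DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
        pred = np.asarray(pred, dtype=bool)
        ref = np.asarray(ref, dtype=bool)
        denom = pred.sum() + ref.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, ref).sum() / denom

    # Hypothetical usage with random volumes standing in for PZ/CG/WG label masks.
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 2, size=(64, 64, 32))
    ref = rng.integers(0, 2, size=(64, 64, 32))
    print(f"DSC: {dice_score(pred, ref):.3f}")

A DSC of 1.0 indicates perfect overlap between prediction and reference, and 0.0 indicates no overlap, which is how the per-region values (e.g., 0.875 for DenseVNet in the WG) in the abstract should be read.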
Appears in Collections:
Elements
Staff Publications

Files in This Item:
File: 10_1155_2020_8861035.pdf
Size: 1.2 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

This item is licensed under a Creative Commons Attribution 4.0 International License.