Please use this identifier to cite or link to this item: https://doi.org/10.3390/diagnostics13010160
DC Field: Value
dc.title: A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders
dc.contributor.author: Chan, Ebenezer
dc.contributor.author: Tang, Zhiqun
dc.contributor.author: Najjar, Raymond P.
dc.contributor.author: Narayanaswamy, Arun
dc.contributor.author: Sathianvichitr, Kanchalika
dc.contributor.author: Newman, Nancy J.
dc.contributor.author: Biousse, Valerie
dc.contributor.author: Milea, Dan
dc.contributor.author: BONSAI Group
dc.date.accessioned: 2023-02-13T06:36:19Z
dc.date.available: 2023-02-13T06:36:19Z
dc.date.issued: 2023-01-01
dc.identifier.citation: Chan, Ebenezer, Tang, Zhiqun, Najjar, Raymond P., Narayanaswamy, Arun, Sathianvichitr, Kanchalika, Newman, Nancy J., Biousse, Valerie, Milea, Dan, BONSAI Group (2023-01-01). A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. DIAGNOSTICS 13 (1). ScholarBank@NUS Repository. https://doi.org/10.3390/diagnostics13010160
dc.identifier.issn: 2075-4418
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/237164
dc.description.abstract: The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), whether performed by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we developed, trained, and tested a DLS using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard for image quality was established by three experts who independently classified photographs as of “good”, “borderline”, or “poor” quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model evaluated with a one-vs-rest classification strategy. In the external testing dataset, the DLS identified “good” quality photographs with excellent performance (AUC = 0.93 (95% CI, 0.91–0.95), accuracy = 91.4% (95% CI, 90.0–92.9%), sensitivity = 93.8% (95% CI, 92.5–95.2%), specificity = 75.9% (95% CI, 69.7–82.1%)) and “poor” quality photographs (AUC = 1.00 (95% CI, 0.99–1.00), accuracy = 99.1% (95% CI, 98.6–99.6%), sensitivity = 81.5% (95% CI, 70.6–93.8%), specificity = 99.7% (95% CI, 99.6–100.0%)). “Borderline” quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88–0.93), accuracy = 90.6% (95% CI, 89.1–92.2%), sensitivity = 65.4% (95% CI, 56.6–72.9%), specificity = 93.4% (95% CI, 92.1–94.8%)). The overall accuracy in distinguishing among the three classes was 90.6% (95% CI, 89.1–92.1%), suggesting that this DLS could select optimal-quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
dc.language.iso: en
dc.publisher: MDPI
dc.source: Elements
dc.subject: Science & Technology
dc.subject: Life Sciences & Biomedicine
dc.subject: Medicine, General & Internal
dc.subject: General & Internal Medicine
dc.subject: retinal image quality assessment
dc.subject: artificial intelligence
dc.subject: deep learning
dc.subject: optic nerve head
dc.subject: papilledema
dc.subject: DIABETIC-RETINOPATHY
dc.subject: ARTIFICIAL-INTELLIGENCE
dc.subject: MODEL
dc.type: Article
dc.date.updated: 2023-02-13T06:26:59Z
dc.contributor.department: DUKE-NUS MEDICAL SCHOOL
dc.contributor.department: OPHTHALMOLOGY
dc.description.doi: 10.3390/diagnostics13010160
dc.description.sourcetitle: DIAGNOSTICS
dc.description.volume: 13
dc.description.issue: 1
dc.published.state: Published
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders.pdf
Size: 3.12 MB
Format: Adobe PDF
Access Settings: OPEN
Version: Published

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.