Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.ebiom.2023.104770
DC Field: Value
dc.title: Benchmarking large language models’ performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard
dc.contributor.author: Lim, ZW
dc.contributor.author: Pushpanathan, K
dc.contributor.author: Yew, SME
dc.contributor.author: Lai, Y
dc.contributor.author: Sun, CH
dc.contributor.author: Lam, JSH
dc.contributor.author: Chen, DZ
dc.contributor.author: Goh, JHL
dc.contributor.author: Tan, MCJ
dc.contributor.author: Sheng, B
dc.contributor.author: Cheng, CY
dc.contributor.author: Koh, VTC
dc.contributor.author: Tham, YC
dc.date.accessioned: 2023-11-17T01:26:10Z
dc.date.available: 2023-11-17T01:26:10Z
dc.date.issued: 2023-09-01
dc.identifier.citation: Lim, ZW, Pushpanathan, K, Yew, SME, Lai, Y, Sun, CH, Lam, JSH, Chen, DZ, Goh, JHL, Tan, MCJ, Sheng, B, Cheng, CY, Koh, VTC, Tham, YC (2023-09-01). Benchmarking large language models’ performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard. eBioMedicine 95: 104770. ScholarBank@NUS Repository. https://doi.org/10.1016/j.ebiom.2023.104770
dc.identifier.issn: 2352-3964
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/246018
dc.description.abstract: Background: Large language models (LLMs) are garnering wide interest due to their human-like and contextually relevant responses. However, LLMs’ accuracy across specific medical domains has yet to be thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0, and Google Bard, in delivering accurate responses to common myopia-related queries. Methods: We curated thirty-one commonly asked myopia care-related questions, categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority-consensus approach was used to determine the final rating for each response. ‘Good’-rated responses were further evaluated for comprehensiveness on a five-point scale; conversely, ‘poor’-rated responses were further prompted for self-correction and then re-evaluated for accuracy. Findings: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated ‘good’, compared to 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM-chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum score of 5). All LLM-chatbots also demonstrated substantial self-correction capabilities: 66.7% (2 of 3) of ChatGPT-4.0's, 40% (2 of 5) of ChatGPT-3.5's, and 60% (3 of 5) of Google Bard's responses improved after self-correction. The LLM-chatbots performed consistently across domains, except for ‘treatment and prevention’. Even in this domain, however, ChatGPT-4.0 performed best, receiving 70% ‘good’ ratings, compared to 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001). Interpretation: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs’ accuracy remain crucial. Funding: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
dc.publisher: Elsevier BV
dc.source: Elements
dc.subject: ChatGPT-3.5
dc.subject: ChatGPT-4.0
dc.subject: Chatbot
dc.subject: Google Bard
dc.subject: Large language models
dc.subject: Myopia
dc.subject: Humans
dc.subject: Child
dc.subject: Benchmarking
dc.subject: Search Engine
dc.subject: Consensus
dc.subject: Language
dc.type: Article
dc.date.updated: 2023-11-17T00:35:37Z
dc.contributor.department: DEAN'S OFFICE (DUKE-NUS MEDICAL SCHOOL)
dc.contributor.department: OPHTHALMOLOGY
dc.description.doi: 10.1016/j.ebiom.2023.104770
dc.description.sourcetitle: eBioMedicine
dc.description.volume: 95
dc.description.page: 104770
dc.published.state: Published
Appears in Collections: Staff Publications; Elements

Files in This Item:
File: Benchmarking large language models performances for myopia care a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Goog.pdf
Size: 648.63 kB
Format: Adobe PDF
Access Settings: OPEN
Version: Published