Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.ebiom.2023.104770
Title: Benchmarking large language models’ performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard
Authors: Lim, ZW
Pushpanathan, K 
Yew, SME 
Lai, Y 
Sun, CH
Lam, JSH 
Chen, DZ
Goh, JHL
Tan, MCJ 
Sheng, B
Cheng, CY 
Koh, VTC 
Tham, YC 
Keywords: ChatGPT-3.5
ChatGPT-4.0
Chatbot
Google Bard
Large language models
Myopia
Humans
Child
Benchmarking
Search Engine
Consensus
Language
Issue Date: 1-Sep-2023
Publisher: Elsevier BV
Citation: Lim, ZW, Pushpanathan, K, Yew, SME, Lai, Y, Sun, CH, Lam, JSH, Chen, DZ, Goh, JHL, Tan, MCJ, Sheng, B, Cheng, CY, Koh, VTC, Tham, YC (2023-09-01). Benchmarking large language models’ performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard. eBioMedicine 95: 104770. ScholarBank@NUS Repository. https://doi.org/10.1016/j.ebiom.2023.104770
Abstract: Background: Large language models (LLMs) are garnering wide interest due to their human-like and contextually relevant responses. However, LLMs’ accuracy across specific medical domains has yet to be thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0, and Google Bard, in delivering accurate responses to common myopia-related queries. Methods: We curated thirty-one commonly asked myopia care-related questions, which were categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority consensus approach was used to determine the final rating for each response. ‘Good’ rated responses were further evaluated for comprehensiveness on a five-point scale, while ‘poor’ rated responses were further prompted for self-correction and then re-evaluated for accuracy. Findings: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated as ‘good’, compared to 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM-Chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum score of 5). All LLM-Chatbots also demonstrated substantial self-correction capabilities: 66.7% (2 in 3) of ChatGPT-4.0's, 40% (2 in 5) of ChatGPT-3.5's, and 60% (3 in 5) of Google Bard's responses improved after self-correction. The LLM-Chatbots performed consistently across domains, except for ‘treatment and prevention’. Even in this domain, however, ChatGPT-4.0 outperformed the others, receiving 70% ‘good’ ratings, compared to 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001). Interpretation: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs’ accuracy remain crucial. Funding: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
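Illustrative note (not part of the record, and not the authors' code): the two analysis steps named in the abstract, majority-consensus grading across three graders and a pairwise Pearson chi-squared comparison of ‘good’-rating proportions, could be sketched in Python as below. The 2x2 table layout, scipy's default Yates continuity correction, and the example grader labels are assumptions on my part, so the printed p-values will not necessarily match those reported in the paper (the authors may have structured the test differently).

    from collections import Counter
    from scipy.stats import chi2_contingency

    def consensus_rating(ratings):
        # Majority vote among the three graders' labels
        # ('poor', 'borderline', 'good'); ties would need adjudication.
        label, votes = Counter(ratings).most_common(1)[0]
        return label if votes >= 2 else "no consensus"

    # 'good' counts out of 31 questions, derived from the reported
    # percentages (80.6%, 61.3%, 54.8%):
    N = 31
    good_counts = {"ChatGPT-4.0": 25, "ChatGPT-3.5": 19, "Google Bard": 17}

    for rival in ("ChatGPT-3.5", "Google Bard"):
        # 2x2 contingency table: rows = model, columns = good / not good.
        table = [
            [good_counts["ChatGPT-4.0"], N - good_counts["ChatGPT-4.0"]],
            [good_counts[rival], N - good_counts[rival]],
        ]
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"ChatGPT-4.0 vs {rival}: chi2 = {chi2:.2f}, p = {p:.4f}")

    print(consensus_rating(["good", "good", "borderline"]))  # -> good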
Source Title: eBioMedicine
URI: https://scholarbank.nus.edu.sg/handle/10635/246018
ISSN: 2352-3964
DOI: 10.1016/j.ebiom.2023.104770
Appears in Collections: Staff Publications
Elements

Files in This Item:
File: Benchmarking large language models performances for myopia care a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Goog.pdf
Size: 648.63 kB
Format: Adobe PDF
Access Settings: OPEN
Version: Published