Assessing the accuracy of ChatGPT references in head and neck and ENT disciplines / Frosolini, A.; Franz, L.; Benedetti, S.; Vaira, L. A.; de Filippis, C.; Gennaro, P.; Marioni, G.; Gabriele, G. - In: EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY. - ISSN 0937-4477. - 280:11(2023), pp. 5129-5133. [10.1007/s00405-023-08205-4]
Assessing the accuracy of ChatGPT references in head and neck and ENT disciplines
Vaira L. A.
2023-01-01
Abstract
Purpose: ChatGPT has gained popularity as a web application since its release in 2022. While the potential of artificial intelligence (AI) systems in scientific writing is widely discussed, their reliability in reviewing the literature and providing accurate references remains unexplored. This study examines the reliability of references generated by ChatGPT language models in the Head and Neck field. Methods: Twenty clinical questions spanning different Head and Neck disciplines were used to prompt ChatGPT versions 3.5 and 4.0 to produce texts on the assigned topics. The generated references were categorized as “true,” “erroneous,” or “inexistent” based on their congruence with existing records in scientific databases. Results: ChatGPT 4.0 outperformed version 3.5 in terms of reference reliability; however, both versions showed a tendency to provide erroneous or inexistent references. Conclusions: Addressing this challenge is crucial to maintaining the reliability of the scientific literature. Journals and institutions should establish strategies and good-practice principles for the evolving landscape of AI-assisted scientific writing.