Is ChatGPT an Accurate and Reliable Source of Information for Patients with Vaccine and Statin Hesitancy?
Original Article
March 2024


Medeni Med J 2024;39(1):1-7
1. Istanbul Goztepe Prof. Dr. Suleyman Yalcin City Hospital, Clinic of Internal Medicine, Istanbul, Turkey
2. Istanbul Goztepe Prof. Dr. Suleyman Yalcin City Hospital, Clinic of Microbiology, Istanbul, Turkey
Received Date: 19.09.2023
Accepted Date: 09.01.2024
Publish Date: 21.03.2024

ABSTRACT

Objective:

Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI) language model trained to respond to questions across a wide range of topics. Our aim was to determine whether ChatGPT would be a beneficial source of information for patients who are hesitant about vaccines and statins.

Methods:

This cross-sectional and observational study was conducted from March 2 to March 30, 2023, using OpenAI ChatGPT-3.5. ChatGPT provided responses to 7 questions related to vaccine and statin hesitancy. The same questions were also directed at physicians. Both the answers from ChatGPT and those from the physicians were assessed for accuracy, clarity, and conciseness by experts in cardiology, internal medicine, and microbiology with a minimum of 30 years of professional experience. Responses were rated on a scale of 0-4, and ChatGPT's average score was compared with that of the physicians using the Mann-Whitney U test.

Results:

The mean scores of ChatGPT (3.78±0.36) and physicians (3.65±0.57) were similar (Mann-Whitney U test, p=0.33). The mean scores of ChatGPT were 3.85±0.34 for vaccination and 3.68±0.35 for statin use. The mean scores of physicians were 3.73±0.51 for vaccination and 3.58±0.61 for statin use. There was no statistically significant difference between the mean scores of ChatGPT and physicians for either vaccination or statin use (p=0.403 and p=0.678, respectively). ChatGPT did not consider sources of conspiratorial information on vaccines and statins.

Conclusions:

This study suggests that ChatGPT can be a valuable source of information for guiding patients with vaccine and statin hesitancy.

Keywords:
Primary prevention, artificial intelligence, medication hesitancy

INTRODUCTION

Primary prevention is an essential strategy for preventing the onset of diseases or conditions. Vaccination is one of the most effective primary prevention strategies. It has played a critical role in reducing the incidence of several infectious diseases and some cancers1,2. While drugs are predominantly employed for therapeutic purposes after the onset of a disease, their role in primary prevention has proven to be highly beneficial. Statins have proven effective in both primary and secondary prevention, mitigating the onset and progression of cardiovascular diseases (CVDs)3. A meta-analysis of 65,000 patients showed that statins have a clear role in the primary prevention of CVD mortality and major events4. Primary prevention strategies have also been shown to reduce healthcare expenditure5.

Despite the significant benefits of primary prevention, some individuals and groups are skeptical about its safety and efficacy. Understanding the reasons for this skepticism is important for developing strategies to increase vaccination rates and improve the uptake of preventive medicines.

One of the main causes of skepticism is the spread of misinformation and conspiracy theories about vaccines and preventive medicine6,7. Social media platforms can facilitate the spread of misinformation and create echo chambers where individuals are only exposed to information that confirms their existing beliefs.

There is a need for online platforms where patients can receive accurate, clear, and sufficient information on health-related issues. Chat Generative Pre-trained Transformer (ChatGPT) is a large language model developed by OpenAI that can generate human-like responses to various questions and topics. It is trained on large amounts of data and uses advanced machine learning techniques to generate responses that are often highly accurate and informative. As an artificial intelligence (AI) language model, ChatGPT can be a valuable source of information on health-related topics. However, ChatGPT’s responses are based on the information it has been trained on and may not always be up-to-date or fully accurate. As there is limited research in this field, it is crucial to assess the reliability of ChatGPT as a source of information for patients. This study was designed to elucidate ChatGPT’s success in responding to frequently asked questions by patients about vaccines and statins in terms of accuracy, clarity, and conciseness.

MATERIALS and METHODS

Study Design

This cross-sectional and observational study was conducted between March 2, 2023 and March 30, 2023. The study was approved by the Istanbul Medeniyet University Goztepe Training and Research Hospital Ethics Committee (decision no: 2023/0910, date: 13.12.2023). Written consent was obtained from all volunteers.

We queried ChatGPT (OpenAI GPT-3.5; OpenAI, L.L.C., San Francisco, CA, USA) on March 2, 2023. The open-ended questions were grounded in the clinical expertise of the investigators and prior research on vaccine and statin treatment hesitancy8,9. There were four questions about vaccine hesitancy and three about statin hesitancy (Table 1). A single user posed the questions to ChatGPT, and the "regenerate response" button was used to obtain two different outputs from ChatGPT. The same set of questions was also posed to ten internists (with 5 to 25 years of professional experience) and ten microbiologists (with 5 to 30 years of professional experience), and their responses were recorded. Both the answers from ChatGPT and those from the physicians were assessed for accuracy (scientific correctness of content), clarity (ability to be understood by patients), and conciseness (degree to which all the available information is conveyed) by experts in cardiology, internal medicine, and microbiology with a minimum of 30 years of professional experience. Responses were rated on a scale of 0-4, with 0 indicating a completely incorrect, unclear, or unconcise response and 4 indicating a completely accurate, clear, and concise response. The average of the three experts' ratings was used as the final score for analysis.
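The scoring step reduces to averaging the three expert ratings for each response. The following is a minimal illustrative sketch in R (the software reported below); the data frame and the column names rater1, rater2, and rater3 are hypothetical placeholders, not the study's actual dataset.

# Minimal sketch of the rating aggregation (hypothetical column names).
# Each row is one answer (from ChatGPT or a physician) to one question;
# rater1, rater2, rater3 hold the three experts' 0-4 ratings.
scores <- data.frame(
  source = c("ChatGPT", "Physician"),
  rater1 = c(4, 4),
  rater2 = c(4, 3),
  rater3 = c(3, 4)
)
# Final score per response = mean of the three expert ratings
scores$final <- rowMeans(scores[, c("rater1", "rater2", "rater3")])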

Statistical Analysis

Descriptive statistics are expressed as mean and standard deviation. Normal distribution was tested using the Kolmogorov-Smirnov test. Because the data were not normally distributed, a non-parametric test was used: the total scores of the answers were compared using the Mann-Whitney U test. A p-value <0.05 was considered statistically significant. The sample size was determined according to the minimum ratio of participants to variables required in multivariate methods, which should be at least 5 per variable10. With a total of 3 variables in this study, the minimum sample size requirement was met. SPSS (version 23) and R-4.2.2 for Windows were used for the calculations.
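As an illustration of this analysis pipeline, the R sketch below applies the same two tests to hypothetical score vectors; chatgpt_scores and physician_scores are invented placeholders, not the study data.

# Hypothetical per-question final scores (placeholders only)
chatgpt_scores   <- c(4.00, 3.67, 3.33, 3.89, 3.78, 3.95, 3.85)
physician_scores <- c(3.72, 3.50, 3.00, 3.98, 3.60, 3.88, 3.81)

# Kolmogorov-Smirnov test against a normal distribution with sample mean and SD
ks.test(chatgpt_scores, "pnorm", mean(chatgpt_scores), sd(chatgpt_scores))

# Mann-Whitney U test (two-sample wilcox.test in R); exact = FALSE requests
# the normal approximation for the p-value
wilcox.test(chatgpt_scores, physician_scores, exact = FALSE)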

RESULTS

Table 2 presents a comparison between the scores of ChatGPT and physicians. The final mean scores for ChatGPT and physicians were similar (3.78±0.36 and 3.65±0.57, respectively, Mann-Whitney U test p=0.33). The mean scores for ChatGPT were 3.85±0.34 and 3.68±0.35 for vaccination and statin use, respectively, whereas for physicians, they were 3.73±0.51 and 3.58±0.61 for vaccination and statin use, respectively. The mean scores of ChatGPT and physicians did not differ significantly in either subject (Mann-Whitney U test p=0.403 for vaccination, p=0.678 for statin use).

ChatGPT did not consider the sources of conspiratorial information on vaccines and statins. It received a high score for clarity and conciseness (with a mean score of 3.86±0.29 for both), but its accuracy was relatively lower (with a mean score of 3.62±0.44). Table 3 presents instances where ChatGPT provided incorrect or inadequate information, which may lead patients to make erroneous decisions. The statement that the diet has a greater impact on reducing low-density lipoprotein cholesterol (LDL-C) than other types of cholesterol is incorrect. In addition, there is no evidence from randomized controlled studies to suggest that the coronavirus disease-2019 (COVID-19) vaccine does not cause blood clots. Furthermore, failing to mention immunosuppressed children for whom live vaccines are not recommended constitutes incomplete information.

DISCUSSION

In our study, ChatGPT was shown to provide accurate, informative, and concise answers to patients’ frequently asked questions about vaccines and statins, which are two primary preventive medications. When the same questions were posed to medical experts in the field, the accuracy, clarity, and conciseness of the answers were found to be comparable to those provided by ChatGPT.

The internet has become an important source of information for people with health concerns11. However, studies evaluating social media content related to health issues have shown that the information is of variable quality and that inaccurate or negative content predominates12. Scullard et al.13 showed that when parents researched online whether there was a link between the measles, mumps, and rubella (MMR) vaccine and autism, only half of the information sources correctly stated that there was no link between MMR vaccine and autism. Given this situation, it is clear that correct, explanatory, and reliable sources should be available to people seeking information on health-related issues.

AI is a rapidly advancing technology that has the potential to revolutionize many areas, including healthcare. With the rise of digital health records and the vast amounts of data they generate, AI has become a powerful tool for healthcare providers to analyze and interpret patient information14,15. Patients are also starting to use AI tools to manage their own health concerns. Chatbots and other AI-powered tools can provide patients with personalized advice and support, thereby helping them make better decisions about their health16,17. This can lead to better patient outcomes and a more proactive approach to healthcare. However, as with any technology, there are also challenges associated with the use of AI in healthcare. One of the biggest challenges is ensuring the accuracy and reliability of AI algorithms. While AI can analyze vast amounts of data, it can also be susceptible to bias and other errors if the data it is trained on is not representative of the population as a whole.

ChatGPT is an AI chatbot launched in November 2022, and studies have explored its potential use in various fields, including healthcare. Promising results have been obtained from studies examining whether ChatGPT could be useful in medical education18. Similarly, studies have examined whether ChatGPT can help doctors make diagnoses. In a study by Hirosawa et al.19, ChatGPT was asked to list possible diagnoses based on patients' common complaints; 93.3% of the initial diagnoses were correct, but its ranking of diagnoses was not sufficiently successful. It is also important for health literacy and medication adherence that patients obtain useful and accurate information when consulting ChatGPT on health-related issues, yet there are few studies in this area. In the study by Johnson et al.20, ChatGPT provided 96.9% correct answers to frequently asked questions about cancer myths and misconceptions. In a study related to COVID-19, ChatGPT was shown to provide clear and concise answers to patients' frequently asked questions about the COVID-19 virus and vaccine21.

In our study, ChatGPT's responses regarding the use of statins and vaccines for primary prevention were mostly accurate and understandable and did not consider conspiratorial sources of information, which is consistent with the results of the recent studies discussed above. However, it should be noted that ChatGPT is only a powerful language bot that generates text through linguistic connections. Therefore, if a question is not phrased correctly, the possibility of a misleading answer increases. Although our questions expressed patients' concerns, we tried to phrase them as accurately as possible. Nevertheless, when we inquired about the effectiveness of diet in reducing LDL-C, ChatGPT responded that diet was effective in lowering cholesterol, especially LDL-C, because the emphasis in the question was on LDL-C. Similarly, when asked about triglycerides, ChatGPT stated that diet had a significant impact on reducing triglycerides. Another study, by Huh22, found that ChatGPT struggled to comprehend the logic of multiple-choice parasitology questions and marked multiple options as correct, demonstrating inferior performance compared with medical students. These findings indicate that ChatGPT can sometimes provide misleading information on healthcare-related matters. In addition, although ChatGPT can analyze vast amounts of data, it may be susceptible to bias and other errors if the data it is trained on are not representative of the overall population.

Nevertheless, the goal of this study was not to replace the doctor-patient relationship with ChatGPT, but rather to evaluate whether it could be a helpful supplementary tool within that relationship. The study's strength is that it is the first to assess ChatGPT conversations in the context of primary prevention. However, several limitations must be noted. First, the evaluation of ChatGPT's responses was subjective, despite being performed by field experts. Second, best practices for patient care may differ depending on the region and healthcare environment. Lastly, this study used GPT-3.5; with the advent of GPT-4, the error rate is likely to decrease with each subsequent model, so our results pertain to the evaluation of a single model rather than a comprehensive assessment of AI technologies.

CONCLUSION

ChatGPT shows promise in boosting patient confidence in primary prevention. While AI provides valuable information on vaccines and statins, it is crucial to remain vigilant about AI’s challenges, including potential algorithmic bias due to data imperfections.

Integrating AI responsibly can benefit patients and healthcare providers. Because ChatGPT is the first of many models that will undoubtedly improve rapidly, further studies are needed.

References

1. Nandi A, Shet A. Why vaccines matter: understanding the broader health, economic, and child development benefits of routine vaccination. Hum Vaccin Immunother. 2020;16:1900-4.
2. Athanasiou A, Bowden S, Paraskevaidi M, et al. HPV vaccination and cancer prevention. Best Pract Res Clin Obstet Gynaecol. 2020;65:109-24.
3. Mora S, Glynn RJ, Hsia J, MacFadyen JG, Genest J, Ridker PM. Statins for the primary prevention of cardiovascular events in women with elevated high-sensitivity C-reactive protein or dyslipidemia: results from the Justification for the Use of Statins in Prevention: An Intervention Trial Evaluating Rosuvastatin (JUPITER) and meta-analysis of women from primary prevention trials. Circulation. 2010;121:1069-77.
4. Mills EJ, Rachlis B, Wu P, Devereaux PJ, Arora P, Perri D. Primary prevention of cardiovascular mortality and events with statin treatments: a network meta-analysis involving more than 65,000 patients. J Am Coll Cardiol. 2008;52:1769-81.
5. Gatwood J, Bailey JE. Improving medication adherence in hypercholesterolemia: challenges and solutions. Vasc Health Risk Manag. 2014;10:615-25.
6. Gangarosa EJ, Galazka AM, Wolfe CR, et al. Impact of anti-vaccine movements on pertussis control: the untold story. Lancet. 1998;351:356-61.
7. Mason BW, Donnelly PD. Impact of a local newspaper campaign on the uptake of the measles mumps and rubella vaccine. J Epidemiol Community Health. 2000;54:473-4.
8. Dubé E, Laberge C, Guay M, Bramadat P, Roy R, Bettinger J. Vaccine hesitancy: an overview. Hum Vaccin Immunother. 2013;9:1763-73.
9. Lansberg P, Lee A, Lee ZV, Subramaniam K, Setia S. Nonadherence to statins: individualized intervention strategies outside the pill box. Vasc Health Risk Manag. 2018;14:91-102.
10. Althubaiti A. Sample size determination: a practical guide for health researchers. J Gen Fam Med. 2022;24:72-8.
11. Bujnowska-Fedak MM, Waligóra J, Mastalerz-Migas A. The Internet as a source of health information and services. Adv Exp Med Biol. 2019;1211:1-16.
12. Zimmerman RK, Wolfe RM, Fox DE, et al. Vaccine criticism on the World Wide Web. J Med Internet Res. 2005;7:e17.
13. Scullard P, Peacock C, Davies P. Googling children's health: reliability of medical advice on the internet. Arch Dis Child. 2010;95:580-2.
14. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med. 2019;25:24-9.
15. Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med. 2016;375:1216-9.
16. Krebs P, Duncan DT. Health app use among US mobile phone owners: a national survey. JMIR Mhealth Uhealth. 2015;3:e101.
17. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull. 2021;139:4-15.
18. Gilson A, Safranek CW, Huang T, et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312.
19. Hirosawa T, Harada Y, Yokose M, Sakamoto T, Kawamura R, Shimizu T. Diagnostic accuracy of differential-diagnosis lists generated by Generative Pretrained Transformer 3 chatbot for clinical vignettes with common chief complaints: a pilot study. Int J Environ Res Public Health. 2023;20:3378.
20. Johnson SB, King AJ, Warner EL, Aneja S, Kann BH, Bylund CL. Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information. JNCI Cancer Spectr. 2023;7:pkad015.
21. Sallam M, Salim NA, Al-Tammemi AB, et al. ChatGPT output regarding compulsory vaccination and COVID-19 vaccine conspiracy: a descriptive study at the outset of a paradigm shift in online search for information. Cureus. 2023;15:e35029.
22. Huh S. Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. J Educ Eval Health Prof. 2023;20:1.