Dear Editor,
We are grateful for the comments provided in the letter by Daungsupawong and Wiwanitkit1, and we thank the editors for giving us the opportunity to address the criticisms raised and to broaden the discussion of our study.
At the time our study was conducted (March 2023), we chose ChatGPT 3.5 because of its easy accessibility and widespread popularity as an artificial intelligence (AI) tool2. However, given the rapid advancement of AI technology, we noted in our study that research using both ChatGPT 4 and various machine learning tools would offer a more comprehensive understanding of the usefulness of AI technologies in addressing vaccine and statin hesitancy.
Our study assessed the accuracy and explanatory quality of the responses provided by ChatGPT 3.5 to questions from individuals with vaccine and statin hesitancy. The viewpoints of these skeptical individuals themselves could be the subject of a separate study.
In our study, we demonstrated that when patients' frequently asked questions regarding vaccine and statin hesitancy were directed to ChatGPT, it did not provide responses that supported misleading information. However, as highlighted by Daungsupawong and Wiwanitkit1, ChatGPT also did not confront this misleading information. Nevertheless, our study was not intended to evaluate the capacity of AI technology to combat misinformation related to vaccines and statins. On social media in particular, the variety and volume of negative content regarding both vaccine and statin use are too vast to address individually3,4. Mentioning this negative content in AI outputs may further confuse individuals seeking information and spread skepticism among patients. Given this, determining which approach, providing only accurate information or directly confronting misinformation, would be more effective in reducing vaccine and statin hesitancy should be the subject of separate research.
Thank you for your consideration.
Sincerely,