A new study suggests that the ChatGPT chatbot may serve as an effective tool to save time by addressing patient inquiries directed to the urologist’s office. The study was published in the September issue of Urology Practice®, the Official Journal of the American Urological Association (AUA), which is part of the Lippincott portfolio published by Wolters Kluwer (1).
The artificial intelligence (AI) tool produced “acceptable” responses to roughly half of a sample of real patient inquiries, according to recent research led by Michael Scott, MD, a urologist at Stanford University School of Medicine. “Generative AI technologies may play a valuable role in providing prompt, accurate responses to routine patient questions – potentially alleviating patients’ concerns while freeing up clinic time and resources to address other complex tasks,” Dr. Scott comments.
Assessment of Urology-Related Responses Generated by ChatGPT
ChatGPT is an innovative large language model (LLM) that has sparked interest across a wide range of settings, including health and medicine. In several recent studies, ChatGPT has performed well in responding to various types of medical questions, although its performance in urology has been less thoroughly evaluated.
Modern electronic health record (EHR) systems enable patients to send medical questions directly to their doctors. “This shift has been associated with an increased time burden of EHR use for physicians with a large portion of this attributed to patient in-basket messages,” the researchers write. One study estimates that each in-basket message adds more than two minutes of time spent in the EHR.
Dr. Scott and colleagues collected 100 electronic patient messages requesting medical advice from a urologist at a men’s health clinic. The messages were categorized by content type and difficulty and then entered into ChatGPT. Five experienced urologists graded each AI-generated response for accuracy, completeness, helpfulness, and intelligibility. Raters also indicated whether they would send each response to a patient.
ChatGPT Responses Show the Potential to Improve Clinical Efficiency in Patient Communication
The ChatGPT-generated responses were judged to be accurate, with an average score of 4.0 on a five-point scale, and intelligible, with an average score of 4.7. Ratings of completeness and helpfulness were lower, but the responses showed little or no potential for harm. Scores were comparable across question content types (symptoms, postoperative concerns, etc.).
“Overall, 47% of responses were deemed acceptable to send to patients,” the researchers write. Questions rated as “easy” had a higher rate of acceptable responses: 56%, compared to 34% for “difficult” questions.
“These results show promise for the utilization of generative AI technology to help improve clinical efficiency,” Dr. Scott and coauthors write. The findings “suggest the feasibility of integrating this new technology into clinical care to improve efficiency while maintaining quality of patient communication.”
The researchers note some potential drawbacks of ChatGPT-generated responses to patient questions: “ChatGPT’s model is trained on information from the Internet in general, as opposed to validated medical sources,” with a “risk of generating inaccurate or misleading responses.” The authors also highlight the need for safeguards to ensure patient privacy.
“While our study provides an interesting starting point, more research will be needed to validate the use of LLMs to respond to patient questions, in urology as well as other specialties,” Dr. Scott comments. “This will be a potentially valuable healthcare application, particularly with continued advances in AI technology.”
Reference:
- Assessing Artificial Intelligence-Generated Responses to Urology Patient In-Basket Messages – (https://www.auajournals.org/doi/10.1097/UPJ.0000000000000637)
Source: EurekAlert