Brief Review — Answering Hospital Caregivers’ Questions at Any Time: Proof-of-Concept Study of an Artificial Intelligence–Based Chatbot in a French Hospital
AI Chatbot for Caregivers’s Questions
Answering Hospital Caregivers’ Questions at Any Time: Proof-of-Concept Study of an Artificial Intelligence–Based Chatbot in a French Hospital
AI Chatbot for Caregivers’s Questions, by Paris Saint-Joseph Hospital Group, and University of Paris Nanterre
2022 JMIR Human Factors (Sik-Ho Tsang @ Medium)
Healthcare / Medical NLP / LLM
2017 … 2024 [ChatGPT & GPT-4 on Dental Exam] [ChatGPT-3.5 on Radiation Oncology] [LLM on Clinical Text Summarization] [Extract COVID-19 Symptoms Using ChatGPT & GPT-4] [ChatGPT on Patients' Medication]
My Healthcare and Medical Related Paper Readings
==== My Other Paper Readings Are Also Over Here ====
- The ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model was used by a multiprofessional team composed of 3 hospital pharmacists, 2 members of the Innovation and Transformation Department, and the IT service provider.
- Based on an analysis of the caregivers’ needs regarding drugs and pharmacy organization, a chatbot was designed and developed.
- The tool was then evaluated before its implementation into the hospital intranet. Its relevance and conversations with testers were monitored via the IT provider’s back office.
Outline
- Needs Analysis
- Design and Development
- Implementation and Evaluation
- Discussions
1. Needs Analysis
The first step, which consisted of collecting and analyzing the needs of future users, was conducted in the hospital in February 2021.
In total, 50 questions asked by 33 nurses, 3 head nurses, and assistant nurses were collected and classified into 7 categories.
- Five different health care units were included: Oncology Outpatient Service, Oncology Inpatient Service, Weekday Inpatient Oncology Service (WIOS), Pneumo-Oncology Service, and Neonatology Service.
- Suggestions from 5 hospital pharmacists concerning the drug circuits and anticancer drugs were also considered.
- Other topics included therapeutic equivalences and anticancer drugs.
2. Design and Development
- The 50 topics identified during the needs analysis led to the writing of 32 working documents, grouped by item. From these working documents, 41 skills were created in the chatbot, which formed the database together with the corresponding resource documents.
- For each skill, a conversational tree was also built.
For example, depending on the type of question asked, the chatbot might ask the user for further information, answer directly, or provide a web link to find the correct answer on a reliable website.
- An example conversation tree is sketched below.
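To make this concrete, here is a minimal Python sketch of such a tree (illustrative only; the actual tool was built on the IT provider's platform, and the skill names, answers, and URL below are hypothetical). Each node either asks a follow-up question, answers directly, or hands out a web link, as described above:

```python
# Illustrative conversational tree; skill names, answers, and the URL
# are hypothetical, not taken from the paper.
TREE = {
    "question": "What do you want to know?",
    "branches": {
        "storage": {
            "answer": "Store between 2 and 8 °C (hypothetical answer).",
        },
        "equivalence": {
            "question": "Equivalent of which drug?",
            "branches": {
                "drug X": {"answer": "Therapeutic equivalent: drug Y (hypothetical)."},
            },
        },
        "other": {"link": "https://example.org/drug-database"},  # placeholder URL
    },
}

def walk(node: dict) -> None:
    """Ask for further information, answer directly, or provide a web link."""
    if "answer" in node:
        print(node["answer"])
    elif "link" in node:
        print("Please check:", node["link"])
    else:
        choice = input(f"{node['question']} {list(node['branches'])} > ")
        walk(node["branches"].get(choice, {"answer": "Sorry, I did not understand."}))

walk(TREE)
```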
An initial dialogue is displayed to guide users more quickly to frequently asked topics, and technical solutions called “fuzzy matching” and “autocompletion” help users formulate their requests so that the chatbot can understand them.
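The paper does not describe how these two features were implemented, so here is only a minimal sketch of the ideas using Python's standard difflib; the skill labels are hypothetical stand-ins for the chatbot's 41 skills:

```python
import difflib

# Hypothetical skill labels (the real chatbot has 41 skills).
SKILLS = [
    "drug storage conditions",
    "therapeutic equivalence",
    "pharmacy opening hours",
    "anticancer drug preparation",
]

def fuzzy_match(user_text: str, cutoff: float = 0.5) -> str | None:
    """Map a possibly misspelled request to the closest known skill."""
    hits = difflib.get_close_matches(user_text.lower(), SKILLS, n=1, cutoff=cutoff)
    return hits[0] if hits else None

def autocomplete(prefix: str) -> list[str]:
    """Suggest skills while the user is still typing."""
    return [s for s in SKILLS if s.startswith(prefix.lower())]

print(fuzzy_match("therapeutc equivalnce"))  # -> therapeutic equivalence
print(autocomplete("pha"))                   # -> ['pharmacy opening hours']
```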
3. Implementation and Evaluation
- The beta test of the chatbot tool was conducted between January and February 2022.
- In total, 20 caregivers from 4 different services (Oncology Outpatient Service, WIOS, Pneumo-Oncology Service, and the pharmacy department) tested the proof-of-concept version of the chatbot during test sessions.
A total of 14 nurses and head nurses participated, as well as 6 members of the pharmacy staff. The beta test led to 214 conversations, and testers were invited to complete a satisfaction questionnaire.
Overall, 8 of 20 (40%) testers used to call the pharmacy 1 to 5 times a week, and 11 (55%) did so more often. Only one person, an experienced head nurse, said that she never had to call the pharmacy.
The chatbot’s speed was rated 8.2 out of 10 (range 3–10), its ergonomics was rated 8.1 (range 5–10), and its appearance was rated 7.5 (range 4–10) as shown in Figure 4 above. One person did not rate the chatbot.
- Overall, 14 of 20 (70%) users were satisfied or very satisfied with the tool. In the back office, the estimated relevance (ie, conversations with positive feedback) was 76% of the interactions that were not aborted (146/214, 68% of conversations).
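To see how these figures fit together (a back-of-the-envelope check, not a number reported in the paper):

```python
total_conversations = 214   # conversations during the beta test
non_aborted = 146           # interactions that were not aborted
relevance = 0.76            # positive-feedback share of non-aborted ones

print(f"{non_aborted / total_conversations:.0%}")  # 68% of conversations
print(round(relevance * non_aborted))              # about 111 relevant conversations
```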
At the end of the beta tests, the main improvements that were suggested by the testers were related to the ergonomics (n=5, 25% of testers), as well as the implementation of the database (n=7, 35% of testers), with the inclusion of answers related to medical devices (n=4, 20% of testers), adverse drug effects (n=4, 20% of testers), and drug prices (n=2, 10% of testers), as shown in Figure 5 above.
4. Discussions
4.1. Findings & Limitations
- The authors claimed that this is the first chatbot to specifically target hospital health staff and answer questions related to drug circuits and pharmacy organization.
- The authors also concluded that the chatbot would fit well into daily medical practice and that it was positively perceived by the physicians.
- However, the main limitation of the work remained the lack of information included in the database.
- The small number of beta testers (n=20) is a limitation of the work.
- Another technical limitation of the chatbot tool is the fact that it was not possible to interface the chatbot with their business software because of an incompatible application programming interface.
- There is also resistance to change, as the use of the chatbot is not currently part of the habits and routine of the staff.
- Regular updating of the information integrated in the database is a central issue in maintaining a high level of reliability.
4.2. Future Works
- In the future, it may also be possible to expand the scope of the chatbot’s skills even further, by including, for example, a list of medical devices available in the hospital.
- Another major element in the future of the chatbot is its deployment on the 2 hospital sites.
- The development of different types of interfaces, such as a smartphone app or a speech-to-text option, is also envisaged.
- Ultimately, the chatbot may allow pharmacists to optimize their time for value-added tasks.