Fifth Annual Simulation Summit Examines Potential, Perils of AI
Experts on artificial intelligence (AI) in education from nursing, medicine, engineering, and patient care, among other disciplines, gathered on October 27 for the 2023 Helene Fuld Health Trust Innovations in Simulation Summit, held at The Forum at Columbia University.
The event is the fifth and last in a series funded by the Helene Fuld Health Trust, which also supports Columbia Nursing’s state-of-the-art simulation center. It featured plenary sessions and presentations from companies using AI to improve health care. Speakers agreed that AI has huge potential as a tool in nursing education, if faculty provide their students with the right guidance, and that keeping humans in the AI loop will be essential to address bias and avoid disaster.
AI is a “hot topic,” event organizer Kellie Bryant, DNP, assistant dean of clinical affairs and simulation, noted in her introduction. “AI helps improve efficiency, it helps reduce costs and increases precision, it automates time-consuming tasks,” she said. Nevertheless, she added, AI brings multiple challenges, which the plenaries would explore from the perspectives of nursing instruction, patient care, biomedical informatics, and more.
Companies presenting at the summit included Chiefy, which is developing an app that facilitates a pre-procedure digital huddle for surgical teams; Mytonomy, which produces patient education materials and is testing avatars that can deliver information in different languages; and Epistemic, which uses AI to help health system leaders plan for future resource needs.
ChatGPT as classroom guide
Laura Gonzalez, PhD, vice president of healthcare innovation at Sentinel U and immediate past president of the International Nursing Association for Clinical Simulation and Learning, presented “Artificial Intelligence and Chat GPT in Nursing Education.”
“We shouldn’t be afraid of it, we should embrace it, and I know that’s really hard for us right now especially in academia because there’s so many uncertainties,” Gonzalez said.
As a 24/7, always-accessible source of information, ChatGPT can be a useful classroom tool, she added, but it’s imperative for educators to help their students understand its limitations and learn with the technology, rather than from it.
“AI is not an expert and is only as accurate as the information it consumes,” Gonzalez said. “We as educators or as scientists, as researchers, have to have a good index of suspicion. And this is where our students and our learners are going to struggle.”
Ethical guardrails are also needed to keep AI in line and address the bias that can arise with its use, Gonzalez warned. “There’s a risk of reproducing those real-world biases, fueling those divisions and threatening, honestly, fundamental rights and freedoms.”
She cited “10 Core Principles of Human-Centered AI,” a UNESCO white paper released this year, as a good resource for ethical AI development.
Ethical machine learning
In her talk, “Towards Ethical and Safe Machine Learning for Health and Medicine,” Shalmali Joshi, PhD, an assistant professor of biomedical informatics at Columbia University, described the challenge of creating and maintaining learning health systems that deliver fair and equitable treatment, given that bias can be introduced at every step of the process.
According to Joshi, an electrical engineer and computer scientist, a safe learning health system includes the following components:
- Robust learning and evaluation
- Reliable diagnostics and explanations
- Safety mechanisms and ethics
“The work that you’ll see is all the algorithmical stuff I do to make these technologies a bit safer and better and more reliable,” Joshi explained. Her work involves testing and “poking at” machine learning models over time, identifying bias, and tracking down the source.
As an example of what can go wrong, Joshi cited the Epic Sepsis Model in use at hundreds of U.S. hospitals, which was found to miss most cases of sepsis while causing alert fatigue among clinicians. “We need to be very careful about how we are using such models,” she said.
ChatGPT as role player
Maxim Topaz, PhD, Elizabeth Standish Gill Associate Professor at Columbia Nursing, discussed how generative AI could revolutionize health care simulation education in his talk, “Current AI Opportunities and Challenges.”
Generative AI, Topaz explained, is a subfield of AI focused on generating new data on the spot, and it could be used to create realistic and interactive scenarios in health care simulations. The most important skill in using generative AI in simulation is “prompt engineering,” according to Topaz: tailoring input to guide the AI’s response and bridge the gap between the user’s intention and the AI’s understanding. Best practices include clear and specific prompts, iterative refinement based on feedback, and consideration of context and desired outcomes, he added.
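In practice, those best practices amount to structuring a prompt so the model knows who to be, what situation it is in, and what a good response looks like. A minimal sketch (not from the talk; the scenario details and function name are hypothetical illustrations) might assemble a role-play prompt like this:

```python
# Illustrative sketch of prompt engineering for a simulated-patient
# role play, following the practices described above: clear and
# specific instructions, explicit context, and a stated desired outcome.
# The scenario and function name below are hypothetical examples.

def build_simulation_prompt(role: str, scenario: str, outcome: str) -> str:
    """Assemble a system prompt that tells the model who to be,
    what situation it is in, and how it should respond."""
    return (
        f"You are role-playing as {role}. "
        f"Scenario: {scenario} "
        f"Stay in character and {outcome}"
    )

prompt = build_simulation_prompt(
    role="a 68-year-old patient recovering from hip surgery",
    scenario="A nursing student is assessing your pain level.",
    outcome="answer only with what the patient would plausibly say.",
)
print(prompt)
```

Iterative refinement, in this framing, means adjusting the role, scenario, or outcome wording after observing the model's responses, rather than rewriting the prompt from scratch each time.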
Using his smartphone, Topaz demonstrated how, when directed by a detailed prompt, ChatGPT can respond as a convincing simulated patient; as a simulated nurse caring for a curmudgeonly patient, Max; and as an experienced nursing instructor evaluating the simulated nurse’s performance.
“That was just out of the box,” he said. “Imagine what we can do if we fine-tune that, make it better.”
Plenaries by Kenrick Cato, PhD, professor of informatics at the Children’s Hospital of Philadelphia, on “Leveraging AI to Characterize Clinical Workflow for Optimization and Education,” and Scott Pappada, PhD, associate professor and director of research at the University of Toledo College of Medicine and Life Sciences, on “A Novel Multimodal Assessment Platform (PREPARE),” concluded the event.
“While it is with a heavy heart we conclude the Innovations in Simulation Summit after five enriching years, I remain optimistic about finding a path forward for this esteemed annual event,” Bryant said. “I am grateful to the global thought leaders in the field of simulation who have shared their insights at the summit and look forward to continuing this important dialogue in years to come.”