
Columbia Nursing Study Shows AI Can Help Remove Bias from Patient Records
Words matter, especially in health care, where the language written in patient charts influences how people are treated and how they feel about their care. Imagine reading your medical records and finding yourself described as “noncompliant” or “drug-seeking” or seeing that you “claim” to have pain. For millions of patients who now have digital access to their health records, encountering such language can damage trust and even lead them to disengage from care.
A Columbia University School of Nursing study led by Zhihong Zhang, PhD, a postdoctoral researcher, explores whether ChatGPT could help identify and rewrite such biased language in medical records.
“Previous research shows stigmatizing language in charts can lead to less aggressive pain management and more diagnostic errors,” says Zhang. “With the 21st Century Cures Act giving patients full access to their records, addressing this isn’t just about better care—it’s about preserving the patient-provider relationship.”
The study, “Toward Equitable Documentation: Evaluating ChatGPT’s Role in Identifying and Rephrasing Stigmatizing Language in Electronic Health Records,” was published on June 5, 2025, in Nursing Outlook.
Nearly Perfect Rewrites, Incomplete Detection
In their analysis of 140 notes from two major urban hospitals, the research team found an average of two instances of stigmatizing language per note. ChatGPT struggled with automatic detection, catching only about half of the problematic language overall, though it performed well in specific categories such as doubt markers. Its rewriting, however, was nearly perfect: revisions scored 2.7 to 3.0 out of 3 for rephrasing terms respectfully while preserving medical accuracy.
Broader Integration into Health Care
More work is needed before artificial intelligence can be fully integrated into health care settings. Even so, the study is timely: health systems are working to implement equitable care practices while navigating the reality that patients can now access and read everything clinicians write about them. The researchers envision ChatGPT being built into electronic health record systems to flag potentially stigmatizing phrases in real time and suggest alternatives.
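To make that envisioned workflow concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment, of how an EHR system might ask a ChatGPT-style model to flag stigmatizing phrases in a draft note and suggest neutral rewrites. The prompt wording, the gpt-4o model name, and the JSON schema are illustrative assumptions, not the study’s actual pipeline.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; the study's actual instructions are not reproduced here.
SYSTEM_PROMPT = (
    "You review clinical notes for stigmatizing language, such as "
    "'noncompliant', 'drug-seeking', or doubt markers like 'claims'. "
    'Return a JSON object of the form {"findings": [{"phrase": ..., '
    '"category": ..., "suggestion": ...}]}. Suggestions must remain '
    "medically accurate and neutral in tone."
)

def flag_stigmatizing_language(note_text: str) -> list[dict]:
    """Flag potentially stigmatizing phrases in a note and propose rewrites."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; not necessarily the model used in the study
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
    )
    return json.loads(response.choices[0].message.content)["findings"]

# Example: a note a clinician might be about to sign.
note = "Patient claims 8/10 pain and has been noncompliant with medication."
for finding in flag_stigmatizing_language(note):
    print(f"{finding['phrase']!r} ({finding['category']}) -> {finding['suggestion']!r}")
```

In a real deployment this check would run as the clinician types or before a note is signed, surfacing each flagged phrase alongside its suggested alternative rather than rewriting the record automatically.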