Transforming Wisdom into Data

November 16, 2022

The headlines about artificial intelligence, or AI, have become downright scary. AI technology, which is designed to mimic humans’ ability to solve complex problems using intuition and understanding, is getting frighteningly good at tasks once considered exclusive to Homo sapiens. AI programs can now create photorealistic artworks and write fluent book reviews. They can hold conversations so convincing that a Google engineer recently concluded the company’s language-modeling system had become sentient. Although most analysts rejected his claim (and he was fired after going public with it), such advances make many skilled workers worry that the bots are coming for their jobs.

Perhaps even more unnerving is how bad these programs can be at overcoming failings ingrained in their designers or data sources. One widely reported flaw is the tendency of AI tools to reflect a given culture’s common biases. Facial recognition programs often mistakenly identify people of color as criminals, for example. Credit-rating algorithms are all too likely to discriminate against Black loan applicants. Chatbots trained by interacting with thousands of random internet users can end up spewing racist, sexist, and anti-Semitic untruths.

What gets less media attention, however, is some more encouraging news: AI can also be used to enhance the work of skilled professionals—including to fight bias in all its forms. At Columbia Nursing, researchers are pioneering such applications, harnessing the power of big data to improve outcomes for patients of every race, ethnicity, and gender.

“At its best, AI enables nurses to apply their knowledge and experience at the highest possible level,” says Sarah Rossetti, PhD ’09, an associate professor of bioinformatics and nursing. “It has the potential to transform the way we deliver care.”

Detecting Hidden Risks

That potential can be glimpsed in a tool that Rossetti herself has spent more than a decade developing—a clinical decision support system known as CONCERN (Communicating Narrative Concerns Entered by RNs). She conceived of it while working as a nurse informaticist at Brigham and Women’s Hospital in Boston, after noticing that nurses often changed their documentation habits when they were concerned about a patient’s status, long before that deterioration became evident on vital sign monitors. Rossetti hoped to identify early signals of patients’ impending decline by examining nurses’ documentation patterns in electronic health records (EHRs) and then create a predictive model that could form the basis for an AI-driven alert system.

In 2011, she enlisted a friend from her Columbia Nursing days—Kenrick Cato, PhD ’14—as co-principal investigator on a multisite study funded by the National Institute of Nursing Research. Analyzing EHRs for 15,000 acute care patients and 145 cardiac arrest patients over 15 months, the team found that the records for those who died had significantly more optional comments and vital sign checks by nurses over the 48 hours just before their death than did the records for those who survived—the first time such an association had been shown.
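To make that comparison concrete, here is a minimal Python sketch of the kind of analysis the study describes: counting optional comments and vital sign checks per patient in a 48-hour window before each patient’s outcome and comparing the two groups. The column names and data layout are illustrative assumptions, not the study’s actual code.

```python
# Illustrative sketch (not the study's code): compare how often nurses added
# optional comments or vital-sign checks in the 48 hours before each patient's
# outcome time, grouped by whether the patient survived.
# Assumed columns in `events`:   patient_id, event_type, timestamp
# Assumed columns in `patients`: patient_id, outcome_time, survived (bool)
import pandas as pd

def documentation_rate(events: pd.DataFrame, patients: pd.DataFrame) -> pd.Series:
    merged = events.merge(patients, on="patient_id")
    in_window = merged[
        merged["timestamp"].between(
            merged["outcome_time"] - pd.Timedelta(hours=48),
            merged["outcome_time"],
        )
        & merged["event_type"].isin(["optional_comment", "vital_sign_check"])
    ]
    counts = in_window.groupby("patient_id").size()
    # Patients with no qualifying events in the window still count, with zero.
    counts = counts.reindex(patients["patient_id"], fill_value=0)
    # Average documentation count per patient, split by survival status.
    return counts.groupby(patients.set_index("patient_id")["survived"]).mean()
```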

The researchers used AI techniques such as natural language processing (which enables computers to understand text and spoken words) and machine learning (in which computers teach themselves to detect patterns in huge masses of data) to determine when early intervention was warranted. After consulting with clinicians on its design, the team then devised an app that displays a colored icon in a patient’s EHR: green for low risk of deterioration in the next 12 hours, yellow for increased risk, and red for high risk. Clinicians can click on the icon to see what factors determined the score and track the individual’s status over time. “Our goal is to focus attention on patients in the yellow zone, who might not be on the care team’s radar yet,” explains Rossetti, who returned to Columbia Nursing as a faculty member in 2018.
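The last step the article describes—turning a predicted 12-hour deterioration risk into a three-color icon—might look something like the sketch below. The thresholds, field names, and example values are illustrative assumptions, not the CONCERN model’s actual cut points.

```python
# Illustrative sketch: map a model's predicted 12-hour deterioration risk to
# the three-level icon described in the article. The thresholds are made-up
# placeholders, not the CONCERN model's published parameters.
from dataclasses import dataclass

@dataclass
class ConcernScore:
    risk: float                       # predicted probability of deterioration in 12 hours
    contributing_factors: list[str]   # shown when a clinician clicks the icon

def icon_color(score: ConcernScore,
               yellow_threshold: float = 0.2,
               red_threshold: float = 0.5) -> str:
    """Return 'green', 'yellow', or 'red' for display in the patient's EHR."""
    if score.risk >= red_threshold:
        return "red"      # high risk: escalate immediately
    if score.risk >= yellow_threshold:
        return "yellow"   # increased risk: may not be on the team's radar yet
    return "green"        # low risk

# Example: a patient whose documentation pattern pushed the estimate upward.
patient = ConcernScore(
    risk=0.34,
    contributing_factors=["more frequent vitals checks", "new optional nursing comment"],
)
print(icon_color(patient))  # -> "yellow"
```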

She and Cato—who is now an assistant professor of nursing at the school—launched a clinical trial of CONCERN at Brigham and Women’s in early 2020; a second trial, delayed by the COVID-19 pandemic, began at NewYork-Presbyterian Hospital in August 2021. (Meanwhile, the team applied AI techniques they’d developed for CONCERN to help plan resource allocation for the COVID patients then flooding the hospital.) The results of those studies won’t be published until later this year, but feedback from clinicians at both sites has been overwhelmingly positive.

In May, the team received a grant from the American Nurses Foundation’s Reimagining Nursing Initiative to launch the next phase of their study: a trial of an implementation tool kit designed to support large-scale adoption of CONCERN. The researchers are partnering on that effort with three hospital systems—Mass General Brigham in Massachusetts, Vanderbilt University Medical Center in Tennessee, and Washington University School of Medicine/Barnes-Jewish Hospital in Missouri—to test the effectiveness of the tool kit in a variety of settings.

Far from threatening nurses’ jobs, Rossetti suggests, this AI-based app embodies the wisdom that makes them indispensable. “CONCERN shows what nurses already know: Our risk identification isn’t just a clinical hunch,” she says. “We’re demonstrating that nurses have objective, expert knowledge that drives their practice and brings tremendous value to the entire care team.”

Improving Home Care

Columbia Nursing researchers are also using AI to improve outcomes for patients whose care takes place at home. Max Topaz, PhD, the Elizabeth Standish Gill Associate Professor of Nursing, is leading two such studies. The first, using an approach resembling CONCERN’s, aims to prevent emergency department visits and hospitalizations among home care patients by analyzing nursing notes for early signs of a patient’s deterioration. The challenges of finding such signs in a home setting are very different, however. “In the hospital, clinicians are present 24/7,” Topaz explains. “In home care, the interactions are a lot less frequent. A nurse might come to see a patient once every three or four days.” To detect danger signals under these conditions, an AI tool can’t rely on the frequency of nursing comments or vital sign checks, as the CONCERN app does; instead, it must seek out specific types of comments that indicate increased risk.

Topaz does such sleuthing using natural language processing software he invented, dubbed NimbleMiner, which can flag bits of meaningful data in hundreds of thousands of patient records at a time. Over the past two years, he and his team have used the program in conjunction with other AI techniques to search for seven factors commonly associated with imminent deterioration—anxiety, cognitive disturbance, depressed mood, fatigue, sleep disturbance, pain, and impaired well-being—in 2.5 million home care records from the Visiting Nurse Service of New York. The system has proved capable of identifying those factors as accurately as human clinicians. The team is now working to design a tool that can alert clinicians in a user-friendly way to which patients are at heightened risk.
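NimbleMiner itself is not shown here, but the basic idea of flagging risk-related concepts in free-text notes can be sketched with simple vocabulary matching. The tiny phrase lists below are illustrative stand-ins for the much richer lexicons such a tool would learn.

```python
# Illustrative sketch: flag mentions of the seven deterioration-related factors
# in a free-text home care note. Real tools such as NimbleMiner expand these
# vocabularies automatically; the phrase lists here are small, made-up examples.
import re

FACTOR_VOCAB = {
    "anxiety": ["anxious", "panicky", "worried"],
    "cognitive disturbance": ["confused", "disoriented"],
    "depressed mood": ["tearful", "hopeless", "depressed"],
    "fatigue": ["exhausted", "fatigued", "no energy"],
    "sleep disturbance": ["can't sleep", "insomnia", "waking at night"],
    "pain": ["pain", "aching", "sore"],
    "impaired well-being": ["feels unwell", "declining", "not himself"],
}

def flag_factors(note_text: str) -> set[str]:
    """Return the set of factors whose vocabulary appears in the note."""
    text = note_text.lower()
    return {
        factor
        for factor, phrases in FACTOR_VOCAB.items()
        if any(re.search(r"\b" + re.escape(p) + r"\b", text) for p in phrases)
    }

note = "Pt reports she can't sleep, seems confused about meds, c/o aching knees."
print(flag_factors(note))  # {'sleep disturbance', 'cognitive disturbance', 'pain'}
```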

Topaz’s second home care study aims to prevent rehospitalization among newly discharged inpatients by helping home care nurses determine which patients need immediate care. Medicare requires that all patients receive a home visit within 48 hours of discharge. For high-risk patients, however, waiting that long could be disastrous. But because EHRs don’t include risk data, visiting nurses have no way of knowing which patients to prioritize.

Topaz and his team developed an AI program called PREVENT (Priority for the First Visit Nursing Tool), which analyzes patients’ nursing records for five factors known to correlate with risk of rehospitalization—sociodemographics, medication, depression, learning ability, and living arrangements. The tool then sends an email to each patient’s clinician, reporting its findings and recommending an expedited visit for those deemed high-risk. In a small pilot study published in 2018, the researchers found that when such patients received a nursing visit half a day sooner than those in a control group, they were far less likely to be readmitted. The team is now conducting a clinical trial, and results are expected to be published sometime next year.
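The article describes PREVENT as combining five factors into a priority recommendation delivered to the clinician by email. A minimal sketch of that reporting step follows; the scoring rule, threshold, and message wording are illustrative assumptions rather than the published tool’s logic.

```python
# Illustrative sketch: turn flags for the five PREVENT factors into a priority
# recommendation and an email body. The simple "any two flags means high risk"
# rule is a made-up placeholder, not the tool's actual model.
from dataclasses import dataclass, asdict

@dataclass
class PreventFactors:
    sociodemographic_risk: bool
    medication_risk: bool
    depression: bool
    impaired_learning_ability: bool
    unstable_living_arrangement: bool

def is_high_risk(factors: PreventFactors) -> bool:
    return sum(asdict(factors).values()) >= 2   # placeholder rule

def compose_email(patient_name: str, factors: PreventFactors) -> str:
    flagged = [name.replace("_", " ") for name, value in asdict(factors).items() if value]
    if is_high_risk(factors):
        advice = "Recommend an expedited first home visit after discharge."
    else:
        advice = "Standard scheduling (first visit within 48 hours) appears appropriate."
    return (f"Patient: {patient_name}\n"
            f"Flagged factors: {', '.join(flagged) or 'none'}\n"
            f"{advice}")

print(compose_email("J. Doe", PreventFactors(True, True, False, False, True)))
```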

“We’re building tools that can help standardize decision-making, but the last word always remains with the nurse,” says Topaz. “Our goal is for AI to function as an assistant—taking care of the boring administrative tasks so that nurses can focus more fully on their patients’ needs.”

Mining Conversations

Another possible target for AI tools is data that nurses don’t include in their notes. A postdoctoral fellow studying under Topaz, Jiyoun Song, PhD ’20, investigated the potential for identifying high-risk home care patients by analyzing their conversations with their nurses. In a pilot study led by Song, five nurses made audio recordings of their patient visits, 22 of which were typed up by a medical transcriptionist. The transcripts and the nurses’ clinical notes were then analyzed using a natural language processing tool.

Song and her team found that over 50% of health problems discussed during the home visits were not mentioned in the nurses’ EHR entries. “That’s a lot of missing information,” she says. “To do a better job of predicting patient outcomes, we really need to focus on those verbal communications.” A presentation that Song gave on this finding took first place in the “AI in Nursing” competition at the 20th International Conference on AI in Medicine this past June. The next step for Song and Topaz will be using AI programs to analyze such conversations in real time, rather than waiting for transcripts to be completed. They also believe that patients’ vocal mannerisms could offer important data, beyond the content of their words. “Things like intonation, the number of pauses, the richness of vocabulary can all be revealing,” Topaz explains. “When people develop cognitive impairment or dementia, for example, their speech becomes more monotone. We can learn much more by capturing not just what people say, but how they say it.”
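As a rough illustration of the comparison Song describes—the share of problems raised during a visit that never reach the nurse’s note—a sketch might look like the following. The problem-extraction step is left abstract, since the study’s actual NLP pipeline is not described here, and the example sets are invented.

```python
# Illustrative sketch: given the health problems detected in a visit transcript
# and those documented in the nurse's note, compute the fraction of discussed
# problems that never reached the EHR. The input sets would come from whatever
# NLP tool extracts problem mentions; the examples below are made up.
def missing_fraction(problems_in_transcript: set[str],
                     problems_in_note: set[str]) -> float:
    """Share of problems raised during the visit that the note omits."""
    if not problems_in_transcript:
        return 0.0
    missing = problems_in_transcript - problems_in_note
    return len(missing) / len(problems_in_transcript)

transcript_problems = {"dizziness", "medication side effects", "poor appetite", "pain"}
note_problems = {"pain", "poor appetite"}
print(missing_fraction(transcript_problems, note_problems))  # 0.5
```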

Battling Bias

Another disturbing aspect of Song’s study was the racial disparities it revealed. Among white patients, about 33% of problems discussed verbally went unrecorded in nurses’ notes; for Black patients, the proportion was twice as high—over 65%. Medication regimens were also recorded less frequently for Black than for white patients, and the average conversation was four minutes shorter for Black patients. Although the sample size was too small to draw firm conclusions, it’s likely that the clinicians who participated in the study were not free of the biases found in society at large.

Such inequalities don’t surprise Kenrick Cato. “I recently worked on a paper with some colleagues where we were able to predict patients’ racial identity just by the content of the nursing notes,” he says. “We took out all the words that would denote race, but the differences in documentation patterns for Blacks and whites were unmistakable.”

Racial bias in AI can arise from several sources, Cato notes. One is a failure to ensure diversity in the data—for example, training a wound-identification tool mostly using photos of white patients’ lesions, so that it performs poorly when used with patients of color. Another common error is failing to recruit researchers from diverse backgrounds, who’d be likely to notice such shortcomings and correct them. “If you’re not careful,” Cato warns, “you’ll end up reproducing pernicious biases that already exist and making them more powerful in the AI.”

But AI can also be a powerful tool for exposing bias and for pushing back against it. That’s what drew Veronica Barcelona, PhD, to the field, after two decades combating health care inequities by other means. Barcelona, now an assistant professor of nursing at Columbia Nursing, began her career working on maternal mortality reduction in Latin America. She then pursued a PhD in epidemiology, with a focus on disparities in pregnancy and birth outcomes. As a postdoc at Yale, she studied under Jacquelyn Taylor, PhD, known for her work on the health impacts of racism and discrimination in minority populations. In January 2021, after Taylor joined Columbia Nursing as the Helen F. Petit Professor of Nursing, Barcelona followed her mentor to New York.

Soon, she began talking with Topaz and Cato about how AI tools could be used to examine the effects of racial bias in obstetrics—particularly, health care providers’ use of stigmatizing language in EHRs. Previous studies had shown that clinicians’ notes can reveal unconscious biases and stereotypes in the use of judgmental, skeptical, or even mocking language—such as “patient insists analgesic dosage is inadequate” or “patient claims smoking cessation but ashtray still noted on nightstand.” One paper had recently reported that physician notes about Black people were up to 50% more likely to contain such language than those about white people. No one, however, had examined its prevalence specifically in obstetrics EHRs—or how it might affect pregnancy-related morbidity, which is significantly higher among people of color. With help from her new colleagues, Barcelona designed a study aimed at answering those questions.

In May, Barcelona won a grant from the Betty Irene Moore Fellowship to pursue the groundbreaking project. (The study is also funded by Columbia University’s Data Science Institute.) She and her team will use tools such as NimbleMiner to analyze records from NewYork-Presbyterian Hospital between 2017 and 2019, examining language disparities by patients’ race and ethnicity, as well as by clinician type—nursing, social work, or medicine. They’ll look for associations between stigmatizing language and the incidence of infection, bleeding, or unplanned cesarean births.

Once the results are in, Barcelona hopes to develop interventions that can trigger change. “One possibility is to create programs that foster self-examination and discussions about how language reflects institutional values,” she says. “Another is to incorporate our findings into educational curricula for nursing, medicine, and allied health. And I can imagine creating a software program that could be integrated into EHR systems, which would flag stigmatizing phrases and give clinicians a chance to say things differently.”
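The EHR add-on Barcelona imagines—one that flags stigmatizing phrases and offers clinicians a chance to reword them—could work along the lines sketched below. The phrase list and suggested rewordings are illustrative assumptions, and a real clinical deployment would need far more nuance than simple pattern matching.

```python
# Illustrative sketch: flag potentially stigmatizing phrases in a draft clinical
# note and suggest neutral alternatives, as the EHR-integrated tool described
# above might do. The phrase list and rewrites are made-up examples.
import re

STIGMATIZING_PHRASES = {
    r"\bpatient (?:insists|claims)\b": "patient reports",
    r"\bnon-?compliant\b": "has not been able to follow the plan",
    r"\bdrug[- ]seeking\b": "requesting additional analgesia",
}

def flag_stigmatizing_language(note_text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, suggested neutral wording) pairs found in the note."""
    findings = []
    for pattern, suggestion in STIGMATIZING_PHRASES.items():
        for match in re.finditer(pattern, note_text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

draft = "Patient insists analgesic dosage is inadequate; noted to be non-compliant with taper."
for phrase, suggestion in flag_stigmatizing_language(draft):
    print(f"Consider rewording '{phrase}' -> '{suggestion}'")
```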

In the U.S., Barcelona adds, “white supremacy and structural racism have infiltrated every aspect of life, including the health care system. That’s a huge thing to try to dismantle, but if we can do it in one area, maybe the effect can spread. This is a chance to use AI for good.”


This article originally appeared in the Fall 2022 issue of Columbia Nursing Magazine.