AI System Helps Doctors Detect Suicide Risk Among Patients

Vanderbilt University Medical Center’s AI system, VSAIL, enhances suicide risk screening in medical clinics by prompting timely assessments, offering a potent tool in suicide prevention efforts.

An innovative study from Vanderbilt University Medical Center (VUMC) indicates that artificial intelligence can play a critical role in identifying patients at risk for suicide, thereby enhancing prevention strategies during regular medical visits. The research, spearheaded by Colin Walsh, an associate professor of biomedical informatics, medicine and psychiatry, tested the efficacy of the Vanderbilt Suicide Attempt and Ideation Likelihood (VSAIL) model in prompting doctors to screen patients for suicide risk in neurology clinics.

The study, published in the journal JAMA Network Open, compared the effects of automatic pop-up alerts with a more passive system that simply displayed risk information in patients’ electronic health records.

The results showed that interruptive alerts were significantly more effective, leading to suicide risk assessments in 42% of cases, compared to just 4% with the passive system.

“Most people who die by suicide have seen a health care provider in the year before their death, often for reasons unrelated to mental health,” Walsh said in a news release. “But universal screening isn’t practical in every setting. We developed VSAIL to help identify high-risk patients and prompt focused screening conversations.”

The study underscores the importance of targeted screening in suicide prevention. In the United States, suicide rates have been rising, making suicide the 11th leading cause of death, with an estimated 14.2 deaths per 100,000 Americans each year. Nearly 77% of people who die by suicide have had contact with a primary care provider in the year before their death.

VSAIL uses routine data from electronic health records to estimate a patient’s 30-day risk of a suicide attempt. Prior testing showed the model’s flags were meaningful: about one in 23 patients it identified went on to report suicidal thoughts. In the new study, when VSAIL flagged a high-risk patient visiting one of the neurology clinics, the doctor was randomly shown either the interruptive pop-up or the passive chart-based alert.
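To make that design concrete, here is a minimal, purely illustrative sketch of how a risk score could gate randomized alert delivery. The threshold value, the Visit structure, and the assign_alert function are hypothetical stand-ins for illustration; the actual VSAIL model and VUMC’s alerting code are not public.

```python
import random
from dataclasses import dataclass

# Hypothetical cutoff: the real VSAIL flagging threshold is not public.
RISK_THRESHOLD = 0.04

@dataclass
class Visit:
    patient_id: str
    risk_score: float  # model-estimated 30-day suicide-attempt risk

def assign_alert(visit: Visit) -> str:
    """Mimic the trial design: visits by patients the model flags as
    high risk are randomized to an interruptive (pop-up) alert or a
    passive (chart-only) alert; other visits trigger nothing."""
    if visit.risk_score < RISK_THRESHOLD:
        return "no alert"  # visit is not flagged for screening
    return random.choice(["interruptive pop-up", "passive chart note"])

if __name__ == "__main__":
    # Toy visits with made-up risk scores
    for v in [Visit("A", 0.01), Visit("B", 0.09), Visit("C", 0.15)]:
        print(v.patient_id, "->", assign_alert(v))
```

In this toy version, only flagged visits enter the randomization, which mirrors how the trial compared the two alert styles only among patients the model had already identified as high risk.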

“The automated system flagged only about 8% of all patient visits for screening,” Walsh added. “This selective approach makes it more feasible for busy clinics to implement suicide prevention efforts.”

The study covered 7,732 patient visits over six months, generating 596 screening alerts. During a 30-day follow-up, no episodes of suicidal ideation or suicide attempts were recorded among patients in either alert group. While interruptive alerts proved more effective, they also raise concerns about “alert fatigue,” in which frequent pop-up notifications can overwhelm healthcare providers.

“Health care systems need to balance the effectiveness of interruptive alerts against their potential downsides,” added Walsh. “But these results suggest that automated risk detection combined with well-designed alerts could help us identify more patients who need suicide prevention services.”

The team suggests that similar AI-driven alert systems could be adapted to other medical settings, extending targeted suicide prevention well beyond neurology clinics.