Researchers at the University of Michigan have developed an algorithm to correct racial bias in medical data, helping AI systems make fairer and more accurate predictions. The advance promises to reduce health care disparities and could significantly change how AI aids in diagnosing illnesses like sepsis.
A new study from researchers at the University of Michigan has revealed significant racial disparities in medical testing, underscoring the urgent need for equitable health care solutions. The researchers identified a systematic bias that puts Black patients at a disadvantage and undermines the accuracy of AI models used to diagnose critical conditions.
The findings appear in two studies, one published in PLOS Global Public Health and the other presented at the International Conference on Machine Learning in Vienna, Austria. Together they show that the medical data often used to train AI is biased against Black patients: with identical medical conditions, Black patients are less likely than their white counterparts to receive essential diagnostic tests.
“If there are subgroups of patients who are systematically undertested, then you are baking this bias into your model,” corresponding author Jenna Wiens, an associate professor of computer science and engineering at the University of Michigan, said in a news release. “Adjusting for such confounding factors is a standard statistical technique, but it’s typically not done prior to training AI models. When training AI, it’s really important to acknowledge flaws in the available data and think about their downstream implications.”
The research found that white patients were tested up to 4.5% more often than Black patients with similar medical needs. This bias was evident in data from Michigan Medicine in Ann Arbor and the Medical Information Mart for Intensive Care (MIMIC) dataset from Beth Israel Deaconess Medical Center in Boston.
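In practice, comparing testing rates fairly requires matching patients with similar medical need before measuring the gap. The short sketch below illustrates one way such a comparison might be set up; the column names, the risk-score proxy for medical need, and the decile grouping are illustrative assumptions, not details taken from the studies.

```python
# Illustrative sketch only: the column names ("race", "tested", "risk_score")
# and the grouping logic are hypothetical, not taken from the studies' code.
import pandas as pd

def testing_rate_gap(df: pd.DataFrame, n_bins: int = 10) -> float:
    """Compare testing rates for white vs. Black patients with similar
    estimated medical need (approximated here by a risk-score decile)."""
    df = df.copy()
    # Bin patients by a proxy for medical need so comparisons are like-for-like.
    df["need_bin"] = pd.qcut(df["risk_score"], q=n_bins, duplicates="drop")

    rates = (
        df.groupby(["need_bin", "race"], observed=True)["tested"]
        .mean()
        .unstack("race")
    )
    # Average the per-bin difference in testing rates between the two groups.
    return (rates["white"] - rates["Black"]).mean()
```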
To tackle this issue, the team developed an algorithm that identifies patients who, despite never being tested, were likely suffering from severe conditions, based on their race and vital signs. This allows AI models to compensate for the bias without excluding any patient records.
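The published algorithm is more sophisticated than can be shown here, but the core idea, assigning estimated labels to untested patients rather than dropping them or treating them as healthy, can be sketched roughly as follows. The feature set, the logistic-regression risk model, and the soft probability labels are illustrative assumptions, not the authors' implementation, which also accounts for group-dependent testing rates.

```python
# Rough sketch of the idea: instead of treating untested patients as negative,
# estimate how likely each untested patient was to be ill from their vital
# signs, and use those estimates as soft labels for the downstream model.
# All modeling choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_training_labels(vitals: np.ndarray,
                          tested: np.ndarray,
                          test_result: np.ndarray) -> np.ndarray:
    """Return a label for every patient, tested or not.

    vitals:      (n_patients, n_features) array of vital signs
    tested:      boolean array, True if the patient received the diagnostic test
    test_result: 0/1 array, meaningful only where tested is True
    """
    # Fit a simple risk model on the patients who actually were tested.
    risk_model = LogisticRegression(max_iter=1000)
    risk_model.fit(vitals[tested], test_result[tested])

    labels = test_result.astype(float).copy()
    # For untested patients, substitute the model's estimated probability of
    # illness instead of silently assuming they were healthy.
    labels[~tested] = risk_model.predict_proba(vitals[~tested])[:, 1]
    return labels
```

A downstream diagnostic model would then be trained on all patients using these labels, rather than on the raw, undertested outcomes.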
“Approaches that account for systematic bias in data are an important step towards correcting some inequities in health care delivery, especially as more clinics turn toward AI-based solutions,” Trenton Chang, a doctoral student in computer science and engineering at the University of Michigan and the first author of both studies, said in the news release.
In experiments on simulated data, the algorithm significantly improved the accuracy of AI models in diagnosing illnesses like sepsis, achieving accuracy comparable to models trained on unbiased datasets.
This work charts a path forward for AI in health care that mitigates existing biases rather than perpetuating them. Integrating such bias-correcting algorithms could lead to more equitable health care outcomes and inspire fairer, more inclusive AI systems.
The research highlights a crucial flaw in current AI training practices while offering a pragmatic fix. As more clinics adopt AI-based solutions, implementing these algorithms could be a significant step toward a more just health care system.