How Racialized Data Shapes Healthcare
AI has increasingly become an integral part of healthcare, from interpreting brain scans to detecting diseases early. In particular, AI has been at the helm of predictive healthcare, bringing a more accurate way to see illnesses before they even manifest. However, these algorithms are built on data shaped by ethnic disparities, and they ultimately exacerbate already deep-rooted biases. Datasets often reflect the political zeitgeist of the era in which they were gathered. Decades of unequal healthcare access, misdiagnosis, and clinical bias carve their way into medical records, and when AI trains on these patterns, it treats these biases as the “truth,” encoding historical racism into its predictions. Although AI is often touted as the ultimate neutral tool, that neutrality is an illusion when the data is built on the backs of segregation, underdiagnosis, and unequal care.
“For example, among all patients classified as very high-risk, black individuals turned out to have 26.3 percent more chronic illnesses than white ones (despite sharing similar risk scores).”
In this case, the algorithm assumed black patients were healthier because, as a group, they had spent less on healthcare than their white counterparts. However, black patients often spent less due to barriers like mistrust of medical institutions and outright discrimination, not because they needed less care. Algorithmic bias has both a statistical and a social definition, ultimately referring to “systemic errors in AI systems that can lead to results, interpretations or recommendations that unfairly advantage or disadvantage certain individuals or groups.” These biases lead AI to severely underestimate illness risk for historically marginalized and underrepresented groups.3 For instance, studies show that clinicians are more likely to describe black patients with verbiage like “agitated” or “drug-seeking,” and AI models internalize these narratives. There needs to be a concerted effort to address these inherent inclinations, and narrative medicine is a powerful tool to guide it. Interdisciplinary collaboration is key.
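The cost-as-proxy failure described above can be sketched with a small, hypothetical simulation. This is not the actual algorithm studied; the population sizes, the burden distribution, and the 0.75 "access barrier" factor are illustrative assumptions. Both groups share an identical true illness burden, but one group's observed spending is suppressed by access barriers, so a spending-based risk score only flags members of that group when they are much sicker:

```python
import random

random.seed(0)

def simulate_patient(group):
    # True chronic-illness burden: identical distribution for both groups
    burden = max(0.0, random.gauss(2, 1))
    # Illustrative assumption: access barriers suppress one group's spending
    access = 0.75 if group == "black" else 1.0
    spending = burden * 1000 * access  # cost is the only signal the score sees
    return {"group": group, "burden": burden, "spending": spending}

patients = [simulate_patient(g) for g in ("black", "white") for _ in range(5000)]

# Label the top 10% of spenders "very high-risk," as a cost-based score would
cutoff = sorted(p["spending"] for p in patients)[int(0.9 * len(patients))]
high_risk = [p for p in patients if p["spending"] >= cutoff]

# Compare true illness burden within the same "very high-risk" label
avg_burden = {}
for g in ("black", "white"):
    burdens = [p["burden"] for p in high_risk if p["group"] == g]
    avg_burden[g] = sum(burdens) / len(burdens)

print({g: round(v, 2) for g, v in avg_burden.items()})
```

In this toy setup, patients from the under-spending group who do reach the "very high-risk" tier carry a higher average true burden than their counterparts with the same label, reproducing the direction of the disparity quoted above: equal risk scores, unequal sickness.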
“Interdisciplinary research should focus on developing innovative and inclusive AI applications that effectively serve diverse populations without perpetuating existing disparities.”
Narrative medicine, by practice, centers on active communication between patient and doctor. By creating and sustaining these bridges between patients and the healthcare system, it can mitigate informational biases by recognizing and addressing underrepresentation. Slowly, narrative medicine can help enrich the datasets of the future, ultimately reducing AI bias down the line.
AI needs to be paired with disciplines that understand and accurately reflect the nuance beneath the surface of each patient, in order to create a future where predictive healthcare truly serves everyone.
By: Jacob Nacomel (he/him) | Blog Committee Member
References:
1. Vartan S. Racial bias found in a major health care risk algorithm. Scientific American. February 20, 2024. Accessed December 1, 2025. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
2. Aquino YS, Carter SM, Houssami N, et al. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. Journal of Medical Ethics. 2023;51(6):420-428. doi:10.1136/jme-2022-108850
3. Hussain SA, Bresnahan M, Zhuang J. The bias algorithm: how AI in healthcare exacerbates ethnic and racial disparities – a scoping review. Ethnicity & Health. 2024;30(2):197-214. doi:10.1080/13557858.2024.2422848
4. Ratwani RM, Sutton K, Galarraga JE. Addressing AI algorithmic bias in health care. JAMA. 2024;332(13):1051. doi:10.1001/jama.2024.13486