Same Symptoms, Different Care: How AI’s Hidden Bias Alters Medical Decisions
AI tools designed to assist in medical decisions may not treat all patients equally. A new study shows that these systems sometimes alter care recommendations based on a patient's background, even for patients with identical medical conditions.
Researchers at Mount Sinai tested leading generative AI models and found inconsistencies in treatment suggestions depending on a patient's socioeconomic and demographic information, highlighting a major challenge in building fair and reliable AI for health care.
As artificial intelligence (AI) becomes more integrated into health care, a new study from the Icahn School of Medicine at Mount Sinai shows that generative AI models can recommend different treatments for the same medical condition based solely on a patient's socioeconomic or demographic background.
Published online today (April 7, 2025) in Nature Medicine, the study underscores the need for early testing and oversight to make sure AI-driven care is fair, effective, and safe for everyone.