Foundation AI models trained on electronic health records (EHRs) may inadvertently retain and expose sensitive patient information, according to a study by researchers at the Massachusetts Institute of Technology in Cambridge. The study examined how clinical AI models can "memorize" individual patient records rather than generalize from broader trends. The researchers developed structured tests to measure how easily an attacker with partial knowledge of a patient, such as lab results or demographic details, could extract identifiable information from a model. The team found that some patients, particularly those with rare conditions, may face elevated privacy risk even in de-identified datasets. Some disclosures, such as a patient's age or gender, were judged lower risk, while others, including diagnoses related to HIV or substance use, were flagged as potentially harmful.
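This summary does not detail the MIT team's exact test protocol, but attacks of the general shape it describes are often framed as membership or attribute inference. The Python sketch below, using entirely synthetic data and a deliberately overfit scikit-learn classifier, illustrates the idea: an attacker who knows a patient's quasi-identifiers queries the model and uses its confidence to guess whether that patient's record was in the training set. All data, model choices, feature names, and thresholds here are hypothetical and are not drawn from the study.

```python
"""Minimal membership-inference sketch on synthetic 'patient' data.

Assumptions (not from the study): synthetic features, a RandomForest
stand-in for a clinical model, and a simple confidence-threshold attack.
"""
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_patients(n):
    # Quasi-identifiers (age, sex, lab value) plus a sensitive binary
    # diagnosis correlated with the lab value.
    age = rng.integers(18, 90, n)
    sex = rng.integers(0, 2, n)
    lab = rng.normal(loc=1.0, scale=0.5, size=n)
    diagnosis = (lab + rng.normal(0, 0.4, n) > 1.2).astype(int)
    return np.column_stack([age, sex, lab]), diagnosis

X_train, y_train = make_patients(200)  # records the model was trained on
X_out, y_out = make_patients(200)      # held-out patients (non-members)

# A small, deliberately overfit model stands in for a clinical model
# that has memorized individual records.
model = RandomForestClassifier(n_estimators=50, min_samples_leaf=1,
                               random_state=0)
model.fit(X_train, y_train)

def confidence_on_true_label(X, y):
    # Model confidence assigned to each patient's actual diagnosis.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

member_conf = confidence_on_true_label(X_train, y_train)
nonmember_conf = confidence_on_true_label(X_out, y_out)
print(f"mean confidence on members:     {member_conf.mean():.3f}")
print(f"mean confidence on non-members: {nonmember_conf.mean():.3f}")

# Threshold attack: flag a record as "in the training data" when the
# model's confidence on its true label exceeds a cutoff. A large gap
# between the two rates indicates memorization.
threshold = 0.9
tpr = (member_conf > threshold).mean()     # members correctly flagged
fpr = (nonmember_conf > threshold).mean()  # non-members falsely flagged
print(f"attack TPR {tpr:.2f} vs FPR {fpr:.2f} at threshold {threshold}")
```

In this toy setup, the gap between member and non-member confidence is what makes the attack work, which mirrors the study's concern that patients with rare conditions, whose records the model is most likely to memorize individually, face the greatest exposure.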