A study of a healthcare artificial intelligence program trained on patients’ own responses rather than on labels supplied by doctors found that the new approach could reduce racial disparities (Source: “New Algorithms Could Reduce Racial Disparities in Health Care,” Wired, Jan. 25, 2021).
Health diagnostic software typically learns from doctors by digesting thousands or millions of x-rays or other data labeled by expert humans until it can accurately flag health problems by itself. A study published last month in the journal Nature Medicine took a different approach—training algorithms to read knee x-rays for arthritis by using patients, rather than doctors, as the arbiters of truth for the AI. The results revealed that radiologists may be missing important details when reading Black patients’ x-rays.
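The label swap described above can be illustrated with a minimal sketch: instead of fitting a model to a radiologist-assigned severity grade, the patient-reported pain score becomes the training target. Everything below is hypothetical toy data for illustration only; the study itself used a deep network on x-ray images, not the simple regression shown here.

```python
import random

random.seed(0)

# Hypothetical toy data: one image-derived feature per knee x-ray.
# All variable names and numbers are illustrative, not from the study.
n = 200
feature = [random.gauss(0, 1) for _ in range(n)]

# The patient-reported pain score stands in as the training label,
# replacing a radiologist-assigned severity grade.
pain = [3.0 * x + random.gauss(0, 0.5) for x in feature]

# Closed-form simple linear regression against the patient-reported labels.
mean_x = sum(feature) / n
mean_y = sum(pain) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(feature, pain))
         / sum((x - mean_x) ** 2 for x in feature))
intercept = mean_y - slope * mean_x

# R^2 measures how much of the patient-reported pain the model explains.
pred = [slope * x + intercept for x in feature]
ss_res = sum((y - p) ** 2 for y, p in zip(pain, pred))
ss_tot = sum((y - mean_y) ** 2 for y in pain)
r2 = 1 - ss_res / ss_tot
```

The point of the sketch is only the change of target: if the patient-reported score carries signal that expert labels miss, a model fit to it can recover patterns the expert grading would never reward.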
The algorithms trained on patients’ reports did a better job than doctors at accounting for the pain experienced by Black patients, apparently by picking up patterns of disease in the images that human readers usually overlook.
“This sends a signal to radiologists and other doctors that we may need to reevaluate our current strategies,” says Said Ibrahim, a professor at Weill Cornell Medicine, in New York City, who researches health inequalities, and who was not involved in the study.
Algorithms designed to reveal what doctors don’t see, instead of mimicking their knowledge, could make health care more equitable. In a commentary on the new study, Ibrahim suggested it could help reduce disparities in who gets surgery for arthritis. African American patients are about 40 percent less likely than others to receive a knee replacement, he says, even though they are at least as likely to suffer osteoarthritis. Differences in income and insurance likely play a part, but so could differences in diagnosis.