Expansion of health AI could be hindered by racial bias, Google, Microsoft executives warn

As new generative AI models like ChatGPT gain popularity, experts say that for such tools to work in healthcare, the implicit racial biases baked into health data must be accounted for (Source: “Google, Microsoft execs share how racial bias can hinder expansion of health AI,” Fierce Healthcare, Feb. 23). 
 
The goal is for AI to one day “support clinical decision-making [and] enhance patient literacy with educational tools that reduce jargon,” said Jacqueline Shreibati, M.D., senior clinical lead at Google. 
 
However, gaps remain in how these models can be used in healthcare. Chief among them is that clinical evidence is constantly evolving. Another key problem is that the data themselves may carry racial bias that needs to be mitigated. 
 
“A lot of our data has structural racism baked into the code,” Shreibati said.