“Data is like a mirror, reflecting reality. And when data is distorted, it can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt,” said lead author Xiaoxuan Liu, an associate professor of AI and digital health technology at the University of Birmingham in the UK.
“To create lasting change in health equity, we must focus on fixing the causes, not just the reflection,” Liu said.
Key recommendations include creating overviews of datasets and presenting them in plain language; this includes establishing standards for their comprehensiveness and versatility.
Known or expected sources of bias, error, and other factors affecting the dataset should also be identified, the authors said.
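The paper does not prescribe a file format for this documentation. As a rough illustration only, the Python sketch below shows how a plain-language dataset overview, together with its known sources of bias and error, might be captured in machine-readable form; all field names and the example entry are hypothetical, not taken from the STANDING Together recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetSummary:
    """Plain-language overview of a health dataset (illustrative fields only)."""
    name: str
    purpose: str                # why the data were collected
    population: str             # who is (and is not) represented
    collection_period: str
    known_bias_sources: list[str] = field(default_factory=list)
    known_errors: list[str] = field(default_factory=list)

    def plain_language_report(self) -> str:
        lines = [
            f"Dataset: {self.name}",
            f"Purpose: {self.purpose}",
            f"Population: {self.population}",
            f"Collected: {self.collection_period}",
            "Known or expected sources of bias:",
        ]
        lines += [f"  - {b}" for b in self.known_bias_sources] or ["  - none documented"]
        lines.append("Known errors or data-quality issues:")
        lines += [f"  - {e}" for e in self.known_errors] or ["  - none documented"]
        return "\n".join(lines)

# Hypothetical example entry
summary = DatasetSummary(
    name="Example chest X-ray cohort",
    purpose="Training a pneumonia-detection model",
    population="Adults seen at a single urban teaching hospital",
    collection_period="2015-2020",
    known_bias_sources=["Single-site catchment; rural patients under-represented"],
    known_errors=["Missing self-reported ethnicity for ~20% of records"],
)
print(summary.plain_language_report())
```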
Additionally, the performance of AI health technologies should be evaluated and compared across study populations as well as across contextually relevant groups of interest. Where uncertainties in AI performance are identified, the authors say, their potential clinical impact should be clearly documented, and strategies to monitor, manage, and mitigate these risks should be planned before the technology is implemented.
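To make the subgroup-comparison idea concrete, here is a minimal Python sketch that compares a simple performance metric (accuracy) across groups; the group labels, data, and metric choice are illustrative assumptions, not part of the authors' recommendations.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compare a simple performance metric (accuracy) across subgroups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical labels, predictions, and group memberships
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]

per_group = accuracy_by_group(y_true, y_pred, groups)
overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(f"overall accuracy: {overall:.2f}")
for group, acc in sorted(per_group.items()):
    # A large gap between a group and the overall figure is the kind of
    # uncertainty the authors say should be documented and monitored.
    print(f"group {group}: accuracy {acc:.2f} (gap vs overall: {acc - overall:+.2f})")
```

In this toy example, group B's accuracy (0.50) falls well below the overall figure (0.75), which is exactly the kind of performance gap the recommendations say should be documented and managed before deployment.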
“We want to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be recognized as valuable, and the absence of this information should itself be recognized as a limitation,” they said.
“We hope that adoption of the STANDING Together recommendations by stakeholders across the lifecycle of AI health technologies will ensure that everyone in society can benefit from safe and effective technologies,” they said.