The FDA has introduced a new plan to improve transparency and reduce bias in artificial intelligence software. How will this impact healthcare organizations?


As artificial intelligence and machine learning are included in an expanding array of medical devices, concerns about the algorithms’ lack of transparency and potential bias driving patient outcomes have led the Food and Drug Administration to tackle the issue with a multipronged approach.

The FDA announced an initial plan in January, outlining five actions the agency aims to take. These include taking a transparent, patient-centered approach; establishing new pilot programs to enable real-world performance monitoring; and developing a regulatory framework.

Among the greatest benefits of AI/ML-based software is its ability to learn from real-world use and experience and to improve its performance over time. But as an October 2020 study from Harvard's T.H. Chan School of Public Health pointed out, concerns remain about transparency: how the data is collected, the overall quality of that data and how it is validated.

Part of the FDA’s action plan includes support for developing machine learning best practices for evaluating and improving ML algorithms in areas such as data management, interpretability and documentation, as well as advancing real-world performance monitoring pilots.
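To make the monitoring idea concrete, the sketch below shows one simple way a deployed model's real-world performance could be tracked after release. It is only an illustration, not the FDA's framework: the window size, alert threshold and data fields are hypothetical.

```python
# Minimal sketch of post-deployment performance monitoring for a deployed
# model. Window size, alert threshold and field names are hypothetical
# illustrations, not part of the FDA's action plan.
from collections import deque

class PerformanceMonitor:
    """Tracks rolling agreement between model outputs and later-confirmed labels."""

    def __init__(self, window_size=500, alert_threshold=0.85):
        self.results = deque(maxlen=window_size)   # 1 = correct, 0 = incorrect
        self.alert_threshold = alert_threshold

    def record(self, prediction, confirmed_label):
        self.results.append(1 if prediction == confirmed_label else 0)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def needs_review(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.alert_threshold


monitor = PerformanceMonitor()
monitor.record(prediction="pneumonia", confirmed_label="pneumonia")
monitor.record(prediction="pneumonia", confirmed_label="normal")
if monitor.needs_review():
    print(f"Rolling accuracy {monitor.rolling_accuracy():.2f} fell below threshold")
```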

The FDA also noted that the action plan would continue to evolve to stay current with developments in the field of AI/ML-based software as a medical device (SaMD).

As the agency pointed out in an April 2019 discussion paper, the potential power of AI/ML-based SaMD lies in its ability to continuously learn, with adaptations or changes to the algorithm realized after the SaMD is distributed for use and has learned from real-world experience.

In turn, the autonomous and adaptive nature of these tools requires a new, total product lifecycle regulatory approach that supports a rapid cycle of product improvement, allowing SaMD to continually improve.

To address this, premarket submissions to the FDA for AI/ML-based SaMD would include a “predetermined change control plan,” which would describe the types of anticipated modifications that the AI/ML would generate.

By comparison, traditional software solves problems by being explicitly programmed by the development team. The team knows how to solve the problem, or consults an expert with domain knowledge, and creates the software algorithm accordingly, says Pat Baird, senior regulatory specialist and head of global software standards at Philips.

“However, for many types of AI applications, the development team doesn’t know how to solve the problem. Instead, they make a problem-solving engine that learns from data that is provided to it,” Baird says.
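A minimal sketch of the distinction Baird describes is below: a rule explicitly programmed by a developer versus a model that infers the rule from labeled examples. The single lab-value feature, the threshold and the scikit-learn dependency are assumptions for illustration only.

```python
# Contrast sketch: an explicitly programmed rule vs. a model that learns the
# rule from data. The feature (a single lab value) and threshold are
# hypothetical; scikit-learn is assumed to be available.
from sklearn.linear_model import LogisticRegression

# Traditional approach: the developer encodes the expert's rule directly.
def rule_based_flag(lab_value):
    return lab_value > 4.5  # threshold chosen by a domain expert

# ML approach: the developer builds an engine that infers the decision
# boundary from labeled examples rather than hard-coding it.
X = [[3.1], [3.8], [4.2], [4.9], [5.4], [6.0]]   # lab values
y = [0, 0, 0, 1, 1, 1]                            # expert-confirmed outcomes
model = LogisticRegression().fit(X, y)

print(rule_based_flag(5.0))        # True, because the programmed rule says so
print(model.predict([[5.0]])[0])   # 1, because that is what the data implied
```

In the first case the logic is readable in the source code; in the second, the behavior depends on the training data, which is why questions about how that data was collected and validated matter so much.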

This opacity raises concerns for stakeholders, including users and patients. Building trust, by being able to explain what data was used to train the system and what quality processes are in place, will be a key factor in the adoption of AI in healthcare.

‘Responsible and Explainable AI Is Essential’

Medical AI already has a bias problem: it’s not always easy for researchers to obtain large, sufficiently varied data sets, and the gaps in that data can bake biases into algorithms from the start.

“I think the first step in reducing bias is to raise awareness about different kinds of bias that can occur, remind people to challenge the assumptions that they have, share techniques on how to detect and manage bias, share examples and so on,” Baird says. “To improve machine learning, we need to be better at sharing our collective learning.”
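One of the techniques Baird alludes to, detecting bias, can start with a very simple check: compare how a model performs across demographic subgroups in a validation set. The sketch below is only an illustration; the field names, groups and disparity tolerance are hypothetical.

```python
# Minimal sketch of a bias check: compare a model's error rate across
# demographic subgroups. Field names, groups and the disparity threshold
# are hypothetical illustrations.
from collections import defaultdict

def error_rate_by_group(records, group_key="sex"):
    """records: dicts with a subgroup label, a prediction and a true label."""
    errors, counts = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r[group_key]] += 1
        if r["prediction"] != r["label"]:
            errors[r[group_key]] += 1
    return {g: errors[g] / counts[g] for g in counts}

validation = [
    {"sex": "F", "prediction": 1, "label": 1},
    {"sex": "F", "prediction": 0, "label": 1},
    {"sex": "M", "prediction": 1, "label": 1},
    {"sex": "M", "prediction": 1, "label": 1},
]

rates = error_rate_by_group(validation)
if max(rates.values()) - min(rates.values()) > 0.1:   # hypothetical tolerance
    print(f"Possible performance disparity across subgroups: {rates}")
```

A check like this only surfaces a symptom; deciding whether the disparity reflects unrepresentative training data, labeling practices or something else still requires the kind of shared learning Baird describes.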

By Nathan Eddy. Source: https://healthtechmagazine.net/article/2021/05/fda-plans-oversight-ai-medical-devices-addressing-bias