AMA Adopts New Policy to Ensure Transparency in AI Tools for Medical Care

As augmented intelligence tools become more prevalent in healthcare, the American Medical Association (AMA) has adopted a new policy at its Annual House of Delegates Meeting to enhance trust and transparency in how these tools generate their results. The policy specifically advocates for clinical AI tools to be explainable and supported by data on safety and effectiveness. For an AI tool to be considered explainable, it must offer clear insights into its outputs that physicians and other qualified professionals can understand and use to guide patient care decisions.

Furthering the AMA’s support for greater oversight and regulation of augmented intelligence (AI) and machine learning (ML) algorithms used in clinical settings, the new policy calls for an independent third party, such as a regulatory agency or medical society, to determine whether an algorithm is explainable, rather than relying on claims made by its developer. The policy states that explainability should not be used as a substitute for other means of establishing the safety and efficacy of AI tools, such as randomized clinical trials. Additionally, the new policy calls on the AMA to collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.

“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their health care,” said AMA Board Member Alexander Ding, M.D., M.S., M.B.A. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible, and impactful tools used in patient care.”

The AMA Council on Science and Public Health report that served as the basis for this policy noted that when clinical AI algorithms are not explainable, clinicians’ training and expertise are removed from decision-making: they are presented with information they may feel compelled to act upon without knowing where it came from or being able to assess the accuracy of its conclusions. The report also noted that intellectual property concerns, when offered as a rationale for not explaining how an AI device arrived at its output, should not nullify a patient’s right to transparency and autonomy in making medical decisions. To this end, the new policy states that while intellectual property deserves a certain level of protection, concerns about infringement should not outweigh the need for explainability in AI with medical applications.

To learn more about the new policy, visit the AMA website.
