The Future of AI in Health Care: FDA Authorizes a Software Involving AI
Earlier this year, the U.S. Food and Drug Administration (FDA) authorized the marketing of Caption Guidance, a software program that uses artificial intelligence (AI) to help medical professionals administer cardiac ultrasounds. Caption Guidance helps the user capture high-quality diagnostic images of a patient’s heart; after the images are captured, a cardiologist reviews them for diagnosis. Robert Ochs, Ph.D., deputy director of the Office of In Vitro Diagnostics and Radiological Health at the FDA, noted that the authorization “demonstrates the potential for artificial intelligence and machine learning technologies to increase access to safe and effective cardiac diagnostics that can be life-saving for patients.” The FDA first authorized a device that uses AI in 2018, and this new authorization demonstrates that the agency continues to recognize the significant role AI can play in health care.
FDA premarket authorizations of AI products have led to AI’s wide use in radiology. For example, AI can aid in taking MRI scans and reduce the time patients spend inside the scanner; less time in the scanner means less patient movement, which yields better-quality images. Studies have also demonstrated AI’s accuracy and success in reading CT scans and mammograms. A study by Google, Stanford University, and Northwestern University found that an AI algorithm was better at detecting lung cancer from CT scans than doctors. In another study examining mammograms, an AI model led to a 9.4% reduction in false negatives and a 5.7% reduction in false positives when compared to doctors.
The FDA defines AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.” It defines machine learning as an “artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data.” Software that performs medical functions without being part of a hardware medical device is referred to as “software as a medical device” (SaMD).
The use of AI in SaMD can pose a risk to patients: a SaMD that relies on an unverified, untested AI algorithm may lead to misdiagnosis. Therefore, depending on the risk a SaMD poses to patients, it must be authorized by the FDA to ensure that it is safe and effective. In many cases, companies must provide clinical data showing that the medical device and its software perform as intended and accurately. So far, the FDA has approved SaMDs, such as Caption Guidance, that use locked algorithms. A locked algorithm stays the same over time; its function does not change based on newly learned data.
On the other hand, adaptive algorithms learn and change over time based on new data, and they potentially pose a greater risk to patients than locked algorithms. As an adaptive algorithm receives and processes new data, it changes to optimize the device. The modified SaMD would not have been tested and verified to ensure that it is accurate and performing as intended, raising concerns that the updated SaMD would not be as safe and effective as the original version the FDA authorized.
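The regulatory distinction between locked and adaptive algorithms can be made concrete with a short sketch. The class names, the single-weight model, and the update rule below are all hypothetical simplifications, not how any authorized SaMD actually works; the point is only that an adaptive model’s behavior drifts after deployment while a locked model’s does not.

```python
# Illustrative sketch (hypothetical, heavily simplified): contrasting a
# "locked" algorithm, whose behavior is fixed at authorization, with an
# "adaptive" algorithm, which updates itself as new field data arrives.

class LockedModel:
    """Parameters are frozen at authorization time; output never drifts."""
    def __init__(self, weight: float):
        self.weight = weight  # fixed when the device is cleared

    def predict(self, x: float) -> float:
        return self.weight * x


class AdaptiveModel:
    """Parameters shift with each new observation (simple running update)."""
    def __init__(self, weight: float, learning_rate: float = 0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, observed: float) -> None:
        # Nudge the weight toward the observed outcome: the model a
        # clinician uses today is no longer the model the FDA reviewed.
        error = observed - self.predict(x)
        self.weight += self.learning_rate * error * x


locked = LockedModel(weight=2.0)
adaptive = AdaptiveModel(weight=2.0)

before = adaptive.predict(1.0)
adaptive.update(x=1.0, observed=3.0)  # new data arrives in the field
after = adaptive.predict(1.0)

assert locked.predict(1.0) == 2.0  # unchanged, exactly as authorized
assert before != after             # behavior drifted post-authorization
```

The drift captured by the final assertion is precisely what raises the safety concern: the post-update model was never part of the premarket review.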
To help companies bring SaMD with adaptive algorithms to market, the FDA released a discussion paper last year proposing an authorization process for adaptive AI SaMDs. Under the proposal, companies would be required to classify their SaMD based on whether it treats or diagnoses a disease, drives clinical management, or simply informs clinical management. Companies would then have to consider the patient’s condition: whether the SaMD is used for a patient with a critical, serious, or non-serious condition. The degree of potential risk to patients would determine the level of scrutiny the FDA applies when evaluating each SaMD for premarket authorization.
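The two-axis classification described above can be sketched as a lookup table. The category labels and the exact mapping below follow the IMDRF risk-categorization framework that the FDA’s discussion paper draws on (Category IV highest risk, Category I lowest), but the function name and key strings are illustrative assumptions, not terms from the paper itself.

```python
# Hypothetical sketch of the two-axis SaMD risk categorization:
#   Axis 1 - significance of the information the SaMD provides
#   Axis 2 - state of the patient's condition
# Mapping mirrors the IMDRF framework (IV = highest risk, I = lowest);
# treat the labels and keys as illustrative only.

RISK_MATRIX = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non-serious", "treat_or_diagnose"): "II",
    ("non-serious", "drive_management"):  "I",
    ("non-serious", "inform_management"): "I",
}

def samd_risk_category(condition: str, significance: str) -> str:
    """Return the risk category that drives the FDA's level of scrutiny."""
    return RISK_MATRIX[(condition, significance)]

# A SaMD that diagnoses a critical condition lands in the highest tier,
# while one that merely informs management of a non-serious condition
# lands in the lowest.
assert samd_risk_category("critical", "treat_or_diagnose") == "IV"
assert samd_risk_category("non-serious", "inform_management") == "I"
```

Reading the table row by row shows the proposal’s logic: scrutiny scales with both how consequential the software’s output is and how vulnerable the patient is.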
The FDA would first review the SaMD premarket application to determine that the SaMD is safe and effective. Along with this submission, companies would be required to submit a “predetermined change control plan,” which sets forth the potential modifications the adaptive AI SaMD may make. The plan would include performance evaluation protocols requiring companies to implement quality systems and good machine learning practices to monitor risk and incorporate risk management approaches. If an AI modification goes beyond the predetermined change control plan, the FDA would conduct a focused review. This proposed framework allows companies to learn from, optimize, and update their SaMD while reassuring the FDA that the SaMD poses little risk to patients.
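The change-control logic above amounts to a simple gate: modifications anticipated in the plan proceed under its protocols, while anything outside the plan’s envelope triggers a focused review. The set contents and function name below are invented for illustration; the discussion paper does not enumerate modification types this way.

```python
# Hypothetical sketch: a "predetermined change control plan" modeled as an
# envelope of anticipated modification types. In-envelope changes proceed
# under the plan's protocols; out-of-envelope changes need focused review.

# Illustrative, pre-specified modification types (not from the FDA paper).
ALLOWED_CHANGES = {"performance_tuning", "input_data_expansion"}

def review_path(modification: str) -> str:
    """Decide which regulatory path a proposed modification takes."""
    if modification in ALLOWED_CHANGES:
        return "proceed under change control plan"
    return "focused FDA review required"

# An anticipated tweak stays within the plan; a change to the intended
# use goes beyond it and gets the closer look.
assert review_path("performance_tuning") == "proceed under change control plan"
assert review_path("new_intended_use") == "focused FDA review required"
```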
Although the FDA has authorized AI in medical products in some capacity, it continues to work on a regulatory scheme that would allow adaptive AI to enter health care. The potential for AI in the health care arena is great. Former FDA Commissioner Scott Gottlieb stated that, with AI, society “can expect to see earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine.”
GLTR Staff Member; Georgetown Law, J.D. expected 2020; University of Maryland, Baltimore, M.S. 2015; University of Maryland, College Park, B.S. 2013.