Finding a Safe, Efficacious, and Ethical Path Through the Development Process
Michael D. Abramoff, Danny Tobey, and Danton S. Char
AUTONOMOUS AI HAS THE POTENTIAL TO LESSEN PHYSICIAN BURDEN, INCREASE PATIENT ACCESS, AND LOWER COST
Artificial Intelligence, or Augmented Intelligence (AI), describes systems capable of making decisions of high cognitive complexity; autonomous AI systems in healthcare are AI systems that make such clinical decisions without human oversight, with the autonomous AI creator assuming medical liability.¹ For example, a diagnostic autonomous AI system for the point-of-care diagnosis of diabetic retinopathy provides a direct diagnostic recommendation without physician or other human interpretation. It thereby performs a cognitively complex task previously performed only by ophthalmologists and optometrists (who represent 0.02% of all Americans) after extensive, specialized training. Such rigorously validated medical diagnostic autonomous AI systems hold great promise for improving access to care, increasing accuracy, and lowering cost, while enabling specialist physicians to provide the greatest value by managing and treating those patients whose outcomes can be improved.²,³ Ensuring that autonomous AI delivers these benefits requires negotiating multiple ethical and practical challenges.
Recently, the first autonomous point-of-care diabetic retinopathy examination received de novo authorization from the US Food and Drug Administration (FDA), following a preregistered clinical trial, and became part of the American Diabetes Association’s Standards of Diabetes Care. No prior authorization could serve as guidance, and significant ethical and legal concerns have been raised about introducing autonomous AI into healthcare.⁶ We describe these concerns, review the ethical and accountability principles we drew on to create evaluation rules for autonomous AI, and explain how we addressed them practically through the clinical trial, the de novo FDA authorization process, and ongoing implementation.