Identifying Ethical Considerations for Machine Learning Healthcare Applications

Danton Char, Michael Abramoff, and Chris Feudtner

With the FDA authorization of an autonomous artificial intelligence diagnostic system based on machine learning (ML), which employs algorithms that can learn from large data sets and make predictions without being explicitly programmed, ML healthcare applications (ML-HCAs) have transitioned from an enticing future possibility to a present clinical reality (Abramoff et al. 2018; Commissioner Office of the FDA 2020). Almost certainly, ML-HCAs will have a substantial impact on healthcare processes, quality, cost, and access, and in so doing will raise specific and perhaps unique ethical considerations and concerns in the healthcare context (Obermeyer and Emanuel 2016; Rajkomar et al. 2019; Maddox et al. 2019; Matheny et al. 2019, 2020). This has already been the case in non-healthcare contexts (Char et al. 2018; Bostrom and Yudkowsky 2011), where ML implementation has drawn heightened scrutiny following scandals over how large repositories of private data have been sold and used (Rosenberg and Frenkel 2018), how the ML design of algorithmic flight controls resulted in accidents (Nicas et al. 2019), and how computer-assisted prison sentencing guidelines perpetuate racial bias (Angwin et al. 2016), to name but a few of the growing number of examples. Regarding ML-HCAs specifically, our review of the literature (see appendix for review methods) identified a variety of ethical considerations and concerns that have been cited, such as bias arising from the training data set (Challen et al. 2019), the privacy of personal data in business arrangements (Comfort 2016; Hern 2017), ownership of the data used to train ML-HCAs (Ornstein and Thomas 2018), and accountability for ML-HCAs' failings (Ross and Swetlitz 2017).