A Framework to Evaluate Ethical Considerations with ML-HCA Applications—Valuable, Even Necessary, but Never Comprehensive
Machine learning (ML) is fundamental to multiple visions of health care’s future, from precision medicine (Frohlich et al. 2018; Petrie Flom (PMAIL) 2020) to a model of health delivery and research combined into a “learning healthcare system” (Davenport and Kalakota 2019). Not surprisingly, as awareness of ethical failings in ML applications outside of health care grows (Char, Shah, and Magnus 2018), concern has expanded about the ethical challenges of applying ML tools within health care (HAI 2019; Obermeyer and Emanuel 2016; Obermeyer et al. 2019; Rajkomar et al. 2018). In fact, ethical concerns with ML use made up a significant part of the Food and Drug Administration’s (FDA) scrutiny prior to its approval of the first autonomous ML-based diagnostic system (Abramoff, Tobey, and Char 2020). Failure to adequately anticipate and address ethical problems before they emerge has set back other promising health care technologies, such as stem-cell therapies (Lo and Parham 2009).