This is Conversations with Digital Diagnostics, a series where we delve into insightful discussions with company thought leaders.

This time, we are having a conversation with Michael D. Abramoff, MD, PhD, Founder and Executive Chairman of Digital Diagnostics.  

Dr. Abramoff is a fellowship-trained retina specialist, computer scientist, and entrepreneur. He is the Robert C. Watzke, MD Professor of Ophthalmology and Visual Sciences at the University of Iowa, with a joint appointment in the College of Engineering. The author of over 350 peer-reviewed publications in the field, he has been cited over 42,000 times and is the inventor on 20 issued patents and many patent applications.

Digital Diagnostics: As the creator of the first FDA-cleared autonomous artificial intelligence (AI) system in healthcare, you are well positioned to answer a foundational question: what does AI mean in the context of healthcare? And can you explain the difference between autonomous AI and assistive AI?

Michael Abramoff, MD, PhD: In the context of healthcare, AI refers to computer systems or algorithms that can perform cognitively demanding tasks typically done by physicians or other providers, such as specialists.

Now, let’s discuss the difference between autonomous AI and assistive AI. Early in my career, I saw a lot of assistive AI being used. Assistive AI means there’s a provider, physician, or specialist involved, and the AI simply helps them make better decisions faster. I didn’t really see assistive AI addressing the real problems that exist in healthcare: high cost, varying quality, limited access, health inequities, and other barriers to the care that people need and deserve.

When I didn’t see assistive AI moving the needle on the core issues affecting healthcare, it became obvious that, if you really want to change healthcare for the better, the focus should be on autonomous AI. With autonomous AI, the medical decision is made by a computer. Despite this automated decision-making, the technology still fits within a larger healthcare system. However, successful adoption requires acceptance from all stakeholders and collaboration with physicians and other providers.  

Digital Diagnostics: LumineticsCore™, Digital Diagnostics’ flagship product, is the first autonomous AI platform to be cleared by the FDA, but it’s not the first instance of AI in healthcare. Can you elaborate on the history of AI in healthcare and how you’ve seen the technology advance over your career?   

Michael Abramoff, MD, PhD: AI isn’t a new concept. It was around before I was born, and the first forays into AI in healthcare took place decades ago. However, the concept started to gain popularity in the early 1970s, when researchers at Stanford attempted to build systems that assisted physicians with prescribing antibiotics.

At the time, inputting medical information into a computer, now known as electronic health records (EHR), was a novel concept. During that era, the development of EHR was actually more exciting than the idea of a medical decision being made, or assisted, by a computer. As a result, machine-made medical decisions didn’t go very far, mostly due to a lack of objective, high-quality data. That is unlike today, when we have abundant access to high-quality data from advanced sensors, cameras, iPhones, and microphones.

Then, in the 1980s, I was involved in research on medical decision-making using what were then called neural networks (approaches now commonly referred to as machine learning and deep learning). During that time, we attempted to build healthcare AI systems, but again, high-quality data just wasn’t available. I literally had to digitize thousands of slides to train these systems. Ultimately, it’s only in the past 10 to 20 years that we finally gained large-scale access to high-quality, affordable data, partly due to lower-cost cameras and medical equipment that can collect objective data. This advancement has led to increased performance levels, allowing people to feel comfortable with the safety and efficacy of today’s AI technology.

Digital Diagnostics: Given your background in machine learning, neuroscience, and software development, can you provide some insight into the reaction of the scientific and physician communities regarding the integration of AI in healthcare? Do these communities typically trust healthcare AI or are they skeptical?  

Michael Abramoff, MD, PhD: I think neither the scientific nor the physician community looked far beyond the fields in which they were working, so let me address the broader concerns. People often complain that healthcare is slow to change and slow to improve, even when evidence exists that better patient or population outcomes are possible. In this case, that way of thinking helped us at Digital Diagnostics.

I’m proud of the fact that healthcare was the first field to deploy autonomous AI and make it accessible to the public. There’s no self-driving car, and loan decisions are not made solely by an algorithm; there’s always a human specialist involved. But now, in healthcare, AI can autonomously make a diagnosis, and the healthcare system as well as the public is comfortable with that. This widespread confidence can be attributed to the work we’ve done at Digital Diagnostics to ensure the comfort of every stakeholder in healthcare – be it patients, physicians, providers, ethicists, regulators, or payers. This emphasis on stakeholder acceptance has also helped to minimize pushback.

We’ve seen pushback from stakeholders in the past and, sometimes, rightfully so. For instance, in the 1990s, gene therapy was on an upswing and looked very promising. People expected a lot from it; however, some unethical trials were performed, resulting in the deaths of several young people and the abrupt termination of those trials. This led to a substantial delay in what could ultimately have been a huge advancement in healthcare, had it been done right from the start.

Those earlier episodes shaped how we introduced AI and autonomous AI in healthcare. It’s because of what happened in the past that we prioritized the comfort of all stakeholders, and that’s ultimately why you see rapid adoption of the technology. I don’t think that would have been possible if some stakeholders had been uncomfortable with the technology, or hadn’t approved of it and started pushing back.

And of course, some physicians are worried about job loss. We’ve addressed that by showing that the integration of AI is not a threat to their jobs, but rather allows them to practice at the top of their license. There’s also some concern about racial, ethnic, and other bias in AI; however, thanks to clinical trials and a wealth of evidence, we’ve shown that AI and autonomous AI not only mitigate bias, but can extend benefits to ethnic, racial, and rural groups. I believe there are good answers to all concerns regarding healthcare AI, which is why I’m glad we addressed them head-on, early on.

Digital Diagnostics: Digital Diagnostics’ mission is to benefit patients by transforming the accessibility, affordability, equity, and quality of global healthcare through the application of technology in the medical diagnosis and treatment process. How does the work at Digital Diagnostics, guided by that mission, differ from or relate to what others have done in the past in the realm of “glamour AI”?

Michael Abramoff, MD, PhD: At Digital Diagnostics, our north star is patient outcome. So, when developing our flagship product, LumineticsCore™, we started with a focus on patient well-being and worked our way back. Throughout the development phase, we considered questions like: what process in healthcare is best for patient or population outcomes, and what role can AI play in that? If you build your technology with those questions in mind, you arrive at a very different point than if you start the process by saying, “Well, I have this cool algorithm. I like developing algorithms. Let’s see what problem I can address with it.” If you take that approach, you’re essentially a solution looking for a problem, and that’s what I would consider glamour AI.

I’m very proud of our strategic approach and what we’ve accomplished at Digital Diagnostics. I’m also very excited that we started all of this in Iowa, right in the center of the country, where you rarely see tech companies developing algorithms and looking for clinical problems to solve. The focus, however, should always be on what is best for patient outcomes, and what role, if any, AI and autonomous AI can play in that. That’s why we built LumineticsCore™.

Digital Diagnostics: Your training as an ophthalmologist and retina specialist aided in the development of LumineticsCore™. However, with the rising popularity of AI technology, many large tech companies, traditionally not engaged in patient care, are venturing into healthcare with varying degrees of success. From your standpoint, what factors do you believe contribute to the challenges encountered by these big tech companies entering healthcare?  

Michael Abramoff, MD, PhD: Healthcare is a very challenging ecosystem with many stakeholders – all of whom look to each other for guidance, but also have their own independent preferences. It’s a conservative system. However, it can be changed, as we’ve shown through the work we do at Digital Diagnostics.

As far as big tech companies venturing into healthcare AI, issues often arise when only one stakeholder is considered. When these companies heed the perspective of a single stakeholder, such as the payer or the physician, and neglect crucial factors, such as patient outcomes and the convictions of other stakeholders, challenges quickly emerge.

On the other hand, I think tech companies that take a comprehensive approach will be much more successful. These companies should ask, “What should the healthcare system look like to produce the best outcome for the patient? Where does the physician fit in? Where does the AI fit in? Where do other providers fit in?” By focusing on these essential questions, companies will have an easier path to success.

Digital Diagnostics: You mentioned the healthcare ecosystem; let’s dive further into that. More specifically, based on your experience in developing the first FDA-cleared autonomous AI system in healthcare, how important is it to consider the entire healthcare ecosystem when bringing a novel technology to market?  

Michael Abramoff, MD, PhD: It’s of prime importance to consider the entire healthcare ecosystem when bringing an AI product to market. As I mentioned earlier, it’s imperative that developers of AI prioritize solving for patient outcome, rather than starting with the algorithm. That’s a really simple way of putting it. There are, of course, many factors that must be considered.  

I’d also like to mention the ethical framework we developed, which I believe played a pivotal role in our gaining support from the healthcare system. This comprehensive framework addressed various concerns surrounding healthcare AI, including reimbursement, safety, patient outcomes, liability, racial and ethnic bias, and the safeguarding of data. Of course, you can’t address every concern, because you never know what may manifest in the future.

It’s interesting to look at the bioethical principles that physicians and healthcare providers have abided by for thousands of years. They are very elementary principles that exist in every culture around the world and revolve around benefitting the patient and doing them no harm. Central to these principles is the pursuit of justice, meaning people should be treated equally. Then there’s autonomy of the patient, which refers to a patient’s ability to make decisions about their own health.  

We can never perfect all these principles. That is not the point. The point is that people across all cultures care about these things and have cared about them for thousands of years. And so, if you have a framework that addresses these principles and can measure how much an AI, healthcare system, or physician is able to meet each of these ethical principles, you can say, “Maybe we’re leaning a bit too much on autonomy, let’s focus more on the justice side,” or the other way around. 

Digital Diagnostics: As you’ve mentioned, when developing LumineticsCore™, you gave special consideration to the ethical aspect of trusting a computer rather than a doctor to make a diagnosis. Can you expound on that and delve deeper into how the bioethical principles you previously discussed are being addressed, either implicitly or explicitly, and the impact they have on the development and implementation of AI systems in the healthcare industry? 

Michael Abramoff, MD, PhD: At Digital Diagnostics, we made our consideration of bioethical principles very explicit by publishing the framework I mentioned earlier. We continue that work in terms of reimbursement and the regulatory approach to AI, focusing on things like safety and efficacy. A lot of this is implicit. Physicians, providers, nurses, and payers may not think about bioethical principles, but these fundamental ideals affect everything they do, influencing their actions and impacting the comfort level of patients.

So, either implicitly or explicitly, these principles are being addressed. It’s just easier if it’s explicit, because then you can say, “This is how we meet that bioethical principle.” In contrast, a more implicit approach might involve simply stating that work is conducted in an ethical manner. In that case, it’s important to delve deeper and ask, “How do you meet this ethical principle? What does ‘ethical’ mean in your context?” I think regardless of the approach – implicit or explicit – it’s a good idea to ask these questions. I also believe that as we make our ethical commitments explicit, others, such as the FDA, are quicker to jump on board.

For instance, after closely monitoring our work on the ethical framework, the FDA incorporated it into their regulatory considerations. The same goes for the Centers for Medicare & Medicaid Services (CMS), the federal agency that runs Medicare and Medicaid. The framework is positively impacting various AI systems, not only the ones we build.

Digital Diagnostics: As the founder of Digital Diagnostics and someone who was actively involved in introducing an innovative AI technology to the market, what work would you say is left to do as we look to drive adoption of AI in healthcare? 

Michael Abramoff, MD, PhD: There’s a lot of promise in healthcare AI, such as lower cost, better quality care, better access, and the potential to eliminate health inequities. It’s easy to see how all of this would be possible with autonomous AI healthcare solutions. However, the responsible adoption of AI in healthcare requires proving the technology’s effectiveness.  

At Digital Diagnostics, for instance, we did all the work of creating our autonomous AI platform, LumineticsCore™. We built it, deployed it, and now it’s widely used. Our work didn’t end there, though; we needed to demonstrate its effectiveness. This is where randomized clinical trials played a pivotal role. The initial outcomes from these trials indicated that health inequities were being reduced and essentially eliminated. Additional studies have also shown an increase in productivity, specifically specialist productivity, which had never been demonstrated with any other type of technology. So, we have evidence that our autonomous AI is functioning as designed, and I believe that’s hugely important.

We have support from all stakeholders, but that support is contingent on our ability to demonstrate an improvement in patient and population outcomes. I think this tangible proof is vital not only for fostering more adoption, but also for sustaining success, securing improved reimbursement, and attracting more investment on the R&D side. I also believe all of this will follow the success stories that are starting to develop, which is why it’s important to continue to make sure that all stakeholders accept what the healthcare AI community is doing and that companies operating in this space continue to do AI the right way.

Ultimately, I think that in 5 to 15 years, people will be more at ease with computers making medical decisions and performing other tasks in healthcare. As healthcare AI becomes more common, it will be easier for stakeholders to differentiate between helpful and harmful AI.

There can easily be backlash. If those operating within the healthcare space commit unethical acts or take unethical approaches, there may be concerns from regulators, Congress, payers, or patients. That is why, as we continue to deploy healthcare AI solutions, we must remain committed to doing AI the right way.