Minimizing risk in AI-driven healthcare

With the digital health revolution firmly underway, AI is proving to be a great enabler for better quality care at a lower cost. But to unlock AI’s true potential, providers must address new exposures across an evolving regulatory and liability landscape.


Delivering the future of healthcare

It feels like we’re just scratching the surface of artificial intelligence (AI), and yet rapid development is driving nothing short of a digital health revolution.

From assistive diagnostic and medical triaging tools to medical scribing and chatbots, providers worldwide are finding new, efficient ways to optimize systems, reduce errors and cut costs, all while elevating the employee experience and patient care. It’s no surprise that the AI healthcare market is set to explode from US$14.6 billion in 2023 to US$102 billion in 2028.

But is AI moving too fast? There’s no doubt it’s making a huge impact across healthcare. Yet rapidly introducing any new technology can expose providers to major risks, and AI is no different.

Emerging exposures

While the benefits of AI are clear to see, the pace of technological advancement is making it hard for us humans to keep up. Two AI programs recently passed the U.S. Medical Licensing Examination, yet it’s still possible for software to be released without third-party testing and before the Food and Drug Administration (FDA) reviews it. In the US, steps are being taken towards regulating AI in the healthcare and science sectors, including the Blueprint for an AI Bill of Rights.

In an industry where lives are at stake, the inability to truly validate a technology can have severe consequences down the line. Just look at the potential for algorithmic bias in diagnostic tools. The Framingham Heart Study cardiovascular risk score showed how AI can be a double-edged sword: it assessed heart-disease risk effectively for Caucasian patients but underperformed for all other ethnicities. The result was an inaccurate, unequal distribution of care that directly increased the likelihood of non-Caucasian patients suffering heart attacks.
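To make that bias risk concrete, below is a minimal sketch of a subgroup performance audit in Python with scikit-learn. The cohort, group labels, model and numbers are entirely synthetic and illustrative assumptions (this is not the Framingham score); the point is simply that overall metrics can look healthy while a stratified check exposes a group the model underserves.

```python
# Minimal sketch of a subgroup performance audit.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic cohort: clinical features, a demographic group label, outcomes.
n = 2000
X = rng.normal(size=(n, 4))
group = rng.integers(0, 2, size=n)  # 0 and 1 are hypothetical groups
# The outcome depends on one feature differently per group, mimicking a
# model trained mostly on one population generalizing poorly to another.
logits = X[:, 0] + np.where(group == 0, X[:, 1], -X[:, 1])
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train only on group 0, then apply the model to everyone.
model = LogisticRegression().fit(X[group == 0], y[group == 0])
pred = model.predict(X)

# Aggregate accuracy can look fine while one group is badly underserved;
# stratified sensitivity (recall) makes the gap visible.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: sensitivity = {recall_score(y[mask], pred[mask]):.2f}")
```

Running a check like this before deployment, and again as real-world data shifts, is one practical way for providers to validate the tools they adopt.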

At the same time, a perfect storm of increased regulatory scrutiny and little case law is creating a landscape of uncertainty: if AI is involved, who is liable when something goes wrong, the healthcare professional or the maker of the software that failed?

Bridging the gap

AI has the potential to transform healthcare, but clinicians, patients, data providers, healthcare systems and regulators need to play their part.

Patients are naturally hesitant about AI tools handling their health, and they must give consent for their data to be used. By putting the right data protocols in place, providers can reassure patients that their data is being used responsibly, while securing and validating AI inputs to deliver accurate, trusted outputs. With the right training, clinicians can handle AI tools with confidence and learn about the potential exposures, so they can do their part in minimizing risk.

Each provider comes with its own set of unique exposures and varying levels of risk. A dedicated eHealth policy can help cover evolving AI exposures, including bodily injury caused not only by a healthcare service but also by cyber and technology events, where a traditional policy could fall short.

As AI and its capabilities develop, so do the risks. With the right coverage, providers can protect their technology, clinicians and patients, to unlock the true potential of AI and transform the quality of care they deliver.


Get in touch with any questions at healthcare@cfc.com. You can find out more about CFC’s eHealth policy here.