AI risks: Getting cover in healthcare

Healthcare’s unique risk profile is changing, driven by the proliferation of AI tools. So which exposures do you need to be aware of, and how can you move to cover them?

Technology Article 8 min Tue, Feb 13, 2024

In the race to innovate, the healthcare industry has been transformed by an influx of AI tools. Sure enough, these tools enable significant improvements in the provision of care. But they also raise profound—and in some cases, existential—questions around what these tools really are, whether they expose providers to new risks and who is responsible for them if an error occurs.

This really is uncharted territory. At present, there is little case law around the use of AI in healthcare and a lack of regulation for AI in general, making it difficult for decision makers to resolve AI-centered healthcare claims. At the same time, insurance policies are not adapting to AI innovations quickly enough, leaving providers exposed.

With lives quite literally at stake, the last thing any provider wants is to find itself liable and without cover for an AI-related data breach or malpractice claim. So which exposures do you need to be aware of, and how can you get the protection you need?

A skincare application: blurring the liability lines

In traditional methods of medicine, it’s easy to trace the blame when something goes wrong—think a clinician making a mistake or a piece of equipment malfunctioning. AI is blurring this liability line, bringing the concept of medical responsibility into question.

Say a healthcare company has developed an AI-driven skincare application. The app is used to assess whether skin lesions are cancerous via images taken with a smartphone camera, before referring the patient to an in-person appointment. However, on one occasion the app determined that a patient's lesion was not a risk, and so the patient did not attend a dermatology appointment. But the AI had focused on the wrong part of the skin, a failure that meant the patient's skin cancer went undiagnosed for six months, increasing the bodily injury risk.

If the patient had been evaluated by a human clinician, it would be clear where responsibility lay. But AI is a technology tool that relies on significant human input, with clinicians often involved in building the algorithm. So does responsibility fall to the healthcare practitioner who helped design the technology product, the tech company that developed the app for clinical use, or the patient's regular dermatologist who prescribed the app to the patient?

Our eHealth policy is a one-stop shop, offering healthcare providers and app creators bodily injury cover for failure to adequately assess a patient's symptoms via digital means, as well as for failure in technology products.

An AI scribe: causing significant financial loss

In computer science, ‘garbage in, garbage out’ is the idea that an AI’s output is only as good as the data and instructions it is given. If the AI learns from a limited dataset or there’s a flaw in the program’s code, errors become more likely. We’ve seen how this can impact diagnostic tools—an area being transformed by AI—but many other use cases apply.

For instance, we’re now seeing some digital health companies license AI-enabled medical scribe solutions to a number of telemedicine entities. The software can take clinical notes, determine appropriate coding for medical billing and assess the eligibility for payments under the patient’s insurance plan. However, if there are limitations in the algorithm, patient conditions and courses of treatment can be inaccurately captured, reimbursements incorrectly billed and health insurance claims erroneously denied, perhaps resulting in the company being subjected to a patient class action.

When an error like this occurs, how long until it is spotted? Considering the broad application of AI and the range of issues it faces, the financial cost and patient impact can really ramp up, making it essential for providers to get the right cover.

To ensure our eHealth policy doesn’t leave any gaps in cover, it provides protection against technology E&O claims as well as breach of contract.

Generative AI: the IP problem

In an issue that impacts industries from the art world to healthcare, the nature of today’s AI tools means they can infringe on existing intellectual property (IP)—often without the user’s knowledge. That’s because AI tools learn and produce output by processing vast volumes of training data, scraped from various public and private sources. But without the right consent and licenses, users can find themselves facing a claim.

In healthcare, a clear example is companies using large language models to aid in medical education. If the AI-generated responses were derived from existing medical literature without the medical authors’ knowledge or consent, those authors can claim copyright infringement. Without an eHealth policy that includes a clause for IP infringement, the healthcare company could find itself dealing with a claim alone.

As generative AI tools continue to transform how we live and work, the number of IP infringement claims will rise with them. It’s vital that providers discover if and how employees are using this technology, so that they can begin to close this protection gap.

Our eHealth policy affirmatively covers the breach and/or infringement of IP rights in the course of business activities as standard, allowing digital healthcare companies to invest and innovate.

The need for a dedicated policy

Soon, every healthcare provider will be a technology company to some extent, as innovations like AI continue to transform the delivery of care. But the majority of errors and omissions (E&O) policies are inflexible and can leave providers exposed. That’s why we built a dedicated eHealth policy, providing clarity and bridging the protection gap so that providers who harness the power of technology know they have the right cover.

Where non-dedicated policies fall short, CFC’s eHealth policy offers comprehensive coverage for bodily injury and financial loss across technology E&O, cyber and privacy protection, and professional liability, with automatic coverage for practitioners leveraging these new technologies.

Get to know CFC’s comprehensive eHealth policy in our product brochure. If you have any questions, please reach out to healthcare@cfc.com. Our team would love to hear from you.