Revisiting AI risk: 3 more threats shaping the future

As AI becomes deeply embedded in everyday operations, businesses are facing a new wave of risks. From unpredictable outputs to questions of transparency and accountability, here are three risks every business should be watching closely.

Emerging Risk | Article | 2 min read | Wed, May 21, 2025

Twelve months on from when we first explored top AI risks, the threat landscape has evolved dramatically. Rapid advances in technology have pushed AI further into the mainstream, reshaping how businesses work, from AI-driven customer support to insights drawn from data and analytics. Yet a recent CFC survey found that while over 70% of businesses plan to use AI more frequently, just 34% feel they have a strong understanding of it.

Bridging the information gap is essential for businesses to manage emerging threats and build smarter, safer AI strategies. But with threats shifting fast, staying on top of AI risks has never been more challenging. To help you get ahead, we’re spotlighting three more AI risks every business needs to understand today.

3 common points of failure

  1. Hallucinations

    AI hallucinations occur when a model generates information that sounds convincing but is actually false or misleading. These errors often arise when an AI system fills gaps in its training data with incorrect assumptions or patterns. In high-stakes environments like healthcare, law, finance or engineering, such hallucinations can easily go unnoticed and be relied upon, as when two US lawyers cited non-existent legal cases in a court filing after using ChatGPT.

    When AI-generated misinformation is treated as fact, the fallout can be severe, leading to misdiagnoses, flawed legal advice, financial errors and even structural damage. That makes strong verification processes, ongoing human oversight and comprehensive insurance to support AI integration and usage more vital than ever.

  2. Black box dilemma

    The black box dilemma is the challenge of understanding how and why an AI system arrives at a particular decision. While we can often see the inputs and final outputs, the inner workings of complex models remain largely opaque. This creates a major hurdle when it comes to building trust, ensuring accountability and assessing whether an AI is acting fairly and without bias.

    Imagine an autonomous vehicle faced with a split-second decision at a yellow light. It chooses to accelerate instead of stopping and causes a minor collision. If no one can explain how or why the AI reached that decision, it becomes difficult to assign responsibility or correct the error. A continued lack of transparency around how AI systems reach their conclusions can erode trust. As companies explore new ways to use these tools, comprehensive insurance can offer a valuable safety net, helping them adopt AI with greater confidence.

  3. Intellectual property (IP) infringement

    AI-generated content is creating new challenges in the world of IP, raising questions around originality and ownership. Generative AI models are often trained on vast datasets that may contain copyrighted, trademarked or patented material, creating the risk that outputs unintentionally replicate protected works. Without careful oversight, businesses using AI may find themselves infringing on existing IP.

    Take Getty Images Inc and others v Stability AI Ltd. In this case, Getty Images alleged that Stability AI trained its Stable Diffusion model on copyrighted images without permission, enabling a range of AI-generated outputs that allegedly infringed Getty's IP rights. The case illustrates how AI models can lead to unauthorized reproduction, raising significant legal and ethical concerns around IP. With the legal framework still catching up, companies adopting AI for content creation or distribution should tread carefully and consider insurance to safeguard against potential financial losses and reputational fallout.

How to mitigate AI risk

AI risks are evolving at breakneck speed, from misinformation to hidden decision-making to the threat of IP infringement. But that's not to say businesses should avoid AI and limit innovation. It's about getting the right protections in place, enabling them to harness the full power of AI with greater confidence and security.

For businesses embracing AI, the best way to stay resilient is by managing risk proactively. That includes comprehensive insurance coverage designed to keep pace with innovation. 

Ready for the latest insights on how AI is transforming risk, from regulatory shifts to real-world business impact? Sign up now to receive exclusive content and early access to expert analysis.