Future trends shaping AI regulation

As artificial intelligence (AI) continues to transform industries far and wide, it’s time for regulation to catch up. Here’s how the landscape will change.

Technology | 6 min read | Tue, Jul 11, 2023

When you consider the impact of AI on our lives as consumers, it’s no surprise that industries worldwide are being transformed too.

Just as predictive text on smartphones uses language models to suggest words far faster than fingers can type, AI is making all kinds of business processes more accurate and efficient. But is AI developing too fast? 

We’re not suggesting that AI will become uncontrollable and set out on world domination. But it does have key industry figures worried, such as the ‘godfather of AI’ Geoffrey Hinton, who resigned from Google over concerns about the direction AI is heading.

This is why we’re talking about an expanding regulatory gap. As AI continues to develop and improve ways of working, does the regulation currently exist to safeguard those involved in case of a mistake or fault?

Whatever the answer, one thing is certain: as we use AI more and more, new exposures will emerge. This will cause big changes in how we approach the technology, as the global community looks to strike a balance between innovation and ethical considerations. Still, the best way of minimising risk is to keep a close eye on today’s rapidly changing regulatory landscape – and to forecast where it’ll turn next.


The current state

Today, the global regulatory landscape sees different countries adopting very different strategies.

Some nations have taken a proactive approach, implementing comprehensive frameworks that govern AI development and deployment. Look no further than the European Union's (EU) General Data Protection Regulation (GDPR), which has been crucial in setting standards for transparency, data protection and the right to explanation in AI systems.

Presently, the EU is looking to classify AI systems according to the risk they may pose to users, with a sliding scale of penalties and enforcement actions. China, meanwhile, fearing that AI could undermine national unity, is racing to regulate: it is drafting rules that require companies to register generative AI products with its cyberspace agency and submit them for a security assessment before release.


Future trends

Globally, we are likely to see a greater focus on AI regulation centred around five key trends. 

  1. Harmonisation

With the same AI products being used in multiple countries, it stands to reason that governments will ramp up their collaboration efforts. As different regions work together to establish common standards and best practices for AI, cross-border compliance should become easier and AI development more streamlined.

  2. Ethical frameworks

Much of AI is uncharted territory, so the coming years will see more comprehensive ethical guidelines that address bias, fairness and accountability. It will become standard for organisations to implement measures that ensure responsible AI development and usage across their entire operations.

  3. Transparency

To most of us, AI operates behind a shroud of secrecy. But going forward, demand for transparency in AI algorithms and decision-making will grow, and regulators will likely require organisations to provide clearer explanations for AI-driven outcomes. Particularly in industries like healthcare and finance, this will strengthen trust and mitigate the risks that come with black-box AI systems. Technology to assess the reliability of AI models is already emerging.

  4. Risk-based approaches

Focus is set to intensify on the potential harm AI can cause. Regulators will seek to identify and assess risks associated with AI systems, leading to targeted regulatory interventions – an approach that will foster innovation while safeguarding users against risk.

  5. Constant adaptation

AI development shows no signs of slowing, and regulation will have to evolve constantly to keep up. As we look to build an agile regulatory framework, expect to see AI developers, industry experts and society at large collaborate more closely than ever to deliver the flexibility that’s required.


Staying onside

Even as the regulatory landscape matures, AI will still introduce new exposures. Taking out the right insurance policy can protect organisations against their unique risks, and empower them to use and benefit from AI with confidence.

Traditionally, technology companies haven’t considered their regulatory exposure, as they’re not governed by professional bodies in the way more traditional professions are. But as technology pervades daily life, regulatory risk will undoubtedly grow into a much bigger exposure for emerging technology companies.

That’s where CFC comes in. Uniquely, we provide broad regulatory cover including defence, fines and penalties – helping to give today’s technology companies priceless peace of mind.

Learn more about our technology coverage by getting in touch with our team of leading technology underwriters.