AI has the potential to reshape the global economy as vendors continue to innovate and users rush to take advantage. Already, 70% of businesses worldwide recognize that investing in AI is vital to avoid being left behind by their competitors. Just look at how the technology is helping to drive workplace efficiencies, improve service levels and generate a seemingly endless stream of content ideas.
But while AI undoubtedly brings many benefits, it also comes with risk. The recent AI Safety Summit showed how seriously the upper echelons of government take AI’s potential impact, as world leaders and industry innovators came together to consider the risks the technology poses today and in the future.
Whether AI innovation will continue its exponential curve is anyone’s guess. But a unique combination of exposures is already taking shape. Here are the top four risks that AI vendors and users need to be aware of.
1. Consent
Vast volumes of data are used to train AI tools, with this ‘training data’ gathered from a variety of public and private sources. Issues of consent arise where data is gathered by scraping online sources, or where the vendor cannot be sure where the data came from. Litigation is already emerging in which intellectual property (IP) owners allege their IP has been used in training data without permission. The user of a tool can also face an IP infringement claim, without even realizing it, because the tool lacked the requisite permissions or licenses for its underlying dataset.
Imagine you’re an artist whose work is included in the training data for a tool that produces high-quality art. Users can generate art in your style, yet you receive no attribution or royalties, and because anyone can now produce similar work quickly and cheaply, your livelihood suffers as well. You’d be doubly incentivized to take action: your data is being used without your consent, and your business is being harmed.
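On the vendor side, one basic mitigation is to track the provenance of training data and exclude anything without a clear license. The sketch below is a minimal illustration, not a compliance solution; the field names and the license allow-list are assumptions for the example:

```python
# Illustrative sketch: keep only scraped items whose license metadata
# explicitly permits reuse. The field names and allow-list are
# hypothetical; a real pipeline needs legal review, not just a filter.
ALLOWED_LICENSES = {"cc0", "cc-by", "public-domain"}  # assumed allow-list

def filter_training_items(items):
    """Drop items with unknown or non-permissive licenses."""
    cleared = []
    for item in items:
        license_tag = (item.get("license") or "").strip().lower()
        if license_tag in ALLOWED_LICENSES:
            cleared.append(item)
        # Items with missing license metadata fail by default:
        # unknown provenance is treated as no consent.
    return cleared

scraped = [
    {"text": "an artwork description", "license": "CC-BY"},
    {"text": "a forum post", "license": None},
]
print(filter_training_items(scraped))  # only the CC-BY item survives
```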
2. Bias
Just because a decision is made by AI doesn’t mean it’s free from bias. Algorithms are only as good as the data they’re trained on, so if underlying factors aren’t considered when gathering training data, the AI can actually bake in, reinforce and scale bias. This not only defeats a primary purpose of many AI tools, which is to remove the element of human bias, but is especially problematic because it works directly against values like inclusivity.
Failure of this kind can destroy a model’s value in legal, financial and healthcare analysis, produce discriminatory outcomes at an individual level, and damage the reputation of both the tool’s vendor and its user. And in healthcare, lives are at stake: cardiovascular risk models built on data from the Framingham Heart Study, whose participants were predominantly Caucasian, predicted risk well for Caucasian patients but underperformed for other ethnicities.
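One way to surface this kind of failure before deployment is to compare a model’s performance per demographic group rather than in aggregate. The sketch below is deliberately minimal; the records, group labels and 10-point threshold are assumptions, and real fairness audits use far larger samples and multiple metrics:

```python
# Minimal sketch: compare model accuracy per demographic group.
# The records, group labels and threshold are illustrative assumptions.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, prediction, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
scores = accuracy_by_group(records)
print(scores)  # {'group_a': 0.75, 'group_b': 0.5}

# Flag a gap of more than 10 percentage points between groups.
if max(scores.values()) - min(scores.values()) > 0.10:
    print("Warning: model underperforms for at least one group.")
```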
3. Data privacy
Since AI tools typically end up collecting large volumes of potentially sensitive data, they risk falling foul of privacy laws if that data isn’t handled correctly. This is particularly true for chatbots, which often integrate with backend systems, and for biometrics, where concerns are growing over personal datapoints like retina scans and fingerprints. Without the proper risk controls in place, the tool could be vulnerable, and the vendor or user could be deemed liable if an incident occurs.
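One common risk control is to redact obvious personal identifiers before text ever reaches an external AI service. The sketch below illustrates the idea only; the regex patterns are crude assumptions, and production systems should rely on vetted PII-detection tooling:

```python
# Illustrative sketch: redact obvious personal identifiers before
# sending text to an external AI tool. The regex patterns are crude
# assumptions; production systems should use vetted PII-detection tools.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace matches of each PII pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact(prompt))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```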
Then there’s the question of who owns the information that’s produced. If an AI tool processes sensitive corporate information belonging to your business, there’s a risk that, in the eyes of the law, ownership will pass to the business that owns the tool.
4. Regulatory action
While AI regulation is still in its infancy, we’re expecting an influx of rules as the world seeks to strike a balance between tackling AI risks and fueling innovation. The EU AI Act is the world’s first comprehensive AI law, and it demonstrates how risk will be categorized as well as the regulatory hurdles that vendors and users must overcome: designing the model to prevent it from generating illegal content, showing how the model works and reaches its conclusions, and disclosing that outcomes are generated by AI.
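On that last point, the transparency obligation can be as simple in principle as attaching an explicit disclosure to every output. The sketch below is illustrative only; the notice wording and fields are assumptions, not language from the EU AI Act:

```python
# Minimal sketch: attach an explicit AI-generated disclosure to model
# output. The notice text and metadata fields are illustrative
# assumptions, not legal wording from the EU AI Act.
from dataclasses import dataclass

@dataclass
class DisclosedOutput:
    content: str
    model_name: str
    disclosure: str = "This content was generated by an AI system."

    def render(self):
        return f"{self.content}\n\n[{self.disclosure} Model: {self.model_name}]"

result = DisclosedOutput(content="Summary of your policy...",
                         model_name="example-model-v1")
print(result.render())
```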
Still, there’s no global, unified approach to AI. Different jurisdictions are proposing different regulations, and vendors and users who deploy a tool in multiple regions must comply with them all at once. Failing to comply puts businesses at significant risk, with the EU AI Act introducing a four-tier system of hefty fines for infringements.
As a technology, AI has been around for decades. But the systems of the late 20th century are dwarfed by the intelligence and complexity of today’s models, which bring with them a raft of IP and liability issues. And with new innovations still to come, what’s clear is that each exposure must be considered on its individual merits, with products tailored accordingly.
By better understanding the AI risks of today, we can better prepare ourselves for the challenges and opportunities that lie ahead, and empower businesses around the world to maximize the AI opportunity.
Ready to talk more about AI? Put your questions to our experts here.