It is no wonder it is generating a huge amount of interest, given its potential to change how businesses operate and how people live their lives (as well as the technophobia that comes with such advancement). For some, we are entering a new era of automation that will make our lives simpler; for others, it is our very own living Black Mirror nightmare. So what is chat AI, and how is GPT-4 different?
What is chat AI?
Chat AI is a subset of AI that has become increasingly common – most people encounter it when using websites with interactive customer service chat windows (known as chatbots). ChatGPT is at the pioneering end of this fast-evolving technology.
In short, chat AI takes an input such as a question, image or video feed, and provides an output such as an answer, decision or piece of information. The goal is to create machines that can perform with a degree of human intelligence, recognise speech, understand natural language and make decisions.
As good as the most sophisticated chat AI is, it’s not perfect. There are various reports of bias and inaccuracy in outputs, but the very nature of the technology means it’s capable of learning and improving its performance.
The technology underpinning ChatGPT relies, in part, on a technique called Reinforcement Learning from Human Feedback (RLHF). This can be a double-edged sword. Human input helps the model generate more accurate results, but it also infuses those results with the biases and prejudices of the real-life trainers.
Just as human input can skew the quality of the results, the size and range of the data sets available to the technology will also affect the performance. Developers are continually assessing such issues to mitigate their impact and improve the technology.
GPT-4 is described by OpenAI as having ‘broader general knowledge and problem-solving abilities.’ For starters, it can process 25,000 words at once, around eight times more than its predecessor. It can now work from imagery (such as creating a recipe from a picture of available ingredients) and learn from the input you give it (such as composing songs, writing screenplays, or writing in your own style). Some big-name companies are already finding value in the technology: Microsoft is using it to turbocharge its Bing search engine, and Duolingo is testing it for its language-learning app.
OpenAI also claims the new version is safer and more accurate. It’s 82% less likely to respond to requests for disallowed content, and 40% more likely to produce factual information than the previous iteration.
Exposures attached to chat AI
As chat AI becomes more prevalent, there’s a need to recognise the exposures it creates and to understand where the associated liabilities attach.
For example, who is liable if misinformation from a chatbot leads to a person taking actions in a professional capacity – architect, lawyer, accountant – that negatively impact their business or their clients?
And what about in the field of healthcare? Inaccurate, AI-generated information could result in a patient receiving the wrong diagnosis and inappropriate treatment. Who is liable if the patient suffers injury or death? Is it the medical professional, the medical centre or the technology provider?
Intellectual property is another area where chat AI is creating significant exposures to understand and address. Who owns the content produced by an AI-driven chatbot? Is it the technology developer or the corporate user? There are also questions about what source material has been mined to generate the output and whether it has been accessed and/or used improperly.
Those developing the technology may face accusations of infringing another company’s intellectual property. Conversely, they may discover a third party has used their own intellectual property without the appropriate permissions. Defending and/or pursuing such IP claims can be costly, complex, and time-consuming.
Cover for chat AI risks
While AI may seem new, the insurance industry has been providing solutions for this technology as it has evolved, with coverage extended under various policies such as professional indemnity, intellectual property infringement, digital health and fintech. Businesses working with AI also carry a significant data exposure, because data is what powers AI’s decision-making, so cover is also available for cyber exposures such as data breaches and ransomware.
The law around AI is evolving quickly, with proposals for regulating AI emerging across the globe. As liability issues emerge, a comprehensive policy addressing the full range of concerns should be considered not only by those developing AI, but also by those using it.
For the insurance industry, identifying and understanding these liabilities will be crucial to support the wider use of AI technology as automation advances.
If this is how far ChatGPT has come in the past few months, what is in store for us next? The insurance industry would be remiss not to keep pace.
If you have any questions about ChatGPT, or about AI and how it can be insured, please email tech@cfc.com.