ChatGPT, Core Systems and Insurance Executive FOMO

I’ve been speaking with many insurance industry executives, vendors, and analysts recently, and there has been a consistent theme of insurance board members and C-level executives asking about their company’s use of AI, specifically ChatGPT and generative adversarial networks (GANs). In many ways, this is a very positive thing. AI in general can be both an enabler and a disruptor of business, and it’s important for senior leaders to track such technology at the macro level.

However, there does seem to be a level of executive FOMO among insurance leaders, driven by the high profile of generative AI tools like ChatGPT, which can create static around the broader value of AI throughout the insurance enterprise. There are also legitimate industry concerns about bias in AI models, which has the potential to create serious regulatory issues and requires a deeper understanding of how these models learn.

Discriminative vs. Generative AI Modeling

Most of the AI models used in insurance today are discriminative learning models: they are trained on labeled, real-world data to perform predictive classification, seeking patterns that indicate things like fraud or the likelihood of a short-term disability claim rolling over to long-term disability. Because their training is grounded in supervised, real-world feedback, they are generally reasonable to audit, and they have been in production use for years.
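As a concrete illustration, here is a minimal sketch of such a model using scikit-learn. The claim features and labels are invented for the example, but the pattern is the one described above: supervised training on labeled outcomes, producing a model whose weights can be inspected for audit.

```python
# Minimal sketch of a supervised, discriminative classifier.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical claim features: [claim_amount, days_on_disability, prior_claims]
X = rng.normal(size=(1000, 3))
# Hypothetical labels from adjudicated outcomes: 1 = rolled over to LTD, 0 = did not
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Unlike a generative model, the learned weights can be inspected directly,
# which is part of what makes these models reasonable to audit.
print("feature weights:", model.coef_)
print("holdout accuracy:", model.score(X_test, y_test))
```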

Generative AI, based on deep-learning generative adversarial network (GAN) models, enables creative, qualitative functions like smart interactive chat, pattern analysis, content creation and summarization, and media generation. Stated simply, generative adversarial models seek interesting patterns in data by pitting two unsupervised sub-models against each other in a zero-sum game that takes real data inputs and attempts to create new, high-quality outputs based on that data. For example, a large volume of online customer service chat logs could be used to train a model to supplement or replace a service rep in chat. A generator model analyzes the input to create a proposed answer to a question, and a discriminator model with access to the same inputs tries to prove the proposed answer is generated rather than an original input. The two models train against each other until the discriminator can do no better than a 50% chance (a coin flip) of telling original from generated. Because the model is unsupervised, it can iterate much faster than a model that requires human feedback, enabling much faster turnaround but less human curation of odd behavior.
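To make the generator-versus-discriminator dynamic concrete, here is a minimal, hypothetical sketch of that adversarial training loop in PyTorch. The toy two-dimensional “real” data and tiny networks stand in for the chat logs and models described above; a production system would be vastly larger, but the loop has the same shape.

```python
# Minimal sketch of the GAN training loop described above, using PyTorch.
# Data and network sizes are toy values, for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a distribution the generator must learn to mimic.
def real_batch(n=64):
    return torch.randn(n, 2) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1. Train the discriminator to tell real inputs from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# At equilibrium, the discriminator approaches a 0.5 "coin flip" on generated data.
with torch.no_grad():
    p = torch.sigmoid(discriminator(generator(torch.randn(256, 8)))).mean()
print(f"discriminator's average 'real' score on generated data: {p:.2f}")
```

Note that nothing in this loop requires a human to label or review individual outputs, which is exactly why such models iterate quickly but leave odd behavior uncurated.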

Group Insurance, AI & Core Platforms

Group insurance is highly regulated, and insurers must jump through hoops to meet state and federal requirements. Ask any insurer about readability regulations and the need to produce “pixel perfect” documents. The insurance industry demands precision, and our technology use has to support that precision. Core insurance systems in particular need to be very transparent and accurate in managing customer data, maintaining contractual policy obligations, and tracking financial transactions over time.

What’s important for insurance executives and board members to understand is that AI modeling, and especially generative AI, can create market conduct and regulatory issues. Unlike directed algorithms, it can be difficult to ensure that illegal or unethical bias is not built into these models through their input data. Early AI face recognition is a good example: trained without a diverse population of headshots, it produced many false positives, and wrongful police stops, for some minorities. For insurance underwriting, AI needs to be firmly taught that redlining is illegal, even if excluding neighborhoods appears to provide better short-term results.

At FINEOS, we recognize there are areas of the core system lifecycle where different kinds of AI can enhance the process, but their application needs to be thought through carefully to avoid crossing regulatory guidelines. Discriminative AI is very useful for fraud detection and other behavioral prediction, and for supercharging process workflows with recommended or automated next-best-action capabilities. Generative AI is more useful for customer communication, service support, and surfacing macro business patterns that are not obvious but could aid strategic planning and long-term trend analysis.

Insurance executives are right to ask their organizations to explore and report on their use of AI, good and bad. The impact of AI on the industry going forward will be as broad and deep as the impact of the mainstream internet in the early 2000s, with the same magnitude of opportunities, risks, and false starts.

For a deeper dive, watch our digital chat, “How AI Is Impacting the Insurance Industry.”
