Ethical AI includes explaining how models work
by Sarah Si
L-R: Mr John Maroney, Ms WenWen Chen, Mr Ian Chapman-Banks, Mr Marc Haushofer, Ms Sofia Kyriakopoulou and Ms Anne Lohbeck
Generative AI first made its way to the insurance industry in late 2022, according to SCOR group chief data and analytics officer Sofia Kyriakopoulou, where it “created a lot of excitement”.
However, Ms Kyriakopoulou also said, “The struggle was always scalability. The previous models were simply very laborious to train or to annotate. It really took an effort … to make them work. And then it was very expensive to [scale them up] so they could cover the line of business or a certain region, but it was very hard to go beyond.”
Generative AI models then came pretrained, she said.
“They worked very well with unstructured data in insurance,” she said, leading to the creation of proxies.
This year, the insurance industry then moved to shaping strategies, creating foundations and building the right frameworks around what could be called “responsible AI”, according to Ms Kyriakopoulou.
“As we move ahead, we are going to [see some] players getting all the benefits. But the technology is also still evolving, right? And we see incredible improvements of versions that come out,” she said.
An example, Ms Kyriakopoulou said, includes extracting elements and summaries from electronic health records that would allow underwriters to “consider more data points than they would have previously considered”.
Role of the regulator
According to Swiss Re CUO Specialty Anne Lohbeck, using AI to extract elements and summaries means that the ever-evolving legal, ESG and sanctions scope would need to be “reflected in a near real-time manner in the underwriting process”.
In this case, Global Asia Insurance Partnership CEO John Maroney said, regulators, being “better informed in terms of what the risks [and] benefits would be”, would play an important role in determining what reinsurers were allowed to do.
Responsible/ethical AI
Responsible or ethical AI can be “manifested”, Ms Kyriakopoulou said, as long as there is emphasis on the right framework and approach.
She said, “But how do you start the first steps of validating the performance, fine tuning and capturing all the artifacts we collect as we perform this process?”
When the model is deployed, she said, performance would need to be monitored for decay, so that necessary actions could be taken.
“Ultimately … emphasis [should be on] explaining to people how the models work. Explainability is now very much a tagline. We are using it [and] making the consciousness of the bias real. And I think the regulators are definitely driving this agenda,” Ms Kyriakopoulou said.
Cyber security
An area of risk for insurers is cyber security. “Any risk linked to data privacy and the use of sensitive data, we believe these are on top of regulators’ agendas,” said S&P Global Ratings director and lead analyst WenWen Chen.
However, SQREEM Technologies CEO and co-founder Ian Chapman-Banks also noted that regulations can never keep up with technology.
Mr Chapman-Banks said, “AI is probably going to make attempts at fraud more frequent and more complex and creative. But at the same time, insurers and reinsurers will be using AI to improve fraud detection … there is certainly no doubt that criminals will be trying to keep ahead of the regulators, the enforcers and the industry in terms of using AI.”
Cyber risk should be higher priority
“Malfunctions, copyright violations, faulty advice and data bias introduced by AI are just a few of the dangers we and our clients will have to contend with as we put these new technologies to use. We will also hear about data … which all of us here work with on a daily basis and is crucial to the success of using any AI driven tools,” Ms Lohbeck said.
With AI and digitalisation, as well as use of, and access to, data on the rise, “cyber risks should be higher on our radar”, said Ms Lohbeck.