AI-written content can spark liability concerns
Source: Asia Insurance Review | Jan 2024
Media organisations that publish AI-generated content should be transparent about how and when they are using AI and ensure that human checks and balances are in place.
Insurers are not excluding or limiting coverage for AI-related exposures, but media organisations can expect more questions about their use of AI-generated content when they renew their media liability policies, according to insurance brokers.
Aon managing director and national leader Eric Boyum said, “The risks have to be assessed for each individual application. It’s not just what it is and how its risks work, but what are you doing with it, how have you trained it, and in what ways are you governing that.”
Most large companies are experimenting with generative AI, and there are many potential applications. Whether AI-generated works can be protected by copyright under US law remains unclear, and disputes are being handled on a case-by-case basis in court.
Statutory and case law have yet to establish definitively whether the use of AI models is lawful. It is also becoming increasingly difficult to distinguish between AI-generated and human-generated content, which makes transparency and disclosure important, according to Marsh McLennan.
Whenever businesses use AI to generate content, there should still be human oversight, she said. From a coverage perspective, media liability, cyber liability and technology errors and omissions are among the policies that could respond.