The Australian Securities & Investments Commission (ASIC) is urging financial services and credit licensees, including insurance companies, to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence (AI).
The call comes as ASIC’s first state-of-the-market review of the use and adoption of AI by 23 licensees found there was potential for governance to lag AI adoption, despite current AI use being relatively cautious. The licensees are in the retail banking, credit, general and life insurance, and financial advice sectors, where AI interacts with or impacts consumers.
During 2024, ASIC analysed information about 624 AI use cases that were in use or in development as at December 2023, and met with 12 of the 23 licensees in June 2024 to understand their approach to AI use and how they were considering and addressing the associated consumer risks.
ASIC chair Mr Joe Longo says that making sure governance frameworks are updated for the planned use of AI is crucial to licensees meeting future challenges posed by the technology.
Findings of the review
“Our review shows AI use by the licensees has to date focussed predominantly on supporting human decisions and improving efficiencies. However, the volume of AI use is accelerating rapidly, with around 60% of licensees intending to ramp up AI usage, which could change the way AI impacts consumers,” Mr Longo said.
ASIC’s findings reveal nearly half of licensees did not have policies in place that considered consumer fairness or bias, and even fewer had policies governing the disclosure of AI use to consumers.
"It is clear that work needs to be done—and quickly—to ensure governance is adequate for the potential surge in consumer-facing AI," Mr Longo said.
He says AI can bring significant benefits, but that serious risks can emerge if governance processes fail to keep pace.
“When it comes to balancing innovation with the responsible, safe, and ethical use of AI, there is the potential for a governance gap – one that risks widening if AI adoption outpaces governance in response to competitive pressures,” Mr Longo said.
“Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment, and data security and privacy failures, all of which have the potential to cause consumer harm and damage to market confidence.”
Mr Longo says licensees must consider their existing obligations and duties when it comes to the deployment of AI and avoid simply waiting for AI laws and regulations to be introduced.
“Existing consumer protection provisions, director duties, and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures in place to deal with the use of new technologies,” Mr Longo added. “This includes proper and ongoing due diligence to mitigate third-party AI supplier risk.”