
ASIC warns licensees governance must match AI adoption

Mike Taylor
30 October 2024

The Australian Securities and Investments Commission (ASIC) has sent a clear message to financial services licensees that they should not allow their adoption of artificial intelligence (AI) to get ahead of their governance structures.

ASIC issued the warning alongside a new report on AI (Report 798), derived from an examination of AI use by 23 Australian Financial Services licensees (AFSLs), which revealed that not all of them had appropriate governance arrangements in place.

ASIC’s new report and its warning come just a fortnight out from Financial Newswire’s Future of AI in Wealth Management event in Sydney.

The report noted that licensees were looking to increase their use of AI but that ASIC is “concerned that not all licensees are well positioned to manage the challenges of their expanding AI use”.

“Some licensees were updating their governance arrangements at the same time as increasing their use of AI. And in the case of two licensees, AI governance arrangements lagged AI use,” the ASIC report said.

Commenting on the report, ASIC chair Joe Longo said that, simply put, some licensees are adopting AI more rapidly than they are updating their risk and governance arrangements to reflect the risks and challenges of AI.

“There is a real risk that such gaps widen as AI use accelerates and this magnifies the potential for consumer harm,” Longo said.

“While the approach to using AI where it impacts consumers has mostly been cautious for licensees, it is worrying that competitive pressures and business needs may incentivise industry to adopt more complex and consumer-facing AI faster than they update their frameworks to identify, mitigate and monitor the new risks and challenges this brings,” he said.

“As the race to maximise the benefits of AI intensifies, it is critical that safeguards match the sophistication of the technology and how it is deployed. All entities who use AI have a responsibility to do so safely and ethically.”

“Our review comes at a pivotal time in the development of AI regulation in Australia. We support the Australian Government’s Voluntary AI Safety Standard and intention to introduce mandatory guardrails ensuring testing, transparency and accountability for AI in high-risk settings,” Longo said.

“However, licensees and those who govern them should not take a wait-and-see approach to legislative and regulatory reform. Current licensee obligations, consumer protection laws and director duties are technology neutral, and licensees need to ensure that their use of AI does not breach any of these provisions.”

Mike Taylor

Managing Editor/Publisher, Financial Newswire
