ASIC to fight advice AI with compliance AI, says leading wealthtech
The Australian Securities and Investments Commission (ASIC) will likely deploy artificial intelligence (AI) technologies to perform regular audit checks of financial advisers, including assessments of machine-generated advice document submissions, predicts Matt Esler, chief executive of wealthtech developer Padua.
Esler said the corporate regulator will soon leverage AI to check advice documents and file notes for compliance, including records of advice (RoAs), statements of advice (SoAs), design and distribution obligations (DDO) reporting documents, and the underlying datasets, all of which are increasingly being generated by in-house AI systems.
As advice firms seek out AI systems to help meet an increasing compliance burden, Esler said the corporate regulator will also leverage its own AI system to monitor and assess these documents and ensure they meet Australian regulations and laws.
A recent survey by chip maker NVIDIA found that 91% of financial services firms worldwide are either assessing AI or currently using the technology in production.
For instance, he noted that many advisers are leveraging natural language and generative AI systems in day-to-day operations to record meeting minutes or to generate fact-find information and file notes.
Esler warns, however, that by doing so, advice firms risk “inadvertently providing recommendations in the information gathering stage”. This, he added, would necessitate the submission of a follow-up advice document within five days.
“We expect ASIC will be monitoring this.”
Esler noted that large language models (LLMs) can perform routine data analytics to ensure the validity of client datasets during the advice process. This helps to confirm that the ‘client story’ is viable and consistent throughout the entire advice process, saving advisers significant time in checking and correcting client data, he said.
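As a rough illustration of the kind of dataset check Esler describes, the sketch below passes a hypothetical client fact-find to a general-purpose chat-completion API and asks it to flag internal inconsistencies. The client figures, prompt wording, and model name are assumptions for the example only; the article does not describe Padua’s or ASIC’s actual tooling.

```python
# Illustrative only: using a general-purpose LLM to flag inconsistencies in a
# client fact-find. The data, prompt and model name are assumptions for this
# example; the article does not describe Padua's or ASIC's actual systems.
import json
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Hypothetical fact-find record with a deliberate inconsistency:
# the stated surplus exceeds income minus expenses.
fact_find = {
    "age": 44,
    "annual_income": 90_000,
    "annual_expenses": 70_000,
    "stated_annual_surplus": 45_000,
    "risk_profile": "conservative",
    "goal": "retire at 55 with $1.2m in superannuation",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are reviewing a financial advice fact-find. "
                "List any figures or statements that appear internally "
                "inconsistent and explain why. Do not provide advice."
            ),
        },
        {"role": "user", "content": json.dumps(fact_find)},
    ],
)
print(response.choices[0].message.content)
```

In practice a licensee would keep a human in the loop and log any flags alongside the file note, rather than acting on the model’s output directly.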
“Within investment management, AI can be used for portfolio optimisation by analysing data and developing portfolios that maximise risk-adjusted returns, as well as helping to execute trades automatically based on pre-defined strategies or parameters.”
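As a sketch of the kind of portfolio optimisation Esler refers to, the snippet below maximises a common risk-adjusted return measure (the Sharpe ratio) using a standard mean-variance approach; the simulated returns, risk-free rate, and long-only constraint are illustrative assumptions rather than anything specific to Padua.

```python
# A minimal mean-variance sketch: choose portfolio weights that maximise the
# Sharpe ratio (risk-adjusted return). Returns are simulated; the risk-free
# rate and long-only constraint are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0004, scale=0.01, size=(250, 4))  # daily returns, 4 assets

mean_returns = returns.mean(axis=0) * 252          # annualised expected returns
cov_matrix = np.cov(returns, rowvar=False) * 252   # annualised covariance matrix

def negative_sharpe(weights, risk_free_rate=0.03):
    """Negative Sharpe ratio, because scipy minimises rather than maximises."""
    port_return = weights @ mean_returns
    port_vol = np.sqrt(weights @ cov_matrix @ weights)
    return -(port_return - risk_free_rate) / port_vol

n_assets = returns.shape[1]
result = minimize(
    negative_sharpe,
    x0=np.full(n_assets, 1.0 / n_assets),          # start from equal weights
    bounds=[(0.0, 1.0)] * n_assets,                 # long-only weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
)
print("Optimised weights:", result.x.round(3))
```

Automated trade execution against pre-defined parameters, as mentioned in the quote, would sit in a separate layer that turns target weights into orders and is outside this sketch.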
AI is not without risks – and unchecked adoption can cause more problems than it solves. Esler cautions that advice firms will need to manage their use of AI “through a combination of robust technical measures, comprehensive understanding of the regulatory environment and continuous oversight”.
With the breakneck advancement of generative AI technologies, he further urged advice firms to establish clear AI governance frameworks, ethical principles, and risk management strategies, as well as to invest in employee training and upskilling opportunities to build AI capabilities or collaborate with experts in the space.
“Progress is being made across so many different channels. Our research team uses AI not only within their primary and secondary research functions but also in automating data collection and analytics.
“We are also evaluating AI avatar video presentation capability for advisers to present advice via video to clients, and Padua is currently testing this option,” Esler said.
“However, scrutiny is essential. As some licensees have already experienced, AI does not come without its risks: comments made by advisers during this process are recorded, and the potential for advice to be provided in these meetings requires caution.
“There’s also the concern around ‘deep fakes’, where AI is used to impersonate another person, with fraud a major challenge for advisers and every person online.”