Clarity urged around using AI in advice
Major accounting group CA-ANZ has pointed to a potential dilemma that will confront both accountants and financial advisers when they provide advice derived from the use of artificial intelligence (AI).
Responding to a proposals paper for introducing mandatory guardrails for AI in high-risk settings, the big accounting group has called for greater clarity around what might ultimately represent a “high risk” setting.
It said this clarity is necessary where accountants provide advice based on an AI system that has analysed data.
“While our members offering services to consumers use a multitude of systems which incorporate AI tools to deliver those services, who they offer their services to is determined by their business model rather than an AI system,” CA-ANZ said. “For example, a small accounting practice may only offer its services to consumers in its local area, and a larger regional practice only to consumers that have a certain net worth.”
“To our understanding of the principles, the AI systems these practices use to deliver their services would not be considered high-risk AI systems. What is unclear is whether, when accountants provide advice based on an AI system used to analyse data, that system would be designated a high-risk AI system. For example, strategies to improve cash flow or strategies to grow a business in line with industry benchmarks.”
“Potentially, the larger the practice, the more likely it is to deploy AI to analyse data and provide more comprehensive information to clients so they can make an informed decision.”
“As we understand the principles, key will be to assess the severity and extent of potential impacts of a decision based on the information generated by an AI system,” the response said.
CA-ANZ said that, on this basis, further guidance is needed to clarify the elements to be considered in assessing the risk of an AI system, including illustrative use cases of both high-risk and low-risk AI systems.
“Guidance could also reflect the evidence that will be expected of deployers of AI systems to prove to a regulator, or regulators, how they have assessed their AI system against the proposed principles and concluded the risk is low or high,” it said.
“If this is to be mandated, we are concerned that the lack of clarity will lead to inconsistent interpretation, making it difficult to apply in practice. A clear definition of ‘high risk’ will be key to the effectiveness of mandatory guardrails.”