APRA’s warnings on fast-moving AI implementation

Financial services companies have been placed on notice that they will be held responsible not only for their own management of the use of artificial intelligence (AI) but also for that of their suppliers.
The Australian Prudential Regulation Authority (APRA) has made the position clear in a letter to its regulated entities in which it admits that the adoption of AI is running well ahead of existing regulatory capabilities and that gaps have already become obvious.
The letter notes that “AI threats are increasing, but information security practices are struggling to keep pace”.
“APRA observes that AI adoption is materially changing the cyber threat landscape for regulated entities. The use of AI increases the pathways that cyber attackers can use and can lead to more frequent cyber-attacks.
“Common attack pathways observed include prompt injection, data leakage, insecure integrations, exploit injection and the manipulation or misuse of autonomous AI agents. AI can shorten the attack cycle and increase speed, coordination and impact. At the same time, entities are using AI to improve threat hunting and vulnerability identification, with the challenge being remediating at the speed with which vulnerabilities are identified,” it said.
“Concerns were noted across several areas. Identity and access management capabilities have not yet adjusted to non-human actors such as AI agents. The volume and speed of AI-assisted software development is placing strain on the effectiveness of change and release management controls.
“APRA observed gaps in the scope and coverage of security testing programmes for both AI implementation and responding to the AI-augmented threat environment. The implementation timelines for information security remediation activities, such as patching and configuration management, are not consistently aligned to the accelerated threat environment. These issues are compounded by the variability across organisations’ technology deployments and the increasing volume of discovered vulnerabilities and threats requiring priority remediation, which is creating a significant backlog,” it said.
“The use of enterprise AI tools by staff outside approved control frameworks is also a concern,” the APRA letter said.
“Whilst strategies to encourage staff experimentation and progress cultural change are commended, the calibration of these activities to risk appetite appears weak. In many cases, preventative controls were lacking, with entities relying primarily on policy direction or detective, after-the-fact measures, rather than enforceable technical restrictions or robust preventative controls.”
The letter also noted that APRA had observed some entities heavily dependent on a single provider for multiple AI use cases. Few entities had demonstrated robust contingency planning or tested exit and substitution strategies for critical AI providers. Contractual arrangements often lagged practice, with limited evidence of specific provisions addressing audit rights, model updates and deviations, incident notification or changes to data handling.
AI capabilities are increasingly embedded within software, platforms or developer tools. This can mean upstream dependencies such as foundation models, training data sources and fourth-party service providers are opaque, which limits entities’ ability to independently assess model performance, bias, resilience and security. Taken together, these variables challenge an entity’s ability to completely and effectively assess and manage risk.
APRA expects entities to manage supplier risks. This would include, at a minimum:
- mapping and maintaining visibility over the full AI supply chain, including material, third-party and fourth-party dependencies;
- establishing contractual and governance arrangements that provide sufficient transparency, auditability and assurance over AI services;
- maintaining the ability to understand model behaviour, material changes, performance issues and outcomes, and risk management practices across the service lifecycle; and
- actively managing concentration risk, including planning for plausible and systemic failure scenarios and assessing the credibility and feasibility of substitution, portability or exit arrangements for critical AI providers.