Why APRA’s super performance test is failing members
A Financial Newswire roundtable of superannuation chief executives, board members and consultants, sponsored by Zurich Life and Investments, has revealed that a lack of member focus makes the APRA performance test badly flawed.
LW Laura Wright, CEO, NGS Super
PCH Peter Chun, CEO, UniSuper
AP Andrew Proebstl, CEO, LegalSuper
PC Paul Cahill, CEO, NESS
DW Darren Wickham, Head of Group Insurance, Zurich Life and Investments
RM Russell Mason, Partner, Deloitte
Chairman: Mike Taylor, Managing Editor, Financial Newswire
I don’t think any performance test is going to be fair because there are going to be arbitrary benchmarks, cut-offs and whatever.
I think it is a shame the way it has been carried out, with members reading in the press that they're in an under-performing fund when they may not be in the under-performing option – in lifecycle options, not all the lifecycle bands under-perform – and it is very arbitrary.
So we saw 39 funds resubmit their data – and they have every right to do that – but my understanding is that the resubmission of data dragged some funds from slightly under the benchmark to slightly over it, and that just shows how fine a line it is.
So I have sympathy for the 13 funds. They are not necessarily poor funds; rather, they have underperformed at a point in time, and that does not make them a poor fund. They might at the same time have offered superior insurance benefits, and they may offer other things that members greatly value.
So members might have over-reacted even when they were not in the investment option that underperformed, so I think it put those funds in a very difficult position.
I understand the policy objective and I think it's a noble one, but I do think there's a danger that funds focus on passing the test rather than on maximising returns, so there is a danger of a race to the middle.
For me, one of the issues is that it is a backward-looking test, so if a fund makes a change to improve things in the future, it will still be penalised for a number of years for what it has done in the past.
The test limits the ability to make decisions about down-side protection, which is particularly important for older members. So I understand the policy objective, but I think there are some challenges with the way it is being set at the moment.
Picking up on Darren's point, we have concentrated on risk-adjusted returns for a long, long time, and at the last investment committee meeting we had to start thinking, well, what does that mean in terms of the performance test and the heat map? You just have to be realistic about these things.
I think it has sharpened the focus of the investment team – whether that makes them less adventurous when they have the opportunity to be more adventurous, I am not sure – but, really, the performance test, the publicity attached to it and the reference to the Australian Taxation Office (ATO) website are certainly having an impact on all funds.

It's not just about poor performing products, because a member has no idea about the difference between a product and a fund. When you read the letter that has to be sent, it says you're in a poor performing product and therefore you might look to go into another product, and by default that just sends you to another fund. So I think that is very interesting: although it is a product failure, it is interpreted by everyone as the failure of a fund, and the ATO website directs you to a part of the website showing you funds you can move to.
Now, I don't know how that is going to play out when you get all of the different choices there, because I don't think the public will be able to make head nor tail of it. But the money is not just going from the 13 funds that have failed. I've spoken to a few different CEOs of funds over recent times, and one of the first questions I ask is how much money are you losing to AustralianSuper – and everyone is losing money to AustralianSuper.
Now, those funds weren't named as failed funds, so how do people know? So I think it has had an interesting impact in terms of the movement away from a number of funds that have not failed.
When I reflect on the amount of time we, as an industry, have devoted to lobbying and seeking to influence the performance test – when you add all that time up, what sort of a drain is that activity having on our organisations and on where our organisations need to go?
And when you look at the ideological thinking behind the Government's changes, it's very clear where they are going and what they're trying to do. When they see us advocating, and often advocating different views, it is very easy for them to dismiss the advocacy and feedback they're getting as coming from an industry that is inherently conflicted and can't even agree amongst itself.
I really think we have to question the best way to advocate for change in the future not just for the sake of getting a better outcome but so our organisations can have more time to just get on and do the things we need to do. I think it is a really significant issue.
When I reflect upon the performance test as implemented, I think the part that could have been different is this: once the test was applied and the under-performing funds were determined, could there have been a relatively short period for those funds to address the underlying issues and work with the regulator?
Did it have to be overnight that you're an underperforming fund and that's it? I know there were lots of discussions going on between the various funds and the regulator. But in terms of the outcome of the performance test, the incredible uncertainty it created – not just for funds but for the members themselves, at a time when we were all distracted by COVID-19 and all that COVID brought in terms of the community having confidence in superannuation – it was possibly the worst time to go through such a difficult process and confuse, distract and disappoint people.
Treasury and Government not for turning
Look, it has focused people’s attention and I think, in fairness, the regulator has for some time been trying to focus the attention of the industry.
Who remembers the fee template? And we still can’t compare fees and get things right.
So the challenge of a comparative tool is always going to be difficult. I would have loved to have seen this as MVP 1.0, but I think it is MVP 2.0 because, in fairness, Treasury did make some changes along the way.
There was intense lobbying – and actually very united lobbying when it came to the investment side of things – but Treasury and Government were not for turning, and the reason will come out in the economic committee hearings.
APRA was battered through the Royal Commission and it doesn’t want to go through that again.
So there was a huge regulatory framework around this test, and the unfortunate outcome is that we're going to be beholden to a statutory benchmark in terms of how we deliver the best outcomes to members. I don't know how you do a statutory benchmark with markets, with investment performance, with competitiveness, with member focus and all those things, so I think that has been a really unfortunate thing.
I think another thing for those 13 funds – and I am really not referencing any in particular – is that making the changes to meet the test doesn't just happen overnight. There are costs involved; you don't just go from 10% to 23% in a short space of time. So I think there are some real issues around portfolio construction. It's easy to drop the administration fee – you just take a hit on revenue or whatever it is you measure by – but on performance it's much harder, so I think some funds will struggle to turn things around to meet the second test.
The third thing is that I saw some interesting research done by Nick Callil which says, basically, that as time goes on, as you look at your rolling seven to eight-year returns, the chances of failing the test are actually going to increase.
So the chances of failing the test will increase over time, and I think that makes it very difficult. But, all in all, I think APRA has a new stick to hold funds to account, one that has data behind it. They have been looking for this in various iterations over time and this is just another one, so it has served their purpose.
A very blunt instrument
I understand what they are trying to do, but I think it is a very blunt approach – the old sledgehammer and walnut analogy comes up. I am at pains to say I feel sorry for those 13 funds: some of them deserved what they got, but for others, who missed by a tiny little bit, I feel terrible, because they should have been given a period to rectify before going on this list.
I would have thought they should have had a 12-month period to show cause, and if they couldn't then comply, then they absolutely should have been given the rounds of the kitchen.
It is a very blunt instrument and the law of unintended consequences is quite dramatic. You only have to look at the trade magazines to see how much money BlackRock is collecting on a daily basis from indexing, because we are now all hyper-focused on the risk of investments underperforming and the costs associated with those investments, because that impacts on the performance test. You don't want to be paying too much for your investments, or have lags on them, where you might find yourself dropping towards the dreaded red line.
So attitudes towards costs are changing, and attitudes towards performance. The knock-on effect on investment markets won't be felt today, next year or even the next couple of years, but you watch how much that legislation affects capital markets in Australia over the five to 10-year period. You may find funding dries up for some of the longer-dated investments – infrastructure, private equity – because we just can't afford to sit on a J-curve as we have in the past.
So I don’t think they’ve thought out the unintended consequences for a range of things.
Their sentiment is correct, but I think they could have been a little more user-friendly and more scientific in some of the out-workings of it.
Where is the member focus?
I want to focus on one thing: the actual test itself is far removed from the member outcome. It has actually lost sight of the member, because at its core it is about how well the fund has implemented a particular strategy. It has nothing to do with how good the strategy was; it doesn't look at the actual member return and what they got; it doesn't look at risk. So our industry has ended up assessing performance against a notional benchmark.
So the poor member gets a blunt letter that directs them to the ATO website, and the ATO website's comparison tool is a peer performance measure – relative returns – so I think the test should have been based more on that.
Fundamentally, everyone is saying the policy objective makes sense – let's weed out under-performing funds – but to a consumer, an underperforming fund is a peer-relative concept, and I don't know how we've landed on this funny notional benchmark.
If we think about how this works when we go into the choice products – and the first choice product test covers only the multi-sector funds – the methodology they've adopted, I think, works for a single-sector fund; consumers can understand that.
But multi-sector funds, and all our MySupers, are not single-sector. They're all about diversification, they're all about how the trustee delivers the best member outcome, and most of us in the room do it as CPI-plus – and when you do it like that, a notional strategic asset allocation doesn't make sense.
So my thing is: here we are, members get this letter, they get pushed to a comparison tool, but the comparison tool doesn't actually reflect this test because it's just peer returns. So for me it has just lost sight of the member.