Commonly used investment criteria that can negatively impact returns
What is the most important factor when assessing fund performance? CleverAdviser looked at nine commonly used criteria and found three that can have a negative impact on returns. Colum Wilde, managing director of CleverAdviser, explains
If you are a practising IFA, then it is likely that you don’t have a great deal to do with assessing the performance of the funds your clients are invested in.
But somebody does.
Whether it’s an internal investment committee or an external partner, someone – or, more likely, a group of people – is looking at funds across each sector and assessing when to make changes to your clients’ portfolios.
As IFAs, we’re almost taught these days to ignore investment choices. Our focus is rightly on the client and their goals. But a healthy interest in the investment proposition your firm offers, and what it means for your clients, is also important.
Take the myriad ways in which you can assess fund performance. Which does your investment proposition favour? Some firms will look closely at Sharpe and Average 36 Month Performance, for example. Others might favour Beta, Volatility and Research Rating.
What impact do these decisions and preferences have on the return your clients receive?
Our research suggested that three commonly used factors don’t just fail to add value: if your investment proposition pays any attention to them at all, they can actively reduce the returns you generate for your clients.
We took nine common criteria: Maximum Loss, Average 6 Month Performance, Average 36 Month Relative Performance, Research Rating, Star Rating, Sharpe, Alpha, Beta and Volatility. Using the Clever programme, we can test what impact weighting these would have on investment performance.
For instance, your firm may value 36-month performance, Sharpe, Beta and Research Rating first, followed by the other criteria in diminishing importance, to the point where you pay no attention at all to Volatility and Alpha. That is one possible variation. We can assign each criterion a rating (from most important to least important and everything in between) and test the outcomes.
There are 362,880 (nine factorial) possible variations on the order of importance for those nine assessment criteria, from single criteria on their own through to all criteria enabled.
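That permutation count can be checked directly: 362,880 is 9 factorial, the number of ways to rank nine criteria from most to least important. A quick sketch in Python (for illustration only; the study itself used the Clever programme, not this code):

```python
from math import factorial

# Nine criteria can be ranked from most to least important in 9! ways.
NUM_CRITERIA = 9
orderings = factorial(NUM_CRITERIA)
print(orderings)  # 362880
```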
Of course, we needed some financial data to test the weightings against. We used data from Financial Express covering the UK All Companies Sector from June 1998 to December 2015 – a data set that required us to carry out 886,475 back tests!
Just to recap: it’s perfectly possible that your firm assessed this sector during this period. We were looking at whether the criteria used for that assessment could have affected the results for your clients, and how big a swing in outcomes those choices could have produced. That data could then be used to assess the effectiveness of the assessment criteria themselves.
To judge which criteria have a positive influence and which a negative one, we looked at which criteria were most prominently weighted in the best-returning 10% of setups and in the worst-returning 10%.
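The decile comparison can be sketched as follows. This is a hypothetical illustration using randomly generated setups and returns, not the study’s actual data or method: the idea is simply to count how often each criterion occupies a top-weighted slot in the best- and worst-returning 10% of setups.

```python
import random
from collections import Counter

CRITERIA = ["Maximum Loss", "Average 6 Month Performance",
            "Average 36 Month Relative Performance", "Research Rating",
            "Star Rating", "Sharpe", "Alpha", "Beta", "Volatility"]

random.seed(1)
# Stand-in data: each "setup" is a random ordering of the criteria plus a
# simulated return. The real study used back-tested returns instead.
setups = [(random.sample(CRITERIA, len(CRITERIA)), random.gauss(5.0, 2.0))
          for _ in range(10_000)]

# Rank setups by return, best first, and take the top and bottom deciles.
setups.sort(key=lambda s: s[1], reverse=True)
decile = len(setups) // 10

def top_weighted(group, n_top=3):
    """Count how often each criterion appears in a setup's top-n slots."""
    counts = Counter()
    for ordering, _ in group:
        counts.update(ordering[:n_top])
    return counts

best = top_weighted(setups[:decile])
worst = top_weighted(setups[-decile:])
# Criteria far more prominent in `worst` than in `best` would be
# flagged as a drag on performance.
```

With real back-test returns in place of the random ones, the `best`/`worst` counts would reveal which criteria cluster in the winning and losing setups.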
This highlighted that Beta, Maximum Loss and Volatility act as a drag on performance, whilst Alpha, Average 6 Month Performance and Sharpe appear to be significantly correlated with positive performance, as shown below.
To take the experiment one stage further, we began removing those bottom three criteria from the setups to see what influence they could have had on performance. The swing in performance in these results would have been significant: a huge 3%, based upon average discrete rolling 1-5 year performance.
Time to check up on which criteria your investment proposition is paying attention to?
Visit the Clever Adviser website