A funny thing happened at the CFOs and COOs Forum.
Well, it was more interesting than funny, depending on what makes you tick: four GPs sat down for a session under the Chatham House Rule to discuss how they calculate and benchmark their performance, and found differences across the board.
The absence of a standardised way of calculating performance is striking, and nuance after nuance put the CFOs at odds with one another. Do you include the GP commitment when calculating the fund's net IRR? How do you account for recycled capital? How does the preferred return compound? How long do you leave your credit line outstanding (assuming you are one of the 90 percent of managers at the conference who use one)? Are cashflows aggregated into monthly or quarterly periods?
And if much of the performance is based on unrealised net asset value (which it likely is), what is the valuation methodology used? And does the reported NAV account for carried interest?
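One of those nuances, the credit line, has a direct and mechanical effect on net IRR: financing an investment on a subscription line and calling LP capital later shortens the LPs' measured holding period, flattering the rate even when the cash multiple is unchanged. A minimal sketch of the effect, using made-up cashflows and a simple bisection IRR solver (illustrative only, not any firm's actual methodology):

```python
from datetime import date

def xirr(cashflows):
    """Annualised IRR of dated cashflows [(date, amount), ...],
    solved by bisection on the net present value function."""
    t0 = cashflows[0][0]

    def npv(rate):
        return sum(cf / (1 + rate) ** ((d - t0).days / 365.0)
                   for d, cf in cashflows)

    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:      # NPV still positive: the rate is too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical fund, from the LP's seat: 100 drawn, 160 returned.
# Without a credit line, LP capital is called on day one.
no_line = [(date(2020, 1, 1), -100.0), (date(2024, 1, 1), 160.0)]

# With a credit line, the same deal sits on the line for six months
# before the LP capital call, so the measured clock starts later.
with_line = [(date(2020, 7, 1), -100.0), (date(2024, 1, 1), 160.0)]

print(xirr(no_line), xirr(with_line))
```

Identical deal, identical multiple, yet the second IRR comes out roughly two percentage points higher, purely because the clock starts later. That gap is the same order of magnitude as the 22-versus-23-percent difference discussed below.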
“There is variability all over the place,” noted the moderator.
Walking out of the session, one thing was clear: for any investor conducting due diligence on these funds, a comparison between them would not be meaningful, at least not without going back to the cashflow data and recalculating each track record.
To be clear: this did not seem to be a case of opacity or wilful evasiveness. The panellists all agreed that they present their performance in what they deem to be the most appropriate way and footnote anything they think needs clarification. The implication was this: managers are transparent, but they are not standardised.
Should investors care? Is it worth a busy investor’s time to comb through every cashflow and NAV calculation to find that it should be a net internal rate of return of 22 percent rather than 23?
“It’s a good question,” says Graeme Faulds, a former fund of funds manager who co-created performance analysis software TopQ, now part of eVestment. “Directionally it probably isn’t too meaningful, but as a proportion it can be a big difference. And if it puts a manager in a different quartile then it is even more relevant.”
That brings us to another reason this matters: benchmarking.
Many GPs and investors buy in benchmark data for marketing and decision-making, respectively. But if four GPs in a room all arrive at their net IRRs in a different way, what does that say about the aggregated performance data that they might use as a benchmark? Footnotes are lost in the morass.
Taz Katira, a principal in the fund investment team at Hamilton Lane, says his firm started using its own data for benchmarking – built from the ground up – about five years ago. In pockets of the market, there is a “rubbish in, rubbish out” problem with some external data sets, says Katira; some rely fully on data sourced from GPs on a self-reporting basis or from public sources such as pension fund results. That process does not consistently address currency fluctuations or the treatment of accrued carried interest, for example.
So those GPs that choose to benchmark against other private equity funds are measuring themselves against a very muddy pool (hence the growing use of public market equivalents).
This is not the first time that the methodology for calculating performance has been discussed – the Securities and Exchange Commission first started asking these questions back in 2014 – and it surely won’t be the last. But what stood out most from the session at the forum was that, while the private funds industry as a whole is moving towards institutionalisation, transparency – at least from a performance data perspective – is proving stubbornly elusive.
Write to the author: firstname.lastname@example.org