Many investors who relied on the rating agencies to pick a sure thing have been disappointed since the start of the credit crunch. Many Triple A rated deals have been hammered in the market, but nobody quite knows where the blame lies. What have the rating agencies done wrong, and what can they do better, asks Solomon Teague.
Each rating agency has its own philosophy and methodology, capturing a snapshot of risk different from its competitors'. None claims to provide an exhaustive assessment of the riskiness of an investment, but each believes it provides the best snapshot. And each insists investors must take responsibility for conducting their own due diligence and applying their own common sense.
All three international rating agencies acknowledge that they got things wrong in the run-up to the credit crunch in the summer of 2007. But what caused the collective failure of the rating industry is not yet clear.
It could be a failure of models, or it could be a paradigm shift in behavioural financial patterns, said Barbara Ridpath, head of ratings services for Europe at Standard & Poor’s: at this stage, she added, it is too early to say which.
But the biggest failing of the rating agencies, as far as they themselves are concerned, has been one of educating investors and managing expectations over what a rating signifies – a task complicated by the differences between the agencies. None of the agencies takes liquidity or likely returns into account when rating, but Moody's factors in the expected severity of loss in the event of default, which Fitch, for example, does not.
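The practical difference between rating on default probability alone and rating on expected loss can be seen in a toy calculation. All figures below are hypothetical and invented purely for illustration; they are not drawn from any agency's methodology:

```python
# Toy illustration (hypothetical figures): two instruments share the same
# probability of default (PD) but differ in loss-given-default (LGD).
# An approach that rates purely on PD sees them as equally risky; one that
# rates on expected loss (EL = PD * LGD) does not.

def expected_loss(pd_: float, lgd: float) -> float:
    """Expected loss as a fraction of notional: PD times loss-given-default."""
    return pd_ * lgd

corporate_bond = {"pd": 0.001, "lgd": 0.40}   # modest loss severity on default
structured_note = {"pd": 0.001, "lgd": 0.95}  # near-total loss on default

el_corp = expected_loss(corporate_bond["pd"], corporate_bond["lgd"])
el_struct = expected_loss(structured_note["pd"], structured_note["lgd"])

print(f"Corporate bond:  PD={corporate_bond['pd']:.3%}, EL={el_corp:.4%}")
print(f"Structured note: PD={structured_note['pd']:.3%}, EL={el_struct:.4%}")
# Identical default probability, yet the structured note's expected loss is
# more than double the corporate bond's once severity is taken into account.
```

With the same one-in-a-thousand default probability, the two instruments would look identical to a PD-only rating, while an expected-loss rating would separate them.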
This has been misunderstood by some investors who took a Fitch Triple A status as tantamount to a capital guarantee, said Richard Hunter, chief credit officer for Europe, the Middle East, Africa and Asia at Fitch Ratings. The onus is on investors to understand the implications of a specific rating.
Nor does a rating from any of the main agencies aim to capture the probability of fraud, or to assess operational risk. A Triple A rated product should avoid default except in such exceptional circumstances, but it is important investors do not confuse the function of the agencies with that of the auditors.
Investor education was inevitably going to become a more important function of the rating agencies’ work as institutional investors – traditionally the biggest users of ratings – were joined in increasing numbers by retail investors. These investors have to use their common sense when analysing Triple A products. After all, one product could pay 20bp under Libor and another 200bp over and yet still both be rated Triple A. That should tell investors something about the risk profiles of the two products, whether it is their relative complexity, volatility or liquidity, said Ridpath.
Doing it better
There is no doubt the events of the last 12 months have led to calls from some quarters for rating agencies to evolve to provide a more comprehensive view of the risk of the entities they monitor. The three main agencies have each undergone a period of introspection to determine what, if anything, they can do to help investors avoid the surprises many have endured since the start of the credit crunch in 2007.
There has been a consistent call for ratings to factor in liquidity constraints, but as desirable as this might be for investors, this would be a very difficult thing for rating agencies to quantify, said Frederic Drevon, head of Moody’s EMEA.
In fact, any extension of the universe assessed by the agencies would have complications. “People overestimate how easy it is to quantify risk as a number. It can be done with one risk, but when you start adding risks together it becomes much more complex,” said Hunter. Any effort to improve the accuracy of a rating would mean compromising the simplicity of the rating system. Most investors would not want to see that.
Rating stability would be another casualty of improved rating accuracy, said Drevon. Given that a downgrade can trigger forced selling and a downward spiral for the entity in question, increased ratings volatility is also a price many investors would not be willing to pay.
Notice of a possible rating change can be given ahead of the event with an "outlook" or a "negative watch", though S&P does not use outlooks for structured products – traditionally because their ratings have been so stable. S&P has been developing tools to help investors anticipate changes in structured product ratings, including a traffic light system to give a simple indicator of an instrument’s prospects. These would factor in the likelihood of ratings migration by looking at factors such as an instrument’s counterparty concentrations and its event risk and operational risk exposures.
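S&P has not published the mechanics of these tools, so the following is a purely hypothetical sketch of how a traffic light indicator might combine the exposure types mentioned. The function name, scores, thresholds and the worst-exposure rule are all invented for illustration:

```python
# Purely hypothetical sketch of a traffic light indicator of ratings-migration
# risk. Inputs are notional 0-1 exposure scores (higher = more exposure);
# the thresholds and the "worst exposure wins" rule are invented here and do
# not represent S&P's actual methodology.

def traffic_light(counterparty_concentration: float,
                  event_risk: float,
                  operational_risk: float) -> str:
    """Map three exposure scores to a simple 'green'/'amber'/'red' signal."""
    worst = max(counterparty_concentration, event_risk, operational_risk)
    if worst < 0.3:
        return "green"   # low likelihood of ratings migration
    if worst < 0.7:
        return "amber"   # elevated migration risk; worth monitoring
    return "red"         # high migration risk despite the current rating

print(traffic_light(0.1, 0.2, 0.1))  # low exposures across the board -> green
print(traffic_light(0.1, 0.5, 0.2))  # one elevated exposure -> amber
```

The point of such an indicator is exactly the trade-off discussed above: it sacrifices granularity for a signal simple enough not to burden the rating scale itself.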
“The future is not linear,” said Hunter. “Ratings don’t capture the likelihood that something will happen in the future. They indicate what level of stress an entity can be subjected to without default. It is up to the investor to assess the likelihood of that level of stress occurring in the market.”
Some investors have also called for a new rating above the Triple A level – Quadruple A? – to distinguish the impeccable credits (such as US Treasuries) from the merely very good ones. This would have questionable value, however, as such a rating would necessarily include very few credits, which can already be identified anyway by the price at which they trade. In fact, outside of structured credit there are already very few Triple A rated entities.
The focus on default probability explains why there are so many Triple A rated structured products, but relatively few equivalent rated corporates. There is no suggestion a Triple A rated corporate is comparable to a Triple A rated Constant Proportion Debt Obligation, for example, other than in terms of default probability.
Rating structured products is a huge challenge for rating agencies. Where shares, bonds or other securities usually have published and transparent data over long periods, the default data on a pool of assets is more complex and less transparent. This is especially true in emerging markets. The agencies therefore have to fall back on the quality of the originator as a guide to the quality of the underlying product.
The same rating, but worlds apart
Keeping consistency between the ratings of different types of product requires consideration of the different factors influencing the default probability of different types of entity. It is important to keep that consistency, said Drevon, and Moody’s has modified its ratings methodology for structured products to ensure that their expected loss levels are in line with the equivalent ratings for other rated entities. This was the advent of the joint default analysis – or JDA – methodology.
Each different rated entity is exposed to different kinds of risks. A corporate, for example, is likely to move gradually towards a default, because it is actively managed, and therefore able to react to circumstances to avoid risk. Structured products are not usually actively managed – at least not to the same extent – which means their own progression towards default can be more rapid. So although default statistics are similar, according to Hunter, they get there differently.
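One way to picture "similar default statistics, different paths" is a toy rating-transition model. Everything below is invented for illustration – the states, the transition probabilities and the horizon are not drawn from any agency's data – but it shows how a gradually migrating "corporate" and a cliff-edged "structured" tranche can end up with broadly similar cumulative default rates:

```python
# Toy illustration of "similar default statistics, different paths".
# All numbers are invented for illustration; they are not agency data.
# States: 0 = AAA, 1 = BBB, 2 = default (absorbing). Each row gives the
# one-year transition probabilities out of that state.

corporate = [
    [0.970, 0.029, 0.001],  # AAA: occasionally slips to BBB, rarely straight to default
    [0.020, 0.950, 0.030],  # BBB: meaningful default risk from here
    [0.000, 0.000, 1.000],  # default is absorbing
]
structured = [
    [0.996, 0.000, 0.004],  # AAA: rarely moves, but a move is straight to default
    [0.020, 0.950, 0.030],
    [0.000, 0.000, 1.000],
]

def default_prob_from_aaa(matrix, years):
    """Cumulative probability of default after `years`, starting from AAA."""
    state = [1.0, 0.0, 0.0]  # all probability mass starts on AAA
    for _ in range(years):
        state = [sum(state[i] * matrix[i][j] for i in range(3)) for j in range(3)]
    return state[2]

# Ten-year cumulative default rates come out broadly similar with these
# invented numbers, but the corporate gets there via BBB while the
# structured tranche jumps directly from AAA to default.
for name, matrix in (("corporate", corporate), ("structured", structured)):
    print(name, round(default_prob_from_aaa(matrix, 10), 4))
```

In this sketch the corporate's route to default runs through an intermediate rating, giving investors warning along the way, whereas the structured tranche shows no deterioration at all until the moment it fails – which is the behaviour Hunter describes.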
Although structured products have had a lot of negative publicity for defaults during the credit crunch, they have had proportionally fewer defaults than corporates, said Drevon, but with greater loss severity when they do default. In 2007, structured products that were downgraded dropped by an average of five notches, compared with just two for corporates.
The agencies are also braced for the likelihood of increased competition from new sources as investors regroup after the credit carnage – their trust in ratings, in some instances, much dented. The next tier of rating agencies below the big three cannot boast a better record than their larger competitors, but some investors may turn to buyside research houses, for example, for information. Hunter welcomed this possibility as a healthy development. “The market needs strong, independent voices,” he said.
There is also the possibility the market will make greater use of prices as a guide to credit quality, acknowledged Ridpath, though she believed this would be to the detriment of the quality of investors’ decisions. Alternatively, technology vendors could – and should – renew their efforts to develop systems to better model default risk, giving investors a greater breadth of information from which to make decisions.
Yet there is a limit to how much competition others can provide to the rating agencies, which have advantages of scale and geographical coverage that would be hard or impossible to replicate. Some structured finance transactions can take weeks, months or even years to rate, said Drevon, which makes it hard to envisage anyone but a rating agency doing it. And for all the criticism levelled at the agencies for the conflict of interest inherent in their model, at least this conflict is well understood and managed. Investment research conducted by banks, for example, is subject to less transparent conflicts of interest.