The Default Line: The Inside Story of People, Banks and Entire Nations on the Edge

So in due course Vasicek’s formulae were to spawn the industry of quantitative credit-risk modelling, taken up with gusto and commercialised by J. P. Morgan and described in the book Fool’s Gold by Gillian Tett. At its core the Vasicek model required two inputs or parameters: a probability of default for the individual corporate loans, and a measure of the ‘correlation’ or connectedness between seemingly unrelated loans in the portfolio. The first part, the ‘PD’, could come from the bank’s internal rating, or from credit ratings, or from KMV. The second parameter, the correlation, came from KMV’s database.

For a large number of loans, Vasicek’s equation could approximate the probability of certain levels of loan losses. The shape of the answer hinged crucially on the correlations. It is worth understanding a little about correlation. Imagine a competition for ten gardeners in a local village. The chances of Barry winning the marrow contest are 10 per cent, the chances of his marrows being eaten by slugs are 10 per cent, the chances of a drought are 10 per cent and the chances of him forgetting to use fertiliser are also 10 per cent. A trader would price these risks the same, given the probabilities.

But if his next-door neighbour Colin was also growing marrows, what are the chances of both gardeners ending up in the above predicaments? Even if Colin and Barry have the same-sized garden, soil quality, equipment and skills, the so-called conditional probabilities are much more difficult to calculate. Barry and Colin forgetting to use fertiliser are unconnected events: Colin’s forgetfulness does not depend on Barry’s, so the correlation is zero. On the other hand, if Barry’s garden suffers a drought, it’s almost certain that his neighbour’s garden will as well, so the correlation is 1. If Barry’s marrows get eaten by slugs, then it is not certain, but the chances are that Colin’s marrows will also get slimed. Call that correlation 0.4. And if Barry wins the gardening competition, Colin cannot, so the correlation is negative – in fact it is −1.
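For readers who like to see the arithmetic, here is a minimal sketch in Python (my own illustration, not from the book) of how the correlation number changes the chance of Barry and Colin suffering the same fate. The 10 per cent figure comes from the example above; the formula for two yes-or-no events with the same individual probability is standard, floored at zero because a probability cannot be negative.

```python
# A sketch of the Barry-and-Colin example: two events that each have a
# 10 per cent chance on their own, linked by a correlation 'rho'.
# For two yes/no events with the same probability p and correlation rho,
# P(both happen) = p*p + rho*p*(1 - p), floored at zero.

def joint_probability(p: float, rho: float) -> float:
    """Chance that both events occur, given each one's probability p and their correlation rho."""
    return max(0.0, p * p + rho * p * (1.0 - p))

p = 0.10  # each misfortune (or triumph) has a 10 per cent chance on its own
for label, rho in [("forgetting the fertiliser (independent)", 0.0),
                   ("slugs next door (partly linked)", 0.4),
                   ("drought (perfectly linked)", 1.0),
                   ("both winning the contest (mutually exclusive)", -1.0)]:
    print(f"correlation {rho:+.1f}: chance of both = {joint_probability(p, rho):.1%}  ({label})")
```

At a correlation of zero the joint chance is 1 per cent; at 0.4 it is nearly 5 per cent; at 1 it is the full 10 per cent; and at −1 the two things simply never happen together.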

‘If investors were trading securities based on the chances of these things happening to both [Barry and Colin in my example], the prices would be all over the place, because the correlations vary so much,’ explained Felix Salmon, the writer who first brought Gaussian copulas to the attention of the non-financial world.

Back to Vasicek. If the recipients of the loans are all truly unconnected, for example a food-processing company in Canada and a software developer in Cambridge, then the likely losses cluster pretty closely around the probability of default, the other parameter. The higher the correlation, say a food company and a farmer in the same state, the wider the spread of likely losses. It is no longer wise simply to expect the average outcome: losses much lower and much higher are plausible. At very high levels of correlation the Vasicek equation showed that the bundle of loans behaves like a single loan, and the distribution of likely losses becomes, in the jargon, ‘bimodal’: losing everything is on the cards. The equation illustrated how a diverse, uncorrelated loan book could help to neuter credit risk – in theory. It was an essential insight into how the world financial system developed over the next two decades. ‘Correlation’ became the core of credit-risk trading. It became a shorthand for assessing the risk locked up in a portfolio. But then what began as an observation of patterns in data itself became the object of trading.
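To see what that means in numbers, here is a minimal sketch in Python (my own, using the standard large-portfolio form of Vasicek’s result rather than anything proprietary to KMV; the 2 per cent default probability and the loss thresholds are illustrative).

```python
# Large-pool Vasicek loss distribution: with default probability 'pd' and asset
# correlation 'rho', the fraction of a very large pool that defaults, L, satisfies
#   P(L <= x) = N( (sqrt(1 - rho) * N^-1(x) - N^-1(pd)) / sqrt(rho) )
# where N is the standard normal distribution function.

from scipy.stats import norm

def vasicek_loss_cdf(x: float, pd: float, rho: float) -> float:
    """Probability that losses on a very large loan pool stay at or below the fraction x."""
    return norm.cdf((norm.ppf(x) * (1 - rho) ** 0.5 - norm.ppf(pd)) / rho ** 0.5)

pd = 0.02  # assume each loan has a 2 per cent chance of default
for rho in (0.01, 0.20, 0.80):
    mild = vasicek_loss_cdf(0.04, pd, rho)        # chance that losses stay below 4 per cent
    severe = 1 - vasicek_loss_cdf(0.20, pd, rho)  # chance that losses exceed 20 per cent
    print(f"correlation {rho:.2f}: P(losses < 4%) = {mild:.3f}, P(losses > 20%) = {severe:.4f}")
```

With almost no correlation, losses hug the 2 per cent average. Push the correlation up and the book will usually sail through with hardly any losses at all, yet the chance of catastrophic losses, once negligible, becomes very real – the ‘bimodal’ shape described above.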

For many years, Vasicek’s formula was something of a secret, the proprietary knowledge underpinning KMV’s sales. A Scottish academic called Donald MacKenzie, based at the University of Edinburgh, conducted interviews with many of the quants (‘quantitative analysts’) who created the financial models that underpinned the great expansion of credit. David X. Li, one of the most famous quants, recalled how he had seen a photocopy of a handwritten version of the Vasicek equation, probably written by the man himself. ‘That was one of the most beautiful pieces of math I had ever seen in practice,’ Li wrote to MacKenzie. ‘But it was a one period model.’ Vasicek had assumed all loans were paid off at the end of the time period. Others would introduce time into the equation.

David Li built on Vasicek’s work by introducing a concept from the insurance industry to ‘couple’ together different probabilities in a loan portfolio. ‘Broken heart syndrome’ is the name given to the actuarial phenomenon of a surviving spouse dying shortly after the death of a husband or wife. Johnny Cash and June Carter are the most famous example, Cash dying just four months after his wife in 2003. In insurance, the maths behind this phenomenon reduced the value of joint annuities by up to 5 per cent. Li thought that the survival time of a company after the default of another company could be modelled like a bereaved Johnny Cash, through a piece of maths that enabled a joint probability of corporate survival to be calculated from individual probabilities. This was the Gaussian copula.
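The coupling itself can be sketched in a few lines of Python (my own illustration, far simpler than anything Li actually published, with made-up numbers): each firm’s individual default probability is mapped onto a point on the normal bell curve, and a single correlation number ties the two together to give a joint probability.

```python
# A Gaussian copula in miniature: two individual default probabilities are
# turned into a joint one via two correlated normal variables.

from scipy.stats import norm, multivariate_normal

def joint_default_probability(p_a: float, p_b: float, rho: float) -> float:
    """Chance that both borrowers default, linked through a Gaussian copula with correlation rho."""
    # Map each default probability to a threshold on the standard normal curve...
    thresholds = [norm.ppf(p_a), norm.ppf(p_b)]
    # ...then ask how often two correlated normal variables both fall below their thresholds.
    bivariate = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return float(bivariate.cdf(thresholds))

p_a = p_b = 0.05  # assume each firm has a 5 per cent chance of default on its own
for rho in (0.0, 0.3, 0.9):
    print(f"correlation {rho:.1f}: chance both default = {joint_default_probability(p_a, p_b, rho):.4f}")
```

With zero correlation the joint chance is one in four hundred; as the correlation heads towards 1 it climbs towards the 5 per cent chance of either firm failing on its own.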

Specifically, Li’s formula based these correlations on trading movements for the two corporations in the growing credit default swap (CDS) market. KMV’s historic data points on corporate asset prices weren’t required. An appreciation of historic defaults was not required. And even if this measure of default risk had not been deeply flawed in itself, the data underpinning it stretched back, at most, only a few years. Did the equations include data, patterns and correlations from periods when there were actually lots of defaults? For corporate debt, not really. When it came to mortgages, the answer was ‘not at all’. A fanatical supporter of the model might suggest that all of that information was contained within the wisdom of the crowd – the efficient market price – of the credit default swap. Obviously that was nonsense.

Never mind, the fully fledged Gaussian copula helped the market for corporate Collateralised Debt Obligations (CDOs) explode. These were pools of corporate debt that were then sliced up to offer bespoke returns for investors, depending on the risk they were willing to take. If you were confident that the pool of debt would repay, you could opt for a higher annual return, in exchange for being far back in the queue for repayments if defaults actually occurred.
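The mechanics of the slicing are simple enough to sketch. Here is a minimal illustration in Python (my own, with invented attachment points rather than any real deal’s): the ‘equity’ slice at the bottom absorbs losses first, then the ‘mezzanine’, and only then the senior slice that would carry the AAA badge.

```python
# A toy CDO waterfall: each tranche covers a band of pool losses between its
# 'attachment' and 'detachment' points, and is only hit once the slices below
# it have been wiped out.

def tranche_loss(pool_loss: float, attach: float, detach: float) -> float:
    """Fraction of a tranche wiped out when the pool loses `pool_loss` (all figures are fractions of the pool)."""
    return min(max(pool_loss - attach, 0.0), detach - attach) / (detach - attach)

tranches = [("equity", 0.00, 0.03), ("mezzanine", 0.03, 0.07), ("senior", 0.07, 1.00)]
for pool_loss in (0.02, 0.05, 0.15):
    slices = ", ".join(f"{name} {tranche_loss(pool_loss, a, d):.0%}" for name, a, d in tranches)
    print(f"pool loses {pool_loss:.0%} -> tranche losses: {slices}")
```

Lose 5 per cent of the pool and the equity slice is gone and half the mezzanine with it, while the senior slice is untouched – which is precisely why it could be sold as near riskless.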

The game for the credit engineers was to put together packages of debts, or even synthetic debts, with low correlations so that as many tranches as possible would qualify for high credit ratings, preferably the much-coveted AAA rating. The formulae were put to work within banks and credit-ratings agencies to evaluate these new products.

The modellers were still having difficulty finding values for the correlations in products that simply had not existed for that long. By 2003 a new, quite crazy, innovation had appeared: back-solving the formulae to extract ‘implied correlations’ from the market prices for the CDO tranches. There was one problem, though: J. P. Morgan discovered that at some prices back-solving the models failed to work, or when it did, it gave two wildly different answers for ‘implied correlation’. J. P. Morgan stepped up with ‘base correlation’ – a final tweak creating what became the standard way of modelling CDOs: the Gaussian copula base correlation model. It is still in use today.
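To get a feel for why back-solving misbehaved, here is a minimal sketch in Python (my own, vastly cruder than any dealer’s pricing model, with illustrative numbers throughout). It searches for the correlation that makes a simple one-factor Gaussian model reproduce a given expected loss on a mezzanine slice – and finds that two very different correlations fit the same number.

```python
# Back-solving for 'implied correlation' under a one-factor Gaussian (Vasicek)
# loss model. The tranche, the default probability and the target expected loss
# are all invented for illustration.

from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

PD = 0.02                    # assumed single-name probability of default
ATTACH, DETACH = 0.03, 0.07  # an illustrative 3%-7% mezzanine tranche

def loss_cdf(x: float, rho: float) -> float:
    """P(pool losses <= x) under the large-pool Vasicek distribution."""
    return norm.cdf((norm.ppf(x) * (1 - rho) ** 0.5 - norm.ppf(PD)) / rho ** 0.5)

def expected_tranche_loss(rho: float) -> float:
    """Expected loss absorbed by the tranche, as a fraction of the whole pool."""
    return quad(lambda x: 1.0 - loss_cdf(x, rho), ATTACH, DETACH)[0]

target = 0.0035  # expected tranche loss implied by an (invented) market price
low = brentq(lambda rho: expected_tranche_loss(rho) - target, 0.01, 0.30)
high = brentq(lambda rho: expected_tranche_loss(rho) - target, 0.30, 0.99)
print(f"two 'implied correlations' fit the same price: {low:.2f} and {high:.2f}")
```

The same market price, in other words, can ‘imply’ both a modest correlation and a very high one, and for some prices no correlation fits at all – the ambiguity that base correlation was invented to get around.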

It is a thicket of complex maths and finance, and it’s designed to confuse and obfuscate. Only with the help of Edinburgh’s Donald MacKenzie and the longstanding derivatives writer Nick Dunbar could I clear some of the branches away. But it is hugely important to at least try to see what was going on.

The remarkable series of interviews with dozens of quants, conducted by Donald MacKenzie and his colleague Taylor Spears, led them to conclude that there was a singular motivation for the use of these deeply flawed extensions of Vasicek’s model: bonuses – specifically the ability to mark an uncertain pile of byzantine debt slices as being worth a certain value at a certain time. As one quant told the academics, even more accurate models were sacrificed at the altar of a model that could fatten up bonuses by claiming a future stream of profits in advance. ‘That effectively allows me to do a ten-year trade and book P&L [Profit & Loss] today… without that people would be in serious trouble, all their traders would leave and go to competitors,’ was how one quant explained it anonymously to the researchers. Flawed models were fine, as long as everyone used the same flawed model, because at least they could come up with a price for these exotic and toxifying debts, upon which to base their remuneration. The key here is that the model was ‘tractable’, and could be used as a common market standard.

According to the Edinburgh study, ‘Interviewees reported a universal desire amongst traders for the profits on a credit derivatives deal (most of which lasted for between five and ten years) to be recognised in their entirety as soon as the deal was done – as “day-one P&L [profit and loss]” – and so to boost that year’s bonus as much as possible.’ Such an arrangement would not necessarily be malign. There are simple trades where it might be justified to book profits on a stream of certain future payments. But combined with faulty pricing models, the practice inflated up-front bonuses while shunting the risk of losses to a future date, after profits had been booked and bonuses paid.
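The arithmetic behind ‘day-one P&L’ is a present-value calculation, sketched here with invented numbers (my own illustration, not any bank’s actual accounting).

```python
# Day-one P&L in miniature: the expected margin on a ten-year deal is
# discounted back and recognised as profit today, years before the cash --
# or the losses -- actually arrive.

annual_margin = 2_000_000  # assumed expected yearly profit on the trade
years = 10
discount_rate = 0.05       # assumed discount rate

day_one_pnl = sum(annual_margin / (1 + discount_rate) ** t for t in range(1, years + 1))
print(f"profit booked on day one: ${day_one_pnl:,.0f}")  # roughly $15.4m of the $20m total
```

Most of a decade’s expected profit lands in the current year’s bonus pool; the losses, if the model is wrong, arrive years later.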

The net result was a transformation in the balance sheets of the banks. The formulae had liberated credit risk from the banker’s loan book. Traditional bank lending saw the loan, and therefore the credit risk, stay on the bank’s balance sheet, subject to fairly straightforward capital requirements. Capital ratios were like dams on fast-flowing rivers. Derivatives enabled the water flow to be rerouted through a different valley, via the largely undammed trading book. Governments and regulators could marvel that the dam was still holding at one point in the topography of world finance. But downstream, the inundation was overwhelming. In the much less constrained world of their trading books, banks could now take huge credit risks in the markets, using the methods described. So where were the regulators? On the sidelines, cheering on ‘innovation’ and participating in a beauty contest led by London. From 2000 the UK Financial Services Authority (FSA) began handing out approvals for credit-risk models that allowed the massive expansion of credit-derivatives business with only tiny amounts of capital set aside for the bad times.

In his book The Devil’s Derivatives, Nick Dunbar recounts how New York firms found it difficult to trade derivatives under the rules of the US Securities and Exchange Commission (SEC) – rules that had not been updated since the 1970s. For them, ‘the new opportunities in Europe were hard to resist, so they set up London-based companies’. According to Dunbar, the FSA boasted that ‘international firms that have established their operations here welcome the flexibility of the UK regulatory regime’. New York regulators began to fret about London’s light touch, so the Wall Street firms extracted from the SEC ‘broker dealer lite’ rules that meant the rubber-stamping of internal risk models. Regulatory arbitrage meant one thing: massive transfers of credit risk from a bank’s loan book to its trading book.

Still, at least the Gaussian copula base correlation models made some sense in relation to corporate debts. The slicing and repackaging of these debts passed risks from one part of a bank’s balance sheet, where they used up some of the bank’s precious capital, to another part, where they did not. But it was the move into mortgages that was to stretch models designed to shift corporate risk to breaking point. In particular, it was the misuse of the Gaussian copula in relation to mortgages by ratings agencies that was to prove disastrous.

‘Empirically estimating the correlation between asset-backed securities is an even harder econometric problem than estimating corporate correlations,’ write MacKenzie and Spears. They go on to say that with little data to draw upon, the CDO groups at ratings agencies ‘employed largely judgement-based correlation estimates’. Standard & Poor’s CDO group, for example, simply used the same correlation (0.3) for mortgage-backed securities as it used for companies in the same industry. Remember that the formulae were developed with corporate debt in mind. Even so, the assumption of low correlation for mortgages was a wild stab in the dark by the ratings agencies, with catastrophic consequences. The end result? More of the tranches, or slices, of the pools of mortgage loans could be given the feted gold-standard AAA rating, despite low underlying mortgage quality. This was the alchemy that turned subprime mortgages into supposedly riskless debts.
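The effect of that judgement call can be sketched in a few lines of Python (my own illustration, with invented figures rather than any agency’s actual criteria). Using the large-pool loss formula from earlier, it asks how thick a cushion of junior slices a senior tranche needs before its modelled chance of being touched falls below 0.1 per cent – and therefore how much of the deal can be stamped AAA.

```python
# How the assumed correlation drives the size of the 'AAA' slice, under the
# large-pool Vasicek loss distribution. All figures are illustrative.

from scipy.stats import norm

def cushion_needed(pd: float, rho: float, confidence: float = 0.999) -> float:
    """Loss level that pool losses stay below with the given confidence (the senior attachment point)."""
    return norm.cdf((rho ** 0.5 * norm.ppf(confidence) + norm.ppf(pd)) / (1 - rho) ** 0.5)

pd = 0.05  # an illustrative default probability for the underlying mortgages
for rho in (0.1, 0.3, 0.6):
    cushion = cushion_needed(pd, rho)
    print(f"assumed correlation {rho:.1f}: cushion needed {cushion:.0%}, "
          f"so up to {1 - cushion:.0%} of the pool can look 'AAA'-safe")
```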

A board member responsible for the retail side of one of Britain’s largest banks recalls the realisation that his investment-banking colleagues were effectively bypassing him and lending direct to clients. ‘We spend ages controlling our exposures to consumer debt and limiting ratios on mortgage lending,’ he tells me, ‘only to see the investment bank writing billions of mortgages to people they’ve never met in a country where we have no branches.’

Attempts to model the risk in these loans centred on a history of defaults during a time of almost no recession, when national house prices had only risen. ‘HPA’ was the acronym for the type of modelling done by the Wall Street banks on the income flowing from the complex packages of mortgage debt: it stood for various levels of ‘house price appreciation’. At least one British analyst, Toby Nangle, asked the American bankers about HPD – house price depreciation. ‘Everyone thought it was an absolutely terrific joke,’ he told me. ‘We were being quite serious though.’ One model was based solely on defaults in Louisiana after the collapse in the oil price. A more widespread form of modelling involved gaming the system: working out the worst mortgage quality that could still secure the AAA rating. One quant in the Edinburgh study ‘reported that there were companies that discreetly sold software packages designed to perform this fatal optimisation’. As MacKenzie and Spears conclude, ‘The use by rating agencies of Gaussian copula models with low default probabilities for mortgage-backed securities and only modest correlations among those securities helped create (via the “gaming” of those models) an outcome that involved huge levels of highly correlated default.’
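What that ‘fatal optimisation’ amounts to can also be sketched (my own illustration in Python, not any vendor’s actual software, with invented parameters): take the rating model’s assumptions as given and search for the worst loan quality – the highest default probability – that still leaves the senior tranche looking safe enough to pass.

```python
# 'Gaming' a rating model in miniature: find the highest default probability
# that keeps the modelled chance of hitting the senior tranche under the bar.
# The assumed correlation, cushion and threshold are all invented.

from scipy.optimize import brentq
from scipy.stats import norm

ASSUMED_RHO = 0.3   # the correlation the rating model is assumed to use
CUSHION = 0.20      # losses must eat through 20% of the pool before the senior tranche
MAX_BREACH = 0.001  # the senior tranche may be hit with at most a 0.1% modelled probability

def breach_probability(pd: float) -> float:
    """Modelled chance that pool losses eat through the senior tranche's cushion."""
    rho = ASSUMED_RHO
    return 1.0 - norm.cdf((norm.ppf(CUSHION) * (1 - rho) ** 0.5 - norm.ppf(pd)) / rho ** 0.5)

# Search for the highest default probability that still just clears the bar.
worst_pd = brentq(lambda pd: breach_probability(pd) - MAX_BREACH, 1e-6, 0.5)
print(f"worst loan quality that still passes: default probability ~ {worst_pd:.1%}")
```

Used this way, the model is no longer measuring risk; it is a target to be hit as cheaply as possible.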

In other words, quite apart from the flaws in how these models and formulae were used, when applied en masse they can change the very characteristics of the financial patterns originally observed. The models stop being merely useless and become dangerous. This is known, somewhat awkwardly, as counterperformativity. It was seen with other formulae in the LTCM hedge-fund crisis of the late 1990s, and also in the 1987 stock market crash.
