price sensitivities = #1 valuable output of risk-run

[[complete guide]] P433, P437 …

After reading these pages, I can see that per-deal PnL and mark-to-market numbers are essential, but to the risk manager, the most valuable output of the deal-by-deal “risk run” is the family of sensitivities such as delta, gamma, vega, dv01, duration, convexity, correlation to a stock index (which is different from beta), etc.

Factor-shocks (stress test?) would probably use the sensitivity numbers too.

In Baml, the sensitivity numbers are known as “risk numbers”. A position has high risk if it has high sensitivity to its main factor (whatever that is.)
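For concreteness, here is a minimal bump-and-revalue sketch of how a risk run might produce such risk numbers for a single position. The Black-Scholes call pricer, the bump sizes and the inputs are my own illustration, not taken from the book or from any bank's system.

```python
import math

def bs_call(spot, strike, vol, rate, tau):
    """Black-Scholes price of a European call (illustrative pricer)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * tau) / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * N(d1) - strike * math.exp(-rate * tau) * N(d2)

def risk_numbers(spot, strike, vol, rate, tau, ds=0.01, dv=0.001):
    """Bump-and-revalue delta, gamma and vega for one position."""
    p0 = bs_call(spot, strike, vol, rate, tau)
    p_up = bs_call(spot + ds, strike, vol, rate, tau)
    p_dn = bs_call(spot - ds, strike, vol, rate, tau)
    delta = (p_up - p_dn) / (2 * ds)              # dPV/dSpot
    gamma = (p_up - 2 * p0 + p_dn) / ds ** 2      # d2PV/dSpot2
    vega = (bs_call(spot, strike, vol + dv, rate, tau) - p0) / dv  # dPV/dVol
    return {"pv": p0, "delta": delta, "gamma": gamma, "vega": vega}

print(risk_numbers(spot=100.0, strike=100.0, vol=0.2, rate=0.03, tau=1.0))
```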


VaR can overstate/understate diversification benefits

| | understate the curse of concentration | overpraise a diversified portfolio |
|---|---|---|
| mathematically | definitely possible | probably not |
| correlated crisis | yes, possible, since VaR treats the tail as a black box | yes; the portfolio becomes highly correlated, so it is not really diversified |
| chain reaction | possible, though a chain reaction is still better than all-eggs-in-1-basket | yes; diversification breaks down |

Well-proven in academia: VaR is, mathematically, not a coherent risk measure, because it violates sub-additivity. Best illustration: two uncorrelated credit bonds can each have $0 VaR, yet the combined portfolio has non-zero VaR. The portfolio is actually well diversified, but VaR would show higher risk in the diversified portfolio. That is illogical, because the individual VaR values are simplistic; it is a flaw of the mathematical construction of VaR.

Even in a correlated crisis, the same could happen: based on the probability distribution, each individual bond’s 5% VaR is zero but the portfolio VaR is non-zero.

A $0 VaR value is completely misleading. It can leave a big risk (a real possibility) completely unreported.
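A tiny numerical check of that two-bond illustration, using assumed numbers of my own (4% default probability per bond, $100 loss on default, independence). Each bond's 95% VaR is zero because its 4% default probability sits inside the 5% tail, yet the combined portfolio's 95% VaR is about $100, violating sub-additivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, p_default, loss_on_default = 1_000_000, 0.04, 100.0

# Independent default indicators for two credit bonds (illustrative numbers)
d1 = rng.random(n_scenarios) < p_default
d2 = rng.random(n_scenarios) < p_default

var95 = lambda losses: np.percentile(losses, 95)  # 95% VaR = 95th percentile of the loss distribution

print("VaR bond 1  :", var95(d1 * loss_on_default))                         # ~0
print("VaR bond 2  :", var95(d2 * loss_on_default))                         # ~0
print("VaR combined:", var95(d1 * loss_on_default + d2 * loss_on_default))  # ~100 > 0 + 0
```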

[[Complete guide]] P 434 says the contrary: VaR will always (“frequently”, IMHO) say the risk of a large portfolio is smaller than the sum of the risks of its components, so VaR overstates the benefit of diversification. This is mathematically imprecise, but it does bring my attention to the meltdown scenario. Two individual VaR amounts could be some x% of the $X original investment, y% of $Y, etc., but if all my investments get hit in a GFC and I am leveraged, then I could lose 100% of my total investment. VaR would not capture this scenario, as it assumes the components are lightly correlated based on history. In this case, the mathematician would cry “unfair”: the (idealized) math model assumes the correlation numbers are reliable and unchanging. The GFC is a “regime change” that can’t be modeled in VaR, so VaR is the wrong methodology.

maturity bucketing #StirtRisk

[[complete guide]] P 457 pointed out that VaR systems often need to aggregate cashflow amounts across different deals/positions, based on the “due date” or “maturity date”.

Example — On 12/31 if there are 33 payable amounts and 88 receivable amounts, then they get aggregated into the same bucket.

I think bucketing is more important in these cases:

  • a bond has a maturity date and coupon dates
  • a swap has multiple reset dates
  • most fixed income products
  • derivative products, which always have expiry dates

In StirtRisk, I think we also break down that 12/31 one day bucket by currency — 12/31 USD bucket, 12/31 JPY bucket, 12/31 AUD bucket etc.
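A toy sketch of that (date, currency) bucketing, with made-up cashflows; the netting key and the amounts are purely illustrative.

```python
from collections import defaultdict
from datetime import date

# Hypothetical per-deal cashflows: (due date, currency, signed amount); negative = payable
cashflows = [
    (date(2024, 12, 31), "USD", -1_000_000.0),
    (date(2024, 12, 31), "USD",    250_000.0),
    (date(2024, 12, 31), "JPY", 80_000_000.0),
    (date(2025,  3, 31), "USD",    500_000.0),
]

buckets = defaultdict(float)
for due_date, ccy, amount in cashflows:
    buckets[(due_date, ccy)] += amount   # net every flow falling into the same (date, currency) bucket

for (due_date, ccy), net in sorted(buckets.items()):
    print(due_date, ccy, f"{net:,.0f}")
```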

Q: I wonder why this is so important to VaR and other market risk systems. (I do understand it hits “credit risk”.)
%%A: For floating rate products, the cashflow amount on a future date depends on market factors.
%%A: FX rate on a future date 12/31 is subject to market movements
%%A: contingent claim cashflow depends heavily on market prices.
%%A: if 12/31 falls within 10D, then 10D VaR would be impacted by the 12/31 market factors

CVA=mktVal@ option-to-default

Monte Carlo is the only way to estimate it…

Classic PresentValue discounts each cash flow, but ignores the possibility of non-payment.

CVA simulates more than 1000 “paths” into the future over 50 to 75 years. Each path probably has a series of (future) valuation dates. On each valuation date, there’s a prediction of the market. A prediction includes many market factors. I believe my FRM book lists 9 standard market factors in the “stress test” chapter.

Each path can be described as a predicted evolution of the entire “universe”.

On each path, a specific shock or stress can be applied.

I guess that on each valuation date, the net amount Alan owes me is predicted (and likewise the net amount Bob owes me), known as my exposure to Alan. Multiplying this exposure by the probability of Alan’s default and by the loss-given-default (1 minus the recovery rate), we get a kind of predicted loss. I think this is the basis of the CVA.
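A minimal Monte Carlo sketch of that idea, under heavy simplifying assumptions of my own: the netted mark-to-market with one counterparty follows a driftless random walk, default arrives at a flat hazard rate, and recovery is fixed. A real CVA engine would instead re-price every deal on every path and valuation date.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions, not from the book
n_paths, n_steps, dt = 2000, 40, 0.25       # quarterly valuation dates over 10 years
hazard, recovery, r = 0.02, 0.40, 0.03      # flat default intensity, recovery rate, discount rate

# Netted mark-to-market with one counterparty, simulated as a driftless random walk
shocks = rng.normal(0.0, 5.0, size=(n_paths, n_steps))
mtm = np.cumsum(shocks, axis=1)

times = dt * np.arange(1, n_steps + 1)
epe = np.maximum(mtm, 0.0).mean(axis=0)                  # expected positive exposure per valuation date
surv = np.exp(-hazard * times)                           # survival probabilities
pd_marginal = np.concatenate(([1.0], surv[:-1])) - surv  # P(default in (t_{i-1}, t_i])
df = np.exp(-r * times)                                  # discount factors

cva = (1.0 - recovery) * np.sum(df * epe * pd_marginal)  # LGD x sum of discounted expected losses
print(f"CVA estimate: {cva:.2f}")
```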

Most of the contracts are derivative contracts. Max Expiry is 75 years.

Even exchange-traded assets’ valuations need to be predicted on a given valuation date on a simulation path. That’s because the exchange-traded product could be posted as collateral. A falling collateral value impacts the recovery amount, so this valuation affects the exposure indirectly.

risk-factor-based scenario analysis [[Return to RiskMetrics]]

Look at [[Return to RiskMetrics]]. Some risk management theorists have a (fairly sophisticated) framework about scenarios. I feel it’s worth studying.

Given a portfolio of diverse instruments, we first identify the individual risk factors, then describe specific scenarios. Each scenario is uniquely defined by a tuple of numbers, one per factor (e.g. 3 numbers for 3 factors). Under each scenario, each instrument in the portfolio can be priced.

I think one of the simplest set-ups I have seen in practice is the Barcap 2D grid, with stock +/- percentage changes on one axis and implied-vol shifts on the other axis. This grid can create many scenarios for an equity derivative portfolio.
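A small sketch of such a 2D grid, with a single Black-Scholes call standing in for the whole equity-derivative portfolio; the shock sizes and market inputs are my own illustration, not Barcap's actual grid.

```python
import math

def bs_call(spot, strike, vol, rate, tau):
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * tau) / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * N(d1) - strike * math.exp(-rate * tau) * N(d2)

spot0, vol0, strike, rate, tau = 100.0, 0.20, 100.0, 0.03, 0.5
base_pv = bs_call(spot0, strike, vol0, rate, tau)

spot_shocks = [-0.10, -0.05, 0.0, 0.05, 0.10]   # relative moves in the underlying
vol_shocks = [-0.05, 0.0, 0.05, 0.10]           # absolute shifts in implied vol

print("PnL grid (rows = spot shock, columns = vol shift)")
for ds in spot_shocks:
    row = [bs_call(spot0 * (1 + ds), strike, vol0 + dv, rate, tau) - base_pv for dv in vol_shocks]
    print(f"{ds:+.0%}  " + "  ".join(f"{pnl:8.2f}" for pnl in row))
```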

I feel it’s important to point out that two factors can have non-trivial interdependence and influence each other. (Independence would be nice. In a (small) sample you may actually observe statistical independence, but in another sample of the same population you may not.) Between the risk factors, the correlation is monitored and measured.
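As a sketch of how that measured correlation could feed joint scenario generation, here is a Cholesky-based sampler for two factors (say, spot return and implied-vol change); the correlation of -0.6 and the factor volatilities are assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(7)

corr = np.array([[1.0, -0.6],
                 [-0.6, 1.0]])           # assumed correlation between the two risk factors
vols = np.array([0.02, 0.01])            # assumed daily standard deviations of the factors

L = np.linalg.cholesky(corr)
z = rng.standard_normal((10_000, 2))
joint_shocks = z @ L.T * vols            # correlated factor shocks, usable as scenarios

print("sample correlation:", np.corrcoef(joint_shocks.T)[0, 1])
```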

risk mgmt gotcha – monitor current market

[[FRM1]] P127 gave an example of this common failure by the risk mgmt department: historical data (even very recent data) can be misleading and under-report VaR during a crisis.

I guess it takes unusual insight and courage to say “Historical data exhibits a variance that’s too low for the current situation. We must focus on the last few days/hours of market data.”

risk mgmt gotcha – correlation spike

[[FRM1]] P126 gave an example to illustrate a common failure of risk mgmt in finance firms. They fail to anticipate that the correlation of everything, even the correlation between credit risk and market risk, will increase during a crisis.

I remember the Goldman Sachs annual report highlighting the correlations between asset classes.

Increased correlation shifts the distribution of potential loss toward the bad tail, i.e. larger losses become more likely. Therefore a realistic VaR analysis needs to factor this in during a crisis.
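A back-of-envelope illustration with a two-position portfolio under a simple parametric (normal) VaR, using weights and volatilities I made up: as the correlation spikes, the same positions produce a noticeably larger VaR.

```python
import math

w1 = w2 = 0.5                    # equal weights (illustrative)
s1, s2 = 0.02, 0.03              # daily volatilities of the two positions
z99 = 2.326                      # 99% quantile of the standard normal
portfolio_value = 10_000_000

for rho in (0.2, 0.5, 0.9):
    sigma_p = math.sqrt((w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * rho * w1 * w2 * s1 * s2)
    print(f"rho = {rho:.1f}  1-day 99% VaR = {z99 * sigma_p * portfolio_value:,.0f}")
```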

2 key risks in U.S. treasury ^ IRS trading

The LTCM case very briefly outlined that for a stand-alone 20Y T bond position, there’s

1) interest rate risk [1], and
2) liquidity risk of that particular instrument. I guess this means the bid/ask spread could become quite bad when you need to sell the bond to get much-needed cash.

The LTCM case analysed a swap spread trade, whereby the IRS position provides a perfect hedge for the interest rate risk. I think we can also consider a duration hedge.

As to liquidity risk, I feel the T bond is more liquid than the IRS.
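A minimal sketch of the duration/DV01-matching idea mentioned above: choose the swap notional so the combined position has roughly zero first-order rate risk. The DV01 figures are illustrative assumptions, not numbers from the LTCM case.

```python
# DV01-matched hedge of a long 20Y T-bond with a payer IRS (all numbers illustrative)
bond_notional = 10_000_000
bond_dv01_per_mm = 1_450.0   # $ P&L per 1bp rate move, per $1mm of the bond
swap_dv01_per_mm = 1_400.0   # $ P&L per 1bp rate move, per $1mm of swap notional

bond_dv01 = bond_dv01_per_mm * bond_notional / 1_000_000
swap_notional = bond_dv01 / swap_dv01_per_mm * 1_000_000   # notional that offsets the bond's DV01

residual = bond_dv01 - swap_dv01_per_mm * swap_notional / 1_000_000
print(f"bond DV01     : {bond_dv01:,.0f} $/bp")
print(f"swap notional : {swap_notional:,.0f}")
print(f"residual DV01 : {residual:,.2f} $/bp")
```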

generalized tail risk n selling puts, intuitively

There are many scenarios falling into this basic pattern: earn a stream of small incomes periodically, and try to avoid or hedge the tail risk of a sudden swing against you. (Some people call these disasters “black swans” but I think that term has a more precise definition.) A small sketch of the pattern follows the examples below.
eg: AIG selling CDS insurance…

eg. market maker trading against an “informed trader”, suffering adverse selection.

eg: Currently, I’m holding a bit of US stocks in an overvalued market…
eg: merger arbitrage …
eg: peso problem and most carry trades.
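Here is a stylised simulation of the pattern behind all these examples: a steady trickle of small premiums punctuated by a rare, large loss. Every number below is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_periods, premium = 520, 1.0        # ten years of weekly premium income
p_crash, crash_loss = 0.003, 300.0   # rare tail event and its cost

pnl = np.where(rng.random(n_periods) < p_crash, premium - crash_loss, premium)
print(f"win rate           : {np.mean(pnl > 0):.1%}")   # almost every period is a small win
print(f"cumulative P&L     : {pnl.sum():.0f}")          # a few crashes can erase years of income
print(f"worst single period: {pnl.min():.0f}")
```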

back testing a VaR process, a few points

–Based on http://www.jpmorgan.com/tss/General/Back_Testing_Value-at-Risk/1159398587967

Let me first define my terminology. If your VaR “window” is 1 week, that means you run it on Day 1 to forecast the potential loss from Day 1 to Day 7. You can run such a test once a day, or once every 2 days, etc.; it is up to you.

VaR, as a big, complicated process, is supposed to be a watchdog over the traders and their portfolios, but how reliable is this watchdog? It is a big system and a big process involving multiple departments, hundreds of software modules, and virtually the entire universe of derivatives and other securities, plus pricing models for each asset class. Most of these have inherent inaccuracies and unreliability. The most visible inaccuracy is in the models (including realized volatilities).

VaR is a “policeman”, but who will police the policeman? A regular back test is needed to keep the policeman honest, i.e. to keep VaR realistic and consistent with market data. Otherwise VaR can become a white elephant and an emperor’s new clothes.
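A minimal exception-counting sketch of such a back test, with simulated data: compare each window's realised P&L against the VaR forecast, count the breaches, and check the count against what the confidence level implies. The P&L and VaR numbers here are invented for illustration.

```python
import numpy as np

def backtest_var(actual_pnl, var_forecast, confidence=0.99):
    """Count VaR breaches (losses worse than the forecast) and the expected count."""
    breaches = actual_pnl < -var_forecast
    expected = (1.0 - confidence) * len(actual_pnl)
    return int(breaches.sum()), expected

# 250 daily windows of simulated P&L (in $mm) against a constant 1.0mm VaR forecast
rng = np.random.default_rng(3)
pnl = rng.normal(0.0, 0.45, 250)
observed, expected = backtest_var(pnl, np.full(250, 1.0), confidence=0.99)
print(f"breaches observed: {observed}, expected at 99%: {expected:.1f}")
```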