risk-factor-based scenario analysis – Return to RiskMetrics

Look at [[Return to RiskMetrics]]. Some risk management theorists have a (fairly sophisticated) framework about scenarios. I feel it’s worth studying.

Given a portfolio of diverse instruments, we first identify the individual risk factors, then describe specific scenarios. Each scenario is uniquely defined by a tuple of values, one per risk factor (e.g. a 3-tuple if there are 3 factors). Under each scenario, each instrument in the portfolio can be priced.

I think one of the simplest set-ups I have seen in practice is the Barcap 2D grid, with stock +/- percentage changes on one axis and implied-vol figures on the other axis. This grid can create many scenarios for an equity derivative portfolio.
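
To make this concrete, here is a minimal sketch of such a 2D spot/vol grid, assuming a single vanilla call repriced with Black-Scholes; the bump sizes and market data are my own illustrative choices, not the actual Barcap setup.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, vol, t, r=0.0):
    """Black-Scholes price of a European call."""
    d1 = (log(spot / strike) + (r + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# base market data (illustrative numbers)
spot0, vol0, strike, expiry = 100.0, 0.20, 105.0, 0.5
base_px = bs_call(spot0, strike, vol0, expiry)

# 2D grid: spot bumps on one axis, implied-vol bumps on the other
spot_bumps = [-0.10, -0.05, 0.0, 0.05, 0.10]   # +/- percentage moves in spot
vol_bumps  = [-0.05, 0.0, 0.05, 0.10]          # absolute vol-point moves

for dv in vol_bumps:
    row = []
    for ds in spot_bumps:
        scenario_px = bs_call(spot0 * (1 + ds), strike, vol0 + dv, expiry)
        row.append(scenario_px - base_px)       # P&L under this scenario
    print(f"vol {dv:+.2f}: " + "  ".join(f"{pnl:+7.2f}" for pnl in row))
```

Each cell of the printed grid is the portfolio P&L under one (spot bump, vol bump) scenario; a real portfolio would simply sum this over all its instruments.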

I feel it’s important to point out that two factors can have non-trivial interdependence and influence each other. (Independence would be nice. In a small sample you may actually observe statistical independence, but in another sample from the same population you may not.) The correlation between the risk factors is therefore monitored and measured.

risk mgmt gotcha – monitor current market

[[FRM1]] P127 gave an example of this common failure by the risk mgmt department — historical data (even very recent data) can be misleading and under-report VaR during a crisis.

I guess it takes unusual insight and courage to say “Historical data exhibits a variance that’s too low for the current situation. We must focus on the last few days/hours of market data.”

risk mgmt gotcha – correlation spike

[[FRM1]] P126 gave an example to illustrate a common failure of risk mgmt in finance firms: they fail to anticipate that correlations across the board, even the correlation between credit risk and market risk, will increase during a crisis.

I remember the Goldman Sachs annual report highlighting the correlations between asset classes.

Increased correlation shifts the distribution of potential loss, presumably leftward (toward bigger losses). Therefore a realistic VaR analysis needs to factor it in during a crisis.
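
To see the effect numerically, here is a tiny sketch of two-asset parametric VaR as a function of the correlation rho; all exposures and vols are made-up illustrative numbers.

```python
from math import sqrt

def parametric_var(w1, w2, sigma1, sigma2, rho, z=2.33):
    """One-day 99% parametric VaR (z = 2.33) of a two-asset portfolio,
    as a function of the correlation rho between the two assets."""
    port_sigma = sqrt(w1**2 * sigma1**2 + w2**2 * sigma2**2
                      + 2 * rho * w1 * w2 * sigma1 * sigma2)
    return z * port_sigma

w1, w2 = 1_000_000, 1_000_000        # dollar exposures (illustrative)
sigma1, sigma2 = 0.01, 0.015         # daily vol of each asset

for rho in (0.0, 0.3, 0.6, 0.9):     # correlation spiking toward 1 in a crisis
    print(f"rho={rho:.1f}  VaR={parametric_var(w1, w2, sigma1, sigma2, rho):,.0f}")
```

As rho climbs toward 1, the diversification benefit evaporates and the VaR figure grows accordingly.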

hazard rate – online resources

http://www.mbaskool.com/business-concepts/statistics/8930-hazard-rate.html
is decent —

Average failure rate is the fraction of the number of units that fail during an interval, divided by the number of units alive at the beginning of the interval. In the limit of smaller time intervals, the average failure rate measures the rate of failure in the next instant for those units surviving to time t, known as instantaneous failure rate.

http://en.wikipedia.org/wiki/Failure_rate#Failure_rate_in_the_continuous_sense
is more mathematical.

http://www.omdec.com/articles/reliability/TimeToFailure.html has a short list of jargon.
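
To tie the jargon together, here is a tiny sketch (with made-up numbers) of the mbaskool definition above: the average failure rate per interval is the number failing during the interval divided by the number alive at the start; as the intervals shrink, this tends to the instantaneous hazard rate.

```python
# failures observed per interval vs. units still alive at the interval start
# (illustrative numbers: 1000 units at time 0, tracked over 5 equal intervals)
alive = 1000
failures_per_interval = [50, 40, 35, 30, 45]

for i, failed in enumerate(failures_per_interval):
    hazard = failed / alive      # average failure rate over this interval
    print(f"interval {i}: alive={alive:4d}  failed={failed:3d}  hazard={hazard:.3f}")
    alive -= failed
```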

2 key risks in U.S. treasury ^ IRS trading

The LTCM case very briefly outlined that for a stand-alone 20Y T-bond position, there are

1) interest rate risk [1], and
2) liquidity risk of that particular instrument. I guess this means the bid/ask spread could become quite bad when you need to sell the bond to get much-needed cash.

The LTCM case analysed a swap-spread trade, whereby the IRS position provides a perfect hedge for the interest rate risk. I think we can also consider a duration hedge.
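
As a rough sketch of what a duration (DV01) hedge would look like (all DV01 figures below are made-up illustrative numbers), the swap notional is simply sized so that its DV01 offsets the bond's DV01.

```python
# hedge a long 20Y T-bond position with a payer IRS by matching DV01
# (all numbers illustrative)
bond_notional = 100_000_000
bond_dv01_per_mm = 1_750     # dollar value of 1bp per $1mm face, 20Y bond
swap_dv01_per_mm = 1_600     # dollar value of 1bp per $1mm notional, 20Y swap

bond_dv01 = bond_notional / 1_000_000 * bond_dv01_per_mm
swap_notional = bond_dv01 / swap_dv01_per_mm * 1_000_000

print(f"bond DV01     : ${bond_dv01:,.0f} per bp")
print(f"swap notional : ${swap_notional:,.0f} to offset it")
```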

As to liquidity risk, I feel the T bond is more liquid than the IRS.

generalized tail risk n selling puts, intuitively

There are many scenarios falling into this basic pattern — earn a stream of small incomes periodically, and try to avoid or hedge the tail risk of a sudden swing against you; see the sketch after the examples below. (Some people call these disasters “black swans” but I think this term has a more precise definition.)
eg: AIG selling CDS insurance…

eg. market maker trading against an “informed trader”, suffering adverse selection.

eg: Currently, I’m holding a bit of US stocks in an overvalued market…
eg: merger arbitrage …
eg: peso problem and most carry trades.
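
Here is a quick Monte Carlo sketch of the pattern, using a short OTM put with made-up numbers: a small premium most months, and occasionally a loss far bigger than the accumulated premiums.

```python
import random

# Monte Carlo sketch: sell a 1-month 10%-OTM put every period, collect a small
# premium, and once in a while eat a large loss when the underlier gaps down.
random.seed(0)
spot, strike, premium = 100.0, 90.0, 0.80     # illustrative numbers

pnl = []
for _ in range(10_000):
    ret = random.gauss(0.0, 0.05)             # a "normal" month
    if random.random() < 0.02:                # rare jump scenario
        ret -= 0.30
    terminal = spot * (1 + ret)
    payoff = max(strike - terminal, 0.0)      # what we owe the put buyer
    pnl.append(premium - payoff)

print(f"mean monthly P&L : {sum(pnl)/len(pnl):+.3f}")
print(f"worst month      : {min(pnl):+.3f}")
print(f"months w/ a loss : {sum(p < 0 for p in pnl)} of {len(pnl)}")
```

The mean P&L per month is small (and can even be positive), yet the worst month dwarfs years of collected premiums, which is exactly the short-tail-risk profile.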

back testing a VaR process, a few points

–Based on http://www.jpmorgan.com/tss/General/Back_Testing_Value-at-Risk/1159398587967

Let me first define my terminology. If your VaR “window” is 1 week, that means you run it on Day 1 to forecast the potential loss from Day 1 to Day 7. You can run such a test once a day, or once every 2 days etc — up to you.

The VaR process, big and complicated, is supposed to be a watchdog over the traders and their portfolios, but how reliable is this watchdog? VaR is a big system and a big process involving multiple departments, hundreds of software modules, virtually the entire universe of derivatives and other securities, plus pricing models for each asset class. Most of these have inherent inaccuracies and unreliability. The most visible inaccuracy is in the models (including realized volatilities).

VaR is a “policeman”, but who will police the policeman? A regular back test is needed to keep the policeman honest — keep VaR realistic and consistent with market data. Otherwise VaR can become a white elephant and the emperor’s new clothes.
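
Here is a crude sketch of the exception-counting idea behind a back test, run on simulated P&L (all numbers illustrative): if the 99% VaR is honest, the realized loss should exceed the forecast on roughly 1% of days.

```python
import random

# crude backtest sketch: count "exceptions", i.e. days where the realized loss
# exceeded the VaR forecast made the day before (simulated daily P&L)
random.seed(1)
days = 250
var_99 = 2.33 * 0.01 * 1_000_000             # 99% 1-day parametric VaR

exceptions = 0
for _ in range(days):
    pnl = random.gauss(0, 0.01 * 1_000_000)  # simulated daily P&L
    if -pnl > var_99:                        # loss bigger than the forecast
        exceptions += 1

print(f"exceptions: {exceptions} out of {days} days "
      f"(expect ~{days * 0.01:.1f} if the VaR model is well calibrated)")
```

Far more exceptions than expected means the VaR model understates risk; far fewer means it is too conservative and ties up excess capital.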

beta, briefly

Beta is calculated using regression analysis, and you can think of beta as the tendency of a security’s percentage returns (not the continuously compounded returns) to respond to swings in the market (represented by a benchmark). A beta of 1 indicates that the security’s price will move with the same magnitude as the market. A beta of less than 1 means that the security will be less volatile than the market. A beta of greater than 1 indicates that the security’s price will be more volatile than the market. For example, if a stock’s beta is 1.2, it’s theoretically 20% more volatile than the market.
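
For the record, the regression boils down to beta = Cov(stock return, market return) / Var(market return); a tiny sketch with made-up return series:

```python
# beta as the slope of a regression of stock returns on benchmark returns,
# i.e. Cov(r_stock, r_mkt) / Var(r_mkt)   (illustrative return series)
r_mkt   = [0.010, -0.005, 0.007, -0.012, 0.004, 0.009, -0.003]
r_stock = [0.014, -0.008, 0.006, -0.016, 0.007, 0.012, -0.002]

def mean(xs):
    return sum(xs) / len(xs)

m_mkt, m_stk = mean(r_mkt), mean(r_stock)
cov = sum((a - m_mkt) * (b - m_stk) for a, b in zip(r_mkt, r_stock)) / (len(r_mkt) - 1)
var = sum((a - m_mkt) ** 2 for a in r_mkt) / (len(r_mkt) - 1)

beta = cov / var
print(f"beta = {beta:.2f}")   # > 1 means the stock amplifies market moves
```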

For example, many utilities stocks have a beta of less than 1. Conversely, most high-tech stocks have a beta of greater than 1, offering the possibility of a higher rate of return, but also posing more risk.

Zero beta means zero correlation with the index (i.e. the market), i.e. insulated from market swings. (Strictly speaking, uncorrelated is weaker than independent.)

Negative beta means anti-correlation, or bucking the market.

If the market is always up 10% and a stock is always up 20%, the correlation is one (correlation measures direction, not magnitude). However, beta takes into account both direction and magnitude, so in the same example the beta would be 2 (the stock is up twice as much as the market).

I feel Beta is more important to the buy-side than the sell-side. Note many sell-side megabanks have buy-side units too.

Besides the standard beta on returns, there’s also what I call “vol-space” beta — where a beta of 1 means IBM’s realized vol over the past 2 years has the same magnitude of ups and downs as the S&P (the benchmark) realized vol. This vol-space beta is calculated using 2 years of historical volatility numbers.

go short on tail-risk — my take

Many sell-side [1] traders are described as being short tail-risk. In other words, they go short on tail-risk.

[1] some hedge funds too

*** If you are long tail-risk (insurance buyers), you are LONGING for it to increase. You stand to profit if tail risk increases, such as underlier moving beyond 3sigma. Eg — buy deep OTM options, buy CDS insurance.

*** If you are short tail-risk (insurance sellers), you hope tail risk drops; you mentally downplay the extreme possibilities; you stand to Lose if tail risk actually escalates. Eg — sell OTM options, sell CDS insurance aggressively (below the market).

As a result, you would earn premiums quarter after quarter, but when an extreme tail event does materialize, your loss might not be fully compensated by the premiums, because the insurance was (statistically) underpriced: you underestimated the probability and magnitude of the tail risk.

Maybe you (the trader) have already been paid the bonus, so the consequence is borne by the firm that sold the insurance. In this sense, the compensation system encourages traders to go short on tail risk.