Monte Carlo is, as far as I know, the only practical way to estimate CVA…
Classic PresentValue discounts each cash flow, but ignores the possibility of non-payment.
CVA simulates more than 1000 “paths” into the future over 50 to 75 years. Each path probably has a series of (future) valuation dates. On each valuation date, there’s a prediction of the market. A prediction includes many market factors.
Each path can be described as a predicted evolution of the entire market.
On each path, a specific shock or stress can be applied.
I guess that on each valuation date, the net amount Alan owes me is predicted (and likewise the net amount Bob owes me), known as my exposure to Alan. Multiply this exposure by the probability of Alan’s default and by the loss-given-default (one minus the recovery rate), and we get a kind of predicted loss. I think this is the basis of the CVA.
Most of the contracts are derivative contracts. Max Expiry is 75 years.
Even exchange-traded assets’ valuations need to be predicted on a given valuation date along a simulation path. That’s because an exchange-traded product could be collateral. A falling collateral value reduces the recovery amount, so this valuation affects the exposure indirectly.
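The exposure × default-probability × loss-given-default idea above can be sketched as follows. This is a toy sketch, not a CVA engine: the random-walk exposure model, the flat hazard rate and every parameter are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_dates = 1000, 20           # paths x future valuation dates
dt = 0.5                              # half-year steps, 10-year horizon (made up)
discount_rate = 0.03                  # flat discounting (assumption)
hazard = 0.02                         # flat default intensity (assumption)
recovery = 0.4
lgd = 1.0 - recovery                  # loss given default = 1 - recovery

# Toy exposure model: a driftless random walk, floored at zero because
# only positive exposure (the counterparty owes us) is lost on default.
shocks = rng.normal(0.0, 1.0, size=(n_paths, n_dates)).cumsum(axis=1)
exposure = np.maximum(shocks, 0.0)

t = dt * np.arange(1, n_dates + 1)
discount = np.exp(-discount_rate * t)
# Marginal default probability in each interval under the flat hazard rate.
pd_marginal = np.exp(-hazard * (t - dt)) - np.exp(-hazard * t)

expected_exposure = exposure.mean(axis=0)   # EE profile, averaged across paths
cva = lgd * np.sum(discount * pd_marginal * expected_exposure)
```

The sum over dates of (discounted expected exposure × marginal default probability), scaled by LGD, is the predicted-loss number the notes above describe.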
Look at [[return to risk riskMetrics]]. Some risk management theorists have a (fairly sophisticated) framework about scenarios. I feel it’s worth studying.
Given a portfolio of diverse instruments, we first identify the individual risk factors, then describe specific scenarios. Each scenario is uniquely defined by a tuple of numbers, one per factor (e.g. a 3-tuple if there are 3 factors). Under each scenario, each instrument in the portfolio can be priced.
I think one of the simplest set-ups I have seen in practice is the Barcap 2D grid, with stock +/- percentage changes on one axis and implied-vol shifts on the other. This grid can generate many scenarios for an equity derivative portfolio.
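A grid of that kind can be sketched as follows, assuming a plain Black-Scholes revaluation of a single call; the bump sizes and the one-option "portfolio" are made up for illustration.

```python
import math

def bs_call(spot, strike, vol, t, r=0.0):
    """Black-Scholes call price (no dividends) -- the standard formula."""
    d1 = (math.log(spot / strike) + (r + 0.5 * vol * vol) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * N(d1) - strike * math.exp(-r * t) * N(d2)

# Hypothetical grid: spot bumps on one axis, implied-vol bumps on the other.
spot0, vol0, strike, expiry = 100.0, 0.20, 100.0, 1.0
base = bs_call(spot0, strike, vol0, expiry)

spot_bumps = [-0.10, -0.05, 0.0, 0.05, 0.10]   # +/- percentage spot moves
vol_bumps = [-0.05, 0.0, 0.05]                 # absolute vol shifts

grid = {}   # (spot_bump, vol_bump) -> scenario P&L vs base
for ds in spot_bumps:
    for dv in vol_bumps:
        pv = bs_call(spot0 * (1 + ds), strike, vol0 + dv, expiry)
        grid[(ds, dv)] = pv - base

worst = min(grid.values())   # worst-case scenario loss over the grid
```

Each cell holds the portfolio P&L under one (spot, vol) scenario; the minimum cell is the grid's worst-case loss.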
I feel it’s important to point out that two factors can have non-trivial interdependence and influence each other. (Independence would be nice. In one (small) sample you may actually observe statistical independence, but in another sample from the same population you may not.) The correlation between the risk factors is monitored and measured.
[[FRM1]] P127 gave an example of a common failure by risk-management departments — historical data (even very recent data) can be misleading and under-report VaR during a crisis.
I guess it takes unusual insight and courage to say “Historical data exhibits a variance that’s too low for the current situation. We must focus on the last few days/hours of market data.”
[[FRM1]] P126 gave an example illustrating a common failure of risk management in finance firms: failing to anticipate that correlations across the board, even between credit risk and market risk, will increase during a crisis.
I remember the Goldman Sachs annual report highlighting the correlations between asset classes.
Increased correlation shifts the distribution of potential P&L, presumably leftward (a fatter loss tail). Therefore a realistic VaR analysis needs to factor this in during a crisis.
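The effect of a correlation spike on a Gaussian VaR can be seen in a two-asset sketch; the volatilities, correlations and confidence multiplier below are made-up illustrative numbers.

```python
import math

def two_asset_var(sigma1, sigma2, rho, z=2.33):
    """99% Gaussian VaR of two unit positions: z times portfolio sigma."""
    port_sigma = math.sqrt(sigma1**2 + sigma2**2 + 2 * rho * sigma1 * sigma2)
    return z * port_sigma

normal_var = two_asset_var(0.02, 0.03, rho=0.2)   # calm-market correlation
crisis_var = two_asset_var(0.02, 0.03, rho=0.9)   # correlations spike in a crisis
```

Same positions, same individual vols: only the correlation assumption changed, and the VaR jumped. A VaR calibrated on calm-market correlations understates crisis risk.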
The LTCM case very briefly outlined that for a stand-alone 20Y T-bond position, there’s
1) interest rate risk, and
2) liquidity risk of that particular instrument. I guess this means the bid/ask spread could widen badly just when you need to sell the bond to raise much-needed cash.
The LTCM case analysed a swap-spread trade, whereby the IRS position provides a perfect hedge for the interest rate risk. I think we could also consider a duration hedge.
As to liquidity risk, I feel the T-bond is more liquid than the IRS.
–Based on http://www.jpmorgan.com/tss/General/Back_Testing_Value-at-Risk/1159398587967 —
Let me first define my terminology. If your VaR “window” is 1 week, that means you run it on Day 1 to forecast the potential loss from Day 1 to Day 7. You can run such a test once a day, or once every 2 days, etc. — up to you.
The VaR process is supposed to be a watchdog over the traders and their portfolios, but how reliable is this watchdog? VaR is a big system and a big process, involving multiple departments, hundreds of software modules, virtually the entire universe of derivatives and other securities, plus pricing models for each asset class. Most of these have inherent inaccuracies and unreliability. The most visible inaccuracy is in the models (including realized volatilities).
VaR is a “policeman”, but who will police the policeman? Regular back-testing is needed to keep the policeman honest — to keep VaR realistic and consistent with market data. Otherwise VaR can become a white elephant and the emperor’s new clothes.
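A minimal back-test of this kind just counts VaR breaches. This is a toy sketch assuming a constant 99% one-day VaR forecast and made-up Gaussian P&L; the "traffic light" threshold is illustrative, not the official Basel rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# In reality the VaR forecast changes every day; a constant forecast
# keeps the counting logic visible.
n_days = 500
daily_pnl = rng.normal(0.0, 1.0, n_days)    # toy daily P&L
var_99 = 2.33                                # 99% one-day VaR forecast (assumed)

breaches = int((daily_pnl < -var_99).sum())  # days where loss exceeded VaR
expected = 0.01 * n_days                     # ~5 breaches if the VaR is honest

# Crude pass/fail check (threshold is illustrative, not the Basel zones):
verdict = "ok" if breaches <= 2 * expected else "VaR looks understated"
```

Far more breaches than expected means the VaR model is understating risk; far fewer means it is probably too conservative, which ties up capital.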
PnL explain is used more by the finance department, less by traders/risk managers. Typical attribution line items:
theta contrib (option decay)
vanna and vega contribs
Beta is calculated using regression analysis, and you can think of beta as the tendency of a security’s percentage returns (not the continuously compounded returns) to respond to swings in the market (represented by a benchmark). A beta of 1 indicates that the security’s price moves in step with the market. A beta of less than 1 means the security is less volatile than the market; a beta of greater than 1 means it is more volatile. For example, if a stock’s beta is 1.2, it is theoretically 20% more volatile than the market.
For example, many utilities stocks have a beta of less than 1. Conversely, most high-tech stocks have a beta of greater than 1, offering the possibility of a higher rate of return, but also posing more risk.
Zero beta means 0 correlation with the index (i.e. the market), i.e. independent, insulated.
Negative beta means anti-correlation, or bucking the market.
If the market is always up 10% and a stock is always up 20%, the correlation is one (correlation measures direction, not magnitude). However, beta takes into account both direction and magnitude, so in the same example the beta would be 2 (the stock is up twice as much as the market).
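The regression view of beta can be sketched as follows. The simulated returns and the "true" beta of 1.5 are made up; the covariance-over-variance formula gives the same number an OLS regression slope would.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate market returns and a stock that is "1.5 beta" by construction,
# then recover beta as cov(stock, market) / var(market).
market = rng.normal(0.0005, 0.01, 1000)                    # daily pct returns
true_beta = 1.5
stock = true_beta * market + rng.normal(0.0, 0.005, 1000)  # plus idiosyncratic noise

# np.cov defaults to ddof=1, so use the matching sample variance.
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
```

The recovered beta lands close to 1.5 but not exactly on it, because the idiosyncratic noise is uncorrelated with the market only in expectation, not in any finite sample — the same sampling issue noted above for factor correlations.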
I feel Beta is more important to the buy-side than the sell-side. Note many sell-side megabanks have buy-side units too.
Besides the standard beta on returns, there’s also what I call a “vol-space” beta — a beta of 1 means IBM realized vol over the past 2 years has moved up and down with the same magnitude as the S&P (the benchmark) realized vol. This vol-space beta is calculated from 2 years of historical volatility numbers.
Many sell-side traders are described as being short tail-risk; in other words, they go short on tail risk. Some hedge funds are too.
*** If you are long tail-risk (insurance buyers), you are LONGING for it to increase. You stand to profit if tail risk materializes, such as the underlier moving beyond 3 sigma. Eg — buy deep OTM options, buy CDS insurance.
*** If you are short tail-risk (insurance sellers), you hope tail risk drops; you mentally downplay the extreme possibilities; you stand to Lose if tail risk actually escalates. Eg — sell OTM options, sell CDS insurance aggressively (below the market).
As a result, you would earn premiums quarter after quarter, but when an extreme tail event does materialize, your loss might not be fully covered by the accumulated premiums, because the insurance was (statistically) underpriced: you underestimated the probability and magnitude of tail risk.
Maybe you (the trader) have already been paid the bonus, so the consequence is borne by the firm selling the insurance. In this sense, the compensation system encourages traders to go short on tail risk.
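The premium-vs-blowup economics above can be sketched with a toy simulation. Everything here is an assumption for illustration: the tail probability, the payout, and the pricing discount are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model of a short tail-risk book: each quarter you sell a deep OTM
# "insurance" contract for a small premium. With small probability a tail
# event hits and you pay out a large amount. If the premium is set below
# the actuarially fair level, most quarters still show steady gains.
n_quarters = 4000
p_tail = 0.01                           # true tail probability per quarter
payout = 100.0                          # loss when the tail event hits
fair_premium = p_tail * payout          # actuarially fair premium
charged_premium = 0.7 * fair_premium    # seller underprices the tail

tail_hits = rng.random(n_quarters) < p_tail
pnl = charged_premium - payout * tail_hits   # +0.7 most quarters, 0.7 - 100 on a hit

share_profitable = (pnl > 0).mean()     # fraction of "good" quarters
total_pnl = pnl.sum()                   # long-run P&L of the underpriced book
```

Almost every quarter shows a profit, which is exactly what makes the strategy look attractive bonus-cycle by bonus-cycle, while the long-run expectation of an underpriced book is negative.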