long-term daily stock returns ~ N(m, sigma)

Label: intuitiveFinance

A basic assumption in BS and most models can be loosely stated as “Daily stock returns are normally distributed, approximately.” I used to question this assumption. I used to feel that if the 90% confidence interval of an absolute price change in IBM is $10, then it would still be that way 20 years from now. Now I think differently.

When IBM's price was $1, the daily return was typically a few percent, i.e. a few cents of rise or fall.

When IBM's price was $100, the daily return was still a few percent, i.e. a few dollars of rise or fall.

So the return tends to stay within a narrow range like (-2%, 2%), regardless of the magnitude of the price.

More precisely, the BS assumption is about the log return, i.e. log(price relative). This makes sense. If the %return were normal, then what would a -150% return be?
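A minimal sketch of that point in plain Python (illustrative numbers only): any real-valued log return maps to a positive price, whereas a simple return below -100% would imply a negative price.

```python
import math

S0 = 100.0  # today's price (illustrative)

# A simple (percentage) return below -100% implies an impossible price:
simple_return = -1.50               # the "-150% return"
print(S0 * (1 + simple_return))     # -50.0 -- a negative price, nonsense

# A log return of any real value always maps to a positive price:
for log_return in (-3.0, -0.02, 0.02, 3.0):
    print(S0 * math.exp(log_return))  # always > 0
```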


Hull: estimate default probability from bond prices

label: credit

The arithmetic on P524-525 could be expanded into a 5-pager if we were to explain it to readers with a high-school math background…

There are 2 parts to the math. Part A computes the “expected” (probabilistic) loss from default to be $8.75, for a notional/face value of $100. Part B computes the same quantity (via another route) to be $288.48Q. Equating the 2 parts gives Q = 8.75/288.48 ≈ 3.03%.

Q3: How is the 7% yield used? Where, and in which part?

Q4: why assume defaults happen right before a coupon date?

%%A: a borrower would not declare “in 2 days I will fail to pay the coupon”, because it may receive help at the 11th hour.

– The continuous discounting in Table 23.3 is confusing

Q: Hull explained how the 3.5Y row in Table 23.3 is computed. Why discount to T=3.5Y and not to T=0Y?

The “risk-free value” (Column 4) has a confusing meaning. Hull mentioned earlier a “similar risk-free bond” (a TBond). At the 3.5Y mark, we know this risk-free bond is scheduled to pay all cash flows at future times T=3.5Y, 4Y, 4.5Y, 5Y. We use the risk-free rate of 5% to discount all these cash flows to T=3.5Y, getting $104.34 as the “value of the TBond cash flows discounted to T=3.5Y”.

Column 5 builds on it, giving the “loss due to a 3.5Y default, discounted to T=3.5Y”. This value is further discounted from T=3.5Y to T=0Y in Column 6.

Part B computes a PV relative to the TBond’s value. Actually, Part A is also relative to the TBond’s value.

In the model of Part B, there are 5 coin flips occurring at T = 0.5Y, 1.5Y, 2.5Y, 3.5Y, 4.5Y, with Pr(default_0.5) = Pr(default_1.5) = … = Pr(default_4.5) = Q. (Concretely, imagine that Pr(flip = Tail) is 25%.) Now the law of total probability states

100% = Pr(0.5) + Pr(1.5) + Pr(2.5) + Pr(3.5) + Pr(4.5) + Pr(no default). If we factor in the (discounted) loss amount at each flip, we get

Pr(0.5) * $65.08 + Pr(1.5) * $61.20 + Pr(2.5) * $57.52 + Pr(3.5) * $54.01 + Pr(4.5) * $50.67 + Pr(no default) * $0 = Q * (65.08 + 61.20 + 57.52 + 54.01 + 50.67) = $288.48Q
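A quick sanity check of this arithmetic in plain Python (the five per-default loss figures are the ones quoted above from Hull’s Table 23.3; the $8.75 expected loss is Part A’s result):

```python
# Discounted loss (per $100 face) if default occurs at each of the 5 flip dates,
# T = 0.5Y, 1.5Y, 2.5Y, 3.5Y, 4.5Y -- figures quoted from Hull's Table 23.3
losses = [65.08, 61.20, 57.52, 54.01, 50.67]

# Part B: expected loss = Q * sum(losses), since each flip has probability Q
coefficient = sum(losses)
print(coefficient)        # 288.48

# Part A computed the same expected loss from bond prices: $8.75.
# Equating the two parts and solving for Q:
Q = 8.75 / coefficient
print(f"Q = {Q:.2%}")     # Q = 3.03%
```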

so-called tradable asset – disillusioned

The touted feature of a “tradable” doesn’t impress me. Now I feel this feature is useful (IMHO) only for option pricing theory. All traded assets are supposed to follow a GBM (under the RN measure) with the same growth rate as the MMA, but I’m unsure about most of the “traded assets”, such as —
– IR futures contracts
– weather contracts
– range notes
– a deep OTM option contract? I can’t imagine any “growth” in this asset
– a premium bond close to maturity? Its price must drop to par, right? How can it grow?
– a swap I traded at the wrong time, so its value is decreasing into deeply negative territory? How can this asset grow?

My MSFM classmates confirmed that any dividend-paying stock is disqualified as a “traded asset”. There must be no cash coming into or out of the security! It’s such a contrived, artificial and theoretical concept! Other non-qualifiers:

eg: spot rate
eg: price of a dividend-paying stock – violates the self-financing criterion.
eg: interest rates
eg: swap rate
eg: futures contract’s price?
eg: coupon-paying bond’s price

Black’s model isn’t an interest rate model #briefly

My professors emphasized repeatedly
* the first-generation IR models are the one-factor models, not the Black model.
* the Black model initially covered commodity futures
* However, IR traders adopted Black’s __formula__ (sketched below) to price the 3 most common IR options
** bond options (bond price @ expiry is LN)
** caps (Libor rate @ expiry is LN)
** swaptions (swap rate @ expiry is LN)
** However, it’s illogical to assume that the bond price, Libor rate, and swap rate on the contract expiry date (three N@Ts) ALL follow LogNormal distributions.

* The Black model is unable to model the term structure. I think it doesn’t eliminate arbitrage. I would say that a proper IR model (like HJM) must describe the evolution of the entire yield curve with N points on the curve; N can be 20 or infinite…
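For reference, a minimal sketch of the Black formula those traders adopted, here for a call on a forward rate or futures price (the inputs below are illustrative assumptions, not market data):

```python
from math import log, sqrt
from statistics import NormalDist

def black76_call(F, K, sigma, T, df):
    """Black's formula for a call on a forward.
    F: today's forward, K: strike, sigma: lognormal vol of the forward,
    T: option expiry in years, df: discount factor to the payment date."""
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return df * (F * N(d1) - K * N(d2))

# illustrative: an at-the-money caplet-style call on a 3% forward Libor, 1Y expiry
print(black76_call(F=0.03, K=0.03, sigma=0.20, T=1.0, df=0.97))
```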

mean reversion in Hull-White model

The (well-known) mean reversion is in the drift, i.e. the inst drift, under the physical measure.

(I think historical data shows mean reversion of IR, which is somehow related to the “mean reversion of drift”….)

When changing to the RN measure, the drift is discarded, so it is not relevant to pricing.
However, on a “family snapshot”, the implied vol of the fwd Libor rate is lower the further out the accrual startDate goes. This is observed in the market [1], and this vol won’t be affected when changing measure. The Hull-White model does capture this feature, with fwd-rate vol

σ e^{-a(T-t)}

[1] I think this means the observed ED futures price vol is lower for a 10Y expiry than a 1M expiry.

HJM, again

HJM’s theory started with a formulation containing 2 “free” processes — the drift (alpha) and vol (sigma) of the inst fwd rate:

df_T = α(t) dt + σ(t) dW

Both α and σ are functions of time and could be stochastic.
Note the vol is defined differently from the Black-Scholes vol.
Note this is under the physical measure (not the Q measure).
Note the fwd rate is instantaneous, not simply compounded.
We then try to replicate one zero bond (shorter maturity) using another (longer maturity), and find that the drift process alpha(t) is constrained by the vol process sigma(t), under the P measure. In other words, the 2 processes are not “up to you”. The absence of arbitrage enforces certain restrictions on the drift – see Jeff’s lecture notes.
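For reference, the standard textbook form of this restriction (my addition, not necessarily the notation in the lecture notes): writing the vol of the instantaneous fwd rate f(t,T) as σ(t,T), the no-arbitrage drift under the Q measure is

$$ \alpha(t,T) \;=\; \sigma(t,T) \int_t^T \sigma(t,s)\, ds $$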
Under the Q measure, the new drift process [1] is completely determined by the vol process. This is a major feature of the HJM framework. Hull-White focuses on this vol process and models it as an exponential function of time-to-maturity:

σ e^{-a(T-t)}
That “T” above is confusing. It is a constant in the “df” stochastic integral formula and refers to the forward start date of the (overnight, or even shorter) underlying forward loan, whose accrual period is ~0.
[1] completely unrelated to the physical drift alpha(t)
Why bother to change to the Q measure? I feel we cannot do any option pricing under the P measure. The P measure is subjective; each investor could have her own P measure.
Pricing under Q is theoretically sound but mathematically clumsy due to the stochastic interest rate, so we change numeraire again, to the T-maturity zero bond.
Before HJM, (I believe) the earlier TS models couldn’t support replication between bonds of 2 maturities — bond prices were mutually inconsistent and arbitrage-able.

HJM, briefly

* HJM uses the (inst) fwd rate, which is continuously compounded. Some alternative term-structure models use the “short rate”, i.e. the extreme version of the spot overnight rate. Yet other models [1] use the conventional “fwd rate” (i.e. the compounded 3M loan rate, X months forward).

[1] the Libor Mkt Model

* HJM is mostly under RN measure. The physical measure is used a bit in the initial SDE…

* Under the RN measure, the fwd rate follows a BM (not a GBM) with instantaneous drift rate and instantaneous variance both time-dependent but slow-moving. Since it’s not a GBM, the N@T is Normal, not LN.
** However, to use the market-standard Black’s formula, the discrete fwd rate has to be LN

* HJM is the 2nd-generation term-structure model and one of the earliest arbitrage-free models. In contrast, the Black formula is not even an interest rate model.

[[Hull]] is primarily theoretical

[[Hull]] is first a theoretical / academic introductory book. He really likes theoretical stuff and makes a living on the theories.

As a sensible academic, he recognizes the (theory-practice) “gaps” and brings them to students’ attention. But I presume many students have no spare bandwidth for them; exams and grades are mostly about the theories.

don’t use cash instruments in replication strategies

update — use “bank account” …

Beginners like me often intuitively use cash positions when replicating some derivative position such as a fwd, option or swap.

I think that’s permissible in trivial examples, but in the literature such a cash position is replaced by a bond position or an MMA. I think the reason is that the derivative position invariably has a maturity, so when we lend or borrow cash (or deploy our savings) for this replication strategy, there’s a fixed period with interest. It’s more like holding a bond than a cash position.
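A worked example of the point (standard forward replication; my illustration, not from the original post): to replicate a long forward struck at K maturing at T, hold one share of stock and short K units of the T-maturity zero bond Z:

V_t = S_t - K * Z(t,T),   so at maturity   V_T = S_T - K

The “cash” leg is really -K units of a zero bond: it accrues interest over the fixed period [t,T], exactly like a bond position, not like idle cash.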

Z_0 = discount factor

Background – in mathematical finance, the DF is among the most basic yet practical concepts. Forward contracts (including the equity fwd used in option pricing, FX fwd, FRA…) all rely directly on the DF. The DF features in most arbitrage discussions, including interview questions.

When we talk about a Discount Factor value there are always a few things implicit in the context

* a valuation date, which precedes

* a cash flow date,

* a currency

* a financial system (banking, riskfree bond…) providing liquidity, which in turn guarantees

* a single, consistent DF value, rather than multiple competing values.

* [1] There's no uncertainty in this DF value, unlike the (uncertain) values of most financial contracts

– almost always the DF value is below 1.0

– it's common to chain up 2 DF periods

An easily observable security price that matches a DF value is the market price of a riskless zero-coupon bond, usually written as Z_0. Now we can explain [1] above: once I buy the bond at this price today (the valuation date), the payout is guaranteed, not subject to market movement.

In a math context, any DF value can be represented by a Z_0 or Z(0,T) value. This is the time-0 price of some physical security. Therefore, the physical security “Z” is a concrete representation of the abstract _concept_ of discount factor.
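A minimal sketch of the two dash points above, a DF below 1.0 and chaining 2 DF periods (the flat 5% continuously compounded rate is an illustrative assumption):

```python
from math import exp

def Z(t1, t2, r=0.05):
    """Discount factor from cash-flow date t2 back to valuation date t1,
    assuming a flat continuously compounded rate r."""
    return exp(-r * (t2 - t1))

print(Z(0, 5))             # ~0.7788, below 1.0 as expected
# chaining 2 DF periods: discount 5Y -> 2Y, then 2Y -> 0Y
print(Z(0, 2) * Z(2, 5))   # same as Z(0, 5), up to floating point
```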

math power tools transplanted -> finance

南橘北枳 (“tangerines south of the river turn into bitter oranges north of it”): the same tool transplanted into a different environment changes character.

* martingale originates in gambling…
* Brownian motion originates in biology.
* Heat equation, Monte Carlo, … all have roots in physical science.

These models worked well in the original domains, because the simplifications and assumptions are approximately valid even though clearly imperfect. Assumptions are needed to simplify things and make them /tractable/ to mathematical analysis.

In contrast, financial mathematicians had to make blatantly invalid assumptions. You can find fatal flaws from any angle. Brian Boonstra told me all good quants appreciate the limitations of the theories. A small “sample”:

– The root of the randomness is psychology, or human behavior, not a natural phenomenon. The outcome is influenced fundamentally by human psychology.
– The data shows skew and kurtosis (fat tails).
– There’s often no way to repeat an experiment
– There’s often just a single sample — past market data. Even if you sample it once a day, or once a second, you still operate on the same sample.

rolling fwd measure #Yuri

(label: fixedIncome, finMath)


In my exam Prof Yuri asked about the T-fwd measure and the choice of T.

I said T should match the date of the cashflow. If a deal has multiple cashflow dates, then we would need a rolling fwd measure. See [[Hull]].

However, for a standard swaption, I said we should use the expiry date of the option. The swap rate revealed on that date would be the underlier, assumed to follow a LogNormal distro under the chosen T-fwd measure.

time-series sample — Normal distribution@@

Q: What kind of (time-series) periodic observations can we safely assume a normal distribution?
A: if each periodic observation is under the same, never-changing context

Example: suppose every day I pick a kid at random from my son’s school class and record the kid’s height. Since the inherent distribution of heights in the class is normal, my periodic sample is kind of normal. However, kids grow fast, so there’s an uptrend in the time series; the context is changing. I won’t expect a truly normal distribution in the time-series data set.

In finance, the majority of the important time-series data are price-related, including vol and return. Prices change over time, sometimes on an uptrend, sometimes on a downtrend. Example: if I ask 100 analysts to forecast the upcoming IBM dividend, I could perhaps assume a Normal distribution across the analysts, but not across a time series.

In conclusion, in a finance context my answer to the opening question is “seldom”.

I would even say that financial data is not natural science but behavioral science, and seldom has an inherent Normal distribution. How about the central limit theorem? It requires iid, which is usually not valid here.
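A small simulation of the classroom example above (all numbers synthetic; scipy’s D’Agostino-Pearson test is one common normality check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_days = 1000

# never-changing context: same height distribution every day
stationary = rng.normal(loc=140, scale=8, size=n_days)

# changing context: the kids grow, so the series carries an uptrend
trending = stationary + np.linspace(0, 50, n_days)

# small p-value => reject normality
print(stats.normaltest(stationary).pvalue)  # large: looks normal
print(stats.normaltest(trending).pvalue)    # much smaller: the trend breaks normality
```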

backfill bias n survivorship bias, briefly

based on http://oyc.yale.edu/sites/default/files/midterm_exam1_solutions.pdf

A hedge fund index has a daily NAV based on the weighted average NAV of the constituent funds. If today we discover some data error in the 1999 NAV, we the index provider are allowed to correct that historical data. Immediately, many performance stats would be affected and need updating. Such a data error is rare (I just made it up for illustration). This procedure happens only in special scenarios like the 2 below.

Survivorship bias: when a fund is dropped from an index, past values of the index are adjusted to remove that fund's past data.

Backfill bias: For example, if a new fund has been in business for two years at the time it is added to the index, past index values are adjusted for those two years. Suppose the index return over the last 2 years was 33%, based on weighted average of 200 funds. Now this new fund is likely more successful than average. Suppose its 2Y return is 220%. Even though this new fund has a small weight in the index, including it would undoubtedly boost the 2Y index return – a welcome “adjustment”.
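A toy equal-weight illustration of the arithmetic (the 200-fund / 33% / 220% figures are from the paragraph above; equal weighting is my simplifying assumption):

```python
# 200 incumbent funds averaging a 33% 2Y return (equal weights for simplicity)
incumbent = [0.33] * 200

# a successful new fund with a 220% 2Y return gets backfilled into the index
adjusted = incumbent + [2.20]

print(f"{sum(incumbent) / len(incumbent):.2%}")  # 33.00%
print(f"{sum(adjusted) / len(adjusted):.2%}")    # ~33.93% -- a welcome boost
```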

While backfilling is obviously a questionable practice, it is also quite understandable. When an index provider first launches an index, they have an understandable desire to go back and construct the index for the preceding few years. If you look at time series of hedge fund index performance data, you will often note that indexes have very strong performance in the first few years, and this may be due to backfilling.

math tools used in option pricing vs risk mgmt – my take

In general, I feel statistics, as applied math, is a more widely used branch of math than probability. Both are used in finance, but I feel their usage differs between option pricing and risk mgmt. Both efforts attempt to estimate the future movements of underlier prices; both rely on complicated probability and statistics theories; both try to estimate the “histogram” of a portfolio’s market value on a future date.

In option pricing, the future movement of the underlier is precisely modeled as a GBM (geometric Brownian motion). IMHO stochastic calculus is probability, not stats, and it is what option math uses. When I google “stochastic”, “volatility” always shows up. “Rocket science” in finance is usually about implied volatility — more probability, less statistics.

In VaR, the future is extrapolated from history. The risk manager doesn’t trust theoretical calculations but relies more [1] on historical data. The phrase “statistical risk management” clearly shows the use of statistics in risk management.

In contrast, historical data is used much less in option pricing. Calibration uses current day’s market data.

[1] the “distribution” of past daily returns is used like a distribution of plant growth rates: there’s no reason to believe the plant will grow any faster/slower in the future.

See other posts on probability vs stats. Risk management uses more stats than Option pricing.

Incidentally, if a portfolio includes options, then VaR would need both theoretical probability and statistics.

iid assumption in cumulative return

Time diversification? First look at asset diversification: split $200k into 2 uncorrelated investments, so when one is down, the other might be up. Time-div assumes we could add up the log returns of period1 and period2. Since the 2 values are two N@Ts and very likely non-perfectly-correlated (i.e. corr < 1.0), one of them might cushion the other.

Background — the end-to-end (log) return over 30 years is (by construction) the sum of 30 annual returns —

r_0to1 is a N@T from noisegen1 with mu and sigma

r_1to2 is a N@T from noisegen2

…

r_29to30 is a N@T

So the sum r_0to30 (denoted r) is also a random var with a distribution. If the 30 random variables are IID, then even without assuming normality of noisegen1, the sum has E(r) = 30mu and stdev(r) = sigma * sqrt(30), and is approximately normal by the CLT.


This is a very important and widely used result, at the heart of a lot of quizzes and a lot of financial data analysis. However, the underlying IID assumption is controversial.


* The indep assumption is not too wrong. Stock return today is not highly correlated with yesterday's. Still, AR(1) models do include the preceding period's return…. Mostly harmless.

* The ident assumption is more problematic. We can't go back in time to run noisegen1 again, but there is data to show that the ident assumption is not supported by real data.


Here's my suggestion to estimate noisegen1's sigma. Look at log returns: r_day1to252 = r_day1to2 + r_day2to3 + … + r_day251to252. Assuming the 252 daily return values are a sample from a single noisegenD, we can estimate noisegenD's mean and stdev, then derive stdev(r_day1to252) = stdev_daily * sqrt(252). This stdev is the stdev of noisegen1.
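A minimal numpy sketch of this suggestion (the simulated price series is an illustrative stand-in for real closing prices; ddof=1 gives the usual sample stdev):

```python
import numpy as np

# illustrative stand-in for one year of daily closing prices
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, size=253)))

# daily log returns, treated as a sample from "noisegenD"
daily = np.diff(np.log(prices))          # 252 values
sigma_daily = daily.std(ddof=1)

# scale up to the annual noisegen1, under the IID assumption
sigma_annual = sigma_daily * np.sqrt(252)
print(sigma_daily, sigma_annual)
```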


stdev measures how Fast price fluctuates

Stdev (also an MSExcel function name) measures dispersion in a sample. In the context of historical vol, stdev indicates

1) dispersion
2) how Fast price swings

If you plot log(periodic Price Relative) in a histogram, it will be a bell curve. “Periodic” typically means daily, meaning we compute closingPrice(Day N)/closingPrice(Day N-1) and plot the logs of these price relatives in a histogram.

Any such bell curve will flatten (scatter) out as the sampling period lengthens from 24 hours to 7 days or 30 days; a high historical vol means a flat bell curve even at a high sampling frequency (such as daily or hourly).
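A quick simulated check of the flattening claim (iid normal daily log returns are my assumption; under iid, the 7-day curve is wider by a factor of sqrt(7) ≈ 2.6):

```python
import numpy as np

rng = np.random.default_rng(2)
daily = rng.normal(0, 0.01, size=70_000)   # daily log returns

# lengthen the sampling period: aggregate daily log returns into 7-day returns
weekly = daily.reshape(-1, 7).sum(axis=1)

# the 7-day histogram is wider ("flatter") by ~sqrt(7)
print(daily.std(), weekly.std())
```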

4 distinct sub-domains of financial math

Even though many real-world scenarios involve more than one of the topics below, it’s still practically useful to understand any one of them well.

* bond math including IRS, FRA…
* option
* VAR — correlation, simulation
* credit risk — essential to commercial banks, consumers, corporate borrowers… though I don’t know how much math

Every single topic is important to risk management (like the OC quant team); only half the topics are important to pre-trade pricing. For example, a lot of bond math (duration, OAS, term structure) is not that critical to pre-trade quote pricing.

Derivatives always involve more math than the underlying instruments do, partly because of the expiration built into every derivative instrument. Among derivatives, option math complexity tends to exceed that of swaps, futures, and forwards.

SABR model #pdf of forward rate

IR Swap – often using SABR (citi/barc/Macquarie…)

A key factor in Fixed Income VaR is the vol calculation. You need (?) a pdf of the forward rate, either normal, log-normal or something similar.

A popular model in Citi's muni department is the SABR model. This model describes a single forward rate, such as a forward Libor rate.

Yuri — it is related to CEV and models the vol skew.
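For reference, here are the standard SABR dynamics (my addition; they reduce to CEV when the vol-of-vol ν is zero). The model evolves one forward rate F together with its stochastic vol α:

$$ dF_t = \alpha_t F_t^{\beta}\, dW_t, \qquad d\alpha_t = \nu\, \alpha_t\, dZ_t, \qquad dW_t\, dZ_t = \rho\, dt $$

Here β gives the CEV-style backbone, while ρ and ν shape the skew and smile.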