Any stochastic differential equation has to be specified under a specific probability measure. When we change measure, the SDE will need to change too.
Any BM must be described under a specific probability measure, since the normal distribution assumes a probability measure…
Any expectation must be specified under a specific probability measure.
Greg Lawler's notes point out that conditional expectation (CE) is a random variable, and we frequently take the unconditional expectation (UE) of a CE, or the variance of a CE. The tower property (aka iterated expectations) covers the expectation of a CE…
Simple example: roll 2 dice and guess the sum after one is revealed.
The CE depends on the revealed value. The revealed value is a random variable, so it follows that the CE is a "dependent RV" or "derived RV". In contrast, the UE (unconditional expectation) is determined by the underlying _distribution_, the source of randomness modeled by a noisegen. This noisegen is unknown and uncharacterized, but has time-invariant, "deterministic" properties, i.e. each run uses the same noisegen, unmodified. Example – the dice are all the same. Therefore the UE value is deterministic, with zero randomness. The variance of the UE is 0.
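The dice example above can be sketched in a few lines. The function name `cond_exp_sum` is my own illustrative label, not from the notes:

```python
# Two fair dice. The conditional expectation E[sum | first die] is a
# random variable: its value depends on which face was revealed.
def cond_exp_sum(first):
    # E[first + second | first] = first + E[second] = first + 3.5
    return first + 3.5

# The unconditional expectation is a fixed number with zero variance.
# By the tower property, it is the average of the CE over the 6 faces.
uncond = sum(cond_exp_sum(f) for f in range(1, 7)) / 6
print(uncond)  # 7.0
```

Note how the CE takes 6 different values (4.5 through 9.5) while the UE is the single deterministic number 7.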
Now we can take another look at the sample mean — a statistical rather than probabilistic concept. Since the sample is a random sample, the sample mean is a RV(!), just as the CE is.
Variance of sample mean > 0 i.e. if we take another sample the mean may change. This is just another way of saying the sample mean is a random variable.
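A quick sketch of this point: draw many samples of 10 die rolls each and watch the sample mean wobble around the fixed population mean 3.5 (sample size and trial count are arbitrary choices of mine):

```python
import random

random.seed(1)
# Each sample of 10 die rolls gives a different sample mean -- the sample
# mean is itself a random variable, unlike the fixed population mean 3.5.
means = [sum(random.randint(1, 6) for _ in range(10)) / 10
         for _ in range(1000)]
grand = sum(means) / len(means)
spread = sum((m - grand) ** 2 for m in means) / len(means)
print(grand)   # near 3.5
print(spread)  # clearly > 0: take another sample, get another mean
```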
The pdf curve can be very intuitive and useful in understanding this concept.
1st example — given U, the standard uniform RV between 0 and 1, the PDF is a square box with area under curve = 1. Now what about the derived random variable U’ := 2U? Its PDF must have area under the curve = 1 but over the wider range of [0,2]. Therefore, the curve height must scale DOWN.
2nd example — given Z, the standard normal bell curve, what about the bell curve of 2.2Z? It's a scaled-down, widened bell curve, as http://en.wikipedia.org/wiki/Normal_distribution shows.
In conclusion, when we scale up a random variable by 2.2 to get a "derived" random variable, the density curve must scale DOWN by 2.2 in height while stretching by 2.2 in width — not a simple pointwise multiply, since the argument is rescaled too: f'(x) = f(x/2.2)/2.2. How about the expectation? It must scale UP by 2.2.
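A minimal check of the height-scaling claim using the normal pdf (the helper `normal_pdf` is mine, written from the standard density formula):

```python
import math

def normal_pdf(x, sigma=1.0):
    # density of N(0, sigma^2) at x
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

# The density of 2.2*Z at x is (1/2.2) * pdf(x/2.2): widened by 2.2,
# height scaled DOWN by 2.2, so the area under the curve stays 1.
peak_Z = normal_pdf(0.0)            # peak of the standard bell curve
peak_scaled = normal_pdf(0.0, 2.2)  # peak of the 2.2*Z bell curve
print(peak_Z / peak_scaled)  # 2.2
```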
Jargon: B3 or B(3) means the random position at time 3. I think this is a N@T.
Q: Are the 2 random variables B3 and B5 independent?
A: No. Intuitively, when B3 is very high, like 3 sigma above the mean (perhaps a value of 892), then B5 is likely to Remain high, because over the next 2 seconds the walker follows a centered, symmetric random walk starting from the realized value of 892.
We know the increment from time 3 to 5 is independent of all values up to time 3. Let d be that increment.
B5 = B3 + d, the sum of two independent normal RVs. It's another normal RV, but it depends on both!
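A simulation sketch of this dependence: B3 ~ N(0,3), the increment d ~ N(0,2) independent of B3, and the resulting correlation Corr(B3, B5) = sqrt(3/5) ≈ 0.775 (sample size is my arbitrary choice):

```python
import math
import random

random.seed(0)
# B5 = B3 + d with d independent of B3.
# Cov(B3, B5) = Var(B3) = 3, so Corr = 3 / sqrt(3 * 5) = sqrt(3/5).
n = 200_000
cov = 0.0
for _ in range(n):
    b3 = random.gauss(0, math.sqrt(3))
    b5 = b3 + random.gauss(0, math.sqrt(2))
    cov += b3 * b5
cov /= n
corr = cov / (math.sqrt(3) * math.sqrt(5))
print(corr)  # near sqrt(3/5) ~ 0.7746 -- clearly not independent
```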
A common Brownian Motion quiz ([[Zhou Xinfeng]]): Given a simple BM, what's the probability that it hits 3 before it hits -5?
This is actually identical to the BM with upper and lower boundaries. The BM walker stops when it hits either boundary. We know it eventually stops. At that stopping time, the walker is either at 3 or -5 but which is more likely?
Ultimately, we rely on the optional stopping theorem – at the stopping time, the martingale's value is a random variable whose expectation equals the initial value.
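Working out the OST equation: 0 = 3p + (-5)(1-p), so p = 5/8. A quick sanity check with a ±1 coin-flip walk, which gives the same answer for integer barriers (trial count is my arbitrary choice):

```python
import random

random.seed(42)
# Optional stopping: E[B_tau] = 3*p + (-5)*(1-p) = B_0 = 0  =>  p = 5/8.
def hits_upper_first(up=3, down=-5):
    pos = 0
    while down < pos < up:
        pos += random.choice((1, -1))
    return pos == up

trials = 20_000
p_hat = sum(hits_upper_first() for _ in range(trials)) / trials
print(p_hat)  # near 5/8 = 0.625
```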
Background — There's no way to beat a fair game. Your winnings always have an expected value of 0, because the winnings process is a martingale, i.e. the expected value at any future time equals the last revealed value.
Now, what if there's a stopping time, i.e. a strategy to win and end the game? Are the winnings at that time still a martingale? If not, then we have found a way to beat a fair game.
For a Simple Random Walk (coin flip) with upper/lower bounds, answer is intuitively yes, it's a martingale.
For a simple random walk with only an upper stopping bound (say $1), the answer is — at the stopping time, the winning equals the target level of $1, so the expected winning is also $1, which is NOT the starting value of $0. Not a martingale! And this isn't limited to the martingale betting strategy. So have we found a way to beat the fair game? Well, no.
“There's no way to beat a martingale in __Finite__ time”
You can beat the martingale, but it may take forever. Even worse (a stronger statement): the expected time to beat the martingale and walk away with $1 is infinite.
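A rough illustration of that heavy tail, using a finite step cap of my own choosing (the walk hits +1 almost surely, but the hitting time's mean is infinite, so a cap always leaves some stragglers and some enormous hitting times):

```python
import random

random.seed(7)
# Simple random walk aiming for +1. Almost surely hits it eventually,
# but E[hitting time] = infinity, so the empirical tail is very fat.
def steps_to_hit_one(cap=10_000):
    pos, t = 0, 0
    while pos < 1 and t < cap:
        pos += random.choice((1, -1))
        t += 1
    return t if pos == 1 else None  # None = still below +1 after `cap` steps

results = [steps_to_hit_one() for _ in range(2_000)]
hits = [t for t in results if t is not None]
print(len(hits) / len(results))  # most walks do hit +1 within the cap ...
print(max(hits))                 # ... but some hitting times are huge
```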
The OST has various conditions and assumptions. The Martingale Betting Strategy violates all of them.
https://www.math.nyu.edu/faculty/varadhan/stochastic.fall08/3.pdf has a more detailed definition than Lawler's.
If a discrete martingale M(n) is a SIM (square-integrable martingale), then
E[ M(99)^2 ] is finite, and so is E[ M(99999)^2 ].
Each (unconditional) expectation is, by definition, a fixed number and not random.
Consider another number, "lim_(n-> inf) E[ M(n)^2 ]". For a given martingale, this "magic attribute" is a fixed number, not random. A given square-integrable martingale may have a magic attribute exceeding every finite bound, i.e. it goes to infinity. But this magic attribute isn't relevant when we talk about square-integrable martingales. We don't care about the limit; we only care about "any finite n".
It's relevant to contrast that with quadratic variation. This is a limit quantity, and not random.
For a given process, Quadratic variation is a fixed value for a fixed timespan. For processA, Quadratic variation at time=28 could be 0.56; at time=30 it could be 0.6.
In this case, we divide the timespan into many, many (infinite) small intervals. No such fine-division in the discussion on square-integrable-martingales
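The fine-division idea can be sketched directly: chop [0, T] into many small intervals and sum the squared Brownian increments; for BM the sum converges to the deterministic value T (grid size is my arbitrary choice):

```python
import math
import random

random.seed(3)
# Quadratic variation of a Brownian path over [0, T]: sum of squared
# increments over a fine grid. For BM the limit is deterministic = T.
T, n = 2.0, 100_000
dt = T / n
qv = sum(random.gauss(0, math.sqrt(dt)) ** 2 for _ in range(n))
print(qv)  # close to T = 2.0, whichever path we simulate
```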
See gc.get_referrers() and sys.getrefcount() …
Do you know — list objects are mutable; integer objects are Immutable? In Java/C#/C, by contrast, the content of an int variable can be changed in place.
—-Now look at integer objects, which are Immutable
i=1
i=2 # rebinding
i=1 # rebinding back to the original object (CPython caches small ints)!
j=i # ref count = 2 on the shared object
—-Now look at lists. Do lists hold items by value or reference? More specifically,
– Add() by pbref? yes according to many posts in this blog and also [[python essential ref]]
– Read() by pbref? Yes. pbclone would make myList.edit() meaningless.
– qq(mylist = ….) is by reference? Probably yes, otherwise counter-intuitive.
How do you verify? Use id(mylist) and sys.getrefcount(). Note
i = 1.9
arr = []
id( i )
arr.append(i) # pbref. Ref count = 2 on the "1.9" object
id( arr )
i=2 # Rebinding! Not a "content edit". i now points to a new object, so ref count = 1 on the original object! Counter-intuitive to me.
arr = 2.1 # Rebinding, not a "content edit". arr now points to a new object; the old list dies, so ref count = 0 on the original object "1.9"
id( arr )
Q: is it possible to change the value of arr?
A: arr currently points to a number object. We can rebind this pointer to a "9.89" object. However, if you want to change the content of the pointee object at arr, then it depends on the pointee object's type. For int (and float) objects, content is immutable, so the answer is NOWAY.
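A runnable version of the refcount bookkeeping above. One gotcha: sys.getrefcount() reports one extra reference for its own temporary argument, so it's safer to compare changes rather than absolute numbers:

```python
import sys

x = 3.14                 # a float object; avoid small ints, which are cached
base = sys.getrefcount(x)
arr = [x]                # append/construction is pbref: the list stores a
                         # reference to the same object, not a copy
assert sys.getrefcount(x) == base + 1
x = 2.71                 # rebinding, not a content edit: the 3.14 object
                         # loses the reference held by the name x
assert sys.getrefcount(arr[0]) == base  # only the list slot remains (plus
                                        # the same bookkeeping refs as before)
print(arr[0])  # 3.14 -- the original object survives inside the list
```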
In a Riemann integral, each strip has an area-under-the-curve being either positive or negative, depending on the integrand’s sign in the strip. If the strip is “under water” then area is negative.
In a stochastic integral, each piece is "increment * integrand", where both the increment and the integrand can be positive or negative. In contrast, the Riemann increment is always positive.
With Riemann, if we know integrand is entirely positive over the integration range, then the sum must be positive. This basic rule doesn’t apply to stochastic integral. In fact, we can’t draw a progression of adjacent strips as illustration of stochastic integration.
Even if the integrand is always positive, the stochastic integral often has expectation 0. For an (important) example, in a fair game or a drift-less random walk, the dB part is 50-50 positive/negative.
Think of the "Simple Process" defined on P82 by Greg Lawler.
On P80, Greg pointed out
* if integrand is random but the dx is “ordinary” then this is an ordinary integral
* if the dx is a coin flip, then whether integrand is random or not, this is a stoch integral
So the defining feature of a stoch integral is a random increment
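A simulation sketch of the "positive integrand, zero-mean integral" point, using a discretized sum( integrand * dB ) with a strictly positive, adapted integrand (the particular integrand 1 + B² and the grid sizes are my own choices):

```python
import math
import random

random.seed(5)
# Discretized stochastic integral: sum of integrand * dB.
# The integrand is strictly positive, yet each dB is symmetric around 0,
# so the integral has expectation 0 -- unlike a Riemann sum.
def stoch_integral(n=500, dt=0.001):
    total, b = 0.0, 0.0
    for _ in range(n):
        h = 1.0 + b * b                    # positive, uses only B so far (adapted)
        db = random.gauss(0, math.sqrt(dt))
        total += h * db                    # increment * integrand, either sign
        b += db
    return total

mean = sum(stoch_integral() for _ in range(1_000)) / 1_000
print(mean)  # near 0 despite the integrand being always positive
```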
[[complete guide]] P 457 pointed out VaR systems often need to aggregate cashflow amounts across different deals/positions, based on the “due date” or “maturity date”.
Example — On 12/31 if there are 33 payable amounts and 88 receivable amounts, then they get aggregated into the same bucket.
I think bucketing is more important in these cases:
- a bond has maturity date and coupon dates
- a swap has multiple reset dates
- most fixed income products
- derivative products — always have expiry dates
In StirtRisk, I think we also break down that 12/31 one day bucket by currency — 12/31 USD bucket, 12/31 JPY bucket, 12/31 AUD bucket etc.
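The bucketing itself is just aggregation keyed by (date, currency). A minimal sketch — the records, dates, and amounts below are entirely hypothetical, with receivables positive and payables negative:

```python
from collections import defaultdict

# Hypothetical cashflow records: (due_date, currency, amount).
cashflows = [
    ("2023-12-31", "USD",  1_000.0),   # receivable
    ("2023-12-31", "USD",   -400.0),   # payable, same bucket
    ("2023-12-31", "JPY", 50_000.0),   # same date, different currency bucket
    ("2024-01-15", "USD",    250.0),
]

# One bucket per (date, currency), as in the per-currency daily buckets above.
buckets = defaultdict(float)
for date, ccy, amount in cashflows:
    buckets[(date, ccy)] += amount
print(dict(buckets))  # 12/31 USD nets to 600.0; 12/31 JPY stays separate
```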
Q: why is this so important to VaR and other market risk systems? (I do understand it hits “credit risk”.)
%%A: For floating rate products, the cashflow amount on a future date is subject to market movements
%%A: FX rate on a future date 12/31 is subject to market movements
%%A: contingent claim cashflow depends heavily on market prices.
%%A: if 12/31 falls within 10D, then 10D VaR would be impacted by the 12/31 market factors
See http://athenasc.com/Bivariate-Normal.pdf. The idea is to decompose X into two parts: a multiple of Y, plus something independent of Y, like
X' := the multiple of Y. Specifically, the constant multiplier c is given by c = rho * sigma_x/sigma_y
X'' := X - X', the part of X that's independent of Y.
E[X'' | Y] works out rather counter-intuitively. Let's denote it as Ans.
On one hand, X'' is independent of Y, so E[X'' | Y] = E[X''] = E[X] – c*E[Y] = 0 – 0 = 0. Note these are unconditional expectations.
On the other hand, E[X'' | Y] = E[X – X'|Y] = E[X|Y] – E[cY|Y] = E[X|Y] – cY. Equating the two gives the next line.
Actually E[X|Y] = E[X’|Y] = cY
Lawler’s examples show that given iid standard normals Z1, Z2 …, two “composed” random vars “X” and “Y” can be made independent of each other by adjusting their composition multipliers a b c d:
X:= a Z1 + b Z2
Y:= c Z1 + d Z2
(Simplest example — X:= Z1 + Z2 and Y:= Z1 – Z2. See Lawler’s notes P39.
Note X = Y + 2*Z2, so they look related but are actually independent!)
This independence is counter-intuitive. I'm still looking for an intuitive interpretation.
Note X is never independent of Z1.
For 2 jointly normal RVs (and only jointly normal ones), 0 correlation implies independence…. Therefore, we only need to show E[XY] = E[X]E[Y]. In our simple example, RHS = 0*0 = 0, and
LHS: E[XY] := E[ (Z1+Z2)(Z1-Z2) ] = E[ Z1 Z1 ] – E[ Z2 Z2 ] = 0, since the 2 terms have identical expectations (the cross terms cancel).
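A quick numerical check of the zero covariance (sample size is my arbitrary choice):

```python
import random

random.seed(11)
# X = Z1 + Z2 and Y = Z1 - Z2 built from the SAME pair of iid standard
# normals, yet jointly normal with zero covariance => independent.
n = 100_000
cov = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    cov += (z1 + z2) * (z1 - z2)   # = z1^2 - z2^2 for each draw
cov /= n                           # E[XY] - E[X]E[Y], both means are 0
print(cov)  # near 0
```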
A classic counter-example. There’s a textbook on counter-examples in calculus, in which the authors argued for the importance of counter examples.