Most developers come to the “STL supermarket” looking for containers to complement the basic array. They pick a container, use it, and soon realize they must return to the supermarket for the associated algorithms, iterators etc. Many developers find the STL algorithms hard to avoid; many feel that in their project the algorithms are avoidable if they write home-grown functions to access the containers. Iterators, however, are harder to do without.
In option pricing, we encounter realized vs implied vol (not to be elaborated here). In market risk (VaR etc.), we encounter
past-realized-vol vs forecast-realized-vol. Therefore, we have 3 flavors of vol:
PP) past realized vol, for a historical period, such as Year 2012
FF) forecast realized vol, for a start/end date range that's after the reference date or valuation date
II) implied vol, also for a start/end date range that's after the reference date or valuation date. This valuation date is typically today.
PP has a straightforward definition, which is the basis of FF/II.
Why FF? To assess VaR of a stock (I didn't say “stock option”) over the next 365 days, we need to estimate variation in the stock
price over that period.
FF calculation (whenever you see an FF number) is based on historical data (incidentally the same data underlying PP), whereas II
calculation (whenever you see an II number like 11%) is based on quotes on options whose remaining TTL is estimated to show an
(annualized) vol of 11%.
See http://www.core.ucl.ac.be/econometrics/Giot/Papers/IMPLIED3_g.pdf, which compares FF and II.
I feel the #1 most useful integral (and derivative) is that of the exponential function and the natural log function. Here is a
cheatsheet to be internalized.
for f(x) = e^x, f'(x) = e^x
for f(x) = a^x, f'(x) = ln(a) a^x
for f(x) = ln(x), f'(x) = 1/x
for f(x) = log_a(x), simply recognize f(x) = ln(x) / ln(a), so f'(x) = 1 / (x ln(a))
Now the integrals
For f'(x) = a^x, f(x) = a^x / ln(a)
For f'(x) = ln(x), f(x) = x ln(x) – x. See http://www.math.com/tables/integrals/more/ln.htm. Classic integration by parts.
For f'(x) = log_a(x), simply recognize f'(x) = ln(x) / ln(a), so f(x) = (x ln(x) – x) / ln(a)
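The integration-by-parts step behind the ln(x) entry, written out with u = ln x and dv = dx:

```latex
\int \ln x \, dx
  = x \ln x - \int x \cdot \frac{1}{x} \, dx
  = x \ln x - x + C
```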
[[The Art of Readable Code]] has tips on naming, early return (from functions), de-nesting, variable reduction, and many other topics. Here are my own thoughts.
I said in another blog that “in early phases (perhaps including go-live) of an enterprise SDLC, a practical priority is instrumentation.” Now I feel a 2nd practical priority is readability and traceability, esp. for the purpose of live support. Here are a few unconventional suggestions that some authorities will undoubtedly frown upon.
Avoid cliché method names, as they carry less information in the log. If a util function isn’t virtual but is widely used, try to give it a unique name.
Put numbers in names — the less public but important class/function names. You don’t want other people outside your team to notice these unusual names, but names with embedded numbers are more unique and easier to spot.
Avoid function overloads (a feature unavailable in C). They reduce traceability without adding much value.
Log with individualized, colorful, even outlandish words at strategic locations. They are more memorable and easier to spot in the log.
Asserts – make them easy to use and encourage yourself to use them liberally. Convert comments to asserts.
If a variable (including a field) is edited from everywhere, then allow a single point of write access – choke point. This is more for live support than readability.
How do global variables fit in? If a Global is modified everywhere (no choke point), then it’s hard to understand its life cycle. I always try to go through a single point of write-access, but Globals are too accessible and too open, so the choke point is advisory and easily bypassed.
Q: can we say every linear transformation (LinT) can be /characterized/expressed/represented/ as a multiplication by a specific (often square) matrix? Yes. See P168 [[the manga guide to LT]].
BTW, the converse is easier to prove — every multiplication by a matrix is a LinT, assuming the input is a columnar vector.
Before we can learn the practical techniques of applying MxV to LinT, we have to clear up a lot of abstract and confusing points. LinT is one of the more abstract topics.
1) What kind of inputs go into a LinT? By the LinT definition, real numbers can qualify as input. With this kinda input, a LinT is nothing but a linear function of the input variable x. Both the Domain and the Range of the LinT consist of real numbers.
2) This kinda linear transform is too simple, not too useful, kinda degenerate. The kinda input we are more interested in are vectors, expressed as columnar vectors. With this kinda input, each LinT is represented as a matrix. A simple example is a “scaling” where the input is a 3D vector (x,y,z). You can also say every point in the 3D space enters this LinT and “maps” to a point in another 3D space. The transform specifies how to map Every single point in the input space: “Any point in the 3D space, I know exactly how to map!” This is actually a kind of math Function; in fact, Function is a fundamental concept in Linear Transformation.
This particular transform doesn’t restrict what values of x, y or z can come in. However, the parameters of the function itself are locked down and very specific. This is a specific Function and a specific Mapping.
3) Now, since matrix multiplication can happen between 2 matrices, what if the input is a matrix? Would that be a LinT? I don’t know much about it, but I feel it is not practically useful. The most useful and important kind of linear transformation is the MxV.
4) So what other inputs can a LinT have? I don’t know.
To recap, there are unlimited types of linear transformations, and each LinT has an unlimited, unconstrained Domain. This makes LinT a rather abstract topic. We must divide and conquer.
First divide the “world of linear transforms” by the type of input. The really important type of input is columnar vector. Once we limit ourselves to columnars, we realize every LinT can be written as a LHS multiplying matrix.
To get a concrete idea of LinT, we can start with the 2D space — so all the input columnars come from this space. These can be represented as points in the 2D space.
To my surprise, practically all of the important concepts in introductory linear algebra are related to one operation – a LHS “multiplier” matrix multiplying a RHS columnar (i.e. a columnar vector). I call it an MxV.
I guess LA as a branch grew to /characterize/abstract/ and solve real problems in physics, computer graphics, statistics etc. I guess many of the math tools are about matrices, vectors and … hold your breath … MxV —
– Solving linear system of equations. The coefficients form a LHS square matrix and the list of unknowns form the columnar vector
– transforming 3D space to 2D space — when the columnar is 3D and the matrix is 2×3
– range, image … of a transform function — the function often represented as a MxV multiplication.
– inverse matrix
– linear transform
To keep things simple and concrete, let’s limit ourselves to square matrices up to 3D.
I’m no expert on linear transform (LinT). I feel LinT is about mapping a columnar vector (actually ANY columnar in a 2D space) to another vector in another 2D space. Now, there are many (UNLIMITED actually) such 2D mapping functions. Each _specific_ mapping function can be characterized by a _specific_ LHS multiplying matrix. MxV again!
Now, an eigenvector is about characterizing such a matrix. Suppose we are analysing such a matrix. The matrix accepts ANY columnar from its input space and transforms it. Again there are an UNLIMITED number of input vectors, but someone noticed that one (among a few) input vector is special to this particular matrix. It goes into the transform and comes out perfectly scaled. Suppose this special input vector is (2,1,3); it comes out as (20,10,30). The scaling factor (10 in this case) is the eigenvalue corresponding to the eigenvector.
Let’s stop for a moment. This is rare. Most input vectors don’t come out perfectly scaled – they get linearly transformed but not perfectly scaled. This particular input vector (and any scaled version of it) is special to this matrix. It helps to characterise the matrix.
It turns out for a 3D square matrix, there are up to 3 such special vectors — the eigenvectors of the matrix.
The set of all eigenvectors of a matrix, each paired with its corresponding eigenvalue, is called the eigensystem of that linear transform.
Instead of “eigen”, the terms “characteristic vector and characteristic value” are also used for these concepts.
(The input vector (2,1,3) above should be written as a columnar, actually.)