trFolder = 'datammmSH600519T';

trFiles = dir(fullfile(trFolder, 'trade*2013013*.csv'));

tr1D = read1csv(fullfile(trFolder, trFiles(1).name));

tic

for i=1:length(tr1D.textdata(:,4)) % convert the 4th text column one cell at a time

tt=tr1D.textdata(i,4);

dummy = sscanf(tt{:}, '%f');

end

toc

%%%%%%%%%%%

tic

str2double(tr1D.textdata(:,4)); % vectorized: convert the whole column in one call

toc
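The same loop-vs-bulk contrast can be sketched in Python (my analogue of the timing test above, not the original MATLAB; the data column is fabricated):

```python
# Analogue (in Python, not MATLAB) of the timing test above: convert a
# column of numeric strings one element at a time vs in one bulk pass.
import time

col = ["3.14"] * 100_000              # stand-in for tr1D.textdata(:,4)

t0 = time.perf_counter()
loop_result = []
for s in col:                         # per-element, like the sscanf loop
    loop_result.append(float(s))
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
bulk_result = list(map(float, col))   # one bulk pass, like str2double on the column
t_bulk = time.perf_counter() - t0

print(f"loop: {t_loop:.4f}s  bulk: {t_bulk:.4f}s")
```

In Python the gap is smaller than MATLAB's interpreted-loop penalty, but the bulk call usually still wins.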

# Month: May 2014

# mean reversion, deviation detector, pattern recognition

Case in point — when I saw historical highs in the copper price, I thought it would drop (reversion) within hours, or at most a couple of days, but it just kept rising and destroyed my position. (A necessary condition for my undoing is margin. No margin, no collapse.)

I guess China Aviation Oil might have something like this?

# dark pools – a few observations

The most common Alternative Trading System (ATS) is the dark pool, often operated by a sell-side bank (GS, Nomura etc).

A “transparent” exchange (my own lingo) provides the important task of _price_discovery_. A dark pool doesn’t. It receives the price from the exchanges and executes trades at the mid-quote.

A market order can’t specify a price. You can think of a market buy order as a marketable limit order with price = infinity. Therefore, when a market order hits a limit order, they execute at the limit price. When 2 limit orders cross, they execute at the “earlier” limit price.

Therefore, on the exchange, I believe all trades execute either on the best bid price or best ask. I guess all the mid-quote executions happen on the ATS’s.
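To make the resting-price rule concrete, here is a toy matcher (my own sketch, not any venue’s actual logic) where a market buy is just a limit buy with price = infinity:

```python
# Toy price-priority matcher (illustrative sketch only).
import math

book_asks = [(100.02, 5), (100.03, 7)]   # resting sell limits: (price, qty), best first

def limit_buy(price, qty):
    """Fill against resting asks no higher than `price`, at the RESTING limit price."""
    fills = []
    while qty > 0 and book_asks and book_asks[0][0] <= price:
        ask_px, ask_qty = book_asks[0]
        take = min(qty, ask_qty)
        fills.append((ask_px, take))     # executes at the resting order's price
        qty -= take
        if take == ask_qty:
            book_asks.pop(0)
        else:
            book_asks[0] = (ask_px, ask_qty - take)
    return fills

def market_buy(qty):
    """A market buy is a marketable limit buy with price = +infinity."""
    return limit_buy(math.inf, qty)

fills = market_buy(6)
print(fills)    # [(100.02, 5), (100.03, 1)] -- always at best ask, never at mid-quote
```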

A dark pool is required to report trades to the regulator, but often with a delay a few seconds longer than an exchange’s.

A dark pool may define special order types besides the standard types like limit orders and market orders.

Forex is quote driven, not order driven. Forex has no exchange. The dominant market is the interbank market. Only limit orders [1] are used. However, within a private market operated by a single dealer, a “market order” type can be defined. I feel the rules are defined by the operator, rather than some exchange regulator.

[1] A Forex limit order is kind of fake – unlike the exchange’s guarantee, when you hit a fake limit order that dealer may withdraw it! I believe this is frowned upon by the market operator (often a club of FX banks), so dealers are pressured to avoid this practice. But I guess a dealer may need this “protection” in a fast market.

# print a c# array – one-liner

myArray.Aggregate("", (a, b) => a + b + ", ")

Note this leaves a trailing ", " — string.Join(", ", myArray) avoids that.

# use YC slope to predict 5Y bond’s return over the next 12M

However, if we are unlucky, the return factor (observable in a year) could come in below today’s riskfree return factor. (Note both deals cover the same loan period.)

* But then, we could cancel our plan and hold the bond to maturity and realize a total return of 44%. This is somewhat risky, because bond yield could rise further beyond 8.8%, hurting our NAV before maturity.

* Crucially, if the return over the next 12 months turns out to be lower than riskfree rate, then the subsequent 4 years must return more than 8.8% pa, since the return-till-maturity is fixed at 44%.

I have a spreadsheet illustrating that yield shifts in the next year may hurt the then NAV but the total return till maturity is unaffected.
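A quick sketch of the arithmetic (my own numbers, using the post’s simple-return convention where 8.8% × 5 = 44%):

```python
# Simple-return sketch: the 5Y total return is pinned at 44% (8.8% x 5),
# so a weak first year forces the remaining 4 years to over-deliver.
total_5y = 0.44
year1 = 0.02                      # hypothetical: year 1 comes in below riskfree

required_pa = (total_5y - year1) / 4   # what the last 4 years must average
print(f"remaining 4 years must average {required_pa:.2%} pa")   # 10.50%, above 8.8%
```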

# increasing corporate bond issues -> swap spread narrowing

Look at the LTCM case.

Almost all the issuers pay fixed coupons. Many of them want to swap to receive fixed (and pay floating). This creates an increasing supply of the Libor floating stream (LFS) and an increasing demand for the fixed rate. Suppose Mark is the only swap dealer out there; he could then set the swap spread as low as he likes, so low that Mark’s pay-fixed rate is barely above the treasury yield.

Note increasing demand for the fixed rate doesn’t push it higher but rather hammers it down. Here’s why — if more job seekers want to earn a fixed salary as a carpenter, that fixed salary would drop.

Oversupply of bonds suppresses bond prices and increases bond yields. Oversupply of bank loans suppresses interest rates. I get many credit-line promotion calls offering very low interest rates.

—

Now I feel it’s easier to treat the Libor floating stream (LFS) as an asset. The price is the swap spread.

When there’s over-supply of LFS, swap spread will tighten;

When there’s over-demand of LFS, swap spread will widen.

# IRS – off-balancesheet #T-bond repo

The LTCM case P12 illustrated (with an example) a key motivation/benefit of IRS — off balance sheet. The example is related to the swap spread trade briefly described in other posts.

For a simple T-bond purchase with repo financing, the full values (say $500m) of the bond and the loan appear on the balance sheet, increasing the fund’s leverage ratio. In contrast, if there’s no T-bond purchase, and instead we enter an IRS providing the same(?? [1]) interest rate exposure, then the notional $500m won’t appear on balance sheet, resulting in a much lower leverage ratio. Only the net market value of the existing IRS position is included, usually a small value above or below $0. (Note IRS market value is $0 at inception.)

[1] An IRS position receiving fixed (paying float) is considered similar to the repo scenario. The (overnight?) rollling repo rate is kind of floating i.e. determined at each rollover.

Other positions to be recorded off balance sheet ? I only know futures, FX swaps, …

# UIP carry trade n risk premium

# python performance, brief notes #2

Compared to java, Python is slower and less portable. Solution – Jython creates java bytecode.

In Quartz, python performance is considered a real issue.

Toa Payoh library has a book on Python performance…

Compiling to C … cython.

# 100-leveraged long Treasury position

# merger arbitrage, basics

# alpha or beta@@ illustrated with treasury spread

# 2 key risks in U.S. treasury ^ IRS trading

The LTCM case very briefly outlined that for a stand-alone 20Y T bond position, there’s

1) interest rate risk [1], and

2) liquidity risk of that particular instrument. I guess this means the bid/ask spread could become quite bad when you need to sell the bond to get much-needed cash.

LTCM case analysed a swap spread trade, whereby the IRS position provides a perfect hedge for the interest rate risk. I think we can also consider duration hedge.

As to liquidity risk, I feel T bonds are more liquid than IRS.

# minimize locking – queue for producer/consumer

I was asked this question in a big bank IV.

Q: if our queue needs to be as fast as possible, we want to avoid a global lock. How?

%%A1: multi-queue, based on cusip or account prefix. I implemented this partially in a JPM take-home coding test

%%A2: if we are very sure the queue is never depleted, then use 2 locks, one at each end, so consumer threads only contend for the consumer lock.

%%A3: lock free queues are probably quite common in c#, java and c++

I feel in reality there is usually some bound on the capacity of the producer, consumer or queue. Sometimes the producer will be too fast (overflow) or too slow (cross), so a queue without any bound check is unreliable in practice.
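Answer A2 can be sketched as a two-lock linked-list queue, in the spirit of the classic Michael–Scott two-lock queue (a minimal illustration, not production code):

```python
import threading

class TwoLockQueue:
    """Two-lock FIFO: producers contend only on tail_lock, consumers only on
    head_lock. A dummy node keeps the two ends apart (Michael-Scott style)."""

    class _Node:
        __slots__ = ("value", "next")
        def __init__(self, value=None):
            self.value = value
            self.next = None

    def __init__(self):
        dummy = self._Node()
        self.head = dummy                  # consumers advance head
        self.tail = dummy                  # producers advance tail
        self.head_lock = threading.Lock()
        self.tail_lock = threading.Lock()

    def put(self, value):
        node = self._Node(value)
        with self.tail_lock:
            self.tail.next = node
            self.tail = node

    def get(self):
        """Non-blocking: returns None when empty (real code would block or spin)."""
        with self.head_lock:
            first = self.head.next
            if first is None:
                return None
            self.head = first              # old dummy becomes garbage
            return first.value

q = TwoLockQueue()
for i in range(3):
    q.put(i)
drained = [q.get() for _ in range(4)]
print(drained)    # [0, 1, 2, None]
```

The dummy node is what makes the two locks independent: head and tail never point at the same live node while the queue is non-empty.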

# python shortcomings for large enterprise app

– performance, efficiency. Compare — each C statement compiles to just a few machine instructions.

-? scalability

– threading – GIL, no precise control

– industrial-strength commercial support. Only c++/java/c# have big tech powerhouses behind them. Linux is widely adopted by enterprises, but it is an exception that proves the rule.

– ? commercial auxiliary software to support the ecosystem, including IDE, debuggers, jars, code generators, profilers, … I feel python has these.

-? most enterprise apps are now in java or c# as these are proven, in terms of performance, resource scalability, commercial support etc.

# generalized tail risk n selling puts, intuitively

E.g. a market maker trading against an “informed trader”, suffering adverse selection.

# Poisson basics #2

Derived rigorously from the binomial when the number of coins (N) is large. Note the sample size N has completely dropped out of the probability function. See Wolfram.

Note the rigorous derivation doesn’t require p (i.e. probability of head) to be small. However, Poisson is useful mostly for small p. See book1 – law of improbable events.

Only for small values of p can the Poisson distribution simulate the binomial distribution, and it is much easier to compute than the binomial. See umass and book1.

Actually, it is only with rare events (i.e. small p, NOT small r) that Poisson can successfully mimic the binomial distribution. For larger values of p, the normal distribution gives a better approximation to the binomial. See umass.

Poisson is applicable where the interval may be time, distance, area or volume, but let’s focus on time for now. Hence the term “Poisson Process”. The length of the “interval” is never mentioned in the Poisson or binomial distributions. The Poisson distribution vs the Poisson process — these are 2 rather different things, easily confused. I think it’s like the Gaussian distro vs Brownian motion.

I avoid “lambda” as it’s given a new meaning in the Poisson _process_ description — see HK.

Poisson is discrete, meaning the outcome can only be a non-negative integer. However, unlike binomial, the highest outcome is not “all 22 coins are heads” but unbounded. See book1. From the binomial viewpoint, the number of trials (coins) during even a brief interval is infinitely large.

—Now my focus is estimating occurrences given a time interval of varying length. HK covers this.

I like to think of each Poisson process as a noisegen, characterized by a single parameter “r”. If 2 Poisson processes have identical r, then the 2 processes are indistinguishable. In my mind, during each interval, the noisegen throws a large (actually inf.) number of identical coins with small p. This particular noisegen machine is programmed without a constant N or p, but the product of N*p i.e. r is held constant.

Next we look at a process where the r is proportional to the interval length. In this modified noisegen, we look at a given interval, of length t. The noisegen runs once for this interval. The hidden param N is proportional to t, so r is also proportional to t.
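The small-p claim is easy to check numerically, holding r = N*p fixed (stdlib only; both pmf formulas written out by hand):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, r):
    return math.exp(-r) * r**k / math.factorial(k)

r = 3.0
# small p: N=3000 coins, p=0.001 -- Poisson tracks the binomial closely
err_small_p = max(abs(binom_pmf(k, 3000, 0.001) - poisson_pmf(k, r)) for k in range(10))
# large p: N=6 coins, p=0.5 -- same r, but Poisson fits poorly
err_large_p = max(abs(binom_pmf(k, 6, 0.5) - poisson_pmf(k, r)) for k in range(7))
print(err_small_p, err_large_p)    # the first error is far smaller
```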

References –

http://mathworld.wolfram.com/PoissonDistribution.html – wolfram

http://www.umass.edu/wsp/resources/poisson/ – umass

My book [[stat techniques in biz and econ]] – book1

http://www.math.ust.hk/~maykwok/courses/ma246/04_05/04MA246L4B.pdf – HK

# Poisson (+ exponential) distribution

See also post with 4 references including my book, HK, UMass.

Among discrete distributions, Poisson is one of the most practical yet simple models. I now feel the Poisson model is closely linked to the binomial —

* the derivation is based on the simpler binomial model – tossing unfair coin N times

* Poisson can be an approximation to the binomial distribution when the number of coins is large but not infinite. In the limit of infinite N, I feel Poisson is exact.

I believe this is probability, not statistics. However, Poisson is widely used in statistics.

Eg: Suppose I get 2.4 calls per day on average. What’s the probability of getting 3 calls tomorrow? Let’s evenly divide the period into many (N) small intervals. Start with N = 240 intervals. Within each small interval,

Pr(a=1 call) ~= 1% (i.e. 2.4/240)

Pr(a=0) = 99%

Pr(a>1) ~= 0%. This approximation is more realistic as N approaches infinity.

The 240 intervals are like 240 independent (unfair) coin flips. Therefore,

Let X=total number of calls in the period. Then as an example

Pr(X = 3 calls) = 240-choose-3 * 1%^{3} * 99%^{237}. As N increases from 240 to infinite number of tiny intervals,

Pr(X = 3) = exp(-2.4)2.4^{3}/ 3! or more generically

Pr(X = x) = exp(-2.4)2.4^{x}/ x!

Incidentally, there’s an **exponential distribution** underneath/within/at the heart of the **Poisson Process** (I didn’t say Poisson Distro). The “how-long-till-next-occurrence” random variable (denoted T) has an exponential distribution whereby Pr (T > 0.5 days) = exp(-2.4*.5). In contrast to the discrete nature of the Poisson variable, T is a continuous RV with a PDF curve (rather than a histogram). This T variable is rather important in financial math, well covered in the U@C Sep review.

For a credit default model with a constant hazard rate, I think this expo distribution applies. See other posts.
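Both the binomial-to-Poisson limit and the exponential waiting-time claim can be verified numerically, reusing the 2.4 calls/day numbers (stdlib sketch):

```python
import math

r = 2.4                       # mean calls per day

def binom_pr_3(n):
    """Pr(3 calls) when the day is split into n intervals, each with p = r/n."""
    p = r / n
    return math.comb(n, 3) * p**3 * (1 - p)**(n - 3)

poisson_pr_3 = math.exp(-r) * r**3 / math.factorial(3)
print(binom_pr_3(240), binom_pr_3(240_000), poisson_pr_3)
# the binomial value approaches exp(-2.4) 2.4^3 / 3! as n grows

# waiting time until the next call: Pr(T > 0.5 days) = exp(-2.4 * 0.5)
pr_wait = math.exp(-r * 0.5)
print(round(pr_wait, 4))      # 0.3012
```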

# SLOOB locations of xap files – win7winxp

C:\Users\a5xxxxxx\AppData\Local\Microsoft\Silverlight\OutOfBrowser – win7

C:\Documents and Settings\A5XXXXXX\Local Settings\Application Data\Microsoft\Silverlight\OutOfBrowser – winxp

# matlab | append to an array

b_mkt(end+1)=1

# matlab | iterate over an array

for elm = list % note: this iterates over the COLUMNS of list, so list should be a row vector

%# do something with the element

end

# matlab | index of max item in an array

[max_value, index] = max(r2);

# matlab | str2double converts array of strings

str2double accepts a cell array of strings and converts element-wise. cell2mat seems to be for another purpose – concatenating a cell array’s contents into an ordinary matrix.

# Re: HW6 Q2.3b regress +! intercept@@

# collateralized 100% financing on a treasury trade

Develop instincts with these concepts and numbers — common knowledge on IR trading desks. P10 of the LTCM case has an example on Treasury trading with repo financing.

Most buy-side shops work hard to get 100% collateralized financing. Goal = avoid locking up own capital. 100% means buying $100m T bond and immediately pledge it for repo and use the borrowed $100m for the bond purchase. If only $99m cash borrowed (1% haircut), then LTCM must commit $1m of own capital, a.k.a. $1m “equity investment”.

P14 explains that many buyers choose overnight and short term repo, but LTCM chose 6-12M term repo, where the repo rate is likely higher.

LTCM managed to keep most of the $6.7b capital in liquid cash, generating about 5% interest income annually. This $350m interest adds almost 50% on top of the average $750m trading profit annually.
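The haircut arithmetic, as a sketch (illustrative figures from the example above):

```python
# Haircut arithmetic from the repo-financing example (illustrative figures).
bond_purchase = 100_000_000          # buy $100m of T bonds
haircut = 0.01                       # 1% haircut on the repo

cash_borrowed = bond_purchase * (1 - haircut)
own_capital = bond_purchase - cash_borrowed    # the "equity investment"
print(f"borrowed ${cash_borrowed:,.0f}, own capital ${own_capital:,.0f}")
# with a 0% haircut (100% financing) the trade locks up no capital at all
```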

# how many job interviews Focused on financial analytics questions

All of these jobs require more than 90% programming.

OC

Platimus

Baml STIRT

Barclays (Neresh) vol fitter

— less heavy focus

pimco bond position overnight risk

Citi Changi equity derivative

MS IRD trading desk team

# ways to reduce java OOM@@

- Allocate large native buffer, such as NIO buffers.
- reuse objects in my own object pool, usually (circular) arrays
**homemade ringBuffer@pre-allocated objects to preempt JGC**

Q: Increase OS swap file size?

Q: add RAM?

# 4th data source to a yield curve – year-end Turn

See http://www.jonathankinlay.com/Articles/Yield%20Curve%20Construction%20Models.pdf for more details.

The year-end turn of the yield curve is defined as the sudden jump in yields at the change of the year. This usually happens at the end of the calendar year, reflecting increased market activity related to year-end portfolio adjustments and hedging activity. When there is a year turn(s), two discount curves are constructed: one for turn discount factors and one for the discount factors calculated from the input instruments after adjustments; the discount factor at any time is the product of the two.

# trading swap spread – LTCM case (with wrong intuitions)

See Harvard Biz School case study 9-200-007 on LTCM. I feel this is a good simple scenario to develop analytic instinct/intuition about IRS.

I believe USD swap spread is similar to the TED (which is 3M). A very narrow TED means on the lower side T (i.e. treasury) yield too high and on the upper side fwd Libor too low.

T yield too high means T bonds too cheap. Therefore, LTCM would BUY T bonds.

Expected series of Libor cashflow is too low, so the equivalent fixed leg is also too low. Therefore LTCM would PAY the fixed rate. The par swap rate is the price you lock in today, which buys you the Libor stream, which you believe to be rising.

In the orange case, you as a Libor/orange buyer lock in a price today and you expect the oranges to get bigger soon.

For a 10Y swap, we are saying the forward Libor rates over 3->6M, 6->9M, … 117->120M are too low and may rise tomorrow. There are many wrong ways to interpret this view.

correct – Since the floating income will rise, we would want to receive those future Libor interests.

correct – We can think of the floating leg as a hen giving eggs periodically. The market now forecasts small eggs, but LTCM feels those eggs will be bigger, so the hen is undervalued. So LTCM buys the hen by paying the (low) fixed rate.

# trading swap spread – LTCM case, again

Here’s a simpler way to look at it. When the swap spread is too narrow, T yield is too high and swap fixed rate is too low. …. (1)

Key – use a par bond as a simple (but not simplistic) example, so its yield equals its coupon interest rate.

Now we can rephrase (1) as — T bond interest too high and swap fixed rate too low, and they are going to widen. Now it’s obvious we should Buy to receive T interest (too high). And we pay the swap fixed rate (too low), and consequently receive Libor.

When we say “swap rate is too low and is likely to rise in 4 months”, I think we are predicting a “rise” in Libor. The swap rate is like a barometer of the Libor market and the Libor yield curve.

A simple “rise” is a parallel shift of the Libor yield curve. A less trivial “rise” would involve a tilt. Rise doesn’t mean upward sloping though.

It’s rather useful to develop instinct and intuition like this.

# some IKM java questions

Q:

Base aa = new Sub(); // both Base and Sub define a STATIC method m1()

aa.m1(); // calls Base.m1() – static calls are resolved at compile time by the declared type

((Sub) aa).m1(); // calls Sub.m1(); note the parentheses – (Sub)aa.m1() would parse as a cast of m1()’s return value

Q: Deep copy a java array?

– clone()?

– serialization?

Q: a base class method without access modifier is callable by subclass?

I think java default method access level is “package, not subclass”. In contrast, c# (and c++) default is private — http://msdn.microsoft.com/en-us/library/ms173121.aspx.

Q: if interface1 declares method2 returning Set, can an implementing class’s method return SortedSet?

# benchmark a static factor model against CAPM

http://bigblog.tanbin.com/2014/04/risk-premium-clarified.html explains …

Let me put my conclusion up front — now I feel these factor models are an economist's answer to the big mystery “why some securities have consistently higher excess return than other securities.” I assume this pattern is clear when we look long term like decades. I feel in this context the key assumption is iid, so we are talking about steady-state — All the betas are assumed time-invariant at least during a 5Y observation window.

There are many steady-state factor models including the Fama/French model.

Q: why do we say one model is better than another (which is often the CAPM, the base model)?

1) I think a simple benchmark is the month-to-month variation. A good factor model would “explain” most of the month-to-month variations. We first pick a relatively long period like 5 years. We basically “confine” ourselves into some 5Y historical window like 1990 to 1995. (Over another 5Y window, the betas are likely different.)

We then pick some security to *explain*. It could be a portfolio or some index of an asset class.

We use historical data to calibrate the 4 betas (assuming 4 factors). These beta numbers are assumed steady-state during the 5Y. The time-varying (volatile) factor values combined with the time-invariant betas give a model estimate of the month-to-month returns. Does the estimate match the actual returns? If it’s a good match, we say the model “explains” most of the month-to-month variation. Such a model is very useful for hedging and risk management.
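Benchmark 1 can be sketched with a toy one-factor calibration (all data fabricated; a real model would use 4 factors and actual returns):

```python
# Toy benchmark-1 sketch: one factor instead of 4, fabricated monthly data.
import random

random.seed(0)
months = 60                                     # a 5Y window
factor = [random.gauss(0.005, 0.04) for _ in range(months)]
true_beta, true_alpha = 1.3, 0.001
returns = [true_alpha + true_beta * f + random.gauss(0, 0.01) for f in factor]

# calibrate a steady-state beta over the window (OLS: beta = cov/var)
mf = sum(factor) / months
mr = sum(returns) / months
beta_hat = (sum((f - mf) * (x - mr) for f, x in zip(factor, returns))
            / sum((f - mf) ** 2 for f in factor))
alpha_hat = mr - beta_hat * mf

# how much month-to-month variation does the model "explain"? (R^2)
fitted = [alpha_hat + beta_hat * f for f in factor]
ss_res = sum((x - y) ** 2 for x, y in zip(returns, fitted))
ss_tot = sum((x - mr) ** 2 for x in returns)
r2 = 1 - ss_res / ss_tot
print(f"beta_hat={beta_hat:.2f}  R^2={r2:.2f}")   # high R^2 = good "explanation"
```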

2) A second benchmark is less intuitive. Here, we check how accurate the 2 models are at “explaining” _steady_state_ average return.

Mark Hendricks' Econs HW2 used GDP, recession and corp profits as 3 factors (without the market factor) to explain some portfolios' returns. We basically use the 5Y average data (not month-to-month) combined with the steady-state betas to come up with a 5Y average return on a portfolio (a single number), and compare this number to the portfolio's actual return. If the average return matches well, then we say … “good pricing capability”!

I feel this is an economist's tool, not a fund manager's tool. Each target portfolio is probably a broad asset class. The beta_GDP is different for each asset class.

Suppose GDP+recession+corpProfit prove to be a good “pricing model”, then we could use various economic data to forecast GDP etc, knowing that a confident forecast of this GDP “factor” would give us a confident forecast of the return in that asset class. This would help macro funds like GMO making asset allocation decisions.

In practice, to benchmark this “pricing quality”, we need a sample size. Typically we compare the 2 models' pricing errors across various asset classes and over various periods.

When people say that in a given model (like UIP) a certain risk (like uncertainty in FX rate movement) is not priced, it means this factor model doesn't include this factor. I guess you can say beta for this factor is hardcoded to 0.

# backfill bias n survivorship bias, briefly

based on http://oyc.yale.edu/sites/default/files/midterm_exam1_solutions.pdf —

A hedge fund index has a daily NAV based on the weighted average NAV of the constituent funds. If today we discover some data error in the 1999 NAV, we the index provider are allowed to correct that historical data. Immediately, many performance stats would be affected and need updating. Such a data error is rare (I just made it up for illustration). This procedure happens only in special scenarios like the 2 below.

Survivorship bias: when a fund is dropped from an index, past values of the index are adjusted to remove that fund's past data.

Backfill bias: For example, if a new fund has been in business for two years at the time it is added to the index, past index values are adjusted for those two years. Suppose the index return over the last 2 years was 33%, based on weighted average of 200 funds. Now this new fund is likely more successful than average. Suppose its 2Y return is 220%. Even though this new fund has a small weight in the index, including it would undoubtedly boost the 2Y index return – a welcome “adjustment”.

While backfilling is obviously a questionable practice, it is also quite understandable. When an index provider first launches an index, they have an understandable desire to go back and construct the index for the preceding few years. If you look at time series of hedge fund index performance data, you will often note that indexes have very strong performance in the first few years, and this may be due to backfilling.
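The boost from backfilling is simple to quantify (a sketch assuming equal weights, which the post doesn't specify):

```python
# Backfill arithmetic with the post's numbers (equal weights assumed).
n_funds = 200
index_2y = 0.33            # index 2Y return before backfilling
new_fund_2y = 2.20         # the new fund's (impressive) 2Y return

adjusted = (index_2y * n_funds + new_fund_2y) / (n_funds + 1)
print(f"index 2Y return after backfilling: {adjusted:.2%}")
# the published history quietly improves from 33.00% to ~33.93%
```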

# extreme long-short allocations by MV optimizer

This is stressed over and again in my MV optimization course…

Suppose we have only 2 securities with high correlation.

Often one of them (AA) has a slightly higher Sharpe ratio than the other. The optimizer would go long a few hundred percent (say 300%) on it, and short 200% on the other (BB). These allocation weights add up to 100%.

If we tweak the historical mean returns a bit so AA’s Sharpe ratio becomes slightly below BB’s, then the optimizer would recommend going deep short AA and deep long BB.

This is a common illustration of the over-sensitivity and instability of MV allocation algorithm. In each case, the optimization goal is maximum Sharpe ratio of the portfolio. Why Sharpe? MV
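The flip can be reproduced with the closed-form tangency weights w ∝ Σ⁻¹(μ − rf); all numbers below are made up to mimic the two-asset story above:

```python
# Tangency (max-Sharpe) weights for 2 assets: w ∝ Σ⁻¹(μ − rf). Numbers made up.
rf, vol, corr = 0.02, 0.15, 0.95
v = vol * vol
Sigma = [[v, corr * v], [corr * v, v]]

def tangency_weights(mu):
    ex = [mu[0] - rf, mu[1] - rf]                      # excess returns
    det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
    w = [(Sigma[1][1] * ex[0] - Sigma[0][1] * ex[1]) / det,   # 2x2 inverse, by hand
         (Sigma[0][0] * ex[1] - Sigma[1][0] * ex[0]) / det]
    s = w[0] + w[1]
    return [w[0] / s, w[1] / s]                        # weights sum to 100%

print(tangency_weights([0.10, 0.09]))   # AA slightly better: ~[1.8, -0.8]
print(tangency_weights([0.09, 0.10]))   # tiny tweak flips it: ~[-0.8, 1.8]
```

With correlation 0.95, a 1% difference in mean return produces a 180%/-80% portfolio, and swapping the means mirrors the weights exactly.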

# difference – discount factor ^ (Libor,fwd,spot…)rates

Discount factor is close to 1.0, but all the rates are annualized and usually between 0.1% ~ 8%.

This simple fact is often lost in the abstract math notations. When I get a long formula with lots of discount factors, forward rates, (forward) Libor rates, floating payments, future fixed payments… I often substitute typical numbers into the formula.

Also, due to annualizing, the rate numbers for overnight vs long tenors (like 1Y) are similar, at least the same order of magnitude.
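A sketch of the typical magnitudes (using continuous compounding for simplicity):

```python
import math

r = 0.03                          # a typical annualized rate
for t in (1 / 365, 0.25, 1.0):    # overnight, 3M, 1Y
    df = math.exp(-r * t)         # continuous compounding, for simplicity
    print(f"t={t:.4f}y  rate={r:.1%}  df={df:.5f}")
# the annualized rate is ~3% at every tenor, while the df stays close to 1.0
```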

# 3 types – pricing curves (family video…

# normal variable to lognormal variable

The log of the random variable is normal.

Q: given a LogNormal variable H, how do I generate a normal variable?

A: take the log of that variable, i.e. log(H) ~ Normal()

Q: Now given a Normal variable Z ~ Normal(), how do I generate a lognormal variable?

A: exp(Z) ~ LN
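A quick Monte-Carlo sanity check of the two transforms (a sketch; sample sizes arbitrary):

```python
import math, random, statistics

random.seed(1)
z = [random.gauss(0.0, 1.0) for _ in range(50_000)]   # Z ~ Normal(0, 1)
h = [math.exp(x) for x in z]                          # H = exp(Z) is lognormal
back = [math.log(x) for x in h]                       # log(H) is normal again

print(statistics.mean(back), statistics.stdev(back))  # ~0 and ~1
```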

# subverting kernel’s resource-allocation – a few eg

[[Linux sys programming]] explains several techniques to subvert OS resource-allocation decisions. Relevant to performance-critical apps.

P275 mlockall() / mlock() — prevent paging; valuable for real-time apps.

P173 sched_setaffinity(). A strongly cache-sensitive app could benefit from hard affinity (stronger than the default soft affinity), which prevents the kernel scheduler migrating the process to another processor.

[[The Linux Programmer’s Toolbox]]. A write-only variable will be removed by the compiler’s optimizer, but such a variable could be useful to a debugger. I read somewhere that you can mark it volatile — subversive.

Any way to prevent “my” data or instructions leaving the L1/L2 cache?

Any way to stay “permanently” in the driver’s seat, subverting the thread scheduler’s time-slicing?

# Towards expiration, how option greek graphs morph

Each curve is a range-of-possibility curve, since the x-axis is the (possible range of) current underlier price.

— the option price curve

As expiration approaches, …

the curve descends closer to the kinked hockey-stick payout diagram

— the delta curve

As expiration approaches, …

the climb (for the call) becomes more abrupt.

See diagram in http://www.saurabh.com/Site/Writings_files/qf301_greeks_small.pdf

— the gamma curve

As expiration approaches, …

the “bell” curve is squeezed towards the center (ATM) so the peak rises, but the 2 tails drop

— the vega curve

As expiration approaches, …

the “bell” curve descends, in a parallel shift
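The gamma and vega statements can be checked with the closed-form Black-Scholes ATM greeks (a sketch assuming zero rates):

```python
import math

def phi(x):                        # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def atm_gamma_vega(S, sigma, T):
    """Black-Scholes ATM gamma and vega, assuming zero rates (a simplification)."""
    d1 = 0.5 * sigma * math.sqrt(T)          # K = S, r = 0
    gamma = phi(d1) / (S * sigma * math.sqrt(T))
    vega = S * phi(d1) * math.sqrt(T)
    return gamma, vega

g_far, v_far = atm_gamma_vega(100, 0.2, 0.5)      # 6 months to expiry
g_near, v_near = atm_gamma_vega(100, 0.2, 0.05)   # ~2.5 weeks to expiry

print(g_near > g_far)    # True -- the ATM gamma peak rises as expiry nears
print(v_near < v_far)    # True -- the ATM vega descends as expiry nears
```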

# python template method; parent depending on child

Background — classic template method pattern basically sets up a base class dependency on (by calling) a subclass method, provided the method is abstract in base class.

Example — doHtml(), doParameters() and doProperties() methods are abstract in the base EMPanel class.

1) Python pushes the pattern further, when method can be completely _undeclared_ in base class. See runCommand() in example on P222 [[Programming Python]].

* When you look at the base class in isolation, you don’t know what self.runCommand() binds to. It turned out it’s declared only in subclass.

2) Python pushes the pattern stillllll further — _undeclared_ **fields** can be used in the base class. The self.menu thing looks like a data field but is undeclared. Well, it’s declared in a subclass!

3) I have yet to try a simple example but multiple sources [3] say python pushes the pattern yeeeet further, when a method can be invoked without declaring it in any class — if it’s declared in an Instance. That instance effectively is an instance of an anonymous subclass (Java!).

* There’s no compiler to please! At run time, python can “search” in instance and subclass scopes, using a **turbo-charged template-method search engine**.

In conclusion, at creation time a python base class can freely reference any field or method even if base class doesn’t include them in its member-listing.

[3] P96 [[ref]]
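Points (1) and (3) in a minimal sketch (names like run_command are made up for illustration, not taken from [[Programming Python]]):

```python
import types

class Base:
    def execute(self):                           # the "template method"
        return "result: " + self.run_command()   # run_command is undeclared in Base!

class Sub(Base):
    def run_command(self):                       # declared only in the subclass
        return "ls -l"

print(Sub().execute())                           # result: ls -l

# point (3): the method can even live on the *instance* alone --
# effectively an instance of an anonymous subclass
obj = Base()
obj.run_command = types.MethodType(lambda self: "pwd", obj)
print(obj.execute())                             # result: pwd
```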

# FX vs IR trading desks, briefly

Now I know that in a large sell-side, FX trading is “owned” by 2 desks – the “cash” FX desk and the IR desk. Typically, anything beyond 3 months is owned by the Interest Rate desk (eg STIRT). It seems that these FX instruments have more in common with interest rate products and less in common with FX spot. They are sensitive to interest rates of the 2 currencies.

In one extreme case, every fx forward (outright?) deal is executed as an FX spot trade + an FX swap contract. The FX swap is managed by the interest rate desk.

FX vol is a 3rd category, a totally different category.

# y use OIS instead of Libor discounting — random notes

Cash-flow discounting (to Present Value) should use a short rate, “instantaneously short”, ideally a risk-free rate, which is theoretical. In reality, there are various candidates —

Candidate: treasury bill rate. The rate is artificially low due to tax benefit leading to over-demand, higher price and lower yield. There are other reasons explained in ….

Candidate: Libor. In recent years, Libor rates are less stable compared to OIS. Libor is also subject to manipulation — the scandals. OIS is actual transaction rate, harder to manipulate.

Q: why OIS wasn’t chosen in the past?

%%A: not as actively traded (and influential) as Libor

# Modified duration^Macaulay duration, briefly again

The nice pivot diagram on http://en.wikipedia.org/wiki/Bond_duration is for Macaulay duration — dollar-weighted average maturity. Zero bond has duration equal to its maturity. (I think many textbooks use this diagram because it’s a good approximation to MD.)

The all-important sensitivity to yield is …. MD i.e. modified duration. Dv01 is related to MD (not Macaulay) — http://bigblog.tanbin.com/2012/05/bond-duration-absolute-1-relative-x.html

MD is the useful measure. It turns out MD differs from Macaulay duration only by a small factor: MD = Macaulay / (1 + y/k), where y is the yield and k the number of compounding periods per year.
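A sketch with a 5Y annual-coupon par bond (illustrative numbers):

```python
# 5Y annual-coupon par bond: coupon = yield = 5% (illustrative numbers).
y, n, face = 0.05, 5, 100.0
cfs = [(t, y * face + (face if t == n else 0.0)) for t in range(1, n + 1)]

price = sum(cf / (1 + y) ** t for t, cf in cfs)                 # = 100 for a par bond
macaulay = sum(t * cf / (1 + y) ** t for t, cf in cfs) / price  # dollar-weighted avg maturity
modified = macaulay / (1 + y)        # the "small factor", annual compounding

print(f"Macaulay={macaulay:.4f}y  Modified={modified:.4f}y")
```

For a zero bond the cashflow list collapses to one entry, so Macaulay duration equals the maturity, as the pivot diagram shows.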

# risk premium – dynamic factor models

# risk premium – static factor models

# strace, ltrace, truss, oprofile, gprof – random notes

# vbscript can …

For localized/sandbox tasks like file processing or DB, xml…, perl and python are nice, but I feel vbscript is the dominant and standard choice for Windows system automation. Vbscript integrates better into Windows. In contrast, on Linux/Unix, python and perl aren’t stigmatized as 2nd-class citizens.

— the following are based on [[automating windows administration]] —

access registry

connect to exchange/outlook to send/receive mails

regex

user account operations

**query group membership

**query Active Directory

**CRUD

file operations

** size

** delete folder

** read the version of a (DLL, EXE) file

** recursively find all files meeting a (size, mtime, atime..) criteria

** write into a text file

# matlab | swap x / y axes

x = 0:.01:pi ; plot(x,sin(x),'b-') ; % example plot

view(-90,90) % rotate the axes so x and y swap places

set(gca,'ydir','reverse') % then flip the (new) y direction

# /proc/{pid}/ useful content

Based on [[John Fusco]] —

./cmdline is a text file holding the command line that launched the process (arguments are NUL-separated)

./cwd is a symlink to the current working dir of the process

./environ is a text file showing the process’s env vars

./fd/ holds symlinks to the file descriptors, including sockets

./maps is a text file showing the user-space memory mappings of the process

./smaps is like ./maps but with detailed memory stats per mapping, including the shared libs used by the process

./status is a more human-readable text file with many process details
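A small Python sketch reading a few of these entries (the entry names are the real /proc ones; returns None on systems without /proc):

```python
import os

def proc_info(pid="self"):
    """Peek at a few /proc entries; returns None on systems without /proc."""
    base = f"/proc/{pid}"
    if not os.path.isdir(base):
        return None
    with open(f"{base}/cmdline", "rb") as f:
        argv = f.read().split(b"\0")[:-1]     # NUL-separated argv
    return {
        "argv": argv,
        "cwd": os.readlink(f"{base}/cwd"),    # symlink to the working dir
        "num_fds": len(os.listdir(f"{base}/fd")),
    }

print(proc_info())
```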