# keep learning 活到老学到老 ("never too old to learn")


# Author: old_post_owner

# narrow/wide-character iostreams in c++

# for YuJia – parameter optimization for the strategy

# basic order book data structure

# credit risk analysis in Singapore


The "c" in cin/cout/cerr/clog stands for "character".

There are also wide-character versions, prefixed with "w": wcin, wcout, wcerr and wclog.

The SP strategy has 2 parameters to optimize: g and j. Very briefly, we monitor the spread Price(futures) - 10 x Price(ETF). When the spread exceeds g (a positive param), we buy the ETF and sell the futures, remaining market-neutral. When the spread drops below j (a positive or negative param), we close all positions and exit. The goal is to find the values of j and g that produce the highest return on investment.

The matlab fminsearch and fmincon optimizers could find only a local minimum near the initial input values. If our initial values are (j = -1, g = 3), the optimizers try j values such as -0.95 and -1.05. Even though they try dozens of combinations, they never try far-away values like -5 or +5.

Therefore these standard optimizers are limited in their search range. Taking inspiration from a sample solution to Homework 3, we came up with our own optimizer, which I call randshock. It is very fast and fairly stable.

```matlab
% randshock optimizer. Assumes mu = [j g] is initialized by the caller;
% the mu_yj = mu line here is an added initialization so that mu_yj is
% defined even if the very first try fails to beat the yellow jersey.
unbeatenTimesMax = 1000;
yellowJersey = 0;
unbeatenTimes = 0;
mu_yj = mu;  % current champion params

while (unbeatenTimes < unbeatenTimesMax)
    Target = -getMetric(mu);
    if Target > yellowJersey
        unbeatenTimes = 0;
        unbeatenHistory = nan(unbeatenTimesMax, 1);
        yellowJersey = Target;
        mu_yj = mu;
        fprintf('!\n');
    else
        unbeatenTimes = unbeatenTimes + 1;
        unbeatenHistory(unbeatenTimes) = Target;
        fprintf('\n');
    end
    power = 2*log(5)*rand(1,1) - log(5);  % uniform in [-ln 5, +ln 5]
    multiplier = exp(power);              % log-uniform in [0.2, 5]
    % hist(multiplier, 100)
    g = mu_yj(2) * multiplier;
    gap = g * (0.5 + rand);
    j = g - gap;
    mu = [j g];
end
```

The entire optimizer is only about 20 lines long, and can be easily tested by selecting ENV.MODE_OPTIMIZE. It gives a random "shock" to the input params, evaluates the result, then repeats. If any result is better than ALL previous results, that result gets the yellow jersey. We declare it the final champion only after it survives 1000 random shocks.

The random shock is designed to be big enough to reach far-away values, yet sufficiently "local" to cover many values around the current params, i.e. the current yellow jersey.

Without loss of generality, we will focus on the most important parameter, "g". The shock is implemented as a random multiplier between 0.2 and 5. This means the randshock optimizer has a good chance (1000 tries) of reaching a value 80% below the yellow jersey or 5 times above it. If no value between these two extremes outperforms the yellow jersey, there is little point exploring further afield.

Between these 2 extremes, we randomly try 1000 different multipliers, applying each multiplier to the current "g" value. If none of them beats the yellow jersey, we conclude the optimization.

If any one of the 1000 tries is found to outperform the yellow jersey, then it gets the yellow jersey, and we reset the “unbeatenTimes” counter, and give 1000 random shocks to the new yellow jersey.

In theory this process could continue indefinitely but in practice we always found a champion within 2000 shocks, which takes a few minutes only.

Using the randshock optimizer we found optimal param values, which are the basis of all the analysis on the strategy. Next let us look at the optimal parameter values.

The optimal values are g=6.05 and j= -2.96. ROI = $194k from initial seed capital of $1k. In other words, $1k grew almost 200 times over the InSample period (about 10 years).

Using these two optimized parameter values, the OutSample period (from late 2009 till now) generated a profit of $7576, from initial seed capital of $1k. The return is not as “fast” as InSample, but still the optimized values are usable and profitable.

Requirement:

An order book quickly adds/removes order objects against a sorted data structure. Orders are sorted by price and, within the same price, by sequence #. Both price and seq are represented by integers, not floats. A sequence # is never reused. If, due to technical reasons, the same seq# comes in again, the data structure should discard it.

—————————————————-

This calls for a sorted set, not a multiset. A map would be less memory-efficient.

You can instantiate an STL set with a custom comparator functor.

The overloaded () operator should return a bool, where true means A is less than B, and false means A >= B. Note that equality must also return false, as pointed out in [[effective stl]]. This is important for duplicate detection. Suppose we get 2 duplicate order objects with different timestamps. The system compares price, then compares sequence #. It sees that A is NOT less than B, and B is NOT less than A, thanks to the false return values. The system correctly concludes the two are equivalent and discards the duplicate.

(a blog post. Your comments are appreciated.)

To me, credit risk is all about default risk. There's a whole industry around the rating, measurement/analysis, monitoring, hedging and control of default risk. As such, credit risk is relevant to both investment banking (buy/sell, underwriting, M&A etc) and commercial banking (i.e. lending), but how relevant? I feel credit risk is one of many components of market risk in investment banking, but it is absolutely central to commercial banking.

For the Singapore financial industry, commercial banking generates (much) larger revenue than i-banking, and is a far more important industry to the national economy. Most S'pore businesses need to borrow from banks.

I guess credit risk analysis is more important than market risk analysis in S'pore.