c++ reference variable is like …. in java

Q: a c++ reference is like a ….. in java?
A: depends (but i'd say it's not like anything in java.)

A1: For a primitive type like int or char, a c++ reference variable is like an Integer variable in java. Assignment to the reference is like calling a setValue(), though Integer.java doesn't actually have a setValue().

A2: For a class type like Trade, a c++ reference is like nothing in java. When you do refVar.member3, the reference variable acts just like a java variable, but what if you do

Trade & refVar = someTrade; //initialize
refVar = …; //?

The java programmer falls off her chair — this assignment would call the implicit op= on the referent:

refVar.operator=(….)

##Just what is (and is not) part of Architecture (too-late-to-change)

Dogfight – When you develop a dogfight jet over 30 years, you stick to your initial design. You try to build on the strengths of your design and protect the weaknesses. You don’t suddenly abandon your design and adopt your rival’s design because you would be years behind on that path. Software architecture is similar.

Anyone with a few years’ experience can specify an architecture in some detail, but those decisions are flexible and can be modified later on, so they aren’t make-or-break decisions, though changing them could waste time.

Q1: what Features define a given architecture? I’m a fellow architect. If you must describe in 10 phrases your system architecture and provide maximum Insight, which 10 aspects would you focus on?

Q2: (another side of the same coin) What technical Decisions must be made early when creating a system from scratch?
Q2b: in other words, what design decisions will be expensive to change later? Why expensive?

Disclaimer — In this context, “later” means somewhere during those pre-UAT phases, not LATE into the release process. Obviously after release any config/DB/code change requires regression test and impact analysis, but that’s not our topic here.

A colleague of mine liked to discuss in terms of layering. In OO architecture, a layer consists of a bunch of similar classes (or interfaces) dedicated to a logically coherent high-level task. 2 interacting layers can exist in the same OS process or (less common but very critical) across processes. Intra-process layering basically means the 2 interacting classes (belonging to 2 layers) have well-defined responsibilities and a well-defined contract between them, where one is always a Service-provider (“dependency”) to the service-consumer. Though layering is popular among architects, I’m not sure it captures the most important architectural features.

Here are my answers to the question “Is the decision costly to change later in SDLC?”

? anything in the wsdl/schema of a web service? yes if i’m not the only developer of this web service’s clients. A change would require those fellow programmers to change their code and then test, test, test
? fundamental data constraints like “if this is positive then that must be non-null”? Yes because other developers would assume that and design code flows accordingly.
? database schema, the ER between a bunch of tables? Yes since it usually requires code change “upstairs”. DB schema is usually a Foundation layer, often designed up front.
? choose a DB vendor? yes if you happen to have truly skillful database developers in the team. They will use proprietary vendor extensions behind the manager’s back when the temptation is too strong. Such non-portable business logic will be expensive to re-implement in another database.
? use MOM or RPC-style communication? yes
? Serialization format? yes. “Later” means implementation effort wasted
? XML or binary or free text format? yes probably
? make a bunch of objects immutable/read-only/const? yes. Adding this feature later can be hard. Look into C++ literature.
? shared vs unshared cache? yes. effort is non-trivial
? What logic to put into SQL vs java/C#? sometimes No. You can change things later on, but usually wasting sizable implementation effort
? Choose programming language? Yes. Implementation effort wasted if switching late in the game.
? make a batch task re-runnable? yes. Adding this feature later on can be hard

? MVC (the highest-level design pattern there is)? well… yes, though you can adjust this design later on
? where to implement authentication, entitlement? Many say yes, but I feel no. Make the damn system work first, then worry about securing it. On fast-paced trading floor, Business usually can tolerate this feature being not so perfect. Business and upper management worry more about ….. You got it — Functionality.

? Which commercial (or free) product to choose? sometimes NO. You can use mysql or activemq and then change to commercial products. In fast projects, we can often delay this decision. Start coding without this particular commercial product.

? make a module thread safe? no, even though adding this feature can be hard
? how many staging tables? no
? choose a decorator/factory/singleton (or any of the classic) design patterns? no. They are usually too low-level to be among the “10 architectural features”. I don’t know why so many architect interview questions hit this area.

c# abstract class – a few language rules

Common IV topic…

[j] abstract class with 0 abstract member is perfectly fine in c#/java. http://stackoverflow.com/questions/2999944/why-does-c-sharp-allow-abstract-class-with-no-abstract-members

[j] if you simply add “abstract” to an existing (concrete) class C, then compiler will no longer allow new C()  [1]

[j] abstract class C is perfectly free to define a ctor, but it can only be invoked from a subclass ctor, not by new C() 

[j] abstract sealed/final class? won’t compile.

[j] Interface methods are always public. Abstract methods in abstract class are not.

[j=100% identical to java]

PAC (Pure-Abstract Class) is an extremely useful technique in c++, but in c#/java it is an obscure cousin of “interface”. Given an interface MyType in source code, you could replace the word “interface” with “abstract class”. Now a subtype must Singly inherit from MyType as its single parent. I feel the only justification is future-proofing i.e. enforcing that subtypes don’t inherit from another non-interface. This is deliberate self-crippling or self-constraint, just like [1]

Another (obscure) way to get a PAC — remove all concrete methods from a regular class. Unlike an interface, such a PAC could have fields and protected abstract methods.

specializing class templates (C++) #1 rule

Say you have Account objects to put into a Sorter class template. Generic Sorter was written for built-in types and uses operator-less-than (the left-arrowhead) in compare(), sort(), search() etc. However, Account class offers this->compareBalance() method and you want to use it in Account sorter. One solution is template specialization. Specifically, a specialization of the Sorter class template.

So You provide a custom definition of method compare(). How about other Sorter methods? There’s nothing to change in them so can we somehow “default” them to the Generic Sorter template?

Answer — no. As soon as you specialize the Sorter template for Account, your specialized template is disconnected from the original Generic Sorter template. Umbilical cord is cut once and for all. As described on (lower) P859 [[c++primer]], You must re-implement every public method. This is the Rule #1 in template specialization.

That was a regular/standard specialization. How about a partial specialization? Same answer.

Q: is the so-called specialized template a true template or just a regular class?
A: template. Difference is clear to the compiler. Syntax is vastly different[1]. A fully specialized template ironically has no dummy type, but it is technically still a “template” for creating classes[2], just like a document template. Only when you Instantiate the template with Account type, does the compiler actually create the real class. When given an Account type, the compiler will choose the specialized template over the generic or “default” template.

Here’s the paradox — in normal circumstances, a “document template” is meaningful when some section has dummy content. For example, a report template would have a dummy date, dummy author, dummy intro/summary, and dummy title. In a fully specialized template though, there’s nothing dummy, so is it a meaningful template? However, compiler still treats it technically as a template not a real class.

A partial specialization is like a report template with the dummy author replaced with a real author “hard-coded”. It is still meaningful as a template since it has *some* dummy content.

[1] template<> class Sorter<Account> {/*…*/}; //note the syntax – “template” followed by the empty diamond <>, then the concrete type after the class name
 no such syntax when you instantiate a template. You instantiate a template simply in a variable declaration!
[2] though a fully specialized template can create no more than one class !

input iterator #& other iterator categories

(It’s probably enough to know when to use each. Internals may be murky, undocumented and trivial…)

Note many of these categories are fake types and mean nothing to the compiler. The compiler knows classes, typedefs, pointers …
Note some of these categories are real types, while others are merely characteristics of a given iterator object….

output iterator vs the outstream? See P5 [[eff STL]]

(see other post for const vs input iterators)

* input iterator is “Rw” iterator ie must let clients read container, may let clients write to container.
* const iterator is R~W iterator as it blocks write-access.
* output iterator is “rW” iterator — it may let clients read container
– many iterator classes [1] are “input-AND-output iterators” ie RW iterators ==> must let clients read and let clients write to container

When you see InputIterator in a template, remember input iterator is (not a type but) a family of types. I guess it could be fake type.

I guess you can visualize each iterator having these 2 groups of boolean flags. First group:
+ R or r
+ W or w or ~W (3 values)
+ const or mutable
++ const is actually part of the declaration. const means “the dereferenced iterator as an lvalue won’t compile” [2]

And 2nd group:
+ F or ~F — forward or no-forward ie forward move unsupported
+ V or ~V — reverse or no-reverse
+ A or ~A — random or no-random

Between the 2 groups, there’s fairly good orthogonality. Within a group however, flags are not really independent
$ const ~W
$ const => R
$ A => F and V flags both set

[1] vector::iterator, but not vector::const_iterator
[2]
for(list<int>::const_iterator myIt = myList.begin(); myIt != myList.end(); ++myIt){
    *myIt += 10; // won’t compile with the const_ prefix
}

When reading function signatures, you will encounter a lot of different iterators, each having these flags. To simplify things, it’s best to focus on one group of flags at a time. Examples:

T a[n] can produce a qq(const T* ) iterator. First, this is a const R ~W. Secondly, it’s A F V

perl bless, briefly

2nd argument to bless is a …. well, a classname and also a package name. Whatever string you put there must be a valid package name (or the current package’s name if omitted). That package name is interpreted as a classname. The new object becomes an instance of that class. “The new object is blessed INTO the class”, as they like to say.

This 2nd argument is fundamental to constructor-inheritance. See P318 [[ programming perl ]]

The referent is often an empty hash. In other words, the reference to bless often points to an empty hash.

In Perl lingo, you can bless a referent or bless a reference, and everyone knows what you mean — no confusion.

Q: why do we need bless when a referent is already a chunk of memory
%%A: a bare reference can’t invoke a method. No inheritance of methods

car racing while holding coffee

I now recall that my manager (Yang?) mentioned (in my first performance review?) that I had a tendency to be tense and look worried, and improved a year later. Now I slowly understand why.

Now I think many people in finance IT have a lower standard on test “coverage”, bugs, code consistency, code smell…

For a few years in GS I felt like a heart surgeon — any (avoidable) logical bug would cost lives and land me in jail for negligence of professional duty. Finance app development is like car racing while holding a cup of coffee. I was too scared of spillage but still had to drive as fast as the others – so tense!

Now I know many colleagues design code less rigorously than what I assumed was the norm elsewhere, and cover only the realistic use cases. If input is unexpected or erroneous, then damn it — “undefined behaviour”. That’s what I see in some quant libraries too. Subtle errors can spread like cancer, undetected.

We relied on recon or other system’s validation to detect (preventable) bad data created by our engine.
Rather than “stop error propagation at the earliest”, we rely on downstreams.

Another reason — GS was my first job on Wall St. I struggled to break into Wall St and I didn’t want to be inadequate and lose self-confidence.

smart pointer casts – using shared_ptr for illustration

Smart pointers are the infrastructure of infrastructures for a quant library.

Minor performance issues become performance critical in this foundation component. One example is the allocation of the reference count. Another example is the concurrent update of the reference count. But here my focus is yet another performance issue — the lowly pointer-cast. (I don’t remember why performance is a problem … but I believe it is.)

C++ supports 4 specific casts, but smart pointers need to -efficiently- implement const_cast and dynamic_cast, in addition to the implicit upcast.

Q1: why can’t I assign a smartPtr-of-int instance to a variable declared as smartPtr of Dog?
A: compiler remembers the type of each pointer variable.

Q2: in that case, should c++ compiler allow me to assign a SP-of-Derived instance to a variable declared as SP of Base?
%%A: I don’t think so. I don’t think every smart pointer automatically supports upcast, but boost shared_ptr does, by design. See below.

For Boost, a Barcap expert said there’s an implicit cast from shared_ptr of T (RHS) to (LHS) shared_ptr of const T. See the special casting methods in http://www.boost.org/doc/libs/1_48_0/libs/smart_ptr/shared_ptr.htm

Specifically, boost says shared_ptr<T> can be implicitly converted to shared_ptr<U> whenever T* can be implicitly converted to U*. This mimics the behavior of raw pointers. In particular, shared_ptr<T> is implicitly convertible to shared_ptr<const T> (i.e. add constness), to shared_ptr<U> where U is a public/protected base of T (i.e. upcast), and to shared_ptr<void>. I believe this is the same “member template” magic in the smart pointer chapter of [[more effC++]]

Allow me to explain the basics — in basic C++, a raw-ptr-to-T instance can be assigned to these variables —
T const *  my_ptr_to_const_T;
U * my_ptr_to_a_T_parent_class_U;
void * my_ptr_to_void;

Therefore a raw ptr supports these 3 (among others) casts implicitly. So does shared_ptr.

c++ method hiding, redefining, overriding – fundamentals

Background — When reading a particular function call in the context of a c++ class hierarchy, we need to identify exactly which function is selected at compile/runtime. In the case of “No match”, we get a compile-time error (never run time?).

Non-trivial. It’s easy to lose focus. Focus on the fundamental principles — only a few.

– Fundamental — override is strict [1]. If overriding, then vtbl dynamic binding. Simple and clear. Otherwise, it’s always, always static binding.
– Fundamental — if static binding, then remember the hiding rule. Per-name basis — see the last item of [[Eff C++]]. As a result, some base class methods become unavailable — compiler errors. [3]
– Fundamental — compiler attempts implicit type conversion on every argument.

Redefining is an important special case of hiding, but fundamentally, it’s plain vanilla function hiding.

It was said that overriding resolution is done “after” hiding. Does that mean the hiding rules kick in first, before the system goes through overriding resolution? In a genuine override, I don’t think hiding kicks in at all.

[1] see http://bigblog.tanbin.com/2011/02/runtime-binding-is-highly-restrictive.html
[3] Fixable with a local using-declaration — Using Defeats Hiding

KDB – phrasebook

Columnar Database
time-series data
memory/disk — in-memory DB and disk too
Both live and historical data
stores databases as ordinary native files.
kdb+ can handle millions of records per second

KDB is optimized for bulk inserts and updates, not just one-transaction-at-a-time. Exchange data typically comes in blocks, and does not have to be written record by record.

–A) real time
live data – uses kdb+TICK, in-memory
millions of “records” per second

–B) historical
back-testing – is a major use case
terabytes of disk — needed for historical. Remember the oneTick market data system at CS?

##[19] Beat Sg2011Search

  • reason: I’m now open to java
  • reason: I’m now open to non-finance
  • reason: I will only work in SG for 1.5 years till 2021 spring, so not that much at stake.
  • reason: my USD salary is much lower.
  • tactic: will accept 140k jobs without complaint
  • tactic: will seriously target a few fertile grounds — FX, mkt data…
  • tactic: will seriously prepare for architect roles
  • tactic: will fly back for job interviews
  • reason: c++/algo body-building showing

A busy live mkt-data GUI table – realistic

1000 records are batched into each chunk and posted to the event queue. Every chunk needs to be displayed on a “main” table. The killer is the re-sort/re-filter, which must happen on the EDT. As a result, the EDT is overwhelmed — when you re-size or move the app, it responds, but is sluggish and leaves an ugly trail.

My suggestions –
* Break up into smaller tables, so sorting is faster.
* Break up into smaller datasets. In fact most users are only interested in a subset of the live data.
* If sort (filter) is the killer….
… adopt a sort-free JTable that efficiently accesses pre-sorted underlying data. Underlying data can be swapped in/out efficiently, and sorted on a “mirror-site” using another thread, possibly a swing worker. Take inspiration from the 2 GC survivor spaces and use 2 identical data structures. One live, the other used as a sort worktable. Swap (by pointer) and fire a table event. However, this increases memory consumption.

We also need to completely disable user-triggered sorts. Use a dialog box to educate users that changing sorting policy is non-trivial, and they should configure and install the new sorting policy and be prepared to wait a while to see its effect.

In general, you can only get any 2 of
E) cell editing
R) real time view of fast changing mkt data
C) full rather than subset of data streams

In mkt data streaming, R alone is often enough — read-only view. Some users need R+C; some need E+R.

columnar DB + time series in-memory DB

(Focus of this write-up is in-memory time series DB, with some comments on columnar DB.)

Time series data have timestamps at fixed intervals. (“Usually fixed” is the correct wording in general purpose definitions of Time Series.) Strictly fixed intervals make system design much cleaner and more efficient.

When a new snapshot (with 9 fields for example) arrives, inserting a single row to disk is efficient for a row-oriented DB. A columnar DB would be slow for such a disk insert, but in-memory it is fine.

For bulk insert/updates however, columnar wins. (Exchange data typically come in bursts.)

My idea — A simplistic and idealistic implementation could use 9 pre-allocated half-empty arrays for that many columns of the table. New snapshot is broken down to the 9 constituents, each inserted to one of the 9 arrays. I’d say each array can (and therefore should) be fixed-width since each element of the “price” array occupies exactly 4 bytes. Such an array is random-access if in-memory. Extreme simplicity begets extreme efficiency. Reconstructing a full snapshot record isn’t that hard given the position in the array — random-access by position hitting all 9 arrays (concurrently?). I feel it’s no slower than (the conventional practice of) storing the entire row record in contiguous memory.

(SecDB includes a regular data store + a time-series data store but not sure of its implementation…)
———-
Columnar DB has many applications besides time-series, such as OLAP/business-intelligence, warehouse. Major implementations include Sybase-IQ, MS-SQL, BigTable, Cassandra, HBase.

Benefit #1 — aggregation — fast aggregate operations (max,avg,count..) over a single column. You need not read all the other irrelevant columns.

Benefit — write — bulk inserts and updates, perhaps concurrently on the 9 arrays.

Benefit — index — easier and faster. “columnar structure for the database simplifies indexing and joins, therefore dramatically speeds search performance”

java/c++ overriding: 8 requirements #CRT

Here’s Another angle to look at runtime binding i.e. dynamic dispatch i.e. virtual function. Note [[effModernC++]] P80 has a list of requirements, but I wrote mine before reading it.

For runtime binding to work its magic (via vptr/vtbl), you must, must, must meet all of these conditions.

  • method must be –> non-static.
  • member must be –> non-field. vtbl offers no runtime resolution of FIELD access. See [[java precisely]]. A frequent IV topic.
  • host object must be accessed via a –> ref/ptr, not a non-ref variable. P163 [[essential c++]]. P209 [[ARM]] explains that with a nonref, the runtime object type is known in advance at compile time, so runtime dispatch would be unnecessary and inefficient.
  • method’s parameter types must be —> an exact copy-paste from parent to subclass. No subsumption allowed in Java [2]. C++ ARM P210 briefly explains why.
  • method is invoked not during ctor/dtor (c++). In contrast, Java/c# ctor can safely call virtual methods, while the base object is under construction and incomplete, and subclass object is uninitialized!
  • method must be –> virtual, so as to hit the vtbl. In Java, all non-static non-final methods are virtual.
  • in c++ the call or the function must NOT be scope-qualified like ptr2->B::virtF() — subversion. See P210 ARM
  • the 2 methods (to choose from) must be defined INSIDE 2 classes in a hierarchy. In contrast, a call to 2 overload methods accepting a B param vs a D param respectively will never be resolved at runtime — no such thing as “argument-based runtime binding”. Even if the argument is a D instance, its declared type (B) is always used to statically resolve the method call. This is the **least-understood** restriction among the restrictions. See http://bigblog.tanbin.com/2010/08/superclass-param-subclass-argument.html

If you miss any one condition, then without run/compile-time warnings the compiler will __silently__ forgo runtime binding and assume you want compile-time binding. The c++11 “override” keyword and java @Override help break the silence by generating compiler errors.

However, return type of the 2 functions can be slightly different (see post on covariant return type). P208 ARM says as of 1990 it was an error for the two to differ in return type only, but [[c++common knowledge]] P100 gives a clear illustration of clone() method i.e. virtual ctor. See also [[more eff C++]] P126. CRT was added in 1998.

[2] equals(SubclassOfObject) is overloading, not overriding. @Override disallowed — remember Kevin Hein’s quiz.

Here’s a somewhat unrelated subtle point. Suppose you have a B extended by C, and a B pointer/ref variable “o” seated at a C object, you won’t get runtime binding in these cases:

– if you have a non-static field f defined in both B/C, then o.f is compile-time binding, based on declared type. P40 [[java precisely]]
– if you have a static method m() defined in both B/C, then o.m() is compile-time binding, based on declared type. [1]
– if you have a nonref B variable receiving a C object, then slicing — you can’t access any C part.

[1] That’s well-known in java. In C++, You can also “call a static member function using the this pointer of a non-static member function.”

(architect IV) what I wish java to have

— big wishes —
* NullPointerException — too many of these are thrown in production systems and can take hours of wild goose chase. Developers must be very very thorough, and adopt a lot of defensive coding habits. Java won’t help you.
* easier tools for byte code engineering
* easier reflection — Look at dynamic scripting languages
* programmatic class creation; runtime class creation.
* memory leak — hard to detect
* easier immutable objects — String is great but we need more

–small wishes
* simpler getter/setter — look at C# properties
* Bags as collections.
* serialization — is a bit murky. I feel this is an important area neglected by many developers, perhaps because it’s murky. Perhaps java can support a special debug-serialization so we can see what it does to a complicated object graph
* checked exception — is a mistake in many developers’ opinion. Narrow the scope of this construct.

#1 essential communication skill #in office

Whenever a finance IT job spec says “communication skill” or “communicator”, it could mean several things
– can understand business users
– can understand requirements
– can write (non-trivial) email
– can write reports
– can present

But I feel the most important criterion is — articulation.

I don’t know for sure, but i feel most people can do a good enough job of understanding users or requirements, but the level of articulation varies among developers.

I guess outside US and UK, some developers have problem with English.

##major corp bond ECNs

“credit e-trading” basically means corporate/muni bond ECN trading. “Credit” doesn’t cover CDS in this context.

Major ECNs include
– bloomberg — FIX
– knight – FIX
– the muni center – FIX confirmed
– tradeweb – FIX

Many investment banks translate external FIX messages to in-house FIX format

KDB is used as in-memory DB or cache for “history” only

how many tosses to get 3 heads in a row – markov

A common probability quiz — Keep tossing a fair coin, how many tosses on average until you see 3 heads in a row?

Xinfeng Zhou’s book presents a Markov solution, but I have not seen it. Here’s my own. The probability to reach absorbing state HHH from HH, or from H or from *T turns out to be 100%. This means if we keep tossing we will eventually get a HHH. This finding is trivial.

However, the “how many steps to absorption” is very relevant.

null address ^ null-pointer-variable

@ A null address is the fake address of 0. It doesn’t exist physically. Compiler treats it differently (Don’t ask me how…)
@ A null-pointer-variable is a pointer variable holding a null address.

I think this is a source of confusion to newbies. A so-called “Null pointer” means one of these 2.

There’s just one null address, /to the extent that/ there’s just one Address 0xFF8AE0 or whatever. But there can be 5 (or 5555) null pointer variables. Note each pointer variable doesn’t [1] always occupy 32 bits (assuming a 32-bit bus), but usually does. If it does, then the pointer variable’s own address is never 0. (Anything that’s /allocated/ an address is never at Address 0 since Address 0 doesn’t exist.)

[1] I guess if a pointer variable has a brief lifespan it may live in the cache or thread register??

FI — divided into short-term ^ long-term markets

Fixed Income is broadly divided into rates + credit markets. Rates business is further divided into short term rates (i.e. money market) and long term rates.

(In terms of pricing, risk…) Since rates are the basis of credit,
^ short term rates are the basis of short term credit — eg repo, short term corporates/munis
^ long term rates are the basis of long term credit

The instruments are obviously different between short vs long term rates. Therefore the /markets/ are distinct, since a short term instrument (like ED futures) is traded only in the short term market.

However, Swap curve covers short/long terms, just like T yield curve.  In contrast, Libor is strictly (unsecured) short term lending.

how many tosses to get 3 heads in a row

A common probability quiz — Keep tossing a fair coin, how many tosses on average until you see 4 heads in a row? There’s a Markov chain solution, but here’s my own solution.

I will show a proposed solution using math induction. Suppose the answer to this question is A4. We will first work out A2 and A3.

Q1: how many tosses to see a head?

You can get h, th, tth, ttth, or tttth …  Summing up 1/2 + 2/4 + 3/8 + 4/16 + 5/32 + … == 2.0. This is A1.

Q2: how many tosses to see 2 heads in a row?

t: Total tosses in this scenario = 1+A2. Probability = 50%
hh: 2. P = 25%
ht: 2 + A2. P = 25%

(1+A2:50%) + (2:25%) + (2+A2:25%) = A2. A2 works out to be 6.0

Q3: how many tosses to see 3 heads in a row?
After we have tossed x times and got hh, we have 2 scenarios only
…..hhh — total tosses = x + 1. Probability = 50%
…..hht: x + 1 + A3 : 50%

(x+1: 50%) + (x+1+A3 : 50%) = x+1 + 0.5×A3 = A3. So A3 = 2×(1+x), where x can be 2 or 3 or 4 …

In English, this says

“If in one trial it took 5 tosses to encounter 2 heads in row, then this trial has expected score of 2*(1+5) = 12 tosses.”
“If in one trial it took 9 tosses to encounter 2 heads in row, then this trial has expected score of 2*(1+9) = 20 tosses.”

Putting on my mathematics hat, we know x has some probability distribution with an average of 6 because A2 = 6. We can substitute 6 for x, so

A3 = 2×(1+A2) = 14.0. This is my answer to Q3.

Now we realize A2 and A1 are related the same way: A2 == 2×(1+A1)

I believe A4 would be 2×(1+A3) == 30.0. Further arithmetic shows A[n] = 2(A[n-1]+1) = 2^(n+1) – 2

correlation and realized volatility — won’t stay constant

I read these findings sometime ago and now i feel more strongly that the asset correlation theory is rather impractical, unreliable, even misleading. It can give naive users a false sense of security, just like VaR does.

One problem i feel strongly about is that strength of correlation doesn’t last.

(There are many commonalities between vol and correlation as 2 statistical power-tools.) Just as observed volatility[1] changes over time[2], observed correlation between any 2 assets seldom stays stable. Intuitively, 2 assets can be highly correlated now and uncorrelated later[3]. This is an important fact to bear in mind when using correlation numbers to predict anything long-term.

FX rates often show strong correlation with one subset of “drivers” now, and then another subset of drivers later. If you follow the correlation with one particular driver, it rises and falls.

[2] Note vol is often assumed to have a lasting character particular to a given stock. Such an assumption in turn assumes history tends to repeat itself. What’s the pattern that repeats? For realized volatility of stocks/indices/currencies, there’s often an observed pattern that periods of high vol follow periods of low vol. I think a small number of symbols exhibit a consistently high volatility, often for obvious reasons.

The volatility of SPX (i.e. S&P 500) is reflected in the Volatility Index (VIX). You can see how VIX rises and ebbs

[1] using daily closing prices, but how about using hourly prices or monthly closing prices? Stdev/vol may look very different

[3]but probably won’t become anti-correlated. I guess 2 anti-correlated assets can show positive correlation in a credit crisis, when every security loses value relative to hard currencies or commodities.

spot ^ fx-options — 2 "extreme" segments of FX market

Within the FX space, I get the impression that the trend of open standards, transparency, inter-connectedness, commoditization, shrinking spreads and margins, widening competition among liquidity-pools (decentralization)… and all the other “good” stuff (to users) is primarily in the spot market, not so much in the FX options market. The reasons beneath are intriguing and offer a glimpse of the true drivers in FX markets.

Spot and Options are kind of 2 extremes in terms of commoditization…..

FX futures (mostly on CME) are fully standardized and “bulldozed”, as equities markets were bulldozed decades earlier. But I was told the forward market is still bigger and more popular. Why?

The buy-side probably prefers standardized instruments — cheaper, transparent, fierce sell-side competition … but
the sell-side probably prefers the good old OTC contracts.

Dealers like things murky, and worry about too much transparency. The more murky, the more they can charge big spreads for the “service” they provide. A major reason for the shrinking margin in equity dealing desks over the past decades was the regulatory change to drive trading on to exchanges.

FX options are largely unaffected. Dealers don’t want to publish their quotes. Instead they respond to RFQ, according to a friend. (In theory, there might exist some *private* market maintained by a single market-maker, but I don’t know any.)

Fwd is similar. However, I was told the standard spot trade has a T+2 settlement and works like a short-term forward contract. Also, spot traders do a lot of rolling, so fwd trades and rates are ubiquitous.

Bank sends quotes (perhaps RFQ replies) to ECNs and also privately to individual clients. Known as “Bank feeds”. Notably, for a given currency pair at a given time, a bank sends slightly different spreads depending on audience — tiered quote pricer. A sign of market fragmentation, IMHO. Note, bank feeds aren’t always from commercial megabanks — investment banks also produce bank feeds.

slicing/vptr/AOB — pbref between base^derived

(For vptr during slicing, see other posts)

Q: Any Slicing when func1(const B& b) receives a D argument?
A: no, since there’s just _one_ object (see posts on implicit cloning). But slicing can happen afterwards, if you copy through that reference.

Background — On the “real estate” of the Single-Inheritance D object, there is a B object, like a basement. B’s real estate is part of D’s real estate — these 2 objects share[3] the same “postal address”, making pointer casting possible.

[3] with multiple inheritance, Derived object and the embedded Base sub-object don’t necessarily share the same postal address of “this”

(Warning: AOB is inapplicable for multiple-inheritance. See other posts for details.) In the opening question, when you get a B ref to a D object, you get the address of the basement (AOB). Using this B reference, you can access B’s fields only. To access D’s fields, you must downcast a B ptr into a D ptr. AOB is the basis of most if not all slicing problems such as

* copy ctor qq( B b=d ) — In the implicit copier call, the implicit param “const B&” gets the AOB so can only access the B fields of the D object. After the copier call, the LHS becomes a sliced copy of the D.

* assignment qq( B b; b=d ) — The param “const B&” gets the AOB so can only access the B fields of the D object. LHS becomes a sliced copy of the D.

Remember the onion — slice off the outer layer.

long constructor signature vs tons of setters

A mild code smell is a long list of (say 22) constructor parameters to set 22 fields. The alternative is 22 setters.
If some field should be final, then the ctor pain is justified.

If some of the 22 fields are optional, then it’s good to initialize only the compulsory fields in the ctor. Optional fields can use setter.

If one field’s initialization is lengthy, then this single initialization can prolong the ctor call, during which a reference to the incomplete new-born object can leak. Therefore, it’s perhaps safer to postpone this slow initialization to a setter.

If a setter has non-trivial logic for a field, then the setter is a reasonable choice.

The best benefit of the ctor route is the final keyword. With setters, we would need to deny access to a field before initialization. Final fields are beautiful.

It’s often hard to know which argument is which when looking at the 22 arguments in a ctor call (e.g. when they include literals). However, a cheap sweetener is the IDE refactoring tool introduce-local-variable. In contrast, setters serve as simple documentation, but some IDEs are unable to update setter names when we rename a field.

try to constrain objects to Eden #UBS tip

If you want to avoid full GC and can tolerate short jitters due to partial GCs, then here is a practical and effective technique.

Idea is to minimize promoting objects into the 2 survivor spaces or further into OldGen.

Make most of your newly created objects unreachable (i.e. garbage-collectible) ASAP. Well, even if I do that ASAP, it may still be 3000 instructions after instantiation, and in a multi-threaded JVM, we don’t know how long that takes from the memory allocator’s point of view. Here’s the deal — suppose we were a spy inside eden, monitoring the amount of reachable vs unreachable objects. Whenever eden gets full, I want to see a good amount of unreachables.

If a large object is needed over and over again, put it into an object pool (or local cache) and get it promoted into OldGen.

Keep eden the standard size, perhaps 1/8 (not 25%) of heap. No need to increase it.

idle timer inside app server engines

I just realized idle timers are often “passive”. You may imagine a separate thread performing periodic polling to detect idlers[1]. Well, not common.

The idle timer is just a “state” added to the “decoratee”. When the decoratee becomes idle, that time is saved in the state field. When a “request” is received from a client (a call from any thread), the state is checked. If the idle time has been too long, we kill that decoratee or mark it as obsolete.

See P189 [[Weblogic]]

[1] Heart beat checker is that way.

DCBC — dtor execution order

D — Derived class dtor, which should not explicitly invoke C or B
C — Component (ie field) dtor of the Derived class
B — Base dtor
C — Component dtor of the Base class

Compiler arranges these steps to execute in a single thread. See P277 ARM

A1: same DCBC according to EffC++. I probably tested it.

A3: See P60 [[safe c++]]. The exception triggers stack unwinding — C B C. Exact same sequence except the very first step i.e. the D is skipped. Derived dtor is skipped because Derived ctor didn’t complete.

The simple rule in this “exceptional” scenario is “whenever a ctor completes without throwing, its dtor would be executed.”

A2: constructed First, so destructed Last. Remember dtor^ctor is ALWAYS reverse order

Quizzes:
Q1: but what if i invoke Derived virtual dtor through “delete”?
Q2: Virtual base?
Q3: what if the derived ctor throws exception after B and C completed? Which dtors will/won’t run and in what order?

(To aid blog search, D-C-B-C)

producer/consumer – fundamental to MT(+MOM/trading) apps

I hardly find any non-trivial threading app without some form of producer/consumer.

#1) Async — always requires a buffer in a P/C pattern. See http://bigblog.tanbin.com/2011/05/asynchronous-always-requires-buffer-and.html

# dispatcher/worker — architectures use a task-queue in a P/C pattern.
# thread pools — all (java or C++) use a task-queue in a P/C pattern.
# Swing — EDT comes with an event queue in a P/C pattern
# mutex — Both producer and consumer need write access to a shared object, of size 1 or higher. This always needs a mutex.
# condVar — is required in most cases.

1 JMS queue, multiple sessions and threads

My friend Kunal Khosla pointed out developers can increase queue reading performance by connecting 2 (or more) listener sessions to the same physical queue.

JMS rule — Each message is delivered to exactly 1 session.

Since the 2 sessions are active-active (no “standby”), who will get it? Round robin is the choice of OpenJMS.

JMS rule — 1 thread per session. So you end up with 2 threads load balancing off the queue.

JMS rule — ClientID is a unique ID. The 2 sessions probably need different ClientIDs.

See http://activemq.apache.org/multiple-consumers-on-a-queue.html and
http://activemq.apache.org/how-does-a-queue-compare-to-a-topic.html

static object initialization order #lazy singletons

Java’s lazy singleton has a close relative among C++ idioms, comparable in popularity and implementation.

Basically, local statics are initialized upon first use, exactly. Other static Objects are initialized “sometime” before main() starts. See the Item on c vs c++ in [[More eff c++]] and P222 [[EffC++]].

In real life, this is hard to control — a known problem but with standard solutions — something like a lazy singleton. See P170 C++FAQ. Also addressed in [[EffC++]] P221.

challenges in your swing projects@@

Challenge: layout. If it takes a few hours to implement new biz logic, it can take 2 days to get the layout exactly right. Dotnet solutions probably take half a day only. Sub-panels take memory. WYSIWYG layout? WPF separates layout and code into separate physical files.

Challenge: memory consumption is higher in swing than in other GUI frameworks. Many times a swing app takes up 900MB and will not be able to grow further.

Challenge: threading. Manageable unless messaging rate is very high.
Challenge? robust performance. App should remain fast and responsive in the face of various hazards — slow DB/network; memory leak; slow action listeners; ….
Challenge? bloated fields and method params
Challenge? memory leak due to listeners. Swing is a memory hog.
Challenge? EDT overloaded? 
Challenge? code duplication — a lot of similar callbacks
Challenge? a long and ever-growing class with lots of inner classes — including invokeLater() and callbacks

some basic forex market facts

Most ECNs support spot only. Banks don’t want to publish quotes on fwd and options because that pricing is proprietary.

Fwd market is bigger than futures mkt (mostly CME)

EBS min lot is 1 mio. EBS is the same nature as the ECNs, but the EBS pool is deeper than the ECN pools. Reason — EBS is older and has more participation.

Billion-dollar deals — the client would choose RFQ. These large deals are relatively rare, so private negotiation is usually required.

book-value risk ^ mkt-value risk – 2 views on interest rate risk

http://www.riskglossary.com/link/interest_rate_risk.htm points out that interest rate sensitivity is measured in 2 (completely) different ways
– a book value perspective, which perceives risk in terms of its effect on accounting earnings,
– a market value perspective – sometimes called an economic perspective – which perceives risk in terms of its effect on the market value of a portfolio.

I believe the book-value perspective is the layman’s perspective. It ignores time value of money (NPV) and treats future cash flows the same as current cash flows, assuming a single universal discount factor of 100%.

learning common code smells #time-savers

XR,

Every experienced and self-respecting developer cares about maintainability, because every one of us has first-hand experience tracing, changing, and “visualizing” convoluted logic in other people’s code. Once you know the pain, you intuitively try to reduce it for your successors — altruistic instinct.

At the management level, every experienced /application-manager/ recognizes the importance of maintainability since they all have witnessed the dire consequence (earthquake, explosion, eruption…[1]) of making changes to production codebase without fully understanding the codebase. More frequently, management knows how much developer Time (most Wall St teams are tiny — nimble and quick turnaround) it takes to find out why our system behaves this way. Bottom line — unreadable code directly impacts an application team’s value-add.

Technical debt is built up when coders take shortcuts, hide dirt under the carpet,  and “borrow time”.

If we ask every developer in each of our past teams to write down their dreaded code smells, some common themes will emerge — like rampant code duplication,  VeryLongVeryUgly methods, VLVU method signatures or 10-level nested blocks. Best practice means no universally-hated code smells.

Beyond those universally-hated, code smells can be widely hated, hated by some, frowned-on or tolerated … — different levels of smelliness.

I am trying to understand the common intolerance of the common developers (not the purists). You seem to suggest there’s no point trying to. If someone says there’s code smell in my code, i’d like to know if it’s universally hated, or hated-by-some.

In many cases, our code is seldom debugged/modified/reviewed by others. In those cases, it doesn’t matter that much.

Happy new year …

[1] In reality, they often get the “patient” to sign the not-responsible-for-death waiver before operating on the patient. They also have a battery of UAT test cases to make sure all the (common) use cases are covered. So the code change is a best-effort and may break under unusual circumstances.

credit-based forex pip spread

http://www.investopedia.com/articles/forex/06/interbank.asp#ixzz1ieCkYs8D says something like

In the interbank market (EBS and Reuters), banks trade based solely on the credit relationships they have established with one another. All can see the narrowest spread, but each bank must have a specific credit relationship with another bank to trade at the best spread. The bigger the bank, the more credit relationships it can have and the better pricing it will get. Similarly, the larger a retail forex broker is in terms of capital, the better pricing it can get from the interbank market. If a client or even a bank is small, it gets a wider spread.

I believe nothing else counts when it comes to the bid/ask spread you receive — only your credit counts. So how does a dealer size up your credit? I feel it’s largely due to your cash pile.

Why is an exchange conferred higher credit than all the banks and even the national government? Because an exchange has the most strict credit control and credit risk monitoring in place. No exchange has ever failed to deliver on its obligations.

STL/boost in quant lib

To a quant developer, I feel STL containers are more useful than STL algorithms. However, if you use STL containers, then some STL algorithms will be handy.

1-D and 2-D arrays are the bread-and-butter of quant lib. STL vector and vector-of-vector are good enough.

Sets and maps are widely used too in a few banks.

Daniel Duffy said STL isn’t enough for quant, but according to 2 friends (GS and an FX house) STL is the most important library for quant codebase; boost is clearly the 2nd (for quant library).
* The most important boost component is smart pointers.
* boost serialization

delta 1, non-linear, optionality

Delta 1 derivatives are intuitive — you can “feel” that a 1-cent drop in underlier causes an (almost exactly[3]) 1-cent drop in the derivative. A common synonym for Delta 1 is ….. Linearity. I think Linear basically means the graph of MV/underlier is straight.

Why is delta one important? Price sensitivities (including greeks, duration, dv01) are the focus of market risk, and delta is the #1 most important sensitivity.

Optionality is the defining feature of non-delta1.

A big bank often puts FX options in a different trading desk from FX forwards/futures and spot — the delta-1 instruments.

A big bank often organizes equity trading along delta 1 (NOT along OTC/listed). As a result,
* all equity options (including index or OTC structured options) and variance swaps are grouped into “volatility” desk. I think convertible bonds too.
* equity futures and basket trading (including ETF) are grouped into equity cash desk. I think equity swaps too.

How about IRS or bonds? I believe they are not delta 1. For interest rate sensitivity in general, DV01 and duration are more useful (probably more comprehensive) than delta. There's not a single underlier variable like an ETF or currency price, but a term structure of spot yields. Notice I don't mean forecast values of yield but rather spot yields. Let me try to explain a bit.

Outside the fixed income space, a derivative contract uses a Reference variable. The variable has a spot value. On any valuation-date now or in the future, there's just one spot value. In the IR space, the reference entity exists on a yield curve. On any valuation-date, there's an entire spot yield curve (so-called term structure) [2]. A particular position is often sensitive to changes in the reference yield curve [1]. Therefore, an analysis of market-value change due to a change in one interest-rate is inadequate.

[2] actually I feel volatility smile is a “spot” term structure of vol. We construct a smile curve using today's implied vol and get a spot smile. If we use 1/1/2009 implied vol, then we get a spot smile curve as of that date. Using today's information, can we forecast next year's vol smile curve? I don't think we can, but people try anyways.

[1] example – a vanilla bond paying 600bps/year. It's trading at 98.1. What if there's a parallel yield shift? All upcoming payouts will devalue. What if there's a yield drop at the far end? Now you see IR sensitivity is not about a single reference variable, but about a change to a yield curve.

[3] if delta is almost 1.0, then it's delta 1.

Forex ECNs cater to banks more than buy-side

Forex ECNs compete by attracting buy-side and sell-side, but they work harder to /please/ and satisfy the sell-side — primarily banks.

In the FX business, I was told liquidity and credit ultimately depend on cash pile. All spot trades (trillions each day) must be settled in cash. Banks are the only big shots. Hedge funds can play liquidity providers but they don’t have anything close to that amount of cash. (Hedge funds ultimately need banks to support their FX trades.) I’d guess even investment banks can’t match (and have never taken the top spot). Of course, banks aggregate so much cash because of depositors.

Now, there are only a small number of mega-banks. The entire market is largely controlled by these banks (except central banks). Therefore ECNs loop them in first (as what I call “anchors”), and customers will soon follow.

Therefore, in all the bank-to-client ECNs [1], many rules are bent towards the banks — privileges of liquidity providers.  One of the privileges is the Last Look. I was told this is a protection demanded by liquidity providers. I was told LPs always need and have something like a last look. Here’s a “creative” abuse of LastLook by Barx —

  • client sees an offer of 1.117 on the Barx screen and sends a market-buy
  • Barx (backend) looks at open market and notices a temporary dip to 1.116 and anticipates a recovery so it Buys at 1.116.  Then it waits.
  • If price indeed recovers, Barx tells client “We bought for you at 1.1165”, and pockets the difference in price
  • If the price /unexpectedly/ sinks further to 1.115, then Barx can tell the client “we bought for you at 1.116, before the price sank further”, so Barx doesn’t suffer any loss.
  • My own assumption — client has right to see the price history on the open market — transparency. This protects clients from large slippage in an /erratic/ market.

Another privilege is requote…

The playing field is tilted towards the liquidity providers. LPs provide an “essential service” to the (huge number of) clients, and LPs basically dictate the rules to protect themselves. However, these rules aren’t arbitrary, unfair or unreasonable. There’s no monopoly here.

In contrast, credit bond dealers presumably have a weaker stranglehold on clients. The need to convert between currencies is more basic and critical than the buy-side’s needs in a credit market
* issuers have other means of borrowing, though bond issue is the most important
* bond buyers have many many investment choices.

[1] Is there any bank-to-bank network? I was told EBS was (http://www.investopedia.com/articles/forex/06/interbank.asp#axzz1ie9e5WS0), but then customers are also allowed.

implement vtbl using ANSI-C

See also [[understanding and using c pointers]], which has about 5 pages (with sample code) on how to implement polymorphism. It covers some non-trivial details of the implementation.

Basic techniques and principles…. Emphasis in this write-up is clarity through simplicity, not rigor or correctness.

_instance_field_ — implemented as field of struct.

** Let’s say the struct type is MyClass.

_instance_method_ — a function pointer field in the struct. Each function is an ordinary free function taking ptr-to-MyClass as its 1st parameter. The compiler converts all instance-method calls to function calls with “this” as the 1st argument.

_static_method_ — like an instance method but without that 1st parameter.
_this_ — special hidden read-only field in MyClass of “ptr-to-MyClass” type. Through this pointer, each MyClass instance knows the address of its own real estate. As explained in other posts such as Object.java size, such a 32-bit real-estate usage in every MyClass instance is rather costly and probably avoided in a c++ compiler. Can we avoid it in our home-made class?
_virtual_methods_ — a bunch of free functions, one per class, each taking a different type of 1st parameter. Since overloading isn’t allowed in C, they can’t literally share one name; the “sharing” happens via the same slot in each class’s vtbl.
— ptr-to-ClassB vs ptr-to-ClassD
_vptr_ — another hidden field in the struct, pointing to an array of function pointers.
_inheritance_ — the MyClassD struct encloses/embeds (not a pointer to but) an entire MyClassB struct. Note the MyClassD instance and its embedded MyClassB sub-object have the same address, permitting pointer casts.
_private_ — a compile-time access check on members of the struct

Treasury algo trading (low-latency), basics

In this European megabank, “treasury algo-trading” basically means RFQ auto-quoting.

Bloomberg and tradeweb allow an institutional client to send RFQ to a selection of (her “favourite”) Treasury dealers (also Agency bond dealers and IRS dealers). RFQ can be both bid and ask.

Off-the-run securities have much lower liquidity than on-the-run, but still higher liquidity than muni or corp bonds. Basically, who wants to buy the off-the-run when they can easily get the on-the-run?

Latency [ from receiving the request at the gateway, to sending out the response from the gateway ] is typically 60~100ms (some say up to 200ms). 500ms (half a second) would be too high. I mean millis, not micros.

The RFQ itself takes 20-30ms to price.

functional programming – personal observations #FMD

(( FP]%%projects —
– XSLT
– python – first-class functions, lambda, closure, reduce(), zip(), apply(), map()
– perl – first class functions, closures, sort(), map()
– FMD ))

I feel FP is another buzzword people like to attach liberally (loosely) to various languages and software “Modules” like classes, functions, APIs.

The strict definition of FP is non-trivial, but let’s look at some salient features — the handful of pillars beneath a sprawling complex. If you have a module (a piece of software) that shows any one of these features, you are often free to say it’s kinda FP.

* [t] side-effect-free, always-repeatable functions — for a given set of input values to a function, you get the same output from every invocation
* [t] stateless functions — I guess C functions without static or global variables are simple examples.
* stateless objects? — stateless objects imply immutable. Mutable always refers to some stateful object. However, I feel a strictly stateless object is impractical and useless.
(Q: good example of immutable but stateful? Try google)
* [t] everything immutable — if all objects in your module are immutable, then probably it’s FP

[t = important to multithreading]

Actually, definition of FP is less important than the FP best practices encouraged and embedded in the popular FP languages. If any defining feature of FP isn’t widely adopted, then it gradually loses relevance. Therefore I’d rather spend more time on the common denominators of FP languages than the strict definition.

http://www.ibm.com/developerworks/linux/library/l-prog/index.html?ca=drs- has a similar view.

From now on I’d use “math” to mean “math-like”.
——————
The only FP language I have used is Functional Model Deployment (FMD). I confirmed with other FMD users — all objects are immutable but (to be of any use) definitely stateful. Every c++ non-static method becomes a math-function. The host instance i.e. “*this” becomes the first argument, and a new instance is returned, to preserve the immutability of the original instance.

The truly defining feature is the eval (fn, values, labels) construct.
– value can be any object, like some street address
– label is just a string, like “address”
– fn is what I call an incomplete object. It looks like an object with (potentially complex) state, but when you view it, it is presented as a function — a math function — with at least one variable unbound. If that variable is named “address”, then the above eval() would provide the required variable binding, and transform that function into a complete object.

Each object is written as some math-expression, so it usually depends on some objects. There could be a very long (thousands of nodes) object graph behind an object. Just think of each object in the graph as a math expression, one feeding into the next.

If any upstream object has an unbound variable, then all downstream objects become incomplete objects i.e. math-functions.

reflex about option premium at different strikes

Example — Among spx calls, at lower strikes, premium should be ….? Higher. We need to develop reflex.

There's no greek or graph of premiums against strikes. This relation is too simple. But many option newbies lack quick confidence about this “simple” thing. When you reason about option spread strategies, you need this quick confidence.

Think of calls as shopping coupons “get a beer at $1 with this coupon”. The lower the “strike”, the more Valuable is the coupon.

Puts are actually more intuitive. The higher the strike, the higher the premium, since the buyer can “cash in” more.

##job spec + %%observations: tech lead

#1 [SY,pwmcore] make architecture decisions for the team(s)
#2 [S] give software development direction to developer teams, including sample code
#3 [S,pwmcore] maintain and bug fix key components and infrastructure of the architecture.
#4 [S,pwmcore] code reviews
#5 policeman — ensure implementation follows firmwide standards. But I feel this is often impractical under tight deadlines. Wild west is the norm under tight deadlines
#6 whip up interfacing code to downstream (upstream too?) systems.
# buildmaster
# automated blackbox testing

Above are based on a well-written job spec — a perspective from the senior management. Below are a few more job duties I have observed

[S=Sundip of Stirt Risk]
[Y=Yang]

# [S, pwmcore] go-to person and problem solver on just about anything technical, but (the bandwidth of this person is limited so) particularly the components that aren’t owned by any single application.
# automated test framework
# automated build and deploy
# [SY] represent the department and interface with firmwide architect councils on decisions affecting the entire firm
# [SY] long-term planning; long-term bets; monitor long-term trends
# evaluate new “not-invented-here” software products proposed for adoption by application teams

The same job spec also said “bring a strong sense of ownership and must be driven to deliver tasks to completion”. So this is a doer role, not a talker role. I guess more problem solving and getting-things-done, less (but still a great deal of) presentation, persuasion, explaining.

How about debugging? A software architect (see other posts) must be quick with debuggers etc.

no 2 thread for 1 symbol: fastest mkt-data distributor

Quotes (and other market data) sent downstream should be in FIFO sequence, not out-of-sequence (OOS).

In FX and cash equities (eg EURUSD), I know many major market data aggregators design the core of the core feed engine to be single-threaded — each symbol is confined to a single “owning” thread. I was told the main reason is to avoid synchronization between 2 load-sharing threads. 2 threads improve throughput but can introduce OOS risk.

You can think of a typical stringent client as a buy-side high-frequency trader (HFT). This client assumes later-delivered quote is physically “generated” later. If 2 quotes arrive on the same name, one by one, then the later one always overwrites the earlier one – conflation.

A client’s HFT can react in microseconds, from receiving quote (data entering client’s network) to placing orders (data leaving client’s network). For such a fast client, a little bit of delay can be quite bad, but not as bad as OOS. I feel OOS delivery makes the data feed unreliable.

I was told many automated algo trading engines (including automatic offer/bid pricers in bond) send fake orders just to test the market. It sends a test order and waits for the response in the data feed. An OOS delivery would confuse this “observer”.

An HFT could be trend-sensitive. It monitors the rise and fall of the sizes of the quotes on a given name (say SPX). It assumes the market data are delivered in-sequence.