[12] erase while reverse iteration: map,list #noIV

Sugg: reverse iterator is too complicated when you want to erase. Just use forward iterator!

Tip: reverseItr.base() is designed for insert-by-fwd-iterator. To emulate insertion at a position of a reverse_iterator named ri, insert() at the position ri.base() instead.

Now let’s be specific. When we say “insert at ri” we mean insert to the Right of ri, i.e. insert “inFrontOf” ri in the iteration direction. That’s the goal, but it can’t be coded directly, because insert() accepts only fwd iterators. So you pass ri.base() to insert(), and it inserts exactly to the Right of ri. Therefore, for purposes of insertion, ri and ri.base() are equivalent, so to speak.

For the purpose of logical insertion, ri.base() is truly the fwd iterator corresponding to ri. See http://www.drdobbs.com/three-guidelines-for-effective-iterator/184401406

Warning: For purposes of erasure, ri and ri.base() are Not equivalent, and ri.base() is Not the fwd iterator corresponding to ri.

Tip: after an erase, the reverse-iterator’s value is less intuitive to predict. (I believe it ends up at ri + 1, i.e. one step further in the reverse direction.) Better to continue/break the for-loop right away, rather than executing to the end of the current iteration. It’s not always straightforward to directly “dump” the reverse-iterator.

Tip: after erasing, print all Keys to be sure.

Tip: instrument dtor to see which node is erased

Tip #1: if you need to continue looping, then don’t use for-loop. Use a while loop to gain control over exit condition and over increment on the reverse iterator.

 

for (MapType::reverse_iterator ri = bidBook.rbegin(); ri != bidBook.rend();) {
    ListType & li = ri->second;
    li.remove_if(mem_fun(&LimitOrder::notLive));
    if (li.empty()) {
        bidBook.erase((++ri).base()); // ++ first, then erase the node ri originally referred to
        //showKeys(bidBook);
        continue; // ri was already advanced above
    } else {
        bestBid = ri->first;
        return;
    }
    ++ri; // unreachable: both branches above leave the iteration early
}
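
Following the suggestion at the top, here is a minimal sketch of the forward-iterator alternative. It reuses the names from the snippet above (MapType, ListType, LimitOrder, bidBook, bestBid are assumed); to scan from the highest bid first, the map would be declared with a std::greater key comparator.

for (MapType::iterator it = bidBook.begin(); it != bidBook.end(); /* increment inside */) {
    ListType & li = it->second;
    li.remove_if(mem_fun(&LimitOrder::notLive));
    if (li.empty()) {
        bidBook.erase(it++); // classic C++98 idiom: post-increment before the node disappears
    } else {
        bestBid = it->first;
        return;
    }
}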


iterators : always pbclone !! pbref or by pointer

Iterators – generalizations of pointers – are expected to be copied cheaply. The copy cost is never raised as an issue.

Iterators are usually passed by value. Sutter and Alexandrescu recommended (P154 in their book) putting iterators into containers rather than putting pointers into containers. Containers would copy the iterators by value.

someContainer.end() often returns a temp object, so taking its address is a bug. The returned iterator object from end() must be passed by Value.

Someone online said that if an argument is taken by value, it is usually easier for the compiler to optimize the code. Compare the advantage of passing function objects by value rather than function pointers; the same reasoning applies to by-value iterator parameters.
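
Here is a minimal sketch of the by-value convention (the helper sumRange is made up; the STL algorithms follow the same pattern):

#include <vector>

template <typename Iter>
double sumRange(Iter begin, Iter end) { // iterators passed by value (pbclone), cheap to copy
    double total = 0;
    for (; begin != end; ++begin) total += *begin;
    return total;
}

int main() {
    std::vector<double> v;
    v.push_back(1.0); v.push_back(2.5); v.push_back(3.5);
    return sumRange(v.begin(), v.end()) == 7.0 ? 0 : 1; // v.end() returns a temp, passed by value
}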

Note java/c# arguments are predominantly passed by reference.

silverlight – Boris

SL runs on both the client machine and the server machine. SL always needs a server side. The server must be Windows, usually IIS, but it can also be WCF services hosted in Windows Services or console hosts.

SL can run standalone outside a browser, but is usually run inside a browser.

You write SL just like you write WPF. Rather different from ASP.

essential FIX messages

Focus on just 2 app messages – new order single + a partial fill. Logon/Logout/heartbeat… LG2 — important session messages.

Start with just the 10 most common fields in each message. Later, you need to know 20 but no more.

For the most important fields – tag 35, exectype, ordstatus, focus on the most important enum values.

This is the only way to get a grip on this slippery fish

25-13:34:27:613440 |250—-> 16 BC_ITGC_14_CMAS_104 1080861988 |

8=FIX.4.2^A9=0162^A35=D^A34=00250^A52=20120725-17:34:27.610^A49=BARCAP014A^A56=ITGC^A115=BARCAP1^A11=1040031^A22=5^A38=100^A40=1^A44=0.01^A47=A^A48=RIM.TO^A54=1^A55=RIM^A59=0^A100=ITGC^A376=-863982^A10=124^A

|

— A NewOrderSingle followed by 2 exec reports —

25-13:34:27:802535 |<—-252 13 BC_ITGC_14_CMAS_104 1079105241 |

8=FIX.4.2^A9=220^A35=8^A128=BARCAP1^A34=252^A49=ITGC^A56=BARCAP014A^A52=20120725-17:34:27^A37=869-1^A17=10^A150=0^A20=0^A39=0^A55=RIM^A54=1^A38=100^A32=0^A31=0.000000^A151=100^A14=0^A6=0.000000^A11=1040031^A40=1^A22=5^A48=RIM.TO^A60=20120725-17:34:27^A59=0^A47=A^A10=002^A

|

25-13:34:27:990564 |<—-253 13 BC_ITGC_14_CMAS_104 1079105241 |

8=FIX.4.2^A9=230^A35=8^A128=BARCAP1^A34=253^A49=ITGC^A56=BARCAP014A^A52=20120725-17:34:27^A37=869-1^A17=20^A150=2^A20=0^A39=2^A55=RIM^A54=1^A38=100^A32=100^A31=7.010000^A151=0^A14=100^A6=7.010000^A11=1040031^A40=1^A22=5^A48=RIM.TO^A60=20120725-17:34:27^A59=0^A47=A^A30=MSCN^A10=076^A

|

c++/vol IV: FX risk (barcap@@) mid2012

Q: basic exception guarantee
Q: stochastic vol vs local vol?
Q: what’s RAII? What are the major classes you know using RAII
%%A: smart pointers, locks, stl containers, strings??

Q: what synchronization classes are there in c++?
Q: can a static member function be const?
%%A: no. the const is on “this”

Q: is it ok to mark a field mutable
%%A: I think so. student.getAge() can modify lastAccessed timestamp.

Q: what are the option models you know

boost intrusive smart ptr phrase book #no IV

* MI — P35 [[beyond c++ standard lib]] shows your pointee CLASS can be written to multiple-inherit from a general-purpose ref_count holder class. This is a good use case of multiple inheritance, perhaps in Barclays QA codebase.

* real-estate — The 32-bit ref counter lives physically on the real estate of the pointee. Pointee type can’t be a builtin like “float”. In contrast, a club of shared_ptr instances share a single “club-count” that’s allocated outside any shared_ptr instance.

* legacy — many legacy smart pointer classes were written with a ref count in the pointee CLASS like QA YieldCurve. As a replacement for the legacy smart pointer, intrusive_ptr is more natural than shared_ptr.

* builtin — pointee class should not be a builtin type like float or int. They don’t embed ref count on their real estate; They can’t inherit; …

* TR1 — not in TR1 (http://en.cppreference.com/w/cpp/memory), but popular

* ref-count — provided by the raw pointee Instance, not the smart pointer Instance. Note the Raw pointER Instance is always pointer-sized (32 bits on a 32-bit platform) and can never host the reference count.

* same-size — as a raw ptr

* expose — The pointee class must expose mutator methods on the ref count field
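
Here is a minimal sketch (Boost required; class and field names are mine, loosely echoing the legacy YieldCurve example). The pointee carries the counter on its own real estate and exposes it through the two hook functions that boost::intrusive_ptr looks up:

#include <boost/intrusive_ptr.hpp>

class YieldCurve { // stand-in for a legacy ref-counted class
    int refCount_; // the counter lives inside the pointee (not thread-safe in this sketch)
public:
    YieldCurve() : refCount_(0) {}
    friend void intrusive_ptr_add_ref(YieldCurve* p) { ++p->refCount_; }
    friend void intrusive_ptr_release(YieldCurve* p) { if (--p->refCount_ == 0) delete p; }
};

int main() {
    boost::intrusive_ptr<YieldCurve> sp(new YieldCurve); // same size as a raw pointer
    boost::intrusive_ptr<YieldCurve> sp2 = sp;           // count is now 2, stored in the pointee
    return 0;
} // both smart pointers destroyed -> count hits 0 -> pointee deletes itself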

single-stepping 4 stages of g++ compilation

First 3 stages are file-by-file; the 4th stage merges them. You can use “file anyFileName” to check out each intermediate file, and cat it if ascii.
–1) preprocessor.
yourSourceCode => preprocessedSource — often 200 times larger.
Both ascii format.
to run this single step — gcc -E
–2) compiler proper. It runs after the preprocessor and emits assembly text, not object code.
preprocessedSource => assemblyTextFile.
Both ascii format — yes, the compiler’s output is still text!
to run this single step — gcc -S
–3) assembler.
assemblyTextFile => individualObjectFile
Ascii -> binary
to run this single step — gcc -c -x assembler
One object file for each original source file.
–4) linker.
individualObjectFiles => singleMergedObjectFile
Now executable.

yield CURVE ^ yield/price CURVE

There are many curves in bond math, but these 2 curves stand out as by far the 2 most useful.

* the yield curve and its twin sister the discount curve, aka the swap curve
* the yield/price graph.

Note duration, convexity, dv01 are defined on the y/p curve.

For a given bond or for a given position, the y/p curve is fundamental. Most bond characteristics are related to or reflected on the y/p curve.

std::copy-print array of pointers

Suppose you already have a friend operator<<(ostream&, YourClass &) for YourClass, but you need to print out an array of pointers —

YourClass* array[99];

Here’s a first attempt —

copy(array, array+99, ostream_iterator<YourClass*>(cout, " ")); // prints the addresses

Simple solution —

Simply overload operator<<(ostream&, const YourClass*), forwarding to the existing operator<<
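
A minimal sketch of that trick (YourClass and its operator<< here are stand-ins for your real ones):

#include <algorithm>
#include <iostream>
#include <iterator>
using namespace std;

struct YourClass { int id; };
ostream& operator<<(ostream& os, const YourClass& v) { return os << v.id; } // the existing overload

// the one-line fix: forward pointers to the existing operator<<
ostream& operator<<(ostream& os, const YourClass* p) { return os << *p; }

int main() {
    YourClass a = {1}, b = {2};
    YourClass* array[] = { &a, &b };
    copy(array, array + 2, ostream_iterator<YourClass*>(cout, " ")); // now prints 1 2, not addresses
    cout << endl;
    return 0;
}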

Boost Any: cheatsheet

* bite-sized introduction — best is P164 [[beyond c++ standard lib]]. Best 2nd intro is P165.
* Most essential operations on Any are

1) ctors — deposit into safebox
2) any_cast — check out using the “key”

* void pointer — Any is better than void pointers. (It’s good to know that void pointers meet the same basic requirement.)
* shared_ptr — to store pointers in an Any instance, use shared_ptr.
* STL — Any can be used with or without STL containers. First get a firm grip on one of them.

— P164 – without containers
myAny=std::string(“….”);
myAny=3.281; // a double
//As shown you can put any object into the variable. To retrieve it, you specify the expected type.
any_cast<double>(myAny);
any_cast<int>(myAny); // would throw exception.
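
For completeness, a compilable version of the fragment above (Boost required; the int cast is just an arbitrary wrong type to trigger the exception):

#include <boost/any.hpp>
#include <string>
#include <iostream>

int main() {
    boost::any myAny;
    myAny = std::string("....");
    myAny = 3.281;                                            // now holds a double
    std::cout << boost::any_cast<double>(myAny) << std::endl; // check out with the matching type
    try {
        boost::any_cast<int>(myAny);                          // wrong type -> throws
    } catch (const boost::bad_any_cast& e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}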

Q: big 3?
Q: can subclass?
Q: name a real use case
Q: this sounds powerful, convenient, too good to be true. What’s the catch? Why not widely used?

c++ thread library – all wrappers@@

C++ thread libraries invariably smell like C library wrappers. They have a hard time shaking off that image.

Fundamentally, concurrency utilities are low level, and less suitable for OO modelling. OO often introduces overhead. A friend said “thread library is always using function API, not class API”.

I have not seen a c++ thread library that’s not based on a C thread library. A C version is more useful to more people.

client conn IV#FIX/java

Q: OrdStatus vs ExecType in FIX?
http://fixwiki.fixprotocol.org/fixwiki/ExecutionReport
http://fixprotocol.org/FIXimate3.0/en/FIX.5.0SP2/tag39.html — good to memorize all of these

Each order can generate multiple execution reports. Each report has an ExecType describing that report. Once an order’s status changes from A to B, the OrdStatus field changes from A to B, but the ExecType in the interim execution reports can take on various values.

Enum values of the ExecType tag and the OrdStatus tag are very similar. For example, a partial fill msg will have ExecType=F (fill) and OrdStatus=1 (partially filled).

exectype is the main carrier of the cancellation message and the mod message.

In a confusing special case, the ExecType value can be the letter “I”, meaning “Order Status”, i.e. the report is merely answering an Order Status Request.

Q: what’s a test request in FIX
AA: it forces the opposite party to send a heartbeat.

Q: what’s SendMessage() vs PostMessage() in COM?
http://wiki.answers.com/Q/What_is_the_difference_between_PostMessage_and_SendMessage

Q: intern()?

Q: are string objects created in eden or ….?

simplified order book design doc – jump

It’s tempting to use virtual function processMessage() to process various order types (A, M, X …) and trade types, but virtual functions add runtime overhead. Template specialization is a more efficient design, but due to the limited timeframe I implemented an efficient and simple alternative.

Assumption: M messages can only change quantity. Price change not allowed — Sender would need to cancel and submit new order. The B/S and price fields of the order should not change, but validation is omitted in this version.

Assumption: T messages and the corresponding M and X messages (also the initiator A message) are assumed consistent and complete. Validation is technically possible but omitted. Validation failure indicates lost messages.

The cornerstone of the design is the data structure of the order book — a RB-tree of linked lists. Add is O(logN) due to the tree-insert. Modify is O(1) thanks to the lookup array. Remove is O(1) — eliminating tree search. This is achieved with the lookup array, and by saving iterator into the order object.

There are 2 containers of pointers — the map of lists and the lookup-array. It would be better to use containers of smart pointers to ease memory management, but the (pre-C++11) STL doesn’t provide a smart pointer suitable for containers.

All equality tests on doubles are done using “==”. Should use some kind of tolerance if time permits.

Here’s the documentation in the lookup array class

/*This class encapsulates an array of pointers.
 Assumption 1 — An exchange is likely to generate auto-incrementing orderID’s. Here’s my reasoning. OrderID’s are unique, as stated in the question. If orderID generation isn’t continuous, then the generator has 2 choices about the inevitable gap between 2 adjacent ID numbers. It can keep the gap forever wasted, or somehow “go back” into a gap and pick a number therein as a new orderID. To do the latter it must keep track of what numbers are already assigned — rather inefficient. There are proven in-memory algorithms to generate auto-increment identity numbers. I assume an exchange would use them. Auto-increment numbers make a good candidate as array index, but what about the total number range?

 Assumption 2 — each day the number range has an upper limit. Exchange must agree with exchange members on the format of the orderID. It’s likely to be 32 bits, 64 bits etc and won’t be a million bits.

 Question 1: Can the exchange use OrderID 98761234 on both IBM and MSFT during a trading day? I don’t know and I feel it doesn’t matter. Here’s the reason.

 Case 1: suppose exchange uses an *independent* auto-increment generator for each stock. So IBM and MSFT generators can both generate 98761234. My design would use one array for IBM and one array for MSFT. For basket orders, additional generator instances might be needed.

 Case 2: suppose exchange uses an independent auto-increment generator for each stock, but each stock uses a non-overlap number range. 98761234 will fall into IBM number range. My design would need to know the number range so as to convert orderID to array index and conserve memory.

 Case 3: suppose exchange uses a singleton auto-increment generator across all stocks (bottleneck inside the exchange). My design would use one gigantic array. Given Assumption 1, the numbers would be quasi-continuous rather than sparse — below 50% of the range is assigned. Suppose the range is S+1, S+2 … S+N, then my array would be allocated N elements (where S is orderIDBase). There’s a limit on N in reality. Every system is designed for a finite N — no system can handle 10^9999 (that’s one followed by ten thousand zeros) orders in a day. Each array element is a pointer. For a 64-bit machine, N elements take 64N bits or 8N bytes. If I have 640GB memory, N can be 80 billion but not higher. To scale out horizontally, we would hope Case 1 or 2 is the case.

 Therefore the answer to Question 1 shows array of pointer is feasible for the purpose of lookup by orderID. In a real system hash table is likely to be time/space efficient. In this exercise, only STL is available, which provides no hash table. Tree based map has logN time complexity — too slow. My choice is between a built-in array vs a non-expanding vector. I chose array for simplicity.
 */
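
Below is a minimal sketch of the core data structure described above (bid side only; type and field names are mine, not from the original submission). The lookup array saves both the price-level iterator and the list iterator, which is what makes Modify and Remove O(1):

#include <map>
#include <list>
#include <vector>
#include <utility>
#include <functional>

struct LimitOrder { long id; double price; int qty; };

typedef std::list<LimitOrder> OrderList;
typedef std::map<double, OrderList, std::greater<double> > BidBook; // RB-tree keyed by price, best bid first

struct OrderHandle {              // what the lookup array stores per orderID
    BidBook::iterator   level;    // saved map iterator -> the price level
    OrderList::iterator pos;      // saved list iterator -> the node inside that level
};

int main() {
    BidBook bidBook;
    std::vector<OrderHandle> lookup(1000); // index = orderID - orderIDBase

    // Add: O(logN) tree insert, then remember both iterators
    LimitOrder o = {42, 100.25, 500};
    BidBook::iterator lvl = bidBook.insert(std::make_pair(o.price, OrderList())).first;
    lvl->second.push_back(o);
    OrderHandle h = { lvl, --lvl->second.end() };
    lookup[o.id] = h;

    // Modify: O(1) via the lookup array (quantity only, per the assumption above)
    lookup[42].pos->qty = 300;

    // Remove: O(1), no tree search; erase the node, then drop the level if it became empty
    lookup[42].level->second.erase(lookup[42].pos);
    if (lookup[42].level->second.empty()) bidBook.erase(lookup[42].level);
    return 0;
}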

in-line field initializer ] c++11

I believe the concise form of java-style field initializer is mostly legal in c++11 (except static fields — See P115 [[essential c++]]). In c++ lingo, “initializer” usually refers to the member-initializer list of a ctor, but here I focus on in-line initializers like

float myField = 0.11073;

Q: can you inline initialize the following entities?

  • case: static field of a class? No, unless it’s a const integral type. Otherwise it must be defined (One-Definition-Rule) outside the class
  • case: instance field of a class? inline field initializer allowed since c++11. See https://stackoverflow.com/questions/13662441/c11-allows-in-class-initialization-of-non-static-and-non-const-members-what-c
  • case: instance field of type std::string or STL container? Allowed but no need to specify any initializer. These component-objects are automatically initialized to “empty”. I tested in my CRAB project in MVEA.
  • case: local variable? Yes … Best practice. Otherwise compiler can silently put rubbish there!
  • case: local static variable? Yes but no need… because it’s zero-initialized (or default-constructed for class types)!
    • Note the initialization happens only once, ignored on subsequent encounters
  • case: global variable? Allowed
  • case: file-scope static variable? Allowed
  • .. These rules are messier than java
#include <iostream>
#include <string> // without it, "string" is a different type in MSVS!
using namespace std;

float global = 0.1;
static float file_scope_static = 0.1314;

struct Test {
    float instance_field = 0.3; // since c++11
    string instance_field_str = "instance_field_str"; // no-initializer also safe.
    static float static_field;
};
float Test::static_field = 0.4;

int main()
{
    float local = 0.2;
    static float local_static = 0.793;
    cout << Test::static_field << endl;
    Test t;
    cout << t.instance_field_str << endl;
    cout << t.instance_field << endl;
    cout << file_scope_static << endl;
    cout << local_static << endl;
    return 0;
}

y marketable limit order treated as market order #my take

Imagine a limit order to Sell at $9.99 comes before a limit order to buy at a higher(!) price of $10.01. You may feel 2nd trader is crazy, but it can happen in a fast market. It can also happen for price-control on a market order — See [2]

Q: execute at what price?
%%A: at the earlier quote’s price. $9.99 in this case.

A limit Buy must be executed at the specified-price-or-better, by the definition of limit orders

Exchange ought to publish a transaction price at the earlier quote’s price. I feel this is so as to maintain a realistic view of the supply/demand on the security.

The rule — 2nd limit order is a “marketable limit order” and treated as a market order. What’s wrong if exchange decides to set the execution price at the late-comer’s price?

– Suppose this is the last trade of the day. The closing price would be skewed/influenced by the “crazy” trader. This would create a skewed view of the price level of the stock.
– Or suppose a trader wants to trigger false signals, so she sends a few “stupid” limit orders once in a while to make the exchange send out artificially high last-execution prices. There are many algorithmic trading engines out there that react to last-execution prices, so the execution price feed ought to be designed as a realistic reflection of supply/demand.
– A “crazy” trader can easily create a historical high in the price feed by buying one share at $800. Clearly unrealistic and misleading price information.

But why do I say the earlier quote’s price is more realistic and consistent with reality on the market?
* the 2nd limit order is irrational, or simply a mistake.
* the earlier quote has remained in the market longer and therefore represents a more serious, more firm and more rational intention

If the 2nd limit order is a serious order, it has to be a larger order (otherwise I can’t see why you call it serious). In that case the unfilled portion will remain in the market, and represents a serious intention.

Dr. Hongsong Chou actually said a BUY market order can be seen as a (marketable) limit order with price = +inf.

[2] given The Rule, a rational justification to use a marketable limit order is to put a constraint on an otherwise unconstrained market order in a fast market. An unconstrained market order could result in a disastrous Buy at $18.

a greek’s value converges during expiration #my take

Let’s be clear what I mean — for a given greek (say, delta) of European options, regardless of which underlier, what strike or expiration month, the value of that greek tends to move towards predictable levels as we approach expiration.

Well before expiration, a greek is like a real-value variable which could take on any numerical value within a range. However, during the last days, BS formula predicts that a greek’s value (say delta) always gravitates towards some well-defined convergence points.

delta — either 1.0 or 0 (negative for puts). However, if underlier spot price fluctuates around my strike level (IF you remain very close to ATM), then I’m switching sides between ITM and OTM, so my delta would swing wildly between the 2 convergence points, 1.0 and 0.
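
A quick way to see the delta convergence, using the standard Black-Scholes call delta (stated here without derivation):

$$\Delta_{call} = N(d_1), \qquad d_1 = \frac{\ln(S/K) + (r + \tfrac{1}{2}\sigma^2)\,\tau}{\sigma\sqrt{\tau}}$$

As time to expiry τ → 0, d₁ → +∞ if S > K (ITM) and d₁ → −∞ if S < K (OTM), so N(d₁) goes to 1.0 or 0, exactly the two convergence points above. Only the knife-edge case S ≈ K keeps d₁ finite and delta near 50%.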

gamma — either infinity or zero. ATM (IF you remain very close to ATM) would go to positive infinity gamma (for long positions); deep ITM/OTM go to zero gamma. However in the last moments there’s no point talking about infinity or zero gamma. If in the last minute spot is still changing around your strike level, then you basically close your eyes and wait to see if you finish ITM or OTM. Even if you finish ITM, the payoff would be a small [1] percentage of strike, and probably a small percentage of premium.

[1] unless you have a Binary option. If you finish ITM with a binary option, you effectively win a million-dollar bet.

theta — either infinity or zero. Similar to gamma, ATM would go to negative infinity theta; deep ITM/OTM go to zero theta.

option valuation — converges to either $0 or the intrinsic value.

request/reply in one MOM transaction

If in one transaction you send a request then read the reply off the queue/topic, I think you will get stuck. With the commit pending, the send won’t reach the broker, so you the requester will deadlock with yourself forever.

An unrelated design of transactional request/reply is “receive then send 2nd request” within a transaction. This is obviously for a different requirement, but known to be popular. See the O’Reilly book [[JMS]]

c++ compilation dependency – my take on effC++

(Based on P144 effC++)

* size of each class instance
* #include
* header (assuming one header file per class) contains field listing of the class

These are the key points of Compilation Dependency (CD). CD means that if an upstream file changes, then a downstream file needs recompiling.

As an analogy, imagine the “downstream” file (your app class) has an auto-generated Table of Contents of a large MS-word document. The Word document includes sub-documents (utility classes), which in turn include other sub-documents (lib classes). Edits in any included document could render the TOC obsolete and force a re-compilation.

In the traditional (simple) design, the app class Person has its fields declared as type String, Date, Address etc. Any change in the size of Type Address triggers re-compilation of our class.

In pimpl or the java world, all those fields are moved into a PersonImpl class which is still CD on Type Address. However, the domino effect stops at PersonImpl. Our app class Person needs no recompilation since size of Type Person is unaffected.

content of a typical AWTEvent object

* source (actually a pointer thereto)
* exact type of event — getID, not getClass(). Value could be mouseRelease, losingFocusDueToClick …
* timestamp

What about the most common events?

MouseEvent contains location information.

Table model event contains firstrow/lastrow, which column. The event also contains an enum field about what type of change — delete, insert etc

default-initialize^value-initialize — new-expressions

Q: difference between new Dog and new Dog() with the parentheses?

http://www.cplusplus.com/forum/general/37962/ answers clearly
The first form – naked – provides what is called default initialization (unrelated to default ctor). Default initialization leaves the values of fundamental types (int, double, etc) uninitialized, i.e. arbitrary as in the graveyard. Therefore, with new Dog you get an uninitialized chunk of memory with arbitrary values in the fields.

The second form – with parentheses – provides what is called value initialization. Value initialization zeroes fundamental types. Therefore, with new-Dog() you get a zero-initialized chunk of memory – all fields in this case will be zeros.

–C++Primer P407 made it even clearer that default-initialize on the heap basically leaves the bits (of the real estate carved out of a graveyard) as they were left behind by the dead/freed/bulldozed corpses.

http://stackoverflow.com/questions/620137/do-the-parentheses-after-the-type-name-make-a-difference-with-new/620402#620402 has good illustrations of 3 categories of classes and shows the behavior of new-Dog vs new-Dog(). Another contributor pointed out that the difference is usually negligible. “If there is such a constructor it will be used. For 99.99% of sensibly designed classes there will be such a constructor, and so the issues can be ignored.” However in practice, how many classes out of 100 are sensibly designed?

— to initialize the field (of type T) of a class template, 
P687 [[absoluteC++]] says we can skip the init since there’s no good default value for the unknown type T.

Stanley Lippman on P246 [[essentialC++]] says we could use T() even if T happens to be int.
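
A minimal sketch of the contrast (Dog here is a made-up POD-style class with no user-provided ctor):

#include <iostream>

struct Dog {
    int age; // fundamental type, no user-provided ctor
};

int main() {
    Dog* d1 = new Dog;   // default-initialization: d1->age holds whatever garbage was left in the "graveyard"
    Dog* d2 = new Dog(); // value-initialization: d2->age is zeroed
    std::cout << d2->age << std::endl; // guaranteed 0
    // std::cout << d1->age;           // indeterminate; reading it is undefined behavior
    delete d1;
    delete d2;
    return 0;
}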

JTable (etc) how-to

http://www.java2s.com/Code/Java/Swing-JFC/Creatingamodalprogressdialog.htm – modal progress bar, as an alternative to a glass panel

spreadsheet concretize #Junli Part1

Q: A spreadsheet consists of a two-dimensional array of cells, labelled A1, A2, etc. Rows are identified using letters, columns by numbers. Each cell contains either a numeric value or an expression. Expressions contain numbers, cell references, and any combination of ‘+’, ‘-‘, ‘*’, ‘/’ (4 basic operators). Task: Compute all RPN expressions, or point out one of the Cyclic dependency paths.

——— Here is “our” design ———-
I feel it’s unwise to start by detecting circles. Better concretize as many cells as possible in Phase 1.

* First pass — construct all the RPN objects and corresponding cell objects. An RPN holds all the concrete or symbolic tokens. A cell has an rpn and also a cell name. If a cell is completely concrete, then calculate the result, and add the cell to a FIFO queue.

Also construct a p2d or precedent2dependent map — a look-up map of <Name, set<Name> >. This will help us fire update events. If you wonder why we key by Name: in this context, a name is a unique identifier for a cell. I use a simple hash map.

* 2nd pass — process the queue of concrete cells. For every cell removed from the queue, get its name and concrete value into a pair (call it ppair since it’s a Precedent). Look up p2d to get all dependents. Fire update events by feeding the ppair to each dependent cell, which will use the ppair to concretize (part of) its expression. If any dependent cell gets fully concretized, add it to the queue.

Remember to remove the ppair cell from p2d.

Only 2 data structures needed — queue and p2d.

* Phase 1 over when queue depletes. If p2d is still non-empty, we have cyclic dependency (Phase 2). All concrete cells have been “applied” on the spreadsheet yet some cells still reference other cells.

All remaining cells are guaranteed to be involved in some circle(s). To print out one circle, just start from any remaining cell and keep following references and you are bound to revisit some cell, which closes a circle.
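
Here is a minimal sketch of the Phase-1 propagation loop (names are mine; the RPN substitution itself is omitted, so each cell is reduced to the set of precedent names it still needs):

#include <map>
#include <set>
#include <string>
#include <queue>
#include <iostream>

typedef std::string Name;

struct Cell {
    Name name;
    std::set<Name> pendingRefs; // symbolic tokens not yet concretized
    double value;               // meaningful once pendingRefs is empty
};

int main() {
    std::map<Name, Cell> cells;          // all cells by name
    std::map<Name, std::set<Name> > p2d; // precedent -> dependents
    std::queue<Name> ready;              // fully concrete cells (FIFO)

    // ... first pass populates cells, p2d, and pushes already-concrete cells onto `ready` ...

    // second pass: propagate concrete values
    while (!ready.empty()) {
        Name p = ready.front(); ready.pop();
        std::set<Name>& deps = p2d[p];
        for (std::set<Name>::iterator it = deps.begin(); it != deps.end(); ++it) {
            Cell& d = cells[*it];
            d.pendingRefs.erase(p); // substitute p's value into d's RPN (omitted)
            if (d.pendingRefs.empty()) ready.push(d.name);
        }
        p2d.erase(p); // "remove the ppair cell from p2d"
    }
    if (!p2d.empty()) std::cout << "cyclic dependency among remaining cells\n";
    return 0;
}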

achieving sub-millis latency: exchange connectivity

This post is about really achieving it, not “trying to achieve”.

First off, how do you measure or define that latency? I guess from the moment you get one piece of data at one end (first marker) to the time you send it out at the other end (2nd marker). If the destination is far away, then I feel we should use the ack time as the 2nd marker. There are 2 “ends” to the system being measured. The 2 ends are
* the exchange and
* the internal machines processing the data.

There are also 2 types of exchange data — order vs market data (MD = mostly other people’s quotes and last-executed summaries). Both are important to a trader. I feel market data is slightly less critical, though some practitioners would probably point out evidence to the contrary.

Here are some techniques brought up by a veteran in exchange connectivity (not market data connectivity).
* Most places use c++. For a java implementation, most important technique is memory tuning – GC, object creation.
* avoid MOM? — HFT mktData redistribution via MOM
* avoid DB — local flat file is one way they use for mandatory persistence (execution). 5ms latency is possible. I said each DB stored proc or query takes 30-50 ms minimum. He agreed.
** A market data engineer at 2Sigma also said “majority of data is not in database, it’s in file format”. I guess his volume is too large for DB.
* object pooling — to avoid full GC. I asked if a huge hash table might introduce too much look-up latency. He said the more serious problem is the uncontrollable, unpredictable GC kick-in. He felt hash table look-up is probably constant time, probably pre-sized and never rehashed. Such a cache must not grow indefinitely, so some kind of size control might be needed.
* multi-queue — below 100 queues in each JVM. Each queue takes care of execution on a number of securities. Those securities “belong” to this queue. Same as B2B
* synchronize on security — I believe he means “lock” on security. Lock must attach to objects, so the security object is typically simple string objects rather than custom objects.
* full GC — GC tuning to reduce full GC, ideally to eliminate it.
* use exchange API. FIX is much slower, but more flexible and standardized. See other posts in this blog

Some additional techniques not specific to exchange connectivity —
$ multicast — all trading platforms use multicast nowadays
$ consolidate network requests — bundle small request into a large requests
$ context switching — avoid
$ dedicated CPU — If a thread is really important, dedicate a cpu to it.
$ shared memory — for IPC
$ OO overhead — avoid. Use C or assembly
$ pre-allocate large array
$ avoid trees, favour arrays and hash tables
$ reflection — avoid
$ concurrent memory allocator (per-thread)
$ minimize thread blocking
** immutable
** data parallelism
** lockfree
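
One of the techniques above (object pooling / pre-allocation) in a minimal C++ sketch; the pooled Order type and sizes are made up, and the pool is deliberately fixed-size so it never grows or allocates on the hot path:

#include <vector>
#include <cstddef>

struct Order { long id; double px; int qty; }; // hypothetical pooled message type

class OrderPool {
    std::vector<Order>  slab_; // one up-front allocation
    std::vector<Order*> free_; // free list
public:
    explicit OrderPool(std::size_t n) : slab_(n) {
        free_.reserve(n);
        for (std::size_t i = 0; i < n; ++i) free_.push_back(&slab_[i]);
    }
    Order* acquire() { // O(1), no heap allocation
        if (free_.empty()) return 0; // pool exhausted; size control, never grow
        Order* o = free_.back();
        free_.pop_back();
        return o;
    }
    void release(Order* o) { free_.push_back(o); }
};

int main() {
    OrderPool pool(1024);
    Order* o = pool.acquire(); // reuse pre-allocated slots instead of new/delete per message
    o->id = 1; o->px = 100.25; o->qty = 500;
    pool.release(o);
    return 0;
}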

AWT vs swing #my take

I believe each awt component (you manipulate in java classes) maps to one and only one native screen object — probably the so-called peer object. The native screen object is tied to the display hardware and registered (created?) with the hardware wrapper-layer known as the OS. It takes up memory (among other resources) on the display hardware.

A lightweight component in your java source code is simply Painted as a graphic on the ancestor’s screen object (which is tied to a heavyweight ancestor), so it doesn’t occupy additional resources on the display hardware. It’s “light” on system resources.

Also, since it’s painted as a graphic by java, the appearance is controlled by java and consistent across platforms. In contrast, the heavyweights look native and consistent with native apps like Powerpoint. There are pros and cons. Some praise the java consistent look and feel. Some say swing looks alien on windows.

When I say the “native screen object” corresponding to a jtable, I mean the image painted on the (heavyweight) jframe. I believe multiple lightweights on the same heavyweight all share the same native screen object i.e. the peer object.

jcomponent / listener / event, another analysis

http://docs.oracle.com/javase/tutorial/uiswing/events/eventsandcomponents.html shows which (type of) listener can attach to which (type of) jcomponent. It leaves out the 3rd vital element — the (type of) event. The relationship of the 3 entities is central to Swing.

An AWTEvent is an object containing data about some “happening”. It also “contains” a source jcomponent via getSource(). Another important content is the type of event — mouse move, resizing, checkbox click… via getID(). An event usually comes from a jcomponent — both user actions and code actions generate events.

When generated, the event doesn't “know” the listeners — similar to how a MOM sender generates an “unaddressed” message. Instead, the “source” jcomponent keeps the (list of) listeners.

I sometimes prefer to discuss in terms of event-queue tasks (qtasks) rather than events. A qtask is an event + a single listener object. Now I feel this is educational but imprecise.

Recall a listener object is an initially stateless functor object + optional objects passed in to ctor + optional objects in the enclosing class or enclosing method. The last group are more tricky.

Here's my hypothesis. When an event occurs,
1) Event object constructed
2) Event object enqueued
3) EDT picks up the event and invokes Component.processEvent(theEvent)
4) Inside processEvent(), each listener is invoked sequentially.

http://tech.stolsvik.com/2009/03/awt-swing-event-pumping-and-targeting.html described some of the details in a MouseEvent.mouseClicked() example.

field^param^local-variable — C++ allocation

A field is always allocated memory, since a (including static) field is part of an object.

A function parameter is always allocated on the stack.

Local variables are supposed to be allocated on stack, but may not be allocated at all. Compiler can often optimize them away.

Sometimes a variable is just a token/alias in source code’s symbol table. A constant variable can be replaced by the constant value at compile time.

How about initialization?
Rule 1: class instances are never UNinitialized
Rule 2: static variables are never UNinitialized
Rule 3: local vars and class fields are uninitialized except as part of Rule 1.
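
A small illustration of Rule 2 vs Rule 3 (variable names are mine):

#include <iostream>

int globalCounter; // static storage duration: never UNinitialized (zeroed), Rule 2

int main() {
    static int localStatic; // also zeroed, Rule 2
    int localVar;           // Rule 3: uninitialized; reading it is undefined behavior
    std::cout << globalCounter << " " << localStatic << std::endl; // prints "0 0"
    (void)localVar;         // silence the unused-variable warning; do not read it
    return 0;
}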

Nomura FX option IV #Ldn video link

Q: what are the products traded on your desk
%%A: stock/ETF/index options + var swap for flow vol. For EFS, there are a lot – digital options, barrier options, Asian style options, structures with 2 underliers
Q: how do you monitor the barriers – discrete or continuous or …?
Q: what does a typical quant library API function look like?
Q: You said c++ is more challenging, but what technical challenges do you see using c++ compared to java?
(Now I would say pointer, memory mgmt, STL, undefined behaviors…)
Q: how is a libor yield curve constructed?
Q: what’s the rationale for using swap rates to derive spot rates for a long tenor?
Q: what’s the rationale for using libor futures rates to derive spot rates? 
Q: for risk management, what features of the vol surface are measured/monitored?
%%A: skew bump, tail bump. Wings are known to be problematic.
Q: did you work on PnL explain?
Q: Our trades are often long gamma. What happens to their thetas?

when to use Action !! ActionListener

I struggled for years to understand the real reason to use Action rather than ActionListener, and finally found http://www.developer.com/java/other/article.php/1146531/Understanding-Action-Objects-in-Java.htm .

Truth — action is more sharable, though it's unclear how until you read the url.

Myth — Action instance holds icons, text, tooltip etc to be shared among jcomponents. In reality we often use Action without icons etc

Myth — Action is shared by jcomponents. In reality we also put Action instances into ActionMap, nothing to do with jcomponents.

Truth — the disable/enable feature is important — all “users” of the Action instance are enabled/disabled together — the most convincing justification.

ActionListener can also achieve sharing. It is straightforward to instantiate an ActionListener and register it on a menu item and a button. This will guarantee that action events will be handled in an identical way regardless of which jcomponent fires the event. However, such a shared action listener can't send property change notifications for enabling/disabling.

In contrast, when some object adds our Action instance, this object will be enabled/disabled upon property change notification from our Action.

target a custom cell renderer – laser precision

1) Most examples show how to install a renderer for a particular data type like Integer, but often I don't want to apply a highly customized renderer on all Integer columns

2) To customize for Column 2, use myTableColumn2.setCellRenderer(). You can do myTableColumn2 = table.getColumn(target column header name used as column identifier)

3) To specify a cell-specific renderer, you need to subclass and override getCellRenderer(int,int). For example, the following code makes the first cell in the first column of the table use a custom renderer:

TableCellRenderer weirdRenderer = new WeirdRenderer();

table = new JTable(…) {
    public TableCellRenderer getCellRenderer(int row, int column) {
        if ((row == 0) && (column == 0)) {
            return weirdRenderer;
        }
        // else…
        return super.getCellRenderer(row, column);
    }
};

left hand sending event to right hand – swingWorker

First, remember SW is stateful, and publish() and process() are non-static methods.

The publish() method posts a stream of events to the event Queue. Each event holds a hidden pointer to “this” i.e. the SW instance. EDT recovers the pointer and basically calls this.process().

Both publish() and process() are called on the same (stateful) SW instance, but on different threads. The 2 instance methods are like the left and right hands, designed to interact between themselves.

In SwingWorker, everything else is secondary — doInBackground(), done(), and the type params. The 2 type params are interesting — T and V can be identical, or T can be a collection of V.

T – the result type returned by doInBackground()
V – the type of intermediate results of publish() and process()

content of a TableColumn instance

Summary – a TC instance holds __presentation_settings__ (width, header value etc) for a particular column[1]. This instance remembers (via the this.modelIndex) which “physical” column the presentation settings apply, but this instance doesn’t know if it’s displayed as 1st or last column, or hidden.

Should be renamed ColumnDescriptor

[1] actually a column in the model, which might be excluded from the view, via the TCM.

— First look at the field listing —
* myTableColumn.modelIndex — The model index of the column “covered” by this TableColumn. As columns are moved around in the view or removed, modelIndex remains constant. As the #1 most important field in TableColumn.java, this is first parameter in every ctor except the no-arg.

* myTableColumn.cellRenderer
* myTableColumn.cellEditor

This is a useful but less explored feature —
* myTableColumn.identifier — with getter/setters. This object is not used internally by the drawing machinery of the JTable; identifiers may be set in the TableColumn as an optional way to tag and locate table columns.

———–

Now we are ready to look at TableColumnModel. #1 job is to hold an ordered list of TableColumn’s. Order is the display order.

getColumns() — #1 method. Returns the ordered list. Underlies these methods
** TableColumn getColumn(int viewColumnIndex) — should be renamed getColumnDescriptor
** getColumnCount() — used by JTable.getColumnCount()

array-name can’t be LHS #almost

An array name myArr (including c-str) looks like a variable but no no no. It is the name plate on a room door.

  • It represents an allocated room of fixed size
  • It is permanently tied to a fixed address
  • You can’t put this name plate on another door later on.
  • (obscure) You can’t put a 2nd c-array name plate on the same room door like q[ cstr2=existingCstr ], either as initialization or assignment on cstr2.
    • you can use q[ char * ptr2=existingCstr ], though ptr2 is a pointer, not a c-str nor an array.

An array-name in source code is like a const ptr (not ptr-to-const) with a value like 0xFF82, i.e. an alias for Address 0xFF82. Such an Address is not a modifiable Lvalue. Therefore an array-name isn’t a modifiable Lvalue and can’t be the LHS of an assignment —

int aa[3];
int b[3];
aa=b; // error

That’s trivial, but look at this —

f(aa); // f was declared f(int pa[])

When you pass the array-name “aa” to a function, the array-name is effectively on RHS (not breaking the LHS rule). The LHS is the function parameter-variable “pa”, which is always treated as pointer Variable even though you declare the function parameter-variable “pa” as an array-name! Compiler converts it to f(int * pa).

In summary,
* Rule 1 — array name can’t be LHS of an assignment
* Rule 2 — function param-var is effectively the LHS of an assignment. You can declare a function param-var as an array type, which seems to violate the rules above, but
* Rule 2b — array param-var is converted to pointer param-var

Now let’s look at c-string, an important special case, since the “=” is initialization, not assignment.

char const * s = "literal"; // compiler turns a blind eye if you remove the "const" but DON'T
char s8[] = "literal";

Note the above creates an char-array of length 7+1=8 including the null char, but the array below has length 7 only:

char s7[] = {'l', 'i', 't', 'e', 'r', 'a', 'l'};

// To modify such a literal string,
char s[] = "literal";
s[0]='A';
cout<<s<<endl;

##instrumentation[def]skill sets u apart

Have you seen an old doctor in her 60’s or 70’s? What (practical) knowledge sets her apart?

[T = Some essential diagnostic Tools are still useful after 100 years, even though new tools get invented. These tools must provide reliable observable data. I call it instrumentation knowledge]
[D = There’s constant high Demand for the same expertise for the past 300 years]
[E = There’s a rich body of literature in this long-Established field, not an Emerging field]

As a software engineer, I feel there are a few fields comparable to the practice of medicine —
* [TDE] DB tuning
* [DE] network tuning
* [T] Unix tuning
* [TDE] latency tuning
* [] GC tuning
* [DE] web tuning

Now the specific instrumentation skills

  • dealing with intermittent issues — reproduce
  • snooping like truss
  • binary data inspection
  • modify/create special input data to create a specific scenario. Sometimes you need a lot of preliminary data
  • prints and interactive debugger
  • source code reading, including table constraints
  • log reading and parsing — correlated with source code
  • data dumper in java, perl, py. Cpp is harder
  • pretty-print a big data structure
  • — above are the generic, widely useful items —
  • In the face of complexity, boil down the data and code components to a bare minimum to help you focus. Sometimes this is not optional.
  • black box testing — Instrumentation usually require source code, but some hackers can find out a lot about a system just by black-box probing.
  • instrumentation often requires manual steps — too many and too slow. In some cases you must automate them to make progress.
  • memory profiler + leak detector

when is rho important

A: long-dated options.

I was told for both equity options and currency options, rho is usually the least significant price sensitivity greek.

For a customized “structured” option, maturity could be 10 years or longer. I feel the BS assumption of a constant risk-free rate breaks down — Over 10 years, interest rate could quadruple. Stock price could be directly affected even if all other factors remain constant. Implied vol could also be affected. However, to compute theoretical rho, we need to hold all of them constant and simulate a small bump in the time-invariant interest rate. In that context, over 10 years the effect on option valuation is non-negligible.

##retail-friendly investment products

listed stocks, futures and options
ETF on equity sectors
ETF on FX
ETF on commodities
ETF on bonds
(ETF is like mutual funds, open to small investors.)

FX — through retail dealers — not really “brokers” in the strict sense. There's no public exchange.

Some corporate bonds — similarly through retail dealers
muni bond — more retail-oriented, since the tax advantage targets individual investors.
US treasury bonds

3 types of high-level external events – swing

An entire swing app is event driven – either 1) user events, 2) “mkt data” events, or (rarely) scheduled timer events. Without these events, entire swing app is idle.

Therefore, event handling logic permeates swing apps.

Actually “Mkt data” event is a misnomer since it can be any external data from MOM, from database response, from loading a file, from RMI response, from a web service response…

These 3 types of external events are not AWTEvents, but “happenings”. They can generate internal events, typically AWTEvents. For example, if a user event hits a JButton, the event handler in the button could fire an AWT event on a TableModel. The TableModel event handler can then modify table data and fire another event to update the screen.

Swing has a notion of low-level events vs semantic events. My 3 types are semantic.

speculator vs hedger in FX #my take

Here’s an easy way to differentiate speculator vs hedger in FX — Speculators have to “convert” back to domestic currency. They have no way to spend the foreign currency.

As soon as a speculator finds a way to spend $1k for that $1k she is no longer speculator. For example — Buying online; Buying a gift for a relative; Tourism; Invest in a foreign asset such as securities or properties.

Multinational corporations are the archetypical hedgers. They also occasionally engage in speculative trading.

Hedge funds and retail traders are the archetypical speculators. I feel Banks are typically speculators. The Singapore subsidiary of Citibank is like a “trader” who has to convert everything back to SGD. The “trader” starts with a seed capital in SGD and tries to grow it. At the end of the year, any cash position in another currency should be kept small because it represents exposure. The Singapore bank has no justification to keep that exposure because the foreign cash can’t be used as cash. It’s like a silver position in your portfolio.

Think about the “convert back”. When do you convert to USD and never convert back? The real demand for USD is tied to the demand for US exports, including foreigners’ demand for American securities and properties. (However, global oil is not a US export but is quoted in USD??) Now we realize the demand for any currency is ultimately determined by the nation’s export. See also http://bigblog.tanbin.com/2012/04/1-driver-of-long-term-fx.html

y noSQL — my take

Update: cost, scalability, throughput are the 3 simple justifications/attractions of noSQL

As mentioned, most of the highest profile/volume web sites nowadays run noSQL databases. These are typically home-grown or open source non-relational databases that scale to hundreds of machines (BigTable/HBase > 1000 nodes). I mean one logical table spanning that many machines. You asked why people avoid RDBMS. Here are my personal observations.

– variability in value — RDBMS assumes every record is equally important. Many of the reliability features of RDBMS are applied to every record, but this is too expensive for the Twitter data, most of which is low value.

– scaling — RDBMS meets increasing demands by scaling up rather than scaling out. Scaling up means more powerful machines — expensive. My Oracle instance used to run on Sun servers, where 100GB disk space cost thousands of dollars as of 2006. I feel these specialized hardware are ridiculously expensive. Most of the noSQL software run on grid (probably not cloud) of commodity servers. When demand reaches a certain level, your one-machine RDBMS would need a supercomputer — way too costly. (Also when the supercomputer becomes obsolete it tends to lose useful value too quickly and too completely since it’s too specialized.)

This is possibly the deciding factor. Database is all about server load and performance. For the same (extremely high) level of performance, RDBMS is not cost-competitive.

Incidentally, one of the “defining” features of big data (according to some authors) is inexpensive hosting. Given the data volume in a big data site, traditional scale-up is impractical IMO, though I don’t have real numbers of volume/price beyond 2006.

– read-mostly — Many noSQL solutions are optimized for read-mostly. In contrast, RDBMS has more mature and complete support for writes — consider transactions, constraints etc. For a given data volume, to delivery a minimum performance, within a given budget, I believe noSQL DB usually beats RDBMS for read-mostly applications.

– row-oriented — RDBMS is usually row-oriented, which comes with limitations (like what?). Sybase, and later Microsoft and Oracle, now offer specialized columnar databases but they aren’t mainstream. Many (if not most) noSQL solutions are not strictly row-oriented, a fundamental and radical departure from traditional DB design, with profound consequences. I can’t name them though.

What are some of the alternatives to row-orientation? Key-value pairs?

– in-memory — many noSQL sites run largely in-memory, or in virtualised memory combining dozens of machines’ memory into one big unified pool of memory. Unlike RDBMS, many noSQL solutions were designed from Day 1 to be “mostly” in-memory. I feel the distinction between distributed cache (gemfire, coherence, gigaspace etc) and in-memory database is blurring. Incidentally, many trading and risk engines on Wall St. have long adopted in-memory data stores. Though an RDBMS instance can run completely in-memory, I don’t know if an RDBMS can make use of memory virtualization. Even if it can, I have not heard of anyone mention it.

– notifications — I guess noSQL systems can send notifications to connected/disconnected clients when a data item is inserted/deleted/updated. I am thinking of gemfire and coherence, but these may not qualify as noSQL. [[Redis applied design patterns]] says pubsub is part of Redis. RDBMS can implement messaging but less efficiently. Event notification is at the core of gemfire and friends — relied on to keep distributed nodes in sync.

– time series — many noSQL vendors decided from Day 1 to support time series (HBase, KDB for example). RDBMS has problems with high-volume real-time time-series data.

“A Horizontally Scaled, Key-Value Database” — so said the Oracle noSQL product tagline. There lie 2 salient features of many noSQL solutions.

I feel the non-realtime type of data is still stored in RDBMS, even at these popular websites. RDBMS definitely has advantages.

##conventional wisdoms in option valuation

There are many empirical rules in option math, but I feel they differ in universality and reliability.

Rule) vega of an ATM op =~ premium / implied vol (a quick sanity check appears at the end of this post)
Rule) put-call equivalence in FX options. See separate blog post http://bigblog.tanbin.com/2011/11/equivalent-option-positions.html
Rule) PCP – “complicated” in American style, according to CFA textbook
Rule) delta(call) + delta(put) =~ 100% — See separate blog post http://bigblog.tanbin.com/2010/06/delta-of-call-vs-put.html
Rule) delta is usually between 0 and 1? Someone told me it can exceed 1 before ex-div
Rule) option valuation always decays with time
Rule) ATM delta is “very close” to 50%, regardless of expiration
Rule) delta converges with increasing vol. See separate blog post http://bigblog.tanbin.com/2011/11/option-rule-delta-converges-to-5050.html

Rule) For a strike away from the predicted forward price, the OTM option has better liquidity than the ITM option. Therefore the OTM is more useful/important to the volatility estimate at that strike.

Rule) for equities, OTM put quotes show higher i-vol than OTM calls. Incidentally, at the same low strike, the OTM put is more liquid than ITM call. Reason is, most people trade OTM options only. However, ITM options are still actively traded — if an ITM option is offered at a low enough price, someone will buy; if an OTM option is bidding high enough, someone will write the option.
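
Sanity check of the ATM-vega rule flagged above (standard Black-Scholes approximations, ignoring discounting):

$$C_{ATM} \approx S\,\sigma\sqrt{\tau/(2\pi)} \approx 0.4\,S\,\sigma\sqrt{\tau}, \qquad \text{vega}_{ATM} = S\,\varphi(d_1)\sqrt{\tau} \approx S\sqrt{\tau/(2\pi)}$$

Dividing the first expression by σ gives the second, so premium / implied vol ≈ vega for an ATM option.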

outer table predicate in ON-clause #my take

(INNER table predicate in where-clause vs ON-clause is well covered online and also in my own http://bigblog.tanbin.com/2011/05/whats-different-between-these-2-sqls.html)

Note intermediate table is unaffected by where-clause. Query processor always uses on-clause exclusively to build an intermediate joined table before applying where-clause. This is the logical view we can safely assume. Physically, Query processor could optimize away the where/on distinction, but output is always consistent with the logical view.

Q1: can we _always_ assume that LL-outerjoin-RR intermediate table _always_ includes all LL rows if no outer table predicate in ON-clause?
A: yes with the big “if”
%%A: If you need to filter on outer table, do it in where-clause — better understood. Avoid ON-clause.

Q2: Boldly dropping the big “if”, can we _always_ assume that LL-outerjoin-RR intermediate table _always_ includes all LL rows _regardless_ of outer table predicate in ON-clause?
%%A: probably yes.

If you really really want an outer table predicate in the ON-clause, I assume you have no good reason and just feel adventurous.
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32300.1502/html/sqlug/sqlug169.htm shows that even with outer table predicate in ON-clause, we still get “… all LL rows”

select title, titles.title_id, price, au_id
from titles left join titleauthor
on titles.title_id = titleauthor.title_id
and titles.price > $20.00
title                  title_id   price   au_id       
--------------------   ---------  ------  ---------------
The Busy Executive’s   BU1032     19.99   NULL
Cooking with Compute   BU1111     11.95   NULL
You Can Combat Compu   BU2075      2.99   NULL
Straight Talk About    BU7832     19.99   NULL
Silicon Valley Gastro  MC2222     19.99   NULL
The Gourmet Microwave  MC3021      2.99   NULL
The Psychology of Com  MC3026      NULL   NULL
But Is It User Friend  PC1035     22.95   238-95-7766

ECN (!! exchange) has Partial control over execution

An exchange (actually the clearing house) provides a critical safety shield – stopping the domino effect of credit default. An exchange uses a clearing fund to cover any member’s default. No exchange has ever failed to fulfil an execution. An exchange guarantees to deliver on every single execution. In contrast, an interbank broker on either the FX or bond market
– doesn’t guarantee anything, doesn’t stand behind any execution
– doesn’t take the opposite side of a trade.
– doesn’t have the same level of control over execution. An exchange controls execution and subsequently informs both market maker and market taker, who must accept the execution. In a fast market, if a limit order gets depleted quickly or is withdrawn amidst heavy trading, exchange order-matcher decides which order to reject. ECN doesn’t do this.

In an ECN context, the trade (and credit relationship) is between exactly 2 parties — market Maker vs market Taker, not the ECN.

If you see a too-good-to-be-true quote and hit it, and the dealer revises it (saw this on TMC too), you probably wish you were on a real exchange. This unfortunate scenario is known as a re-quote or slippage. To the dealer it’s known as last-look. Dealer would explain that “market has moved”, or “inventory depleted since another customer grabbed it before you”.

In practice, a dealer’s automatic execution system also needs to validate counterparty. Does an account exist? Is there credit relationship? Is this account in a blacklist with a checkered history?

In the exchange context, the trade (and credit relationship) is between an exchange member vs the exchange. Both sides have enough capital to commit to the trade. If the trade fails to settle due to either side’s default, it’s a big deal.

Every exchange always has a huge pool of clearing fund which gives it the capacity to take position as the counter-party to every trade. In contrast, an ECN doesn’t have this much capital and won’t take any position.

callable bond is a real option #my take

If you are only working in the fixed income space, you may feel you don’t need option knowledge. Well, the ubiquitous callable bond is an embedded option. (Puttables are less ubiquitous.)

To the bond holder (or buyer), a long position in a callable bond is equivalent to
+ a long position in a non-callable bond
+
+ a short position in a call option, which is like giving out a shopping voucher “get a beer from me for $1”. If exercised, the bond holder loses the “asset” — the long position in the regular bond

In other words, You buy a callable when you give counter party the right to “call it away”.
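
In formula form (the standard decomposition, from the holder’s point of view):

$$P_{\text{callable}} = P_{\text{straight}} - C_{\text{call}}$$

where the embedded call is struck at the stipulated call price on the straight bond. The same equation read from the issuer’s side is the mirror image described next.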
———-To the Issuer (not the dealer), a short position in a callable is equivalent to
+ a short position in a non-callable bond i.e. an obligation to pay fixed coupons + principal
+
+ a long position in a call option i.e. a right to pay holders a stipulated amount to acquire an “asset” that’s a perfect hedge for (100% cancels out) the short position.
——————–
Every call has an underlier asset. For a callable bond, the asset is the piece of paper representing ownership of a bond. The paper also has strips called coupons. When the issuer exercises the call, issuer pays par price to “buy” back the paper. Assuming there was only one holder for this issue, then issuer has no more liability. The bond ceases to exist.

named vs nameless objects in C++, again

This is an obscure point only relevant to compilers….

In C, a local variable on stack refers to a named object. That memory location holds an int (or a char or a struct or whatever) and has a unique name like “dog1”. The variable, the object (with its address) are 2 sides of the same coin. As stated elsewhere on this blog, the variable has a limited scope and the object has a limited lifespan. Well defined rules. In C++, you can create a reference to dog1. You get an alias to the object, whose real name is still dog1. Pretty simple so far. Things gets complicated with heap (aka free store or DynamicallyAllocatedMemory DAM).

If an object is a field named “age” in another object “person2” (a class/struct instance), and person2 lives on heap, then it’s fair to say this nested heap object has a name “age” [1].

If an object lives on heap (I call it a “heapy thingy”) and is Not a field, then it can’t have a name. Such an object is created by new() and new() only returns an address. Multiple pointer Variables can hold the same address, but have different variable names. Pointee object remains forever nameless.

Revisiting [1], person2 is actually a heap object identified by a pointer Variable, so another pointer Variable (say ptr2) can also refer to the same object. Therefore “person2” is just one of the pointer variable names, not the pointee object’s name since the pointee is on heap and therefore nameless.

Taking another look at the “age” field, person2->age and ptr2->age both refer to the same age object so it gets 2 names.

The nameless heap object is dominant in java (reference types) and c# (reference types). P36 [[c# primer]] puts it nicely — “Reference-type object consists of 2 parts: 1) named handle that we manipulate in our program, 2) unnamed object allocated on the managed heap….. A value type object is not represented as a handle/object pair.”

The upshot — a stack object has one controlling variable (hence a real name), whereas a heapy thingy is a puppet pulled by many strings.
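
A tiny sketch of the above (Person and the variable names are made up):

#include <iostream>

struct Person { int age; };

int main() {
    int dog1 = 5;              // named stack object
    int& alias = dog1;         // an alias; the object's real name is still dog1

    Person* person2 = new Person{30}; // nameless heap object; person2 is only a pointer variable
    Person* ptr2 = person2;           // a 2nd pointer variable holding the same address

    // "age" lives inside the nameless pointee, reachable via either nickname
    std::cout << alias << " " << person2->age << " " << ptr2->age << "\n";
    delete person2;
}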

RFQ ^ limit order ^ pseudo limit order

The dominant protocol of quote distribution differs between markets. Limit orders and RFQ are well understood. As for pseudo limit orders, there are 2 types —
– firm quotes invite market takers to click-and-trade, and are automatically executed (or rejected) by the dealer’s system
– Indicative quotes invite market takers to send RFQ.
* Which mode is dominant on an ECN? I guess firm quotes.

Note true limit orders are only available on a real exchange with a clearing house, which has real power/fund of execution. See http://bigblog.tanbin.com/2012/07/unlike-exchanges-ecns-have-only-partial.html

Note “dealer” means the same as “market maker” in this context. Usually a dealer keeps inventory. He can sell short, but usually only for short durations.

Here are the dominant protocols —
* On exchanges — limit order. Be it equity, futures, listed options…
** for large block trades, I guess it’s RFQ, sent in private.
* equity ECNs — no limit order.
* institutional FX spot — 1) pseudo limit orders 2) RFQ for large orders (25+ mio). No enforcement/guarantee by ECN. Dealer always gets a last look
* institutional bond market — 1) pseudo limit order 2) RFQ
* institutional OTC equity option market — RFQ. Pseudo limit order are unheard-of
* Treasury — pseudo limit orders and RFQ
* IRS — 1) RFQ 2) pseudo limit orders. Most trades are large, so RFQ dominates.

where (in your code) to run input validation

1) click-away, tab-away, or postActionEvent()

public boolean stopCellEditing() { // triggered even if the text field content is unchanged!

2) ENTER key hit during editing

ftf.getInputMap().put(KeyStroke.getKeyStroke(KeyEvent.VK_ENTER, 0), "check"); // VK_ENTER -> "check" -> actionInstance callback

ftf.getActionMap().put("check", actionInstance); // actionInstance has the callback method running the validation

Here are the high-level steps for adding custom validation logic into a table cell editor
* You write a customized editor class to return on-demand a JFormattedTextField instead of the JTextField as the jcomponent
* in the JFormattedTextField object, you inject a formatter factory,
* the factory is initialized with a NumberFormatter
* the NumberFormatter is initialized with a NumberFormat (no “ter” suffix)
* you configure one of these format thingies to use your custom logic

http://docs.oracle.com/javase/tutorial/uiswing/components/table.html#validtext has good coverage of user input validation using TCE

container class-template having array field@@ #swap

Background — pretend to be an author of a new container template.

Most containers internally use some raw array. Therefore the container class often (or usually?) needs a non-static field for that array. It’s tempting to simply declare an array field like

   T onsiteArray[size]; // T is the template parameter

This container is not growable. But today let’s focus on another issue — I feel this makes swap() hard to implement. Swap() achieves efficient move semantics through pointer-Variable swapping. onsiteArray is an Array-Name, not a reseatable pointer Variable, so you can’t manipulate it as a pointer-Variable.

Note “pointer” has immensely confusing meanings (http://bigblog.tanbin.com/2012/03/3-meanings-of-pointer-tip-on-delete.html). onsiteArray is more like (alias of) pure address, but swap() requires pointer Variables.

To support swap(), I often use

      T* real_array; // to be dynamically allocated outside the class instance, and deleted in dtor
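
Here is a minimal sketch of that approach. MyFixedContainer is a made-up name, and copy control is deliberately omitted:

#include <cstddef>
#include <utility> // std::swap

// Fixed-capacity container that keeps its storage on the heap,
// so swap() only needs to exchange two pointer variables.
template <typename T, std::size_t Size>
class MyFixedContainer {
    T* real_array; // allocated outside the class instance, deleted in dtor
public:
    MyFixedContainer() : real_array(new T[Size]()) {}
    ~MyFixedContainer() { delete[] real_array; }

    void swap(MyFixedContainer& other) {
        std::swap(real_array, other.real_array); // cheap pointer swap, no element copies
    }
    // copy constructor and assignment omitted in this sketch (rule of three applies)
};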

2 phase commit (2pc) — homemade "implementation" idea

Phase 1 — reversible, preliminary, try-commit.
Phase 2 — irreversible final commit.

During phase 1, the XA manager issues (over the network) the try-commit command to all resources (nodes).

* If one of them doesn’t come back (perhaps after a timeout), the manager issues a rollback command to all.
* If all are successful, it issues the final-commit command.

Phase 2 should be a very simple, no-fail operation. If a resource (node) can’t support/provide such a never-fail operation, then it can’t qualify for XA transaction at all.

Another disqualified resource — If a node crashes badly during Phase 1, then rollback may be impossible and data loss might occur — this resource doesn’t qualify for XA.
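
A minimal in-process sketch of the coordinator logic described above. The Resource interface and function names are made up; a real XA manager is a separate networked process, as discussed below.

#include <vector>

// Hypothetical resource interface for the homemade 2PC idea above.
struct Resource {
    virtual bool tryCommit() = 0;   // phase 1: reversible, preliminary
    virtual void finalCommit() = 0; // phase 2: simple, never-fail
    virtual void rollback() = 0;
    virtual ~Resource() {}
};

// Coordinator (XA manager) sketch: final-commit only if every resource voted yes.
bool twoPhaseCommit(std::vector<Resource*>& resources) {
    for (Resource* r : resources) {
        if (!r->tryCommit()) {                       // timeout or failure counts as a "no"
            for (Resource* x : resources) x->rollback();
            return false;
        }
    }
    for (Resource* r : resources) r->finalCommit();  // phase 2
    return true;
}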

Q: is the XA mgr a separate process or just an “object” in the same VM as a resource?
A: a process on the network

Q: is the XA mgr a network daemon that can receive incoming messages? Is it a multi-threaded network server?

greeks(sensitivity) – theoretical !! realistic

All the option/CDS/IRS … pricing sensitivities (known as greeks) are always defined within a math model. These greeks are by definition theoretical, and not reliably verifiable by market data. It’s illogical and misleading to ask the question —
Q1: “if this observable factor moves by 1 bp (i.e. 0.01%) in the market, how much change in this instrument’s market quote?”

There are many interdependent factors in a dynamic market. Eg FX options – If IR moves, underlier prices often move. It’s impossible to isolate the effect of just one input variable, while holding all other inputs constant.

In fact, to compute a greek you absolutely need a math model. Without a model, you can say the instrument’s valuation will appreciate, but you can’t say by how much.

2 math models can give different answers to Q1.
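
To make the point concrete, here is a minimal bump-and-reprice sketch. The delta it produces is purely a Black-Scholes model output: holding “all other inputs constant” is trivial inside the model and impossible in the market.

#include <cmath>
#include <iostream>

// Black-Scholes call price -- the model without which the greek doesn't exist
double bsCall(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    auto N = [](double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }; // standard normal CDF
    return S * N(d1) - K * std::exp(-r * T) * N(d2);
}

int main() {
    double S = 100, K = 100, r = 0.02, sigma = 0.2, T = 1.0, bump = 0.01;
    // finite-difference delta: bump the spot, hold every other model input constant
    double delta = (bsCall(S + bump, K, r, sigma, T) - bsCall(S - bump, K, r, sigma, T)) / (2 * bump);
    std::cout << "model delta = " << delta << "\n"; // roughly 0.58 for these inputs
}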

Eclipse CDT copy-create project

Project Copy-create is easier than create-from-scratch. However, the new project is often broken, perhaps due to multiple CDT bugs:-(. Since CDT is buggy, I prefer external diagnostic tools.

* External — the exe file. ….\ideWorkspace\yourProject\Debug\*.exe. I call it “the EXE of the project”
Each clean-rebuild should recreate this file with a new timestamp. You can delete the file directly. If your main() ends with cin.get(), then you can run the EXE by double-click.

—– now, let’s look at how to copy-create —-
The copy actually keeps the old project’s name for the EXE. I hate that name, so I run a text replace in ideWorkspace\yourProject\.cproject. You then need to reopen the project, clean and rebuild, then refresh the view to see the newly produced EXE.

You may need to create a run-config. To search for the EXE, use the browse button. However, CDT might say the EXE is not a recognized executable [1]. You can sometimes ignore this and run the EXE externally, or through the run-config.

[1] If you prefer to fix that, change the project properties -> c/c++ build(CCB)->toolChainEditor->currentToolChain to minGW.

theta = a rent to own gamma #my take

* large positive gamma ~ large theta decay. Note theta is always negative since option valuation always decays with time.
* small positive gamma ~ small theta decay.

The extreme cases often help us simplify and better remember the basics —
– Large gamma is characteristic of ATM options
– Small gamma is for deep ITM/OTM options.

Some say “theta is a rent to own gamma”. Imagine you delta-hedged your ATM long position — long gamma. Long gamma gives you upside profit potential whether the underlier moves up or down. That’s an enviable position, but it comes at a “rent” — with every day passing, your position loses value to time decay. The loss amount is the daily theta value (always negative). The larger the “upside” (gamma), the higher the daily rent (theta).

Negative gamma is for short option positions {large Negative gamma ~ large Positive theta}. In this blog we focus on long call/put positions either European or American style, so all gammas are positive.

##[12] bottlenecks in a high performance data "flow" #abinitio

Bottlenecks:

#1 probably most common — database, both read and write operations. Therefore, ETL solutions achieve superior throughput by taking data processing out of database. ETL uses DB mostly as dumb storage.

  • write – if the database data-sink is too slow, then the entire pipe is limited by its throughput, just like a sewage pipe.
    • relevant in mkt data and high frequency trading, where every execution must be recorded
  • read – if you must query a DB to enrich or lookup something, this read can be much slower than other parts of the pipe.

#2 (similarly) flat files. Write tends to be faster than database write. (Read is a completely different story.)
* used in high frequency trading
* used in high volume market data storage — Sigma2 for example. So flat file writing is important in industry.
* IDS uses in-memory database + some kind of flat file write-behind for persistence.

#? Web service

#? The above are IO-bound. In contrast, CPU-bound compute-intensive transforms can (and do) also become bottlenecks.

packaged software less popular in front office #my take

For commercial configurable packages such as Murex, Summit, Sunguard, Calypso…, banks (big or small) are more likely to deploy them to the middle/back office, where differentiation is unimportant. In contrast, the front office (e.g. pre-trade pricing) is competitive among banks, and involves traders’ personal views, models, strategies and entire proprietary product creations. If you use a software package, such a unique competitive edge is sometimes achievable by configuration, but only up to a point. Beyond that, you can write plug-in modules for the vendor product (such as pricing modules), but again only up to a point. Beyond that, you can request features, but vendors are slow and/or expensive. Therefore, bigger investment banks choose Build over Buy. You can engage an external consultancy, or hire your own developers.

Middle/Back office includes booking, position master, PnL, risk, STP, clearance/settlement, GL, cash management …

Another example — Charles River is an OMS/EMS software vendor, popular with small buy-sides. Selling points include a rich feature set out of the box and extension points in the form of plug-in modules that client programmers can create. The last resort (avoid!) for a desperate client is a formal Feature Request to the vendor, which may take a long time. This is the dark side of the Buy (vs build) route. Vendors like to defend themselves by playing up the rich and extensible features, superior software quality, and quick FR turnaround.

ETL vendors are interested in middle/back office, and front office too. I feel CEP, Distributed cache vendors … are competing for the same customers.

For a large trading desk’s front office, the Build route is preferred. An Asia regional bank’s trading/eCommerce IT head told me.

Another practical way to customize is to add stored procs into the vendor-designed database schema. Read-only procs are powerful and safe, but functionally limited. Next level is DML stored proc. This requires intimate knowledge of table relationships. I feel with practice some API users can master it.

cell renderer to fit row height to content

// inside getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column),
// where "ret" is the component this renderer is about to return for the current cell
int rowHeight = table.getRowHeight();
for (int aColumn = 0; aColumn < table.getColumnCount(); aColumn++) {
  // reuse "ret" for the current column; otherwise ask the table to prepare that column's renderer
  Component comp = (aColumn == column) ? ret : table.prepareRenderer(table.getCellRenderer(row, aColumn), row, aColumn);
  rowHeight = Math.max(rowHeight, comp.getPreferredSize().height);
}
table.setRowHeight(row, rowHeight); // make the row tall enough for its tallest cell

Inside getTableCellRendererComponent(), before returning, scan all cells on the current row to get the max height. Configure the current row to use that height.

How about column width? I feel there are too many rows to scan, but Yes I think it’s doable.

count unique words]big file using 5 machines: high-level design

Q: Design a system to calculate the number of unique words in a file
1) What if the file is huge? (i.e. cannot fit in the main memory)
2) Assuming that you have more than one computer available, how can you distribute the problem?

Constraints are the key to such an optimization. Let’s make it more realistic but hopefully without loss of generality. Say the file is 2 TB ascii of purely alphabetical words of any language in a unified alphabet, with natural distribution such as text from world newspapers. Word length is typically below 20.

I’d assume regular 100GB network with dedicated sockets between machines. The machines have roughly equal memory, and the combined memory is enough to hold the file.

I’d minimize disk and network access since these are slower than memory access and require serialization.

Q: is the network transfer such a bottleneck that I’m better off processing the entire file on one machine?

— one-machine solution —
Assuming my memory (2GB) can only hold 1% of the unique words. I’d select only those words “below” ad* — i.e. aa*, ab*, ac* only. Save the unique words to a temp file, then rescan the input file looking for ad*, ae*…ak* to produce a 2nd temp file… Finally Combine the temp files.
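
Below is a minimal single-machine sketch of the range-partitioned passes. The file name and prefix ranges are made up, and instead of writing temp files it simply sums the per-range unique counts, which is equivalent because the ranges are disjoint.

#include <fstream>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// One pass over the file per prefix range; only words starting in [lo, hi] stay in memory.
long long countUniqueInRange(const std::string& path, char lo, char hi) {
    std::unordered_set<std::string> uniq;
    std::ifstream in(path);
    std::string word;
    while (in >> word)
        if (!word.empty() && word[0] >= lo && word[0] <= hi)
            uniq.insert(word);
    return static_cast<long long>(uniq.size());
}

int main() {
    std::vector<std::pair<char, char>> ranges = {{'a', 'c'}, {'d', 'k'}, {'l', 'r'}, {'s', 'z'}};
    long long total = 0;
    for (const auto& r : ranges)
        total += countUniqueInRange("input.txt", r.first, r.second);
    std::cout << "unique words: " << total << "\n";
}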

— multi-machine solution —
Don’t bother having one machine scan the file and TCP the words to other machines. Just copy the entire input file by CD or file transfer to each machine. Each machine would ignore words outside its target range.

How do we divide the task? Say we have 50 machines. We don’t know the exact distribution, so if we assume the aa-ak range has few enough unique words to fit into one machine (2GB), that assumption might be wrong. Instead, we’d divide the entire universe into 50 * 10 ranges, so that even if we are underestimating, each range should still fit into one machine. Every time a machine finishes one range, it sends a tiny signal to a controller and waits for the controller to give it the next range.

— hashing on words —
The hash table should be sized to minimize rehashing. We need a superfast hashCode and compression. The hashCode should use all the characters, except perhaps the first, since the first character tends to be the same within a range.

conduit-based tiered pricer in fixed income, again

In the credit bond (specifically muni, corp and ABS) market, I have seen several sell-side quote pricers that output differential prices depending on the audience/recipient. Call it a double standard if you like.

Conduit Retail — mostly HNW clients via financial advisors
Conduit Institutional
Conduit Fidelity (distributor?)
Conduit ECN-Bloomberg
Conduit ECN-TradeWeb
Conduit ECN-Knight

Say the trader has an inventory of an IBM 5.5% May 2020 bond, she sets up quote pricing rules to output slightly different prices for each conduit.

Within the institutional conduit, quote prices may differ based on the business relationship with each client. Some clients get preferential treatment.

Similarly, within the retail conduit the Gold tier HNW client may get preferential prices.

The pricer also applies quantity-based discounts. Odd-lot discounts are a common practice.

Fidelity case is interesting. It is probably a client rather than an ECN. For credit bonds, Fidelity probably doesn’t have inventory or ECN connectivity, so Fidelity connects to big dealers as if the dealers are ECNs.
http://bigblog.tanbin.com/2011/06/ecn-core-services-quote.html shows the 2 core functions of ECN, so Fidelity sends/receives quotes, and sends/receives orders.

IR exposure of a structured vol dealer desk

I believe many (if not most) structured volatility contracts have some IR exposure. Forgive me for stating the obvious — exposure is felt by both the sell-side and the buy-side, though either side can hedge away that exposure. Let’s focus on the sell-side.

On a structured volatility dealer desk, I believe (confirmed by a veteran) the vol exposure outweighs any IR exposure, since the desk is experienced and set up to trade volatility, not IR. This means that interest rate fluctuations should have less PnL impact than volatility fluctuations.

However, the back office (including IT) must get the IR calc right, which is often non-trivial. Therefore to the back office employees, IR calc could be more complicated than the vol calc. For a given portfolio, I guess the business logic and processing complexity might theoretically be dominated by the IR component rather than the volatility component. This is suggested by anecdotal evidence.

nomura FX op system

— Products traded —
vanilla fx options
Barriers options
digital options
multiple barriers
fx option strips
Target Redemption notes (TARN)

— system function —
real time risk,
scenario risk — more flexible,
GUI changes — possibly no one will take it on except you.

Quants give the order.

y dynamic_cast returns NULL on pointers

https://bintanvictor.wordpress.com/2010/01/14/new-and-dynamic_cast-exceptions-quick-guide/ shows that dynamic_cast of a pointer returns NULL upon failure. Why not throw exception?

A: for an efficient test-and-downcast, dynamic_cast is the only solution. If it threw an exception on failure, the test would have to incur the cost of try/catch.

See P59 [[beyond c++ standard lib]]
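
A small illustration of the two flavors (Base/Derived/Other are made-up types): the pointer form returns NULL cheaply, while the reference form has no NULL to return and must throw std::bad_cast.

#include <iostream>
#include <typeinfo> // std::bad_cast

struct Base { virtual ~Base() {} };
struct Derived : Base {};
struct Other : Base {};

int main() {
    Base* b = new Other;

    // pointer form: cheap test-and-downcast, no exception machinery on the failure path
    if (Derived* d = dynamic_cast<Derived*>(b))
        std::cout << "downcast ok\n";
    else
        std::cout << "not a Derived -- got NULL\n";

    try {
        Derived& dr = dynamic_cast<Derived&>(*b); // reference form throws on failure
        (void)dr;
    } catch (const std::bad_cast&) {
        std::cout << "reference cast threw bad_cast\n";
    }
    delete b;
}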

retail, institutional and ECN conduits #credit bond

For a retail-friendly asset class (such as muni), a sell-side dealer system often designs 3 conduits – retail, institutional and external. That’s what I saw in a major bond dealer. Later, we also added joint-venture conduit, and “distributor” conduits such as Fidelity and Charles Schwab.

Conduit concept is fundamental. Quote price, quantity, timing, discount, bid/ask spread control, commission, odd-lot discounts…. are all controlled by a system specifically configured for a single conduit. For each inventory item, the trader needs to set up pricing rules for retail conduit, with all the gory details. Then she repeats the set-up process for institutional conduit. Then external conduit.

External refers to ECNs, often jointly owned by major dealers in the given security.

Retail and institutional are both “internal” conduits owned and operated by “us” the dealer desk. In the context of quote dissemination, we want to put out different bid/ask prices to retail clients vs institutional clients. That’s why we have the 2 conduits separated.

In theory, if we had a dominant and special client such as a big online distributor, we could have created a dedicated conduit just for this client. We would publish all quotes to this special client, but with the prices customized.

Institutional clients include investment/commercial banks, pension/mutual funds, insurers, regular corporations… These clients are often on the ECN too. If a given insurer gets the same quote from us on institutional vs external conduits, the prices may differ. On the internal institutional conduit, each client account is carefully managed because we maintain a relationship. So the prices could be better.

Even within the institutional conduit, we may want to give preferential prices to long-term, high-credit investors. High frequency shops will get bad quotes (high bid/ask spread) because they tend to extract money. Tiered pricing is the ultimate quote pricer for such a purpose.

3 meanings of POINTER + tip on q[delete this]

(“Headfirst” means the post on the book [[headfirst C]])

When people say “receive a pointer”, “wild pointer”, “delete a pointer”, “compare two pointers”, “include two pointer members“… a pointer can mean several things. A lot of fundamental issues lie in those subtle differences. (To keep things simple, let’s assume a 32 bit machine.)

1) 32bit Object — a pointer can occupy (heap, stack, global) memory and therefore (by ARM) qualify as a 32bit Object. Note it’s not always obvious whether a given ptr variable has its own 32-bit allocation.

  • For example, if a pointer is a field in a C struct or C# struct/class, then it pretty much has to be allocated 32 bits when the struct is allocated.
  • For example, suppose a double pointer p5 points to a pointer p8. Since p8 has an address, it must occupy memory, so p8 is by definition an object. But p5 may not be an object if it has no address of its own.

2) address — in some contexts, a “pointer” can mean an address, and we don’t care about the pointer object (if any). An address is pure Information without physical form, perhaps with zero /footprint/. When you pass myfloatPtr + 2 to function(), you are passing an address into function(). This address may not be saved in any 32-bit object. I suspect the compiler often uses registers to hold such addresses. Note it’s not always obvious whether a given ptr variable has its own 32-bit allocation.

  • For example, in C, an array name behaves like a const-pointer (not ptr-to-const) to an already-allocated array in memory. For an array of 10 doubles, 640 bits are already allocated. However, the compiler may not need to allocate 32 bits to hold the array name. An array name like “myArray” is like an alias of an address-of-a-house, i.e. a pure address, not an object.
  • For example, in C if I save the address-of-something in a transient, temp variable, compiler may optimize away the stack-allocation of that variable.
  • see also headfirst.

Fundamentally, if a symbolic name is permanently attached to an Address-of-a-house (a permanent alias?), then the compiler need not allocate 32 bits of heap/stack/global area for the symbolic name. The compiler can simply translate the name into the address. Even if the symbolic name rebinds to a 2nd address, the compiler can still avoid allocating 32 bits for it.
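
A tiny sketch contrasting the two (myArray and p are made-up names):

#include <iostream>

int main() {
    double myArray[10] = {};  // 10 doubles allocated; "myArray" is a permanent alias of their address
    double* p = myArray;      // p is a pointer Object with its own 32-bit (or 64-bit) allocation

    std::cout << (p == &myArray[0]) << "\n";                     // 1: same address
    std::cout << sizeof(myArray) << " vs " << sizeof(p) << "\n"; // 80 vs 4 (or 8 on a 64-bit build)
}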


Whether it refers to an object or a pure address, a pointer variable is a nickname. Most commonly, a pointer means a pointer variable as in (a) below. Remember a nickname exists in source code as a mnemonic for something, but the binding is impermanent. When we look at code snippets, we may not know whether a variable is:

  • a) nick name for a 32-bit pointer object — most common
  • b) or nick name for a pure address. An array-name like myArray in source code is such a nickname — rather confusing. Note there’s no 32-bit pointer object in myArray, besides the actual allocation for the array.
  • See also headfirst

A 32-bit pointer Object often has multiple nick names. In general, any object can have multiple nick names. If a nickname is transient or never refers to a 2nd Object, then compiler can optimize it into (b).

—- some implications —-
A resource (like a DB) — usually requires some allocation on heap, and we access the resource via a pointer. This pointer could be a pure address,  but more commonly we pass it around in a 32-bit object.

“delete this” —– When you delete a pointer, you invoke delete() on an Address, including the controversial “delete this” — the entire Object is bulldozed, but “delete this” is fine because “this” is treated as an Address. However, the method that performs “delete this” must ensure the object was allocated by plain new (not by malloc(), not on the stack) and is not a nonref field (resident object) “embedded in the real estate” of another heapy thingy object. In the case of multiple inheritance, it must not be embedded in the real estate of a derived class instance. See http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.15.
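
A tiny sketch of the rule above (Session is a made-up class):

struct Session {
    void close() { delete this; }  // OK only if *this was allocated by plain new
};

int main() {
    Session* s = new Session;  // heap-allocated, so close() may "delete this"
    s->close();                // the object is bulldozed; s is now a wild Address
    // s->close();             // would be undefined behaviour -- don't touch it again
}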

reference counting —- In reference counting for, say, a shared data structure, we count “handles” on our pointee. Do we count pointer Variables or pointer Objects? I’d say neither — we count Address usage. If a thread has a function1() in its top “frame” and function1() is using our target data structure we must count it and not de-allocate it with a bulldozer. This function1() does not necessarily use any 32-bit pointer object to hold this Address. It might make do without a pointer Variable if it gets this address returned from another function.

vptr and “this” pointers —- The vptr is always a pointer Object taking up 32 bit real estate in each instance (java/c# classes and Polymorphic C++ classes). How about the “this” pointer? Probably a pointer Variable (nickname) — See http://bigblog.tanbin.com/2011/12/methods-are-fields-function-pointer.html

pointer comparison —- When you compare 2 pointers, you compare the addresses represented by 2 pointer variables. Apart from the 2 pointee objects A and B, there may not be two 32bit pointer Objects in memory whose 32 bits hold A’s or B’s addresses.

smart pointer —- When you pass a raw pointer to a smart pointer, you pass an Address, not necessarily a pointer Object or pointer Variable. A smart pointer is always an Object, never a pure Address.

wild pointer —- is a pointer Address, but a dangerous address. The object at the address has been bulldozed/reclaimed. SomeLibrary.getResource() may return an Address which becomes wild before you use it. If you don’t give a nick name to that returned value, it remains a wild pointer Address.
** now I feel even a stack object can be bulldozed, creating a wild pointer. getResource() may mistakenly return a pointer to an auto variable i.e. stack object

pointer argument or returning a pointer —– I think what’s passed is an address. I call it pbclone — the address is cloned just like an int value of 9713.

Most of the time we aren’t sure if a nickname refers to a 32-bit pointer object or pure information. The prime example of a 32-bit pointer object is a pointer field in a struct.