central features of WPF

After studying (maybe 50% of) the most popular WPF features at a superficial level, I now feel the “center” of the feature set is
– data binding from the ViewModel to visuals
– dependency properties (DProp)

Many important features grow out of (extend, borrow from) the center
– 2-way data binding
– binding from visual to visual
– data triggers
– property triggers? based on DProp
– attached properties (AProp)? based on DProp
– command binding? borrows from data binding
– most controls rely on data binding
– most controls expose a lot of dependency properties, and these are easily programmable in pure XAML
– a huge variety of requirements are now achievable in XAML without any C# code, largely thanks to the simple ideas in the “center”.

ItemsControl.Items ^ ItemsSource ^ ItemCollection

Any time you have a collection to show, ItemsControl and its subtypes ListBox/ListView are the most popular choices. ItemsControl.Items is a property whose type is ItemCollection. By the way, ItemCollection is a subtype of CollectionView, which supports sorting, filtering etc.

For data binding, the binding target must be this.ItemsSource, not this.Items. Once ItemsSource is in use, Items.Add() throws a *runtime* exception. Instead, add to the underlying data-bound collection, which is usually an ObservableCollection. After you add to it, the screen auto-refreshes without any further intervention.

Under the hood, I believe that after the add/remove, the data binding engine queries the source collection and refreshes the display. To verify, just put a breakpoint on the getter of the added item.

[17] strength/weakness ] sg dev talent pool

Update — SG tech talent pool=insufficient: expertise^GTD

+++ core trading engine green field development, not just maintenance
+++ [d] java keywords and syntax nitty gritty
+++ [d] threading, data structure, STL
+++ [d] integration of MOM, dispatchers, queues, threads in a high volume design
+++ java and c++
+++ sybase
+++ [d] DB tuning
+++ unix
+++ bond math
+++ tibrv
+++ complex query
+ concurrent package
+ [d] low latency techniques + theory
+ swing
+ java mem mgmt
+ pre-trade quote pricing
+ trade execution, order matching
+ marking, pnl
+ design patterns

– greeks
– unix tuning for low latency
– python
— mkt data
— c#. Many candidates have c# and c++

[d = details]

## low latency – key expertise4GTD

I talked to a few big and small low-latency shops. I feel latency optimization is “low-logic, high speed, high throughput”. Below is the expertise required, but in any real system you will see diminishing returns on each aspect, so remember to prioritize the cost-effective areas to optimize. See my post on 80/20 (http://bigblog.tanbin.com/2012/03/8020-rule-dimishing-return.html)

* threading
** parallel lock-free threading, but sometimes we can’t avoid inter-thread communication (condVar)
** [L] try every technique to avoid locking (such as immutables)

* [L] low-level memory management — custom allocators, custom (or disabled) garbage collectors
** care and feeding of java GC
** try every way to avoid non-deterministic garbage collection

* in-memory data stores — KDB etc
** This is how secDB compensates for some of the performance drawbacks
** state maintenance in memory — important in many OMS, DMA, exchange connectivity engines.

* try every technique to avoid RDBMS
** use flat files

* low-latency messaging — 29West, tibrv messaging appliance …
** [H] async MOM is the only choice for high volume systems
** multicast is the de-facto standard

* OS monitoring and optimization
** dtrace
** network utilization
** paging activity
** cpu utilization

* [L] socket programming customized
* avoid ethernet collisions
* [H] avoid choke points, promote multi-lane highways
* [H] connectivity, collocation
* [L] serialization – customized
* [L] powerful hardware, FPGA
* [H] scale out – important
* 64 bit
* realtime java vs C++
[L] = low-level
[H] = high-level architectural feature

c++/java MS commodity IV – 2010

Q: SQL when would u turn on dirty read?

Q: can you tell me some of the drawbacks of stored-proc?

Q: challenges in database refactor?

Q: technical/project challenges you faced in your career?

Q: how do you put across your argument in that challenging situation?

Q: when I enter some.pl in a shell, what happens?

Q: 3 threads are adding/removing/reading single elements in a hashmap. How do you optimize synchronization?
A (now I think): lock-free is perhaps the best
A: concurrent hashmap with each segment holding multiple keys

Q: but if remover and reader use different keys, then why should they wait for each other?
A: they might hit the same bucket
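For the record, here is a minimal Java sketch of the ConcurrentHashMap answer. The class name (SharedMap) and the key layout are my own, not from the interview:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SharedMap {
    static final ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

    static int demo() throws InterruptedException {
        for (int i = 0; i < 1000; i++) map.put("k" + i, i);   // the adder's work, done up front
        Thread remover = new Thread(() -> { for (int i = 0; i < 1000; i += 2) map.remove("k" + i); });
        Thread reader  = new Thread(() -> { for (int i = 1; i < 1000; i += 2) map.get("k" + i); });
        remover.start(); reader.start();   // different keys, so they rarely contend; gets never lock
        remover.join(); reader.join();
        return map.size();                 // all even keys removed; the 500 odd keys remain
    }
    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

Reads don't block at all; writers contend only when they land on the same segment/bin, which is the "same bucket" point above.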

Q: what other maps beside hashmap?
A: treemap, a red-black tree

Q: what’s a red-black tree?
A: perhaps a balanced tree

Q: how is it balanced?
A: perhaps by shifting the root?

Q: What are the list implementations?

Q: is insertion faster in linked list or array-based list

Q: is insertion faster in linked list or hashmap
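A quick sketch contrasting the answers to the last two questions (my own example, not from the interview). Insertion cost depends on where: LinkedList is O(1) at either end but O(n) to reach the middle; ArrayList is amortized O(1) at the tail but O(n) anywhere else (shifting); HashMap is expected O(1):

```java
import java.util.*;

public class InsertCost {
    public static void main(String[] args) {
        LinkedList<Integer> linked = new LinkedList<>();
        ArrayList<Integer> array = new ArrayList<>();
        HashMap<Integer, Integer> map = new HashMap<>();

        linked.addFirst(42);   // O(1): relink the head pointers
        array.add(0, 42);      // O(n): shifts every existing element to the right
        map.put(42, 42);       // expected O(1): hash straight to a bucket

        linked.addLast(43);    // O(1)
        array.add(43);         // amortized O(1), with an occasional resize-and-copy
        System.out.println(linked + " " + array + " " + map);
    }
}
```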

Where is c++ used? quant lib for risk, PnL and trade booking. Is c++ going away? no.

2 main specializations — physical scheduling + risk including pnl attribution

smart ptr is 50% similar to a raw ptr

I’d say it’s 90% different and 10% similar to a raw ptr. Smart pointers are class objects, way beyond 32-bit pointers. A smart pointer overloads the dereference operator and many others to “look like” a raw pointer, but it’s really a class template (avoid the non-standard “template class” jargon).

– pbclone? A raw pointer is always passed by clone (bitwise copy), just as in java. Smart pointers override the copy ctor (and op=), so a new instance is created based on the RHS smart pointer instance. Is this pbclone or pbref? I’d say more like pbclone.
– When you use a star to dereference a raw ptr, you simply unwrap the real pointer. Smart pointer dereference is a high-level user-defined operation. There’s a get() method to return the real pointer, but I think we seldom need it.
– Similarly, you use an arrow to access a member of the pointee object, without the get() method.
– You can up/down cast raw pointers, not smart pointers.
– a raw ptr can be cast to void ptr. A smart ptr can’t.
– Raw pointers are key to virtual functions, not smart pointers.
– Creation and initialization is simple for raw pointers.
– size? A Smart pointer exceeds 32 bits except intrusive_ptr
– raw ptr has tight integration with arrays, with pointer arithmetic. Not smart ptr.
– double-ptr semantics with 2 stars is natural for raw ptr, not smart ptr
– delete
– new? we can put raw ptr as lvalue of a “new” expression
– null assignment

In conclusion, raw pointers are part of the very fabric of the language. As a crude analogy, an e-book can feel like a paper book, but you can’t fold corners; can’t write anywhere with any pen; can’t spread 3 books out on your table; can’t tear off a page; can’t feel the thickness.

linked hash map with LRU size control

I was asked to implement a generic LRU cache with size control. When size reaches a threshold, the LRU item must be kicked out. Optionally thread safe.

I came up with a concurrent hash map to provide fast get() and put(). Then I realized the kickout() operation would require a full O(n) scan unless I provide an auxiliary data structure to physically store the nodes in last-access order. I came up with a sorted map (ConcurrentSkipListMap) of {time stamp -> key}. My idea was to __adjust__ the physical positions within the aux at every get() and put(). My “adjust” implementation basically removes an item from the aux and re-inserts it.

Alternatively, the key object could be decorated in a stampedKey objects with a mutable timestamp field and an immutable key field.

That was my back-of-envelope design. Let’s look at the standard solution — a LinkedHashMap with “A special constructor to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order). This kind of map is well-suited to building LRU caches. Invoking the put or get method results in an access to the corresponding entry (assuming it exists after the invocation completes). The putAll method generates one entry access for each mapping in the specified map, in the order that key-value mappings are provided by the specified map’s entry set iterator. No other methods generate entry accesses. In particular, operations on collection-views do not affect the order of iteration of the backing map.”

Apparently that too uses a regular hash table + an aux, which is a linked list. (A linked list allows 2 nodes with identical time stamps. For my design to accommodate this, the stampedKey decorator might be stored in the aux.)

Without looking at its source code, I believe the adjustment uses remove-reinsert on the aux.

Q: removing (not kickout()) from a linked list is O(n). How do you avoid that? Remember any remove() on the cache must hit both the hash table and the aux.
%%A: the hash table entry holds a pointer to the node on the (doubly) linked list, so the node can be unlinked in O(1) without a scan
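The standard solution quoted above is short to demo. Here is a minimal sketch (my own class name; thread safety omitted, though wrapping with Collections.synchronizedMap would be the easy option) using the access-order constructor plus removeEldestEntry():

```java
import java.util.*;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true);   // true => iteration order = access order
        this.maxSize = maxSize;
    }
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // this is the kickout() -- O(1), no full scan
    }
    public static void main(String[] args) {
        LruCache<String, Integer> c = new LruCache<>(2);
        c.put("a", 1);
        c.put("b", 2);
        c.get("a");               // touch "a", so "b" becomes least-recently-used
        c.put("c", 3);            // evicts "b"
        System.out.println(c.keySet());
    }
}
```

Note removeEldestEntry is invoked by put after each insertion, so eviction needs no separate scan at all.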

wpf/swing – invite EDT to query the model and update screen

In Swing, you can never change the screen directly — you always “invite” the EDT (or the UI thread in WPF) to asynchronously query the model and update the screen.

I guess this is a golden rule in wpf and swing, at least for the mainstream visual components with a model/view.

If my custom method synchronously updates a visual component, then this method must always be on the EDT.

In WPF, how about listBox1.Items.Add(..)? I guess this is editing the Model or ViewModel, not the View, so the screen is updated upon invitation.

(Asked Neeraj…)

"events" in wpf – 3 meanings

In WPF, the “event” word takes on multiple meanings. Know the subtle differences.

1) CLR event — used extensively in INPC (INotifyPropertyChanged). These events are raised by code, not user actions. Here “event” has the specific dotnet meaning. In other GUI systems (like Swing), events are also generated from code, such as a table change event after you add a row.

2) UI event from keyboard/mouse — the traditional GUI events, implemented (in WPF) with the CLR event + other things. These events have a hardware flavor.

3) routed (bubble/tunnel) event — a special type of UI events?

1st SCB IV #commod #private inheritance

Q: you have identical rows in a SQL table. How do you remove the unnecessary rows?
%A: select into a new table, then overwrite the old, but this involves a lot of disk space and IO
%A: perhaps delete where (select count(*)…) > 1 and rowid > 1 — using oracle rowid

Q: given a long string consisting of the 26 letters A-Z, print a histogram like A:865 times, B:9932 times….
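A quick sketch of an answer (my own, assuming the input contains only uppercase A-Z as the question states):

```java
public class Histogram {
    static int[] count(String s) {
        int[] freq = new int[26];                 // one counter per letter A..Z
        for (char ch : s.toCharArray()) freq[ch - 'A']++;
        return freq;
    }
    public static void main(String[] args) {
        int[] freq = count("ABRACADABRA");
        for (int i = 0; i < 26; i++)
            if (freq[i] > 0)
                System.out.println((char) ('A' + i) + ":" + freq[i] + " times");
    }
}
```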

Q: copy a vector to another vector. What if the source is very large? Correct — you get reallocation, so how do you avoid that?
%A: either reserve() or resize() the target vector with sufficient capacity. However, the 2nd option default-constructs a large number of payload objects!

Q: when would you use private inheritance?
A: This is rarely needed or quizzed. Not on my Tier 1/2. [[effC++]] P204 has an example of MI and also an example of private inheritance — a public inheritance of a pure interface and also a private inheritance of a concrete class. However, Google style guide strictly states that private inheritance should be replaced with composition.

bare bones trading system components for a small trading shop

I talked to a micro hedge fund (not a prop trading shop, since they have outside investors) and realized how important each system functionality is to a trader.

* pos mgmt? I thought this would be the core, but I guess in day trading there’s not much position to keep. Probably provided by a professional software package. Such packages exist for small, medium and large firms. Even the largest banks could use Murex and SunGard (a competitor to Apama)
* market data? essential to a trader, but high volume not always needed
* connectivity to ECN or banks, or other liquidity venues? essential
* pnl – always requires manual verification
* trade blotter? apparently quite basic. probably provided by a professional package.

Again, pricing proves to be the heart of the data flow. The most important data items are those related to pricing. The most important everyday decision is the pricing decision, including (market) risk management.

table events and fire* methods

Q: is there any table event directly linked to the jtable object, without the table model object?
%%A: i doubt it.
Note all listeners are registered on the table model object, not the jtable object. P705[[def guide]]

Before studying the fire* methods of swing tables, better get familiar with the typical TableModelEvent constructors. A few examples

TableModelEvent(source); // all rows changed
TableModelEvent(source, HEADER_ROW); // structure change; reallocate TableColumns
TableModelEvent(source, 3, 6); // rows 3 to 6 inclusive changed
TableModelEvent(source, 2, 2, 6); // cell at (2, 6) changed
TableModelEvent(source, 3, 6, ALL_COLUMNS, DELETE); // rows 3 to 6 inclusive were deleted

Now we are ready to compare the various fire* methods

fireTableChanged(TableModelEvent e) // is internal workhorse method used by other fire* methods
fireTableStructureChanged(void) // column (structural) change.
fireTableDataChanged(void) // no arg, so all rows changed. But no column (structural) change.
fireTableRowsUpdated/Inserted/Deleted(firstRow, lastRow)

For a single cell there is fireTableCellUpdated(row, column) — it simply constructs such an event and passes it to fireTableChanged(e).
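A minimal sketch (my own toy model, not from the book) showing how a model's mutators pair with the fire* methods, including fireTableCellUpdated for a single cell:

```java
import javax.swing.table.AbstractTableModel;

public class PriceModel extends AbstractTableModel {
    private final java.util.List<double[]> rows = new java.util.ArrayList<>();

    public int getRowCount()    { return rows.size(); }
    public int getColumnCount() { return 2; }              // bid, ask
    public Object getValueAt(int r, int c) { return rows.get(r)[c]; }

    public void addRow(double bid, double ask) {
        rows.add(new double[]{bid, ask});
        int last = rows.size() - 1;
        fireTableRowsInserted(last, last);                 // rows [last, last] inserted
    }
    public void updateBid(int r, double bid) {
        rows.get(r)[0] = bid;
        fireTableCellUpdated(r, 0);                        // the single-cell fire* method
    }
    public static void main(String[] args) {
        PriceModel m = new PriceModel();
        m.addRow(99.5, 99.7);
        m.updateBid(0, 99.6);
        System.out.println(m.getValueAt(0, 0));
    }
}
```

Any JTable attached to this model registers itself as a TableModelListener and repaints only the region the event describes.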

##c++features used on WallSt, again

STL, boost
array-pointer integration (A major strength in C)
dynamic cast, but not the rest of RTTI or the vptr
customized operator-new and operator-delete

–These features are less used
tricky operator overload
template class development
Now we know that on Wall Street, C++ OO designs are primitive compared to Java OO designs

static: 3 meanings for c++ objects

http://www.cprogramming.com/tutorial/statickeyword.html echoes my analysis below.

I feel “static” has too many overloaded, almost unrelated meanings. (Java is better.) Maybe there are just 3 mutually exclusive meanings:

1) local static vars — the simplest meaning of “static”. As inherited from C, a static local var retains its value between function calls. My Chartered system relied heavily on these.

2) static field — the java meaning.

3) file scope — we know local variables (block scope) and global variables (extern). There is one intermediate level of scoping — file scope, using the “static” keyword. A variable with file scope can be accessed by any function or block within a single file. To declare one, simply declare a static variable outside of any block. (Note all 3 meanings are about “objects” rather than “variables”.) There’s a technical jargon — non-local static object, sometimes IMPLICITLY static. See [[effC++]] P221.

Are all these objects allocated at compile time, rather than dynamically at run time? How about auto variables?

How about static_cast? Irrelevant to this discussion — nothing to do with the storage of objects.

simple snoop table to monitor DB access

 charp1 VARCHAR(16384) DEFAULT '' NULL,
 charv1 VARCHAR(16384) DEFAULT '' NULL,
 charp2 VARCHAR(16384) DEFAULT '' NULL,
 charv2 VARCHAR(16384) DEFAULT '' NULL,
 nump1 VARCHAR(99)     DEFAULT '' NULL,
 numv1 FLOAT           DEFAULT 0  NULL,
 datep1 VARCHAR(99)    DEFAULT '' NULL,
 sproc VARCHAR(99)     DEFAULT '' NULL,
 remark VARCHAR(99)    DEFAULT '' NULL

/* one way to use this table is to save multiple params when calling a proc
insert snoop(charp1,charv1,nump1,numv1,datep1,datev1,sproc)
      values('param1',?, 'param2',?,  'param3',?, 'myProc')
*/

java singleton – y nested class implementation important

http://en.wikipedia.org/wiki/Initialization_on_demand_holder_idiom is the recommended solution. My friend Rohan pointed out why this solution is sometimes necessary.

Say you can’t eagerly init the static field because it must wait for some service object.

Say you also don’t want to incur the static synchronization every time. Though un-contended lock acquisition is cheap, it’s never free.
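The idiom itself is short. A minimal sketch (class names are mine): lazy, because nothing is built until getInstance() is first called, yet lock-free, because the JVM guarantees thread-safe, once-only class initialization:

```java
public class Service {
    private Service() { }                     // imagine this needs other services to be ready

    private static class Holder {             // not loaded until first referenced
        static final Service INSTANCE = new Service();
    }
    public static Service getInstance() {
        return Holder.INSTANCE;               // triggers Holder's class-init exactly once
    }
    public static void main(String[] args) {
        System.out.println(Service.getInstance() == Service.getInstance());
    }
}
```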

##design patterns in my own code

— Ranked by real value and power —
producer/consumer – swing, thread pool, MOM, and almost anything asynchronous
template method
proxy – dynamic or static
Dependency injection
command — stateful timer task
singleton — the strict ones are less useful.
adapter ie wrapper

— other patterns I like —

— patterns I don’t apply in my own code —
service locator
chain of responsibility
flyweight

no lock no multiplex`no collections/autobox`:mkt-data sys

Locks and condition variables are important to threading and Wall Street systems, but the highest-performance market data systems don't use them. Many of them don't use Java at all.

They dedicate a CPU to a single thread, eliminating context switches. The thread reads a (possibly fixed-size) chunk of data from a single socket and puts the data into a buffer, then goes back to the socket, non-stop, until there's no more data to read. At that point, the read operation blocks the thread and the exclusive CPU. Subsequent processing of the buffer is asynchronous and on a different thread. This 2nd thread can also get a dedicated CPU.

This design ensures that the socket is consumed at the highest possible speed. (Can you apply 2 CPUs to this socket? I doubt it.) You may notice that the dedicated CPU sits idle when the socket is empty, but in the context of a high-volume market data feed that's unlikely, and a small price to pay for the throughput.

Large live feeds often require dedicated hardware anyway. Dedication means possible under-utilization.

What if you try to be clever and multiplex a single thread across 2 sockets? Well, you can apply only one CPU to any one thread, so throughput is lower.
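Here is a rough Java sketch of the pattern (all names are mine; a real system would use native code and a real socket, while here the "socket" is just a producing loop): one dedicated reader thread drains into a pre-allocated single-producer/single-consumer ring buffer, and a second dedicated thread processes asynchronously.

```java
import java.util.concurrent.atomic.AtomicLong;

public class SpscRing {
    final long[] slots;                       // pre-allocated, fixed-size buffer
    final int mask;
    final AtomicLong head = new AtomicLong(); // next slot to read (consumer-owned)
    final AtomicLong tail = new AtomicLong(); // next slot to write (producer-owned)

    SpscRing(int capacityPow2) {
        slots = new long[capacityPow2];
        mask = capacityPow2 - 1;
    }
    boolean offer(long v) {                   // called only by the reader thread
        long t = tail.get();
        if (t - head.get() == slots.length) return false;  // full
        slots[(int) (t & mask)] = v;
        tail.set(t + 1);                      // volatile write publishes the slot
        return true;
    }
    long poll() {                             // called only by the processing thread
        long h = head.get();
        if (h == tail.get()) return -1;       // empty (we only ever store values >= 0)
        long v = slots[(int) (h & mask)];
        head.set(h + 1);
        return v;
    }

    public static void main(String[] args) throws Exception {
        SpscRing ring = new SpscRing(1024);
        final int N = 100_000;
        Thread reader = new Thread(() -> {    // pretend this thread owns the socket
            for (int i = 0; i < N; ) if (ring.offer(i)) i++;   // spin until space
        });
        long[] sum = {0};
        Thread processor = new Thread(() -> {
            for (int seen = 0; seen < N; ) {
                long v = ring.poll();
                if (v >= 0) { sum[0] += v; seen++; }           // spin until data
            }
        });
        reader.start(); processor.start();
        reader.join(); processor.join();
        System.out.println(sum[0]);           // expected: N*(N-1)/2
    }
}
```

With exactly one writer and one reader, the two AtomicLong indices are enough; there is no lock and no CAS retry loop on the hot path.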

[11]how2incur 1 STW JGC/day: #1 low-latency java

Big eq trading desks in Citi, UBS and other banks claim they incur a single StopTheWorld GC “penalty” a day in most jvm instances (where an instance is typically a node in a scalable grid). Here are some techniques.

1) “pre-allocate” long-living objects. Probably allocate a large array of primitive objects, where each “primitive object” could be a byteArray of fixed size. No POJOs, no java beans.
1b) don’t let GC do the clean-up; reuse the pre-allocated memory by dropping in new data. A circular array of fixed-size byteArrays is one of my ideas. RTS xtap also uses a circular array to hold new messages — there are millions of them a day.

See pre-allocate DTOs@SOD #HFT #RTS

2) avoid promoting objects beyond the nursery/eden — see other posts in this blog.

3) run 64bit and allocate enough heap (typically 10G) to last a day.
)) Each jvm instance presumably serves only US, or only Hongkong…, so it could shutdown nightly.
)) If we still need more heap, split to 2 jvm instances.
)) avoid distributed cache and latency. Barc DMA team also avoided distributed caching.
)) How do you size heap? See post on heap sizing

5) During a scheduled quiet period, preemptively trigger a full GC over the entire (huge) heap, which could actually take 30 minutes. Just call Runtime.gc(). If this doesn’t trigger GC, check the -XdisableExplicitGC flag — it disables explicit calls to Runtime.gc(), and leaving it on permanently has rare side effects — https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/malpractice_do_not_permanently_use_xdisableexplicitgc_or_xx_disableexplicitgc?lang=en
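Idea 1/1b can be sketched in a few lines (class and method names are mine, not from any production system): pre-allocate a circular array of fixed-size byte arrays at start-of-day, and reuse slots by overwriting, so message handling allocates nothing and feeds the GC no garbage.

```java
public class MsgPool {
    private final byte[][] slots;
    private long next = 0;                       // monotonically increasing write index

    public MsgPool(int slotCount, int msgSize) {
        this.slots = new byte[slotCount][msgSize];  // ALL allocation happens here, up front
    }
    /** Returns a recycled buffer for the next incoming message; the caller overwrites it. */
    public byte[] claim() {
        return slots[(int) (next++ % slots.length)];
    }
    public static void main(String[] args) {
        MsgPool pool = new MsgPool(4, 64);
        byte[] first = pool.claim();
        pool.claim(); pool.claim(); pool.claim();
        System.out.println(first == pool.claim());  // wrapped around: same buffer reused
    }
}
```

The obvious caveat: the consumer must be done with a slot before the producer laps it, which is why real feeds size the ring generously.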

## common FX arbitrage scenarios

As in other financial markets (FI, options, forwards…), arbitrage is probably the #1 force constraining publicly quoted prices. In FX there are a few major arbitrage scenarios, and each is extremely important to real-world pricing algorithms

* tri-arb (total 6 bid/ask numbers on 3 currency pairs)
* Interest Rate Parity (IRP). See [[Complete guide]]
* arb between FX futures and spot markets. Bear Stearns used to do these arbitrage trades.

IRP is at the heart of forward point calculations.

IRP is the crucial link between the money market and FX forward market prices.

Tri-arb is at the heart of cross rate derivation.

How about fx options quoted on AB, AC and BC? I believe there are arb-constraints on the quotes.
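The tri-arb constraint is exactly what cross-rate derivation implements. A small sketch (method names and quote values are mine, made up for illustration): derive an AAABBB bid/ask from USDAAA and USDBBB quotes.

```java
public class CrossRate {
    /** Returns {bid, ask} for AAABBB, given USDAAA and USDBBB bid/ask. */
    static double[] cross(double usdAaaBid, double usdAaaAsk,
                          double usdBbbBid, double usdBbbAsk) {
        // to sell AAA for BBB, you buy USD with AAA (paying the USDAAA ask),
        // then sell that USD for BBB (receiving the USDBBB bid); hence:
        double bid = usdBbbBid / usdAaaAsk;
        double ask = usdBbbAsk / usdAaaBid;
        return new double[]{bid, ask};
    }
    public static void main(String[] args) {
        double[] ba = cross(1.2500, 1.2502, 0.9000, 0.9002);
        System.out.printf("%.6f / %.6f%n", ba[0], ba[1]);
        // a published AAABBB quote whose bid exceeds this derived ask would be a tri-arb
    }
}
```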

IV – UDP/java

My own Q: How do you make UDP reliable?
A: sequence number + gap management + retransmission
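The answer above can be sketched as a gap tracker (my own design, not from any real feed handler): record arriving sequence numbers, report any newly detected gaps so retransmission can be requested, and clear a gap when the retransmitted packet arrives.

```java
import java.util.*;

public class GapTracker {
    private long expected = 1;                      // next sequence number we want
    private final TreeSet<Long> pendingGaps = new TreeSet<>();

    /** Record an arriving packet's sequence number; returns any newly detected gaps. */
    public List<Long> onPacket(long seq) {
        List<Long> newGaps = new ArrayList<>();
        if (seq == expected) {
            expected++;                             // in-order: nothing missing
        } else if (seq > expected) {
            for (long s = expected; s < seq; s++) { // everything we skipped is a gap
                pendingGaps.add(s);
                newGaps.add(s);
            }
            expected = seq + 1;
        } else {
            pendingGaps.remove(seq);                // a retransmitted packet filled a gap
        }
        return newGaps;
    }
    public Set<Long> gaps() { return pendingGaps; }
}
```

A real handler would also time out stale gaps and cap the retransmission window, but the bookkeeping is essentially this.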

My own Q: can q(snoop) capture UDP traffic?

Q: Why would multicast use a different address space? Theoretical question?
A: each MC address is a group…

Q: why would a server refuse connection? (Theoretical question?)
%%A: perhaps the TCP backlog queue is full, so the application layer won’t see anything


Q: How do you avoid full GC
Q: what’s the impact of 64 bit JVM?
Q: how many package break releases did your team have in a year?
Q: In a live production system, how do you make configuration changes with minimal impact to existing modules?

dbx commands

To start dbx, you can just type dbx without arguments

–the essential commands
print, stop

–critical but rarely used commands
attach your_pid # just a single argument
thread t@8 # select thread “t@8”
where 5

–most frequent commands
# cont — (or “c”) resumes program execution. Currently, threads use synchronous breakpoints, so all threads resume execution.
# status — shows all breakpoints. You need this after a lot of “cont”
# dump — in suspension mode, shows the variables in scope at that point
# assign

list # src

–if your c-string is too long to 'print',
print strlen(longStr)
print longStr+788 # will print from that position to the \0

–help any_command

–(18) What are the major MT features of dbx?

The `threads' command gives a list of all known threads, with their current state, base functions, current functions etc.
You can examine stack traces of each thread.
You can resume (`cont') all threads.
You can `step' or `next' a specific thread.
The `thread' command helps navigate between threads.

object models in Quartz, briefly

Every risk system has this core family of object models. (I think we are describing the objects from a quant perspective, not as java coders)

o Market Models — market data -> curves. Probably a well-fitted math model consistent with (bulk of) the market data

o instrument/product models

o Deal model – roll-up to Portfolios/books, trade events/workflow. (I believe “Deal” refers to a position, trade or any financial contract we enter into.)

o Lifecycle & Settlements

The heart of all such systems is probably valuation and its sensitivity to various factors. Valuation changes in response to market data. Historical market data and simulation/backtesting form the 2nd layer of that “heart”.

Quartz features: key points of my 2011 learning

Qz object/data models — primarily 1) models for positions, but also models for trades, market data, ref data. Another mail will describe the key models.

Qz database — object DB, possibly in-memory, but persisted in files. Written in C++ for performance, but the main API is Python. This ODB can be a backend to replace distributed caches, to scale computations onto a grid.

Qz usage — 1) market-risk 2) pricing/valuation. You can tell who’s the #1 target user — the dev team is named “global markets risk systems”.

Qz implementation — ODB database + dependency graph, tightly integrated.

WPF is a non-core feature. I feel the creators added WPF (and Java) bindings on top of the DB and models. Once the data models are sound and coherent, it’s fairly easy to provide read/write access in WPF. Since WPF is hard, Python was added on top of WPF as a simplified “driver” language. “Business logic resides solely in the objects; UI is just the presentation layer.”

The core object models are written in C++ and manipulated in Python. Most of the model logic is probably in Python.

Database tables to facilitate cross rate generation


Since the same EUR/USD bid/ask can be published by multiple ECN's, we needed a table of quote sources:

  QuoteSourceID — either an ECN or a market maker's ID
  Symbol char(6) — exactly 6 characters
  SSID — identity column

Based on this mapping, we need a cross-rate table:

  CrossSymbol char(6)
  SSID1, SSID2 — these 2 rows refer to the 4 numbers sufficient to price the CrossSymbol in question
  VehicleCurrency char(3) — either USD or EUR
  Formula — only a few combinations

With this set-up, the same cross symbol AAABBB can appear many times in this table because I can price AAABBB bid/ask using

USDAAA b/a USDBBB b/a from Currenex (4 numbers)
EURAAA b/a BBBEUR b/a from Currenex (4 numbers)
USDAAA b/a USDBBB b/a from Hotspot (4 numbers)
EURAAA b/a BBBEUR b/a from Hotspot (4 numbers)

So which of these algorithms should I use? I feel we need to use all of them and select the safest bid and safest offer for the AAABBB symbol.
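One way to read "safest" (my interpretation, not spelled out above): the most conservative derived quote, i.e. the lowest of the candidate bids and the highest of the candidate asks, the widest spread you can defend:

```java
public class SafestQuote {
    /** Each candidate is {bid, ask}; returns the most conservative combined {bid, ask}. */
    static double[] safest(double[][] candidates) {
        double bid = Double.MAX_VALUE, ask = -Double.MAX_VALUE;
        for (double[] c : candidates) {
            bid = Math.min(bid, c[0]);   // never bid higher than your weakest derivation
            ask = Math.max(ask, c[1]);   // never offer lower than your weakest derivation
        }
        return new double[]{bid, ask};
    }
    public static void main(String[] args) {
        // hypothetical AAABBB derivations from Currenex and Hotspot legs
        double[][] fromEcns = { {0.7198, 0.7203}, {0.7199, 0.7202}, {0.7197, 0.7205} };
        double[] q = safest(fromEcns);
        System.out.println(q[0] + " / " + q[1]);
    }
}
```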

4 distinct sub-domains of financial math

Even though many real world scenarios involve more than a single topic below, it’s still practically useful if you understand one of the topics well.

* bond math including IRS, FRA…
* option
* VaR — correlation, simulation
* credit risk — essential to commercial banks lending to consumers and corporate borrowers… though I don’t know how much math is involved

Every single topic is important to risk management (like the OC quant team); only half the topics are important to pre-trade pricing. For example, a lot of bond math (duration, OAS, term structure) is not that critical to pre-trade quote pricing.

Derivatives always involve more math than the underlying instruments do, partly because of the expiration built into every derivative instrument. Among derivatives, option math tends to be more complex than that of swaps, futures and forwards.

%%Lockfree Unbounded Stack(generic) using AtomicReference

import static java.lang.System.*;
import java.util.concurrent.atomic.AtomicReference;

public class LockfreeUnboundedStack<V> {
    private final AtomicReference<Node<V>> arTop = new AtomicReference<Node<V>>();

    @SuppressWarnings("unchecked")
    public LockfreeUnboundedStack() {
        this.arTop.set((Node<V>) Node.theTerminator);
    }

    public static void main(String[] args) {
        LockfreeUnboundedStack<Integer> stack = new LockfreeUnboundedStack<Integer>();
        stack.push(1);
        stack.push(2);
        Node<Integer> a = stack.pop();
        a = stack.pop();
        a = stack.pop(); // stack now empty -- returns the terminator
    }

    /** @return the terminator node if the stack is empty */
    public Node<V> pop() {
        boolean done;
        Node<V> oldTopNode;
        do {
            oldTopNode = this.arTop.get();
            if (oldTopNode.isTerminator()) {
                out.println("returning terminator");
                return oldTopNode;
            }
            Node<V> newTopNode = oldTopNode.previous;
            // CAS: succeeds only if no other thread changed the top in between
            done = this.arTop.compareAndSet(oldTopNode, newTopNode);
        } while (!done);
        out.println("returning " + oldTopNode);
        return oldTopNode;
    }

    public void push(V item) {
        assert item != null;
        Node<V> newNode = new Node<V>(item);
        boolean done;
        do {
            Node<V> replacedTopNode = this.arTop.get();
            newNode.previous = replacedTopNode;
            done = arTop.compareAndSet(replacedTopNode, newNode);
        } while (!done);
    }
}

class Node<V> {
    final V payload;
    Node<V> previous = null;

    public Node(V pay) {
        assert pay != null;
        this.payload = pay;
    }

    /** callable only by the anonymous nested class (the terminator) */
    private Node() {
        this.payload = null;
    }

    public boolean isTerminator() {
        return false;
    }

    public String toString() {
        String p = "an invalid payload";
        if (this.payload != null) p = this.payload.toString();
        return "Node holding " + p;
    }

    @SuppressWarnings("rawtypes")
    static final Node theTerminator = new Node() {
        @Override
        public boolean isTerminator() {
            return true;
        }
    };
}

quote-driven ^ order-driven markets #my take

Many practitioners classify each market into either quote-driven or order-driven.
– Futures, Listed options, listed stocks are Order-Driven.
– FX, Bonds, IRS, CDS are Quote-driven

—- In Quote-Driven markets, dealers (loosely known as “market makers”) publish quotes on
– interdealer broker systems
– 3rd-party dealer-to-client markets known as ECN’s
– their own private markets (like MLBM or CitiVelocity, or perhaps Rediplus)

Clients can also send RFQ to get customized quotes.

Here’s a key feature of the quote — at execution time, the dealer has a last look and can reject the order. Therefore the quote is an indication, not a legal promise.

FX spot/forward is Quote-driven. In some markets (such as Hotspot), non-dealers can also publish quotes, and probably can reject orders too.

Q: How about limit orders in FX spot market?
A: Remember FX spot is QD. When an ECN lets you (a retail trader) submit a limit order, the ECN will hit/lift the quotes on your behalf but still the dealer has the last look.

—- In Order-Driven markets, participants submit market orders and “limit orders”, which are like irreversible quotes. The exchange performs order matching [1] among market orders and limit orders. At the time of execution, there’s no (easy) way for either side to reject the trade. Only the exchange can cancel trades, as it has done to a lot of trades on a few special occasions.

To the exchange, there’s no dealer vs. client. If a trade happens to be an individual vs GS, then GS is perhaps a dealer, perhaps a market-maker (not the same thing), or perhaps just a regular player. The exchange treats both sides as equals.

[1] Note stop market orders and stop limit orders are not part of the matching until they “activate” to become live market/limit orders.
In FX, even the executable quotes are subject to the dealer’s “last look”. When a small investor Ken hits a Citi offer on an ECN, the ECN dares not book the trade right away because it can’t guarantee to Ken that Citi’s last look would pass. Ultimately, the ECN doesn’t take positions.

Q: In stock exchanges (or other listed markets), dealers can’t cancel a trade. Only the exchange can. Exchange executes trades and they are final, because the exchange takes position. Why is the stock exchange so powerful? Why is the exchange able to guarantee to Ken that no matter what Citi does, Ken’s trade is confirmed?
%%A: because the exchange’s clearing fund is contributed by the big dealers
%%A: because the exchange is counterparty of every trade and takes positions. If a member bank (Citi in this case) fails to deliver, the exchange takes the loss, using the clearing fund.

FX trading volume by hedge funds

Note the distinction between passive trading (i.e. posting a bid or offer) and active trading (i.e. hitting a bid or offer).

Hedge funds and CTAs trade FX speculatively to a greater extent than any other type of buy-side institution. Most hedge funds trade FX electronically on one or more multidealer-to-client platforms. The leaders in this space remain Currenex, FX Connect, FXAll, and Hotspot.

(like tradeweb for bonds?)

Hotspot in 2006 did 5-10k trades/day. Hedge funds and CTAs account for approximately 80 percent of Hotspot's volumes. The balance is composed of asset managers, large-volume corporates, and small, non-market-maker banks. The platform trades only forex spot and supports 24 currency pairs. Buy-side to buy-side trading accounts for up to 45% of all trade volumes.

Hotspot in 2011 traded about USD 60b notional. Hotspot trades are exclusively institutional, so minimum lot size is probably 1m or 10m. About 5,000 – 50,000 trades a day.

FXall java IV

Q: what’s the difference between sleep() and wait()
Q: what’s an inner class?
Q: can an inner class access variables outside the inner class?
Q: what’s a deadlock? How do you avoid deadlocks?
Q: how do you tune garbage collection?
Q: what’s isolation level in database? What do you know about isolation levels?
Q: what’s concurrent hash map?
Q: hash map iterator can throw concurrent mod exception. In the same scenario, if I use CHM instead of HM, what happens?
Q: override equals() in your own String class
Q: implement a static sort() method accepting a collection of numbers. Each member must be a particular type, which is a subtype of Number.java.
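A sketch of one possible answer to the last question, using a bounded type parameter (the class and method names here are mine, not from the interview):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class NumberSorter {
    // Accepts any List whose element type is a subtype of Number that can
    // compare to itself (Integer, Double, BigDecimal, ...).
    public static <T extends Number & Comparable<? super T>> void sort(List<T> numbers) {
        Collections.sort(numbers);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(Arrays.asList(3, 1, 2));
        sort(ints);
        System.out.println(ints); // [1, 2, 3]
    }
}
```

The `Number & Comparable` intersection bound is the key point — `Number` itself isn’t Comparable, so a plain `<T extends Number>` would not compile with Collections.sort().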

[11] real time high volume FX quote processing #letter

Horizontal scale-out (distributing to different boxes) is the design of choice when we are cpu-bound. For instance, if we get hundreds of updates a sec and each update requires repricing a large number of objects.

Ideally, you would want cpu to be saturated. (By using twice the hardware threads, you want throughput to double.) Our pricing engine didn’t have that much cpu load, so we didn’t scale out to more than a few boxes.

The complication of scale-out is that data required to reprice one object may reside in different boxes. People try many solutions like memory virtualization (non-trivial synchronization cost + network latency), message-passing, RMI… but I personally prefer the one-big-machine approach. Throw in 16 (or 128) processors, each with say 4 to 8 hardware threads, run 64-bit, throw in 256G RAM. No network latency. No RMI/messaging latency. This hardware is rather costly, though — 8 smaller machines with comparable total CPU power would cost much less, so most big banks prefer that route – so-called grid computing.

According to my observations, most practitioners in your type of situations eventually opt for scale-out.

It sounds like after routing a message, your “worker” process has all it needs in its local memory. That would be an ideal use case for parallel processing.

I don’t know if FX spot real time pricing is that ideal. Specifically, suppose a worker process is *dedicated* to update and publish eur/usd spot quote. I know you would listen to the eurusd quotes from all liquidity providers, but do you also need to watch usd/jpy and eur/jpy?
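The “everything in local memory” case above is what makes parallel repricing easy. A minimal sketch, assuming each quote reprices independently (the class name and the pricing formula are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelRepricer {
    // Stand-in pricing formula -- a real engine would do far more work here.
    static double reprice(double mid) {
        return mid * 1.0001;
    }

    public static void main(String[] args) throws Exception {
        // 1000 independent "objects" to reprice, all data already in local memory.
        List<Double> mids = IntStream.range(0, 1000)
                .mapToObj(i -> 1.30 + i * 1e-6)
                .collect(Collectors.toList());

        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Double>> repriced = pool.invokeAll(
                mids.stream()
                    .map(m -> (Callable<Double>) () -> reprice(m))
                    .collect(Collectors.toList()));
        pool.shutdown();
        System.out.println(repriced.size()); // 1000
    }
}
```

Because no task reaches across boxes for data, throughput should scale roughly with core count — the saturation goal described above.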

mkt-data subscription engines prefer c++ over C

For a busy feed, Java is usually slower. One of the many reasons is autoboxing. Market data systems always prefer primitive integers (rather than floats) and char-arrays (rather than null-terminated or fancy strings).
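A hedged illustration of the autoboxing cost: with default JVM settings, boxing an int outside the small Integer cache allocates a fresh object on every iteration, while a primitive loop allocates nothing.

```java
public class AutoboxDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127; // inside the Integer.valueOf cache: same object
        Integer c = 128, d = 128; // outside the cache (default settings): two allocations
        System.out.println(a == b); // true
        System.out.println(c == d); // false -- each autoboxing allocated a fresh object
        // A tight market-data loop over primitives allocates nothing:
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        System.out.println(sum); // 499999500000
    }
}
```

The same loop over `List<Long>` would create up to a million short-lived objects for the GC to chase — exactly what a busy feed handler wants to avoid.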

I think another reason is the garbage collector — it’s non-deterministic. I feel explicit free() is fast and predictable [1].

A market data engineer at 2-Sigma said C++ is the language of choice, rather than C or java. Some market data subscription engines use C to simulate basic C++ features.

[1] free(3) is a standard library function, not a syscall (manpage section 2). No kernel involvement.

JMS queue is (almost) a FIFO

Not sure about topics, but a jms queue is a priority queue as far as I know.

JMS guarantees FIFO order for messages of the same priority but will attempt to expedite messages of higher priority.

Topic messages also have a priority field, so I assume delivery is based on priority too.
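The same-priority-FIFO rule can be simulated in plain Java with a comparator that breaks priority ties by arrival sequence — a toy model, not a real broker (JMS priorities run 0–9, higher delivered first):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class PrioritizedFifo {
    static final class Msg {
        final int priority; // JMS-style: 0 (low) .. 9 (high)
        final long seq;     // arrival order, breaks ties so each priority stays FIFO
        final String body;
        Msg(int priority, long seq, String body) {
            this.priority = priority; this.seq = seq; this.body = body;
        }
    }

    // Drain a small queue and report the delivery order.
    static List<String> demoDeliveryOrder() {
        PriorityQueue<Msg> queue = new PriorityQueue<>(
                Comparator.comparingInt((Msg m) -> -m.priority) // higher priority first
                          .thenComparingLong(m -> m.seq));      // FIFO within a priority
        queue.add(new Msg(4, 1, "first-normal"));
        queue.add(new Msg(4, 2, "second-normal"));
        queue.add(new Msg(9, 3, "urgent"));
        List<String> delivered = new ArrayList<>();
        while (!queue.isEmpty()) delivered.add(queue.poll().body);
        return delivered;
    }

    public static void main(String[] args) {
        System.out.println(demoDeliveryOrder()); // [urgent, first-normal, second-normal]
    }
}
```

Note the explicit `seq` tie-break is needed because java.util.PriorityQueue alone is not stable for equal keys — mirroring why JMS only guarantees FIFO *within* a priority.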

swingWorker/Timer example

public class TimerSwingWorker extends SwingWorker<Void, Void> {
  private final Timer newsTimer = new Timer(millisec2sleep, new ActionListener() {

    public void actionPerformed(ActionEvent notUsed) {
      // each periodic timer event triggers EDT to instantiate and start a new
      // *scanner* worker, which runs only once.
      // Worker will contact EDT.
      new ScannerWorkerThread(jta, command2getUpdates, " <- detected ").execute();
    }
  }); // newsTimer instantiated. Once started [see doInBackground()], newsTimer will fire periodic “news” events
  // Note all javax.swing.Timer instances share the same timer thread.

  @Override
  public Void doInBackground() {
    this.newsTimer.start(); // now our timer will start firing periodically
    return null;
  }
}

interesting jargon in FX swap # !! ccy swap

What does it mean if you “buy and sell GBP 5 million against USD in a 1-month (30 days) FX swap”?

It means you buy GBP now, using USD (as collateral?), and in a month sell it back with interest, and get back USD.

Note you must arrange to sell back more than 5 mil GBP; otherwise you end up holding some amount of interest in a foreign currency (GBP), with an unknown market value when reported in the home currency (USD). This is commonly seen as an FX risk.
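A worked example with made-up numbers (spot 1.60 USD per GBP, 5% GBP deposit rate, ACT/365 — all assumptions, not market data):

```java
public class FxSwapSketch {
    // Far-leg GBP amount: principal plus GBP interest for the period.
    static double farLegGbp(double notional, double rate, double days, double basis) {
        return notional * (1 + rate * days / basis);
    }

    public static void main(String[] args) {
        double gbpNotional = 5_000_000;
        double spot = 1.60;    // USD per GBP (assumed)
        double gbpRate = 0.05; // assumed 1-month GBP deposit rate, ACT/365

        // Near leg: buy GBP 5m, deliver USD at spot.
        double usdPaidNearLeg = gbpNotional * spot;
        // Far leg: sell back principal PLUS GBP interest, so no stray GBP remains.
        double gbpSoldFarLeg = farLegGbp(gbpNotional, gbpRate, 30, 365);

        System.out.printf("USD paid on near leg: %.2f%n", usdPaidNearLeg); // 8000000.00
        System.out.printf("GBP sold on far leg:  %.2f%n", gbpSoldFarLeg);  // 5020547.95
    }
}
```

So the far leg sells back GBP 5,020,547.95 — the “more than 5 mil” the note above insists on.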

##high frequency trading system expertise

Background — what asset classes are we talking about? Equities, FX, index futures, listed options, US/German government bonds, ED futures… See also http://bigblog.tanbin.com/2011/06/algorithmic-vs-high-frequency-vs-low.html.

Here are some basic features/techniques of HF (high-frequency) systems. In other words, HF trading systems compete on these —

1) latency – see other blog posts such as http://bigblog.tanbin.com/2011/08/low-latency-key-expertise.html

2) algo trading — a mature body of knowledge and technique starting from vwap/twap, Arrival-Price, Implementation-Shortfall …
** trade origination

3) market data — basis of algo and latency techniques. HF systems have higher requirement for market data

?) real time risk measurement/control — I feel a risk system is different if you do thousands of trades a minute. I know some HF shops run the trading engine on autopilot, so the risk system is not a barometer for a trader to watch

—- Above are the 3 aspects I know —-

4) architecture? Do HF systems demand a special architecture? I think A) latency and B) market data volume dictate the architecture

5) pricing model? It’s part of every trading algorithm, but is it any different from the regular run-of-the-mill pricing engine in every automated trading system? Well, there are models and there are models. I feel algo trading models are slightly more [1] sophisticated than pricing models in other pre-trade pricing engines.

[1] sometimes less. In practice, some HF engines compete on latency and seek a small profit on each trade, so pricing is probably simplified.

I feel HF systems sometimes use simpler pricing models than non-HF. However, it’s possible that for a given instrument a large percentage of HF trades come from a single trading engine, and that engine happens to use a simple or complex, fast or slow pricer. I won’t argue with such statistics. I’m only giving a personal perspective.

The more sophisticated pricing models usually exist in time-insensitive pre-trade analysis or post-trade phases (no such luxury in the millisecond real time battlefield). They often analyze a larger body of market data, unconstrained by latency of data analysis. They often involve time-consuming simulations. They often involve optimizations. They may output a price decorated with a bunch of metrics about the quality of the price. Such output is often intended for human consumption, not machine consumption.

Now, if another buy-side runs a traditional trading system to compete against a HF, latency-gap alone could spell defeat – consider front-runners

## personal xp on low latency trading

Thread — lockfree becomes relevant in latency-sensitive contexts
Thread — create opportunity for parallelism, and exploit multi-core
Thread — avoid locks including concurrent data structures and database. Be creative.
Data structures — STL is known to be fairly efficient, but container choice and allocation patterns can still affect performance
Data structures — minimize footprint
Data structures — favor primitive arrays because allocations are fewer and footprint is smaller
Algo — favor iterative over recursive
DB tuning — avoid hitting DB? Often impractical IMO
Serialize — favor primitives over java autoboxing
Serialize — favor in-process transfers, and bypass serialization
Mem — avoid vtbl for large volume of small objects
Mem — reuse objects, and avoid instantiation? Often impractical IMO
Mem — mem virtualization (like gemfire) as alternative to DB. A large topic.
Mem — Xms == Xmx, to avoid dynamic heap expansion
Mem — control size of large caches.
Mem — consider weak reference
MOM — multicast is the clear favorite to cope with current market data volume
MOM — shrink payload. FIX is a primary example.
GC — avoid forced GC, by allocating large enough heap
Socket — dedicate a thread to a single connection and avoid context switching
Socket — UDP can be faster than TCP
Socket — non-blocking can beat blocking if there are too many low-volume connections. See post on select()
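The “reuse objects, avoid instantiation” item above can be sketched as a minimal free-list pool. All names are hypothetical, and a production pool would need bounds and thread-safety:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QuotePool {
    // Minimal free-list pool: reuse mutable Quote objects instead of allocating
    // one per market-data tick (and paying GC for each).
    static final class Quote { double bid, ask; }

    private final Deque<Quote> free = new ArrayDeque<>();

    Quote acquire() {
        Quote q = free.pollFirst();
        return q != null ? q : new Quote(); // allocate only when the pool is empty
    }

    void release(Quote q) { free.addFirst(q); }

    public static void main(String[] args) {
        QuotePool pool = new QuotePool();
        Quote q1 = pool.acquire();
        q1.bid = 1.2999; q1.ask = 1.3001;
        pool.release(q1);
        Quote q2 = pool.acquire();    // same instance handed back
        System.out.println(q1 == q2); // true -- no new allocation, nothing for GC
    }
}
```

This is also why the “impractical” caveat applies — mutable pooled objects make ownership and concurrency harder to reason about.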

order book live feed – stream^snapshot

Refer to the blog on order-driven vs quote-driven markets. Suppose an ECN receives limit orders or executable quotes (like limit orders) from traders. The ECN maintains them in an order book. Suppose you are a trader and want to receive all updates to that order book. There are 2 common formats —

1) The snapshot feed can simplify development/integration with the ECN as the subscriber application does not need to manage the internal state of the ECN order book. The feed publishes a complete snapshot of the order book at a fixed interval; this interval could be something like 50ms. It is possible to subscribe to a specific list of securities upon connecting.

2) Streaming, event-based format. It consumes more bandwidth (1Mb/sec) and requires that the subscriber manage the state of the ECN order book by applying events received over the feed. The advantage of this feed is that the subscriber’s market data will be as accurate as possible, without the possible 50ms delay of the snapshot feed.
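A toy sketch of the state a subscriber must manage under format 2 — one side of a price-level book updated by incremental events. The event API here is invented for illustration; real feeds carry richer messages:

```java
import java.util.Comparator;
import java.util.NavigableMap;
import java.util.TreeMap;

public class OrderBookSide {
    // Bid side of a price-level book; best (highest) bid sorts first.
    private final NavigableMap<Double, Long> bids =
            new TreeMap<>(Comparator.reverseOrder());

    // One streamed event: qty 0 deletes the level, otherwise add/update it.
    void apply(double price, long newQty) {
        if (newQty == 0) bids.remove(price);
        else bids.put(price, newQty);
    }

    double bestBid() { return bids.firstKey(); }

    public static void main(String[] args) {
        OrderBookSide book = new OrderBookSide();
        book.apply(1.3001, 2_000_000);
        book.apply(1.3002, 1_000_000);
        book.apply(1.3002, 0); // the 1.3002 level is pulled
        System.out.println(book.bestBid()); // 1.3001
    }
}
```

Losing or reordering even one such event corrupts the book — which is exactly the integration burden the snapshot feed (format 1) removes.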

virtual base-class initialization during ctor/dtor

Rule 2 — http://bigblog.tanbin.com/2012/01/dcbc-dtor-execution-order.html shows the dtor sequence. Ctor runs in the opposite order — B-C-D.

The exception to Rule 2 is virtual base — Virtual base is constructed before non-virtual bases.

However, this rule still holds —
Rule 1 — dtor / ctor are Always in reverse orders.

Therefore, virtual base is destroyed Last. See http://msdn.microsoft.com/en-us/library/8ff62s5k.aspx

P292 ARM has a concise and complete treatment.

temporary topic – is actually a temporary queue

Q: Since the topic is unknown to other sessions, (See also blog on temp topic vs sybase temp table http://bigblog.tanbin.com/2010/10/temporary-queue-jms.html) how do other sessions publish to it?
A: this topic is *revealed* via the ReplyTo header. Senders receive private “invitations” to send to the temp topic. Once received, the recipient can save that topic and hit it any time.

Temp topic accepts no subscriber except the topic’s creator. 

This exclusive subscriber must be non-durable.

Since the topic has at most 1 receiver, it’s better named a TemporaryQueue.

trade confirm – key points

Keyword: compare — both sides generate confirms for the purpose of comparing them

Keyword: pre-settle — confirms must be compared before settlement

For a listed derivative, the match-maker (exchange etc) sends trade confirms to both buyer and seller, so both sides will independently verify the confirms — a 3-way confirm.

In OTC, both sides need to exchange confirms as well. Without it, there's real risk of “out trade”. In my bond execution engine, we are one of the 2 sides only (the dealer), so we generate one confirm only. I guess the retail side also generates an independent confirm.

For CDS trades, there are third party post-trade platforms (like Creditex) to provide this service. The counterparties could execute a trade electronically or over the phone, then send execution details to the third party, which matches them up before sending a confirm to each.

counterparty reference table

TblCounterParty is a typical static reference table in any trading system.
Usage: generate confirms.
Usage: Counterparty is needed to generate “settlement instructions”, as Jinwen pointed out.

An interesting complexity occurred with a “parent company” in the counterparty table. This row in the table has “associated counterparties” ie sub-entities. (The TblCounterparty table is actually hierarchical.) In this case, the actual counterparty was a sub, but the eager trader didn’t take down the particulars of her counterparty. Unable to decide what specific counterparty to enter on the trade, and unable to use the parent company, she used a “dummy” counterparty, hoping the actual identity would surface. But it didn’t. The sub-entity was untraceable. The trade had to be cancelled.

2 survivor spaces

— Annotated Sun documentation —

Java defines two generations: the young generation (sometimes called the “nursery”) and the old generation. The young generation consists of an “Eden space” and __two__ “survivor spaces”. The VM initially assigns all objects to the Eden space, and most objects die there. When it performs a minor GC in Eden, the VM moves any remaining objects from the Eden space to __one__ of the 2 survivor spaces.

The VM moves objects that live long enough in the survivor spaces to the “tenured” space. When the tenured generation fills up, there is a full GC that is often much slower because it involves all live objects.

The permanent generation holds all the reflective data of the virtual machine itself, such as class and method objects.

— adapted from online article (Covalent) —
1. work work work
2. When EDEN is full -> minor collection copies surviving objects into 1st survivor space
3. work work work
4. Eden is full again -> minor GC again
Copy from Eden to 2nd
Copy from 1st to 2nd
5. If 2nd fills up and objects remain in Eden or 1st, these get copied to the tenured generation

One survivor space is always empty, serving as the destination for the next minor collection.

6. work work work
7. Major collections occur when the tenured space fills up. Major collections free up Eden and both survivor spaces

reserve management in a bond dealer desk

A “reserve” is placed by a retail trading system and “owned” by a subsystem therein – called reserve manager. This retail system is a “client” of the trading desk built atop the desk’s inventory.

One of the most important servers (runs as a daemon) of the desk — the Offer mgr — receives a reserve request, and actually “moves” a portion (say 20 bonds) to the reserve mgr and says good-night to those 20 bonds. (They may come back though.)
– If the reserve expires untouched, the 20 bonds “move” back from reserve mgr to offer mgr.
– If part (say 15 bonds) of the reserve goes unused and expires, they too “move” back from reserve mgr to offer mgr.
– If the entire reserve quantity is bought by the requesting client, then reserve mgr and offer mgr both say farewell to the 20 bonds.

In the partial (15 bonds) case, the retail client actually buys part (5 bonds) of the reserve quantity. We say farewell to the 5 bonds, book the trade and leave the remaining (15 bonds) with the reserve mgr until expiration.
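The moves above can be sketched as simple quantity transfers. This is a toy model — the real offer mgr and reserve mgr are separate daemons — using the same numbers as the narrative (20 reserved, 5 bought, 15 back on expiry):

```java
public class ReserveDemo {
    static long offerQty = 100, reservedQty = 0; // starting inventory (made-up)

    static void reserve(long qty) { offerQty -= qty; reservedQty += qty; }
    static void buy(long qty)     { reservedQty -= qty; } // booked: farewell for good
    static void expire()          { offerQty += reservedQty; reservedQty = 0; }

    public static void main(String[] args) {
        reserve(20); // offer mgr says good-night to 20 bonds
        buy(5);      // retail client takes 5 of the reserve
        expire();    // the untouched 15 move back to the offer mgr
        System.out.println(offerQty);    // 95
        System.out.println(reservedQty); // 0
    }
}
```

The invariant to notice: bonds only ever sit with the offer mgr, sit with the reserve mgr, or leave via a booked trade — total always reconciles.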

when would JMS broker persist a msg

Condition: When a persistent producer sends a msg, broker saves it before sending ack to producer. This is __regardless__ of durable/nondurable subscribers.

Condition: If there’s any durable subscriber, then broker also saves the msg, probably before hitting subscriber. This is __regardless__ of the persistent/nonpersistent producer.

Note each condition alone necessitates saving msg to persistent store.

P101 of [[java message service]] describes the tricky scenario of durable-subscriber/non-persistent-producer. As stated above, the broker does save the msg, but probably after sending the ack to the producer. The producer thinks the msg was received by the broker and records the ack as evidence, then goes offline. The broker can fail and lose the msg.

ComponentUI instantiation in jcompopnent ctor

Experts say every(?) swing component’s ctor calls this.updateUI(). I was told (verified in jtable and jtree) that each class’s updateUI() calls something like setUI((ComponentUI) UIManager.getUI(this)).

This static method UIManager.getUI() is something like a global hashtable lookup. If you look at its implementation, getUI() basically calls createUI() reflectively. I believe it almost always instantiates a new ComponentUI instance.

In conclusion, swing component’s ctor indirectly instantiates a new ComponentUI instance. Note UI delegate instantiation is NOT delayed to painting time.
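A quick way to see this, assuming a standard JRE: a freshly constructed JButton already has its ComponentUI delegate installed, with no painting having occurred.

```java
import javax.swing.JButton;
import javax.swing.plaf.ButtonUI;

public class UiDelegateDemo {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true"); // no display needed
        // The ctor runs updateUI(), which asks UIManager for a delegate,
        // so the ComponentUI exists long before anything is painted.
        JButton button = new JButton("hi");
        ButtonUI ui = button.getUI();
        System.out.println(ui != null); // true
        System.out.println(ui.getClass().getSimpleName()); // e.g. MetalButtonUI, L&F-dependent
    }
}
```

The concrete class printed depends on the installed look-and-feel, but the non-null check holds regardless.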

Stamford equity drv IV

Q: describe some of the GC algorithms?
(Now I think) A: ref counting + root object /descent/. Ref-counting alone isn’t reliable due to islands (cyclic references unreachable from any root).

Q: For a given specification, a Java version uses more memory than C and many languages. Does this impact performance, and how?
%%A: more swapping

Q: what if enough free memory?
%%A: then no performance penalty
Now I think reading/writing 20% more memory takes more time.

Q: Is JVM a performance problem, with the new JIT compilers?

Q: Beside JVM and extra mem usage, what else would you say to criticize java performance among competing languages?

Q: How many KB for a java Object?

Q: Why do you say java userland threads are light-weight relative to kernel threads?
%%A: memory. One address space per JVM

Q: does Object.java have any field?
%%A: serial number
A: none

Q: How does the GC decide who should live in the old-generation area and who in the young-generation area?

Q: Do new objects live in the young or old generation area?
%%A: young, except static fields

Q: what’s hashmap load factor?

Q: What can you tune in JVM
%%A: heap size, native/green thread, young generation size

Q: We agree that threads, statics and JNI are 3 types of root objects, but I’ve not heard of JNDI as a 4th type. Are you sure?

Q: What’s so good about java’s thread feature compared to other languages?
I only know 2 comparable languages — c# and c++.
%%A(2013): memory model; concurrent collections

Q: key challenges of large java projects in your past?

What if this inflated WallSt IT salary bubble bursts@@

Update: I feel it’s unlikely. The industry is becoming more tech, more data, more automated. Complexity increases. Software has to deal with more complexity, not less.

Tip: remember survival = job market beauty contest, not on-the-job productivity.
** Stay in the leading pack – marathon.
** know your weaknesses on the job_market and on the job

Tip: stay relevant to the business. Business needs critical apps rolled out fast so they can trade. They are willing to sponsor such projects. So what are the critical, non-trivial apps?
– trade booking, sub-ledger, daily unrealized pnl
– pricing – usually non-trivial
– market data – sometimes trivial as traders can simply get a low-tech data feed
– OMS – only in exchange trading

Tip: know the high margin trading desks in each city. These profit centers will continue to pay high. SG (FX …) HK (eq), Ldn (FX, IR …)

Tip: remember pure tech firms might catch up with Wall St IT pay.

—(Rest of the items may not address the issue in the subject.)—

Tip? broaden trec into GUI. Many of my server-side systems have a GUI counterpart.
Tip? broaden trec from java/c++ to dotnet, math, market data.
Tip?? move up to architect, team lead
Tip?? broaden trec into BA and prj mgmt
Tip?? deepen — unix tuning, DB tuning, memory tuning, math