Mostly inspired by the MS equity order-management “frameworks”
- message-based, not necessarily MOM.
- FIX messages are the most common
- SOAP messages are also possible.
- BAML system is based on MOM (tibrv)
- message routing based on rules? Seems to be central to some sell-side /bloated/ “platforms” consisting of a constellation of processes.
- client newOrder, cancel requests
- trading venue (partial) fills
- Citi muni reoffer is driven by market data events, but here I focus on equity systems
- Stirt realtime risk is driven by market data events + new trade booking events
- buy-side would have order-origination events, but here I focus on sell-side systems
- market data subscription? Actually not so important to some eq trading engines. Buy-side would make trading decisions based on market data, but a sell-side won’t.
Note in these designs, the complexity never disappears or shrinks; it shifts somewhere else more manageable.
- [c] stateless — http
- complexity moves out of individual services
- [c] pure functions — without side effects
- use the database concept in solving algo problems such as the skyline #Gelber
- stateless static functions in java — my favorite
- EDT — swing EDT
- singleton implemented as a static local object, #Scott Meyers
- [c] garbage collection — as a concept.
- Complexity shifts from application into the GC module
- in c# and c++, nested classes carry no implicit reference to an enclosing instance — they behave like java’s static nested classes, unlike java’s (default) inner classes
- python for-loop iteration over a dir, a file, a string … See my blog post
- [c] immutable — objects in concurrent systems
- [c] pipe — the pipe concept in unix is a classic
- [c=classic, time-honored]
In 2018, I have heard of more and more sites that push the limits of stateless designs. I think this “stateless” trend is innovative and /bold/. Like any architecture, these designs have inherent “problems” and limitations, so you need to keep a lookout, deal with them, and adjust your solution.
Stateless means simplicity, sometimes “extreme simplicity” (Trexquant)
Stateless means easy to stop, restart, back up or recover
Stateless means lightweight. Easy to “provision”, easy to relocate.
Stateless means easy scale-out? Elastic…
Stateless means easy clustering. Http is an example. If a cluster of identical instances is stateless, then no “conversation” state needs to be maintained across requests.
https://jaxenter.com/nobody-puts-java-container-139373.html is not too shallow, and not too deep.
* containers are standard Linux processes running a shared kernel, not isolated kernels
I have hit this same question twice — Q: in a streaming price feed, you get IBM prices in the queue but you don’t want consumer thread AA to use “outdated” prices. Consumer BB needs a full history of the prices.
I see two conflicting requirements from the interviewer. I will point out this conflict to the interviewer.
I see two channels — in-band + out-of-band needed.
- in-band only — if full tick history is important, then the consumers have to /process/ every tick, even if outdated. We can have dedicated systems just to record ticks, with latency. For example, Rebus receives every tick, saves it and sends it out without conflation.
- dual-band — If your algo engine needs to catch opportunities at minimal latency, then it can’t afford to care about history. It must ignore history. I will focus on this requirement.
- in-band only — Combining the two: if your super-algo-engine needs to analyze tick-by-tick history and also react to opportunities, then the “producer” thread alone has to do all the work up to order transmission, and I don’t know if it can be fast enough. In general, the fastest data processing system is single-threaded, without queues and with minimal interaction with other data stores. Since the producer thread is also the consumer thread for the same message, there’s no conflation: every tick is consumed! I am not sure about the scalability of this synchronous design; a FIFO queue implies latency. Anyway, I will not talk further about this stringent “combo” requirement.
https://tabbforum.com/opinions/managing-6-million-messages-per-second?print_preview=true&single=true says “Many firms mitigate the data they consume through the use of simple time conflation. These firms throw data on the floor based solely on the time that data arrived.”
In the Wells interview, I proposed a two-channel design. The producer simply updates a “notice board” with the latest price for each of 999 tickers. Registered consumers get notified out-of-band, on some messaging thread, to re-read the notice board. An async design has latency; I don’t know how tolerable that is. I feel async and MOM are popular and tolerable in algo trading. I should check my book [[all about HFT]]…
In-band only — However, the HSBC manager (Brian?) seems to imply that for minimum latency, the socket reader thread must run the algo all the way and send order out to exchange in one big function.
Out-of-band only — two market-leading investment bank gateways actually publish periodic updates regardless of how many raw input messages hit them. Not event-driven and not monitoring every tick!
- Lehman eq options real time vol publisher
- BofA Stirt Sprite publishes short-term yield curves on the G10 currencies.
 The notification should not contain price numbers. Doing so defeats conflation and brings us back to a FIFO design.
Q: can you describe a blocking scenario in a CPU-bound system?
Think of a few CPU bound systems like
- database server
- O(N!) algo
- MC simulation engine
- stress testing
I tend to think the thread submitting a heavy task is usually the same thread that processes the task. (Such a thread doesn’t block!)
However, in a task-queue producer/consumer architecture, the submitter thread enqueues the task and can do other things or return to the thread pool.
A workhorse thread picks up the task from queue and spends hours to complete it.
Now, I present a trivial blocking scenario in a CPU bound system —
- Any of these threads can briefly block in I/O if it has big data to send. Still, system is CPU-bound.
- Any of these threads can block on a mutex or condVar
Real Time Symbol Data is responsible for sending out all security/product reference data in real time, without duplication.
- latency — typically 2ms (not microsec) latency, from receiving to sending out the enriched reference data to downstream.
- persistence — any data worth sending out needs to be saved. In fact, every hour the same system sends a refresh snapshot to downstream.
- performance penalty of disk write — is handled by innoDB. Most database access is in-memory. Disk write is rare. Enough memory to hold 30GB of data. https://bintanvictor.wordpress.com/2017/05/11/exchange-tickers-and-symbols/ shows how many symbols there are across all trading venues.
- insert is actually slower than update. But first, the system must check whether an insert or update is needed at all. If there’s no change, don’t save the data or send it out.
- burst / surge — is the main performance headache. We could have a million symbols/messages flooding in
- relational DB with mostly in-memory storage
Are there any best practices online?
Q1: Say I have a small db table of 10 columns x 100 rows. Keys are
non-unique. To cache it we want to use STL containers. What container?
%%A: multimap or list. unordered_multimap? I may start with a vector, for simplicity. Note a map (unique keys) would silently lose one of 2 rows sharing a key; a multimap keeps both rows even if they aren’t 100% identical.
%%A: for a map, just look up using this->find(). For a list, iterate using generic std::find()
Q1c: what if I have a list of keys to search?
%%A: there is a std::set_intersection() algorithm, but it requires both ranges sorted. Otherwise I would write my own nested iteration: loop through the target keys and find() each one.
Q1e: how do you hold the 10 col?
%%A: each object in container will have 10 fields. They could be 10 custom data classes or strings, ints, floats. Probably 10 smart pointers for maximum flexibility.
Q1h: what if I have other tables to cache too?
%%A: parametrize the CacheService class. CacheService class will be a wrapper of the vector. There will be other fields beside the vector.
Q1m: how about the data class? Say you have a position table and account table to cache
%%A: either inheritance or template.
As of 2017, I see evidence that both Morgan Stanley and Millennium have such a system