## 3 real overheads@vptr #inline

Suppose your class Trade has virtual functions and a comparable class Order has no virtual functions. What are the specific runtime overheads of the vptr/vtable usage?

  1. cpu cache efficiency — memory footprint of the vptr in each object (Java objects are affected too!). If you have a lot of Trade objects each holding only one char data field, then the vptr expands the footprint many-fold and you waste cache lines.
    • [[ARM]] singles out this factor as a justification for -fno-rtti… see RTTI compiler-option enabled by default
    • [[moreEffC++]] P116 singles out vptr footprint as the biggest performance penalty of vptr
  2. runtime indirection — the non-virtual Order call is “a few memory references more efficient” [1] than the virtual call
  3. inlining inhibition — the most significant overhead. P209 [[ARM]] says inline virtual functions make perfect sense, so it is best to bypass the vptr and call the virtual function directly, if possible.

[1] P209 [[ARM]] wording

Note: a virtual function unconditionally introduces the first overhead, but overheads #2 and #3 can sometimes be avoided by a smart compiler.
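To see overhead #1 concretely, here's a minimal sketch (my own illustration, not from [[ARM]]):

#include <iostream>

struct Order { char data; };                     // no virtual functions
struct Trade { char data; virtual ~Trade() {} }; // compiler adds a vptr

int main() {
    // typical 64-bit output: sizeof(Order) == 1, sizeof(Trade) == 16
    // (8-byte vptr + 1 char, padded up to 8-byte alignment);
    // a 64-byte cache line holds 64 Orders but only 4 Trades
    std::cout << sizeof(Order) << " " << sizeof(Trade) << std::endl;
}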

 

## How RTS achieved 400-700 KMPS #p2p

Official benchmark result – 700 KMPS (thousand messages per second) per instance of Rebus or parser. Here are some enabling features:

  • [! iii] mostly single-threaded apps. Some older parsers are multi-threaded, but each thread operates in single-threaded mode. No shared mutable state :)
    • In the order book replication engine, we use lockfree in one place only
  • ! Stateless [1] parser (single-threaded mode loves statelessness), small footprint. Only the downstream component (Rebus) handles cancels/busts and modifications. So we can run many parser instances per machine, or many threads per instance, fully utilizing the cpu cores. See https://haydenjames.io/linux-performance-almost-always-add-swap-space/
  • [! ii] allocation avoided — on millions of Output message objects. A pre-allocated ring buffer eliminates new(). Very few and small STL containers; mostly arrays. See pre-allocate DTOs@SOD #HFT
  • ! allocation avoided — on millions of Input message objects, thanks to reinterpret_cast() on pointers: nearly zero-copy (sketched at the end of this post). See reinterpret_cast^memcpy in raw mktData parsing
  • ! allocation avoided — custom containers replace the STL containers, since STL containers allocate from the heap
  • ! p2p messaging beats MOM 
  • ! Socket buffer tuning — to cope with bursts. 64-256MB receive buffer in the kernel; 4MB user-mode buffer
  • low memory footprint. Most parsers use around 60MB (mine used 100 to 150MB). I think the small footprint brings many latency benefits.
  • epoll — to replace select(), with two orders of magnitude improvement in throughput
  • buffering of Output list (of messages). I guess this hurts latency but enhances throughput
  • Very fast duplicate seq check, without large history data — a hot function
  • large containers like the ring buffer are pre-sized. No reallocation.
  • mostly array-based data structures — cache-friendly
  • Hot functions use pass-by-reference (pbref), RVO, or move semantics, minimizing copies on the stack
  • aggressively simplified parsing. Minimal logic
  • Special Logger to reduce I/O cost
  • Sharding to speed up symbol lookup
  • kernel bypass : RTS
  • No virtual functions — enhancing data-cache efficiency and inlining. See 3 overheads@vptr #ARM
  • Inlining
  • Speed with control — best of both, to reduce the real risk of our connection becoming slow or lossy:
  • Depth monitor to detect missed messages
  • Capacity tracker to monitor mps rates
  • Missed seq reporting
Legend: ! = one of the really effective techniques in RTS; i = presented in multiple (!!, !!!) interviews


[1] The parser does remember sequence numbers; in fact sequence-number management is crucial. Before retransmission support was added to the parser, we never sent any retrans request, but we did keep track of gaps. Mostly we used the gap-tracker to check duplicates across Line A/B. We also relied on our sequence-number state to handle sequence resets.
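Here's a minimal sketch of the reinterpret_cast technique from the bullet list above. The message layout is hypothetical, not the real RTS wire format:

#include <cstddef>
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)              // wire formats are usually packed
struct AddOrderMsg {               // hypothetical exchange message
    uint16_t msgType;
    uint64_t orderId;
    uint32_t qty;
    int32_t  price;                // fixed-point, e.g. price x 10000
};
#pragma pack(pop)

// Zero-copy "parse": overlay the struct onto the raw socket buffer
// instead of copying each field out. (Formally, memcpy into a local
// struct avoids strict-aliasing issues; reinterpret_cast is the common
// zero-copy shortcut. Also assumes wire endianness matches the host.)
void onPacket(const char* buf, size_t len) {
    if (len < sizeof(AddOrderMsg)) return;       // incomplete message
    const AddOrderMsg* msg = reinterpret_cast<const AddOrderMsg*>(buf);
    std::printf("order %llu qty %u\n",
                (unsigned long long)msg->orderId, msg->qty);
}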

## CRTP usages #templateMethod

This blog post describes two usages of CRTP.

I briefly read the excellent blog https://www.fluentcpp.com/2017/05/16/what-the-crtp-brings-to-code/. I feel CRTP is too advanced for almost all the contexts I can think of. (In contrast, I can see some “necessary” usages of SFINAE, such as the one in my little AddOrder.h)

https://stackoverflow.com/questions/262254/crtp-to-avoid-dynamic-polymorphism shows a simple [1] example of CRTP replacing virtual functions. How much efficiency improvement does it bring? Questionable. I always say that if you want the lowest latency, you should write selected modules in assembly language, or put them in hardware like an FPGA.

[1] if there is anything simple in template meta-programming.

I have heard of several techniques to avoid virtual functions, but I believe the actual evidence (in terms of measured latency improvement) is likely unconvincing or insignificant. Therefore, if CRTP is used to eliminate virtual-function latency, I am not sure how much value it adds.

There are other techniques to avoid “virtual”. I feel they are easier to understand than CRTP.

Sometimes CRTP is the only choice — if the virtual function needs its own template parameters, the compiler will complain that a “function template can’t be virtual”. Rahul hit this complaint.
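Here's a sketch of that compiler limitation and the CRTP workaround (all names are mine, for illustration):

// won't compile if uncommented: member function templates can't be virtual
// struct BaseV {
//     template<typename T> virtual void process(T item);
// };

// CRTP workaround: the base forwards to whatever the derived class defines;
// the call is resolved at compile time, so no "virtual" is needed
template<typename Derived>
struct Base {
    template<typename T>
    void process(T item) {
        static_cast<Derived*>(this)->processImpl(item);
    }
};

struct Sub : Base<Sub> {
    template<typename T>
    void processImpl(T item) { (void)item; /* handle item */ }
};

int main() {
    Sub s;
    s.process(42);     // works for any T, without any virtual dispatch
    s.process(3.14);
}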

Q: for a true virtual method v1(), the derived class is not yet written when the base class is compiled; only at run time can the “system” pick the right implementation of v1(). How about CRTP?
A: with CRTP the base class is not compiled ahead of the derived class. Each derived class includes a header defining the base class template, so the template is instantiated alongside each derived class and the call target is resolved at compile time.

——

Besides this “virtual-elimination” use case, CRTP has other applications (which I’m still unfamiliar with), but if asked in interviews I will only mention this one use case. One of the common “other usages” is TemplateMethod with compile-time (not runtime) resolution, the 1st usage in the excellent blog. In the classic template method pattern, the “procedure” is published and finalized in the base class; individual steps of the procedure are virtual methods, resolved at runtime. In the CRTP version, superclass methods call subclass methods, safely and cleanly. A superclass using its subclass is taboo in most contexts, but both the traditional TemplateMethod and the CRTP TemplateMethod are notable exceptions.

The article didn’t clearly highlight a key point about this usage — the base class NumericalFunctions is general-purpose, designed to be subclassed by anyone. I could write a Temperature class to subclass NumericalFunctions too. This way, the code in NumericalFunctions is available for reuse.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/template/CRTP_demo1.cpp is working code. Key points to remember about the code sample:

  • base-class — is a template with a dummy type “Sub”
  • derived classes — have the form “class Sub1 : public Base<Sub1>”
  • the static dispatch (non-virtual) function in Base always static_casts “this” to Sub*
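Here's a condensed sketch of those three points (simplified; names may differ from the repo code):

#include <iostream>

template<typename Sub>               // base class with a dummy type "Sub"
struct Base {
    void greet() {                   // non-virtual: static dispatch
        static_cast<Sub*>(this)->greetImpl();
    }
};

struct Sub1 : public Base<Sub1> {    // derived class form: Base<Sub1>
    void greetImpl() { std::cout << "Sub1\n"; }
};

int main() {
    Sub1 s;
    s.greet();   // Base::greet() calls Sub1::greetImpl(), no vptr involved
}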

 

## simplified order book design doc: jump

It’s tempting to use a virtual function processMessage() to process the various order types (A, M, X …) and trade types, but virtual functions add runtime overhead. Template specialization is a more efficient design, but given the limited timeframe I implemented an efficient and simple alternative.

Assumption: M messages can only change quantity. Price change is not allowed — the sender would need to cancel and submit a new order. The B/S and price fields of the order should never change, but validation is omitted in this version.

Assumption: T messages and the corresponding M and X messages (and the initiating A message) are assumed consistent and complete. Validation is technically possible but omitted; a validation failure would indicate lost messages.

The cornerstone of the design is the data structure of the order book — an RB-tree of linked lists. Add is O(logN) due to the tree insert. Modify is O(1) thanks to the lookup array. Remove is O(1), eliminating the tree search. This is achieved with the lookup array, and by saving the iterator into the order object.
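Here's a minimal sketch of that structure (simplified: I save the iterators in the lookup entry rather than in the order object; orderIDBase and the array size are placeholders):

#include <list>
#include <map>
#include <utility>
#include <vector>

struct Order { long id; int qty; };            // side, price etc omitted

typedef std::list<Order> PriceLevel;           // FIFO queue at one price
typedef std::map<double, PriceLevel> Book;     // RB-tree: price -> level

struct Entry {                                 // saved back-references
    Book::iterator level;
    PriceLevel::iterator pos;
};

const long orderIDBase = 0;                    // placeholder
Book bids;
std::vector<Entry> lookup(1000000);            // index = orderID - orderIDBase

void addOrder(long id, double px, int qty) {   // O(logN): one tree insert
    Book::iterator lvl = bids.insert(std::make_pair(px, PriceLevel())).first;
    Order o = { id, qty };
    lvl->second.push_back(o);
    Entry e = { lvl, --lvl->second.end() };
    lookup[id - orderIDBase] = e;
}

void modifyOrder(long id, int qty) {           // O(1): direct array access
    lookup[id - orderIDBase].pos->qty = qty;
}

void removeOrder(long id) {                    // O(1): no tree search
    Entry& e = lookup[id - orderIDBase];
    e.level->second.erase(e.pos);              // erase via the saved iterator
    if (e.level->second.empty()) bids.erase(e.level); // drop empty level
}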

There are 2 containers of pointers — the map of lists and the lookup-array. It would be better to use containers of smart pointers to ease memory management, but the (pre-C++11) STL doesn’t provide a smart pointer that can live in a container.

All equality tests on doubles are done using “==”. I should use some kind of tolerance if time permits.
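A tolerance-based comparison could be as simple as this (the tolerance value is a placeholder; it should depend on the price tick size):

#include <cmath>

inline bool priceEqual(double a, double b, double tol = 1e-9) {
    return std::fabs(a - b) < tol;
}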

Here’s the documentation in the lookup-array class:

/* This class encapsulates an array of pointers.
Assumption 1 — an exchange is likely to generate auto-incrementing orderIDs. Here’s my reasoning. OrderIDs are unique, as stated in the question. If orderID generation isn’t continuous, then the generator has 2 choices about the inevitable gap between 2 adjacent ID numbers: it can leave the gap forever wasted, or somehow “go back” into a gap and pick a number therein as a new orderID. To do the latter, it must keep track of which numbers are already assigned — rather inefficient. There are proven in-memory algorithms to generate auto-increment identity numbers, and I assume an exchange would use them. Auto-increment numbers make a good candidate for an array index, but what about the total number range?

Assumption 2 — each day the number range has an upper limit. The exchange must agree with exchange members on the format of the orderID. It’s likely to be 32 bits or 64 bits, and won’t be a million bits.

Question 1: can the exchange use orderID 98761234 on both IBM and MSFT during a trading day? I don’t know, and I feel it doesn’t matter. Here’s the reason.

Case 1: suppose the exchange uses an *independent* auto-increment generator for each stock, so the IBM and MSFT generators can both generate 98761234. My design would use one array for IBM and one array for MSFT. For basket orders, additional generator instances might be needed.

Case 2: suppose the exchange uses an independent auto-increment generator for each stock, but each stock uses a non-overlapping number range, so 98761234 falls into the IBM range only. My design would need to know the number range so as to convert orderID to array index and conserve memory.

Case 3: suppose the exchange uses a singleton auto-increment generator across all stocks (a bottleneck inside the exchange). My design would use one gigantic array. Given Assumption 1, the numbers would be quasi-continuous rather than sparse (where below 50% of the range is assigned). Suppose the range is S+1, S+2 … S+N; then my array would be allocated N elements (where S is orderIDBase). There’s a limit on N in reality — every system is designed for a finite N; no system can handle 10^9999 (a 1 followed by 9,999 zeros) orders in a day. Each array element is a pointer, so on a 64-bit machine N elements take 64N bits, or 8N bytes. If I have 640GB of memory, N can be 80 billion but no higher. To scale out horizontally, we would hope Case 1 or 2 holds.

Therefore the answer to Question 1 shows an array of pointers is feasible for lookup by orderID. In a real system a hash table is likely to be more time/space-efficient, but in this exercise only STL is available, which provides no hash table. A tree-based map has O(logN) lookup — too slow. My choice is between a built-in array and a non-expanding vector; I chose the array for simplicity.
*/

## clever use of enum(char@@) in demanding c++ app #Miami

class Der: public Base {…..}

Requirement — we need to downcast a Base pointer, but dynamic_cast has overhead.

Der* returnVal = static_cast<Der*>(new Base); // will compile but is potentially disastrous, because returnVal can be a partially wild pointer.

Q: without incurring the overhead of vtbl lookup, how do I decide if a pointer to Base given to me actually points to a Base or Der instance?

%%A: type_info still uses vtbl.
%%A: use a Base field to indicate the type. Both ctors set the field, so the Der ctor overwrites it.
A: an enum field. If sizeof(aBase) is a tiny 2 bytes, then adding an enum field adds just 1 byte, whereas adding a vptr would add 8 bytes. Big difference if we instantiate billions of this class.

You can also use a char, which is always 1 byte. An enum can be configured to take 1 byte too, while offering meaningful, itemized names.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/88miscLang/enumBasedRTTI.cpp has my tested code
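The idea in a nutshell (my sketch below is independent of the repo code; see the link above for the tested version):

#include <cassert>

struct Base {
    enum Type { TBASE, TDER };
    char type;                 // 1-byte tag instead of an 8-byte vptr
    Base() : type(TBASE) {}
};

struct Der : public Base {
    Der() { type = TDER; }     // Der ctor runs after Base ctor, overwriting the tag
    // extra Der fields ...
};

// poor man's downcast: one char comparison, no vtbl lookup
Der* downcast(Base* b) {
    return b->type == Base::TDER ? static_cast<Der*>(b) : 0;
}

int main() {
    Base b; Der d;
    assert(downcast(&b) == 0);
    assert(downcast(&d) == &d);
}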