simple example@ptr to method[[Essential c++]]

I tried repeatedly but unsuccessfully to wrap my mind around ptr to methods, until I hit P130 [[essential c++]].

Your class should have a bunch of sister methods with identical params and identical returns. Remember every ptr VARIABLE[1] exists as a (compile-time) symbol table entry with
– a name,
– a value (address of the pointee) and
– a declared type

This pointer variable can point to any of those sister methods, but not to any look-alike method in another class.

[1] there’s no such thing as a “pointer” — only pointer _variables_ and pointee addresses.


non-trivial Spring features in my recent projects

FactoryBean —
lookup-method — method injection using CGLIB
InitializingBean — an alternative to init-method
Auto wiring
component-scan
@RunWith for unit testing
property place holders
converters
jms messageConverters
TestContextManager.java

–SmartLifecycle.java — simply magical! Used to auto-start threads when our custom ApplicationContext starts (see the sketch below, after this list).

— @Resource — on a field “cusip” this means “Mr XMLBeanFactory, please inject into this.cusip an XML bean named cusip”. The system implicitly treats this *field* as a dependency in the XML bean definition of this class, but you still need to declare the “cusip” bean yourself.
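On the SmartLifecycle point, here is a minimal sketch (the worker loop is hypothetical, not our real code) of a bean that auto-starts its own thread when the ApplicationContext starts; register it as a bean and the context calls start()/stop() for you.

import org.springframework.context.SmartLifecycle;

public class AutoStartWorker implements SmartLifecycle {
    private volatile boolean running;
    private Thread worker;

    @Override public boolean isAutoStartup() { return true; } // context calls start() during refresh, no explicit call needed

    @Override public void start() {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                // hypothetical polling loop -- do the background work here
                try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }, "auto-start-worker");
        worker.setDaemon(true);
        worker.start();
    }

    @Override public void stop() {               // context calls this on shutdown
        running = false;
        if (worker != null) worker.interrupt();
    }

    @Override public void stop(Runnable callback) { stop(); callback.run(); }

    @Override public boolean isRunning() { return running; }

    @Override public int getPhase() { return 0; } // start order relative to other lifecycle beans
}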

Q: Just where and how does my aa_reoffer JMS test start the broker daemon thread?
A: Neoreo QueueFactoryBean (a FactoryBean derivative) has a getObject(). In it we create a new JmsTemplate and call execute() on it; that Spring method opens a full-blown JMS session, which starts the broker daemon thread on demand.
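A minimal sketch of that FactoryBean + JmsTemplate interaction (class name and queue name are hypothetical, not the Neoreo code): getObject() touches JmsTemplate.execute(), which opens a JMS session behind the scenes.

import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import org.springframework.beans.factory.FactoryBean;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.SessionCallback;

public class QueueFactoryBeanSketch implements FactoryBean<Queue> {
    private final JmsTemplate jmsTemplate;

    public QueueFactoryBeanSketch(ConnectionFactory cf) {
        this.jmsTemplate = new JmsTemplate(cf);
    }

    @Override
    public Queue getObject() {
        // execute() creates a connection + session behind the scenes;
        // with an embedded broker, this is where the daemon thread appears
        SessionCallback<Queue> callback = session -> session.createQueue("some.queue.name"); // queue name is hypothetical
        return jmsTemplate.execute(callback, true); // true = start the connection
    }

    @Override
    public Class<?> getObjectType() { return Queue.class; }

    @Override
    public boolean isSingleton() { return true; }
}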

arch design xp]job candd #trading sys

(Some personal observations, to be posted on my blog)

If a job description says … architectural design experience desired…, but the role is NOT the architect, then I bet the new hire is NOT really welcome to comment on architectural decisions.

I have seen enough to be convinced that each small development team (3 – 10 developers) only has one architect. “No 2 tigers in 1 mountain.” as the Chinese saying goes.

If a new hire has a lot of architect experience, he (or she) had better keep a low profile on that. Whether the system is in an early or final design stage, the designated architect inevitably has some good ideas about how she (or he) wants it done. Most software designs have non-trivial flaws and painful trade-offs, so it’s easy to question her design.

( Requirements are best understood after system roll-out, and very vague in the design stage. Requirements are always subject to change, so every architect endeavors to future-proof her design, which is always a hit and miss — we always overshoot there and under-design here. )

The system architect won’t admit it, but she seldom enjoys challenging questions on her design. If the question gives her a chance to show off her design, fine. Otherwise, such questions waste more time and add more confusion than value. Some questions may bring rare insight and help the design, but I’m no expert on the communication process required for such questions to be openly asked. There are many wrong ways to ask such a right question.

As job descriptions put it, employers genuinely “desire” some amount of architect experience in a new hire. Perhaps such experience helps the new hire work with the existing architect. A soccer team desires players with center-midfield experience; that doesn’t mean you and the existing center midfielder should both play that position.

RBC iview 2

Q: How do you size your JVM memory, besides -Xmx?
A: the initial size (-Xms) is also important, because dynamic resizing is bad for performance.

How would u choose between dist cache vs database?

Why does volatile have a problem with long/double? How do you solve it?

Your exception strategy? return null? return error code?

What’s the benefit compared to a hierarchy of exception classes?

What if your queue becomes full when your consumer is offline?

Q: What’s the middle layer between your web server and the database in a high-volume web site? None?
A: Before distributed caches were easily available, a lot of popular websites were fine without one. One solution was adding enough memory (100G) to the DB server so the entire DB fits in memory, eliminating disk I/O. Another solution is one-master-many-slave one-way replication, so read-only operations are highly concurrent and hit a nearby replica DB server. In my department, we split position data into 11 premises to speed things up, without any distributed cache sandwiched between web server and DB. We do have a cache holding account data, and we use RV to update that cache.

PnL attribution — first lesson

For any derivative risk engine, one of the most essential pieces of information the business wants to see every day is PnL attribution.

http://en.wikipedia.org/wiki/PnL_Explained illustrates that a typical report has
Column 1: PnL — the PnL as calculated outside of the PnL Explained report
Column 2: PnL Explained — the sum of the explanatory columns
Column 3: PnL Unexplained — calculated as PnL minus PnL Explained (i.e. Column 1 – Column 2)
Column 4: Impact of Time (theta) — the PnL due to the change in time
Column 5: Impact of Prices (delta) — the change in the value of a portfolio due to changes in the underlier price
Column 7: Impact of Volatility (vega) — the PnL due to changes in (implied) volatility
Column xxxx: new trades, cancels, …

http://pnlexplained.com/PEP_PnL_Explained_FAQ.html shows that usually the attribution numbers (theta att, delta att, vega att …) add up to something close to the total unrealized PnL (Column 1). The mismatch is known as PnL Unexplained (Column 3).

A market maker should be delta-hedged (regulatory pressure) so delta attribution is kept low — the MM should stay unaffected by stormy markets. See http://bigblog.tanbin.com/2011/12/buy-side-vs-sell-side-trader-real-diff.html

Let’s take volatility attribution for example. http://www.pnlexplained.com/ has a nice hierarchical diagram showing the breakdown of vol attribution.
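To make the column relationships concrete, here is a made-up numeric example (all numbers are hypothetical):

\[ \text{PnL Explained} = \theta_{att} + \delta_{att} + \text{vega}_{att} = 20 + 60 + 15 = 95 \]
\[ \text{PnL Unexplained} = \text{PnL} - \text{PnL Explained} = 100 - 95 = 5 \]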

dependency injection: promises+justifications

I was asked this in 2017 and 2019! Rare longevity for a jxee topic.

  • “boilerplate code slows you down”…. the wiring
  • “code density”
  • separating behavior from configuration. See post on “motivation of spring”
  • code reuse, cutting compile-time and transitive dependencies — u can move the aircon without moving the house. Without DI, “to build one class, you must build the entire system”
  • “tests are easier to write, so you end up writing more tests.”
  • easy to mock

These are the promises and justifications as described by experienced users.
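To make “separating behavior from configuration” and “easy to mock” concrete, here is a minimal hand-wired sketch (all class names are hypothetical); a container like Spring would do the same wiring via XML or annotations.

interface QuoteSource {
    double lastPrice(String symbol);
}

class PricingService {
    private final QuoteSource quotes;       // behavior depends on an abstraction, not a concrete class

    PricingService(QuoteSource quotes) {    // dependency injected, not constructed inside the class
        this.quotes = quotes;
    }

    double midNotional(String symbol, int qty) {
        return quotes.lastPrice(symbol) * qty;
    }
}

class DiDemo {
    public static void main(String[] args) {
        // the "configuration": hand-wire a fake source, exactly as a unit test would inject a mock
        QuoteSource fake = symbol -> 100.0;
        System.out.println(new PricingService(fake).midNotional("IBM", 5)); // prints 500.0
    }
}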

SL WCF async call – a real example

In one of my SL apps, the client needs to
1) connect (via ConnectAsync/ConnectCompleted) to a WCF server so as to subscribe to a stream of server-push heartbeat messages. These heartbeats must update the screen once the screen is ready.

2) the client also needs a timer to drive periodic pings using getXXXAsync/getXXXCompleted. If a ping succeeds/fails, we would like to update the status bar. Therefore these callbacks had better come after screen setup.

3) at start time the client makes one-time get1Async/get1Completed, get2Async/get2Completed, get3Async/get3Completed calls to receive initial static data so as to populate the GUI. I wrapped these calls in a static Download6() method.

*) basically all this data from the server (whether server-pushed or callbacks) “wants” to update the screen, provided static data is loaded, which is done by the D6 callbacks. So the D6 callbacks had better happen before heartbeats come in, or heartbeat processing would hit empty static data and cause problems. Also, the D6 callbacks must execute after screen setup, because those callbacks need to update the screen.

That’s basically the requirements. In my initial (barely working) solution everything except timer stuff happens on the main thread.

1) at start time, main thread invokes D6. The callbacks would come back and wait for the main thread to execute them — troublesome but doable
2) main thread makes initial ConnectAsync. The callback would hit main thread too
3) before these callbacks, main thread finishes painting the screen and setting up the view model objects.
4) now the D6 and ConnectCompleted callbacks hit main thread in random order. D6 callbacks tend to happen before server pushes heartbeats.
5) upon ConnectCompleted, we subscribe to heartbeats
6) Timer has an initial delay (about 10s) so usually starts pinging after the dust settles. The Async and Completed calls execute on 2 different non-main-threads.

All the “scheduling” depends on event firing. It’s not possible to say “do step 1, then at the end of it do step 2 …” on the main thread.

I now prefer to move the connect, d6 etc off main thread, so the callbacks would hit some other threads. More importantly, this way I have a chance to utilize locks/wait-handles/events to properly serialize the Get/Connect calls, the timer-start and the subscription to server-push heartbeats. This means Step B starts only after Step A completes all callbacks. This was not possible earlier as it would block the main thread and freeze the screen.

WCF async call in SL – 2 funny rules

Rule AB0) in all cases, the Main thread must not be blocked during the WCF call. Some part of the flow must happen on the Main thread.[1]

A) You can make an xxxAsync call from either Main thread (Thread1 in debugger) or a non-Main-thread such as thread pool or timer.

Rule 1) Main thread will execute xxxAsync(), then finish whatever it has to do, then execute the callback when it becomes idle. The callback ALWAYS executes on the same Main thread.

Rule 2) non-Main thread executes xxxAsync(), then the callback usually executes on the same (or another) non-main-thread.

B) Synchronous? It took me a bit of effort to make the call synchronously. But obey Rule AB0. See http://www.codeproject.com/Articles/31018/Synchronous-Web-Service-Calls-with-Silverlight-Dis

var channelFactory = new ChannelFactory<ISimpleService>("*"); // ISimpleService stands in for the service-contract interface; the generic argument was lost in the blog's HTML
var simpleService = channelFactory.CreateChannel();
var asyncResult = simpleService.BeginGetGreeting("Daniel", null, null);
string greeting = simpleService.EndGetGreeting(asyncResult);

————-
I find it a good rule to avoid calling xxxAsync on Main thread in non-trivial SL apps. I now do it on other threads. It often involves more BeginInvoke, but eliminates a lot of future restrictions.

Incidentally, if the callback is on a non-Main thread but needs to modify the View or Model, then you need to use BeginInvoke or SynchronizationContext.

[1] Suppose you park the UI thread on some wait handle, or have it grab a lock, so it blocks until released by the “WCF thread”. If you do that, the “WCF thread” would also block, waiting for some part of the processing to run on the Main thread. This is a fairly classic deadlock situation.

c#event^delegate, again

There are many excellent online introductions to the relationship between events and delegates. Here are some personal observations —

1) first, there’s a difference between an event field and an event occurrence (similar to a tibrv event). When we say “this class has an event” or “subscribe to this eventA”, we mean the field. However, when we say “fire this eventA”, we use this big gun named eventA to fire a missile.

2) virtually everyone commits the crime of ambiguity when they say “a delegate” rather than “a delegate Type” or “a delegate Instance”.

3) we must point out there are

  • delegate types
  • delegate instances
  • delegate variables — local variables, method parameters
  • a delegate field
    • ** a special case — If the delegate field has an “event” modifier, then the field becomes an event field.

It’s confusing and unproductive to compare event fields to delegate instances or delegate types. Many casual discussions of event vs delegate bring delegate instances into the picture, adding to the confusion. Always remember this point whenever there is similarity/confusion between delegateInstance and events.

An event field is better compared to a delegate variable, but an event field is BEST compared to a non-event delegate field. A non-event delegate field behaves differently from an event field. I think it’s supported and more flexible, but rarely used. See
http://pro-thoughts.blogspot.sg/2008/08/delegates-and-events-what-does-event.html

RBC iview #java

Q: Given your commissions batch system, what if the input file has 10 times the trade vol, but you still need to finish the batch in 4 hours?
A: I spent more than 2 years in both systems (zed and GS) —
* identical pipelines with a load balancer to turn streams on and off. Each stream has enough capacity to take on the entire input volume.
* partitioning by region, office, acct etc

Q: how do you split the 10-mil trade file into multiple files?

Q: how do you prevent out of memory in a large database-driven system?
A: distributed cache
A: processing by chunks. I think P/L and merge sort can use this
A: partitioning into premises
A: push the high volume processing into db with stored procs
A: informatica probably is built for large volume

Q: say you have a lot (perhaps just under a million) of short-lived JMS clients. Each sends requests and disappears, then reappears seconds (or more) later expecting responses from the MOM. Obviously you can’t maintain that many queues. Also, security is important, so a shared topic or multicast is insecure.
A: dynamic queues

Q: you are designing an API for other people. Part of it is a method returning a list of some data. The argument is a long list of input parameters, but the number of parameters is unknown. What data types do u use for the argument and the return?
A: I would use a Map and return a List

Q: As an API creator, suppose your method returns a List normally, but can also end up with nothing to return. So what kind of thing do you return?

Q: how do you handle exceptions? Do you use more checked or unchecked exceptions?

Q: any experience with JDBC tuning?

Q: any experience with model-driven frameworks or eclipse RCP?

Q: what are the latest java concurrent features?
A: i only know the 1.5 features…

Q: sleep() vs wait()

Q: final, finally and finalize

every STL container has a no-arg no-parenthesis ctor

I believe iterators are seldom (never?) instantiated with a no-arg ctor, but every container has one.
Note all containers have a no-arg ctor, and you call it without parentheses; with parentheses the statement would parse as a function declaration (the “most vexing parse”).

#include <iostream>
#include <iterator>
#include <string>
#include <vector>
#include <list>
#include <deque>
#include <stack>
#include <queue>
#include <set> //supports multiset too
#include <map>
#include <ext/hash_set>
#include <ext/hash_map>
using namespace std;
using namespace __gnu_cxx;
// needed for hash_set

int main() {
    //    vector<float> cont;
    //    cout<<cont.size()<<endl;
    //    list<float> cont
    //    cout<<cont.size()<<endl;
    //    deque<float> cont;
    //    cout<<cont.size()<<endl;
    //    set<float> cont;
    //    cout<<cont.size()<<endl;
    //    multiset<float> cont;
    //    cout<<cont.size()<<endl;
    //    map<float, float> cont;
    //    cout<<cont.size()<<endl;
    multimap<float, float> cont;
    cout << cont.size() << endl;

    // container adaptors
    //    stack<float> adaptor;
    //    cout<<adaptor.size()<<endl;
    //    queue<float> adaptor;
    //    cout<<adaptor.size()<<endl;
    priority_queue<float> adaptor;
    cout << adaptor.size() << endl;

    //hashed
    hash_set<float> hashed;
    cout << hashed.size() << endl;
    //    hash_map<float, float> hashed;
    //    cout << hashed.size() << endl;
}

difference between Set and List

* Fundamentally, a list is maintained in insertion order physically, whereas a Set can optionally maintain insertion order (such as LinkedHashSet).

* Because of the insertion order, List can be the basis of Queues (FIFO) and Stacks (LIFO). A Set can’t.

* A set can stay sorted (eg TreeSet), whereas a list is seldom sorted. The most useful List implementations don’t “like” to stay sorted.

* Look-up, i.e. finding an element, is typically faster in a Set (hash- or tree-based) than in a List, which needs a linear scan.

* In terms of API, the primary difference is — Set won’t add an object if it is equal to any existing element (determined by equals() method).
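A small runnable illustration of the points above (the symbols are arbitrary sample values):

import java.util.*;

public class SetVsListDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> set = new HashSet<>();

        for (String s : new String[]{"IBM", "MSFT", "IBM"}) {
            list.add(s);                // always succeeds; duplicates kept in insertion order
            boolean added = set.add(s); // returns false for the second "IBM" (equals() match)
            System.out.println("set.add(" + s + ") -> " + added);
        }
        System.out.println(list);       // [IBM, MSFT, IBM]
        System.out.println(set.size()); // 2

        System.out.println(new TreeSet<>(list));       // stays sorted, no duplicate: [IBM, MSFT]
        System.out.println(new LinkedHashSet<>(list)); // insertion order, no duplicate: [IBM, MSFT]
    }
}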

value-add for S’pore financial-IT cluster competitiveness

Let’s say we want to add value to Singapore financial-IT sector’s competitiveness, either in a software house or an in-house dev team on Shenton Way. Look at the competitive landscape described in http://www.zaobao.com/cs/cs070802_509_1.html. What kind of people do these companies have difficulty recruiting?

* insight [1] into building key financial IT systems. In reality, a department often needs a single “navigation guide” to help make (a myriad of) key decisions, including what to avoid. This guide will help the department recruit and acquire other resources.

[1] gained from deep xp

* Generic PM zbs, generic tech zbs, professionalism … are fundamental, but i think easy to find in Singapore. Perhaps there’s no special personal quality for this sector. Perhaps the top 3 make-or-break personal characteristics are similar to other IT sectors such as logistics, telecom, education …

Even without prior dnlg, if you are sensitive, motivating, strong, self-sacrificing, visionary … you will add value to that “cluster-competitiveness”.

* domain knowledge? As a “cluster”, the SG financial IT sector must offer specific expertise such as remittance and those listed on http://www.zaobao.com/cs/cs070802_509_1.html. However, as individual techies, … I think the dnlg can be acquired in a few weeks to 1 year, and you will be competent even as a team lead. Look at Henk.

* low latency high frequency

* algo trading

Now (2010) I feel generic or entry-level domain knowledge or tech knowledge is not hard to find. Some people say low latency and analytics are now commodity expertise. I don’t agree.

If i need someone to build a billing system, a veteran can guide me to avoid the “good try” routes, but the imperfect design vs the proven design aren’t going to make a huge difference. What about a pricing system? If you don’t have a good guide, maybe you won’t get it done.

financial jargon would set u apart from someone new

XR,

You once “said” something like — the domain knowledge is a thin layer of additional knowledge between a finance and a non-finance app developer. Here are some jargon terms I recently came across. They can take a lot of learning effort.

* interest rate swap
* fund of funds vs direct hedge funds
* private equity
* market making
* secondary market vs primary market
* loan syndicate
* side pockets
* broker vs dealer
* dark pools
* ECN
* how market data is distributed (RV)
* how traders hedge using different bonds
* haircut
* execution fee
* revenue transfer
* General Ledger posting
* accrual

* mark to market
* daily P/L roll-up across trading accounts, across trading desks, across divisions
* bid/ask spread
* commission and bid/ask spread
* floating coupon
* yield curve (technically aka “term structure”) — how is it constructed
* the trio relationship among yield, price and coupon rate
* how yield and price are closely linked
* spot rate and forward rate
* time value of money
* bond duration, dv01, convexity
* libor rate and fed fund rate
* taxable vs non taxable bonds
* why is muni more popular compared to treasuries and corporate bonds
* callable and put-able bonds, yield-to-worst, yield-to-best
* sinking fund, prepayment
* settlement and clearing
* swaption on municipal bonds

singleton Q&&A — somewhat contrived

Some answers are found in http://www.javaworld.com/javaworld/jw-01-2001/jw-0112-singleton.html

q: why not just use a static var?
a: see the article

q: name 2 ways to end up with multiple singletons
a: see the article
A: deserialize
A: reflection

Q: lock-free lazy singleton?
A: http://en.wikipedia.org/wiki/Initialization_on_demand_holder_idiom
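For reference, a minimal sketch of that initialization-on-demand holder idiom (class name is arbitrary): lazy and thread-safe without any lock, because the JVM guarantees the holder class is initialized exactly once, on first use.

public class LazySingleton {
    private LazySingleton() { }                 // private ctor blocks outside instantiation

    private static class Holder {               // not loaded until first reference
        static final LazySingleton INSTANCE = new LazySingleton();
    }

    public static LazySingleton getInstance() {
        return Holder.INSTANCE;                 // triggers Holder initialization on first call only
    }
}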

Q: is there no way to subclass a singleton class?

What if I create a nested class inside the singleton? Yes, I’d be modifying its source code, but the constructor stays private. Would I get away with an anonymous inner class that extends the singleton host class?

Q: is it one instance per class loader in a given JVM? Consider eager singleton.
A: i think so

Q: how about lazy singleton?
A: same thing. one instance per class loader.

y c# value types extend Object@@ #my take

It’s always been a puzzle to me why C# creators made this decision —

Q: why are all value types (including enums, structs and simple ints/floats etc) designed to be subtypes of System.Object? I don’t know of any other language (python?) making a primitive data type extend a reference type. A bold departure from tradition?
%%A: perhaps they want Object params to accept value type arguments?

Q: what “assets” do the int type (and other value types) inherit from Object supertype?
%%A: I only know ToString(), GetType(), GetHashCode(), Equals() etc. Obviously Int32 won’t inherit actual Implementation of GetType(), but it does inherit the calling Signature, so this GetType() method can be called on an Int32 instance.

risk and P/L importance in a trading desk

Outside margin/collateral management (Reg T etc), risk is fundamentally a nice-to-have for most trading systems.

I feel the VaR measurement is rather subjective — an elastic yardstick. Many traders can distort mark-to-market.

I don’t think it’s a real-time thing. When a trader executes trades, she has no time to evaluate risk. A trade execution system can automatically check compliance, but not risk.

Now I feel the sell-side (and big buy-sides) is more serious about risk management.

tableColumn, tableColumnModel, TableModel

(Outdated. See other posts on TCM.)
Actual table data (like numbers and string values) is stored in the table model Instance, and should not be redundantly stored in another object such as the JTable instance, TCM instance or TC instance. http://www.chka.de/swing/table/TableColumnModel.html points out —

A TableColumnModel Instance holds a bunch of TableColumn Instances, ordered as they are shown in a JTable — the #1 job of the TCM. You can verify this observation from the method “TableColumn getColumn(int)”. The method getColumns(void) actually returns an Enumeration of TC instances.

The 2nd job of a TCM Instance is the listener list. These listeners listen to column-move (and similar) events, NOT table data change events.

(A TCM instance doesn’t hold actual table data.)

In a TC (TableColumn) instance, the most important information is the model index, i.e. which column in the TM is behind this TC instance. A TC Instance holds data fields controlling the appearance of a column. You can think of a TC Instance as a table_column_view object.

(Allow me to repeat — There’s no actual table data in a TC instance.)
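A small runnable sketch (with made-up data) of the split: the data stays in the TableModel, the TCM orders the TableColumn instances, and each TableColumn remembers its model index even after a column move.

import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableColumnModel;

public class TcmDemo {
    public static void main(String[] args) {
        DefaultTableModel tm = new DefaultTableModel(
                new Object[][]{{"IBM", 100}, {"MSFT", 200}},   // actual data lives here, in the TableModel
                new Object[]{"Symbol", "Qty"});
        JTable table = new JTable(tm);
        TableColumnModel tcm = table.getColumnModel();

        tcm.moveColumn(0, 1); // swap the view order of the two columns

        for (int viewCol = 0; viewCol < tcm.getColumnCount(); viewCol++) {
            // view position vs the model index the TableColumn instance remembers
            System.out.println("view " + viewCol
                    + " -> model " + tcm.getColumn(viewCol).getModelIndex()
                    + " (" + tcm.getColumn(viewCol).getHeaderValue() + ")");
        }
    }
}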

jagged^matrix(2D-array) syntax: C/j/py/C#

See also https://bintanvictor.wordpress.com/2018/02/09/2d-array-using-deque-of-deque/

There exist only 2 types of 2D arrays across these 3 languages — C, java and c#. Across the languages the 2 types can be compared, but the syntax … better not to compare.

C uses vastly different syntax for the two:
– array of pointers — i.e. jagged.
– arr[a][b] matrix — column (and row) size is fixed and permanent. A total of a*b storage space is allocated, used or unused. Unused space is sometimes padded?
** arr[a,b] // this code strangely compiles, but because of the comma operator it just means arr[b]. This confusing syntax is completely unrelated to 2D arrays.

A Java 2D array is jagged,  _ a l w a y s _ — see [[javaPrecisely]]. It is implemented as an array of pointers. The syntax is …. [a][b], a departure from C’s jagged syntax. Java has no built-in support for matrices.

C# 2D arrays are really 2 unrelated constructs with similar syntax. The Jagged[a][b] vs the Matrix[a, b]. See http://www.dotnetperls.com/jagged-2d-array-memory

Python can support arr[3][2]. See https://stackoverflow.com/questions/6667201/how-to-define-a-two-dimensional-array-in-python

In terms of syntax evolution, java’s jagged array took the C matrix syntax [a][b], and C#’s jagged array inherited the java syntax.

The c# designers thus faced a dilemma: follow the c syntax or the java syntax. For jagged arrays they chose java, not c.

In summary, the more useful jagged construct has this syntax
*arrOfPointer[3] // C
arr[][] //java
arr[][] //c#

A matrix in linear algebra is a rectangular 2D array, built in with c and c#, but not java.
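A tiny java illustration of the jagged construct (values are arbitrary): each row is allocated separately, so rows can have different lengths.

public class JaggedDemo {
    public static void main(String[] args) {
        int[][] jagged = new int[3][];   // an array of 3 row "pointers"; rows not yet allocated
        jagged[0] = new int[]{1};
        jagged[1] = new int[]{1, 2};
        jagged[2] = new int[]{1, 2, 3};  // rows of different lengths, no rectangular guarantee

        for (int[] row : jagged) {
            System.out.println(row.length); // prints 1, 2, 3
        }
    }
}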

prefix/postfix increment – my cheatsheet

Based on [[moreEffC++]].

Q (important): why return by reference in one scenario and return by value in another?

It’s by necessity, not by preference. We _always_ prefer return by reference — efficient. But one of them (quiz — which one?) must return the “before image” (database lingo), so it has to copy the object and return the copy, while the original object is updated.

Implication — the return-by-reference operator (quiz — which one?) is always more efficient, therefore we must change the old habit of (ab)using “myIterator++”.

Q (non-trivial): why const?
This is a simple (not the only) solution to disallow myVar++++

Q (trivial): which operator takes an argument?
A: in fact neither needs a real argument, but one of them takes a dummy argument of type int. I guess it’s not really important to remember which one.

GUI for fast-changing mkt-data (AMM)

For a given symbol, the market data MOM could pump in many (possibly thousands of) messages per second. The client JVM would receive every update via a regular MOM listener and update a small (possibly one-element) hashmap, by repeatedly overwriting the same memory location. Given the update frequency, synchronization must be avoided, using either CAS or double-buffering in the case of an object or a long/double; for an int or a float, a regular volatile field might be sufficient.

Humans don’t like the screen updated so frequently. Solution – a GUI worker thread (like a swing timer) queries the local cache every 2 sec — a reasonable refresh rate. It will miss a lot of updates, but that’s fine.

(Based on a real implementation hooked to an OPRA feed)
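A minimal java sketch of this throttling idea (class and field names are hypothetical, not the real OPRA hook): the MOM listener thread overwrites a single slot at full speed, while a swing Timer samples it every 2 seconds on the EDT.

import javax.swing.Timer;
import java.util.concurrent.atomic.AtomicReference;

public class ThrottledQuoteView {
    // latest tick for one symbol; written thousands of times per second, read every 2 seconds
    private final AtomicReference<String> latestTick = new AtomicReference<>("n/a");

    // called on the MOM listener thread for every update: no locking, just overwrite
    public void onMarketData(String tick) {
        latestTick.set(tick);
    }

    // called once at GUI start-up; javax.swing.Timer fires on the EDT, so the callback may touch Swing components
    public void startRefreshing() {
        new Timer(2000, e -> System.out.println("screen shows: " + latestTick.get())).start();
    }
}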

select() syscall lesson1: fd_set …

An fd_set instance is a struct (remember — C API) representing a set of file descriptors, typically implemented as a fixed-size bit mask with one bit per descriptor.

An fd_set instance is used as an in/out parameter to select(). Only pointer arguments support in/out.
– upon entry, it carries the list of sockets to check
– upon return, it carries the subset of those sockets found “dirty” [1]

FD_SET(fd, fdSet) is a free function (in practice a macro; remember — C API) which adds a file descriptor “fd” into an fdSet. Used before select().

FD_ISSET(fd, fdSet) is a free function (in practice a macro; remember — C API) which checks whether fd is part of fdSet. Used after select().

First parameter to select() is max_descriptor. File Descriptors are numbered starting at zero, so the max_descriptor parameter must specify a value that is one greater than the largest descriptor number that is to be tested.

See http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=%2Frzab6%2Frzab6xnonblock.htm

[1] “ready” is the better word

recursive -> iterative — level 2

In this example, the first call’s job needs the 2nd call’s result. We build up another stack while depleting the args stack; this emulates the recursive descent and ascent.

 static List<Character> elements = new ArrayList<Character>();
 static {
  elements.add('a');
  elements.add('b');
  elements.add('c');
  elements.add('d');
 }
 static List<String> combinations = new LinkedList<String>();
 static LinkedList<List<Character>> stack1 = new LinkedList<List<Character>>(); // emulates the descending phase
 static LinkedList<Character> stack2 = new LinkedList<Character>();             // emulates the ascending phase

 static void iterativePrintCombinations() {
  stack1.add(elements);
  while (!stack1.isEmpty()) {
   List<Character> chars = stack1.removeLast();
   char firstChar = chars.get(0);

   System.out.println("stack2 = " + stack2);

   if (chars.size() == 1) { // base case of the original recursion
    combinations.add("");
    combinations.add("" + firstChar);
    continue;
   }
   stack2.add(firstChar);                      // remember the pending work
   stack1.add(chars.subList(1, chars.size())); // push the smaller sub-problem
  }

  while (!stack2.isEmpty()) { // unwind, like returning from the recursive calls
   append1CharToEveryString(stack2.removeLast());
   System.out.println(combinations);
  }
 }

 private static void append1CharToEveryString(char firstChar) {
  List<String> tmp = new LinkedList<String>();
  for (String s : combinations) {
   tmp.add(s + firstChar);
  }
  combinations.addAll(tmp);
 }

 // the original recursive version, for comparison
 static void getCombinations(List<Character> letters) {
  if (letters.size() == 1) {
   combinations.add("");
   combinations.add("" + letters.get(0));
   return;
  }
  getCombinations(letters.subList(1, letters.size()));

  // here we absolutely need the nested call to complete before we proceed
  List<String> tmp = new LinkedList<String>();
  for (String s : combinations) {
   tmp.add(s + letters.get(0));
  }
  combinations.addAll(tmp);

  System.out.println(combinations);
 }



			

state maintenance in FIX gateway

Even the fastest, most dumb FIX gateways need to maintain order state, because the exchange can send (via FIX) partial fills and acks, and the customer can send (via another MOM) cancels and amends, in full duplex.

When an order is completed, it should be removed from memory to free up space. Before that, the engine needs to maintain its state. State maintenance is a major cost and burden on a low-latency system.

A sell-side FIX engine is a daemon process waiting for responses from the liquidity venue and for market/limit orders from buy-side clients.

[[Complete guide to capital markets]] has some brief coverage on OMS and state maintenance.
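A minimal sketch (class and field names are hypothetical, not any real engine) of the kind of per-order state map such a gateway keeps, with completed orders evicted to free memory:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OrderStateBook {
    static class OrderState {
        final String clOrdId;
        int filledQty;
        final int totalQty;
        OrderState(String clOrdId, int totalQty) { this.clOrdId = clOrdId; this.totalQty = totalQty; }
        boolean isComplete() { return filledQty >= totalQty; }
    }

    // one entry per live order, keyed by client order id
    private final ConcurrentMap<String, OrderState> liveOrders = new ConcurrentHashMap<>();

    public void onNewOrder(String clOrdId, int qty) {
        liveOrders.put(clOrdId, new OrderState(clOrdId, qty));
    }

    // called for each ack / partial fill from the exchange;
    // assumes fills for a given order arrive on one thread (typical for a single FIX session)
    public void onFill(String clOrdId, int fillQty) {
        OrderState state = liveOrders.get(clOrdId);
        if (state == null) return;          // unknown or already-completed order
        state.filledQty += fillQty;
        if (state.isComplete()) {
            liveOrders.remove(clOrdId);     // completed: free the memory
        }
    }
}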