EDT and non-EDT delegating to each other

This is a very common Swing pattern, so it's worth recognizing. Both techniques are necessities rather than nice-to-haves; every Swing developer needs to master them.

X) a regular thread often passes Runnable tasks to the EDT via the event queue. See invokeLater() and invokeAndWait().

Y) the EDT often passes slow tasks to helper threads so as to keep the GUI responsive. Tasks running on the EDT should be quick. For road hogs, the EDT can use a thread pool or a SwingWorker.

Q: data sharing? Race conditions?
A: I feel most Swing apps are OK. The JComponent objects are effectively singletons, accessible from multiple threads, but usually updated on the EDT only.

an event pseudo-field = wrapper over a delegate field

Summary — an event pseudo field is a bunch [1] of callbacks registered (with the host object) to be notified/invoked.

Each callback is a 2-pointer thingy — an enhanced functor known as a delegate instance. The first pointer references the “receiver object”; the second references the method to call.

[1] an ordered list

Suppose an instance of class Thermometer has a bunch of callback listener objects, each registered with this thermometer. When a new temperature is produced, all these listeners are invoked when the runtime executes MyEvent(…args). That’s the requirement. C# does it with event pseudo-fields.

Under the hood, this thermometer has a delegate object enclosing an invocation list. Each node on the list holds 2 pointers: a) to the (non-static[1]) method and b) to the object to call on. Good solution, so what’s the “event” field?

Answer — the event pseudo-field is a wrapper over the hidden delegate field. Like a property pseudo-field, an event defines a pair of add/remove methods. In the case of add(), “myEvent += d1” simply calls “_hiddenDlgField += d1”.

http://www.yoda.arachsys.com/csharp/events.html explains why exposing the delegate field is a bad idea.

[1] This is the most common usage of delegate. Alternatively you can also register a static method as a callback.
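The delegate/invocation-list machinery is C#-specific, but the same idea can be sketched in C++ terms. Below is a minimal, hypothetical Thermometer whose “invocation list” is a vector of std::function callbacks (each lambda capture plays the role of the receiver-object pointer); all names are mine, not from any real API.

```cpp
#include <functional>
#include <utility>
#include <vector>

class Thermometer {
public:
    using Listener = std::function<void(double)>;
    // roughly what C#'s add() accessor does with "+="
    void add(Listener l) { listeners_.push_back(std::move(l)); }
    // roughly what raising MyEvent(...) does: walk the invocation list in order
    void newTemperature(double t) {
        for (auto& l : listeners_) l(t);
    }
private:
    std::vector<Listener> listeners_;  // the "invocation list"
};
```

Note the registration order is preserved, matching the “ordered list” in footnote [1].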

VaR ≠ a maximum loss: illustrating Conditional VaR

Update — At 95% confidence level, VaR is a dollar amount like $9M. $9M is the worst (maximum) loss within the 95% part of the bell curve. $9M is the most optimistic (minimum) loss within the tail.

Q: Everyone should know the theoretical maximum loss is 100% [1]. That’s the theoretical max. How about realistically? Can we say Value-at-Risk is a realistic estimate of the “maximum loss” in your portfolio, derived from a large number of extensive simulations and analyses? The original creators of VaR seem to say No. See https://frontoffice.riskmetrics.com/wiki/index.php/VaR_vs._Expected_shortfal.

Compared to ExpectedShortfall aka ConditionalVaR, the original VaR measures the most optimistic level of loss i.e. the smallest loss within the fat tail. Therefore, the magnitude of those big losses is not considered.

“Expected” is used in the statistical sense, like “average”, or average-width [2] of normal bell curve.

Q: Does ES consider the magnitude of the loss in the worst, worst cases?
A: yes. Superior to VaR. Measures severity of fat tail losses.

For a given portfolio and a given period, the 5% expected shortfall is always worse (larger) than the 5% VaR. This is obvious on any probability density curve, not just the Normal distribution bell curve. If the PDF is hard to comprehend, try a histogram.

— Example —
“My 10-day 5% Expected Shortfall = $5m” means in the worst 5% caseSSS, my AVERAGE-loss is that amount.
In contrast, “My 10-day 5% VaR == $4m” means in the worst 5% caseSSS, my MINIMUM-loss is that amount. Most optimistic estimate.

VaR makes you feel confident “95% of the time, our loss is below $4m”, but remember, this level of loss is the SMALLEST loss in the fat tail. VaR can’t tell you how badly you lose if you are unlucky and one of those 5% cases actually happens.

[1] In leveraged trading, you could lose more than 100% of the fund you bring in to the trading account, because the dealer/broker actually lends you money. If you lose all of the $10,000 you brought in and lose $2000 more, they could go to your house and ask you to compensate them for that loss.

[2] actually “average distance from the vertical-axis”. Vertical-axis being the mean PnL = 0.
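To make the ES-always-worse-than-VaR point concrete, here is a small illustrative sketch (my own function names and toy tail arithmetic): sort simulated losses, take the smallest loss in the worst-5% tail as VaR, and the average of that same tail as ES.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// losses are positive numbers (a bigger number = a bigger loss)
double var_at(std::vector<double> losses, double tail) {
    std::sort(losses.begin(), losses.end());             // ascending
    std::size_t nTail = static_cast<std::size_t>(losses.size() * tail);
    return losses[losses.size() - nTail];                // smallest loss in the tail
}

double es_at(std::vector<double> losses, double tail) {
    std::sort(losses.begin(), losses.end());
    std::size_t nTail = static_cast<std::size_t>(losses.size() * tail);
    double sum = 0;
    for (std::size_t i = losses.size() - nTail; i < losses.size(); ++i)
        sum += losses[i];
    return sum / nTail;                                  // AVERAGE loss in the tail
}
```

On losses 1..100, the 5% VaR is 96 while the 5% ES is 98: the tail average can never beat the tail minimum.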

wit – increase chance of getting into US job market

For a developer from overseas, another skill valued by US interviewers is wit.

A lot of US interviewers really appreciate it. Many believe it marks an intelligent communicator, one who gets things fast. Humor and wit don’t always cross borders. It takes a long time to learn the American way, though many US interviewers aren’t American.

If you delight your interviewer with just a little ingenious humor and wit, it will be memorable and make you stand out. Most of the time a candidate’s wit isn’t ingenious, but some effort is worthwhile.

In US everyday culture, humor is prized more than in other cultures. There is a less hierarchical, more free-flowing, expressive, almost “selling” style.

You mentioned a colleague spending a lot of his personal time getting familiar with fantasy football? Same kind of thing.

Yield as relative value comparator

Yield is the most versatile and convenient (therefore most popular and practical) soft market datum for relative value comparison. Yield lets you

– compare across currencies
– compare across time horizons (yield curve)
– compare across credit qualities (credit spread)
– compare across disparate coupon frequencies including zero-coupons
– compare across disparate call provisions
– compare government vs private issuers
– compare across vastly different industries
– compare across big (listed) vs small companies and even individual borrowers
– compare across eras like 70’s vs 90’s
– compare with interest rates. In fact, lenders use credit spread and prevailing IR to derive a lending rate on each loan.

Why is yield such a good comparator? Yield is a soft-market-data item derived using many inputs. Yield, in one number, captures the combined effect (on what? Of course valuation) of many factors such as

* different credit qualities
* different probabilities of default
* different embedded options
* different coupon rates
* comparable (but different) maturities

Without capturing all of these differences, it’s unwise to even attempt to compare 2 bonds. You would get an obviously biased comparison, or an incomplete one. No info is better than misleading info.

Yield is so widely adopted that major data sources directly output yield numbers, making yield a “raw” market datum rather than a “soft” market datum.
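As a sketch of how one yield number is backed out of disparate cash flows, here is a toy annual-coupon bond yield solver using bisection (my own names; no day counts or compounding conventions, so purely illustrative):

```cpp
#include <cmath>

// price an annual-coupon bond at yield y
double price(double face, double coupon, int years, double y) {
    double p = 0;
    for (int t = 1; t <= years; ++t) p += coupon / std::pow(1 + y, t);
    return p + face / std::pow(1 + y, years);
}

// back out the yield that reprices the bond to `target`
double yieldFromPrice(double face, double coupon, int years, double target) {
    double lo = 0.0, hi = 1.0;              // assume yield in [0%, 100%]
    for (int i = 0; i < 100; ++i) {         // price falls as yield rises
        double mid = 0.5 * (lo + hi);
        if (price(face, coupon, years, mid) > target) lo = mid;
        else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

Once each bond is reduced to its yield, two bonds with completely different coupons and maturities can be ranked on one number.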

question@Reo pricing engine: effective duration

Hi Jerry,

I recently worked on eq derivative pricing. I realized traders need to know their sensitivities to a lot of variables. That made me start thinking about “your” pricing engine — if a bond trader has 100 open positions, she also needs to know her aggregate sensitivity to interest rates (more precisely, to the yield curve).

To address this sensitivity, I know Reo displays dv01 at position level (and rolls up to account/sector levels), but how about effective duration?

If we do display duration on a real-time basis, is it calculated from dv01, or is option-adjusted spread factored in for those callable bond positions?
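For what it is worth, the textbook bump-and-reprice formula behind effective duration can be sketched like this (hypothetical numbers; for a callable bond, the bumped prices would come from an OAS-aware pricer so the embedded option is captured):

```cpp
// effDur = (P_down - P_up) / (2 * P_0 * dy)
// where P_down / P_up are the bond prices after bumping the yield curve
// down / up by dy, and P_0 is the unbumped price.
double effectiveDuration(double pDown, double pUp, double p0, double dy) {
    return (pDown - pUp) / (2.0 * p0 * dy);
}
// dv01 relates to it roughly as: dv01 ≈ effDur * p0 * 0.0001 (per 100 face)
```

With a 100bp bump moving a 100.00 bond to 99.00/101.00, this gives an effective duration of 1.0.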

risk system can be front office (KK


One of my systems was a real-time risk monitor. Traders used this same system to book trades, price potential trades, make markets, monitor market data, and conduct scenario analysis. If this is not a front office app, then I don’t know what is.

If I didn’t say this app also handles real-time risk, then no one would question that it is front office. In fact, trading floor guys told me traders used this app more than any other app.

However, at the heart of this application is a real-time risk engine. All those “front office” features are built on top of the pricing module in this risk engine.

In another bank where I worked, I knew a Fixed Income derivative trading app responsible for position management, deal pricing, live market data, quote pricing, and contract life-cycle event processing — all front office functionality, yet at its heart this is a risk system. The team is known as the “Risk team”. In fact, there was no other front office app for these derivatives. This was the only thing traders had. If you call it middle office, then there’s nothing front office.

In many derivative systems (including fx options and fx swaps, presumably), the pricing engine takes center stage in both front and middle office. A derivative trader’s first and last job is (I believe) monitoring her open positions/deals and trading according to existing exposures and sensitivities. That’s the defining feature of derivative trading.

Experts often say derivatives were created as risk management tools. They reduce risk and introduce risk, too. They are creatures of risk.

H-Vol vs I-Vol – options #%%jargon

Volatility measures the dispersion / divergence / scatter / spread among snapshots of a fluctuating price over a period. The most intuitive and simplest visualization of this spread is a histogram. Whenever I have trouble understanding volatility, I go back to the histogram.

– Frequency of observations can be high or low, usually daily.
– The fluctuating price can be a stock, Interest Rate, Forex, Index, ETF…
– But the period’s start/end date must be specified otherwise volatility is meaningless.

σh‘s start/end dates are always in the past. σi‘s start date is always today, and end date is typically 30 days later. In other words, σh is backward looking; σi is forward looking. Therefore only σi (not σh) can affect option pricing.

σh‘s sample values are real snapshots. σi‘s “sample values” are unknowable. We predict that if we take snapshots over the next 30 days, stdev will be this σi value.
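A minimal sketch of σh, assuming daily close snapshots and a 252-trading-day annualization convention (names and conventions are illustrative, and at least 3 closes are assumed):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// annualized stdev of daily log returns over a SPECIFIED past window
double histVol(const std::vector<double>& closes) {
    std::vector<double> r;
    for (std::size_t i = 1; i < closes.size(); ++i)
        r.push_back(std::log(closes[i] / closes[i - 1]));
    double mean = 0;
    for (double x : r) mean += x;
    mean /= r.size();
    double var = 0;
    for (double x : r) var += (x - mean) * (x - mean);
    var /= (r.size() - 1);                     // sample variance
    return std::sqrt(var) * std::sqrt(252.0);  // annualize
}
```

σi has no such formula: its “sample values” don’t exist yet, so it must be implied from option prices or forecast.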

what can an ibank SELL besides financial products

As an investment bank, clients mostly buy/sell financial instruments with us, but there are many value-added “services” we can sell to clients. Some of these services are packaged into a “value meal”.

– research information — selling “information”
* recommendations
– soft market data
– historical data
– indicative/evaluation prices for illiquid instruments
– quant lib? I doubt ibanks sell these, but it's possible in theory.

– trading strategy, algorithms
– trading signals

– low latency “connectivity” to trading venues
– smart order router

– security lending
– custody services
– book keeping service?
– settlement and clearing services

homemade ref-counted string – implementation notes

([[ nitty gritty ]] P202 has simple sample code)

An option exchange interviewer asked me to outline a ref-counting string class “str”.

char* cstr;   // field: null-terminated, allocated on heap
int* counter; // field: points to an int allocated on heap, shared between copies

Now forget about ctor and big 3, and focus on simple, common client operations AFTER instantiation. Now I realize we need to recall how a string variable is USED.

int length() const;
const char* c_str() const; // STL string offers this conversion method, so do we
str substr(…) const; // params elided
// operator<< to print the string
str operator+(const str& rhs) const; // produce a new str object by concatenation. Probably follow the effC++ advice to avoid return-by-reference??

Now the big questions

Q: does copy ctor allocate the cstr or the counter, or share them with sister instances?
%%A: share

Q: does conversion ctor from a C string allocate this->cstr and this->count?
%%A: allocate

Q: how do we create another str variable sharing an existing cstr object?
%%A: copy ctor or assignment (the conversion ctor would allocate a new cstr instead)
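Putting those answers together, a minimal sketch of the whole class might look like this (not thread-safe; count() is exposed only so the sharing is visible; names are mine):

```cpp
#include <cstring>

class str {
    char* cstr;      // null-terminated, on heap, shared
    int*  counter;   // shared reference count, on heap
    void release() { // last owner frees both heap objects
        if (--*counter == 0) { delete[] cstr; delete counter; }
    }
public:
    str(const char* s)                   // conversion ctor: ALLOCATE
        : cstr(new char[std::strlen(s) + 1]), counter(new int(1)) {
        std::strcpy(cstr, s);
    }
    str(const str& rhs)                  // copy ctor: SHARE
        : cstr(rhs.cstr), counter(rhs.counter) { ++*counter; }
    str& operator=(const str& rhs) {     // share, after releasing the old buffer
        if (this != &rhs) {
            ++*rhs.counter;
            release();
            cstr = rhs.cstr;
            counter = rhs.counter;
        }
        return *this;
    }
    ~str() { release(); }
    int length() const { return static_cast<int>(std::strlen(cstr)); }
    const char* c_str() const { return cstr; }
    int count() const { return *counter; } // for illustration only
};
```

Copying bumps the shared counter; the last owner’s dtor frees both heap objects.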

3rd party evaluation service for structured FI instruments

Products covered – 200,000 structured FI instruments in multiple currencies, including

* regular CMO (biggest group)
* hybrid CMO
* unsecuritized whole loans — In contrast, those securitized loans are sliced up and not “whole”

Twice a day, each bond gets a new evaluation price. The price always differs from the previous day’s because
1) accrued interest accrues daily,
2) benchmark interest rates and yields change daily,
3) built-in optionality may kick in

Benchmark interest rates in this space are — Libor, government bond yields in different currencies

Upstream market data includes
* transactions of the day
* indications of interest (like quotes) by both dealers and their clients
** these either come in at end of day or by email

Methodology – take in market data to CALIBRATE the models. Feed them into Intex engine ..

OPRA feed processing – load sharing

On the frontline, one (or more) sockets receive the raw feed. The number of sockets is dictated by the data provider's system. The sockets feed into tibco (possibly on many boxes). Optionally, we could normalize or enrich the raw data.

Tibco then multicasts messages on (up to) 1 million channels i.e. hierarchical subjects. For each underlier there are potentially hundreds of option tickers, and each option is traded on (up to) 7 exchanges (CME is #1). Therefore there’s one tibco subject per ticker/exchange pair, adding up to a million hierarchical subjects.

Now, that’s the firmwide market data distributor, the so-called producer. My system, one of the consumers, actually subscribes to most of these subjects. This entails a dual challenge

* we can’t run one thread per subject. Too many threads.
* can we subscribe to all the subjects in one JVM? I guess it's possible if one machine has enough kernel threads. In reality, we use up to 500 machines to cope with this volume of subjects.

We ended up grouping thousands of subjects into each JVM instance. All of these instances are kept busy by the OPRA messages.

Note it’s not acceptable to skip any single message in our OPRA feed because the feed is incremental and cumulative.
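The grouping step can be sketched as a deterministic hash partition, so every message on a given subject always lands in the same JVM, preserving per-subject ordering (essential since the feed is incremental and cumulative). The function name and subject format below are made up:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// deterministically assign a tibco subject to one of nJvms consumer processes
std::size_t jvmFor(const std::string& subject, std::size_t nJvms) {
    return std::hash<std::string>{}(subject) % nJvms;
}
```

With ~1 million subjects and ~500 JVMs, each JVM ends up owning a few thousand subjects, matching the grouping described above.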

RAII^ContextManager ^using^ java AutoCloseable

1) Stroustrup commented that c++ doesn’t support finally{} because it has the RAII dtor.

Both deal with exceptional exits, unless noexcept specified.
Both are robust.
Both are best practices.

However, try{} etc has performance cost [1], so much so that some c++ compilers can be configured to disable it. C++ Memory management relies heavily on RAII. Using Try for that would be too costly.

[1] noexcept was presumably introduced partly to address this cost.
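Point (1) in C++ terms: a local object's dtor fires on any exit from the scope, exceptional or not, so no finally{} is needed. A tiny self-checking sketch (Guard is my own illustrative type):

```cpp
#include <stdexcept>

// RAII: the dtor carries the "finally" work
struct Guard {
    bool& cleaned;
    explicit Guard(bool& flag) : cleaned(flag) {}
    ~Guard() { cleaned = true; }   // runs on ANY scope exit
};

bool demoExceptionalExit() {
    bool cleaned = false;
    try {
        Guard g(cleaned);
        throw std::runtime_error("boom");   // exceptional exit from the scope...
    } catch (const std::runtime_error&) {}  // ...yet ~Guard has already run
    return cleaned;
}
```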

2) python ContextManager protocol defines __enter__() and __exit__() methods

Keyword “with” required …

3) Java uses finally{}. Note finally{} becomes implicit in java7 try-with-resources

AutoCloseable interface is needed in try-with-resource. See https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html

4) c# — the Achilles’ heel of java GC is its non-determinism. C#’s answer is q(using). C# provides both USING and try/finally. Under the hood, USING calls try/finally.

I feel c# USING is, evolution-wise, a closer cousin to RAII (while try/finally is a distant cousin). Both use variable (not object) scope to manage object (not variable) lifetime.

USING calls the Dispose() method, which is frequently compared to the class dtor/Finalize(). For the differences between c# Dispose() vs dtor vs Finalize(), see other blog posts.

As you can see, c# borrowed all the relevant techniques from c++ and java. So it’s better to first understand the c++/java constructs before studying c# constructs.

consistency and liveness hazards – largely theoretical

The primary tension in multi-threading — the twin challenges of consistency vs liveness — is seldom seen as a practical challenge. This challenge is supposed to be the only show in town, yet it plays to an empty audience. After years of practice, I now see a few reasons

Reason: developers stick to simple or (a tiny number of) well-tested concurrency constructs. For example, swing developers use lots of threads, often without knowing. Threads are indispensable to servlets and JDBC. Even more important roles are given to threads in the MOM arena. Yet little threading knowledge is required.

Reason: Creating a new concurrency construct is considered tricky, unproven, hard to sell, unjustified, low ROI, so few developers try.

Reason: When we do try, we seldom confront the twin challenge. We focus on getting the pieces to work together.

Reason: when concurrency bugs do exist, they stay dormant practically “forever”. I have never seen a test on Wall St that actually revealed a concurrency bug. Concurrency bugs are discovered by code review.

##[11]c++interview topic "clusters"@WallSt

Update — 7 clusters@HFT-c++IV questions has valuable insight. You cannot gain these insights without attending (and preparing/reviewing) a lot of real c++ interviews.

Hi Yang Bo

You probably have more experience programming c++. But to answer your question about interview topics, my c++ interviews tend to cluster around

* big 3 — dtor, assignment, copy ctor, “virtual ctor”
* vtbl, vptr, dynamic_cast, (pure) virtual functions
* smart pointers including auto_ptr and shared_ptr
* pass by value vs pass by reference
* string — only in coding tests. Essential knowledge and internal implementation of strings. (In my java interviews, the most scrutinized data structures are hashmap and arrayList.)
** conversion to/from c string
** standard str functions in C
** when to allocate/deallocate the array of char, esp. in the big 3
** functions passing/returning strings in and out
* thread
** creation, mutex, scoped lock, RAII
** implementation of read/write lock and recursive lock
** popular implementations of thread libraries, their differences and idiosyncrasies — practical experience with win32, posix and linux thread libraries
* data structures
** stl containers, algorithms, big O
** functors, binary functors, comparison functors
** internals of linked lists
** sorting, red-black tree
** (non-STL) hash containers
** home-grown simple containers
* memory management
** new/delete, malloc/free — a c-string is a favorite example among interviewers
** exception handling during allocation
** array-new, placement-new
** memory leak detection

To a lesser extent, these topics are also repeatedly quizzed —

* operator overloading — pre/postfix, operator <<
* initializer list
* const field
* slicing problem
* reference counting — used in string and smart pointers etc
* double pointers
* private/protected inheritance
* abstract classes
* exception class hierarchy
* exception catch by reference/value
* efficiency
** C-style string and string functions
** enums (Miami, Tower…)
** avoiding the overhead of virtual functions

C# event registration – unregister first (defensive)

Experienced c# guys have a defensive habit like

void Register(Action theDelegate){
  this.someEvent -= theDelegate; // no-op if theDelegate is not on the list
  this.someEvent += theDelegate;
}
Why the first line? It basically says “first check if new joiner is already in the inv list and remove it if present, then add him to the end”. Remember the inv list is duplicate-allowed. Therefore this defensive practice becomes important when you call Register() repeatedly (esp. in a loop), potentially passing in the same delegate objects over and over.

Without this insurance, the inv list would accumulate duplicate entries. Consequence? Event firing would trigger a large number of duplicate callbacks. I learnt this lesson the hard way.

smaller company to poach a big-bank developer

A smaller company often has no choice but to pay a premium to poach a developer from a big bank. Without a premium, I don’t see a good reason to move. In reality, a smaller company often doesn’t want to pay a premium, so no deal.

In some real cases, the smaller company actually wants a discount.

Before believing recruiters saying “I have placed 20 candidates from non-big-banks into big banks”, consider these formidable undercurrents —

brank and credibility – if your future is in big banks, then remember they generally trust your brank in other big banks. They trust that (and IBM brank) more than they trust brank in smaller companies.

fear (the other side of the above coin) — A big bank developer enjoys job security since next interviewer feels “safer” hiring him and “fearful” hiring an unproven candidate from a smaller firm or a big tech like Google.

Familiarity – large banks are similar in set-up. Even a big buy-side like Pimco or a hedge fund is probably seen as different from a sell-side ibank. Forget techs like Microsoft!

radiation – ibank trec radiates everywhere, so does big techs like Oracle. Blackrock/hedgeFund trec radiates less. ECN or software vendor trec won’t radiate. Remember only a tiny number of core engine developers in ECN have the experience relevant to an ibank’s trading engine core team. This applies to all the big names like murex, sunguard, calypso, … See my blog on 3rd type of domain knowledge

auto_ptr is still worth learning

auto_ptr was about the only smart pointer in the standard library (until boost shared_ptr was added to TR1) and has been used in commercial apps for years. Obviously, it’s not designed for STL containers. Well, competent programmers won’t use auto_ptr in STL containers, so it’s basically robust and battle-tested.

People say auto_ptr can lead to bugs in the hands of a novice, but what c++ features can’t?

The essential features of auto_ptr are very much relevant even if you use an alternative smart ptr. Numerous wall street interviews still reference auto_ptr.

In MTS, I simply replaced every “auto_ptr” in the source code with “unique_ptr”, because auto_ptr is deprecated in c++11.

eq cash agency trade volume #systems to support it

A friend on the exchange connectivity team of a big investment bank told me they get half a million orders daily. On average, 2 partial fills per order, so about a million “matches” at the exchanges.

GS TPS feed has about 1 mio eq trades/day.

However, UBS claims to process 90 million client orders daily (another mgr said 500 mio, and a hiring mgr said 100 mio transactions/day), mostly eq cash trades. Around 5000 clients.

300,000 events/sec (confirmed) message throughput. Latency is either 50us or 50ms per order.

For each client, all messages (cancels, amends, executions…) are processed on a single thread. Otherwise state maintenance is tricky. Multiple clients could, at least in theory, share a single thread.

quote/order volume for EUR/USD

Many say EUR/USD is the most liquid instrument in the world. However, its quote/order volume [1] probably differs a lot across order books.

EBS probably receives fewer orders, but with a $10 mio minimum. Big banks only trade large orders between each other.

Institutional ECNs like Hotspot and FXall probably receive more orders (tens of thousands), usually between 1 and 5 mio. Institutional orders are fairly large.

Retail investors send orders too, with minimums around $10,000. Therefore the order count is much larger.

[1] Note quote and order are slightly different. For example, Hotspot lets you send market orders, limit orders etc that are executable. In contrast, quotes are usually indications.

DIY auto_ptr — NittyGrittyC++

P200 [[NittyGrittyC++]] shows that you can create your own wrapper class to ensure a local heap-allocated object always, always gets deallocated when the variable goes out of scope, even if exceptionally.

Once you wrap your local heap variable into this wrapper object, you can effectively forget deallocation. Just make sure you don't deallocate manually.

Then you realize — This is a partial auto_ptr, but it's actually better to forget about auto_ptr at this point. Auto_ptr has too many other features. Here we are only interested in the RAII dtor.
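Such a stripped-down wrapper might look like this (my own names, not the book's code). Copying is simply deleted, sidestepping auto_ptr's ownership-transfer surprises; all we keep is the RAII dtor:

```cpp
// minimal DIY guard for a heap-allocated local object
template <typename T>
class HeapGuard {
    T* p;
public:
    explicit HeapGuard(T* raw) : p(raw) {}
    ~HeapGuard() { delete p; }                      // runs even on exceptional exit
    HeapGuard(const HeapGuard&) = delete;           // no copy: avoids double delete
    HeapGuard& operator=(const HeapGuard&) = delete;
    T& operator*() const { return *p; }
    T* operator->() const { return p; }
};
```

Once the raw pointer is handed to the guard, manual delete anywhere else would be a bug.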

q[ char * ] reflexes in C++ developers

A java guy develops a reflex on wait() or HashMap; a c++ guy has the same for ( char * ), a C heritage pervasively inherited and embedded in C++.

– reflex — if this is a key field of a class, then very likely there are implicit conversions to/from a string
– reflex — it's usually a string. Specifically, if this appears as a function param or return type, then it almost always means a string

– if on heap, array-new. Note array-new and regular new expressions share an identical return type i.e. qq( char * ). Therefore whoever calls new is the only guy who knows whether to call array-delete or regular delete. No one else knows how to delete that pointer.

– if on heap, someone must delete it, or we get memory leak. Best way is RAII dtor

– if you see any variable declared as qq( someType * ), then you know this is one side of a coin. Something is wrong if the other side is not specified — the array size. However, for qq (char *), array size is optional, given the null terminator.

– It’s possible for the array not to have a null terminator

– It’s possible for the pointee to be on heap or stack

– best to know all the C string functions. They operate on qq( char * ), be it stack or heap, null-terminated or not

– reflex — if you see 2 qq( char* ), they might be 2 overlapping char-arrays !
– reflex — if we have qq (char * var1; float var2)   then the object addresses are qq( var1;  &var2). Note the missing & in front of our string variable!
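A few of these reflexes in one sketch (dupOnHeap is a made-up helper, essentially strdup done with array-new):

```cpp
#include <cstring>

// heap C-string: array-new, with +1 for the '\0' terminator
char* dupOnHeap(const char* src) {
    char* p = new char[std::strlen(src) + 1];
    std::strcpy(p, src);
    return p;   // caller owns it and must use array-delete: delete[] p
}
```

The C string functions (strlen, strcmp, strcpy…) don't care whether the char array lives on stack or heap, but only the code that called array-new knows that delete[] (not delete) is required.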

quants still maintaining the C++ quant libraries

According to a Morgan Stanley strategist, they use C++ plus some scripting languages. No java. The C++ quant lib consists of mostly C functions.

For some instruments, there's no formula-based pricer. I was told most tailor-made structured equity derivative instruments are that way; a simulation (path-based) pricer is the only choice. How to achieve real-time pre-trade pricing? Run fewer paths, in parallel. How few in practice? Thousands.

Q: for a task assignment on quant code change, what's the choice between IT coder (including a quant developer) vs a quant coder?
A: the IT guy is a professional developer, so more robust. The quant guy offers faster time to market.

I guess the IT guy doesn't have the domain knowledge or the financial math training to fully understand the change. Real understanding is key to fast and safe change.

socket file desc in main() function

I see real applications where the main() function declares a stack variable my_sockfd = socket(…).

Q: is the my_sockfd stackVar in scope throughout the app's lifetime?

Q2: what if the main() function exits with some threads still alive? Will the my_sockfd object (and variable) disappear?
A: yes see Q2b.

Q2b: Will the app exit?
A: yes. See blog http://bigblog.tanbin.com/2008/12/main-thread-early-exit-javac.html

auto_ptr Achilles’ heel

auto_ptr would be fine if you never implicitly use the copy-ctor or copy-assignment. This is the true Achilles’ heel of this otherwise useful smart pointer.

Q: auto_ptr would be fine if you don’t ever use stl??? Well, non-STL functions also assume “normal” copy semantics??
A: I would grudgingly agree. auto_ptr users should simply avoid all such libraries.

The fact that auto_ptr is widely banned is testimony to the entrenched status of STL, and to the widespread use of implicit copy.

1) The c++11 unique_ptr is an improvement. Name suggests sole ownership.

2) Boost scoped_ptr is also designed as an improvement. Its name signals its intent to retain ownership solely within the current scope. According to boost documentation, it is “safer than shared_ptr or std::auto_ptr”. It has no more space overhead than a built-in pointer, so probably no field other than the built-in pointer. Almost bitwise const, except for the reset(T*) and swap() methods.

Both these improvements address auto_ptr’s Achilles’ heel.
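A two-line illustration of the fix: unique_ptr has no copy-ctor, so the silent ownership theft that plagued auto_ptr becomes a compile error, and a transfer must be spelled out with std::move (makeOwner is a made-up helper):

```cpp
#include <memory>
#include <utility>

// returning a unique_ptr moves ownership out explicitly and visibly
std::unique_ptr<int> makeOwner(int v) {
    return std::unique_ptr<int>(new int(v));
}
// std::unique_ptr<int> b = a;           // would NOT compile: no copy-ctor
// std::unique_ptr<int> b = std::move(a); // OK: transfer is spelled out
```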

(ADP) agency^prop traders^dealer^broker^

See also http://www.mergersandinquisitions.com/what-you-do-in-fixed-income-sales-trading/

— AAAAAgency trader =~= broker (=?= flow trader) —
* ideally holds no position (therefore takes no risks), unlike other models.
* earns commission ie “transaction-fee” as a pre-defined percentage of the transaction amount.
** The stock exchange also earns a transaction fee on each trade. Same model
** inter-dealer broker too, like e-speed, taking no risk
* has a seat on the exchange. Sponsors the client’s trades but depends on clients to fulfill obligations. For a sell, the client must quickly provide the stock to the broker, so the broker can transfer it to the counterparty. If the client disappears, then the broker is short and must buy on the open market to cover.
* eg: My OCBS broker was such an example, though she is subject to client risk

— DDDDDealership —
* (defining feature) Keeps inventory.
* eg MTS traders
* like a car dealership.
* takes on the opposite side to the client
* often buys at the client’s request. Could also buy without a client request, but that ties up massive capital.
* for our long positions, we maintain firm offers.
* Earns a bid/offer spread. Commission? Part of the spread.
* traders – Mostly PnL-based compensation
* Seldom shorts, and shorts only liquid securities.
* eg: I believe a bond trading desk is usually a dealership, just like a car dealership. When a client wants a bond we don’t have, we buy them on the market under house’s account, then sell to the client — virtual trade.

— PPPPPPProp trader —
* no “client” to serve, unlike other models.
* buys and holds for any length of time, but might tie up too much capital
* could accept client’s money as in a mutual fund
* eg: Hedge fund is best example

~~ flow^prop indistinguishable
client A wants to sell ABC, you bid $50, trade is done. you decide to keep it on the book for a bit as you think it’s got potential. you exit at $60 eventually. is it prop or flow? flow brought you the business, and you may not have paid as much attention to ABC company until the client brought it up. but you took a prop view…

(ADP) algo trading system @ sell-side

Refer to my “ADP” posts for background

I feel the most prominent algo engines are buy-side or prop trading systems. Example — Citi muni prop traders operate a so-called arbitrage system that publishes bid/offer prices.

If you see a sell-side system advertising itself as algo-trading, then perhaps it caters to buy-side algo engines. Example — Barc DMA. Probably an execution-algo engine, rather than an order-origination (strategy/alpha) engine.

Sell side is often a market maker, though the term is ambiguous. As market maker, you may not have the freedom to issue a lot of market orders. Example — in fixed income, a dealer often holds positions, so they maintain firm offers and bids on the market.

I feel High-Frequency trading is not the bread-and-butter business of a sellside.

(ADP)#1 acid test for broker/dealer/prop/flow…traders

When people mention agency-traders, sales-traders, position-traders, flow-trader, prop-traders, online-brokers, market-makers, dealers … half of your confusion can be cleared by A1 below.  A2 can clear half of the remaining doubts.

Q1: NEVER keep positions, therefore never a counter-party?
A1: NEVER for sales-traders, sales-brokers, salespersons, inter-dealer-brokers, Bud Fox…

Q1b: does this role take on market risk?
Q1c (Yes or no please.) does his own portfolio (if any) change due to this trade?
– If NO, then it’s a maTCH-maker, a pure-play broker such as an inter-dealer broker like espeed, or interbank broker like EBS.
– If YES, then it can be a dealer such as a maRKET-maker, or a prop trader such as a fund manager, or an agency trader, a flow trader, or a lending broker.

Q2: ALWAYS client-driven?
yes for agency traders, sales traders …
no for prop traders.

Related to Q1 —
Q8: does he keep an inventory (kind of low-risk long positions)?

Another interesting question but less significant.
Q10: Does this role earn any fee?
Q10a: client-visible fees such as Exchange fee?
Q10b: client-invisible fees such as Salesmen commission?
Q10c: does he make money by the bid/ask spread or earns a commission?

(ADP) Barings – agency trading outweighs prop trading

Most futures dealers (Barings bank etc) don’t allow their traders to initiate a trade without a client request, basically forbidding prop trading. Prop trading (using Barings’ own money) was too dangerous, and it destroyed Barings.

Barings (or any other broker, dealer) is an exchange member and sponsors client’s trades. All non-members must engage a member to trade on any exchange including NYSE.

Futures market is mostly driven by client-request -> sponsor.

Muni market is similar — client-request -> dealer.

In both cases, the bank takes positions and risks. Now I feel some stock brokers do the same.

SIMEX disallows members to finance client’s margin account.

(ADP) "broker" means..@@

The word “broker” is an overloaded term. It can take on exactly one of several very different meanings. Two of them are most common —

A) in prime brokerage, the broker sponsors all the trades done by the Hedge Fund. Broker is the counterparty to every trade.
** Broker is like a “parent” of the naughty kid. Kid can do any stupid thing, and the parent is held responsible.
** Similarly, my stock broker (OC), lending money to me and taking risks

B) a commission-based salesman, who takes no position no risk
** eg: interdealer broker like espeed, lending no money, never a counterparty to a trade, and taking no risk no position

Other meanings —
* in FX, a “broker” can mean anyone you use to access the market.
** (Contrary to the strict meaning of “broker”) Could well be a market maker i.e. a dealer

* pit traders who aren’t locals (locals are typically market makers)

* Bud Fox in the Wall Street movie, who takes no position or risk.

(This paragraph was written when we basically ignored (A) above.) I believe a pure-play broker usually takes zero or small risk, and zero or brief position. As soon as a broker buys and holds [1] a security, she acts more like a dealer.

[1] or short sells

(ADP) sales ^ trading

~~ sales ^ trading
basic question cutting through all the categories in the big ADP post —
Q: is compensation PnL-based [trading] or [sales] transaction fee?

GS and other trading houses have 2 big departments – sales vs trading.
– “Sales” means brokers, bringing in customer orders. They probably hand the order to agency traders
– “trading” includes dealership, prop trading, and perhaps agency trading.

GS makes tons of money in 1) agency trading esp. block trades, 2) dealership and 3) prop trading.

(ADP) mkt-makers

“Market-making” is an over-generalized and misused term. Many so-called market-makers publish “indicative” or fake quotes (instead of uncanceled limit orders), or one-way quotes, or wide quotes.

By strict definition, market maker of a security/instrument must maintain firm and tight bid/ask. They take market-risk to create liquidity, so they get compensated somehow. Market making is a job, a responsibility, and a business, just like policing, match making, ..

Say a seller client is asking $300 for IBM, and a buyer client is bidding $100. No deal. A partial sign of poor liquidity (see http://bigblog.tanbin.com/2011/09/liquidity-immediacy-resiliency.html). Market maker now publishes 2 quotes — asking $200.05 and $199.95 bid. Now the buyer [3] lifts the offer $200.05 + some fees. Now market maker gives in a bit to bid $199.99, as he tries to cover his short position. After some time the seller jumps and sells at that price. Market maker makes a small profit to compensate for the m-risk he assumes.

[3] Note: as market takers, buyers always get the worse of the 2 quotes, i.e. the higher one.
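The round trip in the IBM example above nets the market maker a small spread profit. A trivial Java sketch (hypothetical helper, just restating the arithmetic) using the numbers from the example:

```java
// Numbers from the IBM example above: the MM sells at his offer ($200.05),
// later covers his short at his improved bid ($199.99).
class MmSpreadPnl {
    static double pnl(double sellPx, double buyPx, int qty) {
        return (sellPx - buyPx) * qty; // spread earned, compensating the m-risk
    }
}
```
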

There’s a difference between nasdaq market maker vs a NYSE specialist. See http://www.investopedia.com/ask/answers/128.asp

After you understand the basic biz models (http://bigblog.tanbin.com/2010/11/agencyprop-tradersdealer.html), you will realize

* Market-maker is usually a dealer or a prop trader, PnL-based compensation
* when you make a market, you put house/own money at risk — exactly like prop trading
* Very loosely, a broker role is the “opposite” of market-maker. Most visible in the CBOT trading pit. Brokers take position/risk[1] very very briefly, and trade only upon client orders; market-makers always take positions and _maintain_ bid/offer.
* a dealer often plays market-maker, esp. prevalent in the Treasury/corp/muni market, and IRS/CDS markets, and FX options market and many OTC markets
* MM on a stock exchange? dealers and prop traders. If a broker plays MM, she becomes a prop trader and risks her own money.
* eg: Those locals in the pit could give one-way quotes, so not true market makers

[1] and only when clients default

(ADP) ECN match-maker ^ mkt-maker

( See also post on #1 key question) Fundamental difference between MM and ECN — position management. ECN has no inventory or position, therefore no risk to check before executing a trade.

P42,59 [[currency trading]] compares Market Maker vs ECN. (I’d say it’s Market-Maker vs MATCH-Maker)

* ECN is a typical BBBBroker model
* eg: espeed, TMC
* takes no position, no risk
* earns a match-making fee or commission
* is NOT counter party to any trade

immutables and their total costs of ownership

I always maintain that one of the practical cures for deadlock is immutable object, but there are several costs.

* every time you deep-clone, you must ensure all the objects cloned are themselves immutable.
* every method and constructor taking in an object arg must deep-clone that object.
* every method returning an object must deep-clone it. (Java strings are exempt, as they are inherently immutable.)
* deep-cloning usually requires a massive chain reaction. If MyImm class has a pointer field of type MyImm, then deep-cloning will clone a whole linked list. Immutability means object state is immutable; if your definition of “state” includes that MyImm field, then the entire linked list is part of the state of the head node! Cloning could even lead to stack overflow.

* It’s easy to overlook a detail, and unknowingly create an almost-immutable, with (possibly disastrous) consequences in production
* immutable classes are harder to extend, limiting reusability, flexibility and usefulness.
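To make the defensive-copy costs concrete, here is a minimal Java sketch (class and field names are hypothetical) of an immutable type with one mutable-typed field:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical immutable class illustrating the costs listed above.
final class ImmutableReading {
    private final double temperature;
    private final List<String> tags; // a mutable type, so it must be defensively copied

    ImmutableReading(double temperature, List<String> tags) {
        this.temperature = temperature;
        // clone the incoming arg so later mutations by the caller can't leak in
        this.tags = Collections.unmodifiableList(new ArrayList<>(tags));
    }
    double getTemperature() { return temperature; }
    List<String> getTags() { return tags; } // already unmodifiable, safe to hand out
}
```

Note the shallow copy suffices only because String elements are themselves immutable; if the elements were mutable, the chain-reaction cloning cost above kicks in.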

Database: limited usage]real time trading

“Database” and “Real-time trading” don’t rhyme!

See http://bigblog.tanbin.com/2009/03/realtime-communication-in-front-desk.html. Trading systems use lots of MOM and distributed cache.

In comparison, DB offers perhaps the most effective logging/audit. I feel every update sent to MOM or cache should ideally be asynchronously persisted in DB. I would probably customize an optimized DB persistence service to be used across the board.

Just about any update in cache needs to be persisted, because cache is volatile memory. Consider a flat file as an alternative sink.

what is kernel space (vs userland)

(sound-byte: system calls — kernel space; standard library functions — userland, often wrappers over syscalls)

Executive summary — kernel is special source code written by kernel developers, to run in special kernel mode.

Q: But what distinguishes kernel source code from application source code?
A: Kernel functions (like syscall implementations) are written with special access to hardware devices. Kernel functions are the gatekeepers to hardware, just as app developers write DAO classes as gatekeepers to a DB.

Q: Real examples of syscall source code?
A: I believe glibc source code includes either syscall source code or kernel source code. I guess some kernel source code modules aren’t in glibc. See P364[[GCC]]
A: kernel32.dll ?
A: tcp/ip is implemented in kernel.
A: I feel device drivers are just like kernel source code, though RAM/CPU tend to be considered the kernel of kernel.

My 2-liner definition of kernel — A kernel can be thought of as a bunch of (perhaps hundreds of) API functions known as “syscalls”. These internally call additional (10,000 to 100,000) internal functions. Together these 2 bodies of source code constitute a kernel. On an Intel platform, kernel and userland source code both compile to Intel instructions. At the individual instruction level they are indistinguishable, but looking at the source code you can tell which is kernel code.

There are really 2 distinct views (2 blind men describing an elephant) of a kernel. Let’s focus on run-time actions —
X) a kernel is seen as special runtime services in the form of syscalls, similar to guest calls to a hotel service desk. I think this is the view of a C developer.
Y) behind-the-scene, secret stream of CPU instructions executed on the CPU, but not invoked by any userland app. Example — scheduler [4]

I don’t think a kernel is “a kind of daemon”. Such a description is misleading. Various “regular” daemons provide services. They call kernel functions to access hardware. If a daemon never interacts with user processes, then maybe it would live in “kernel space”. I guess kernel thread scheduler might be among them.

I feel it’s unwise (but not wrong) to think of kernel as a process. Kernel services are used by processes. I guess it’s possible for a process to live exclusively in “kernel space” and never interact with user processes. http://www.thehackademy.net/madchat/sysadm/kern/kern.bsd/the_freebsd_process_scheduler.pdf describes some kernel processes.

P241 [[Pro .net performance]] describes how something like func3 in kernel32.dll is loaded into a c# application’s code area. This dll and this func3 are treated similar to regular non-kernel libraries. In a unix C++ application, glibc is linked in just like any regular library. See also http://www.win.tue.nl/~aeb/linux/lk/lk-3.html

[4] Scheduler is one example of (Y) that’s so extremely prominent that everyone feels kernel is like a daemon.

The term “kernel space” is misleading — it is not a special part of memory. Things in kspace don’t run under a privileged user.

— call stack view —
Consider a c# P/Invoke function calling into kernel32.dll (some kernel func3). If you were to take a snapshot of an average thread stack, top of the stack would be functions written by app developers; middle of the stack are (standard) library functions; bottom of the stack are — if hardware is busy — unfinished kernel syscalls. Our func3 would be in the last 2 layers.

All stack frames below a kernel API are “kernel space”. These stack frames are internal functions within the kernel_code_base. Beneath all the stack frames is possibly hardware. Hardware is the ultimate low level.

Look at the bottom-most frame: it might be a syscall. It might be called from java, python, or some code written in assembly. At runtime, we don’t care about the flavor of the source code. The object code loaded into the “text” section of the process is always a stream of machine instructions, perhaps in the Intel or SPARC instruction set.

ANY process under any user can call the kernel API to access hardware. When people say the kernel has special privileges, it means the kernel codebase mediates hardware access, written like your DAO.

how does reliable multicast work #briefly

I guess a digest of the msg + a sequence number is sent out along with the msg itself.

See wiki.

One of the common designs is PGM —

While TCP uses ACKs to acknowledge groups of packets sent (something that would be uneconomical over multicast), PGM uses the concept of Negative Acknowledgements (NAKs). A NAK is sent unicast back to the host via a defined network-layer hop-by-hop procedure whenever there is a detection of data loss of a specific sequence. As PGM is heavily reliant on NAKs for integrity, when a NAK is sent, a NAK Confirmation (NCF) is sent via multicast for every hop back. Repair Data (RDATA) is then sent back either from the source or from a Designated Local Repairer (DLR).

PGM is an IETF experimental protocol. It is not yet a standard, but has been implemented in some networking devices and operating systems, including Windows XP and later versions of Microsoft Windows, as well as in third-party libraries for Linux, Windows and Solaris.

reliable multicast – basics

First, use a distinct sequence number for each packet. When one of the receivers notices a missed packet, it asks the sender to resend ….to all receivers.

As an optimization, use bigger chunks, i.e. a window of packets. If the transmission is reliable, then expand the window size, so each sequence number covers a (much) larger chunk of packets.

These are the basic reliability techniques of TCP. Reliable multicast could borrow them from TCP.

Note real TCP isn’t usable for multicast, as each TCP transmission has exactly one sender and one receiver. I think the entire TCP protocol is based on that premise — a unicast circuit.
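The gap-detection idea above can be sketched in a few lines of Java. This is a toy receiver, NOT real PGM: it tracks the next expected sequence number, records a NAK for every missed packet, and clears a NAK when the repair data arrives (class and method names are my own invention):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of NAK-based reliable multicast receive logic.
class NakReceiver {
    private long nextExpected = 0;
    private final List<Long> naks = new ArrayList<>(); // sequence numbers to re-request

    void onPacket(long seq) {
        if (seq > nextExpected) {            // gap detected -> NAK every missed packet
            for (long s = nextExpected; s < seq; s++) naks.add(s);
            nextExpected = seq + 1;
        } else if (seq == nextExpected) {    // in-order delivery
            nextExpected = seq + 1;
        } else {                             // retransmission (repair data) or duplicate
            naks.remove(Long.valueOf(seq));
        }
    }
    List<Long> pendingNaks() { return naks; }
}
```

A real implementation would also unicast the NAKs back toward the sender and deliver buffered packets in order, which this sketch omits.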

y c++ dominates telecom industry

According to a friend in AT&T…

Call centers run on PBX (a special hardware machine with embedded software: http://en.wikipedia.org/wiki/Private_branch_exchange) plus control servers on Unix/Windows machines. C++ is used often in telecom because:
(1) there was no Java at that time 🙂
(2) legacy systems are C/C++
(3) people thought C++ gave higher throughput than Java
(4) special hardware only has C/C++ compilers
(5) old-fashioned guys only know C/C++

But Java is also finding its way into telecom. I saw that a lot of applications in Bell Labs and AT&T Labs are built in Java right now.

design patterns in software arch job interviews

Hi Youwei,

My architect job interviews seldom ask me to describe design patterns. They don't seem interested in textbook knowledge. (I feel many people can describe a lot of patterns, but it tells me nothing about their competency at using them.)

Some asked me to describe my personal contribution to my projects, my design challenges — candidate is free to mention design patterns.

The more difficult technical questions are software problem solving + low-level component design, which require more skill than a junior developer has.


synchronization ] java HashMap: non-trivial

Q: why bother to synchronize access to a linked list?
%%A: as a rule, mutable shared data between 2 threads need synchronization. Recall java volatile keyword.
%%A: Let’s consider a simplified singly linked list and removal of a middle node NodeE. Well, we just need to reseat pointers — perhaps in NodeD, right? Just make NodeD point to NodeF. Atomic operation, thread safe, right?

But what if another thread is removing NodeF? What if another thread is removing NodeD?

As a necessary but insufficient condition, I feel the composite operation must be atomic — read the pointer in NodeE, and put that address (the address of NodeF) into NodeD’s pointer, while the entire list is locked down.

What if we are operating on the beginning of the link?
What about insertion?

%%A: Therefore, a linked list remove() needs synchronization.
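To make the “composite reseat must be atomic” point concrete, here is a toy Java singly linked list (hypothetical class, not from any library) whose remove() holds a list-wide lock while it finds the predecessor and reseats its pointer:

```java
// Toy singly linked list whose remove() locks the whole list, since the
// composite operation (find predecessor, reseat its pointer) must be atomic.
class SyncList {
    private static class Node { int v; Node next; Node(int v) { this.v = v; } }
    private final Node head = new Node(-1); // sentinel node

    synchronized void add(int v) { Node n = new Node(v); n.next = head.next; head.next = n; }

    synchronized boolean remove(int v) {
        for (Node p = head; p.next != null; p = p.next)
            if (p.next.v == v) { p.next = p.next.next; return true; } // reseat under lock
        return false;
    }

    synchronized int size() { int c = 0; for (Node n = head.next; n != null; n = n.next) c++; return c; }
}
```

With two threads removing concurrently, the lock guarantees no node is silently lost when both removals touch adjacent nodes.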

Q: why bother to synchronize access to a HashMap?
%%A: one reason — because a hash map uses linked lists, and also the map can be rehashed.

Q: why is CHM read operation wait free??

clustered index looks like arrayList + sList

(based on the sybase ebook [[How Indexes Work]]) When you insert a row into a data page with 10 row capacity (2KB, 200B/row), you may get one of

* arrayList behavior — if fewer than 10 rows are present (eg 6), the new row goes into its sorted position (say 2nd), shifting all subsequent (5) rows on the same page.

* linkedList behavior (page split) — page already full. New tree node (i.e. data page) allocated. About half of the 10 rows move into new page. All previous/next page-pointers adjusted. In addition, must adjust page-pointers in
** higher index page and
** non-clustered indexes.
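A toy Java model (hypothetical, loosely following the sybase description; it ignores the index pages above the leaf level) showing the two behaviors on one data page:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy model of one clustered-index data page with 10-row capacity.
class DataPage {
    static final int CAPACITY = 10;
    final List<Integer> rows = new ArrayList<>();
    DataPage next; // next-page pointer in the page chain

    // returns the newly allocated page on a split, else null
    DataPage insert(int key) {
        int pos = Collections.binarySearch(rows, key);
        if (pos < 0) pos = -pos - 1;
        if (rows.size() < CAPACITY) {        // arrayList behavior: shift later rows
            rows.add(pos, key);
            return null;
        }
        DataPage fresh = new DataPage();      // page split: move upper half out
        fresh.rows.addAll(rows.subList(CAPACITY / 2, CAPACITY));
        rows.subList(CAPACITY / 2, CAPACITY).clear();
        fresh.next = this.next;               // adjust page-chain pointers
        this.next = fresh;
        (key <= rows.get(rows.size() - 1) ? this : fresh).insert(key);
        return fresh;
    }
}
```
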

PropertyChanged events work with a property !! a field (c#

PropertyChanged(this, new PropertyChangedEventArgs(“Age”)) // required by INotifyPropertyChanged

If Age is a field rather than a property in a Model or ViewModel class, then this event firing will fail to have any effect.

Incidentally, event firing and callback happen on the same thread — sequentially, like JTable fireXXX() and unlike invokeLater().

Further, I think the binding system invokes get_Age() and never reads a field directly. So a field would break the binding even before any event firing — the initial query-then-display would show an empty string.

reflexes about the big 5 parameters in option valuation

Introducing the cast

+ val i.e. premium — option valuation, manifested in bid/ask
+ vol — implied vol assuming 30 days to maturity
+ delta — by far the most important sensitivity. All other greeks pale in significance, though vega and gamma are important.
+ strike
+ spot — underlier spot price
– one more … leverage – ratio of spot/val , typically 2 to 200. Measures how expensive an insurance this is. See http://www.tradingblock.com/Learn/public/ShowLearnContent.aspx?PageID=28 and my blog post http://bigblog.tanbin.com/2011/10/option-premium-should-be-low-cost.html Note delta is the first derivative of val against spot, comparable to the inverse of this ratio.

For a given option, these 5+1 variables move in tandem. They are intricately linked but not by a simple math formula. When an experienced trader sees a subset of these numbers, she has a reflex about the level of other numbers i.e. their actual values. This reflex is important. Val, Vol, Delta, Spot are actively monitored.

Unrealized PnL is derived from Val. VaR depends on Delta. Major decisions are made when these “risk” numbers become unacceptable.

In this list I have excluded many well-known parameters, and introduced a few unsung heroes. I feel this is the real list of important numbers we should master. http://www.ivolatility.com/calc/ lets us see how changes in some numbers come with corresponding changes in others.

Here are some Examples from P24 [[Option Vol Trading]] –

Val = $6.50
Spot = $77
Strike = $75 ITM
Delta ~= 60%
leverage = 77/6.5 = 12 times
Vol ~= 52% according to http://www.ivolatility.com/calc/ assuming 48 days to maturity

Val = $6.50
Spot = $24
Strike = $20 ITM
Delta ~= 75%
leverage = 24/6.5 = 4 times
Vol ~= 103% assuming 84 days to maturity
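The leverage ratios quoted in the two examples above are just spot/val. A trivial sanity-check in Java (hypothetical helper, not from any pricing library):

```java
// Hypothetical helper: leverage = spot / premium (val), as defined above.
class OptionLeverage {
    static double leverage(double spot, double premium) {
        return spot / premium;
    }
}
```

For the first example, 77/6.5 ≈ 11.8 (“12 times”); for the second, 24/6.5 ≈ 3.7 (“4 times”). Delta, being dVal/dSpot, is loosely comparable to the inverse of this ratio.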

expertise demand]sg: latency imt quant/hft

Q: which skill in Singapore looks to grow in terms of salary, and less importantly, # of jobs
A: latency,

Q: which fields in S'pore has a skill shortage based on your recent observations
A: possibly latency; not so obvious in the quant field. There are many real quants in Singapore. There’s no shortage of generic c++/java.

Q: which fields in S'pore present entry barrier in terms of tech?
A: thread, latency, data structure mastery (eg: iterator invalidation); green_field_ground_up design in general; high volume market data sys design in particular;

Q: which fields in S'pore present entry barrier in terms of domain knowledge?
A: FX (…..), P/Y conversion, duration, IRS pricing, bare bones concepts of a lot of major derivatives
A: I feel a lot of employers want relevant biz experience. That experience is a few weeks (or months) of on the job learning, but (the lack of) it does cast a long shadow.

eg@3rd domain knowledge #Apama

An I-bank may hire someone from a software vendor (like Misys, SunGard, Murex …) but more likely they prefer the core engine developer, not a presales architect, according to Lin Y. The PS architect’s insight and value-add are tied to that core product (hopefully long-lasting), and not very generic.

If you remove the “unremarkable” code, I guess the __core__ of the product is a kind of winning architecture + some patented algo/optimizations. They are integrated but each is an identifiable feature. Together they constitute an entry barrier. If you don’t have these insights like the core Apama developers have, then you can’t help an ibank break the entry barrier and build a similar trading engine. The ibank is more likely to hire someone with core-engine development experience from other ibanks.

Q: What features can qualify for a patent?

%%A: Not the high level architecture, but some mid-level techniques, or “tricks” in Shaun’s words. I feel the high-level architecture may not be an entry barrier (or qualify as a patent) since in the world of trading platform there are only a small number of common architectures at the high level. However, as a job candidate and a team lead you are expected to communicate in high-level architectural lingo. The low-level jargon is harder to understand by management or by some fellow developers.

DependencyObject ^ DependencyProperty, briefly

I’m sure there are good online articles, but here’re my own 2-cents.

The 2 class names show an implicit connection. A DObj instance has (non-static) method SetValue()/GetValue(), which use the special per-instance hash table to manage the DProp values. These values “describe” that DObj instance in various aspects.

I feel the 2 “constructs” are symbiotic.

[[Pro WPF & Silverlight MVVM]] covers DObj/DProp in details.

PropertyChanged() event firing should use dispatcher

In my app, most of the time I could simply call PropertyChanged() to raise a given event but sometimes this has no effect on GUI (no errors either!) and I have to wrap it inside Dispatcher.BeginInvoke(() => {…})

This is probably because the call to PropertyChanged() happens to be on a non-UI thread. Therefore I added a defensive wrapper:

protected void BeginInvoke(Action action) {
  if (this.Dispatcher == null) throw new InvalidOperationException("no dispatcher");

  if (this.Dispatcher.CheckAccess())
    action();                             // already on the UI thread; run directly
  else
    this.Dispatcher.BeginInvoke(action);  // marshal the call onto the UI thread
}

listBox ^ listView ^ itemsControl

Inheritance – ItemsControl begets ListBox, which begets ListView.

From IC to ListBox — ListBox adds selection support (single/multiple/Extended)

LV is often used for multi-column table.
LB is usually used for single-column “table”, aka list. Therefore many simple tutorials use LB.

Both LV and LB present the data items readonly by default.

If you need no scrolling, you may not see a border around a listbox. But You can always select each item in it.

http://www.wpfsharp.com/2012/03/18/itemscontrol-vs-listbox-vs-listview-in-wpf/ points out
Sizing — IC is inflexible.
Scrollbar — not in IC

removal→iterator invalidation:STL, fail fast, ConcurrentMap

This is a blog post tying up a few discussions on this subject. It’s instructive to compare the different iterators in different contexts in the face of a tricky removal operation.

http://tech.puredanger.com/2009/02/02/java-concurrency-bugs-concurrentmodificationexception/ points out that ConcurrentModEx can occur even in single-threaded myList.remove(..). Note this is not using myIterator.remove(void).

[[Java generics]] also says single-threaded program can hit CMEx. The official javadoc https://docs.oracle.com/javase/7/docs/api/java/util/ConcurrentModificationException.html agrees.

ConcurrentHashMap never throws this CMEx. See http://bigblog.tanbin.com/2011/09/concurrent-hash-map-iterator.html. Details? not available yet.

Many jdk5 concurrent collections have thread safe iterators. [[java generics]] covers them in some detail.

As seen in http://bigblog.tanbin.com/2011/09/removeinsert-while-iterating-stl.html, STL node-based containers (including slist) can cope with removals, but contiguous containers can get iterators invalidated. Java ArrayList improves on this by letting the iterator itself perform a safe remove via iterator.remove(). I guess this is possible because the iterating “thread” can simply skip the dead node while subsequent elements shift up. Any other iterator over the same list is invalidated and hits CMEx.

–brief history

  1. STL iterator invalidation results in undefined behavior. My test shows silent erroneous result. Your code continues to run but result can be subtly wrong.
  2. In java, before fail-fast, the outcome is also undefined behavior.
  3. Fail-fast iterator is the java solution to the iterator invalidation issue. Fail-fast iterators all throw CMEx, quickly and cleanly. I think CMEx is caused by structural changes — mostly removal and insertions.
  4. CHM came after fail-fast, and never throws CMEx
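Points 2 and 3 above are easy to demonstrate in a single thread. The sketch below (hypothetical class name) shows the fail-fast CMEx triggered by myList.remove(..) mid-iteration, and the sanctioned myIterator.remove(void) alternative:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

// Demo: a single thread triggers fail-fast CMEx via list.remove(..) mid-iteration;
// iterator.remove() is the sanctioned route and keeps the modCount in sync.
class FailFastDemo {
    static boolean triggersCmex() {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4));
        try {
            for (Integer i : list)
                if (i == 2) list.remove(i);  // structural change behind the iterator's back
            return false;
        } catch (ConcurrentModificationException e) {
            return true;                      // fail-fast kicked in
        }
    }

    static List<Integer> safeRemove() {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4));
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); )
            if (it.next() == 2) it.remove(); // iterator.remove(void): no CMEx
        return list;
    }
}
```

Note list.remove(i) with an Integer arg is remove(Object), i.e. remove-by-value, not by index.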

java concurrent hash map iterator consistency

As mentioned repeatedly in other blog posts, thread safety primarily means liveness + consistency. Consistency with reality.

For any collection, counting the size of the collection should reflect a _snapshot_. On a multiprocessor machine, suppose Thread A adds 3 items while Thread B removes 100 items from a 1000-element collection, all within the same time window, and all after I start counting. Then I should eventually return a count of 1000 or 903 (or something in between). If I were to return 1003, then that’s arguably inconsistent with reality, since presumably at no instant was the collection 1003-long??

Now let’s turn to iterators. If one thread is iterating while another thread makes a structural change, one valid (consistent) result is the create-time snapshot of the underlier. Another valid result would be that snapshot + the changes, assuming all the changes fall among the not-yet-visited items. Personally, I see this as a consistent snapshot too — the snapshot taken after the changes.

What if the change happens ON a segment already visited? Iterator already “shipped” them to client, so I feel we had better ignore those changes and all future changes. Use the create-time snapshot instead??

Brian Goetz said — Iterators returned by ConcurrentHashMap.iterator() will return each element once at most and will not ever throw ConcurrentModificationException, but may or may not reflect insertions or removals that occurred after creation. No table-wide locking is needed (or even possible) to provide thread-safety when iterating the collection. Note size() does need table-wide locking.

http://tech.puredanger.com/2009/02/02/java-concurrency-bugs-concurrentmodificationexception/ points that CHM iterator is _NOT_ a snapshot iterator.
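A quick single-thread contrast (hypothetical class name) of the two iterator behaviors: a structural change mid-iteration blows up a HashMap iterator with CMEx, but is tolerated by ConcurrentHashMap’s weakly consistent iterator:

```java
import java.util.Map;
import java.util.ConcurrentModificationException;

// Returns true if a structural modification during iteration raised CMEx.
class ChmIteratorDemo {
    static boolean cmexDuringIteration(Map<Integer, Integer> map) {
        for (int i = 0; i < 10; i++) map.put(i, i);
        try {
            for (Integer k : map.keySet())
                if (k == 0) map.put(999, 999); // structural change while iterating
            return false;                       // tolerated (weakly consistent iterator)
        } catch (ConcurrentModificationException e) {
            return true;                        // fail-fast iterator detected it
        }
    }
}
```

The CHM iterator may or may not show the new key 999, which is exactly the “may or may not reflect” behavior Goetz describes.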

c++Citi (Changi Biz Park) windows IV (method hiding

Q: Base class has non-virtual float f(int); Derived class has void f(int);
Base& ref = aDerivedInstance; ref.f(333) invokes which one?

My test proved — base and derived classes’ f() can both be invoked in the same program, but via 2 distinct variables (Base vs Derived). The compiler binds by the static type of the variable, so ref.f(333) invokes the Base version: f() is non-virtual, and ref’s static type is Base. This is neither overriding nor redefinition; it’s hiding. If you only have a Derived variable, then the Base f() is hidden, regardless of whether Base f() is virtual.

If Base and Derived both define methods of the same NAME f(), then Derived either overrides/redefines or hides the Base f() functions. Unqualified lookup on a Derived object stops at Derived, though you can still reach the Base version via explicit qualification (d.Base::f(333)) or a using-declaration in Derived. So when would method inheritance actually work? Only if D doesn’t declare any method of the same NAME. As soon as it does, all B methods of that NAME are either overridden or hidden.

Incidentally, EffC++ P114 points out that the c++ compiler condones lots of ambiguities and only generates a COMPILE-time (not runtime) error when you write a call that’s actually ambiguous. The javac compiler can also complain about ambiguous method calls.

Incidentally, if you call a declared but undefined func, you get a linker error, not a compiler error. EffC++ P116.

Q: Is option delta always between 0 and 1?
A: no. A (long) call’s delta lies in [0, 1], but a (long) put’s delta lies in [−1, 0].

Q: write a c++ singleton

Q: write a reader/writer lock class given a basic mutex class. I don’t feel confident since there’s no one to help review my code.
%%A: lock release should be in the dtor. Therefore, acquire should be in the ctor (i.e. RAII).