sybase 15 dedicated cpu

In one of my Sybase servers with a large database load, we managed to dedicate one of the 16 CPU cores to a specific stored proc, so no other process could use that core.

We also managed to dedicate a processor to a specific JDBC connection.

This let us ensure a high priority task gets enough CPU resource allocation.

What’s the name of the feature?
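
If I recall correctly, this is the ASE Logical Process Manager — engine groups plus execution classes. A rough sketch from memory; the system procs are real ASE procs, but the parameter order, the object-type codes and all the names (EG_FAST, usp_reprice, pricer_login) are from memory or hypothetical, so verify against the ASE docs:

-- put engine 3 into its own engine group, and create a high-priority execution class bound to that group
sp_addengine 3, 'EG_FAST'
sp_addexeclass 'EC_FAST', 'HIGH', 0, 'EG_FAST'

-- bind the stored proc (and/or the JDBC connection's login) to the execution class
sp_bindexeclass 'usp_reprice', 'PR', 'dbo', 'EC_FAST'
sp_bindexeclass 'pricer_login', 'LG', NULL, 'EC_FAST'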

vector insertion — implicit copy-ctor frequently

When you insert your object into a vector, the original object (a B instance) is passed by value. The copy-ctor is called at least once per insert (more if the insert triggers a reallocation), as the comments below show.

    vector<B> v(3); // I expected 3 default ctor calls; actually one default ctor call and 3 copy ctor calls (pre-C++11 semantics)
    cout<<v.capacity()<<endl; // capacity = 3. Next insert would reallocate to 6 (on this implementation)
    v.push_back(B()); // temp B instance created, copied in, then destroyed. The reallocation also copy-constructs the 3 existing elements; the 3 originals are destroyed
    cout<<v.capacity()<<endl; // capacity is now 6
    v.push_back(B()); // temp B instance created, copied in, then destroyed. No reallocation
    cout<<v.capacity()<<endl; // capacity remains 6

Here is some useful instrumentation code —

#include <iostream>
#include <vector>
using namespace std;

struct B {
    static int count; // counts default constructions
    int id;
    B() {
        ++count;
        cout << count << " B()\n";
        id = count;
    }
    B(B const & rhs) :
        id(rhs.id + 10) { // +10 marks the instance as a copy
        cout << id << " copied\n";
    }
    ~B() {
        cout << id << " ~B\n";
    }
};
int B::count = 0;
ostream & operator<<(ostream & os, B const & b) {
    os << b.id << endl;
    return os; // the return statement was missing
}

3 types: Portable dnlg ] finance IT

Domain knowledge is specialized, hard-to-find, and an entry barrier. I see 3 broad categories of domain knowledge — Lingo (jargon), Math and Architecture

— 1) Lingo (Jargon) — Technical or Non-technical professionals in a given trading system must share a few hundred terms, phrases, verbs and adjectives. Each jargon term often has a superficial “face” meaning and connects with other jargon terms. The full meaning in context often requires a wikipedia page. Some Random examples that came to my mind — fixings; basis risk; option knock-in; tick DB; Basel; lockfree; move-semantic; rvalue ref.

Roughly half the jargon terms involve math. An IT guy can either memorize the derived mathematical conclusions or examine the underlying math. Random examples of such jargon include — volatility surface; yield; price sensitivities; yield’s impact on FX options; when to use stress testing vs VaR. I feel many people in IT don’t really have a firm grip on these. We go into a restaurant of financial jargon and look for fast-food. I call it fast-food culture, or quick-answer culture. Anything we don’t easily understand, we like to say we don’t need to understand. If we don’t make a conscious effort, we will always stay financial laymen. As a result, a “conscious” programmer with only 6 months in finance can understand concepts more deeply than a 5-year veteran.

Rather than “Jargon” I prefer “Lingo” — more general and less technical.

— 2) the Math part is a body of knowledge created by quants and PhDs over the past 50 years or so, perhaps starting with bond math. It needs high-school math and, later, some calculus. For many IT folks who left school 5+ years ago, even the high-school pieces are not a piece of cake. If I don’t invest hours of spare time, I won’t fully understand half the theories in bond math, which is all high-school level.

The math in finance looks simple but needs to be rigorous. I feel pricing calculations in fixed income and in options often use a precision of 5 to 10. A shallow understanding could overlook details.

Financial math is a sizable body of theory, but most trading IT developers need only the basics. If I could say that I needed 6 months to grow from zero-finance-knowledge to be competent enough to help build trading engines, then obviously my roles didn’t need a lot of financial math.

— 3) system architecture + infrastructure + best practices — components like pricing, risk analytics, high speed market data, pnl explains, stress test, visualization, exchange gateways, SOR, DMA, mark-to-market, trade capture …
It’s possible to talk about an architecture of a complete suite of components but I prefer to talk about architectures of individual components. I prefer this because architectures are vastly different in terms of volume and real-time nature.

Banks want to hire experienced developers to help them re-architect, so as to stay competitive in the “arms race”.

Architecture domain knowledge changes faster than math or jargon knowledge. Just like jargon and math, architectural knowledge is specialized and not a commodity skill, so relatively few people in finance IT have it, creating a have-vs-have-not divide among candidates. It’s difficult to gain that insight and knowledge because developers don’t need to know all the design decisions ever made in order to fix up the live system they are now supporting. The team is set up such that you are given a lot to do just to get-things-done, so you have no spare bandwidth to learn other components, yet non-trivial insight into neighboring components is necessary for an architectural understanding.

I have seen a few impeccable blue-prints that fail in practice and need major changes. Anyone can come up with great-sounding architectures but most of them are /sub-optimal/. The reason can be “hard to debug”, “inflexible”, “learning curve”… Therefore real-world, proven architectural domain knowledge is rare and valuable.

Most architectures rely on solid, time-honored infrastructure software like message-oriented middleware, RDBMS, distributed caches and grids, cross-platform RPC/ORB, thread libraries, XML. Vendors always say “we are perfect”, but good knowledge of their weaknesses and limitations is a hallmark of real architects.

Some say fixed income has more domain knowledge, but that’s true only in the math area. Equities (and FX?) HFT probably has a larger body of domain knowledge in the architecture space.

Market data systems have substantial jargon + some architecture.

Risk systems probably (based on hearsay) have all three, but I feel only a small percentage of the software developers need the math.

object, referent, and reference — intro to weak reference

see also post on pointer.

— physical explanation of the 3 jargons
A regular object (say a String “Hi”) can be the target of many references
* a regular strong reference String s = "Hi";
* another regular strong reference String ss1 = s; // remote control duplicated
* a weak ref WeakReference wr1 = new WeakReference(ss1);
* another weak ref WeakReference wr2 = new WeakReference(s);

1) First, distinguish a regular object “Hi” from a regular strong reference ss1.
* The object “Hi” is a “cookie” made from a “cookie cutter” (ie a class), with instance and static methods defined for it.
** like all objects, this one is created on the heap. Also, think of it as an onion — see other posts.
** “Hi” is nameless, but with a physical address. The address is crucial to the “pointer” below.

* strong reference ss1 is a “remote control” of type String. See blog post [[an obj ref = a remote control]]
** People don’t say it but ss1 is also a real thingy in memory. It can be duplicated like a remote control. It has a name. It’s smaller than an object. It can be nullified.
** It can’t have methods or fields. But it’s completely different from primitive variables.
** method calls duplicate the remote control as an argument. That’s why java is known for pass-by-value.

2) Next, distinguish a strong reference from a pointer.
* a regular strong reference ss1 wraps a pointer to the object “Hi”, but the pointer has no technical name and we never talk about the pointer inside a strong reference.
** the pointer uses the “address” above to locate the object
** when you duplicate a remote control, you duplicate the pointer. You can then point the old remote control at another object.
** when no pointers point to a given object, it can be garbage collected.

Summary — a pointer points to the Object “Hi”, and a Strong reference named ss1 wraps the pointer. Note among the 3, only the strong ref has a name ss1. The other strong ref s is another real thingy in memory.

3) Now, weak reference. A weakref wr1 wraps a pointer to an obj (ie an onion/cookie), just like a strong ref. In a weak reference (unlike a strong reference) the pointer is known as a “referent in the reference”. Note wr1’s referent is not ss1, but the object “Hi” referenced by ss1. That’s why “Hi” is the target of 4 pointers (4 references). This object’s address is duplicated inside all 4 variables (4 references).

“Referent in a reference” is the pointer to object “Hi”. If you “set the referent to null” in wr1, then wr1 doesn’t point to any object in memory.

Unlike strong references, a weak reference is sometimes mentioned like an object, which is misleading.

Q: Is the weak ref wr1 also an object? We explained a strong ref ss1 differs completely from the object “Hi”.

A: Yes and no. You create it with new WeakReference(..), but please don’t treat it like a regular object like “Hi”. In our discussion, an object means a regular object, excluding reference objects. [[Hardcore Java]] page 267 talks about “reference object” and “referenced object” — confusing.
A: java creators like to dress up everything in OO, including basic programming constructs like a thread, a pointer, the VM itself, a function (Method object), or a class (Class object). Don’t overdo it. It’s good to keep some simple things non-OO.

A: functionally, wr1 is comparable to the remote control ss1. Technically, they are rather different
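
A minimal, self-contained sketch of the 4 references above. It uses new String(..) so the referent is not an interned literal (a literal would never be collected):

import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        String s = new String("Hi");                                  // strong ref #1 -> the "Hi" object
        String ss1 = s;                                               // strong ref #2 -- remote control duplicated
        WeakReference<String> wr1 = new WeakReference<String>(ss1);   // weak ref #1
        WeakReference<String> wr2 = new WeakReference<String>(s);     // weak ref #2

        System.out.println(wr1.get());       // "Hi" -- referent still strongly reachable

        s = null;
        ss1 = null;                          // nullify both remote controls
        System.gc();                         // only a request -- no guarantee the collector runs
        Thread.sleep(100);

        System.out.println(wr1.get());       // likely null now -- weak refs alone cannot keep "Hi" alive
        System.out.println(wr2.get());
    }
}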

G10 fields in NOS ^ exec report

—- top 5 fields common to NOS (NewOrderSingle) and exec report —
#1) 22 securityIDSource. Notable enum values
1 = CUSIP
5 = RIC code
*** 48 SecurityID, Requires tag 22 ==NOS/exec
*** 55 optional human-readable symbol ==NOS/exec

#2) 54 side. Notable enum values
1 = Buy
2 = Sell

#3) 40 ordType. Notable enum values
1 = Market order
2 = Limit order

#4) 59 TimeInForce. Notable enum values
0 = Day (or session)
1 = Good Till Cancel (GTC)
3 = Immediate Or Cancel (IOC)
4 = Fill Or Kill (FOK)

—- other fields common to NOS and exec report
*11 ClOrdID assigned by buy-side
*49 senderCompID
*56 receiving firm ID
*38 qty
*44 price

—- exec report top 5 fields
* 150 execType
* 39 OrdStatus. Notable enum values
0 = New
1 = Partially filled
2 = Filled

* 37 OrderID assigned by sell-side. Not in NOS. Less useful than the ClOrdID
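
To tie the tags together, an illustrative NewOrderSingle using the fields above. It is not wire-accurate: values are made up, and mandatory fields like 9 BodyLength, 60 TransactTime and 10 CheckSum are omitted.

8=FIX.4.4|35=D|49=BUYSIDE1|56=DEALER1|11=ORD20110715-001|55=IBM|48=459200101|22=1|54=1|38=100|40=2|44=101.25|59=0

(35=D marks a NewOrderSingle; the matching exec report would come back as 35=8 carrying 150 and 39, plus the sell-side 37 OrderID, echoing 11 ClOrdID.)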

15,000 quotes repriced within a minute

One of my bond pricing engines could price about 15,000 offers/bids in about a minute. 4 slow lanes to avoid
 
1) database persistence is done asynchronously by gemfire write-behind.

2) offers/bids we produce must be verified by another system, which officially owns the OutgoingQuote table. The verification takes a long time. We avoid that overhead by pricing all the offers/bids in gemfire, sending them out in a batch, then waiting for the result. The one-minute figure excludes the verification.

3) all reference data is preloaded into gemfire, so no more disk I/O.

4) minimal serialization overhead, since most of the objects needed are in local JVM.

In contrast, a more complex engine, the mark-to-market engine needs a few minutes to price 15,000 positions. This engine doesn't need real time performance.

## if an event-driven trading engine is!! responding

  • Culprit: deadlock (a probe sketch follows this list)
  • Culprit: all threads in the pool are blocked in wait(), lock() or …
  • Culprit: bounded queue is full. Sometimes the thread that adds tasks to the queue is blocked while doing that.
  • Culprit: in some systems, there’s a single task-dispatcher thread like the Swing EDT. That thread can sometimes get stuck.
  • Suggestion: dynamically turn on verbose logging in the messaging module within the engine, so it always logs something to indicate activity. It’s like the flashing LED on your router. You can turn on such logging via JMX.
  • Suggestion: snoop
  • Suggestion: for tibrv, you can easily start a MS-windows tibrv listener on the same subject as the listener inside the trading engine. This can reveal activity on the subject
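
For the deadlock culprit, a small probe using the standard java.lang.management API, which could be triggered from inside the engine (e.g. via JMX) in addition to jstack / kill -3:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockProbe {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();            // null if no threads are deadlocked
        if (ids == null) {
            System.out.println("no deadlock detected");
            return;
        }
        for (ThreadInfo ti : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
            System.out.println(ti);                         // stack trace + which lock is held by whom
        }
    }
}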

commitment between contractor and client – wallSt

“Professionalism” is the hallmark of a decent Wall Street IT contractor. Commitment? Not much, either way between contractor/client.

Stark reality is that clients have no commitment to (or mercy for) a contractor. No-commitment is a key reason for the high hourly rate. In contrast, the same bank does have real but limited commitment to employees. There is some effort to provide a stable job to employees but the business’s own survival always comes first. In the case of contractors, not even that limited effort is ever made.

Banks pay that high rate for time-to-market.

For a contractor, it’s family first, professionalism and skill-improvement 2nd, commitment …no-place.

python for-loop: string,file,dict,args,dir …

Following the Unix philosophy, Python’s for-in loop is a simple idea (iterator) pushed to the max. It supports
– iterating chars in a ….. string
– iterating lines in a …. file
– iterating integers in a range() or xrange()
– iterating …. keys() values() in dict —
– iterating …. key/value pairs in a dict.items()
– iterating sys.argv on command line
– list, tuple
– retrieving pairs from list-of-pairs — idiom ## this example also illustrates defaultdict(list)

>>> from collections import defaultdict
>>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]
>>> d = defaultdict(list) # construct empty defaultdict; "list" is the default_factory, so a missing key gets a fresh empty list
>>> for k, v in s:
...     d[k].append(v)
...
>>> d.items()
[('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])]

More generally, any object supporting iteration can be used in a for-loop. Here’s an example illustrating some of them.

import re, sys
from os import walk

printed = {}
for (path, dirs, files) in walk("c:\\"):
    for filename in files:
        if not re.search("\.py$", filename): continue
        if not printed.has_key(path):
            print " path = " + path
            printed[path] = True

        for line in open(path + '\\' + filename):
            if re.search('^\s*class\s', line): print filename + ':\t' + line,

%%FX IV – math

Q: Suppose you converted home currency (USD) into GBP and JPY. Average rate is computed as net short position in home currency (say 17,982,000 USD) divided by net long position in a foreign currency (say 10m GBP), in a single account. Same for the JPY average rate, but in another account. But what if you did some cross trades between GBP and JPY? How would you compute your GBP average rate?
%%A: I would need to work out how much net GBP was converted to JPY

IV <– Q: if a cross pair AAA/BBB can be priced using EUR, and also can be priced using USD, then what’s the algorithm to choose the correct pricing? Choose USD if it has a tighter spread?
%%A: I feel it’s possible to go with EUR and publish a quote with a tri-arb loophole in it. I feel the bid quote on the cross must be the safest bid, and the offer quote on the cross must be the safest offer.

Note ECN’s don’t generate cross rates. Only market-makers (dealers) do that. It’s their responsibility to check for triangular-arb loopholes.

%%FX IV

Q772: does your system give your clients margin accounts? Do you let clients margin-trade FX options, FX futures or FX spot?
Q1802: is your system Retail-facing or Institutional-facing, an interbank trading system or prop trading?

Q: since NY traders often need to hand over the book to Tokyo traders, do they have different base currencies in their respective trader accounts?
%%A: Tokyo has a caretaker/baby-sitter trader for each NY account.  He babysits for the absent parents.

Q1234: how do you detect tri-arb (Triangular Arbitrage)? Do both outgoing quotes by your traders and incoming quotes need to be checked? http://forexmentalist.com/forextrading/forex-arbitrage-trading-a-short-lived-trading-strategy/ is an introduction.
Q987: what live feeds do you use?
Q2323: typical size of your trades?
Q8792: how many cross currency pairs are actively traded on your system?
Q623: Beside the crosses, how many currency pairs are actively traded on your system?  Do these pairs always have USD as one of the 2 currencies?
Q: how many users do you support? Do they use website or WPF or swing or Quartz GUI?
Q: which ECN or interbank trading venues do you connect to?

Q: what’s the typical bid/ask spread on a non-cross currency pair?
A: 2-3 pips for top tier clients, and 20-40 pips for retail. I believe the dealer makes little if any profit on these trades, but they make more from the clients on other deals such as FX fwd, options or lending.
A: 1-2 pips for EUR/USD.

Q: is your system buy-side or sell-side?
(I believe fund managers, asset management firms are all buy-side. I guess Retail online trading websites are probably buy-side too.)

Q: typically how many percent of the trades are voice trades, and how many percent electronic?
(I know many big banks still have voice trades.)

Q: how many types of FX options are processed in your system? Barrier options, Binary options.
Q: what’s the process of FX option trading?
Q: what’s no dealing desk?
Q: what’s ledger balance?
Q: for a fx option, What’s strip? Strip leg?

A1234 (from a veteran): check the quote frequency of crosses. For example, if the EURJPY update rate is the sum of the EURUSD and USDJPY update rates, then that could mean someone is using EURUSD and USDJPY to make a market in EURJPY.
A772: I have seen real examples of retail traders using 50:1 leverage in FX spot trading, but not sure about institutional clients.
A8792 (Piroz): 5. Most of the active pairs are non-crosses.
A8792 (ZJW): 20-30 is common, and often high volume
A623 (ZJW): 200 – 300 is common in a large bank
A1802 (P): Inst sell-side system with about 700 web users.
A987 (P): Bloomberg, FXAll, Reuters
A987 (QEMS): Bloomberg

%%FX IV – IT

[L]Q: describe some multithreading challenges in your system
Q: where is the performance bottleneck of your system?
A: in the space of trade matching or “order-book”, there’s a single thread responsible for EUR/USD matching. I think this is the only thread to *remove* items from the EUR/USD order book. Allowing 2 threads to remove is probably impractical. Is this a bottleneck? I don’t know.
A: for FX options, it’s risk/PnL real time update across a large number of positions.
A: tiered pricer. See blog post.
A: Message volume and throughput. For multi-threaded processes, internal state tracking and event trigger sequences would be quite tricky.

Q: what are the functional components in your trading platform? I guess trade booking, quoting, end-of-day risk batch and PnL batch, market data gateway, connectivity to interbank trading venues i.e. FIX engine to send/receive orders and quotes?

Q: which database tables have a history table behind them and why?
(I know Trade, Quote and Position need history tables so the main table can be kept small and fast.)

[L] Q: how is tick data processed?

%%FX IV – processing

[L] Q: what database tables hold unrealized PnL data? I guess it’s Position table
[L] Q: how are general ledger reports generated from unrealized PnL tables?
IV <– Q: how long is a quote (in response to a RFQ) good for?
A: For an extremely competitive quote, it could be good for 1 minute only.

Q: I know derivative instruments have expiration dates, but must all CASH trades be closed sooner or later?
(Someone said you must close all your trades by converting back to your base currency, but I was told it’s possible to keep a long position in a non-base currency forever, just like you keep an IBM position forever.)

Q: after an open trade is closed, does it disappear from the Position table?
IV <– Q: is there any real time risk data presented to traders?
Q: if a trading account starts out with USD 100m, and after a month becomes USD 50m + EUR 30m, how is unrealized PnL calculated?
(I think each account must choose 1 base currency. All positions are converted to it when computing unrealized PnL)

[L] Q: how is settlement done? Using some CounterPartyAccount table?

%%FX IV – volume stats

Q6601: how many quotes do you publish each day, and how many RFQ do you get each day?
Q3244: On a typical incoming market data feed, how many messages a day?

A6601 (Pz): possibly a few thousand RFQ. There are probably fewer RequestForStreams

A3244: for a large ECN, hundreds of millions of incoming quotes a day, or 30,000 messages/sec/client, probably aggregate from liquidity providers.
A3244 (Pz): given the tiered pricer, we act on not more than 10msg/sec/currencyPair. (Presumably more are received but only this many trigger outgoing messages.)
A3244 (QEMS): Peak load was 30,000 messages per minute per client, but the GUI gets refreshed only 4 times per second. Only a few clients… 6 with GUI and 1 headless client.

low-latency trade execution in a bond mkt-maker – suggestions

(I think dealership trading desks include most bonds, IRS, FX options, FX cash, CDS, OTC stock options…)

After saving the trade into the database, all post-trade messages can be delegated to a task queue for a background thread (like the swing event queue). These messages inform downstream[1] systems, so they can update their GUI [2] and database in real time. These post-trade messages can’t really fail (if they fail we just have to resend). These are one-way messages. So they don’t need to be on the critical path.

[1] could be internal modules of the same trading engine or external systems owned by other teams like commissions:). Either way, such a downstream always runs in a separate process or machine.
[2] Note these GUI aren’t web pages, but WPF, Swing or similar auto-updating screen.
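
A minimal sketch of that hand-off (class and method names hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TradeCaptureService {
    // single background thread, analogous to the swing event queue
    private final ExecutorService postTradeQueue = Executors.newSingleThreadExecutor();

    public void onNewTrade(final Trade trade) {
        saveToDatabase(trade);                    // critical path -- must complete before the next trade
        postTradeQueue.submit(new Runnable() {    // off the critical path: one-way, resend on failure
            public void run() { publishToDownstream(trade); }
        });
    }

    private void saveToDatabase(Trade t) { /* JDBC insert */ }
    private void publishToDownstream(Trade t) { /* MOM publish */ }

    static class Trade { }
}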

Q: Must position be updated in the critical path, before we process the next trade?
A: For a dealer, yes I think so. If Trade 1 depletes inventory, then we must reject next trader-sell.

[11]traction[def1]^learning curve gradient

Context — learning a new software language, API, entire server (OS, DB..) or toolkit, with non-trivial concepts embedded therein.

The more common pattern is the learning curve. The initial gradient is often steep, as you pick up speed quickly. After 6 months (or 1, or 12..) it flattens and tapers off and you experience diminishing returns.

A 2nd pattern is “gaining traction” . For the first 4 weeks (or 1 or 20..), you spend a lot of time reading and experimenting but without growing confidence…

  1. after a while, you start to connect the dots via thick->thin, often in a series of incremental breakthroughs.
  2. thick -> thin is not merely (superficial) accumulation of knowledge
  3. you often need perseverance and sustained focus. See focus+engagement2dive into a tech topic#Ashish
  4. knowledge gap build-up above the new entrants
  5. high ROTI
  6. high retention rate
  7. gaining-traction is opposite of wheel-spinning

I experienced and overcame this wheel-spinning process in ..

– swing
– C++, – java, c#
– python
– JMS, RV
– drv pricing
– options
– yield
– IRS

Solution?

A related pattern is engagement

fx pairs – 4 + 3

the seven most liquid currency pairs in the world, which are the four “majors”:

  • EUR/USD (euro/dollar)
  • USD/JPY (dollar/Japanese yen)
  • GBP/USD (British pound/dollar)
  • USD/CHF (dollar/Swiss franc)

and the three commodity pairs:

  • AUD/USD (Australian dollar/dollar)
  • USD/CAD (dollar/Canadian dollar)
  • NZD/USD (New Zealand dollar/dollar)

These currency pairs, along with their various combinations (such as EUR/JPY, GBP/JPY and EUR/GBP), account for more than 95% of all speculative trading in FX.
Read more: http://www.investopedia.com/articles/forex/06/SevenFXFAQs.asp

Barc eq drv IV

Q: can you create your own thread pool?

Q: if an exception is thrown on a thread, will other threads get affected?

Q: how do you implement a scheduling engine embeddable in a client JVM? The entry point is a static/nonstatic schedule(Runnable, long initialDelay). A bonus feature — cancel a given task.
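
Not necessarily what the interviewer wanted, but a minimal sketch built on java.util.concurrent:

import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MiniScheduler {
    private static final ScheduledThreadPoolExecutor POOL = new ScheduledThreadPoolExecutor(2);

    /** entry point: schedule(Runnable, initialDelay in millis) */
    public static ScheduledFuture<?> schedule(Runnable task, long initialDelayMillis) {
        return POOL.schedule(task, initialDelayMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        ScheduledFuture<?> handle = schedule(new Runnable() {
            public void run() { System.out.println("fired"); }
        }, 500);
        handle.cancel(false);   // the bonus feature: cancel a given task before it runs
    }
}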

datachange && auto update-stat in sybase 15

— based on P22 [[ new features guide to ASE 15 ]]

The holy grail — let users determine the objects, schedules and datachange thresholds[1] to automate the stat-update process.

[1] “update only when required”

– datachange is a function used in “select datachange(….”
– the datachange function returns a percentage — what percent of the data changed, due to CUD (create/update/delete).
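
A hedged example — the argument list (object name, partition name, column name) is from memory, so verify against the ASE 15 docs; 'titles' is just a demo table name:

select datachange('titles', null, null)   -- % of the table changed by CUD since stats were last updated
update statistics titles                  -- run only when that percentage crosses your threshold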

real world high-volume mkt-data gui – tips

Suppose a given symbol (say EURUSD) bid/ask floods in at 1000 updates/sec (and last-executed at a fraction of that messaging rate). We need to display all of them in a table. We also need to update some statistics such as moving average, best bid/ask.

Suppose we don’t have the option to skip some quotes and periodically poll the data source [1].

I feel this is technically impossible. It’s important to identify unrealistic requirements.

I feel if an (institutional) client wants all the 1000 updates in real time, she should get a “bank feed”, rather than rely on the GUI we build for them.

[1] A common technique to display a feed with more than 1 update/sec.
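
For the record, a minimal sketch of the conflation technique in [1] (class names hypothetical): the feed thread overwrites a latest-value cache at full speed, and the GUI polls it a few times a second on the EDT.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.swing.SwingUtilities;

public class ConflatingQuotePanel {
    private final Map<String, String> latest = new ConcurrentHashMap<String, String>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    /** called by the market data thread at 1000+ updates/sec; only the latest quote per symbol survives */
    public void onQuote(String symbol, String bidAsk) {
        latest.put(symbol, bidAsk);
    }

    /** GUI refreshes 4 times a second, on the EDT */
    public void start() {
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { refreshTable(); }
                });
            }
        }, 0, 250, TimeUnit.MILLISECONDS);
    }

    private void refreshTable() { /* copy 'latest' into the table model and fire table events */ }
}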

FX/Eq trade && notional daily volume – 2011 statistics

2011 – Morgan … FX spot + futures is about 100k – 200k trades
2011 – BA FX everything is about 50k trades
2011 – Hotspot FX spot is between 10k to 100k trades. Average Notional USD 66b
2011 – Another ECN I know does tens of thousands of trades a day in FX spot. Average Notional USD 11b
2011 – A big European bank’s Equities desks – about 500mil orders a day. 50 microsec/order
2011 – another big European bank’s Equities (mostly cash but also fut/op) – about 1 – 3 mil orders executed/day, typically with 2 fills for each client order.

binary^European options – parity relation

I confirmed with my quant friend – the European call valuation formula by BS consists of 2 terms like “Term1 – Term2”

C = S*N(d1) – K*(e^-rt) N(d2)

Term1 is the valuation of an asset-or-nothing binary option.
Term2 is the valuation of a cash-or-nothing binary option paying K in cash.

In other words, C = AON – CON, where the cash-or-nothing pays K (not $1) at expiry.

Now, before maturity, does the above equality hold?

N(d2) is the (risk-neutral) probability of the option finishing ITM, which is “easy to derive”.
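
A worked decomposition of the terminal payoff (1{..} is the indicator function):

(S_T – K)+ = S_T * 1{S_T > K} – K * 1{S_T > K}

Pricing the two legs under BS gives S*N(d1) for the asset-or-nothing leg and K*(e^-rt)*N(d2) for the cash-or-nothing leg paying K, which are exactly the two terms above. Since the terminal payoffs match leg for leg, the usual no-arbitrage argument says the relation also holds at any time before maturity.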

BS-M vs naive model on stock price changes

Q: Say IBM closes at $100 today (S0). What’s the “distribution” of tomorrow’s close S1?

First, note S1 is a random variable and has a distribution (or histogram if you simulate 1000 times).

The naive model says it’s equally likely to go up 20% or down 20%. BS says it’s equally likely to go up 25% or down 20%, because

   log(1+25%) and log(1-20%) have identical values (ignoring +/-…).
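
Quick check: 1.25 = 1/0.8, so log(1.25) = -log(0.8) ≈ 0.223. The up-25% and down-20% moves are exact mirror images in log space.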

BS basically says log(price relative) is normally distributed.

Obvious flaw of the naive model — it assumes IBM is equally likely to go up 200% or down 200% … to negative!

Many people say BS assumes “underlier return is normal” but all in-depth articles say “log of *** is normally distributed”. P402 of CFA textbook on stats says “continuously compounded return” is defined as log(price relative).

25G/table – sybase comfort zone

Many people feel Sybase is unsuitable for large tables. How about a 25GB table?

I worked with a few post-trading systems (commissions, trade-level revenues, settlement…), where each table’s data occupies 10 – 25GB, with another 10 – 25GB for all indices of that table. Each row is typically 1 – 2KB, fitting within a Sybase data page, so such a table typically holds 10+ million rows.

My Sybase trainer said Sybase is faster than oracle/db2 for small tables below 100mil rows.

My database colleagues also feel Sybase is fast with 10-25GB/table.

JVM tuning is 80% memory tuning, which is 80% JGC tuning

Look at real-time Java documentation. I feel most of the design effort went into memory optimization.

I once read an article on Weblogic server tuning. The JVM tuning section is 90% on memory tuning. Threads are a much smaller section.

Finally, I had a long discussion with an Oracle/BEA/Sun consultant. The primary focus of JVM tuning is GC overhead and long pauses.

how time-consuming is your pricing algo

I would say “at most a few sec” in most cases for bonds with embedded options [1]. My bond trading systems typically reprice our offers and bids in response to market data and other events. There can be a lot of these events, and the offers go out to external multi-dealer brokers as competitive offers, so delay is a minor but real problem. Typically no noticeable delay in my systems, due to the fast repricer.

[1] though OAS and effective duration could take longer; those aren’t part of basic pre-trade pricing (??)

Post-trade risk valuation and low-volume pre-trade pricing can afford to be slow, but in my systems, there are typically 10k – 50k positions, so each position must not be too slow.

In another system I worked on, we ran a month-end market-value pricer, probably using published reference rates. No simulation required.

My friend’s friend briefly interfaced with an FX option system, where a single position in a complex derivative could take 5 – 10 minutes to price — in pre-trade. Traders submit a “proposed deal” to the pricer and wait for 5-10min to get the price. Traders then adjust this “auto price” and send it out. I would guess the pricer is simulation based and path-dependent.

If pricer takes the firm’s entire portfolio as input to evaluate the proposed deal, then it qualifies as a pre-trade risk engine.

my own custom cell renderer-cum-editor — simple, light-weight

Note a single instance of text component is shared by all cells. The text component needs no long term memory. The existing text value is always overwritten by “value” when calling get*Component() methods. We only need its transient memory for getCellEditorValue().

Thread safety — object state i.e. text value is modified on EDT only.

Input validation — add document listeners — textComp.getDocument().addDocumentListener(…)

import java.awt.Component;
import javax.swing.AbstractCellEditor;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.JTextField;
import javax.swing.table.TableCellEditor;
import javax.swing.table.TableCellRenderer;
import javax.swing.text.JTextComponent;

class CustomRenderer extends AbstractCellEditor /////// avoid DefaultCellEditor
        implements TableCellRenderer, TableCellEditor {
    JScrollPane scrollPane;
    JTextComponent textComp;

    public CustomRenderer() {
        textComp = new JTextField(); ///////////// or text area
        scrollPane = new JScrollPane(textComp); // if text area
    }

    public Component getTableCellRendererComponent(JTable table,
            Object value,
            boolean isSelected,
            boolean hasFocus,
            int row, int column) {
        System.out.println(System.identityHashCode(textComp) + " render (b) has — " + textComp.getText());
        textComp.setText((String) value);
        System.out.println(System.identityHashCode(textComp) + " render (a) has — " + textComp.getText());
        return scrollPane;
    }

    @Override
    public Component getTableCellEditorComponent(
            JTable table,
            Object value,
            boolean isSelected,
            int row,
            int column) {
        System.out.println(System.identityHashCode(textComp) + " editor (b) has — " + textComp.getText());
        textComp.setText((String) value);
        System.out.println(System.identityHashCode(textComp) + " editor (a) has — " + textComp.getText());
        return scrollPane;
    }

    @Override
    public Object getCellEditorValue() {
        return textComp.getText();
    }
}

row sorter – swing fundamentals #my take

A row sorter instance holds
1) an all-important “row index” mapping, somewhat similar to the “col index” translator in TableColumnModel,
2) the list of sort keys

The row sorter updates the mapping whenever the underlying content changes or the sort key list changes. Javadoc says “RowSorter’s primary role is to provide a mapping between two coordinate systems: that of the view (for example a JTable) and that of the underlying data source, typically a model.”

One sorter has multiple sort keys.
One sorter covers all the columns — all potential sort columns

public List getSortKeys() – returns the current sort key list
public setSortKeys()

RowSorter needs to reference a TableModel. JTable also has a reference to the model. RowSorter should not install a listener on the model. Instead the view class will get model events, and then call into the RowSorter. For example, if a row is updated in a TableModel, JTable gets notified via the EDT event queue, then invokes sorter.rowsUpdated(), which is a void method. The rowsUpdated() internally refreshes the row sorter’s internal mapping. I believe after this method returns, JTable would query the row sorter to re-display the rows in the new order.

Because the view makes extensive use of the convertRowIndexToModel() and convertRowIndexToView(), these methods need to be fast.

When you click a header, mouse click handler calls into the sorter’s instance method toggleSortOrder() to
1) change sort key list,
2) then update internal mapping
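
A minimal usage sketch with the standard TableRowSorter:

import java.util.Arrays;
import javax.swing.JTable;
import javax.swing.RowSorter;
import javax.swing.SortOrder;
import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableRowSorter;

public class RowSorterDemo {
    public static void main(String[] args) {
        DefaultTableModel model = new DefaultTableModel(
                new Object[][] { { "IBM", 100 }, { "MSFT", 30 }, { "GS", 140 } },
                new Object[] { "Symbol", "Price" });
        JTable table = new JTable(model);

        TableRowSorter<DefaultTableModel> sorter = new TableRowSorter<DefaultTableModel>(model);
        table.setRowSorter(sorter);   // the view, not the sorter, listens to model events

        // one sorter, one list of sort keys, covering any column
        sorter.setSortKeys(Arrays.asList(new RowSorter.SortKey(1, SortOrder.DESCENDING)));

        // after sorting, view row 0 may map to a different model row
        System.out.println("view row 0 -> model row " + table.convertRowIndexToModel(0));
    }
}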

#1pitfall in return-by-ref: pointee lifetime

See the post on using out parameter to return by ref

Q: Return by reference is very common in C++, but there’s a common pitfall that everyone must always remember. What is it?
A: pointee object’s lifetime (not “scope”).

Q2: common, practical safeguards?
A: return a field by reference, where the host object has a longer lifetime
A: return a static object by reference, either class-static, global variables, or function-static. P150 [[NittyGrittyC++]] has a useful example.
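
A small sketch of the pitfall and the two safeguards (hypothetical classes):

#include <iostream>
#include <string>

// BAD: returns a reference to a local; the pointee dies when the function returns
// const std::string& badName() { std::string s("temp"); return s; }   // dangling reference

class Account {
    std::string name_;
public:
    explicit Account(const std::string& n) : name_(n) {}
    // Safeguard 1: return a field by reference; valid as long as the host Account is alive
    const std::string& name() const { return name_; }
};

// Safeguard 2: return a function-static object by reference; it lives until program exit
const std::string& defaultCcy() {
    static const std::string ccy("USD");
    return ccy;
}

int main() {
    Account a("prop desk");
    std::cout << a.name() << " " << defaultCcy() << std::endl;
}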

put spread – a frequently useful option strategy

(Based on a Barron's 6/20/2011 article) If you anticipate a stock (or ETF) depreciation in a few months, you can sell an OTM call, then use the /proceeds/ to finance a “put spread”.

Eg: iShares Trust MSCI EAFE Index Fund (Ticker EFA). In June, the ETF was at 58.
– sell Jul 60 call for a premium
– buy Sep 56 put, which costs more than the other put, and needs financing from the call's proceeds
– sell Sep 52 put

Note, as usual, all the calls and puts are OTM — in the spirit of term insurance, where premium should be low-cost to be efficient.

JTable renderer to check old/new cell values then set color

I feel there are many broken designs, and a few working designs.

If the requirement is needed only upon user input (not during background update like MOM), then property change event or cell editor could probably pass old/new values to cell renderer.

But let’s assume the requirement is that all changes (by user or behind the scenes) be covered. Now, how many ways can the underlying data change? Only 2 — either by setValueAt() or by a direct write to the underlying structure. So I’d make the underlying data structure (usually 2D) private and provide a setter as a single choke point. In this setter I would save the old value into another 2D structure. The cell renderer has access to the JTable, so it can retrieve the table model, and access the other 2D structure.

Note you have to install your renderer as the default for Integer.class (not Object.class) if the column is integer.

Note you must save the default background colour early. That color object is a field of the renderer instance. The same renderer instance is reused and persisted. Once you call renderer.setBackground(), that field is changed forever (and the previous color object becomes unreachable), so you must save the before-value.

What if we force all updates to go through setValueAt(), including MOM updates? Heavy load on the EDT, given the high volume of MOM. Hard truth — some updates to the model must happen off the EDT. However, fireXXX() must always run on the EDT. [[java concurrency]] says “data model fireXxx methods always call the model listeners directly rather than submitting a new event to the event queue, so the fireXxx methods must be called only from the event thread.”

4 infrastructure features@Millennium

 swing trader station + OMS on the server-side + smart order router over low-latency connectivity layer

* gemfire distributed cache. why not DB? latency too high.
* tibrv is the primary MOM
* between internal systems — FIX based protocol over tibrv, just like Lehman equities. Compare to protobuf object serialization
* there’s more advanced math in the risk system; but the most demanding latency requirements are on the eq front-office systems.

dv01 ^ duration – software algorithm

Q: Do dv01 and duration present the same level of software complexity? Note most bonds I deal with have embedded options.

I feel the answer is no. dv01 is “simulated” with a small (25 bps?) bump to yield… Eff Duration involves complex OAS. See the Yield Book publication on Durations.

In AutoReo, eff duration is computed in a separate risk system — a batch system… No real time update.

By contrast, eq option (FX option probably similar) positions need to have their delta and other sensitivities updated more frequently.

"experienced" developer in trading system..meaning@@

Experience (of an old timer) in “our” system means
– #1) I can rely on this person to help me, if he’s willing
– #2) local sys knowledge
– problem solver track record
– extremely fast turnaround; knows exactly where to tweak to make it work
– knows what (not) to test
– knows what users (don’t) want;
– knows the limitations of a hell lot of paper designs that don’t work in this context
– can get most assignments done

— To the hiring side —
Experience (of a candidate in the candidate pool) in a similar investment-bank means
– #1) proven track record, so my boss won’t fire me if I hire this candidate unwisely or unsuccessfully
– knows how this kind of place breathes and sucks

Experience (of a developer) in a similar system means
– adequate knowledge of standard tech tools in this space. Best eg — threading
– hopefully 50% familiarity with our tech tools used here, if 100% is the level of my average team members.
– possible knowledge of the limitations of some tools
– possible knowledge of unique capabilities of some tools
– knows the trading conventions and how these things work
– knows the jargon

[11]how to incur 1 STW JGC/day: #2 heap sizing

(See other post on 1 GC/day)
The UBS eq system in Stamford claims to incur a single GC a day. They probably need enough RAM for the heap. A 32-bit JVM gives you about 3GB of heap, probably insufficient.

Q: Just How much RAM is sufficient?
A: In trading, 16G is a common figure, all dedicated to a single JVM.
A: In a risk engine, each machine has 512GB but runs multiple JVMs.
A: Here’s an answer from an Oracle/BEA tuning consultant (XiaoAn)+ my own input.

1) decide if caching is suitable.
1b) if yes, then make sure caches have size-control and leak-control

2) eliminate obvious memory hogs, reuse objects and pools if possible
3) Profile your engine in load test.
4) tentatively release to production. If it actually uses much higher memory, then investigate why the estimate failed. Possible reasons

– leaks
– cache

Many memory profilers can count how many instances of a particular class (like String.java) there are. If your own class has too many instances, you can add instrumentation to the ctor, as sketched below.
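
For example (a hypothetical class with a home-made counter):

import java.util.concurrent.atomic.AtomicLong;

public class Order {
    // counts constructions only; a profiler or heap dump is more reliable for live-instance counts
    static final AtomicLong CREATED = new AtomicLong();

    public Order() {
        CREATED.incrementAndGet();
    }
    // log Order.CREATED.get() periodically, e.g. from a JMX bean or a housekeeping thread
}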

basic JVM tuning – 2 focuses

JVM tuning is part of system tuning, often a small part. I feel DB tuning and application tuning often/usually provide better ROI. In a sense, the JVM is already very well-tuned compared to application code (in any language). It’s worthwhile to keep the ROI in mind. After you make sure JVM performance metrics look reasonable, further effort often yields diminishing ROI. Better to focus on other parts of the system.
It’s critical to establish measurable targets, otherwise we can’t tell when JVM performance metrics are reasonable. In practice, 2 main targets are
– OO) GC overall overhead – to be reduced, expressed as a percentage of system resources spent on GC
– PP) long pauses are often unacceptable and should be eliminated. Short jitters are more common and more tolerable
Once OO and PP are in reasonable range, further JVM tuning gives diminishing return, according to real world experiences.

You can see OO) in verbose GC output (more easily on GCHisto). 1-3% is ok. This goal is important to batch systems. An exchange gateway guy told me PP is more important. 10% OO is tolerable in the low-pause CMS.

You can see PP) in verbose GC output. One minute would be too long. This goal is important to latency-sensitive systems like most trading engines, where even 100 millis is bad. A long pause is often due to full GC. A first step is to turn on the concurrent mark-and-sweep collector. Some trading shops create their own GC.

Real time java addresses PP. Not easy. UBS claims they incur a single GC “penalty” a day, so PP is eliminated. See other blog posts on “GC/day”

If your overall Overhead is too high, you need to see breakdown, then divide and conquer. A key metric is allocation rate.
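
For reference, typical HotSpot flags of that era to measure OO and PP and to try the low-pause collector; flag names vary by JVM vendor and version:

-Xms4g -Xmx4g  (fixed heap size, avoids resize-triggered pauses)
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log  (measure overhead and pauses)
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC  (the concurrent mark-and-sweep collector mentioned above)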

Related tools needed for jvm tuning
– jstat
– jconsole
– hpjmeter

IRS trading in a big IR Swap dealer

A3: each deal is tailor-made for a particular client. A deal (or a “trade”) often has more than 100 attributes and a lifecycle of its own.

I spoke with a big US investment bank (Citi?) and a big European bank. IRS is a dealer market — no exchanges; each dealer takes positions. (There’s the London Clearing House though.) Each dealer bank maintains bid/offer but doesn’t publish them. Why? See A3. Each new client is signed on by an elaborate on-boarding/account-opening process, through the bank’s dedicated sales team. I guess that’s the signing of the ISDA Master Agreement.

Once signed up, a client can send an RFQ with a deal size but without a price and without a Buy/Sell indicator. Bank typically responds with a fully disclosed bid/offer price pair. Client can either hit the bid or lift the offer.

Since IRS trade size is much larger than equities, the daily volume of trades is much smaller.

A dealer bank maintains IRS bid/offer pairs in multiple currencies. But here we are talking about single-currency swap.

There are dealers in cross-currency IRS, but that’s a different market.

specialist^generalist(manager), SG^NY #le2Ed

To compete in a knowledge-intensive industry, an organization needs specialists + good managers (what I call generalists).

In labor-intensive industries, specialists or knowledge experts are less important. Important roles (below the C*O level) in such an organization are effective managers. Singapore has a reputation for producing effective managers.

Singapore is trying to move off labor-intensive into knowledge-intensive sectors such as life science, research, high-tech design, high-tech manufacturing (such as chip making, where I once worked), education/training… The sector I know best is the Info tech (IT) sector. IT is often cited as knowledge-intensive, but the large workforce required in a typical IT project makes it more and more like a blue-collar labor-intensive industry. You don’t need top experts in a typical IT project. You do need good managers. They make important decisions, shape the team culture, create the communication patterns, select team members for each task, motivate and lead….

Now let’s zoom into a special sub-sector within IT. In investment banking, IT is relatively labor-intensive, with large headcounts. In contrast, quant and true front office trading roles are specialist roles — very few head counts but very high financial impact.

What I found recently in Singapore vs Wall St job market is — Wall St pays big bucks for both specialists and generalists, whereas Singapore primarily rewards generalists. Certainly there are quant and trading roles in Singapore, but I can’t qualify for those so I only focused on tech roles. On Wall St, there are a good number of well-paid developer positions — specialist positions, paid on par with entry-level managers (Some architects are paid like mid-level managers). Very, very few such roles in Singapore. In Singapore, well-paid IT roles are exclusively managers and high-level architects (largely hands-off). These generalists are no doubt important — they are important on Wall St too, and also in traditional industries. It’s easy to recognize their importance so they are well-paid.

I’m not a manager, and without substantial management track record. I’m more of a knowledge specialist (aspiring to an expert). That’s why it’s so tough for me to get a suitable job in Singapore.

options, swaps and futures: 3 drv’s !! equally important to FX

options — eq, FX. Many bonds have embedded call/put options[1]
swaps — FI
futures/forwards — all

Why did the market evolve this way? Highly educational but I'm not knowledgeable.

(Commodities? slightly less “relevant” to an IT guy since there are few IT jobs in commodity trading. Partly because there's not much data or automation.)

[1] In fact an IT team was dedicated to refunding analysis i.e. how to price a proposal to an issuer to recall a bond and reissue at a lower coupon. A sizable IT team was tasked with creating an instrument of puttable floating notes. Tender Option Bond is another puttable bond.

y trading IT pay so high in US and also ] sg

I asked a few friends —

Q: finance IT pays 2 to 3 times the salary compared to other sectors. Initially I thought the bar must be much higher, i.e. the average non-finance developer probably can’t easily handle the job. To my dismay, just about any java developer can handle the job. You can learn multi-threading, MOM, … on the job. So why do they keep paying such high salaries?

A: profit margin is much higher in finance. Having qualified developers means faster time to market and more profit. Qualified means track record.
A: trading systems prefer an elite team of experienced guys, not a large army of rookies. Average salary is therefore higher. The root cause is time-to-market. A tiny team of elites delivers faster.

A: At the senior developer level, trading system experience (track record) is very rare, partly because of the low head count in a lot of successful desks.

A: manager must spend the budget or get a reduced budget next year.
A: employer must pay high enough to keep the talent

A: Will (recruiter) said trading sys architect talent is scarce in S’pore. Globally, battle-tested architects are rare. Battle-tested on trading systems is … rarer. Trading systems have additional characteristics such as (but not limited to) extreme time-to-market, extreme quick-and-dirty, instrumentation, manual override, “explain what happened”….

A: Y. Lin feels many of those high-end jobs are very specialized, with very few qualified local candidates

A: MS commodity trading interviewer said most local candidates don’t have the design experience — they only assist London or NY

A: Raymond feels some sectors (like web, or php) are over-supplied due to the inflow of immigrants. System requirements there are fairly simple compared to …. say Oracle or SAP — mostly used by large, well-funded enterprises, where the system is expensive and mission-critical.

A: Raymond T also feels an IT service provider is bound by the price clients are willing to pay. Raymond gave examples from the construction industry. These clients have a budget and won’t pay extraordinary prices, so the IT guys serving that industry get an average salary. Raymond said the finance industry is unique in that IT serves internal business units, and the business unit decides how much (possibly huge) budget to allocate to IT. I would add that the headcount is much smaller than a team in another industry. So the finance IT budget is probably higher, and head count is probably much lower, therefore salary is so high.

A: Raymond T feels in other industries, IT salary is kept “reasonable” by specialized outsourcing suppliers who specialize in certain high-value sectors. In trading, I only know IBM and Sapient have such “practices” and they probably cost a bank more than hiring directly.

buy OTM call, and sell OTM put

When GLD (SPDR Gold Shares, an ETF on gold) was trading at $142.05, a veteran recommended buying June $150 calls and selling June $135 puts to “position” GLD to exceed 150 by June.

This is known as Risk-Reversal. “It lowers the price of a bullish position by selling a bearish PUT.”

(As in all recommendations, the options to buy or sell are all OTM.)

It's relatively easy to plot the { pnl vs underlier terminal price }. I mean “pnl” — Generally I avoid { portfolio MV vs underlier expiration price } curve since it looks silly — a fairytale no-loss-always-profitable strategy.

Comparable to — a long call
Comparable to — a long underlier

Motivation and advantage of this trade — cash commitment (paid out of pocket) is lower than buying a call, thanks to PUT premium income.

[11]Miami exchange HFT IV

First of my 10+ HFT style QQ interviews, where the QQ topics were completely alien to me.

Q: 3 no-choice scenarios to use initializer list. Efficiency is a 4th reason.
%A: const or reference field, or if one of the fields has no op=
%A: if a base class lacks a no-arg ctor
A: if a data member’s class lacks a no-arg ctor
A: what if a data member is a reference-counted class like a shared_ptr?
A: what if a data member’s class keeps track of instances constructed
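
A minimal illustration of the no-choice scenarios above (hypothetical classes):

struct Base {
    explicit Base(int) {}          // no no-arg ctor
};

struct Derived : Base {
    const int limit;               // const member: must be in the init list
    int&      alias;               // reference member: must be in the init list

    Derived(int lim, int& target)
        : Base(lim)                // base lacks a no-arg ctor: must be in the init list
        , limit(lim)
        , alias(target)
    {}
};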

Q: What exchanges send you market data feed?

Q: outline a ref-counted copy-on-write string class, showing all the function declarations
See separate post https://bintanvictor.wordpress.com/2017/05/29/ref-counted-copy-on-write-string-miax-exchange-iv/

Q: how do you see what system calls a running process is calling?
%%A: strace for linux
AA: also DTrace/truss/tusc on other unix variants.

Q: RTTI is too slow, but somehow (can’t remember the context) we need to tell what class (in a hierarchy) this object is. How do you achieve that? See clever use of enum(char?) in demanding c++ app

Can’t remember the exact question….
Q: tell me a clean solution to support cout<<myDog; where Dog/Cat/Ant/.. classes are derived from Animal
Hint from questioner — you can edit Dog class.
A: see c++friend function calling a virtual method

P156 [[essentialC++]] has some simple tip

Note overloaded operator is like a method and can be virtual — http://stackoverflow.com/questions/2969966/c-polymorphism-of-operator-overloading. However, our operator<< has to be a friend!
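
A sketch of the usual clean solution: one non-virtual free operator<< delegating to a virtual print(); it only needs to be a friend if print() is non-public.

#include <iostream>

struct Animal {
    virtual ~Animal() {}
    virtual void print(std::ostream& os) const = 0;   // the virtual hook
};

struct Dog : Animal {
    void print(std::ostream& os) const { os << "Dog"; }
};

struct Cat : Animal {
    void print(std::ostream& os) const { os << "Cat"; }
};

// one free function covers the whole hierarchy; virtual dispatch happens inside
std::ostream& operator<<(std::ostream& os, const Animal& a) {
    a.print(os);
    return os;
}

int main() {
    Dog d;
    Cat c;
    std::cout << d << " " << c << std::endl;   // prints "Dog Cat"
}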

C-str ^ std::string (+boost) – industry adoption

I believe the boost features aren’t needed for coding tests. C++ coding interviews don’t care whether you use std::string or C strings. Perhaps std::string is easier.

C++ threading, c++ trading, c++ pricing, c++ connectivity… all use C string functions like a workhorse. STL string class is still largely displaced by C string.

I think many teams in need of custom optimization create their homemade string classes.

Boost offers a bunch of standalone functions as string utilities + the tokenizer class. See http://www.boost.org/doc/libs/1_36_0/doc/html/string_algo.html, also covered in my boost book.

SQL code generation – ETL^ORM

Hibernate does SQL code generation. Often sub-optimal. Now I think ETL tools probably avoid SQL code generation. Reason is efficiency.

Suppose your legacy app has business logic in query or sproc. ETL tools often emulate the same business logic but outside the database, and often at a much faster throughput.

basic quote filtering during vol inversion

* large bid/ask spread may be filtered out, if there are more than 10 data points on a smile curve.

* low vega (10e-6) bid/ask quotes are discarded. Here’s why —

After an option premium quote (in dollars) is converted using BS, vega is easily computed by BS. Low vega often indicates low liquidity, less demand, less traded.

Implied volatility inversion is a numerical procedure with a defined tolerance. For a given tolerance (I’m guessing 10e-8 or 10e-14 or whatever), the low-vega quote would have a large inherent inaccuracy in the implied vol. As a result, the implied vol from this particular option quote is less reliable. A quant told me a small noise in present value can lead to big noise in implied vol. The low vega is a magnifying lens.
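
Rough first-order arithmetic behind the magnifying-lens effect: implied-vol error ≈ premium error / vega. With vega around 10e-6, a premium noise of just 10e-8 becomes roughly 0.01 of implied-vol noise, i.e. a full vol point if vol is quoted as a decimal.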

marathon – strengthen your(GTD+)lead in the pack#sg high-end

Y Lin pointed out those high-end jobs tend to require specialized skills. I now feel concurrency is an example, in addition to —

latency – c++ is the default for extreme latency
latency – sockets
latency – market data
latency – java?
dnlg – math — better entry barrier, closer to business.
dnlg – jargons in FX, Rates…
dnlg – arch
FIX
MOM


Real projects are completed in lowly tools like SQL, Java, C and scripts, but interviewers need to see much, much more than those.

3 types of ROI from owning a stock

“Some shareholders of certain stocks (say XYZ) are now writing (i.e. selling) OTM call options to get cash income”. A classic buy-write (covered call) strategy, as illustrated in Barron’s.

These investors aren’t concerned about limiting potential upside. They want the cash income in the form of premium. If they are greedy, they could write slightly OTM calls.

The XYZ stock is probably paying insufficient dividend, if you ask such an investor.

This strategy is popular when implied volatility rises, i.e. extrinsic value rises (OTM valuation is purely extrinsic).

Now I see that buying a stock can provide 3 types of returns
1) capital gain
2) dividend
3) premium income by selling call options