fitting cost vs Local-Vol cost

(I think this applies to any vol surface fitting — Eq, FX, IR…)

As a concept, fitting-cost is part of fitting, not of validation or extrapolation. Extrapolation has no fitting-cost, since there's no fitting involved.

LV-cost is different from fitting-cost. LV-cost measures smoothness in the local vol: there's an LV value at each point on the vol surface. Extrapolation calibration tries to minimize LV-cost.

– High fitting-cost means the fitted curve deviates too much from the targets, i.e. the input data.
– High LV-cost means some LV values are too high (or too low!) compared to neighboring LV values.
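As an illustration (my own sketch, not from any particular fitting library): fitting-cost can be a weighted sum of squared deviations from the quoted targets, while LV-cost can be a roughness penalty on the local-vol grid, so a smooth LV surface scores low and spiky LV values score high.

```python
import numpy as np

def fitting_cost(fitted_vols, target_vols, weights=None):
    """Sum of squared deviations of the fitted curve from the target quotes."""
    fitted = np.asarray(fitted_vols, dtype=float)
    target = np.asarray(target_vols, dtype=float)
    w = np.ones_like(fitted) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * (fitted - target) ** 2))

def lv_cost(lv_grid):
    """Roughness penalty: squared second differences of LV values
    along the strike and expiry axes of the grid."""
    lv = np.asarray(lv_grid, dtype=float)
    d2_strike = np.diff(lv, n=2, axis=0)   # curvature along strike
    d2_expiry = np.diff(lv, n=2, axis=1)   # curvature along expiry
    return float(np.sum(d2_strike ** 2) + np.sum(d2_expiry ** 2))
```

A perfectly flat LV grid has zero LV-cost; a single spiky LV value (too high or too low versus its neighbors) drives the cost up, which is exactly what the calibration tries to avoid.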

Bump check is one of the many post-fitting checks on the new surface.

trading systems – corp vs muni bond

^ yield curve — based on Libor vs tax-exempt curve
^ stock-based hedging — less in muni
^ preferred stock — is traded in the (longer-term) corp bond desk, but no such thing in muni
^ corp bond — dealers specialize by issuer, whereas muni dealers specialize by state. Each muni trader is licensed by a state and sells only to residents of that state.

^ Bond.Hub, tradeWeb, marketAxess vs JJKenny, TheMuniCenter, Hartfield

bare bones trading system components for a small trading shop

I talked to a micro hedge fund (not a prop trading shop, since they have outside investors) and realized how important each piece of system functionality is to a trader.

* pos mgmt? I thought this would be the core, but I guess in day trading there's not much position to keep. Probably provided by a professional software package; such packages exist for small, medium and large firms. Even the largest banks could use Murex or SunGard (a competitor to Apama)
* market data? essential to a trader, but high volume isn't always needed
* connectivity to ECN or banks, or other liquidity venues? essential
* pnl – always requires manual verification
* trade blotter? apparently quite basic. probably provided by a professional package.

Again, pricing proves to be the heart of the data flow. The most important data items are those related to pricing, and the most important everyday decision is the pricing decision, including (market) risk management.

how time-consuming is your pricing algo

I would say “at most a few seconds” in most cases for bonds with embedded options [1]. My bond trading systems typically reprice our offers and bids in response to market data and other events. There can be a lot of these events, and the offers go out to external multi-dealer brokers as competitive offers, so delay is a minor but real problem. Typically there's no noticeable delay in my systems, thanks to the fast repricer.

[1] OAS and effective duration could take longer, but those aren’t part of basic pre-trade pricing(??)

Post-trade risk valuation and low-volume pre-trade pricing can afford to be slow, but my systems typically carry 10k – 50k positions, so pricing each position must not be too slow.
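A back-of-envelope check shows how tight the per-position budget gets. The position count is from above; the total sweep budget is my own assumption:

```python
# Hypothetical numbers: 50k positions (from the text), and an assumed
# budget of 5 seconds for one full single-threaded reprice sweep.
positions = 50_000
budget_seconds = 5.0
per_position_us = budget_seconds / positions * 1e6  # microseconds per position
print(f"budget per position: {per_position_us:.0f} us")
```

Under these assumptions each position gets roughly 100 microseconds, which rules out anything simulation-based inside the hot loop.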

In another system I worked on, we ran a month-end market-value pricer, probably using published reference rates. No simulation required.

My friend’s friend briefly interfaced with an FX option system where a single position in a complex derivative could take 5 – 10 minutes to price — in pre-trade. Traders submit a “proposed deal” to the pricer and wait 5 – 10 minutes for the price. Traders then adjust this “auto price” and send it out. I would guess the pricer is simulation-based and handles path-dependent payoffs.

If the pricer takes the firm’s entire portfolio as input when evaluating the proposed deal, then it qualifies as a pre-trade risk engine.

basic quote filtering during vol inversion

* quotes with a large bid/ask spread may be filtered out, if there are more than 10 data points on the smile curve.

* low vega (10e-6) bid/ask quotes are discarded. Here’s why —

After an option premium quote (in dollars) is converted to an implied vol using BS, vega is easily computed from BS. Low vega often indicates low liquidity: less demand, lighter trading.

Implied-volatility inversion is a numerical procedure with a defined tolerance. For a given tolerance (I’m guessing 10e-8 or 10e-14 or whatever), a low-vega quote carries a large inherent inaccuracy in the implied vol, so the implied vol from that particular option quote is less reliable. A quant told me a small noise in present value can lead to a big noise in implied vol: since a small price change ≈ vega × vol change, the vol error is roughly the price error divided by vega. The low vega is a magnifying lens.
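The magnifying lens can be made concrete with a toy Black-Scholes inversion (the strikes, expiry and noise size below are made up for illustration): the same small price noise produces a far bigger implied-vol noise on a low-vega deep-OTM quote than on an ATM quote.

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * _nd.cdf(d1) - K * math.exp(-r * T) * _nd.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert BS by bisection on vol in [1e-4, 5]; call price is increasing in vol."""
    lo, hi = 1e-4, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Same tiny price noise applied to an ATM quote (big vega) and a
# deep-OTM quote (tiny vega); numbers are illustrative, not from the text.
S, T, r, sigma, noise = 100.0, 0.25, 0.0, 0.20, 1e-4
atm_shift = abs(implied_vol(bs_call(S, 100.0, T, r, sigma) + noise, S, 100.0, T, r) - sigma)
otm_shift = abs(implied_vol(bs_call(S, 140.0, T, r, sigma) + noise, S, 140.0, T, r) - sigma)
print(f"vol noise ATM: {atm_shift:.2e}   deep OTM: {otm_shift:.2e}")
```

The vol error scales like (price noise)/vega, so on the deep-OTM quote it comes out orders of magnitude larger — exactly why such quotes get discarded before fitting.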

2+1 live sources for troubleshooting "my trade" in PROD

DB, log, and cache — then link what you discover to source code. These are the only 3 places where we can find crucial telltale signs. It is therefore extremely important to exploit these sources to the max, esp. the DB.

(Background — if a trading system is misbehaving intermittently and a particular trade is messed up, how do you find out what’s going on?)

If I can add one other trick: I hope a JMX remote operation can turn on an object-graph dump of any object in the prod JVM.

I feel MOM (message-oriented middleware) is not that easy to use as a persistent data store; data gets purged quickly. In contrast, a DB often has an audit/history table.