ATM definitions in various asset classes

What's ATMF exactly?

http://www.sdgm.com/Support/Glossary.aspx?term=ATM%20option says

An ATM option is one where the strike equals the currently observed underlying level, which can be the current spot, the forward rate or the delta-neutral-straddle level. You can have an:

ATMS (at-the-money spot) option, where the strike is the same as the current spot rate.
ATMF (at-the-money forward) option, where the strike is the same as the current forward rate.
ATM delta neutral straddle, where the strike gives a delta neutral straddle [1].

The default ATM in the:
FX market is the delta neutral straddle.
IR market is the forward rate.
CM (commodity) market is the forward rate.
EQ market is the spot price.

[1] net delta = 0, so for small fluctuations in the underlier, the portfolio MV doesn't change.
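
For the record, here's why the delta-neutral-straddle strike sits slightly above the forward. This is my own sketch, assuming Black-Scholes and a non-premium-adjusted delta:

\Delta_{straddle} = \Delta_{call} + \Delta_{put} = N(d_1) + \big(N(d_1)-1\big) = 2N(d_1) - 1 = 0
\;\Rightarrow\; d_1 = \frac{\ln(F/K) + \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}} = 0
\;\Rightarrow\; K_{DNS} = F\, e^{\sigma^2 T/2}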

#1 motivation/justification for threading in GUI

In most multi-threading domains, the driving force is one of throughput, latency or parallelism. Yet the most prevalent motivation goes “processor clock speed is reaching a plateau, so the only way ahead is multi-core. Without multi-threading, the extra cores go to waste.” This is like scale-out vs scale-up. This is the resource “utilization” argument. Any underutilized resource is a source of guilt and shame to some managers.

In GUI space, however, the real motivation is not “utilization” per se but Responsiveness.

I have seen many GUI apps that can't keep up with the live market data rate — latency.

I have seen many GUI apps that leave a long ugly trail when you move/resize a window.

Most GUI threading problems and techniques are about Responsiveness, not performance or throughput. Even “latency” is not really the right word. In latency-sensitive systems, latency is often measured in clock cycles and microseconds, sometimes sub-micro, but Responsiveness doesn't need that. Applying such latency techniques here can lead to over-engineering.

sticky delta – a bit of clarification

http://www.columbia.edu/~mh2078/FX_Quanto.pdf says “as the exchange rate moves, the volatility of an option with a given strike is also assumed to move in such a way that the volatility skew, as a function of delta, does not move”. By “skew” I take it to mean curve shape. By “function” I take it to mean a curve with delta as the x-axis.

If you refresh your USD/JPY vol surface every hour, you will see the surface move. Let's keep things simple and just focus on the 1-year smile curve. This curve moves but its shape stays fairly constant. It stays constant if you plot (implied) vol against delta. I believe the shape doesn't stay constant if you plot vol against strike. We say FX vol is sticky-delta, not sticky-strike.
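
In symbols (my own notation, not from the paper), sticky delta says the vol quoted at a fixed delta is unchanged when spot moves:

\sigma_{new}(K_{new}) = \sigma_{old}(K_{old}) \quad \text{whenever} \quad \Delta(K_{new}, S_{new}) = \Delta(K_{old}, S_{old})

i.e. \sigma as a function of \Delta is invariant to spot moves, while \sigma as a function of K is not.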

flow-vol franchise lowers cost of a structured vol provider

A strong flow vol business Drastically reduces the hedging cost of a structured vol deal maker.

A structured vol trader on the sell side basically constructs structured deals for hedge funds, mutual funds, insurers and other institutional clients. Typically this is based on client request, but the sell side could presumably market a given instrument to a client on an unsolicited basis too. As a sell-side trader, she always, always needs to hedge her exposure. For a structured vol trader, the biggest exposure isn’t rates or credit but vol. The way to hedge away vol exposure is to enter into positions in options or var swaps — using Flow volatility instruments.

Since the vol exposure in a structured deal can be large in dollar terms, the hedge involves large flow vol trades. These trades tend to entail a sizable transaction cost, which is factored into the quote to the client. The lower this hedging cost, the more competitive our quote.

The best way to lower the hedging cost is a strong flow vol business. In the same way, a strong cash equity business reduces the hedging cost of an equity volatility trader and makes his bid/ask more competitive.

For a commercial bank, it’s relatively easy to become a structured vol solution-provider. It’s harder to build a flow vol business, the traditional strength of investment banks (and broker/dealers?).

In the retail space there are parallels. Target sells lots of shoes and pays lots of rent, so it wants to make its own shoes and to buy (rather than rent) commercial property, so as to drive down cost and offer more competitive prices to consumers.

c++ template instantiation – a few of the many rules

— Based on [[c++InANutshell]] —

Template declarations describe a class (or function) but do not create any actual classes (or functions). To do these things, you must instantiate a template.

Most often, you implicitly instantiate a template by using it. An actual template instance requires a template type argument for each dummy type. The arguments can be
– explicit, or
– implicit
** default arguments, for class templates, or
** deduced arguments, for function templates (not class templates)

CommandSource, briefly

Update: it looks like Command Source is usually a user input thingy?


Buttons, MenuItems, Hyperlinks and KeyGestures are command source objects. A command source instance is a (typically) stateful object. Users issue commands via a command source.

Technically, CommandSource objects are instances of ICommandSource (ICS). It’s revealing to examine the properties of ICS….

Besides the ICS properties, the most important thing to keep in mind is that command source objects subscribe to the CanExecuteChanged event, by adding a (2-pointer) delegate instance into the invocation list (behind the event pseudo-field) inside the command object.

What a mouthful! Let’s try again. An event is a (pseudo) field inside a command. The command object “fires event A” by invoking callbacks in A’s invocation list. Each command source instance bundles “this” and one of its non-static methods into a callback object and appends it to the invocation list. Event A is the “CanExecuteChanged” event.

When a command source gets called back on this event, it queries the command’s CanExecute(), gets a yes/no, and disables/enables itself accordingly. Note that in a simple commanding usage, this event never fires and CanExecute() always returns true.
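
Here's a minimal sketch of that subscription in C#. MyCommandSource is a made-up class (the real WPF sources such as ButtonBase do more work), but ICommand and its CanExecuteChanged event are the real interfaces:

using System;
using System.Windows.Input;

class MyCommandSource
{
    private ICommand _command;
    private bool _isEnabled = true;
    public bool IsEnabled { get { return _isEnabled; } }

    public void AttachCommand(ICommand command)
    {
        if (_command != null)
            _command.CanExecuteChanged -= OnCanExecuteChanged;  // unhook from the old command

        _command = command;
        if (_command != null)
            _command.CanExecuteChanged += OnCanExecuteChanged;  // "this" + method appended to the command's invocation list

        UpdateIsEnabled();
    }

    // called back when the command fires CanExecuteChanged
    private void OnCanExecuteChanged(object sender, EventArgs e)
    {
        UpdateIsEnabled();
    }

    // query CanExecute() and enable/disable this source accordingly
    private void UpdateIsEnabled()
    {
        _isEnabled = (_command == null) || _command.CanExecute(null);
    }

    public void Fire()
    {
        if (_command != null && _command.CanExecute(null))
            _command.Execute(null);
    }
}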

to-target ^ to-source updates in WPF binding

(Based on P502 of [[Pro WPF in C# 2008]])

The fulcrum of MVVM is data binding. Data binding links 2 objects — Source and Target objects.

Rule 1: Source-to-Target is always immediate, in either one-way or two-way. In contrast, T-to-S is not always immediate.

Rule 1a: regular one-way is immediate, since regular one-way means S-to-T. OneWayToSource is like the  T-to-S  in two-way. See below.

Rule 2: In two-way, the immediacy of update is asymmetric.

– Source changes immediately fire events and hit Target object (usually a visual).
– T-to-S update mode in a two-way set-up is controlled by the UpdateSourceTrigger enum, with values such as

** PropertyChanged (every key stroke)
** LostFocus (on focus loss)

Intuitively, LostFocus is more reasonable than PropertyChanged for a TextBox. You don’t want an update firing on every text box keystroke.
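
The same two-way set-up in code instead of XAML, as a sketch. TextBox and the binding API are real WPF; “person” and its Name property are made up:

using System.Windows.Controls;
using System.Windows.Data;

static class BindingDemo
{
    // person is a made-up source object, e.g. a view-model exposing a Name property
    public static void Wire(TextBox textBox, object person)
    {
        Binding binding = new Binding("Name");
        binding.Source = person;
        binding.Mode = BindingMode.TwoWay;                           // S-to-T and T-to-S
        binding.UpdateSourceTrigger = UpdateSourceTrigger.LostFocus; // T-to-S only on focus loss
        // UpdateSourceTrigger.PropertyChanged would mean every keystroke

        textBox.SetBinding(TextBox.TextProperty, binding);           // S-to-T updates remain immediate
    }
}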

Model dependent on view

See also http://bigblog.tanbin.com/2008/01/dependency-rigid-fragile-oo-design.html

The longer version of the guideline is “model class source code should not mention views.” A model class should not be “implemented” using any specific view. Instead, it should be view-agnostic.

Import — view classes are usually in another package. The model should not import them.

Reusable — Imagine MyModel uses MyView. MyModel is usable only with MyView and unusable otherwise.

Knowledge of MyView — The fact that MyModel uses MyView implies that MyModel methods need to “open up” MyView to use its members or to pass a MyView variable around. This means whoever uses MyModel needs to know something about MyView.

Change/impact — Changes to MyView can break MyModel. The breakage can show up at compile time (good), or have no compile-time impact but fail silently at run time. On the other hand, change/impact flowing from M to V is normal and expected.

Layering — (This is the best illustration of the layering principle in OO.) M is at a lower layer, below V. Lower layers should be free of higher-layer knowledge.

Separation of concerns — M author should not be concerned with (not even the interfaces of) upper layers. She should focus on providing a simple clean “service” to upper layers. This makes lower-layer classes easy to understand.

Coupling — This is also one of the good illustrations of tight coupling.

Two-way dependency — (This is the best illustration of two-way dependency.) If V is implemented using M, M should not “depend on” V. Basic principle. Two-way dependency is tight coupling.
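
A small sketch (made-up classes) of the right direction of dependency: the model exposes data plus change events and never mentions any view type, while the view subscribes.

using System;

// Model: no import of, or reference to, any view class
class MyModel
{
    public event EventHandler PriceChanged;     // anyone (a view, a logger, a test) may subscribe
    public decimal Price { get; private set; }

    public void UpdatePrice(decimal newPrice)
    {
        Price = newPrice;
        EventHandler handler = PriceChanged;
        if (handler != null) handler(this, EventArgs.Empty);  // the model doesn't know who is listening
    }
}

// View: depends on the model, never the other way round
class MyView
{
    private readonly MyModel _model;

    public MyView(MyModel model)
    {
        _model = model;
        _model.PriceChanged += delegate { Render(); };
    }

    private void Render()
    {
        Console.WriteLine("price = " + _model.Price);
    }
}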

most important params on a parametric vol surface #term struct

Each smile curve on a vol surface is typically described by a few parameters. The most important are
1) atmVol aka anchorVol, and
2) skew

All other curve-parameters are less important. All the curve-parameters are “calibrated” or “tuned” using market quotes.

Skew is a number and basically describes (for a given maturity) the asymmetry of the vol smile. There’s one skew number for each fitted maturity. These numbers are typically negative.

That’s the parametrization along the strike axis. How about along maturity axis? What parameters describe the term structure of vol?

I don’t know for sure, but often a parametric vol surface has a term-structure parametrization for each curve-parameter. For example, there’s a term structure for anchorVol, and another term structure for skew. That said, in some of the most sophisticated vol surface models there’s no such TS parametrization. I guess in practice users didn’t find it useful.
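
As a toy illustration only (my own made-up form, not any particular production model), such a surface could be written as a polynomial in log-moneyness, with a term structure attached to each curve-parameter:

\sigma(K, T) = \text{atmVol}(T) + \text{skew}(T)\,\ln\frac{K}{F_T} + \text{curvature}(T)\left(\ln\frac{K}{F_T}\right)^2

Here atmVol(T), skew(T) and curvature(T) are each a calibrated term structure, and a negative skew(T) tilts the smile so that low strikes carry higher vol.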

real time +ve trade ack – Required by some ECN

I have seen several ECNs in fixed income and FX that require both buyer and seller to send a trade ack even if nothing goes wrong. The ECN enforces a strict, real-time timeout on this so-called positive ack.

Related to heart-beat. If a party goes offline after a $9900 trade is executed on the ECN server, the ECN can’t assume the trade is completed because

– It’s possible that this Seller has done another trade with someone else before our trade could complete, or
– It’s possible that this Buyer (hedge fund?) has done another trade with someone else before our trade could complete

In either case, the non-responding side will not be able to settle the $9900 trade. So the ECN assumes no news is bad news.

Implication — the market maker has a privilege similar to last-look. In fact, both sides enjoy this flexibility, but market makers are usually treated preferentially for providing much-needed liquidity to the ECN.

The ECN needs this liquidity to attract buyers, just as a supermarket needs good suppliers to attract shoppers.

High-speed vs high-complexity financial products

I now see 2 broad categories of financial products–

A) Analytically complex instruments — CDS, IRS, structured products, IR-linked, index-linked(?), MBS, swaptions, embedded options… Usually derivatives.
** You often need sophisticated valuation engines for pre-trade quoting and/or risk.
** If such a product (say a bond with an embedded option) is liquid, then you may get a tight bid/ask and need no sophisticated valuation engine. However, I feel most of these instruments have large spreads.

B) High-speed, high-volume markets – stocks, major indices, FX spot, ED futures, Treasuries
** These are by far the most heavily traded and prominent products, often with low (profit) margins

Many popular or dominant instruments do not fall into either of these — vanilla bonds, vanilla options, FX options, FX futures, FX fwd, VIX

Any product falling into both categories? I don’t think so.

Products in (A) often give practitioners a sense of job security due to the specialist knowledge required. However, I usually stay clear of exotics. I tend to feel INsecure away from the mainstream.

Some system developers in (B) may have no real product-specific insight. They probably do not analyze historical data, summarize (to make sense of) large volumes of data, or do fundamental or statistical analysis.

async web service – windows app ^ browser app

http://ondotnet.com/pub/a/dotnet/2005/08/01/async_webservices.html?page=3 mentioned that …

If the application needs the web service output in the later stages of the thread, then the WaitHandle is the best approach. For example, if the web service queries a database and retrieves a value needed in the parent process and then displays the final result on GUI, then WaitHandle should be used. We need the parent thread to block at a certain stage.

Most web apps come under this scenario. In all other scenarios, we can use a callback. Windows apps mostly use the callback approach.
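
A runnable sketch of the WaitHandle approach, using the old Begin/End pattern. QuoteService is made up (a delegate's BeginInvoke stands in for a generated ASMX proxy); IAsyncResult.AsyncWaitHandle is the real API:

using System;
using System.Threading;

// Stand-in for a generated web service proxy; made up, but it exposes the same Begin/End shape
class QuoteService
{
    private readonly Func<string, decimal> _worker = GetQuoteSlowly;

    private static decimal GetQuoteSlowly(string symbol)
    {
        Thread.Sleep(2000);   // pretend the web service / DB call takes a while
        return 123.45m;
    }

    public IAsyncResult BeginGetQuote(string symbol, AsyncCallback cb, object state)
    {
        return _worker.BeginInvoke(symbol, cb, state);
    }

    public decimal EndGetQuote(IAsyncResult ar)
    {
        return _worker.EndInvoke(ar);
    }
}

class Demo
{
    static void Main()
    {
        QuoteService svc = new QuoteService();
        IAsyncResult ar = svc.BeginGetQuote("IBM", null, null);  // fire the call, don't block yet

        Console.WriteLine("doing other work on the parent thread...");

        ar.AsyncWaitHandle.WaitOne();                            // block only at the stage that needs the result
        Console.WriteLine("quote = " + svc.EndGetQuote(ar));     // harvest the web service output
    }
}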

In my mind this is the inherent difference between windows and web apps — a web app has a dominant thread returning data to the browser, while other threads are unwanted or optional.

Windows apps (Swing, WinForms, WPF etc) typically use a lot of threads besides the UI thread. Most tasks should stay off the UI thread, to keep the UI snappy.

y IR needed for spot ccy trading

Interest rates are fundamental to FX fwd pricing/trading. How can they possibly affect spot trading?

Well, if you are trading spot CAD/JPY as a cross on USD, but on a Friday evening (EST) the USD/JPY market is closed so you don’t receive USD/JPY quotes, you need the latest spot interest rates (still available? probably) to adjust your pricing.

I heard a comment that an FX spot trade is effectively a fwd contract to be settled not right away but in 2 business days.
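
For reference, the interest-rate link shows up in covered interest parity (simple compounding), here for CAD/JPY quoted as JPY per CAD:

F_{CAD/JPY} = S_{CAD/JPY} \times \frac{1 + r_{JPY}\,t}{1 + r_{CAD}\,t}

For spot settlement t is only about 2 business days, so the adjustment is tiny but non-zero, which is one reason a spot desk still wants fresh rates.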

network server perf improved, real illustration #Guardian

Here’s the infrastructure. Exactly one microagent is installed on each machine to be monitored. An “environment” is defined by a base name on a machine. It’s strictly 1:M between microagents and environments. Suppose we have 2 microagents (on 2 machines) and 3 environments under each, so 6 distinct environments in total. A command like “dir” or “ipconfig” can execute in one of the 6 environments, such as Environment #1. We can also run the same command on Environment #2, #3, #4, #5, or all 6 environments. Another command, “path”, can also hit any environment.

If we single out one microagent and one environment under it, and run one command against it, the command output is the status of one “service”. So a service is identified by a tuple of 3 things: a particular microagent, a particular environment, and a particular command. If we have 2 microagents, 3 environments under each, and 4 commands, then we could have up to 24 services. I use many different terms to refer to a service.

Sometimes I call it a query. You keep firing the same query to get updates from the microagent.
Sometimes I call it a chat room. All clients registered for that CR would get all updates.
Sometimes I call it a message generator.
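
As a sketch, the 3-part service identity could be modeled as a simple composite key (made-up class, not the actual code):

// made-up illustration of the 3-part service identity
class ServiceKey
{
    public readonly string MicroAgent;    // which monitored machine
    public readonly string Environment;   // which environment under that microagent
    public readonly string Command;       // e.g. "dir", "ipconfig", "path"

    public ServiceKey(string microAgent, string environment, string command)
    {
        MicroAgent = microAgent;
        Environment = environment;
        Command = command;
    }

    // 2 microagents x 3 environments x 4 commands = up to 24 distinct ServiceKeys ("services")
}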

For each service, GUI clients continuously monitor its status. A client connects to the server to subscribe to updates. The server maintains about 100 “services” like chat rooms, and each one generates messages once every few seconds. The server pushes them to the registered clients, using WCF.

In terms of topology, there is just one server instance in the network, at least 3 microagent-enabled app server machines, and many, many client machines.

—-
Trick: the server doesn’t know when one of the registered clients goes offline, so I often noticed it sending updates to 13 clients when only 1 or 0 clients were alive. I created a dictionary of connections keyed by IP address, so we won’t keep two duplicate registrations for the same client, since one of them must be dead.

Trick: many msg generators (“services” or “chat rooms”) share the standard update interval of 60 seconds. Each is driven by a private timer. The timers start at server start time, but I decided to use different initial delays. Therefore one generator fires on the 1st second of every minute, another fires on the 2nd second of every minute. This spreads the load on all parties.
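
A sketch of the staggered-timer idea with System.Threading.Timer (made-up names; the real code is WCF-hosted):

using System;
using System.Threading;

// each generator shares the 60-second period but gets a different initial delay,
// so they fire on different seconds of the minute instead of all at once
class GeneratorScheduler
{
    private readonly Timer[] _timers;

    public GeneratorScheduler(Action[] generators)
    {
        _timers = new Timer[generators.Length];
        for (int i = 0; i < generators.Length; i++)
        {
            Action gen = generators[i];        // capture the loop variable safely
            _timers[i] = new Timer(
                delegate { gen(); },           // the "message generator" for one service
                null,
                TimeSpan.FromSeconds(i),       // staggered initial delay: 0s, 1s, 2s, ...
                TimeSpan.FromSeconds(60));     // shared 60-second update interval
        }
    }
}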

Trick: when a microagent is offline, the central server would keep hitting it as per schedule (like every 5 sec), driven by the timer. This is expensive because the thread must block until timeout. I decided to reduce the timer frequency when a microagent is seen to be offline, and restore it after the microagent becomes reachable again.

Trick: some queries on the microagent take a long time (20 sec). Before the first query completes, the 2nd query in the series could hit the same microagent, overloading both sides. I decided to set a busy flag on each query, so the next time a thread from the thread pool “wants” to fire the query, it sees the flag and simply returns, without blocking.
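
A sketch of the per-query busy flag using Interlocked (made-up names). If a previous run of the same query is still in flight, the new thread returns immediately instead of piling up:

using System;
using System.Threading;

class MonitoredQuery
{
    private int _busy;   // 0 = idle, 1 = a query is in flight

    public void FireIfIdle(Action runQuery)
    {
        // atomically flip 0 -> 1; if it was already 1, another thread is running this query
        if (Interlocked.CompareExchange(ref _busy, 1, 0) == 1)
            return;                                  // skip, without blocking

        try
        {
            runQuery();                              // the slow (e.g. 20-second) call to the microagent
        }
        finally
        {
            Interlocked.Exchange(ref _busy, 0);      // clear the flag so the next scheduled run can proceed
        }
    }
}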