# 2 so-called Prices in a repo contract

The moment 2 parties agree on a repo, they finalize 2 numbers
– The Price is the amount paid for the security at the “opening” leg
– The Rate is the interest to be paid at the “closing” leg

For a common repo, the opening leg is a spot trade, so the Price is comparable to the market price of the security, though negotiable if the security is illiquid.

A repo can also be forward-starting, in which case the Price would be a forward price.

# tunnel^bubble routed events, first look

Routed events have a bit of theory behind them. Routing theory, perhaps?

One of the (top 3?) essential jobs of xaml is linking up the GUI event handlers, which are found in every GUI toolkit. WPF event handlers are often implemented as instance methods in the code-behind class. The xaml markup can specify a handler method by its unqualified method name, like a bare word.

A bit of history on the “event” concept. In Swing and other GUI toolkits, an event instance is a single occurrence of, say, a button click (or a mouse-over, or a special key combination). In dotnet, an event is conceptually a (usually non-static) field holding a functor. In a GUI, such an event often points to an instance method of the code-behind class. So I feel that in a dotnet GUI, an event in a xaml screen is like “a specific type of user action”, such as a click on ButtonA. WPF uses a supercharged kind of event, built on top of the CLR event — the routed event.

To understand routed events, we'd better bear in mind that a xaml file defines a screen, built as a tree of visuals in a containment hierarchy. Such a hierarchy is also essential in Swing.

When a user action happens, the WPF “Hollywood” typically raises 2 routed events — 1) a tunneling version, then 2) a bubbling version. Tunneling events are named with a Preview prefix (PreviewMouseDown etc.) and fire before the bubbling event. [[illustrated wpf]] shows a simple example. Upon a click, the tunneling event first hits the outer container, then the inner image; then a second event, the bubbling event, hits the inner image and then the outer container. Both the inner and outer visual objects define event handlers for the click, adding up to 4 handler methods in the code-behind. Therefore we see 4 log messages.

This is all standard behavior of “Hollywood”, and it provides the flexibility to opt out. You can disable any of the 4 event handlers, and an earlier tunneling handler can stop the propagation of the tunneling event (ditto for bubbling). But often you want to give the outermost container the first chance to react, before its children. “This gives the container the power to see an event before its children”, as someone pointed out on http://stackoverflow.com/questions/1107922/wpf-routed-events-tunneling-and-bubbling. Here's my sound bite –

In the preview (tunneling) phase, container gets the preview before the children

# 2 technical motivations for inventing WPF commanding

When the WPF creators came up with the command infrastructure, they weighed competing designs, motivations, trade-offs etc. http://msdn.microsoft.com/en-us/library/ms752308.aspx points out 2 major driving forces. Here's my wording, not the original wording –

1) 1:Multiple — a single command can be wired to multiple CommandSources (menu item, toolbar button, key gesture…)
2) enable/disable all those CommandSources at once

Various authors also touch on these 2 concepts among others. I find it extremely insightful to single out these 2 primary purposes.

Let me elaborate a bit ….

# [practically]proc to return 0-row, null or default value

We often write lookup procedures to return a single joined record. We'd better distinguish between the scenarios below. The same stored proc can
– return 0 rows
– return a special value to indicate 0 rows
– return a null value for a field
– return a default value
If possible, I generally avoid returning null values, because they require extra parsing in java. Besides, a null value can be the consequence of many scenarios — ambiguous.
If the one and only SELECT in the proc simply selects a bunch of variables, then 0-row can't happen — so how do you indicate 0-row? A very common scenario. I often use a @rowct variable, updated by the earlier table selects. In this context, we can also put special values into other fields to indicate 0-row.
If you want the caller to distinguish 0-row vs null vs default value, when all scenarios are possible –
– then choosing a default value can be tricky
– null can be tricky because, in the @rowct case, many fields of the last select might be null.

# What’s so special about jvm portability cf python/perl #YJL

You have a very strong technical mind and I find it hard to convince you. Let’s try this story…

At a party, one guy mentions (quietly) “I flew over here in my helicopter…” Five boys overhear him and start saying “I have a helicopter too”. Well, the truth is: they are renting a helicopter, or their uncle used to have one, or their girlfriend is rich enough to own one, or they have an old second-hand helicopter, or a working helicopter in a university research project, or a toy helicopter.

It's extremely hard to build a cross-platform bytecode interpreter that rivals native executable performance. The early JVM was about the same speed as perl. The current JVM easily exceeds perl and can sometimes surpass C.

In contrast, it’s much easier to build a cross-platform source code interpreter. Javascript, python, perl, php, BASIC, even C can claim that. But why do these languages pale against java in terms of portability? One of the key reasons is efficiency.

To convince yourself of the value of JVM portability, you ultimately need to see the limitations of dynamic scripting languages. I used them for years. Scripting languages are convenient and quick-turnaround, but why are they still a minor tool in most large systems? Why haven't they taken the software world by storm?

Why is C still relevant? Because it’s low-level. Low-level means (the possibility of) maximum efficiency.  Why is MSOffice written in C/C++ and not VBA? Efficiency is a key reason. Why are most web servers written in C and not perl, not even java? Efficiency is a key reason.

Back to jvm portability. When I compile 2000 classes into a jar, and download 200 other jars from vendors and free packages, I zip them up and get a complete set of executables. If I fully tested it on windows, then in many cases I don't need to test it on unix. Compile once, run anywhere. We rely on this fact every day. Look at spring jars, hibernate jars, JDBC driver jars, xml parser jars, jms jars. Each jar in question has a single download for all platforms. I have not seen many perl downloads that are one-size-fits-all.

I doubt Python, php or other scripting languages offer that either.

Sent: Sunday, June 26, 2011 8:14 PM
Subject: RE: What’s so special about jvm’s portability compared to python’s or perl’s?

If you treat JVM == the interpreter of php/python/perl/etc., then Java’s so called “binary code portability” is almost the same as those scripting languages’ “source code portability”.
[Bin ] I have to disagree. AMD engineered their instruction set to be identical to Intel’s. Any machine code produced for Intel runs on AMD too — hardware level portability.
That's one extreme level of portability. Here's another level — almost any language, once proven on one platform, can be ported to other platforms, but only at the SCP (source-code-portable) level. Portability at different levels has very different value. Source-level portability is cheap but less useful.

Java bytecode is supposed to be much faster, as a lot of type checking, method binding, access checking, address resolution… is already completed at compile time. Java bytecode looks like MOV, JMP, LOAD… and gives you some of the efficiency of machine code.

Another proof is: Java binary code (compiled using regular method) can be de-compiled into source code, which indicates that its “binary code” has almost 1-to-1 mapping to “source code”, which means its binary code is equal to source code.
[Bin ] I would probably disagree. The fastest java bytecode is JIT-compiled and probably not decompilable, I guess. For a sequence of instructions, the more “machine-like”, the faster it runs.

Well, you may want to argue JVM is better than the interpreter of those scripting languages, and I tend to agree. Java must have something that earned the heart of the enterprise application developers. Only that I haven’t found what it is yet 🙂

# what swing components must a trading app use

Absolutely must —
* top-level container? Yes, mostly JFrame, as the other 2 top-level containers (JDialog, JApplet) are less useful or popular
* content pane
* JComponent, the parent of most UI components
* layout manager is an absolute must, since every content pane needs a layout manager, unless you choose absolute positioning

There are non-UI objects that are absolute musts —
– invokeLater() etc. Without it, swing might appear to be functional, but I don't feel safe.
* UI events and listeners

If you are expecting more, I'm afraid those are the only ones I know. But here are a few “unavoidables”
– jtable? row/column data. Why is Excel by far the most important and sophisticated among Office apps?
– text panes?
– jpanel? indispensable for grouping components

# simplified wire data format in mkt-data feed

struct quote {
    char symbol[7]; // null terminator not needed
    int bidPrice;   // no float please
    int bidSize;
    int offerPrice;
    int offerSize;
};
This is a 23-byte fixed-width record, extremely network-friendly.
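One subtlety worth noting: with default alignment, a typical compiler pads this struct to 24 bytes (bidPrice would start at offset 8), so the in-memory layout would not match the 23-byte wire record. A sketch using the widely supported (though non-standard) #pragma pack lines the two up:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// pack(1) removes the padding byte after symbol[7], so the in-memory
// layout matches the 23-byte wire format exactly and the record can be
// read straight off the network buffer.
#pragma pack(push, 1)
struct quote {
    char symbol[7];     // null terminator not needed
    int32_t bidPrice;   // no float please
    int32_t bidSize;
    int32_t offerPrice;
    int32_t offerSize;
};
#pragma pack(pop)

static_assert(sizeof(quote) == 23, "wire format must be 23 bytes");
```

Without the pragma, `sizeof(quote)` would be 24 and a bulk `memcpy` of consecutive records would silently misalign every record after the first.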

Q: write a hash function q[ int hashCode(char * char7) ] for 300,000 symbols
%%A: (c[0]*31 + c[1])*31 …. but this is too slow for this options exchange

I was told there's a solution with no loop and no math. I guess it's some kind of bitwise operation on the 56 bits of the char[7].
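Here is my guess at such a solution — a sketch, not the exchange's actual answer: load the 7 bytes as one 56-bit integer, then fold the bits down with shifts and XORs. There is no loop (the fixed-size memcpy compiles down to a couple of plain loads on mainstream compilers) and the only operations are bitwise:

```cpp
#include <cstdint>
#include <cstring>

// Guessed loop-free hash: treat the 7 chars as one 56-bit integer
// (bits starts zeroed, so the 8th byte is 0), then fold the high bits
// into the low bits with shifts and XORs. No loop, no multiplication.
int hashCode(const char* char7) {
    uint64_t bits = 0;
    std::memcpy(&bits, char7, 7);  // one fixed-size 7-byte load, no loop
    bits ^= bits >> 29;            // fold high bits into low bits
    bits ^= bits >> 17;
    return static_cast<int>(bits & 0x7FFFFFFF);  // non-negative bucket index
}
```

With only ~300,000 symbols against 2^31 buckets, a light mix like this spreads keys adequately; a production table would still need collision handling.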

# handling pointer field during object assignment

class inner { int innerVal; };
class outer {
private:
    int val;
    inner * inptr;
    ….
};
How do you overload the assignment op?

outer.val gets a simple bitwise copy. My solution (Sol1) for inptr is

*inptr = *(rhs.inptr);

Q: what if the *inptr memory cells have already been returned to the freelist? i.e. the inptr pointee is on the heap and already deallocated.

The suggested solution (Sol2) is

delete inptr;
inptr = new inner();
*inptr = *(rhs.inptr);

Let's compare Sol1 and Sol2
– If we have controls in place to prevent accidental deletion of the inptr pointee, then Sol1 is faster.
– If our control is weaker and only guarantees that after deletion the 32-bit pointer object inptr always gets ==reseated== to NULL, then Sol1 is not as safe as Sol2.
– Without controls, when we want to access or delete inptr, we worry whether it points into the freelist or at a valid object. We dare not delete or read/write it, yet we don't want it to leak either. No good solution. I would risk the double-delete, because a memory leak is worse — a leak is harder to detect.
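Putting Sol2 into a complete assignment operator, as a sketch. The self-assignment guard is my addition, not part of the original discussion — without it, `delete inptr` would destroy rhs's pointee when rhs is *this:

```cpp
class inner {
public:
    int innerVal = 0;
};

class outer {
private:
    int val = 0;
    inner* inptr;
public:
    outer() : inptr(new inner) {}
    outer(const outer& rhs) : val(rhs.val), inptr(new inner(*rhs.inptr)) {}
    ~outer() { delete inptr; }

    // Sol2 spelled out. The self-assignment check is my addition.
    outer& operator=(const outer& rhs) {
        if (this == &rhs) return *this;
        val = rhs.val;          // plain bitwise copy is fine for val
        delete inptr;           // discard the old pointee (Sol2 step 1)
        inptr = new inner();    // fresh pointee, never aliased to rhs's
        *inptr = *(rhs.inptr);  // deep-copy the pointee's state
        return *this;
    }

    void setInner(int v) { inptr->innerVal = v; }
    int getInner() const { return inptr->innerVal; }
};
```

After assignment, the two outers own independent inner objects, so mutating one never affects the other.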

# can’t avoid threads in socket, GUI, MOM…

I wrote about this topic in other posts, but here's a new angle. Forget about multicore hardware or the low-latency arms race. A few application domains were born (and are inherently) multi-threaded. A single-threaded design would require a redesign, if feasible at all.

+ Domain – GUI (swing, wpf ..) — EDT is a super busy thread. Any long-running task would need a helper thread.
+ Domain – blocking socket — accept() would get the main thread blocked. read() and write() would also block a worker thread.
+ Domain – nonblocking socket — I believe you still need multiple threads. The entire design of sockets presumes multi-threading.
+ Domain – MOM — onMsg() would get one dedicated thread blocked.
+ Domain – DB server — 2 clients can keep their transactional sessions open, and would need 2 dedicated threads. Contrast —
– Domain – stateless web server — which need not maintain client state. In this case, a single thread can service many users off a request queue (at least in theory).
– Domain – multi-user Operating System — at least one session per user, but the OS usually fork-exec a new process, without multi-threading

# is my dtor virtual if I don’t declare it virtual@@ #my take

Q: is my dtor virtual if I don’t declare it virtual?

There are a lot of implicit rules, but a simple summary follows —

– If you are a top-level class (i.e. not inheriting), and don’t declare a dtor —> non-virtual
+ if you declare it “virtual” —–> virtual
+ if you inherit and declare it “virtual” —–> virtual
= if you inherit but don’t declare a dtor, then synthesized dtor is ——> as virtual as your parent’s
= if you inherit and declare without “virtual”, then your dtor is still —–> as virtual as your parent’s. http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.7

Rule — once ancestor’s dtor is declared virtual, descendants have no way of losing the virtuality.

It's illuminating to visualize the memory layout. Physically, a subclass instance always encloses a base-class instance. Since the base real estate has (on a 32-bit system) 32 bits for a vptr, every descendant instance has exactly 32 bits for a vptr too — no more, no less. Java simply puts that 32-bit footprint into Object.java, so every java object has 1 and only 1 vptr.

If you trace dtor virtuality down a family tree, you may see NV -> NV -> NV …-> V -> V -> V. Once you get a Virtual, you can only get more Virtuals.

As an analogy, a 3-generation warrior is made a king. All his descendants become royal.

As an analogy, a 3-generation farmer becomes a landlord in 1949. All children and grand children are considered by communists as landlords.

Reason? As described in http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.4, once a base class gets a (class-level) vtbl, its subclasses always get their own vtbls. I believe every descendant's dtor is always on the vtbl.
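A minimal demo of the “virtuality can't be lost” rule (class names are mine): the derived dtor omits the virtual keyword, yet deleting through a base pointer still runs it:

```cpp
#include <cassert>

static bool derivedDtorRan = false;

struct Base {
    virtual ~Base() {}  // "virtual" declared here, once, at the top
};

struct Derived : Base {
    // no "virtual" keyword, yet this dtor is as virtual as the parent's
    ~Derived() { derivedDtorRan = true; }
};

// delete through a Base pointer: runs ~Derived then ~Base, because the
// dtor's virtuality was inherited down the family tree
void destroyViaBase() {
    Base* p = new Derived;
    delete p;
}
```

If Base's dtor were non-virtual, `delete p` would be undefined behavior and ~Derived would typically never run.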

# y java is dominant in enterprise app

What's so good about OO? Why do the 3 most “relevant” enterprise app dev languages all happen to be OO — java, c# and c++?

Why is google choosing java, c++ and python?

(Though this is not really a typical “enterprise app”) Why is apple choosing to promote a compiled OO language — objective C?

Why is microsoft choosing to promote a compiled OO language (c#) more vigorously than VB.net?

But why is facebook (and yahoo?) choosing php?

Before c++ came along, most enterprise apps were developed in c, cobol, fortran…. Experience in the field shows that c++ and java require more learning but offer real benefits. I guess it all boils down to the 3 basic OO features: encapsulation, inheritance and polymorphism.

# 2 services@ECN: quote aggregation^execution

2 core functions of a liquidity venue
Q) quote collection — including dissemination. Consider a BBS.
Q2) RFQ — not needed on exchanges.
E) execution

(Most fundamental messages are quotes and orders.)

An exchange offers additional services (such as integrity, credit guarantee…) but most ECNs offer only these 2 core services.

Typically, each quote (similar to a limit order) received is assigned a quoteID. Each order must reference a quoteID, so the liquidity venue can forward the order to the quote originator. The originator always has the option to reject, even if the quote is advertised as “firm”.

Such a liquidity venue typically holds no inventory and takes no positions, hence zero market risk.

Most of these liquidity venues support FIX. They might support another messaging protocol too.

# list processing — across Functional languages

update — java 8 streams …
—-
I feel FP is an increasingly loose term. A lot of FP-like features get tagged with the FP label, which makes it harder to identify the core FP concepts and syntactical features. You may want to focus on the top 3 “topics” one at a time[1], so let's start with #1 — sequence processing using first-class functions — which qualifies as the most practical (and identifiable) hallmark of functional programming, widely used in real-world business logic implementations.

In perl, Sequence processing is a fundamental concept and designed into the language core from Day 1.

Python has the most natural Sequence processing. Look at
* list comprehensions,
* map(), filter(), reduce(), apply(), zip()

STL algorithms and iterators are all about Sequence processing. The data processing logic is often passed in as standard STL functors.

C# linq — all about Sequence processing

Java has fewer identifiable features for Sequence processing, but look at JVM dynamic languages. See
http://www.ibm.com/developerworks/java/library/j-ft1/

–Haskell (new to me)–
Sequence and Functions are the two major building blocks of any Haskell program. (http://en.wikibooks.org/wiki/Haskell/Lists_and_tuples)
fold — like python reduce(). See http://en.wikipedia.org/wiki/Fold_(higher-order_function)
filter — like python filter(), perl grep()
list comprehension — like python

[1] What are some of the other major topics? Here are some of my picks
* functors — functions as first-class objects. Remember that in traditional languages, functions (including methods) are never objects, and objects are never functions
* side effects, repeatability and immutability, like math functions — this is probably where the name “functional” originated
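To make topic #1 concrete in the STL dialect mentioned above, here is a sketch (function names are mine) of the map/filter/fold trio using std::transform, std::copy_if and std::accumulate:

```cpp
#include <algorithm>
#include <iterator>
#include <numeric>
#include <vector>

// map: apply a first-class function (a lambda) to every element
std::vector<int> timesTwo(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    std::transform(in.begin(), in.end(), out.begin(),
                   [](int x) { return x * 2; });
    return out;
}

// filter: the analog of python filter() / perl grep()
std::vector<int> evensOnly(const std::vector<int>& in) {
    std::vector<int> out;
    std::copy_if(in.begin(), in.end(), std::back_inserter(out),
                 [](int x) { return x % 2 == 0; });
    return out;
}

// fold / reduce: collapse the sequence to one value
int sum(const std::vector<int>& in) {
    return std::accumulate(in.begin(), in.end(), 0);
}
```

The data-processing logic travels as a value (the lambda or functor) into a generic algorithm — exactly the sequence-processing hallmark described above.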

# y memory footprint hurts latency #my take

Reason: the larger the heap to scan, the slower the GC (though paradoxically, a large heap helps you postpone GC). The less memory you use, the lower your GC penalty.

reason: favor an in-memory data store rather than disk. In-memory can mean remote or local (best). The smaller your footprint, the easier.
reason: favor a single VM — serialization-free, network-free. The smaller, the easier.
reason: serialization (for IPC, cache replication, network or disk) is slower for larger footprints.
reason: allocation is costly, even for a 32-bit int. Look at shared_ptr and circular arrays.
reason: minimize object creation (i.e. allocation) — standard optimization advice
reason: reduce GC frequency
reason: reduce GC workload, i.e. the amount of memory to scan. If you must incur a full GC, make it short.

reason? Compared to graphs, arrays enjoy 1) random access and 2) one-shot memory allocation, but if each element is bulky[2] then a long array requires too large a contiguous block of memory, which is hard to allocate. The smaller your objects, the easier.

[2] Note java array element is usually a 32 bit pointer.

Major technique: favor iterative over recursive algorithms to reduce stack memory footprint
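A tiny illustration of that last technique (my own example, not from the post): the recursive sum burns one stack frame per element, while the iterative version runs in constant stack space:

```cpp
#include <cstdint>

// Recursive version: each call adds a stack frame -- O(n) stack
// footprint, and a deep enough input overflows the stack.
int64_t sumToRecursive(int64_t n) {
    return n == 0 ? 0 : n + sumToRecursive(n - 1);
}

// Iterative version: a single frame -- O(1) stack footprint,
// safe for any n.
int64_t sumToIterative(int64_t n) {
    int64_t total = 0;
    for (int64_t i = 1; i <= n; ++i) total += i;
    return total;
}
```

Both compute the same value; only the stack footprint differs, which is the point of the advice above.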

# ##classical engineering fields #swing@@

At 60, a mechanical engineer can still be productive. These finance IT fields are similar. High value, tech-heavy and finance-light ==> portable(!); optimization/engineering/precision ==> detail-oriented.

Socket – C, Non-blocking IO
Mainframes
RV, MQ
Unix, syscalls, esp. kernel hacking – C
low latency – C
SQL tuning
DBA? new tools emerge
regex
memory management – C/C++ only. Other languages are “well-insulated”

— these aren’t
windowing toolkits like motif, swing, wpf? churn rate!
threading — new tools make old techniques obsolete
MSVC

# PCP synthetic positions – before expiration@@

In most of the books I have seen, all the synthetic option positions (buy-write vs short put, for example) are analyzed at expiration, using the hockey-stick PnL graphs. But what about valuations before expiration — is the synthetic still a substitute?

Let me give you a long answer. Bear in mind all the synthetics are based on PCP, applicable to European (E) options only. (For Europeans, real valuation happens only at expiration.)

By arbitrage analysis, and assuming 0 bid/ask spread, 0 commission and European style, I believe we can prove that n days (e.g. 5) before expiration, the BW valuation must match the naked short put.

5 days before expiration, even with a very low spot level, even with very high implied volatility, the option holder can't exercise and must wait till expiration. At expiration, the 2 portfolios have identical payoffs. Therefore the 2 are equivalent at any time before expiration.
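In symbols — standard European put-call parity with no dividends (my notation, not taken from the books cited):

```latex
C - P = S - K e^{-rT}
\quad\Longrightarrow\quad
\underbrace{S - C}_{\text{buy-write}}
= K e^{-rT} - P
```

So the buy-write equals a naked short put plus a riskless bond paying K at expiry, and since the parity holds at every time t before expiration (not just at expiry), the two positions track each other before expiration too.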

# real Protagonist in an asynchronous system

When discussing asynchronous systems, folks say “this system” will send a request, go on with other work, and handle the response when it comes in. When we say that, our finger is pointing at some code modules — functions, methods, classes — business logic. By “this system” we implicitly refer to that code, not data. That code has life, intelligence and behavior, and looks like the protagonist of the show.

However, a code module must run on a thread, and the thread that sent the request can't possibly “react to” or “handle” the event — it has to be handled on another thread. Therefore the code we are looking at technically can't react to events; the code handling the event is another code module, unrelated to ours. So what is “this system”?

A: a daemon…. Well not a bad answer, but I prefer …
A: the cache. Possibly in the form of a disk/memory database, or in the form of coherence/gemfire, or a lowly hashmap, or more generally a buffer (http://bigblog.tanbin.com/2011/05/asynchronous-always-requires-buffer-and.html). In every asynchronous design, I have always been able to uncover a bunch of stateful objects, living in memory long enough for the event/response. It's clear to me the cache is one eternal, universal, salient feature across all asynchronous systems.

The “cache” in this context can be POJO with absolute zero behavior, but often people put business logic around/in it —
* database triggers, stored procs
* coherence server-side modules
* gemfire cache listeners
* wrapper classes around POJO
* good old object-oriented techniques to attach logic to the cache
* more creative ways to attach logic to the cache

However, I feel it's critically important to see through the smoke and the fluff and recognize that even the barebones POJO qualifies as “this system” (as in the opening sentence). It qualifies better than anything else.

Swing/WPF can have one thread displaying data according to some logic, and another thread (EDT?) responding to events. Again, there are stateful objects at the core.

# What’s so special about jvm portability cf python/perl, briefly@@

When I compile a class in windows, the binary class file is directly usable in unix. (Of course I must avoid calling OS commands.) I don’t think python or perl reached this level.

I feel dynamic, scripting languages are easier to make portable because they offer source-code portability (SCP), not binary-code portability (BCP). In other words, BCP is tougher than SCP. I believe BCP is more powerful and valuable. BCP was the holy grail of compiler design, and Java conquered it.

Due to the low entry barrier, some level of SCP is present in many scripting languages, but few (if any) other compiled languages offer BCP, because it’s tough. JVM is far ahead of the pack.

Even C is source-code portable, yet C is known as a poorly portable language due to its lack of binary portability.

# y java CMS needs remarking #my take

CMS marking consists of 3 phases — Initial Mark -> Concurrent Mark -> Remark. I used to feel that “once an object is found dead/unreachable, it remains unreachable forever, will never come back to life, and therefore needs no double-check”. In reality, during IM and CM you can't yet conclude “OK, this object X is definitely unreachable”.

Remember the #1 job duty of marking is to ensure EVERY (I am not saying “only”) live object is identified.

Initial Mark (IM) puts together a list of “immediately reachable addresses”. Job not completed.

Concurrent marking (CM) is an imprecise process with regard to job duty #1. Imagine object A (say some collection) holds a pointer to our object at address X and is transferring that pointer to object B (another collection) — a “transfer” like a baton in a relay race: never drop the baton (once dropped, it's unreachable forever). If the CM algorithm visits B before A, it may not catch sight of the on-the-move pointer to X. In that case, it will not notice that address X is reachable. Job not completed!

Only in the RM phase is the “job” completed.

This is part of the deal with Concurrent marking in the low-pause CMS. With STW you won’t get this complication.

# sort a jtable by any column you click

jdk 1.6 added TableRowSorter.java, but here are some alternative ideas, often relying on setModel()

I feel this is similar to making a “bag” of java beans sortable by each field. Once sorted, you can iterate over the beans and populate a new table model. The ctor taking a 2D array, or the ctor taking a vector-of-vectors, will suffice.

One sort solution – pass the collection into a sorted map with a custom comparator. Does the map allow “equal” members? A sorted map judges key equality by compare(), so we need to make sure compare() never returns 0 for two distinct rows — break ties by another field.

One sort solution – Collections.sort() with a comparator. This one tolerates ties, i.e. compare() returning 0 for distinct elements.

If there are 9 columns in jtable, then the bean has 9 fields. We need 9 custom comparator classes.

I feel in business apps it's unlikely, but what if there's no bean class? Each rowData object would then be a vector or array. I feel you can still use Collections.sort() with a custom comparator. One basic solution creates 9 comparator instances for sorting by Column1, Column2… Column9. Comparator3 would use the 3rd cell of each rowData.

I feel in business apps it's unlikely, but what if the jtable columns are dynamic, so the column count is unknown at compile time? Then I feel we can use a vector of comparator objects, or perhaps instantiate each singleton comparator on demand. The comparator class would need an _immutable_ int field, this.indexOfSorterCol, to be used in the compare() method.
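The per-column comparator idea is language-neutral; here is a sketch in C++ (matching this blog's other snippets) rather than Java, with all names my own — the const member plays the role of the immutable indexOfSorterCol field:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

using Row = std::vector<std::string>;

// One comparator instance per sortable column. The column index is
// fixed at construction -- the analog of the immutable
// indexOfSorterCol field discussed above.
struct ColumnComparator {
    const std::size_t col;
    explicit ColumnComparator(std::size_t c) : col(c) {}
    bool operator()(const Row& a, const Row& b) const {
        return a[col] < b[col];
    }
};

// Re-sort the whole row collection by any column on demand,
// e.g. whenever the user clicks a column header.
void sortByColumn(std::vector<Row>& rows, std::size_t col) {
    std::stable_sort(rows.begin(), rows.end(), ColumnComparator(col));
}
```

stable_sort keeps the previous order among ties, which matches what users expect from repeated header clicks.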

# basic order book data structure

Requirement:
An order book quickly adds/removes order objects in a sorted data structure, sorted by price and (within the same price) by sequence number. Both price and seq are represented by integers, not floats. A sequence # is never reused; if for some technical reason the same seq# comes in again, the data structure should discard it.
—————————————————-
This calls for a sorted set, not a multiset. Maps are less memory-efficient.

You can actually specialize an STL set with a comparator functor.

The overloaded () operator should return a bool, where true means A is less than B, and false means A >= B. Note that equality must return false, as pointed out by [[effective stl]]. This is important for duplicate detection. Suppose we get 2 duplicate order objects with different timestamps. The comparator compares price, then sequence #. A is NOT less than B, and B is NOT less than A — thanks to the two false return values, the set concludes the two are equivalent and discards the duplicate.
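A minimal sketch of that comparator on an STL set (the field names are my own):

```cpp
#include <set>

struct Order {
    int price;
    long seq;  // never reused upstream
};

// Strict weak ordering: by price, then by sequence number. Returning
// false on equality is what lets the set detect duplicates: if neither
// a < b nor b < a, the set treats them as equivalent and rejects the
// second insert.
struct OrderLess {
    bool operator()(const Order& a, const Order& b) const {
        if (a.price != b.price) return a.price < b.price;
        return a.seq < b.seq;
    }
};

using OrderBook = std::set<Order, OrderLess>;
```

`insert()` reports the rejection through its returned pair, so a duplicate seq# is discarded without any extra lookup.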

# ## many ways to classify STL algorithms

I feel there are too many STL algorithms for my modest brain capacity. If we can fully digest a single interesting *comparison*, then we conquer a small piece of that universe. I like binary, black and white schemes.

^^ RW algorithms can operate on each element OR on the container.

^^ _copy version ^ in-place edit — many algorithms come in these 2 versions: replace_copy, remove_copy, partial_sort_copy…

^^ public, generic sorter ^ private sorter. Generic sort algorithms need powerful random-access iterators, but many containers aren't random-access, so they provide their own private sorters in the form of member functions.

A few algorithms expect the range to be sorted. [[eff STL]] has a dedicated chapter.

I think only a few major algorithms need random-access iterators — std::sort, random_shuffle…?
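A quick demonstration of the public-vs-private sorter split: std::sort compiles for vector (random-access iterators) but not for list, which ships its own member-function sort:

```cpp
#include <algorithm>
#include <list>
#include <vector>

// Public, generic sorter: std::sort requires random-access iterators,
// which vector provides.
std::vector<int> sortVector(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return v;
}

// list only offers bidirectional iterators, so the generic sorter is
// off-limits; the container provides a private sorter as a member.
std::list<int> sortList(std::list<int> l) {
    // std::sort(l.begin(), l.end());  // would NOT compile
    l.sort();                          // the "private sorter"
    return l;
}
```

The same split shows up elsewhere, e.g. map/set keep themselves sorted and never need an external sorter at all.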

# first lessons on option delta

I feel delta is the #1 most important greek for a new guy trying to understand option valuation sensitivities.

If you are long or short any security, you want to monitor your sensitivity to a few key variables. For fixed-income positions, you want to monitor sensitivity to IR and credit-rating changes, among others. For FX positions, you want to monitor sensitivity to the IRs of multiple currencies… For option positions, you monitor
* (vega) volatility changes. The underlier can exhibit very different volatility from Day 1 to Day 2.
* (delta) underlier price
* (theta) speed of time decay
* since delta is such an important thing to watch, you also want to monitor gamma, i.e. how fast your delta changes in response to underlier moves.

For a typical option's valuation, sensitivity to the underlier is the biggest sensitivity. To trade volatility, you first need to insulate yourself from directional moves — call it direction-neutral or direction-indifferent. I was told that in most cases 0 delta won't just “happen to us”, so we need to calculate or design our trades so the portfolio has 0 delta. Note you get zero delta only at one particular spot price of the underlier; when the underlier moves, your portfolio delta won't stay zero.

To learn basic option trading, a student first needs a good grasp of option payoff at expiration — actually non-trivial. Even a basic call option has a payoff graph like a hockey stick. We need to understand the payoff graphs of all 4 basic positions + the payoff graphs of basic strategies like the protective put. Also PCP.

A more realistic graph is portfolio PnL at expiration. “Portfolio PnL” includes the cash you paid/received (realized) in addition to unrealized PnL. In this case the hockey stick crosses the x-axis, which is completely realistic: your portfolio PnL can be positive or negative depending on the at-expiration price of the underlier. The premium cost is a very practical consideration, so it's naive to ignore it in the payoff diagram. Prefer portfolio-PnL.

The next graph, or curve[1], is {{ option valuation vs spot price }}, i.e. option valuation/premium relative to underlier spot price. Obviously the option premium is priced by each trader's pricing engine taking inputs of strike price, time to expiration, vol etc., but here we hold all other parameters constant and focus on the spot price's effect on the option MV. That is how we get the curve. Within this simplified context, we need to
– compare all the basic strategies
– PCP
– know the difference of ITM vs OTM
– know how a basic call’s curve depends on vol. We can plot the curve for different vol values

Lastly, Delta is treated like soft market data on the option MV. Delta is the slope of the curve in [1]; the slope of the delta curve (i.e. the second derivative of the curve) is Gamma. Charm is another derivative of delta — with respect to time.
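For reference, here is the standard Black-Scholes call delta (no dividends assumed) together with gamma — the slope and the curvature of that valuation-vs-spot curve:

```latex
\Delta_{\text{call}} = \frac{\partial V}{\partial S} = N(d_1),
\qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{1}{2}\sigma^2\right) T}{\sigma\sqrt{T}},
\qquad
\Gamma = \frac{\partial \Delta}{\partial S} = \frac{N'(d_1)}{S\,\sigma\sqrt{T}}
```

Put delta is N(d_1) − 1, so a vanilla delta always lies between −1 and +1.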

Practical usage of delta? Delta is used in delta hedging and delta-neutral trading.

For a typical option's price, sensitivity to the underlier is the biggest sensitivity, so delta overshadows vega, theta and rho. But Delta is definitely not the only important greek. In fact, vol is probably the most important factor in option pricing, so vega is rather important.

# buy a PUT to protect long position – simple examples

Suppose you bought IBM at $100 but it is now $90. You fear it may drop below $80. You can buy a PUT option struck at $80. With spot at $90 this put is OTM, so its premium is pure time value. (A $100-strike put, by contrast, would be ITM, with a premium of at least its intrinsic value, strike − spot = $10 — perhaps $12/share or $1200/contract.) Any premium you pay is likely to go down the drain.

Now suppose you have a lot of USD. You may need a lot of SGD in x months. USD has dropped to SGD1.23, but you fear it may drop to SGD1.20. You may buy a PUT like the above. The put allows you to convert USD1 to SGD1.20.

# enterprise reporting with^without cache #%%xp

YH, (A personal blog) We discussed enterprise reporting on a database with millions of new records added each day. Some reflections…

One of my tables had about 10G of data and more than 50 million rows. (100 million is kind of the minimum to qualify as a large table.) This is the base table for most of our important online reports. Every user hits this table (or its derivative summary tables) one way or another. We used more than 10 special summary tables and Business Objects, and the performance was good enough.

With the aid of a summary table, users can modify specific rows in the main table. You can easily join. You can update the main table using a complex SELECT. The most complex reporting logic can often be implemented in pure SQL (joins, case, grouping…) without java. None of this is available to gigaspace or hibernate users. These tools simply get in the way of my queries, esp. when I do something fancy.

In all the production support (RTB) teams I have seen on wall street, investigating and updating the DB is the most useful technique at firefighting time. If the reporting system is based on tables without a cache, prod support will feel more comfortable. Better control, better visibility. The fastest cars never use automatic gears.

Really need to limit disk I/O? Then throw enough memory into the DB.

# 3 types of analytics in a muni trading system

3) credit analysis of an issuer, be it a corporation or a municipality. Muni issuers are important subjects of credit analysis.

Many recent muni bonds have embedded options, mostly callables. Therefore,
2) option-adjusted spread. Binomial tree in yield space…

1) refinancing analysis. Investment bankers propose to the issuer to recall and reissue at a lower coupon.
# rho of a vanilla fx option

(See post on theoretical greeks.)

For equity options, it's known that rho is a relatively insignificant sensitivity compared to the big 4 greeks. I know FX rates are sensitive to the 2 interest rates involved. What's the rho of an FX option?

A: insignificant. From the FX-modified BS formula, we can tell rho is equally insignificant for equities vs forex. However, a 10 bps bump in either rd or rf will immediately(?) affect S, more in forex than in equities. I believe a rate hike makes a currency attractive, at least in the short term.

Answer: (from a young FX quant) even though the spot forex rate S0 is influenced (less than the fwd forex rate) by rd or rf, we still assume S0 is an independent variable when we bump rd or rf. There's no mathematical justification to adjust S0 when we bump the rates. Therefore, for short-dated FX options, rho is insignificant.

# what greek measures depth of moneyness@@

Q1: is there a greek that numerically measures moneyness, that is, how deep ITM/OTM an option position is?

There's a key difference between the 1) probability of an option expiring ITM vs a 2) "weighted average". The latter has many incarnations like
– expected PnL
– area under the histogram, weighted by the PnL of each "column"
– area under the lognormal curve, weighted by expiration payoff

Which of the 1) and 2) measurements do you want? I like 1.

A1: I feel delta is an approximate likelihood of an option expiring ITM. http://en.wikipedia.org/wiki/Moneyness confirms that delta is a good approximation of the probability of expiring ITM.

A1: a deep ITM option would have a premium close to its intrinsicVal. A deep OTM option would have a low premium. TimeVal is close to 0 for deep ITM/OTM. Therefore,

A1b: {{ option price – intrinsicVal }} is another reasonable measurement of moneyness, though not by the probability definition.

# y trading systems use so many stored procedures

A popular Wall Street interview question is the pros and cons of stored procs.
Here are a few:

#1 single point of access from java, c++ …
#2 modular encapsulation, separation of concerns
+ network efficiency
+ access control
+ reusable, DRY
+ easy version control
– readability
– exception handling
– hard to log the actual query

Perhaps the biggest motivation is to avoid recompiling a binary in an emergency fix. Many sites have extremely strict controls on binary build/deployment [1]. Every release always builds from version control. If you need a bug-fix release, then you must deal with all the changes checked into cvs but not yet approved! Redeploying a binary can also break any number of (or all) other applications. A proc is the answer to your prayer. In some places, every select/insert/update/delete statement is extracted into a proc. Changing the logic in them feels almost painless compared to a binary build/release. Hibernate is a big departure from the proc tradition.

1) Wall Street users want frequent changes, not bound by software release controls. Control-vs-time-to-market makes a healthy contention.
2) Wall Street code is often extremely (quick and) dirty, so fixing bugs without a software release is often a life saver.

About half of all business logic, both features (1) and bugs (2), is often expressed in SQL. Now you see how useful it is to have flexible ways to change the SQL logic. If you think hard and always forecast which business logic might need change, then you can strategically extract that SQL into stored procedures.

[1] Given the huge sums involved, Wall St wants control on software. They can't control code quality but can control build/release. Many, many levels of approvals. Numerous staging, integration, QA, preQA environments.

# when a heap variable gets updated behind your back

Beginning developers aren't really familiar with how a normal-looking variable can suddenly change value between Line 1 and Line 2, where we read it twice without writing. We are slightly more familiar with a shared hash table being updated concurrently.
Well, an int variable can be equally "volatile". Most commonly, it's another thread. If a variable is placed in shared memory, then another process can modify it. In C, it can even be hardware-driven — the so-called volatile object.

Heap? Not necessarily. In C, if you declare an auto variable (on the stack) in main(), and pass its address around, then many threads can update it.

How about JNI? What if a purely native thread and a jvm-only thread both write to a variable? Possible? (Note a JVM thread extending into a native method is fine and won't cause unexpected outcomes.)

# python: very few named free functions

In python, most operations are instance/static methods, and the *busiest* operations are operators. Free-standing functions aren't allowed in java/c# but are the mainstay of C, Perl and PHP. Python has them, but very few.

— perl-style free functions are a much smaller population in python, therefore important. See the 10-pager P135 [[py ref]]
— len(), map(), apply(), open(), min(), max()
— advanced free functions such as introspection: repr(), str() — related to __repr__() and __str__() — type(), id(), dir(), isinstance(), issubclass(), eval(), execfile(), getattr(), setattr(), delattr(), hasattr(), range(), xrange()
?yield? not a free function, but a keyword!

# programmatic control on a java thread

Update: The Process object in dotnet is another "handle" on an OS construct…

Suppose a virtual machine thread FastRunner is linked to a Thread object TO2.

A) As a java object, a Thread.java object TO2 is a chunk of HEAP memory, and can have fields.
B) As a real thread, ie a "vm thread", FastRunner occupies memory for its call stack and can receive cpu cycles.

In general, a call stack can have "access" to any heap object if you pass in an (indirect) reference. TO2 is a heap object and can be accessed on the call stack of FastRunner. In fact it's dead easy — Thread.currentThread() will return a TO2 reference. This is a mutual embrace.
1) TO2 has a pointer to the call stack of FastRunner and can affect FastRunner in a few limited ways.
2) Among the local variables of the call stack, there's now a pointer to TO2.

A field f1 in TO2 is not related to anything in the vm thread FastRunner. You might potentially want to use TO2 fields to control FastRunner, but TO2.f1 is just like a field in any heap object. Fundamentally, TO2 has a pointer to the FastRunner vm thread. The JVM provides no reference to FastRunner as it's not on the heap and not a java Object.

Suppose you have a (usually stateless) Runnable object R2. It is basically a function pointer. R2 has nothing but 4 bytes holding the address of the run() method, which lives in the Code Section, not stack, not heap. You can pass R2 to the Thread() ctor to create the TO2 object. TO2 has a different address than R2, though.

If TO2 is a television remote control, then it offers a small number of "buttons" to control FastRunner, like
• interrupt()
• join() – will block the current thread
• stop() – deprecated but still useful, as in SureStop.java
• start() – could be useful to control timing
• Note run() should never be called except by start()

– if you have a java reference to the TO2 object, you can't use it to make FastRunner sleep
– if you have a java reference to the TO2 object, you can't use it to make FastRunner grab a lock
– if you have a java reference to the TO2 object, you can't use it to make FastRunner wait in a monitor
– if you have a java reference to the TO2 object, you can't use it to notify FastRunner

I'd say the remote control is so limited that most of the important "buttons" are missing. As explained below, TO2 is a "poor" handle on the vm thread (FastRunner) known as a "thread", with its call stack, thread registers, cpu cycles etc. Better to avoid confusion — distinguish "Thread" and "thread".

P27 [[java threads]] explains:
* when a vm thread in the VM is running its run() method, both the VM thread and the Thread object are connected.
TO2 is a relatively good handle on the vm thread in the VM.
* after run() returns, the vm thread no longer gets cpu cycles, but the Thread object is still useful until it's garbage collected.
* after you instantiate a Thread, but before you call its start(), the call stack doesn't exist.
* bottom line — the Thread object lives a bit longer than the vm thread.

If you naively think that an instance method in TO2 is related to the corresponding call stack of FastRunner, then you are wrong on multiple counts.
* if FastRunner's run() method calls such a method m1(), then the method runs on FastRunner, but the m1() body may have nothing to do with the VM thread FastRunner.
* As explained earlier, a Thread has a longer lifespan than a thread, so any method call made outside FastRunner's lifespan is unrelated to FastRunner the vm thread.
* the main thread can call any of those methods. Such method calls may have nothing to do with the VM thread's call stack.

In terms of programmatic control on the vm thread, D) is most useful, followed by B and C —

C) Thread instance methods — least used. Quiz — can you name 2? TO2.interrupt() is one of the few important instance methods. Others include run(), join() and the associated isAlive(). These operations can be invoked on the main thread or another thread. They mostly affect the current thread, not the target thread.

B) Thread.java static methods let you probe/modify the thread to some extent, such as Thread.sleep(). These Thread methods are static, without a specific Thread instance as the target. They usually affect the current thread. The static and non-static mix in Thread.java is confusing. I feel these should be put into a ThreadControl.java class, converted to instance methods, and instantiated as a static member of System, just like System.out.
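The "remote control" buttons above can be seen in a short runnable sketch (the Runnable body and names are my own illustration, following the text's FastRunner/TO2 naming):

```java
// Sketch of the Thread-object-vs-vm-thread distinction: the heap object
// exists before start() and after run() returns; the vm thread does not.
public class RemoteControlDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable r2 = () -> {
            // Inside the vm thread, Thread.currentThread() returns the same TO2.
            System.out.println("running on " + Thread.currentThread().getName());
            try {
                Thread.sleep(10_000);               // static method: affects the CURRENT thread
            } catch (InterruptedException e) {
                System.out.println("interrupted");  // the remote "button" pressed from main
            }
        };
        Thread to2 = new Thread(r2, "FastRunner");
        System.out.println(to2.isAlive());  // false: no call stack before start()
        to2.start();
        to2.interrupt();                    // one of the few instance-method buttons
        to2.join();                         // blocks the CURRENT (main) thread
        System.out.println(to2.isAlive());  // false again: vm thread gone, TO2 object remains
    }
}
```

Note how little the TO2 handle offers: there is no button to make FastRunner sleep, lock, or wait — those are all controlled from inside the vm thread itself.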
D) Many of the modifications, controls and operations on the thread are effected through constructs outside the Thread instance, such as wait() and the all-important keyword synchronized, which is similar to Lock.lock()…

There's also the TO2.isInterrupted() instance(!) method, which is often called as Thread.currentThread().isInterrupted().

# into a java variable{name,value,address,type}

In any scripting or compiled language, OO or otherwise, a variable is a trio of { name, value, address }. In java, we also have to remember the type and the actual object behind a variable. "Address" and "object" are slightly different views of the same entity.

In java, it's instructive to see a variable in terms of a pointer and an onion. Multiple remote controls can point to the same chunk of memory.

A pointer is a reference and a remote control, with
* a unique name
* a type defined in a type hierarchy. A type can be an interface type.
* supported services of the type — we mean instance methods.

An object is a /pointee/referent/ and an onion in memory, with
* a unique address. There's no address for the base object nested in an onion. It is not possible to have a variable pointing to the base objects inside an onion.
* no name
* fields
* methods, possibly overridden or hidden.

Pointer casting (up or down) affects the type, the fields and the methods. When up-casting from subtype C to a basetype B,
– address and name remain
– instance/static fields may disappear, since they may be undefined in the parent class B
– instance methods remain, even if they are overridden in the subclass C. Polymorphic runtime binding via vptr.
– static methods? Yes, affected in a subtle way. See blog on [[ static binding ]]

# ceiling price of an option@@

Ceiling price of an option? I don't think there's such a thing, but there's an important floor price.

Q: any option (Am/Eu, call/put) has a floor/minimum value tied to the underlier spot price as the 2 prices move, but no max value tied to the underlier spot price. Why?
A: any option *premium* quoted is always (intrinsicVal + timeVal). IntrinsicVal is tied to the underlier spot price. Therefore the option's minimum value is the intrinsicVal. TimeVal is a function of volatility and can be very high. Note timeVal is each buyer's/seller's judgement, whereas intrinsicVal is converted, like temperature, from the underlier spot price.

Q: how does that floor depend on ITM/OTM?
A: ITM has intrinsicVal. That "floor" is nothing but the intrinsicVal.
A: OTM has $0 intrinsicVal, so "floor" = $0. The valuation and premium is pure timeVal.

# constant-vol assumption ^ varying daily realized vols

As stated in http://bigblog.tanbin.com/2012/06/var-swap-daily-mark-to-market.html, people expect daily realized vol (DRVol) to fluctuate during the next 3 months (or any given period). Monday 9%, Tue 17%, Wed 8.9%… However, we also know that the BS diffusion equation assumes a constant vol. Any contradiction?

Well, BS is not so naive as to assume every single day's ln(PriceRelative) == the same value throughout the life of an option. That would be a deterministic asset price model. Such a model would absolutely predict the exact IBM closing price tomorrow. It's clearly unrealistic — no one can predict the exact IBM closing price tomorrow.

No, BS is all about diffusion/randomness, so the exact price on any (future) day is random (even though the price now is realized and known). That means BS can't predict the exact value of ln(PriceRelative), which is the to-be-realized vol. Even in the constant-vol model, this ln() could be 10% tomorrow, and 20% the next day (annualized vols). Such varying vol on a daily basis is perfectly legitimate in a constant-vol model.

Q: So what is the constant-vol assumption by BS?
A: the sigma for a given stock, once calibrated using historical data, is assumed to permanently characterize the diffusion, or the random walk, or the geometric Brownian motion.
So even though BS can't predict the exact value of ln(PR) today vs yesterday's close, BS does predict the Distribution of that ln() value. It treats the ln() as a random variable following a precise Normal distribution. Consequently, today's closing price is another random variable, following a precise LOGnormal distribution.

To a layman, this is revolutionary thinking. A layman like me tries to predict today's closing price. Knowing how hard it is, we try to draw a narrow band around the closing. BS is smarter in treating that unknown closing just like a temperature, and predicting its probability distribution instead of its exact value.

# y no early exercise of American option (my2011writing)

Suppose you have a microsoft American-style call, expiring end of next month, K = $20, while the stock trades at $24.67. You believe microsoft will drop to $19. Let me convince you that you should hold to expiration and never exercise the call. (Assumptions: no dividend; short selling is nearly interest-free.)

Strategy 1 (naive): exercise the call now and immediately sell the 100 shares for $2467. Hurry before it drops! Realized profit of $467. You get the intrinsic value and give up the time value.

Strategy 2 (slightly better): sell the call option immediately. You get (intrinsicValue + timeValue). This realized profit definitely exceeds the intrinsicValue of $467. Why? Before expiration, a call is worth at least (underlyingSpotPrice – strikePrice), or (S – K), i.e. the intrinsic value. Remember an in-the-money call option always sells slightly [1] above (S – K). Therefore it's always always always better to sell the option rather than exercise it and then sell the shares. But should you sell it *now* because you feel Microsoft will drop to $19 by the expiration date?

Strategy 4 (recommended): short sell 100 shares at $24.67, and keep the call to limit the potential shortfall.
If indeed $19 on expiration day, the short earns $567 and the call expires worthless. At expiration:
If S@T = $20.01, profit = $466 (short) + $1 (call) = $467, realized at T, i.e. expiration.
If S@T = $24.67, same as today, then your short-sell breaks even, but the American call earns $467.
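The scenario arithmetic above can be checked with a few lines (numbers from the example: short 100 shares at $24.67, long a 20-strike call; expiration payoffs only — any remaining time value of the call would only add to the totals):

```java
// Verify Strategy 4: short 100 shares @ 24.67 plus a long 20-strike call
// is worth at least $467 at expiration, for ANY terminal price sT.
public class EarlyExerciseCheck {
    static double combined(double sT) {
        double shortPnl = (24.67 - sT) * 100;              // short-sale P&L
        double callPayoff = Math.max(sT - 20.0, 0) * 100;  // call intrinsic value at expiry
        return shortPnl + callPayoff;
    }

    public static void main(String[] args) {
        System.out.println(combined(19.00));  // $567: short wins big, call worthless
        System.out.println(combined(20.01));  // $467: 466 from the short + 1 from the call
        System.out.println(combined(24.67));  // $467: short flat, call deep ITM
        System.out.println(combined(80.00));  // still $467: the call caps the short's loss
    }
}
```

Above $20 the combined position is locked at $467 whatever sT is; below $20 it only grows — which is the whole argument for never exercising early.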

Now let's try another perspective — forget about expiration scenarios. Look at price movements after your short sell. As of today, when spot = $24.67, unrealized profit = ($0 from the short + value of the call), and intrinsicVal + timeVal is above the intrinsicVal of (S – K) x 100 = $467.
– Tomorrow, if MSFT edges above $20 (ie K), the short position has an unrealized profit just below $467, but together with the call, the total unrealized profit will exceed $467.
The short's unrealized profit might be $466 (assuming S = $20.01); intrinsicVal = $1;
timeVal = some positive value
– Tomorrow, if MSFT is below $20, the short alone would generate an unrealized profit exceeding $467.

==> Therefore, the short position + the call is better than $467 cash. Therefore, you should never exercise that American option (under the above ASSUMPTIONS). Therefore the American call option and the European version have identical values.

Strategy 3 (Reckless): sell the option and short the stock. You lose the protection from the option. If MSFT rises after your short sell, you would need the option to cover your losses.

(Under those opening ASSUMPTIONS) Don't exercise, and don't sell due to your view on the stock. I guess you should sell the option if you feel vol is going to drop.

[1] if volatility is assumed 0, then the gap (ie time value) would be 0

Update — I always feel that, compared to a European option, an American option has an element of timing, surprise and flexibility — when the market condition is right, the owner should cash in. Now I feel there is indeed a time to sell — when implied volatility is higher than reasonable, you should sell the option, but not exercise it. However, this applies to European options too.

# u owe a bank $10k.. your problem; u owe them $1m… bank's problem

Case 1a) If you owe a bank $10k, it's your problem; but if you owe them $1m… it's the bank's problem.

Case 1b) If a small financial institution fails, it’s their own problem; but if a big one fails, it’s government’s problem. Look at Lehman.

Case 2) Suppose you interview 10 IT contractors (perm hires are even worse) and hire one. If she proves clearly unfit within a month, it's the consultant's problem (she loses the job); but if it's after 3 months, it's the employer's problem. To some extent, the employer invests in each new hire, in terms of training.

After 3 years (GS experience), when the employer has milked the cow enough, they don't mind letting her go, except for the package. They may create an opportunity to "lose" her.

Case 3) If in the first rollout Informatica (or any software vendor) can't meet a customer's requirement, it's the vendor's headache; but if it shows imperfections a year after going live, that's the customer's headache. The customer has invested too much in this software product.

# code change right before sys integration test

Reality – each developer has many major projects and countless “noise” tasks simultaneously. Even those “successfully” completed projects have long tails — Never “completely completed” and out of your mind.

Reality – context switch between projects has real cost both on the brain and on your computer screen — If you keep too many files, emails, browsers open, you lose focus and productivity.

Q: for something like SIG, should you wrap up all code changes 3 days before SIT (sys int test) or the night before SIT?
A: the night before — during the SIT your mind will be fresh and fully alert to those changes. Even if you finish 3 days before, on the night before you are likely to find more changes needed anyway.

# My simplified form of BS-E

At any time before expiry, the fair premium of a European call option on a non-dividend-paying stock is

$C(S,t)=N(d_1)~S-N(d_2)~K e^{-rt}\,$

where
S = spot price at valuation date such as today
t = time to expiry, at valuation date, measured in years. If our option is 2 years 6 months from maturity, then t = 2.5.
K and r are constants
sigma is also assumed constant — see below
N() is the cumulative normal distribution function. I believe N(0) = 50%, N(g) + N(-g) = 1.0 and N() is monotonic increasing

Now, in the BS formula, sigma is treated as a constant — the well-known and unrealistic constant-volatility assumption. If I scale sigma up for t = 2.5 years (our example) and denote the result "sigma_t", i.e. σ_t = σ√t, then
$d_1=\frac{\ln\frac{S}{K}+rt+ \frac{\sigma_t^{2}}{2} } {\sigma_t}$
$d_2=\frac{\ln(\frac{S}{K})+rt-\frac{\sigma_t^{2}}{2}}{\sigma_t} = d_{1}-\sigma_t$
$C(S,t)=N(d_1)~S-N(d_2)~K e^{-rt}\,$ ……………(same as before)
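The d1/d2/C formulas above can be sketched numerically. Java's standard library has no normal CDF, so N() below uses the Zelen–Severo polynomial approximation (a standard approximation, accurate to about 1e-7); everything else follows the formulas with sigma_t = sigma·sqrt(t):

```java
// Black–Scholes European call, written in terms of sigma_t = sigma * sqrt(t)
// exactly as in the d1/d2 formulas above.
public class BsCall {
    // Standard normal CDF via the Zelen–Severo approximation (|error| < 7.5e-8).
    static double N(double x) {
        if (x < 0) return 1.0 - N(-x);
        double t = 1.0 / (1.0 + 0.2316419 * x);
        double poly = t * (0.319381530 + t * (-0.356563782
                    + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        double pdf = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
        return 1.0 - pdf * poly;
    }

    static double call(double S, double K, double r, double t, double sigma) {
        double sigmaT = sigma * Math.sqrt(t);
        double d1 = (Math.log(S / K) + r * t + sigmaT * sigmaT / 2) / sigmaT;
        double d2 = d1 - sigmaT;
        return N(d1) * S - N(d2) * K * Math.exp(-r * t);
    }

    public static void main(String[] args) {
        // ATM with r=0: premium is pure time value, roughly 0.4 * sigma_t * S
        System.out.println(call(100, 100, 0.0, 1.0, 0.2));  // ~7.97
        // deep ITM: premium collapses to S - K*exp(-rt), the intrinsic-value floor
        System.out.println(call(100, 50, 0.05, 1.0, 0.2));  // ~52.44
    }
}
```

The two sample prices illustrate the Q&A below: the ATM price is pure time value, and the deep-ITM price is almost exactly S – K·exp(–rt).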

Let’s try to understand parts of this monster

Q2: why the two terms (…) (…)?
A: that comes from the simple fact that at expiration (not now), the terminal valuation (I didn't say "PnL") has the form "stock price at expiry – strike".

Q2b: what’s the implication on delta?
%%A: Well, the K e^{-rt} factor after the "–" is independent of spot price, though N(d2) is not. It turns out the S-dependence through N(d1) and N(d2) cancels, leaving delta = N(d1) — so the delta calc can safely treat the second term as a constant.

Q: for a deep ITM call, how is this simplified?
A: ln(S/K)/sigma_t is large and positive, dominating both d1 and d2, so N(d1) ~= N(d2) ~= 1.0. So
C ~= S – K*exp(-rt). In other words, the European call valuation is mostly its intrinsic value.

Q: for a deep OTM call with small rt, i.e. small drift?
A: d1 and d2 are both extremely negative, dominated by ln(S/K), so N(d1) and N(d2) are both close to 0, and
C ~= 0. A deep OTM call is nearly worthless — its small value is pure time value.

Q: what’s the exp(-rt)?
A: simple — discounting the strike price to present value. For short-dated listed options at low rates this factor is close to 1.0 and has a minor effect. However, if you ignore it, a profitable deal can appear unprofitable (or vice versa).

Estimating delta, vega, gamma and theta all require differentiating through the normal distribution.

(use http://en.wikipedia.org/wiki/Wikipedia:Tutorial/Citing_sources/sandbox to edit equations).

# Leibniz notation of integration

Look at the standard integral notation
$\int_a^b$ (some expression f of the running variable x) dx ……….(1)

This usually, but NOT always, means (f of x) multiplying dx, then summed from a to b — even though it always looks exactly like a product, and you feel "no doubt it means exactly that".

In better computer graphics, it's written as $\int_a^b \! f(x)\,dx \,$ where f(x) is our complicated expression of x.

Instead, this Leibniz notation is based on the algebraic summation of n items, each a product of (f * Δ),

$\sum_{i=1}^{n} f(t_i) \Delta_i ;$…………..(2)

What the integral in (1) expresses is that we integrate over infinitesimal divisions. These divisions divide up and cover the entire continuous range [a,b]. Therefore (1) denotes the sum in (2) as Δ goes infinitesimal. Indeed, most of the time we can treat the (…)dx as a product, as in (2), but I just feel there are some special contexts /with invisible broken glass on the floor/. I can't identify exactly those contexts, but here are some hints.

People often put funny things after the /innocent-looking/ “d”. like
(…expression of x) d(-x/2)
(…expression of x) dw(x)
(…expression of x)dy dx/dy

dw(x) means dw, where w is treated as a variable, even though we know that w is a function of x.

I don’t know if teachers ever do these things, but students do. They muddy the water.

$\int_c^d \left( \int_a^b f(x,y)\,dx \right) dy$

Whenever it's confusing, I feel we had better refer to the original definition of Leibniz's notation, based on (2).
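For two of the "funny d" examples above, here is my hedged reading via the substitution rule — one conventional interpretation, not the only one:

```latex
\int f(x)\, d\!\left(-\tfrac{x}{2}\right) = -\frac{1}{2}\int f(x)\, dx,
\qquad
\int f(x)\, dw(x) = \int f(x)\, w'(x)\, dx \quad \text{(for differentiable } w\text{)}
```

In both cases the "d(…)" is unwound back to the plain product form of (2), which is what makes the notation safe to manipulate.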

# exponential functions in calculus

Exponential function (and its twin sister the logarithmic) is one of the top 3 most important functions in finance. (Can’t name the others though.)

The derivative of the standard exponential function exp(x) is exp(x) itself. Consequently, the 2nd derivative (and the nth derivative) is again itself. Furthermore, when you find that a mysterious function's derivative is the function itself, you may wonder whether it is related to exp(). Answer: it must be a constant multiple of exp() — the solutions of f′ = f are exactly f(x) = C·exp(x). No other family has this property.

All exponential functions (i.e. functions where x is the exponent, i.e. "up there") are related to the standard exponential function exp(x) = e^x. This makes the standard exponential function very useful, esp. in calculus.

5^x is also commonly written as 5**x. More examples of exponential functions:
3^(2x)
(5^x)^y
5^(x^3.1)
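Any exponential function a^x reduces to the standard exp() via a^x = e^(x·ln a). A tiny runnable sketch (the numbers are arbitrary) of that identity and of exp's self-derivative property:

```java
// Sketch: every exponential function a^x reduces to the standard exp(),
// since a^x = exp(x * ln(a)); and exp is its own derivative.
public class ExpDemo {
    public static void main(String[] args) {
        double a = 5.0, x = 2.3;

        // 5^x computed directly vs via the standard exponential
        double direct = Math.pow(a, x);
        double viaExp = Math.exp(x * Math.log(a));
        System.out.println(direct + " == " + viaExp);

        // derivative of exp at x is exp(x) itself (finite-difference check)
        double h = 1e-7;
        double slope = (Math.exp(x + h) - Math.exp(x)) / h;
        System.out.println(slope + " ~= " + Math.exp(x));
    }
}
```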

# ##[11] technologies relevant in 2050 #inspired by architect story

Watching an in-depth documentary about an architect (I.M. Pei) in his 80's, I started thinking (again) about which app-dev technologies/experiences would still be relevant when I turn 80. I think more "winning" software tools (software products are tools) will be adopted, each dominating a specific domain and displacing the old guard. But what about mainstream technologies? Which are truly resilient in the face of destructive sea changes?

Note many of these technologies could be sidelined and dethroned but still relevant!

• #1) C
• #2) unix/linux
• * [L] multi-threading basic constructs? All the basic low-level constructs are decades old, but not the high-level constructs
• * [L] socket, tcp / ip
• * unix, sql, network tuning
• * [L] classic data structures and (only) those algo on them. STL was the pioneer.
• * c++? less resilient than C, but since C++ compiler is usable by c developers, c++ features would be usable if not always relevant.
• SQL and stored procedure coding?
• [L] memory management — pointers, allocation/deallocations, definitely relevant to ultra high volume, low latency apps(?) More generally, In “demanding contexts” (Scott Meyers) I feel mem mgmt will remain extremely relevant, perhaps beneath the surface
• * GUI? High churn. Requirement will stay but the constructs and the programming language/technique may change completely. GUI threading design seems to be consistent throughout.
• * MOM architecture? probably yes but implementation may change so completely that your knowledge is utterly irrelevant. IBM MQ and RV are long-standing, largely due to the relevance of C.

# eor
* RPC, web service, corba, RMI… Resilient model, but not implementations
* [L] system calls? Actively used by few coders but relevant underneath the surface
* batch jobs? Requirement yes; implementation no.

[L=Low Level, closer to the metal, rather than application level]

# Prob(X=x) is undefined for continuous RV

Mind the notation — the big X __denotes__ a random variable such as the “angle” between 2 hands on a clock. The small x denotes a particular value of X, such as 90 degrees.

Discrete case — we say P(X=head) = P(X=tail) = 0.5, or P(X=1 dot) = P(X=6 dots) = 1/6 on a die. Always, all the histogram bars add up to 1.0.

When X is a continuous random var, then P(X=x) = 0 for any x, as seen in standard literature. This sounds bizarre, counter-intuitive and nonsensical — the 2 hands did form a 90-degree angle, so 90 degrees clearly isn't impossible.

A: Well, P(X=…) is undefined for a continuous RV. Range probability is well defined. If a range prob P(X>360) = 0, it does mean “impossible”. When you see P(X=…) = 0 it means nothing — no well-defined meaning.

That's a good enough answer for most of us. For the adventurous: if you really want a definition of P(X=x), then I'd say P(X=90) is defined as the limit of P(90 < X < 90+a) as a approaches 0. Based on this definition, P(X=anything) = 0.
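A quick sanity check of the range-probability view, assuming (my assumption) the clock angle X is uniform on [0, 360): a Monte Carlo sketch where P(90 < X < 91) comes out near the theoretical 1/360, while exact equality X == 90.0 essentially never occurs:

```java
import java.util.Random;

// Monte Carlo sketch: for a continuous uniform angle on [0, 360),
// range probabilities are well defined but P(X == 90.0) is effectively 0.
public class ContinuousRV {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1_000_000, inRange = 0, exactlyNinety = 0;
        for (int i = 0; i < n; i++) {
            double x = rng.nextDouble() * 360.0;   // one draw of X
            if (x > 90.0 && x < 91.0) inRange++;   // a 1-degree range
            if (x == 90.0) exactlyNinety++;        // a single point
        }
        double p = (double) inRange / n;
        System.out.printf("P(90<X<91) ~ %.5f (theory %.5f)%n", p, 1.0 / 360);
        System.out.println("exact hits on 90.0: " + exactlyNinety);  // almost surely 0
    }
}
```

Shrinking the 1-degree window toward zero width shrinks the estimated probability toward 0, which is exactly the limit definition above.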

# OTM put — on left or right of a graph against underlier

It's well known that the implied volatility smile curve is skewed for stock options. The implied vol is Higher on the far Left than on the far Right of the smile curve. The Left derives from OTM Put quotes in the market, whereas the Right derives from OTM Calls.

In other words, at Low strikes implied sigmas are much higher, and they derive from OTM Puts.

On a PnL graph, however, you may see a different pattern. For example, look at a simple put PnL graph. OTM is on the Right — at sky-high underlier prices, our put is worthless and deep OTM.

So OTM put is on left or right?

Well, what's the horizontal axis?
– On the PnL graph, it's underlier prices; the strike is a fixed part of the put contract we are analyzing.
– On the smile curve, it's strikes; the current spot level sits (roughly) at the lowest point of the smile. The current spot doesn't move on this smile curve — the smile curve is a snapshot of the option quotes and implied sigmas.

# mkt-data favor arrays+fixed width, !! java OBJECTs

In the post on "size of Object.java" we see every single Object.java instance needs at least 8 bytes of bookkeeping.

Therefore a primitive array (say, an array of 9 ints) has much lower overhead than a Collection of Integer.java:
– the array takes 4 bytes x 9 + at least 8 bytes of bookkeeping
– the collection takes at least sizeof(Integer) x 9 + bookkeeping; each Integer takes at least 4 bytes of payload + 8 bytes of bookkeeping

Market-data engines get millions of primitives per second. They must use either c++ or java primitive arrays.

Market data uses lots of ints and chars. For chars, again, java String is too wasteful. Most efficient would be a C-string without the null terminator.

The fastest market data feed is fixed-width, so no bandwidth is wasted on delimiter bits.
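As a sketch of why fixed-width helps, here is a hypothetical parser for a made-up layout (4-char symbol, then two 7-digit prices) — the field offsets and message format are my own assumptions, not any real feed's spec. It extracts ints straight from the byte array, with no String or Integer allocation on the hot path:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical fixed-width quote: bytes 0-3 symbol, 4-10 bid, 11-17 ask.
// Offsets are compile-time constants, so no delimiter scanning is needed.
public class FixedWidthParser {
    static int parseInt(byte[] msg, int from, int to) {
        int v = 0;
        for (int i = from; i < to; i++) {
            v = v * 10 + (msg[i] - '0');   // digits only; no objects allocated
        }
        return v;
    }

    public static void main(String[] args) {
        byte[] msg = "AAPL00123450012350".getBytes(StandardCharsets.US_ASCII);
        int bid = parseInt(msg, 4, 11);    // "0012345" -> 12345
        int ask = parseInt(msg, 11, 18);   // "0012350" -> 12350
        System.out.println("bid=" + bid + " ask=" + ask);
    }
}
```

Prices are scaled ints rather than floats, which also sidesteps the imprecision issue discussed next.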

Floats are imprecise. I am not an experienced practitioner, but I don't see any raw market data represented in floating point.

Java Objects also increase garbage-collection load. An indeterminate full GC often hurts a transaction FIX engine more than a market data FIX engine, but in HFT shops the latter needs extreme consistency. An unpredictable pause in the market data FIX engine can shut an HFT auto-trader out for a number of milliseconds, during the most volatile moments such as a market crash.

# tiny tips on sybase execute immediate

Tip: multi-line string literal is supported
Tip: exec (@full_select) — parentheses needed

Tip: to embed parameters in the query —
@full_select + " and ac_account_number = '" + @acct + "'"

# OUTPUT param in sybase

Sound bite — you write "OUTPUT" once each in the service proc AND the client proc.

[ Service proc ] CREATE procedure serviceProc (@p1, @p2, … @lastParam int OUTPUT) /* usually the last param */
[ Service proc ] /* in the body */ select @lastParam = 123

1) [ client proc ] exec @ret = serviceProc 'valueFor_p1', 'valueFor_p2', …, @someVarDeclaredLocally OUTPUT
2) [ client proc ] exec @ret = serviceProc @p1=…, @p2=…, @lastParam = @someVarDeclaredLocally OUTPUT

Note the strange syntax in @lastParam = @someVarDeclaredLocally — assigning left to right!

Note @someVarDeclaredLocally must be declared locally. @lastParam must NOT be, since it's not a local variable at all — it's a TAG.

In both 1) and 2), you don't provide a value for @lastParam (like you do for @p1); instead you specify an L-value variable to RECEIVE the output.

# how to get java to capture printing from sybase stored proc

In my experience on Wall St, Sybase stored procs can get very complex. A basic technique is the lowly "print". It beats "select" into a log table, because under an error condition those writes are rolled back, while print output still reaches the client.

Sometimes Sybase print output doesn't get returned to java. For plain jdbc I had a simple reusable method to while-loop through the chain of warnings. Here's my technique for spring JdbcTemplate. Note the documented "logging all warnings" behavior may not work. If you don't override handleWarnings() like I did, then all warnings become exceptions, so super.query()'s return value is lost — a real show stopper.

public class GenericProcedureCaller extends JdbcTemplate {
    @Override
    protected void handleWarnings(Statement stmt) {
        try {
            super.handleWarnings(stmt);
        } catch (SQLWarningException e) {
            log.info("\t\t v v v   output from database server   v v v v ");
            SQLWarning warn = e.SQLWarning();
            while (warn != null) {
                log.info(warn);
                warn = warn.getNextWarning();
            }
            log.info("\t\t ^ ^ ^   output from database server   ^ ^ ^ ^ ");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public List query(String sql, RowMapper rowMapper) throws DataAccessException {
        boolean oldSetting = isIgnoreWarnings();
        // set to false to capture "prints" from the proc, but there's a side effect.
        setIgnoreWarnings(false);
        try {
            return super.query(sql, rowMapper);
        } finally {
            setIgnoreWarnings(oldSetting);
        }
    }
}