PUT option – intuitive first look

When I read financial articles, I find PUT options harder to understand than most derivatives. Here’s my summary

–> You use a put insurance when you think the underlying might fall.
–> A put insurance lets you unload your worthless asset and cash in at a reasonably high strike price.

Here’s a longer version

–> You use a put insurance (on an underlying) at strike $100 when you think the underlying might fall below $100. This insurance lets you “unload” your asset and cash in $100. Note most puts and calls traded are OTM.

A good thing about this simplified, intuitive definition is that the current underlying price doesn’t matter. Specifically, it doesn’t matter whether the current underlying price is below or above the strike (i.e. in the money or out of the money).
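
To make the payoff concrete, here is a minimal runnable sketch (class and method names are my own, not from any library) of a put’s intrinsic payoff at expiry, under the simplified view above:

public class PutPayoff {
    // strike 100: if the underlying falls to S, you "unload" at 100 and pocket 100 - S
    static double payoff(double strike, double spot) {
        return Math.max(strike - spot, 0); // the insurance expires worthless if spot stays above strike
    }
    public static void main(String[] args) {
        System.out.println(payoff(100, 70));  // 30.0 -- underlying fell, insurance pays
        System.out.println(payoff(100, 120)); // 0.0  -- insurance unused
    }
}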

Q: both a short position (in the underlying) and a put holder benefit from the fall. Any difference?
A: I feel if listed puts are available, then they are preferable to holding a short position. Probably cheaper, and they don’t tie up lots of cash.

Q: how about the put writer?
A: (perhaps not part of the “intuitive” first lesson on puts)
A: sound bite — an “insurer”.
A: therefore they don’t want volatility. They want the underlying price to stay high, or at least stable.

The simplest underlying is a stock.


exceed copy-paste between win32 and X-windows

With Exceed, it can be a challenge to set up copy-paste between win32 and X windows. I know of 2 options.

Note I always enable auto-copy-x-selection and auto-paste-to-x-selection.

— option: X-selection-page -> X Selection Associated With Edit Operations set to Primary —
Lesson? “Primary” is the default. In this mode, don't use the xwin context-menu.

* Simple-Select (without middle-button or context menu) to copy from unix, paste to win32? Yes
* Simple-Select (without middle-button or context menu) to copy from unix, middle-button to paste to unix? Yes
* Select from win32, middle-button to paste in unix? Yes
* Select from win32, context-menu->edit->paste in unix? No

— option: X-selection-page -> X Selection Associated With Edit Operations set to Clipboard —
This is suggested on some webpage. It also enables copy-paste between unix and windows.

generic subtyping – arrays vs collections

based on [[java generics and collections]]

1)
List<Integer> ints = Arrays.asList(1, 2, 3);
List<Number> nums = ints; // uncompilable
nums.add(0.001); // exactly the mistake the compiler is guarding against
// List<Integer> is NOT a subtype of List<Number>, but below, array-of-Integer is indeed a subtype of array-of-Number!
2)
Integer[] intArray = {1, 2, 3};
Number[] numArray = intArray; // compilable, since arrays are covariant
numArray[0] = 0.001; // runtime ArrayStoreException; compiler negligence
3)
ArrayList<Integer> is indeed a subtype of List<Integer>. Subtyping on the container class is fine; it's the type argument that must match exactly.
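
Putting points 1 and 2 together, here is a minimal runnable sketch (class name is mine) showing the array version compiling but blowing up at run time:

import java.util.Arrays;
import java.util.List;

public class SubtypingDemo {
    public static void main(String[] args) {
        Integer[] intArray = {1, 2, 3};
        Number[] numArray = intArray; // legal: arrays are covariant
        try {
            numArray[0] = 0.001; // compiles, but the array remembers its real element type
        } catch (ArrayStoreException e) {
            System.out.println("caught: " + e); // the runtime check the compiler skipped
        }
        List<Integer> ints = Arrays.asList(1, 2, 3);
        // List<Number> nums = ints; // uncompilable -- generic invariance catches it at compile time
    }
}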

which banks ask differentiating questions

(another blog post)

If an interview question is too tough, then strong and fake candidates both fail. If a question is too common, then both pass. Good differentiating questions let the strong candidates shine through. Differentiating candidates depends on

* non-standard (but not obscure) questions, or
* drill-in on standard questions, or
* an interviewer capable of seeing the subtle differences among answers to standard questions.

If you have “none of the above”, then you can't differentiate candidates.

Most interview questions are open-ended. I (not yet a strong candidate) can give super long answers to sleep-vs-wait, interface-vs-abstract-class, arrayList-vs-linkedList, “how to prevent deadlock”…. However, many interview questions are so common that candidates can find perfect standard answers online. A strong interviewer must drill in on such a stock question to test the depth of understanding.

GS, google, lab49, MS, barc cap have good interviewers.

#1 database scale-out technique (financial+

Databases are easy to scale-up but hard to scale-out.

Most databases get the biggest server in the department, as a database is memory-intensive, I/O-intensive and CPU-intensive, taking up all the available kernel threads. When load grows, it’s easy to buy a bigger server — so-called scale-up.

Scale-out is harder, whereby you increase throughput linearly by adding nodes. Application-tier scale-out is much easier, so architects who need scalability should plan for DB scale-out early.

If the DB is read-mostly, then you are lucky. Just add slave nodes. If unlucky, then you need to scale out the master node.

The most common DB scale-out technique is sharding, i.e. HORIZONTAL partitioning (cutting the table into row ranges). Simplest example — a federated table, one partition per year.

I worked on a private wealth system. The biggest DB was the client sub-ledger, partitioned by geographical region into 12 partitions.
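
As a minimal sketch (all names hypothetical, not from that system), the routing layer for such a region-partitioned table might look like:

import java.util.Map;

public class ShardRouter {
    // partition key -> physical partition; only 3 of the 12 regions shown
    private final Map<String, String> regionToShard = Map.of(
            "APAC", "client_subledger_apac",
            "EMEA", "client_subledger_emea",
            "AMER", "client_subledger_amer");

    // every sub-ledger query carries the partition key, so it hits exactly one shard
    String shardFor(String region) {
        String shard = regionToShard.get(region);
        if (shard == null) throw new IllegalArgumentException("unknown region: " + region);
        return shard;
    }
}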

We also used vertical partitioning — splitting wide tables into multiple narrow tables.

Both vertical and horizontal partitions can enhance performance.

equals method of a GraphNode.java class

XR,

You are spot on about linked lists — if a class N has-a field of type N, then N is almost always, by definition, a node in a graph. That N field is probably a parent node. So allow me to put in some meaningful names — each GraphNode has-a field named this.parent. Now the question becomes “how to override equals() in GraphNode and deal with the unbounded recursion”.

It’s an unusual technical requirement to make equals() compare all ancestor nodes. However, it’s a reasonable business requirement to compare 2 GraphNodes by comparing all their ancestors. Such a business requirement calls for a (static) utility method, NOT an instance method in GraphNode.java. A static utility method like compareAllAncestor(GraphNode, GraphNode) can be iterative, avoiding recursion and stack overflow. Once this static method is in place, I might (grudgingly) create an instance method compare(GraphNode other) which simply returns compareAllAncestor(this, other), without unbounded recursion or stack overflow.
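
Here is a minimal sketch of that design, assuming a hypothetical GraphNode with the immutable ID discussed further below:

public class GraphNode {
    final String id;  // immutable unique ID
    GraphNode parent; // the has-a field discussed above

    GraphNode(String id) { this.id = id; }

    // iterative walk up both ancestor chains -- no recursion, no stack overflow
    // (assumes parent chains are acyclic; a general graph would need a visited-set)
    static boolean compareAllAncestor(GraphNode a, GraphNode b) {
        while (a != null && b != null) {
            if (!a.id.equals(b.id)) return false;
            a = a.parent;
            b = b.parent;
        }
        return a == b; // true only if both chains ended at the same time
    }

    // the grudging instance wrapper, simply delegating to the static utility
    boolean compare(GraphNode other) { return compareAllAncestor(this, other); }
}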

If 2 threads both perform this comparison, then I feel the method may need to lock the entire graph — expensive.

Even in a single-threaded environment, this comparison is expensive. (The recursive version would add an additional memory cost.) Potentially a performance issue. For most graph data structures in business applications, GraphNode should be Serializable and collections-friendly. Therefore hashCode() and equals() should be cheap.

For most graph data structures in business applications, each graph node usually represents a real-world entity, like a member in an MLM network. Now, if a graph node represents a real-world entity, then it’s always, without exception, identifiable by an immutable and unique ID. Usually this ID is saved in a database (it could also be generated in the application). Therefore, in most cases, equals() should compare ID only.
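
Continuing the hypothetical GraphNode sketch above, equals() then shrinks to an ID comparison, cheap enough for hashCode()/Serializable use:

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof GraphNode)) return false;
        return id.equals(((GraphNode) o).id); // compare the immutable unique ID only
    }

    @Override public int hashCode() { return id.hashCode(); } // cheap, consistent with equals()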

3 major risk-calculators in an investment bank

Background — imagine a typical investment bank. The risk engines below are owned by distinct branches of the IT organization, and they are not integrated (a major shortcoming in risk systems today is such data silos). For the bank’s CRO (Chief Risk Officer), how are these systems related? How do we interpret their risk numbers in a consolidated big picture?

– c-risk (credit risk) systems calculate the bank’s potential loss due to defaults OR counterparty credit-rating drops
– sophisticated m-risk (market risk) engines calculate expected Market Value drops due to price swings
– L-risk (liquidity risk)? Among other things, it covers capital reserves (Basel). L-risk is less computerized. Perhaps no daily valuation of assets/liabilities, long and short positions.

— some comparisons among the domains —
There is significant overlap between credit-risk and market-risk processes. In the bigger picture, unrealized loss due to counterparty credit is covered by both c-risk and m-risk. Real cash loss (i.e. realized) is the subject of both L-risk and c-risk. The credit risk engine is more about calculating unrealized loss (i.e. MV drop) due to credit quality change. In contrast, realized loss due to default is the subject of liquidity risk.
Unrealized MV loss due to credit quality hurts the valuation of loan portfolios and incoming collateral, and hurts our consolidated assets and our own credit rating. Therefore it is also a liquidity risk.
At the heart of credit risk analysis (unlike market risk or liquidity risk) is the credit review of individual borrowers/issuers, including countries.
M-risk is more quantitative than c-risk and L-risk, therefore most IT jobs are in m-risk. VaR is the most quantitative domain in finance, the star player in the “team”. Useful for short-term m-risk.
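
As an illustration of why VaR is so computable, here is a minimal sketch of 1-day historical VaR (P&L numbers are made up, names mine):

import java.util.Arrays;

public class HistoricalVar {
    // returns the loss at the chosen tail percentile of the historical P&L distribution
    static double var(double[] dailyPnl, double confidence) {
        double[] sorted = dailyPnl.clone();
        Arrays.sort(sorted); // worst day first
        int idx = (int) Math.floor((1 - confidence) * sorted.length);
        return -sorted[idx];
    }
    public static void main(String[] args) {
        double[] pnl = {-5.2, 1.3, -0.7, 2.1, -3.9, 0.4, -1.1, 2.8, -2.5, 0.9};
        System.out.println("99% 1-day VaR: " + var(pnl, 0.99)); // 5.2
    }
}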

For long-term m-risk, stress tests (aka scenario tests) are the primary risk engine. Stress testing is also one of the engines for c-risk estimation.
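
A stress test is, at its simplest, a deterministic revaluation under a handful of named scenarios. A toy sketch (scenario numbers invented):

public class StressTest {
    // linear revaluation: scenario P&L = sum of (position MV x shock %)
    static double stressedPnl(double[] positionMv, double[] shockPct) {
        double pnl = 0;
        for (int i = 0; i < positionMv.length; i++)
            pnl += positionMv[i] * shockPct[i];
        return pnl;
    }
    public static void main(String[] args) {
        double[] mv = {1_000_000, 500_000};    // two positions, marked to market
        double[] equityCrash = {-0.30, -0.10}; // hypothetical scenario shocks
        System.out.println("scenario P&L: " + stressedPnl(mv, equityCrash)); // -350000.0
    }
}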
 

I feel liquidity risk is more critical to a bank than credit risk or market risk, as liquidity means solvency.
How about collateral valuation engines? I think these straddle c-risk and L-risk systems. Outgoing collateral reduces a bank’s liquidity. Collateral we hold in the form of bonds is valued daily in our c-risk calculator.

How about the margin risk calculator (for prime brokerage or listed derivatives)? I assume these margin accounts hold only liquid, credit-risk-free assets. In such a case, it’s basically a stress-test m-risk engine. Not so much VaR. Not much c-risk. It does hit the bank’s capital reserve, since collateral adds to or reduces a bank’s liquidity.

Now, if a margin risk calculator must support risky bonds in the margin account, then this system might affect m-risk, c-risk and L-risk.