bash check command failure but also pipe its output

I faced the same problem as described in

set -o pipefail # By default, the exit status of a pipeline is the exit status of its last command; with pipefail, it becomes the exit status of the rightmost command that failed (or 0 if none failed).

This combines well with set -e: once pipefail is on, a failure anywhere in the pipeline can abort the script.
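A minimal demo of the difference, using q(false | true) as the pipeline:

```shell
# Compare a pipeline's exit status with and without pipefail.
false | true
status_default=$?          # 0: status of the LAST command (true)

set -o pipefail
false | true
status_pipefail=$?         # 1: status of the rightmost failing command
set +o pipefail

echo "default=$status_default pipefail=$status_pipefail"
```

Under set -e, the second pipeline would abort the script at that point.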



semi-retirement jobs:Plan B #if !! U.S.

I had blogged about this before, such as the blogger-pripri post on “many long term career options”

Hongzhi asked what if you can’t go to the US.

* Some Singapore companies simply keep their older staff since they still can do the job, and the cost of layoff is too high.
* Support and maintenance tech roles
* Some kind of managerial role? I guess I'm mediocre at management but could possibly do it; still, I feel hands-on work is easier and safer.
* Teach in a Poly or private school? Possibly not my strength.
* Run a small business, such as a kindergarten, with my wife

crossed orderbook mgmt: real story4IV

The problem disappears after a while; some days are better, some worse.

data quality visible to paying customers

–individual contribution?
not really, to be honest.

–what would you do differently?
Perhaps detect and set a warning flag when generating top-book token? See other post on “crossed”


  • query the exchange if query is supported – many exchanges do.
  • compare across ticker plants? Not really done in our case.
  • replay captured data for investigation
  • retrans as the main solution
  • periodic snapshot feed is supported by many exchanges, designed for late-starting subscribers. We could (though we don’t) use it to clean up our crossed orderbook
  • manual cleaning via the cleaner script, as a “2nd-last” resort
  • hot failover.. as last resort

–the cleaner script:

This “depth-cleaner” tool is essentially a script to delete crossed/locked (c/l) entries from our replicated order books. It is run by a user in response to an alert.

… The script then compares the Ask & Bid entries in the order book. If it finds a crossed or locked order, where the bid price is greater than (crossed) or equal to (locked) the ask price, it writes information about that order to a file. It then continues checking entries on both sides of the book until it finds a valid combination. Note that the entries are not checked for “staleness”. As soon as it finds non-crossed/locked Ask and Bid entries, regardless of their timestamps, it is done checking that symbol.
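A hedged sketch of just the detection step described above (not the actual production script; the entry layout and pairwise traversal are my assumptions):

```python
# Each entry is (price, order_id); bids sorted descending, asks ascending.
def find_crossed_entries(bids, asks):
    """Collect crossed/locked entries from the top of the book, pairwise,
    until the best remaining bid no longer crosses the best remaining ask."""
    to_delete = []
    b = a = 0
    while b < len(bids) and a < len(asks) and bids[b][0] >= asks[a][0]:
        # bid > ask => crossed; bid == ask => locked; record both sides
        to_delete.append(('bid', bids[b]))
        to_delete.append(('ask', asks[a]))
        b += 1
        a += 1
    return to_delete   # these entries would go into the delete (CIM) file
```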

The script takes the entries from the crossed/locked file and creates a CIM file containing a delete message for every order given. This CIM data is then sent to (the admin port of) the order book engine to execute the deletes.

Before the cleaner is invoked manually, we have a scheduled scanner for crossed books. This scanner checks every symbol once a minute. I think it uses a low-priority read-only thread.

c++get current command line as str

#include <fstream>
#include <iostream>
#include <string>
#include <unistd.h> // getpid()

// Linux-only: read this process's command line from /proc.
std::string const & get_command_line() {
  static std::string ret;
  if (ret.empty()) {
        std::string path="/proc/" + std::to_string((long long)getpid()) + "/cmdline";
        std::cerr<<"initializing rtsd parsername from "<<path<<" ..\n";
        std::ifstream myfile(path);
        if (!myfile) return ret;
        getline(myfile, ret); // note: argv items are NUL-separated in /proc
  }
  return ret;
}

##short-n-hard algo@simple data structures

90% of the hardest coding questions use simple data structures. Some of these algos are

  • very short but baffling
  • free of fancy data structures
  • dependent on prior study; nobody is expected to invent them during an interview

–Linked list

Brent — check for loop in linked list
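A sketch of Brent's cycle detection (the Node class is illustrative):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def has_cycle(head):
    """Brent: the hare runs ahead in power-of-two bursts; the tortoise
    teleports to the hare's position at the start of each burst."""
    if head is None:
        return False
    power = steps = 1
    tortoise, hare = head, head.next
    while hare is not None and hare is not tortoise:
        if steps == power:      # burst exhausted: restart with doubled length
            tortoise = hare
            power *= 2
            steps = 0
        hare = hare.next
        steps += 1
    return hare is not None     # hare met tortoise => cycle
```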

–binary tree

Morris — in-order walk ]O(N) #O(1)space


  • regex
  • KMP string search



minimal queue ] python #stack=ez

In a coding test, I will use a vanilla list for both.

— Stack can use

  • list.append()
  • list.pop() with no arg.
  • Top of stack is list[-1]

— Queue: here is a primitive (inefficient) implementation using a list:

  • dequeue — list.pop()
    • A non-queue operation — list.pop(2) would remove and return the 3rd item, but popping anywhere except the end is O(n) on a list!
  • enqueue — list.insert(0, newItem) — similarly inefficient, since every element shifts

A deque or a circular array (fixed capacity) is more efficient for a queue.
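A minimal sketch with collections.deque, which gives O(1) appends and pops at both ends:

```python
from collections import deque

q = deque()
q.append('a')          # enqueue at the right end
q.append('b')
first = q.popleft()    # dequeue from the left end, O(1)
```

In a coding test, deque is the drop-in upgrade once the vanilla-list queue becomes a bottleneck.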



elaborate python project/challenge #PWM

Key is “technical challenge”. Without it, I may look like just another python coder. Some of the interviewers may not be easy to impress. I will only target the majority.

Different companies might have different needs for python skill. I can only guess

  1. Usage: data analysis, data science
  2. Usage: automation, glue language
  3. …. py as primary language to implement business logic in an enterprise application? Only in Macquarie, baml and jpm
    1. I could remake my PWM Commissions system in python

For 2), I used

  • devops
  • integration with git, maven, g++, code generation, test result parsing, …. for devops
  • automated testing
  • simple network servers
  • subprocess
  • integration with shell script
  • automatic upload
  • generate c++ source code to be compiled into a native python extension
  • import hack
  • logging decorators —
  • python code attached to trade object. Code to be evaluated at reval time.

## Y avoid blocking design

There are many contexts. I only know a few.

1st, let’s look at a socket context. Suppose there are many (say 50 or 500) sockets to process. We don’t want 50 threads. We prefer fewer — perhaps one thread that checks each “ready” socket, transfers whatever data can be transferred, then goes back to waiting. In this context, we need either

  • /readiness notification/, or
  • polling
  • … Both are compared on P51 [[TCP/IP sockets in C]]
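A one-pass sketch of readiness notification using Python's selectors module (loopback socket; since no client connects, nothing is reported ready):

```python
import selectors
import socket

# One thread multiplexes many sockets, servicing only the "ready" ones.
sel = selectors.DefaultSelector()
srv = socket.socket()
srv.bind(('127.0.0.1', 0))    # any free port
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

# One pass of the event loop; a real server would loop forever.
ready = sel.select(timeout=0)        # [] here: no client is connecting
for key, mask in ready:
    conn, addr = key.fileobj.accept()   # would not block: fd is ready
sel.close()
srv.close()
```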

2nd scenario — GUI. Blocking a UI-related thread (like the EDT) would freeze the screen.

3rd, let’s look at some DB request client. The request thread sends a request and it would take a long time to get a response. Blocking the request thread would waste some memory resource but not really CPU resource. It’s often better to deploy this thread to other tasks, if any.

Q: So what other tasks?
A: ANY task, in the thread pool design. The requester thread completes the sending task, and returns to the thread pool. It can pick up unrelated tasks. When the DB server responds, any thread in the pool can pick it up.

This can be seen as a “server bound” system, rather than IO-bound or CPU-bound. Both the CPU task queue and the IO task queue get drained quickly.


java lock: atomic^visibility

You need to use “synchronized” even on a simple getter, for the #2 (visibility) effect!


A typical lock() operation in a java method has two effects:

  1. serialized access — unless I release the lock, all other threads will be blocked grabbing this lock
  2. memory barrier — after I acquire the lock, all the changes (on shared variables) by other threads become visible. The exact details are governed by the Java memory model’s happens-before rules.

I now feel the 2nd effect is often more important (and more tricky) than the 1st effect. See P94 [[Doug Lea]] . I like this simple summary, even if not 100% complete —

“In essence, releasing a lock forces a flush of all writes from working memory….and acquiring a lock forces reload of accessible fields.”

Q: what are the subset of “accessible fields” in a class with 9 fields?
A: I believe the compiler knows “in advance” what subset of fields will be accessed after lock acquisition.

Q: what if I acquire a lock, do nothing and release the lock? Is there the #2 effect?
A: I doubt it. You need to enclose all the “tricky” operations between a lock grab/release. If you leave some update (to a shared mutable) outside in the “cold”, then #1 will fail and #2 may also fail.
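A minimal sketch of the getter point (class name is illustrative): synchronizing both the setter and the trivial getter buys visibility (#2), not just mutual exclusion (#1).

```java
class Counter {
    private int value;                             // shared mutable state

    synchronized void set(int v) { value = v; }    // release barrier on exit

    // Without "synchronized" here, a reader thread may see a stale value
    // even though the getter looks harmless.
    synchronized int get() { return value; }       // acquire barrier on entry
}
```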


symlink/hardlink: Win7 or later #based on a 2017 article

The mklink command can create both hard links and soft links (called “symbolic links” in Windows).

On Windows XP, I have used “Junction.exe” for years, because mklink is not available.

share buy-back #basics

  • shares outstanding — reduced, since the repurchased shares (say 100M out of 500M total outstanding) are no longer available for trading.
  • Who pays cash to whom? The company pays existing public shareholders (buying on the open market), so the company needs to pay out hard cash! This reduces the company’s cash position.
  • EPS — benefits, leading to immediate price appreciation
  • Total assets — reduced, improving ROA/ROE
  • Demonstrates a comfortable cash position
  • Initiated by — management, when they think the stock is undervalued
  • Perhaps requested by — existing shareholders hoping to make a profit
  • Signals the company has excess capital
  • A.k.a. “share repurchase”

coin problem #all large-enough amounts are decomposable

This is not really an algo interview question — more of a brain-teaser.

Based on — For example, the largest amount that cannot be obtained using only coins of 3 and 5 units is 7 units. The solution to this problem for a given set of coin denominations is called the Frobenius number of the set. The Frobenius number exists as long as the set of coin denominations has no common divisor greater than 1.

Note if a common divisor exists, as in {2,4}, then all the odd amounts are non-decomposable.

Q: why a very large amount is always decomposable ? Give an intuitive explanation for 2 coin values like 3 and 5.

Here’s an incomplete answer — 15 (=3*5), 16, 17 are all decomposable. Any larger number can be solved by adding 3’s .

In fact, it was proven that any amount greater than (not equal to) [xy-x-y] is always decomposable. So if we are given 2 coin values (like 4 and 5, where x is the smaller value) we can easily figure out a range

xy-x-y+1  to xy-y

where every amount is decomposable. Note this range has x distinct values, so any higher amount is easily solved by adding x’s.

Also note xy-y is obviously decomposable as (x-1)y.
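A quick brute-force check of the claims above (illustrative code, not a proof):

```python
def decomposable(n, x, y):
    """Can n be written as a*x + b*y with non-negative integers a, b?"""
    return any((n - a * x) % y == 0 for a in range(n // x + 1))

x, y = 3, 5
frobenius = x * y - x - y            # 7: the largest non-decomposable amount
assert not decomposable(frobenius, x, y)
assert all(decomposable(n, x, y) for n in range(frobenius + 1, 200))
```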


##top5 std::map tasks 4cod`IV

custom hash func? See short example on P 364 [[c++standard library]]. [[optimized c++]] has many examples too.

initialize? There’s a simple constructor taking a long initializer, but the insert() methods support the same and are more versatile.

insert? single pair; range (anotherMap.begin(), anotherMap.end());

** insert single value — won’t overwrite pre-existing
** map1.emplace(…)
** map1[key2] = val3 // overwrites pre-existing
** insert list of values —

(returning the value) lookup? at() better than operator[]

a pointer type as key? useful technique.

erase? by a specific key. No need to call another function to really erase the node.

Q: create only if absent; no update please
A: insert()

Q2: create or update
Q2b: look up or create
A: operator []

Q1: update only; no create please
Q1b: look up only. No create please
A: find() method

Q: check for existence
A: c.find() is slightly better than c.count() esp. for multi_* containers
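The create/update asymmetries above, sketched in one function:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the create-only / create-or-update / lookup-only asymmetries.
bool demo_map_tasks() {
    std::map<std::string, int> m{{"a", 1}};

    m.insert({"a", 99});                 // create-only: existing "a" NOT overwritten
    if (m["a"] != 1) return false;

    m["a"] = 2;                          // operator[]: create-or-update
    if (m.at("a") != 2) return false;    // at(): lookup-only (throws if absent)

    if (m.find("zzz") != m.end()) return false;   // existence check
    m.erase("a");                        // erase by key: node destroyed right away
    return m.count("a") == 0;
}
```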


complexities: replicate exchange order book #Level1+2

–For a generic orderbook, the Level 1 complexity is mostly about trade cancel/correct.
All trades must be stored in a database (like our TickCache). In the database, each trade has a trade Id and also an arrival time.

When a trade is canceled by TradeId, we need to regenerate LastPrice/OpenPrice, so we need the ArrivalTime attribute.

VWAP/High/Low are all generated from TickCache.

The other complexity is BBO generation from Level 2.

–For a Level-based Level 2 order book, the complexity is higher than Level 1.

OrderId is usually required so as to cancel/modify.

c++debug^release build can modify app behavior #IV

This was actually asked in an interview, but it’s also good GTD knowledge. points out —

  • fewer uninitialized variables — Debug mode is more forgiving because it is often configured to initialize variables that have not been explicitly initialized.
    • For example, perhaps you’re deleting an uninitialized pointer. In debug mode it works because the pointer was nulled and delete ptr is harmless on NULL. In release it holds rubbish, and delete ptr will actually cause a problem. points out —

  • guard bytes on the stack frame — the debug build puts more padding on the stack, so you’re less likely to overwrite something important.

I had frequent experience reading/writing beyond an array limit. points out —

  • relative timing between operations is changed by debug build, leading to race conditions

Echoed on P260 [[art of concurrency]], which says it’s (in theory) possible to hit a threading error with optimization and no such error without optimization — which would represent a bug in the compiler.

P75 [[moving from c to c++]] hints that compiler optimization may lead to “critical bugs” but I don’t think so.

  • poor use of assert can have side effects that exist only in the debug build. Release builds typically define NDEBUG, which turns off all assertions, since assertion failure messages are always unwelcome in production.
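A sketch of the assert pitfall (names are illustrative): the side effect exists only while NDEBUG is undefined.

```cpp
#include <cassert>

int items_consumed = 0;
int consume_one() { return ++items_consumed; }

// WRONG: real work hidden inside assert().
// Compiled with -DNDEBUG, the whole expression disappears,
// and so does the increment.
void process() {
    assert(consume_one() > 0);
}
```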

asymmetry lower_bound^upper_bound #IFF lookup miss

For a “perfect” hit, both set::lower_bound() and std::lower_bound() return an iterator pointing to the matching element, whereas upper_bound returns one pointing strictly higher than the target. See

To achieve symmetry, we need to decrement (if legal) the iterator returned from upper_bound.
If no perfect hit, then lower_bound() and upper_bound() both give the next higher node, i.e. where you would insert the target value.

#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

vector<int> v{1,3,5};
int main(){
  vector<int>::iterator it;
  // lookup miss (needle 2): both point to the next higher element
  it = lower_bound(v.begin(), v.end(), 2); cout<<*it<<endl; // 3
  it = upper_bound(v.begin(), v.end(), 2); cout<<*it<<endl; // 3
  // perfect hit (needle 3): the asymmetry appears
  it = lower_bound(v.begin(), v.end(), 3); cout<<*it<<endl; // 3
  it = upper_bound(v.begin(), v.end(), 3); cout<<*it<<endl; // 5
}

array^pointer variables types: indistinguishable

  • int i; // a single int object
  • int arr[9]; // a nickname for the starting address of an array, very similar to a pure-address const pointer
  • int * const constPtr;
  • <— the two types above are similar; the two below are similar —>
  • int * pi; // a regular pointer variable
  • int * heapArr = new int[9]; // data type is the same as pi


This is related to q[cannot open shared object file]

See for the RUNPATH

q(objdump) can inspect the binary file better than q(ldd) does.

q(ldd) shows the final, resolved path of each .so file, but (AFAIK) doesn’t show how it’s resolved. The full steps of resolution are described in

q(objdump) can shed some light … in terms of DT_RUNPATH section of the binary file.

c++q[new] variations

  1. variation: new MyClass(arg1) // most common. Can throw bad_alloc
  2. variation: new MyClass() // better form, calling the no-arg ctor
  3. variation: new MyClass //bare word, same as above. See op-new: allocate^construct #placement #IV
  4. variation: new (std::nothrow) MyClass(…) // returns NULL upon failure instead of throwing
  5. variation: placement new
  6. variation: array-new // no constructor arguments allowed!
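The variations, sketched with a hypothetical class W:

```cpp
#include <new>   // std::nothrow, placement new

struct W { int v; W(int x = 0) : v(x) {} };

bool demo_new_variations() {
    W * a = new W(1);                 // 1. most common; may throw bad_alloc
    W * b = new W();                  // 2. no-arg ctor
    W * c = new W;                    // 3. bare word, same as above
    W * d = new (std::nothrow) W(4);  // 4. nullptr on failure, never throws
    alignas(W) char buf[sizeof(W)];
    W * e = new (buf) W(5);           // 5. placement new: construct in buf
    W * f = new W[3];                 // 6. array-new: no ctor arguments

    bool ok = a->v==1 && b->v==0 && c->v==0 && d && d->v==4
              && e->v==5 && f[0].v==0;

    delete a; delete b; delete c; delete d;
    e->~W();                          // placement: destroy explicitly, no delete
    delete[] f;                       // array-new pairs with delete[]
    return ok;
}
```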

## IV favorites ] sockets

There are dozens of sub-topics but in my small sample of interviews, the following sub-topics have received disproportionate attention:

  1. blocking vs non-blocking
  2. tuning
  3. multicast
  4. add basic reliability over UDP (many blog posts); how is TCP transmission control implemented
  5. accept() + threading
  6. select (or epoll) on multiple sockets

Background — The QQ/ZZ framework was first introduced in this post on c++ learning topics

Only c/c++ positions need socket knowledge. However, my perl/py/java experience with socket API is still relevant.

Sockets are a low-level subject. The tough socket topics feel less complex than concurrency, algorithms, probability, OO design, … Yet some QQ questions are so in-depth they remind me of java threading.

Interview is mostly knowledge test; but to do well in real projects, you probably need experience.

Coding practice? no need. Just read and blog.

Socket knowledge is seldom the #1 selection criteria for a given position, but could be #3. (In contrast, concurrency or algorithm skill could be #1.)

  • [ZZ] tweaking
  • [ZZ] exception handling in practice
  • —-Above topics are still worth studying to some extent—–
  • [QQ] tuning
  • [QQ] buffer management


op-new: allocate^construct #placement #IV

Popular IV topic. P41 [[more effective c++]] has an excellent summary:

  1. to BOTH allocate (on heap) and call constructor, use regular q(new)
  2. to allocate Without construction, use q(operator new)
    1. You can also use malloc. See
  3. to call constructor on heap storage already allocated, use placement-new, which invokes ctor

The book has examples of Case 2 and Case 3.

Note it’s common to directly call constructor on stack and in global area, but on the heap, placement-new is the only way.

Placement-new is a popular interview topic (Jump, DRW and more …), rarely used in common projects.
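A sketch of Case 2 + Case 3 (Widget is hypothetical): allocation, construction, destruction and deallocation as four separate steps.

```cpp
#include <new>

struct Widget { int v = 7; };

bool demo_opnew() {
    void * raw = ::operator new(sizeof(Widget)); // case 2: allocate only, no ctor
    Widget * w = new (raw) Widget();             // case 3: ctor on existing storage
    bool ok = (w->v == 7);
    w->~Widget();                                // manual dtor call
    ::operator delete(raw);                      // free the raw storage
    return ok;
}
```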

in-depth article: epoll illustrated #SELECT

(source code is available for download in the article)

Compared to select(), the newer linux system call epoll() is designed to be more performant.

Ticker Plant uses epoll — no select() at all. The linked article is a nice one, with sample code for a TCP server.

  • bind(), listen(), accept()
  • main() function with an event loop. In the loop
  • epoll_wait() to detect
    • new client
    • new data on existing clients
    • (Using the timeout parameter, it could also react to timer events.)

I think this toy program is more readable than a real-world epoll server with thousands of lines.
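A minimal, testable sketch of the same pattern (Linux-only; a pipe stands in for a TCP socket): register an fd, wait for readiness, consume the data.

```cpp
#include <sys/epoll.h>
#include <unistd.h>

// Returns the number of fds epoll_wait reports ready (expected: 1).
int demo_epoll() {
    int fds[2];
    if (pipe(fds) != 0) return -1;
    int ep = epoll_create1(0);

    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = fds[0];
    epoll_ctl(ep, EPOLL_CTL_ADD, fds[0], &ev);   // watch the read end

    write(fds[1], "x", 1);                       // make the fd readable
    epoll_event ready[8];
    int n = epoll_wait(ep, ready, 8, 1000);      // readiness notification

    char buf[8];
    read(ready[0].data.fd, buf, sizeof buf);     // drain the ready fd
    close(fds[0]); close(fds[1]); close(ep);
    return n;
}
```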


I like the top answer in

Both are unsafe casts and can hit runtime errors, but reinterpret_cast (RC) is the more dangerous of the two. RC turns off compiler error checking, so you are on your own in the dark forest.

I feel RC is the last resort, when you have control on the bit representation of raw data, with documented assurance this bit representation won’t change.

In a real example in RTS — a raw byte array comes in as a char array, and you RC it into a (pointer to a) packed struct. The compiler has no reason to believe the char array can be interpreted as that struct, so it skips all safety checks. If the raw data layout somehow changes, you get undefined behavior at run time. In this case static_cast (SC) is illegal — it will not compile.
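A hedged illustration of the RTS example (the Quote struct is invented; note such casts can also violate strict-aliasing rules):

```cpp
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct Quote { std::uint16_t id; std::uint32_t price; };  // wire layout, no padding
#pragma pack(pop)

std::uint32_t parse_price(const char * raw) {
    // Compiles with zero safety checks; we rely on the documented wire format.
    const Quote * q = reinterpret_cast<const Quote *>(raw);
    // static_cast<const Quote *>(raw) would be rejected by the compiler.
    return q->price;
}
```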


big bright world behind that thick heavy door

(Was so hard to get over U.S. barriers in the form of h1b and later GC …)

Was so hard to overcome the English barrier … I struggled so very hard. Slowly I was able to express myself, with a big enough vocab…. Once the door cracked open, I got my first glimpse of a big, bright world[1] behind it. Immediately I could see myself thriving in that world.

With a safe retreat always available, there’s no real risk [2] to me so I boldly jumped in as soon as possible.

So which items in the 5advantages post? Job pool !

[1] Some people don’t need English and can do well using Chinese as primary language … I’m different.
[2] there’s indeed some risk to wife and to kids…

simple implementation of memory allocator#no src

P9 [[c++game development primer]] has a short implementation without using heap. The memory pool comes from a large array of chars. The allocator keeps track of allocated chunks but doesn’t reuse reclaimed chunks.

It showcases the header associated with each allocated chunk. This feature is also part of a real heap allocator.

reinterpret_cast is used repeatedly.
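A sketch in the spirit of the book's allocator (not its actual code): bump-allocation from a char array, a header per chunk, no reuse of freed chunks.

```cpp
#include <cstddef>

struct Header { std::size_t size; };          // bookkeeping before each chunk

alignas(std::max_align_t) char pool[4096];    // the entire "heap"
std::size_t pool_offset = 0;

void * my_alloc(std::size_t n) {
    // round n up so the next Header stays aligned
    n = (n + alignof(Header) - 1) / alignof(Header) * alignof(Header);
    if (pool_offset + sizeof(Header) + n > sizeof pool) return nullptr;

    Header * h = reinterpret_cast<Header *>(pool + pool_offset);
    h->size = n;                              // header records the chunk size
    pool_offset += sizeof(Header) + n;        // bump; freed chunks never reused
    return h + 1;                             // user memory starts after header
}
```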

pointer/itr as field: a top3 implementation pattern

common interview question.

This mega-pattern is present in 90% of java and c# classes, and also very common in c++. Important data structure classes  relying on pointer fields include vectors, strings, hashtables and most STL or boost containers.

Three common choices for a pointer member variable:

  • Raw pointer — default choice
  • shared_ptr — often a superior choice
  • char pointer, usually a c-str

In each case, we worry about

  • construction of this field
  • freeing heap memory
  • RAII
  • what if the pointee is not on heap?
  • copy control of this field
  • ownership of the pointer
  • when to return raw ptr and when to return a smart ptr

shared_ptr field offers exception safety, automatic delete, default synthesized dtor, copier, op=
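A sketch of the shared_ptr choice (the Order class is hypothetical): the synthesized destructor, copy-constructor and op= all do the right thing, so no manual copy control is needed.

```cpp
#include <memory>
#include <string>

class Order {
    std::shared_ptr<std::string> desc_;   // pointer field, shared ownership
public:
    explicit Order(std::string d)
        : desc_(std::make_shared<std::string>(std::move(d))) {}
    long refs() const { return desc_.use_count(); }
    // no dtor/copier/op= written: the synthesized ones manage the pointee
};
```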


multicast address 1110xxxx #briefly

By definition, multicast addresses all start with 1110 in the first half-byte. Routers seeing such a destination (never a source) address know the msg is a multicast msg.

However, routers don’t forward messages with certain reserved destination addresses, because these are local multicast addresses. I guess these local multicast addresses are like 192.168.* addresses.
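A quick check of the 1110-prefix claim: it puts the first octet of every multicast address in 224..239.

```python
def is_multicast(first_octet):
    """True iff the top 4 bits of the first octet are 1110."""
    return (first_octet >> 4) == 0b1110

# 1110 0000 = 224 through 1110 1111 = 239
assert is_multicast(224) and is_multicast(239)
assert not is_multicast(192) and not is_multicast(240)
```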


See also blog on c++ atomic<int> — primary usage is load/store, not CAS

lock free isn’t equal to CAS — lock-free construct also include Read-Copy-Update, widely used in linux and other kernels.

atomic isn’t equal to lock-free — For example, the c++ atomic class templates can fall back to locks if the underlying processor lacks a CAS instruction.
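A tiny sketch: plain load/store as the primary usage, CAS available only when needed.

```cpp
#include <atomic>

std::atomic<int> counter{0};

bool demo_atomic() {
    counter.store(5);                             // primary usage: load/store
    int expected = 5;
    counter.compare_exchange_strong(expected, 6); // CAS, only when needed
    // counter.is_lock_free() reports whether this specialization avoids locks
    return counter.load() == 6;
}
```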


java8 lambda expression under the hood – phrasebook

Q: how are java8 lambda expressions translated?

* The invokedynamic bytecode instruction — not a helper method but a special JVM instruction. You can see it in the bytecode.
* Static methods – a non-capturing (stateless) lambda expression is simply converted to a static method
* a capturing lambda expression can also become a static method with the captures as additional method args. This may not be the actual compiler action, but it is a proven model. (Compare : separate chaining is one proven implementation of hash tables.)

However, static methods obscure an essential rule — the lambda expression’s type must “look like” a subtype of a SAM interface. Remember you often pass a lambda around as if it’s a SAM (subtype) instance.

So even if the number crunching is done in a static method, there must be some non-static wrapper method in a SAM subtype instance.
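A sketch of the model above (the interface and class are illustrative): a non-capturing lambda and a capturing one, both used through a SAM-interface reference.

```java
interface Adder { int apply(int a, int b); }           // SAM interface

class LambdaDemo {
    static int run() {
        int captured = 10;                             // effectively final
        Adder plain = (a, b) -> a + b;                 // non-capturing => static method
        Adder capturing = (a, b) -> a + b + captured;  // capture becomes an extra arg
        return plain.apply(2, 3) + capturing.apply(0, 0);  // 5 + 10
    }
}
```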

java override: strict on declared parameter types

Best practice – use @Override on the overriding method to request “approval” by the compiler. You will realize that the rule is concise –

Rule 1: “return type of the overriding method can (but not c++ [1]) be a subclass of the return type of the overridden method, but the argument types must match exactly”

So almost all discrepancies between parent/child parameter types (like int vs long) will be compiled as overloads. The only exception I know is — overriding method can remove “” from List as the parameter type.

There could be other subtle rules when we consider generics, but in the world without parameterized method signatures, the above Rule 1 is clean and simple.

[[ARM]] P212 explains that multiple inheritance would be problematic if this were allowed.
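Rule 1 sketched (classes are illustrative): a covariant return compiles; a changed parameter type silently becomes an overload.

```java
class Parent {
    Object get() { return "p"; }
    void put(int x) {}
}

class Child extends Parent {
    @Override String get() { return "c"; }  // covariant return type: OK
    void put(long x) {}   // overload, NOT override; adding @Override here fails
}
```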

cloud4java developers – brief notes

Am an enterprise java developer, not a web developer. I feel PaaS is designed more for the web developer.

I agree with the general observation that IaaS doesn’t impact us significantly.

I feel SaaS doesn’t either. SaaS could offer devops (build/delivery) services for java developer teams.

PaaS has the biggest impact. We have to use the API /SDK provided by the PaaS vendor. Often no SQL DB. Can’t access a particular host’s file system. MOM is rarely provided.

java ReentrantLock^synchronized keyword

I told Morgan Stanley interviewers that ReentrantLock is basically the same thing as the synchronized keyword, but with additional features:

  • Feature: lockInterruptibly() is very useful if you need to cancel a “grabbing” thread.
  • Feature: tryLock() is very useful. It can take an optional timeout argument.

Above features all help us deal with deadlock:)

  • Feature: Multiple condition variables on the same lock.
  • Feature: lock fairness is configurable. A fair lock favors longest-waiting thread. Synchronized keyword is always unfair.
  • — query operations —
  • Feature: bool hasQueuedThread(targetThread) gives a best-effort answer whether targetThread is waiting for this lock
  • Feature: Collection getQueuedThreads() gives a best-effort list of “grabbing” threads on this lock
  • Feature: Collection getWaitingThreads (aConditionVar) gives a best-effort view of the given “waiting room”.
  • Feature: int getHoldCount() basically gives the “net” re-entrancy count
  • Feature: bool isHeldByCurrentThread()
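A small sketch of two of the features (tryLock and the query operations); the fairness flag is set in the constructor.

```java
import java.util.concurrent.locks.ReentrantLock;

class LockDemo {
    static final ReentrantLock lock = new ReentrantLock(true); // fair: favors longest waiter

    static boolean tryWork() {
        if (!lock.tryLock()) return false;   // never blocks: helps avoid deadlock
        try {
            // getHoldCount() is the "net" re-entrancy count
            return lock.isHeldByCurrentThread() && lock.getHoldCount() == 1;
        } finally {
            lock.unlock();
        }
    }
}
```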

MyType a=7: but conversion constructor is explicit@@


  explicit MyType(int); // would disallow: MyType a = 77;

But q(MyType a = 77) has a solution:

  MyType a = (MyType) 77; // the cast invokes the explicit conversion constructor!

In general, most custom types should make conversion constructors explicit to prevent hidden bugs, but smart pointers need an implicit conversion constructor, to support

  SmartPtr myPtr = new int(77);
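The workaround above, sketched with a hypothetical MyType:

```cpp
struct MyType {
    int v;
    explicit MyType(int x) : v(x) {}      // explicit conversion ctor
};

MyType demo_explicit() {
    // MyType a = 77;                     // rejected: ctor is explicit
    MyType a = (MyType) 77;               // C-style cast invokes the explicit ctor
    MyType b = static_cast<MyType>(77);   // equivalent, more idiomatic c++
    return b.v == a.v ? a : b;
}
```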

–A real example from CFM quant code

 FwdCurve::const_iterator iter = find( key ); //non-const itr

 QL_ASSERT( iter != ((FwdCurve * const) this)->end(), "not found" ); // original code probably before "explicit". 

// I had to change it to
 FwdCurve_iterator endItr = ((FwdCurve * const) this)->end();
 QL_ASSERT( iter != FwdCurve_const_iterator(endItr), "not found" ); //call the conversion ctor explicitly

java exception passing between threads

Many people ask how to make a child thread’s exception “bubble up” to the parent thread.

Background — A Runnable task is unsure how to handle its own exception. It wants to escalate to parent thread. Note parent has to block for the entire duration of the child thread (right after child’s start()), blocked either in wait() or some derivative of wait().

This question is not that trivial. Here are my solutions:

1) Callable task and Futures results — but is the original exception escalated? Yes. P197 [[java threads]]

2) Runnable’s run() method can temporarily catch the exception, save the object in a global variable such as a blocking queue, and notifyAll(). The parent thread could check the global variable after getting notified. Any thread can monitor the global.

If you don’t have to escalate to parent thread, then

3) setUncaughtExceptionHandler() – the handler method runs on the dying thread itself, so it is single-threaded. In the handler, you can hand the follow-up work to a thread pool, so the dying thread can exit, but I don’t know how useful that is.

4) adopt invokeAndWait() design — invokeAndWait() javadoc says “Note that if the method throws an uncaught exception (on EDT) it’s caught and rethrown, as an InvocationTargetException, on the callers thread”

In c#, there are various constructs similar to Futures.get() — seems to be the standard solutions for capturing child thread exception.
* Task.Wait()
* Task.Result property
* EndInvoke()
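Solution 1) sketched (names illustrative): the Future captures the child's exception and rethrows it, wrapped in ExecutionException, inside the parent's get().

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class EscalateDemo {
    static String childExceptionMessage() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Void> task = () -> {
            throw new IllegalStateException("boom in child"); // thrown on child thread
        };
        Future<Void> fut = pool.submit(task);
        try {
            fut.get();                          // parent blocks here
            return null;
        } catch (ExecutionException e) {
            return e.getCause().getMessage();   // original exception, escalated
        } catch (InterruptedException e) {
            return null;
        } finally {
            pool.shutdown();
        }
    }
}
```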

java interfaces have only abstract method@@ outdated]java8

Compared to c#, java language design was cleaner and simpler, at the cost of lower power and flexibility. C++ is the most flexible, powerful and complex among the trio.

There was never strong reason to disallow static methods in an interface, but presumably disallowed (till java 7) for the sake of simplicity — “Methods in interface is always abstract.” No ifs and buts about it.

With “default methods”, java 8 finally broke from tradition. Java 8 has to deal with Multiple Inheritance issue. See other blog posts.

In total, there are now 4 “deviations” from that simple rule, though some of them are widely considered irrelevant to the rule.

  1. a concrete nested class in an interface can have a concrete method, but it is not really a method _on_ that interface.
  2. Suppose an interface MyInterFace re-declares the toString() method of Object. That method isn’t really abstract.
    • There’s very few reasons to do this.
  3. static methods
  4. default methods — the only real significant deviation from the rule


python bisect #cod`IV

The bisect module is frequently needed in coding tests esp. codility. In this write-up, I will omit all other function parameters.

* bisect.bisect_right(x)  # less useful … returns an index i such that
all(val <= x for val in a[lo] to a[i-1]) for the left side and all(val > x for val in a[i] to a[hi-1]) for the right side.
* bisect.bisect_left(x) # returns an index i such that
all(val < x for val in a[lo] to a[i-1]) for the left side and all(val >= x for val in a[i] to a[hi-1]) for the right side.

In other words,

  • bisect_left(needle) returns the first index above or matching needle.
  • bisect_right(needle) returns the first index above needle.

A few scenarios:

  1. If No perfect hit, then same value returned by both functions.
    • Common scenario: if the needle is higher than all elements, then “i” from both functions would be the last index + 1, i.e. len(a).
    • Common scenario: if the needle is lower than all, then “i” would both be 0
    • in all cases, You can always insert Before this position
  2. If you get a perfect hit on a list values, bisect_left would return that “perfect” index, so bisect_left() is more useful than bisect_right(). I feel this is similar to std::lower_bound
    • This is confusing, but bisect_right() would return a value such that a[i-1] == x, so the returned “i” value is higher. Therefore, bisect_right() would never return the “perfect” index.
  3. If you have a lower-bound input value (like minimum sqf) that might hit, then use bisect_left(). If it returns i, then all list elements qualify from i to end of list
  4. If you have an upper-bound input value that might hit, then use bisect_left(). If it returns i, then all list values qualify from 0 to i. I never use bisect_right.
  5. Note the slicing syntax in python a[lo] to a[i-1] == a[lo:i] where the lower bound “lo” is inclusive but upper bound “i” is exclusive.
import bisect
needle = 2
float_list = [0, 1, 2, 3, 4]
left = bisect.bisect_left(float_list, needle)
print('left (should be lower) =', left)  # 2

right = bisect.bisect_right(float_list, needle)
print('right (should be higher) =', right)  # 3

goto: justifications

Best-known use case: break out of deeply nested for/while blocks.

  • Alternative — extract into a function and use early return instead of goto. If you need “goto: CleanUp”, then you can extract the cleanup block into a function returning the same data type and replace the goto with “return cleanup()”
  • Alternative — extract the cleanup block into a void function and replace the goto with a call to cleanup()
  • Alternative (easiest) — exit or throw exception

Sometimes none of the alternatives are easy. To refactor the code to avoid goto requires too much testing, approval and release. The code is in a critical module in production. Laser surgery is preferred — introduce goto.


my_unsigned_int = -1

static const size_t npos = -1;

npos is a static std::string member constant with the greatest possible value for an element of type size_t. This value, when used as the value for a len (or sublen) parameter in string’s member functions, means “until the end of the string”. As a return value, it is usually used to indicate no matches. This constant is defined with a value of -1, which, because size_t is an unsigned integral type, is the largest possible representable value for this type.
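The wrap-around can be sketched in two lines:

```cpp
#include <string>
#include <cstddef>

// -1 converted to an unsigned type wraps around to its maximum value,
// which is exactly how npos is defined.
bool demo_npos() {
    std::size_t u = -1;                                   // max size_t
    bool wrap = (u == std::string::npos);
    bool miss = (std::string("hello").find('z') == std::string::npos);
    return wrap && miss;
}
```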

##boost modules USED ]finance n asked]IV

#1) shared_ptr (+ intrusive_ptr) — I feel more than half  (70%?) of the “finance” boost usage is here. I feel every architect who chooses boost will use shared_ptr
#2) boost thread
#3) hash tables

Morgan uses Function, Tuple, Any, TypeTraits, mpl

Gregory (RTS) said bbg used shared_ptr, hash tables and function binders from boost

Overall, I feel these topics are mostly QQ types, frequently asked in IV (esp. those in bold) but not central to the language. I would put only smart pointers in my Tier 1 or Tier 2.

—— other modules used in my systems
* tuple
* regex
* noncopyable — for singletons
** private derivation
** prevents synthesized {op= and copier}, since base class is noncopyable.

* polymorphic_cast
* numeric_cast — mostly int types, also float types
* bind?
* scoped_ptr — as non-copyable [1] stackVar,
[1] different from auto_ptr

tiny team of elite developers

Upshot — value-creation per head and salary level would rival the high-flying manager roles.

Imagine a highly successful trading shop. Even though the trading profit (tens of millions) is comparable to a bank trading desk with hundreds of IT head count, this trading shop’s core dev team *probably* has a few (up to a few dozen) core developers + some support teams [1] such as a QA team, data team, operations team. In contrast, the big bank probably has hundreds of “core” developers.

[1] Sometimes the core teams decide to take on such “peripheral” tasks as market data processing/storage, back testing, network/server set-up if these are deemed central to their value-add.

In the extreme case, I guess a trading shop with tens of millions of profit can make do with a handful of developers. They want just a few top geeks. The resultant efficiency is staggering. I can only imagine what personal qualities they want:

* code reading — my weakness
* tools and manuals — reading tons of tech info (official or community) very quickly, when a "new" system invariably behaves strangely
* local system knowledge
* trouble-shooting — and systematic problem-solving. I feel this largely depends on system knowledge.
* design — getting it right, and adjusting it as requirements change
* architecture?
* tuning?
* algorithms?

(soft skills:)
* clearly communicate design trade-offs in a difficult discussion
* drive to get things done under pressure without cutting corners
* teamwork — backing down when needed to implement a team decision

pyramid of wage levels, CN^sg^US

See also the similar post on NBA salary

Look at my brother-in-law. (I'm not too sure about his situation so I will use tentative statements.) He's smart and dedicated, with rather long experience as a team lead, and he has a masters from a top uni in Shanghai.

However, there are many people with similarly strong track record in China, so he can’t move up the pyramid to, say CNY 1000k. I guess 500k is also tough.

In Singapore I'm facing a similar challenge. A S$150k (after tax) tech job is rare and considered elite, so you need to be rather strong to get it. In other words, the Singapore pyramid has a sharper tip than the US pyramid, based on a sample of 5,000 IT jobs.

(I think the Shanghai salary distro is better than most China cities…)

The NBA post brings together other important factors — lifelong income; managerial skill; …

linux command elapse time

–q(time) command

output can be a bit confusing.

Q: does it work with redirection?

–$SECONDS variable
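A minimal sketch of both approaches (the `sleep 1` is just a stand-in workload):

```shell
# 1) the `time` keyword prints real/user/sys to stderr, so it still
#    works when the command's stdout is redirected:
time sleep 1 > /dev/null

# 2) $SECONDS counts seconds since the shell started (or since the
#    last assignment), handy for timing a block of commands:
SECONDS=0
sleep 1
echo "elapsed: ${SECONDS}s"
```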

architect’s helicopter view of a system

This (pearl) post doesn't just describe some role models. It is a career tip, possibly a survival tip.

Bottom line — my capabilities (code reading…) are not very strong, so as architect/owner of any large codebase I would tend to struggle. There are some ways to survive, but none easy. I would think a high-volume, high-latency batch+web/quant system in java/SQL/python/javascript would be easier for me and harder for my peers.

Prime eg: Yang. Nikhil calls it "his system knowledge" – without knowing the source code, Yang would ask sharp questions that baffle the actual coders. If an answer smells bad, then either the answer is not exactly truthful, or the existing system design is imperfect. Yang is quick to notice "inconsistency with reality", a very sharp observation. This pattern is critical in troubleshooting and production support as well. I told XR that production support can be challenging and requires logical, critical thinking about the system as a whole.

An __effective__ geek needs more than high-level system knowledge. Somehow, every one of them soon gets down to the low level, unless her role is a purely high-level role … like presales?

Not everyone has the same breadth and depth of helicopter view. Often it's not needed in the first few years. You would think that after many years each person would acquire enough low-level and high-level knowledge, but I imagine some individuals refuse to put in the extra effort, and keep their jobs by taking care of their own "territory", never stepping out.

— my own experience —

At Citi muni, I largely avoided the complexity. I did try to trace through the autoreo codebase but stopped when I was given other projects. In hindsight, I feel that approach was too slow and probably not going to work. Mark deMunk pointed out the same.

At 95 Green and Barcap, I was a contractor and didn't need to step out of my territory. This might be one reason I had an easier job and exceeded expectations. See the post on "slow decline" and another, both in the pripri blog.

At OC, the Guardian codebase I did take up fully. Quest was given to me after I /fell out/ with boss, so I did enough research to implement any required feature. No extra effort to get a helicopter view.

Stirt was tough. I spent a few months learning some, but not all, of the nuts and bolts of Quartz. That knowledge was fundamental and crucial to my job, which left little time for the Stirt system itself; I learned it but didn't master it. In fact, I asked many good system-level questions about Sprite and Stirt-Risk, but failed to gain Yang's insight.


python RW global var hosted in a module

Context: a module defines a top-level global var VAR1, to be modified by my script. Reading it is relatively easy:

from mod3 import *
print VAR1

Writing is a bit tricky. I’m still looking for best practices.

Solution 1: mod3 to expose a setter setVAR1(value)

Solution 2:
import mod3
mod3.VAR1 = ‘new_value’

Note “from mod3 import * ” doesn’t propagate the new value back to the module. See example below.

#!/usr/bin/python -u
# ---- main script ----
from mod3 import *

def main():
  ''' The import below is required to propagate the new value back to mod3.
      Also note the semicolon -- two statements on one line '''
  import mod3; mod3.VAR1 = 'new value'

# ---- mod3.py ----
VAR1 = 'initial value'
def mod3func():
  print 'VAR1 =', VAR1
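The same propagation behaviour in one self-contained Python 3 script, using a throwaway module object in place of mod3 (hypothetical module name):

```python
import sys
import types

# Stand-in for mod3: a module created on the fly.
mod3 = types.ModuleType('mod3')
mod3.VAR1 = 'initial value'
sys.modules['mod3'] = mod3

from mod3 import VAR1   # copies the CURRENT value into this namespace
import mod3 as m

m.VAR1 = 'new value'    # Solution 2: rebind the attribute on the module

print(VAR1)    # initial value -- the from-import copy never sees the update
print(m.VAR1)  # new value     -- qualified access does
```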

## innovative features of python

Here’s my answer to a friend’s question “what innovative features do you see in python”

  • decorators. Very powerful. Perhaps somewhat similar to AOP. Python probably borrowed it from Haskell?
  • dynamic method/attribute lookup. Somewhat similar to the C# "dynamic" keyword. A dangerous technique, similar to java reflection.
  • richer introspection than c# (which is richer than java)
  • richer metaprogramming support (including decorators and introspection) … vague answer!
  • enhanced for-loop over a file, a string, …
  • listcomp and genexpr
  • Mixin?
  • I wrote a code gen to enrich existing modules before importing them. I relied on hooks in the importation machinery.
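Decorators are the feature most worth a concrete glance; a minimal sketch (shout/greet are made-up names):

```python
import functools

def shout(func):
    """Decorator: wrap func and upper-case whatever it returns."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout                      # equivalent to: greet = shout(greet)
def greet(name):
    return 'hello, %s' % name

print(greet('world'))       # HELLO, WORLD
```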

RESTful – phrase book

agile – …

coupling – less coupling between client and server, so server side changes are easier. I think SOAP requires client rebuild.

b2b – still dominated by SOAP

resource-oriented services – …

object – each URL logically represents an object, which you can GET (query), POST (create), PUT (update its content) or DELETE

Q: is REST simpler than traditional SOA or SOAP web services?

Q: the concept of “resource” — does it sometimes mean a database record?

reach next level ] tech IV performance

See also the similar pripri post about “beat us”.

I really don’t care that much about zbs needed in a project:
* the so-called architecture expertise is overrated and overhyped. See my post on
** GTD
** software architect vs enterprise architect
* the optimization expertise is … similar

Next level requires .. more knowledge. See post on “Yaakov”
Next level requires .. more practice with Facebook algo questions.
Next level requires .. IDE coding against simple problems like codility.
Next level requires .. more mileage with hands-on dev.
Next level requires .. more “best practices” like google style guide

fwd contract often has negative value, briefly

An option "paper" is a right but not an obligation; the holder owes nothing, so the paper is always worth a non-negative value.

if the option holder forgets it, she could get automatically exercised or receive the cash-settlement income. No one would go after her.

In contrast, an obligation requires you to fulfill your duty.

A fwd contract to buy some asset (say oil) is an obligation, so the pre-maturity value can be negative or positive. Example – a contract to “buy oil at $3333” but now the price is below $50. Who wants this obligation? This paper is a liability not an asset, so its value is negative.
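The standard carrying-cost formula f = S − K·e^(−rT) makes the sign obvious; a sketch using the numbers above (the 2% rate and 1Y tenor are assumptions):

```python
import math

def forward_value(spot, delivery_price, r, T):
    """PV of a LONG forward position: S - K * exp(-r*T),
    ignoring storage costs and convenience yield."""
    return spot - delivery_price * math.exp(-r * T)

# Obligated to buy oil at $3333 when spot is $50: deeply negative value.
print(forward_value(50.0, 3333.0, 0.02, 1.0))  # about -3217
```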

openssh^putty key files

openssh key format is the one used in the id_rsa / id_rsa.pub pair.

  • Proven tip: I actually took the pair of id_rsa files and used them in any windows or linux account. It’s like a single fingerprint used everywhere. The key is not tied to any machine.
    • The authorized_keys file in ALL destination machines have the SAME line containing that public key.
  • Proven tip: To support ssh support@rtppeslo2 with ssh key, I copy a standard authorized_keys to rtppeslo2:~support/.ssh. See

transformation — Puttygen can read the same “fingerprint” to generate ppk files, in putty’s format. Then a putty session can use the ppk file to auto-login, in place of a password

Note: when my remote_host home directory permissions were too open, sshd ignored the ssh key and I had to enter a password.
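The usual permission fix, assuming the default ~/.ssh layout (a sketch; adjust paths for your account):

```shell
# sshd silently ignores authorized_keys when these are too open:
chmod go-w ~                     # home dir: no group/world write
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```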

Hull: estimate default probability from bond prices

label: credit

The arithmetic on P524-525 could be expanded into a 5-pager if we were to explain to people with high-school math background…


There are 2 parts to the math. Part A computes the "expected" (probabilistic) loss from default to be $8.75 for a notional/face value of $100. Part B computes the same loss (via another route) to be $288.48Q. Equating the 2 parts gives Q = 3.03%.


Q3: How is the 7% yield used? Where in which part?


Q4: why assume defaults happen right before coupon date?

%%A: borrower would not declare “in 2 days I will fail to pay the coupon” because it may receive help in the 11th hour.


–The continuous discounting in Table 23.3 is confusing

Q: Hull explained how the 3.5Y row in Table 23.3 is computed. Why discount to T=3.5Y and not to T=0Y?


The “risk-free value” (Column 4) has a confusing meaning. Hull mentioned earlier a “similar risk-free bond” (a TBond). At 3.5Y mark, we know this risk-free bond is scheduled to pay all cash flows at future times T=3.5Y, 4Y, 4.5Y, 5Y. We use risk-free rate 5% to discount all cash flows to T=3.5Y. We get $104.34 as the “value of the TBond cash flows discounted to T=3.5Y”


Column 5 builds on it giving the “loss due to a 3.5Y default, but discounted to T=3.5Y”. This value is further discounted from 3.5Y to T=0Y – Column 6.

Part B computes a PV relative to the TBond’s value. Actually Part A is also relative to the TBond’s value.


In the model of Part B, there are 5 coin flips occurring at T = 0.5Y, 1.5Y, 2.5Y, 3.5Y, 4.5Y with Pr(default_0.5) = Pr(default_1.5) = … = Pr(default_4.5) = Q. Concretely, imagine that Pr(flip = Tail) is 25%. Now the Law of Total Probability states


100% = Pr(0.5) + Pr(1.5) + Pr(2.5) + Pr(3.5) + Pr(4.5) + Pr(no default). If we factor in the amount of loss at each flip, we get


Pr(0.5) × $65.08 + Pr(1.5) × $61.20 + Pr(2.5) × $57.52 + Pr(3.5) × $54.01 + Pr(4.5) × $50.67 + Pr(no default) × $0, and since each default probability equals Q, this sums to ($65.08 + $61.20 + $57.52 + $54.01 + $50.67) × Q = $288.48Q
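A quick arithmetic check of Part B (the five loss figures are those quoted above from Table 23.3):

```python
# Each default date contributes (discounted loss at that date) * Q;
# summing the five discounted losses gives the coefficient of Q.
losses = [65.08, 61.20, 57.52, 54.01, 50.67]
coeff = round(sum(losses), 2)
print(coeff)              # 288.48

# Part A's expected loss is $8.75, so 288.48 * Q = 8.75:
Q = 8.75 / coeff
print(round(Q * 100, 2))  # 3.03 (percent)
```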

XiaoAn@age discrimination:$5k offer !! bad4an aging programmer

I could keep this current job in Singapore for a few years. At age 44, or 45… I might be lucky again to get another finance IT job but what about 50?

The odds will grow against me. I’m on an increasingly tilted playing field. At 60 I’ll have very very low chance.

XiaoAn points out that at such an age, even finding a $5k job is going to be tough. I believe indeed some percentage of the hiring managers don’t like hiring someone older. XiaoAn admitted he’s one of them.

You could speculate about age discrimination, but the reality is … in Singapore there are very few such job candidates, so we just don't know how the employers would react.

Also bear in mind at age 55 I am unlikely to perform as before on interviews.

##c++(GTD+)learning-aids to upgrade 6→7

Which yardstick? First must be IV. 2nd would be the common (not the obscure) GTD skills.
For IV, I need to analyze how I was beaten in past interviews. For GTD zbs, a few major home projects have significantly increased my proficiency.
  • see recoll -> c++Q&A.txt
  • [ZZ] try out debug tools in the gdb book? LG
  • [ZZ] experiment with sample code in [[fin instrument pricing using c++]]
  • [ZZ] proj: valgrind (linux) – get hands-on experience. Might take too much time to get it working
  • problem – test if a linked list has cycles
  • problem: all permutations of 0 or more of abcde
  • problem: write skeleton c++ code for ref counted string or auto_ptr
  • problem: test if a given container is in ascending order
  • [ZZ means … see]

c++ parametrized functor – more learning notes

A parametrized functor (class template) is a standard, recommended construct in c++, with no counterpart in java. The C# delegate is conceptually simpler but internally more complex IMO, and represents a big upgrade over the c++ functor. Better to get some clarity on functors before comparing them with delegates.

The most common functor is the simple stateless functor (like a simple lambda). The 2nd common category is the (stateful but) immutable functor. In all cases, the functor is designed for pass-by-value (not by ref or by pointer), cheap to copy, cheap to construct. I see many textbook code samples creating throwaway functor instances.

Example of the 2nd category – P127[[essentialC++]].

A) One mental block is using a simple functor Class as a template type param. This is different from java/c#

B) One mental block is a simple parametrized functor i.e. class template. C++ parametrized functor can take various type params, more “promiscuous” than c#/java.

C) A bigger mental block, combining A+B, is a functor parametrized with another functor Class as a template param. See P186[[EssentialC++]].

This is presented as a simple construct, with about 20 lines of code, but the more I look at it, the more strange it feels. I think it is somewhat powerful but unpopular and unfamiliar to mainstream developers.

Functional programming in c++?

In java, we write a custom comparator class rather than a comparator class Template. We also have the simpler alternative of a Comparable interface, but that's not very relevant in this discussion. Java 8 lambda —

std::vector memory allocation/free: always heap@@

The concise answer:

  • The vector "shell" can live on the stack, the heap or in static memory, as you wish.
  • The contiguous array (the payload objects) is always allocated on the heap, so that the vector can grow it or deallocate it.

Deallocation is explained in

coblood – each year you worked

master -> jobjob


Each year that you spent on some job:

* you earn, most important of all, some cash to keep the family financially safe.
** you also earn something to plow back. Look at my UChicago experience…
* you earn some experience, insight, and hopefully some zbs, but most of it will not be relevant in future jobs.
* you earn/build some track record that helps maintain marketability.
* you either speed up or slow down brain aging.
* you incur stress.
* you incur the sacrifice of family time and exercise time.


c++IV: importance: knowledge imt dev xp

1) Many hard-core tech interviewers (Yaakov, Jump, 3Arrows, Bbg, nQuants …) often asked me to explain a language feature, then drilled in to see if I really do understand the key ideas, including the rationale, motivation and history. This knowledge is believed to … /separate the wheat from the chaff/

This knowledge can’t be acquired simply by coding. In fact, a productive coder often lacks such knowledge since it’s usually unnecessary theoretical knowledge.

2) West Coast always drills in on algo (+ data structure). No way to pick up this skill in projects…

1+2 —> many interviewers truly believe a deep thinker will always learn faster and design better.

typedef as prefix: syntax demystified

I find the typedef syntax not as simple as it appears. The best explanation I have seen of this very confusing syntax:

        int x; // declares a variable named ‘x’ of type ‘int’
typedef int x; // declares a type     named ‘x’ that is ‘int’
typedef char Decimal[20];
        int(*F)(size_t); // declares a variable named F of type ‘int(*)(size_t)’
typedef int(*F)(size_t); // declares a type     named F that is ‘int(*)(size_t)’

If you add the “typedef” prefix, instead of declaring a VARIABLE, you’ve declared a new TYPE-ALIAS instead.

dtor is simple thing.. really@@

update: [[safe c++]] has a concise argument that if ctor is allowed to throw exception, then it’s possible to have dtor skipped accidentally. Only an empty dtor can be safely skipped.

This is typical of c++ — destructor (dtor) is one of the most important features but quite tricky beneath the surface

* exception
* dtor sequence – DCBC
* virtual dtor
* synthesized dtor is usually no good if there’s any pointer field
* lots of undefined behaviors
* there are guidelines for dtor in a base class vs leaf class — never mindless
* objects put into containers need a reasonable dtor
* when the best practice is to leave the dtor to the compiler, you could look stupid by writing one, esp. in an interview.

* smart pointer classes are all about dtor

* RAII is all about dtor

* interplay with delete

*** placement new, array-delete vs delete

*** override operator delete

*** double-delete

*** ownership !

rope cliff – puzzle #Roger Lee

You are on the top of a 200m cliff. You have a 150m long rope and a knife. You can only tie your rope where you stand, or 100m below on a tree. Can you reach the bottom of the cliff alive, and how?

Any free fall is considered deadly.


The trick is a loop.

Suppose the cliff is point C and the tree is T and the midpoint is point M.

Cut off 100m of rope (the first and last cut) and put it in your pocket. Tie the remaining 50m at C, then descend to M. Form the end of this rope into a loop at point M. Lubricate the loop, then pass the 100m rope through it to make a 50+50 doubled rope reaching T. Descend to T, retrieve the entire 100m, tie it at T, and descend the final 100m.

socket stats monitoring tools – on-line resources

This is a rare interview question, perhaps asked 1 or 2 times. I don’t want to overspend.

In ICE RTS, we use built-in statistics modules written in C++ to collect the throughput statistics.

If you don’t have source code to modify, I guess you need to rely on standard tools.
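Some standard Linux tools I believe are relevant here (a sketch; flags and availability vary by distro):

```shell
ss -s              # summary counts of sockets by type/state
ss -t -i           # per-connection TCP internals (rtt, cwnd, retrans)
netstat -s         # protocol-level counters, including UDP errors/drops
cat /proc/net/udp  # kernel's per-socket UDP table (rx_queue, drops)
```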

RESTful^SOAP web service, briefly

REST stands for Representational State Transfer. This basically means that each unique URL is a representation of some object. You can get the contents of that object using an HTTP GET, and use POST, PUT, or DELETE to modify the object (in practice most services use POST for this).

— soap vs REST (most interviewers probably focus here) —
* REST has only GET POST PUT DELETE; soap uses custom methods “getAge()” etc
* SOAP takes more dev effort, despite its name
* SOAP dominates enterprise apps

python – some relatively innovative features

I'm relatively familiar with perl, java, c++, c# and php, though some of them I haven't used for a long time.

IMO, these python features are kind of unique, though other unknown languages may offer them.

* decorators
* list comp
* more hooks to hook into object creation. More flexible and richer controls
* methods and fields are both class attributes. I think fundamentally they are treated similarly.

ccy swap^ IRS^ outright FX fwd

currency swap vs IRS
– Similar – exchange interest payments
– diff — Ccy swap requires a final exchange of principal. FX rate is set up front on deal date

currency swap vs outright fx fwd?
– diff — outright involves no interest payments
– similar — the far-date principal exchange has an FX rate. Rate is set on deal date
– diff — the negotiated rate is the spot rate on deal date for ccy swap, but in the outright deal, it is the fwd rate.

currency swap vs FX swap? Less comparable. Quite a confusing comparison.
– FX swap is more common and more popular

## top 5 expertise I could grow #teach?

The most sought-after Expertise I could develop.

#1 personal investment – FX/option, HY and unit trust investment

# tech IV by top employers, including brain teasers
# Wall St techie work culture
# financial dnlg, appealing to pure techies
However, some of these are hard to turn into a teaching career. So which domain could I teach for a living, perhaps with a PhD?
  1. programming
  2. data science, combining finance with…
  3. comp science
  4. fin math

physical, intuitive feel for matrix shape – fingers on keyboards

Most matrices I have seen so far in real world (not that many actually) are either
– square matrices or
– column/row vectors

However, it is good to develop a really quick and intuitive feel for matrix shape. Suppose you are told there's some mystical 3×2 matrix, i.e. 3-row, 2-column:
– imagine a rectangle box
– on its left imagine a vertical keyboard
– on it put Left fingers, curling. 3 fingers only.
– Next, imagine a horizontal keyboard below (or above, if comfortable) the rectangle.
– put Right fingers there. 2 fingers only

For me, this gives a physical feel for the matrix size. Now let's try it on a column matrix of 11 elements. The LHS vertical keyboard is long – 11 fingers. The bottom keyboard is very short — 1 finger only. So it's 11×1.

The goal is to connect the 3×2 (abstract) notation to the visual layout. To achieve that,
– I connect the notation to — the hand gesture, then to — the visual. Conversely,
– I connect the visual to — the hand gesture, then to — the notation
Now consider matrix multiplication. Consider a 11×2. Note a 11×1 columnar matrix is more common, but it’s harmless to get a more general feel.

An 11x2 * 2x9 gives a 11x9.

Finger-wise, the left 11 fingers on the LHS matrix stay glued, and the right 9 fingers on the RHS matrix stay glued. In other words, the left-hand fingers on the LHS matrix remain, the right-hand fingers on the RHS matrix remain, and the matching inner dimension (2) disappears.

Consider a 11×1 columnar matrix X, which is more common.

X * X' is like what we just showed — an 11×11 matrix.
X' * X is 1×11 multiplying 11×1 — a tiny 1×1 matrix.
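The finger rule is exactly the shape rule of matrix multiplication; a pure-python sketch (matmul_shape is a made-up helper):

```python
def matmul_shape(a_shape, b_shape):
    """Shape rule: (m x n) @ (n x p) -> (m x p).
    The 'left fingers' (LHS rows) and 'right fingers' (RHS columns)
    stay glued; the matching inner dimension disappears."""
    m, n = a_shape
    n2, p = b_shape
    if n != n2:
        raise ValueError('inner dimensions differ: %d vs %d' % (n, n2))
    return (m, p)

print(matmul_shape((11, 2), (2, 9)))   # (11, 9)
print(matmul_shape((11, 1), (1, 11)))  # (11, 11)  X * X'
print(matmul_shape((1, 11), (11, 1)))  # (1, 1)    X' * X
```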

IRS motivations – a few tips

See also – Trac Consultancy course handout includes many practical applications of IRS.
see also — There’s a better summary and scenarios in the blog on IRS dealers

I feel IR swap is flexible and “joker card” in a suite — with transformation power.

Company B (Borrower aka Issuer) wants to borrow. Traditional solution is a bond issue or unfortunately …. a bank loan (most expensive of all), either fixed or floating rate. A relatively new Alternative is an IRS.

Note bank loan is the most expensive alternative (in terms of capital charge, balance sheet impact …), so if possible you avoid it. Mostly small companies with no choice take bank loans.

Motivation 1  relative funding advantage
Motivation 2 for company B – reduce cost of borrowing fixed
Motivation 3 for Company B – betting on Libor.
* If B bets on Libor to _rise, B would “buy” the Libor income stream of {12 semi-annual payments}, at a fixed (par) swap rate (like 3.5%) agreed now, which is seen as a dirt cheap price. Next month, the par swap rate may rise (to 3.52%) for the same income stream, so B is lucky to have bought it at 3.5%.
* If B bets on Libor to _drop, B would “sell” (paying) the Libor income stream

Motivation 4 to cater to different borrowing preferences. Say Company C is paying a fixed 5% interest, but believes Libor will fall. C wants to pay floating. C can swap with company A so as to pay libor. C will end up paying floating interest to A and receive 5.2% from A to offset the original 5% cost.

Why would A want to do this? I guess A could be a bank.

##basic steps in vanilla IRS valuation, again

* First build a yield curve using all available live rates. This “family photo” alone should be enough to evaluate any IRS
* Then write down all (eg 20) reset dates, aka fixing dates.
* Take first reset date and use the yield curve to infer the forward 3M Libor rate for that date.
* Find the difference between that fwd Libor rate and the contractual fixed rate (negotiated on this IRS contract). Could be +/-
* Compute the net cashflow to occur on that fixing/reset date.
* Discount that cashflow to PV. The discounting curve could be OIS or Libor based.
* Write down that amount.

Repeat for the next reset date, until we have an amount for each reset date. Notice all 20 numbers are inferred from the same “family photo”. Tomorrow under a new family photo, we will recalc/reval all 20 numbers.

Add up the 20 amounts to get the net PV in that position. Since the initial value of the position is $0, this net value is also the PnL.
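The steps above can be sketched in a few lines (toy inputs, continuous discounting; all rates are hypothetical, not a real bootstrapped curve):

```python
import math

def vanilla_irs_pv(fixed_rate, fwd_libors, zero_rates, times,
                   notional=1000000.0, accrual=0.5):
    """Receive-floating IRS: for each reset date, net cashflow =
    (fwd Libor - contractual fixed) * accrual * notional, then
    discount each cashflow to today and sum."""
    pv = 0.0
    for fwd, r, t in zip(fwd_libors, zero_rates, times):
        cashflow = (fwd - fixed_rate) * accrual * notional
        pv += cashflow * math.exp(-r * t)
    return pv

# If every inferred fwd Libor equals the fixed rate, the position is
# worth exactly zero -- consistent with a par swap at inception:
print(vanilla_irs_pv(0.03, [0.03] * 4, [0.02] * 4, [0.5, 1.0, 1.5, 2.0]))
```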

c# DYNAMIC – TryCall_method1 on an Unidentified object#src

With the dynamic type, we can try-call an arbitrary method m1(), which need not exist in any public interface. Behaviour — if the underlying type exposes m1, it is invoked. If unavailable you get a run-time error, not a compile-time error.

One usage idea — instead of calling the standard ToString() on the unknown object, we want to try-call ToStringXml(), and pray the underlying type supports it. Traditionally, if we had just a few types needing ToStringXml(), we would prefer a compile-time type check by introducing an interface IXmlDumper. However, if 55 types could pass into TryCall() and 33 of them expose ToStringXml(), you would need to cast each to IXmlDumper and then call ToStringXml(). Worse, you would need to edit the source code of all 33 types!

With the dynamic type, the compile-time type check is basically postponed to run time. A compile-time check would need m1() declared in the type C, like —

C c = new C(); c.m1();

But if we use dynamic, and type-check at run time, then we don’t need m1() exposed on any public interface —

invoking a delegate instance – explict Invoke()

Most c# folks take the shortcut of writing "this.myDelegateInstance();". But this sugar-coated syntax masks the distinction between a method and a delegate. C# syntax sugar trivializes the distinction because c# "wants" you to treat a delegate instance as if it were a method defined nearby. However, sometimes I want the distinction highlighted.

When the morphing hurts (in high-complexity codebase), I prefer the long-winded syntax myDelegateInstance.Invoke()

This kind of small tweak can add up to a more readable codebase.

c# DYNAMIC expando object (ExOb) – phrasebook

dynamic – better to assign the exob instance to a dynamic variable. If you use an ExpandoObject variable, you lose a lot of features, but I won't elaborate …

javascript/python/perl – in these languages, a class instance is conceptually(?) and physically(?) a dictionary of name-value pairs. Exob is somewhat similar.

name-value pairs — In an exob instance, a field is (obviously) name-value pair. Ditto for a runtime-created method. The “value” is basically a (possibly void) lambda expression.

Emit — is the traditional technique to create a brand new class at run time and “emit” IL code into it. Exob is newer and simpler. I guess it’s more practical.

one event delegating to another event – WPF shows an interesting mini “pattern” of one event (“event-pseudo-field”) delegating all its add/remove logic to another event (pseudo-field).

public event EventHandler CanExecuteChanged {
  add { CommandManager.RequerySuggested += value; }
  remove { CommandManager.RequerySuggested -= value; }
}
This reveals the true meaning of an event member. If an object studentA (of class Student) has an event pseudo field evt2, this field is basically a chain of callbacks (or notification targets, or event handlers). The evt2 field is nothing but a handle to add/remove on this chain.

In many cases, the callbacks run sequentially on the same thread as the event-firing code.

See also

managed ^ unmanaged (dotnet)- what to Manage?

A popular IV question — exactly what runtime Services are provided by the dotnet Hosting runtime/VM to the Hosted/managed code? In other words, how is the “managed code” managed?

Re that question, I believe an executing thread often has its lowest-level stack frames in kernel mode, middle frames in the VM, and top frames running end-user application code. The managed code is like a lawyer working out of a hotel room: the hotel provides her many business services.

Host environments have always provided essential runtime services to hosted applications, since the very early days of computing. The ubiquitous runtime host environment is the OS; in fact, the standard name of an OS instance is a "hostname". If you have 3 operating systems running on the same box sharing the CPU/RAM/disk/network, then there are 3 hosts — i.e. 3 distinct hosting environments. In the same tradition, the dotnet/java VM also provides runtime services to hosted applications. The "hotel" needs metadata about the data types, so a dotnet assembly always includes type metadata in addition to IL code.

Below is a dotnet-centric answer to the IV question. (JVM? probably similar.) For each, we could drill down if needed (usually unneeded in enterprise apps).

– (uncaught) exception handling. See [[Illustrated c#]] (but how different from unmanaged c++?)
– class loading. See [[Illustrated c#]]
– security
– thread management? But I believe unmanaged code can also get userland threads manufactured by the  (unmanaged) thread library.
– reflection.See [[Illustrated c#]]
– instrumentation? Remember jconsole
– easier debugging – no PhD required. Unmanaged code offers “limited” debugging??
– cross-language integration?
– memory management?
** garbage collection. This service is so prominent/important it’s often listed on its own, but I consider this part of mem  mgmt.
** memory request/allocation? A ansi-C app uses memory management library to grab memory wholesale from kernel (see other posts) and a VM probably takes over that task.
** translate a C# heapy object reference to a "real virtual" heap address. Both the JVM and .Net collectors have to move (non-pinned[1]) objects from one real-virtual address to another real-virtual address. Note the OS (the so-called paging supervisor) translates this real-virtual address into a physical RAM address.
** appDomain. The VM isolates each appdomain from other appdomains within a single process and prevents memory cross-access. See

[1] pinned objects are not relocatable.

multicast – highly efficient? my take

(Note virtually all MC apps use UDP.)
To understand MC efficency, we must compare with UC (unicast) and BC (broadcast). First we need some “codified” metrics —
  • TT = imposing extra Traffic on network, which happens when the same packet is sent multiple times through the same network.
  • RR = imposing extra processing workload on the Receiver host, because the packet is addressed TO “me” (pretending to be a receiver). If “my” address were not mentioned in the packet, then I would have ignored it without processing.
  • SS = imposing extra processing workload by the Sender — a relatively low priority.
Now we can contrast MC, UC and BC. Suppose there are 3 receiver hosts to be notified, and 97 other hosts to leave alone, and suppose you send the message via —
  1. UC – TT not RR — sender dispatches 3 copies each addressed to a single host.
  2. BC – RR not TT — every host on the network sees a packet addressed to it though most would process then ignore it, wasting receiver’s time. When CEO sends an announcement email, everyone is in the recipient list.
  3. MC – neither RR nor TT. However, MC can still flood the network.

premium adjusted delta – basic illustration. A quote: "When computing your delta it is important to know what currency was used to pay the premium. Returning to the stock analogy, suppose you paid for an IBM call option in IBM stock that you borrowed in the stock-lending market. Then I would inherit a long delta position from the option and a short delta position from the premium payment in stocks. My overall net delta position will still be long (why?), but less long than it would have been if I had paid for it in dollars."

Suppose we bought an ATM call, so the option position itself gives us +50 delta and let us “control” 100 shares. Suppose premium costs 8 IBM shares (leverage of 12.5). Net delta would be 50-8=42. Our effective exposure is 42%

The long call gives us positive delta (or “positive exposure”) of 50 shares as underlier moves. However, the short stock position reduces that positive delta by 8 shares, so our portfolio is now slightly “less exposed” to IBM fluctuations.

2nd scenario. Say VOD ATM call costs 44 VOD shares. Net delta = 50 – 44 = 6. As underlier moves, we are pretty much insulated — only 6% exposure. Premium-adjusted delta is significantly reduced after the adjustment.

You may wonder why 2nd scenario’s ATM premium is so high. I guess
* either TTL (i.e. expiration) is too far away,
* or implied vol is too high,
* or bid ask spread is too big, perhaps due to market domination/manipulation
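The arithmetic in the two scenarios is simple but worth pinning down. A minimal sketch, with the numbers taken from the scenarios above:

```python
# Minimal sketch: net delta is the option delta (in shares) minus the
# underlier shares borrowed to pay the premium. Numbers from the post.

def premium_adjusted_delta(delta_shares, premium_shares):
    """Net delta when the premium is paid in the underlier itself."""
    return delta_shares - premium_shares

print(premium_adjusted_delta(50, 8))   # IBM scenario: 42
print(premium_adjusted_delta(50, 44))  # VOD scenario: 6
```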

quiet confidence on go-live day

I used to feel “Let’s pray no bug is found in my code on go-live day. I didn’t check all the null pointers…”

I feel it’s all about … blame, even if managers make it a point to avoid blame.

Case: I once had a timebomb bug in my code. All tests passed but the production system failed on the “scheduled” date. The UAT guys were not to blame.

Case: Suppose you used hardcoding to pass UAT. If things break on go-live, you bear the brunt of the blame.

Case: if a legitimate use case is mishandled on go-live day, then
* UAT guys are at fault, including the business users who signed off. Often the business comes up with the test cases. The blame question is “why wasn’t this use case specified?”
* Perhaps a more robust exception framework would have caught such a failure gracefully, but often the developer doesn’t bear the brunt of the blame.
**** I now feel business reality discounts code quality in terms of airtight error-proofing.
**** I now feel business reality discounts automated testing for Implementation Imperfections (II). See my other post.

Now I feel if you did a thorough and realistic UAT, then you should have quiet confidence on go-live day. Live data should be no “surprise” to you.

java visitor pattern – overloading vs overriding, again

In the standard visitor pattern, you have a hierarchy of visitable data classes (eg Product and 30 subclasses) + a bunch of visitor classes. In this case, I have just one visitor class, which defines overloaded methods Visitor.visit(Futures) etc.

Now, I am given a random Product object myProduct. It might be an Option, a Bond, a Futures… but given the multitude of Product subtypes, it’s too tedious to type-test.

I already have a myVisitor object, so I try calling myVisitor.visit(myProduct). I want it bound to the correct visit() among the overloads, but guess what — this won’t compile because … (hold your breath)…. there’s no Visitor.visit(Product) defined.

Reason — overloaded method call is resolved at compile time based on declared type of myProduct

Solution — define
abstract void accept(Visitor v); // in Product
void accept(Visitor v) { v.visit(this); } // in Option, Bond, Futures…

Now, myProduct.accept(myVisitor) would bind the method call to the correct visit(). Why? Well, accept() is resolved at runtime and binds to Bond.accept() assuming myProduct is a Bond. Inside Bond.accept(), visit(this) is always visit(Bond) — resolved at compile time like all overloaded methods.

In conclusion, Visitable.accept(Visitor) is resolved dynamically and Visitor.visit(Visitable) is resolved statically.
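A minimal, self-contained sketch of this double-dispatch mechanism. The class names follow the post; the String-returning visit() is my simplification for illustration:

```java
// Double dispatch: accept() is resolved dynamically (overriding),
// visit() is resolved statically (overloading).
interface Visitor {
    String visit(Bond b);     // overload, picked at compile time
    String visit(Futures f);  // overload, picked at compile time
}

abstract class Product {
    abstract String accept(Visitor v); // resolved at runtime
}

class Bond extends Product {
    String accept(Visitor v) { return v.visit(this); } // "this" is statically a Bond
}

class Futures extends Product {
    String accept(Visitor v) { return v.visit(this); } // statically a Futures
}

public class VisitorDemo {
    static final Visitor NAMER = new Visitor() {
        public String visit(Bond b)    { return "Bond"; }
        public String visit(Futures f) { return "Futures"; }
    };

    public static void main(String[] args) {
        Product myProduct = new Bond();  // declared type is Product
        // NAMER.visit(myProduct) would NOT compile: no visit(Product) overload.
        System.out.println(myProduct.accept(NAMER)); // double dispatch
    }
}
```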

What if I create a FriendlyVisitor type? Is accept() dynamic-and-static?

double meanings (c#): new, using, yield

In C#, a number of keywords are given 2 unrelated meanings.

using — typedef
using — name space import
——– unrelated to ———–
using — resource management (IDisposable)
Yield() — static method Thread.Yield()
——– unrelated to ———–
yield — iterator
new MyClass() — on class types
new MyStruct() — on struct types
** does NOT necessarily hit the heap; See P55 [[c# precisely]]
** doesn’t return pointer;
** entirely optional. Only needed if you want to call the ctor of the struct.
**** without new, a value type object is created uninitialized when you declare the variable.
——– unrelated to ———–
new — redefine/hide a field, method, property etc
Not a keyword, but a 2D array in c# can be either jagged or rectangular (a true matrix)

reliably convert Any c++ source to C : IV

More than one person asked me “Can you compile a c++ app if you only have a c compiler?”

Some of the foremost experts on C/C++ compilers said —

If you mean “can you convert C++ source to C source, and run the C through a C compiler to get object code”, as a way to run C++ code on a system that has only a C compiler, yes it is possible to implement all of the features of ISO standard C++ by translation to C source code, and except for exception handling this produces object code with efficiency comparable to that of the code generated by a conventional compiler.

For exception handling, it is possible to do an implementation using setjmp/longjmp that is completely conformant, but the code generated will be 5-20% slower than code generated by a true c++ compiler.

boost intrusive smart ptr phrase book #no IV

* MI — P35 [[beyond c++ standard lib]] shows your pointee CLASS can be written to multiple-inherit from a general-purpose ref_count holder class. This is a good use case of multiple inheritance, perhaps in Barclays QA codebase.

* real-estate — The 32-bit ref counter lives physically on the real estate of the pointee. Pointee type can’t be a builtin like “float”. In contrast, a club of shared_ptr instances share a single “club-count” that’s allocated outside any shared_ptr instance.

* legacy — many legacy smart pointer classes were written with a ref count in the pointee CLASS like QA YieldCurve. As a replacement for the legacy smart pointer, intrusive_ptr is more natural than shared_ptr.

* builtin — pointee class should not be a builtin type like float or int. They don’t embed ref count on their real estate; They can’t inherit; …

* TR1 — not in TR1, but popular

* ref-count — provided by the raw pointee Instance, not the smart pointer Instance. Note the Raw pointER Instance is always 32 bit (assuming 32-bit bus) and can never host the reference count.

* same-size — as a raw ptr

* expose — The pointee class must expose mutator methods on the ref count field

vega roll-up makes no sense #my take

We know dv01, duration, delta (and probably gamma) … can roll up across positions as weighted average. I think theta too, but how about vega?

Specifically, suppose you have option positions on SPX at different strikes and maturities. Can we compute weighted average of vega? If we simulate a 100bps change in sigma_i (implied vol), from 20% pa to 21% pa, can we estimate net change to portfolio MV?

I doubt it. I feel a 100 bps change in the ATM 1-month option will not happen in tandem with a 100 bps change across the vol surface.

– Along the time dimension, the long-tenor options will have much __lower__ vol changes.
– Along the strikes, the snapshot vol smile curve already exhibits a significant skew. It’s unrealistic to imagine a uniform 100 bps shift of the entire smile (though many computer systems still simulate such a parallel shift).

Therefore, we can’t simulate a 100 bps bump to sigma_i across a portfolio of options and compute a portfolio MV change; vega roll-up can’t be computed this way.

What CAN we do then? I guess we might bucket our positions by tenor and aggregate vega. Imperfect but slightly better.
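A toy sketch of the tenor-bucketing idea, with made-up positions and bucket boundaries:

```python
# Aggregate vega within maturity buckets instead of one portfolio-wide
# number. Positions and bucket boundaries are made up for illustration.
from collections import defaultdict

positions = [   # (tenor_in_months, vega_per_100bps)
    (1, 12000.0),
    (2, 8000.0),
    (6, -3000.0),
    (12, 5000.0),
]

def bucket(tenor_months):
    if tenor_months <= 3:
        return "0-3m"
    if tenor_months <= 12:
        return "3-12m"
    return "12m+"

vega_by_bucket = defaultdict(float)
for tenor, vega in positions:
    vega_by_bucket[bucket(tenor)] += vega

print(dict(vega_by_bucket))  # {'0-3m': 20000.0, '3-12m': 2000.0}
```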

stop-the-world inevitable: either minor or major GC

Across All of Sun’s GC engines so far (2012), young generation (eden + survivors) algorithm has _always_ been STW. The algo changed from single-threaded to parallel, but remains STW. Therefore, during a minor GC ALL application threads are suspended. Usually a short pause.

Across All of Sun’s GC engines so far (2012), oldgen is at best low-pause but _never_ no-pause. For the oldgen, there is _always_ some pause, due to an inevitable STW phase.

Note one of the oldgen algorithms (i.e. CMS) is mostly-concurrent, meaning the (non-concurrent) STW phase is a brief phase — low-pause. However, young gen algorithm has always been STW throughout, without any concurrent phase.

PnL roll-up inside Coherence?

Hi friends,

In my previous project, the cache had to support PnL aggregation by account or by product type (US aviation stock or high-yield bonds etc).  I didn’t find out how it was implemented. How is it done in a typical Coherence system?

In fact, the accounts exist in a hierarchy. For example, individual accounts belong to an office. Each office belongs to a region. Each region belongs to a legal entity…. Similarly, the products are organized in a hierarchy too. PnL is first computed at position level. How are these individual position PnL aggregated in Coherence?

Answer 1 — the solution recommended by some practitioners. Basically, api-users write 2 classes — a filter and a processor. The filter controls the subset of data entries, or you may say the “universe”. The processor does the aggregation. Internally, I was told, an aggregation loop is unavoidable.

I feel coherence data entries are organized into maps along with auxiliary lookup tables to support predicate-assisted select queries like “where = UK”.

Answer 2 — also mentions an EntryAggregator.
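For illustration only, here is the filter + processor split in plain Java over an ordinary collection. This is not the Coherence API (which has its own Filter and EntryAggregator interfaces); every name below is made up.

```java
import java.util.*;
import java.util.function.Predicate;

// Plain-Java sketch of the filter + aggregator split: the filter picks
// the "universe", and the aggregator is the unavoidable loop.
public class PnlRollup {
    static class Position {
        final String office; final double pnl;
        Position(String office, double pnl) { this.office = office; this.pnl = pnl; }
    }

    static double aggregate(Collection<Position> cache, Predicate<Position> filter) {
        double total = 0;
        for (Position p : cache)
            if (filter.test(p)) total += p.pnl;   // roll up position-level PnL
        return total;
    }

    public static void main(String[] args) {
        List<Position> cache = Arrays.asList(
            new Position("UK", 100.0),
            new Position("UK", -30.0),
            new Position("SG", 55.0));
        System.out.println(aggregate(cache, p -> p.office.equals("UK"))); // 70.0
    }
}
```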

##some c++REAL productivity skills seldom covered by IV

pre-processor tricks
debugger — graphical or command line
remote debugging
(quickly) navigating manpages
gcc (troubleshooting) command line
make (troubleshooting)
linker (troubleshooting)
core dump analysis — not too tough. Just know the basics
memory leak detection
scripting to automate dev-related processes to minimize mistakes and avoid rerunning something 30 times manually.
text processing to analyze input/output text files and logs

When debugging c++ or anything else, I think many times you don’t need in-depth knowledge. Many of these tools have the intelligence to probe the runtime/binary and extract insightful data, often presented as a mountain of ascii text (at least human readable:). You just need to search and search in it.

Often one tool is unable to extract some critical data on one platform but another tool works better. You don’t really need in-depth knowledge, though it often pays to be a bit curious. Just keep trying different tricks until you find a breakthrough.

python features (broad) actually used inside 2 Wall St ibanks

If a python project takes 1 man-year, then the number of python features used would take only 2 man-weeks to learn, because python is an easy-to-learn language. In such a project you probably won’t need fancy features like threading, decorators… Most of the work would be mundane.

After the initial weeks of exciting learning curve, you may feel bored. (c# took me months/years). You may feel your peers are deepening their expertise in c++ or wpf …

The tech quiz would be mostly on syntax.

========= the features =========
? lambda, closure, apply()
Everyday operations on list/dict/tuple, str, file object
** for-loop, conversion, filter(), map(), aggregation, argument passing,
Filesystem, input/output
Basics of module import (no advanced features needed)
command line processing + env vars
create functions –composite argument passing
config files and environment config like PYTHON_PATH

———a bit advanced and less used———
creating classes? fewer than functions, and usually simple classes only
list comprehension, generator expressions
basic platform-specific issues. Mostly unix
inheritance, overriding
spawn new processes
* Python calling C, not the other way round. In financial analytics, heavy lifting is probably done in C/C++ [1], so what’s doable in python? I guess the domain models (market models, instrument models, contract models …) are probably expressed in python as graph-aware python objects.

[1] I have a book on option pricing using things like Finite Element Method. It seems to use many c++ language features (template etc) but I’m not sure if java or python can meet the requirements.
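A toy run-through of a few of the “everyday” features listed above (loops, filtering/aggregation, dict building, map() and string formatting):

```python
# Exercising the "everyday" features: list/dict/str operations,
# for-loop, generator-expression aggregation, map() with a lambda.
trades = [("IBM", 100), ("VOD", -40), ("IBM", 25), ("SPX", 0)]

# filtering + aggregation: total bought quantity
bought = sum(qty for sym, qty in trades if qty > 0)   # 125

# dict building: net position per symbol
net = {}
for sym, qty in trades:
    net[sym] = net.get(sym, 0) + qty

# map() + string formatting, over sorted dict items
labels = list(map(lambda kv: "%s:%d" % kv, sorted(net.items())))
print(bought, net["IBM"], labels[0])   # 125 125 IBM:125
```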

glibc, an archetypical standard library (vs system calls)

Background — I was looking for a concrete example of a standard library.

Best example of a standard library is glibc, produced by GNU team. If you strip the G, it’s “libc” — THE C standard library.

“Standard” means this library is a “carpet” hiding all the platform-specific differences and presents a uniform interface to high-level application programmers — so-called App-Programmer-Interface or “API”.

There are many industry standards for this same purpose, such as POSIX, ANSI-C (which standardizes the C programming language + standard lib). Glibc supports all of these standards.

To clarify a common confusion, it’s worthwhile to understand this simple example — glibc functions (like printf) are implemented on top of platform-specific syscalls.

Q: Exactly What are the platform differences? Short answer — system calls. Remember System calls call into the “hotel service desk”. These syscalls are tied to the processor and the operating system so they are by definition platform-specific. See other posts on syscall vs standard library.

b4 and af select() syscall

Note select() is usually used on the server-side. It allows a /single/ server thread to handle hundreds of concurrent clients.
— B4 —
open the sockets. Each socket is represented by an integer file descriptor. It can be saved in an int array. (A vector would be better, but in C the array also looks like an int pointer).

FD_SET(socketDes1, &readfds); /* add socketDes1 to the readfds */

select() function argument includes readfds — the list of existing sockets[1]. Select will test each socket.

— After —
check the set of incoming sockets and see which socket is “ready”.

FD_ISSET(socketDes1, &readfds)

If ready, then you can call read() or recvfrom().

[1] Actually Three independent sets of file descriptors are watched, but for now let’s focus on the first — the incoming sockets

most fundamental data type in c#/c++/java

Notice the inversion —
* C# — Simple types like “int” are based on struct — by aliasing
* C — struct is based on primitives like “int” — by composition
* C++ — classes are based on the C struct

Ignoring enum types for a moment, all c# value types are structs.

Now we are ready to answer the big Question
Q: beside pointers, what’s the most fundamental data type in this language?
A: C# — most fundamental type is the struct. Simple ints, classes and all other types (except enums) are built on top of struct.
A: Java/C++ — most fundamental types are primitives. In C++ all other types are derived using pointer and composition.
A: Once again, java emerges as the cleanest

Now we know why c# doesn’t call int a PRIMITIVE type — it is neither monolithic nor truly fundamental.

IRS trading in a big IR Swap dealer

A3: each deal is tailor-made for a particular client. A deal (or a “trade”) often has more than 100 attributes and a lifecycle of its own.

I spoke with a big US investment bank (Citi?) and a big European bank. IRS is a dealer market — no exchanges; each dealer takes positions. (There’s London Clearing House though). Each dealer bank maintains bid/offer but doesn’t publish them. Why? See A3. Each new client is signed on through an elaborate on-boarding/account-opening process, run by the bank’s dedicated sales team. I guess that’s the signing of the ISDA Master Agreement.

Once signed up, a client can send an RFQ with a deal size but without a price and without a Buy/Sell indicator. Bank typically responds with a fully disclosed bid/offer price pair. Client can either hit the bid or lift the offer.

Since IRS trade size is much larger than equities, the daily volume of trades is much smaller.

A dealer bank maintains IRS bid/offer pairs in multiple currencies. But here we are talking about single-currency swap.

There are dealers in cross-currency IRS, but that’s a different market.

java getResourceAsStream() – easiest(?) way to read any file

You can easily read a file in the same dir (esp. in a jar) as your class or anywhere(?) on file system. Methods below are usually conveniently called on the Class object, like

1) simplest usage:
InputStream is = this.getClass().getResourceAsStream("any file such as a.txt");
InputStream is = AnyClass.class.getResourceAsStream("any file such as a.txt");

InputStream is = GetResourceTest.class.getResourceAsStream("../../test.txt");
if (is == null) System.err.println("file not found");

Behind the scenes, these methods delegate to the classloader’s getResourceAsStream().

Yang KPI: learn huge codebase, become go-to person, connect user-expectations + codebase

This is an annotated conversation with a team mate.

> “Y (our team lead) has good system knowledge” like how things *should* work.

If Y asks a developer a specific question about how something actually works in the current implementation, he often points out “discrepancies” thanks to his system knowledge. In that case, either the developer is wrong or there’s a bug — Y’s system knowledge is quite sound. I (Bin) feel Y is sharp, quick and can filter and absorb the gist into his big picture.

I feel the current implementation is probably fairly close to the “expectation” because it has been used (therefore verified) for years.

I feel Y approaches a complex system from top down, focusing on the how-things-should-work, or so-called “system knowledge” which is non-trivial. That’s one reason Y is more interested in the SQL queries. SQL is useful because DB serves as “checkpoint”.

Part of his learning is weekly user meeting on (high-level or detailed) requirements, bugs … — boils down to user EXPECTATIONS. Team members don’t get that insight. I feel that is more than biz knowledge — such insight connects user expectation with implementation. Much of the insight is summarized in jira and you can tell how much insight there is.

Another part of his learning is knowledge transfer session from his predecessor.

I feel Y asks better questions than I did. “system knowledge” questions. Roland told me he has difficulty getting answers but Y is *persistent* and asked lots of sharp questions until he got all his high/low level doubts cleared. My focus tended to be at a lower level.

Y became a go-to person quickly.

< By system knowledge i think you also mean the PATTERNS and SUMMARIES of logic in codebase. After you study codebase bottom up you inevitably see such patterns.
> “yes”

< how did you pick up in the beginning? There's so much code to get through.
> “only 2 ways to learn — support or development. I worked on development of pgrid and GCA. i followed emails between users and support guys and went into code to verify.”
I (Bin) think there’s also a lot of help from the Infosystem colleagues.

I didn’t follow all the email chains and take those as learning opportunities. Email overload.

Y gave me a plausible learning opportunity — document the haircut rules. But there’s too much logic. I studied it a few times but didn’t connect to the log messages or user issues. I didn’t learn enough to make me a go-to person. Now i know support is the way to go — i get to know what’s important among the hundreds of complex rules and also how-things-should-work.

another basic difference – container/algo/iterator

(Why bother? Well, you need to know these when you debug STL or extend STL.)

– Containers — are class templates. 100%
– Algorithms — are function templates. 100%

– iterators? A beginner can safely assume that Most of the time iterators are defined inside each container template as a Member type. Since a container has a dummy type T, the iterator must be a class template of T (or a typedef thereof).

– The container/algo/iterator adapters are typically class templates
– functors are typically class templates

A trivial consequence — the declarations of containers and iterators are complicated by the templates. Algorithm declarations are simpler in comparison.

central bank rate hikes bring ccy up or down@@

Higher interest rates help a currency short-term. Inflation (sometimes associated with rising IR) is sure to weaken a currency. According to Wikipedia — by increasing interest rates a central bank stimulates traders to buy its currency (carry trades) as it provides a high return on investment, and this would strengthen the currency against others.

I feel this might reduce inflation and increase purchasing power. When inflation is perceived to rise (easy borrowing), central banks dampen it with rate hikes (expensive to borrow).

Very roughly, Interest rate is 70% influenced by gov; FX rate is 30% influenced by gov. Central bank has 90% control on the supply of their currency (but not other currencies). FX adjustment is one of the goals of Central bank IR actions. IR actions have other serious consequences not to be ignored. In this sense, FX trading desk needs IR expertise.

The interest rate is higher on some currency because there is a probability that it will depreciate. As long as the depreciation does not materialize, the carry trade is profitable, but makes big losses when it does. High yielder currencies aren’t always safe bets.

pbref between const levels (const radiates left)

When you pass an arg  into a function param by reference, check the const-ness of LHS vs RHS. This post is about when they differ const-wise. Note the most common and tricky situation is func pbref (ie pass-by-reference). A programmer needs to recognize them without thinking. The assignment operations below are less common but simpler.

Note — in pbref, on LHS we sometimes create a completely new ref (4-byte) with(out) const-ness. This ref has its (unknowable) new address. This address is different from the address of the RHS (which must be an lvalue — you can’t do int& r = 4444).

I feel a const stackVar is a const object. There’s no other way to get at the object, since the const var blocks all [3] edits.

Q: Given f(const int & i), can you pass in an arg of non-const int variable?
A: yes. Const LHS is tolerant and extremely widespread[1]. The pbref process adds constness to the new LHS reference. Equivalent to the simpler but less common

int arg = 9;
const int & a = arg;

Q: Remove const — Given f(int & i), can you pass in a const int variable?
A: illegal. The arg variable is const, so it promises not to modify state. The pbref process attempts to remove the constness. Equivalent to the simpler but less common

const int arg = 9;
int & a = arg; // illegal

In summary, Const-ness radiates from RHS to LHS. Illegal to block it.
* If RHS is a const ptr to a mutable object, then that constness doesn’t radiate left-ward
* If RHS is any handle[2] on a const object, then yes that constness radiates left-ward.
* If RHS is a handle on a mutable object, then LHS can be anything.

sound byte– if you have only a handle on a const object, then you can’t [3] modify its state even if you copy that handle. Object is not editable *via* this handle.

sound byte — If a (method or) variable is declared without “const”, compiler assumes this “handle” (can and therefore) will modify object state.

Compiler disallows assigning a const RHS handle to a mutable LHS handle. “Handle” is typically a ref. (For a ptr, “const handle” means ptr-to-const). However, if you deep-copy a target object (not the 4-byte address), then const-ness doesn’t radiate left. You can make a mutable deep copy from a const object, without compromising const-ness of the RHS object. All subsequent “edits” happen on the deep copy.

Tricky context — Constness still radiates leftward upon method-calling, because we perform a

implicit_this_param = & handleObj; //LHS is const param IFF a const method

  • constObj.constMethod() // good
  • constObj.mutableMethod() // won’t compile
  • mutableObj.constMethod() //good

Q: do we need to explicitly remove the constness of a q(const char *), ie a c-string?
A: Yes. Constness radiates left. Use const_cast. See post on const char *

[1] copiers and assignments usually have const ref params.
[2] handle can mean a nonref variable or a ref/ptr
[3] except via explicit const_cast

(below is a piece of work in progress….)
Now we can better understand why auto_ptr is bad for containers. Given an auto_ptr object A. A’s copier and assignment both take a non-const RHS. Container functions need copier with const param ie a const LHS, but
* can they enforce that the copier LHS be declared const? No. If you could, then auto_ptr + container would have been an illegal combination??
* can they enforce that copier RHS be declared const? No compiler can’t enforce it — const radiates left. Compiler can only enforce LHS to be const.

shared^static library – phrasebook

zipfile — a static lib (not a shared lib) is created using q(ar) and is conceptually a zipfile

  • static = unshared. A static library (some.a) is “copied” into your executable image, enlarging it.
  • copied
  • enlarge

shared = dynamic library — In unix, means SSSharedObject. In windows some.dll means DDDynamic Link Library.

  • baggage — using Static library, your executable doesn’t carry any external “baggage” i.e. the shared library files.
  • recompile — recompiling your executable is necessary only when using a “static” library. Someone explained it well — your libraries can be static or shared. If static, then you don’t need to search for the library after your program is compiled and linked. If shared, then LD_LIBRARY_PATH is used when searching for the shared library, even after your executable has been successfully compiled and linked. This resembles dotnet DLL or jars — you can upgrade a DLL/jar without recompiling the executable.

callback objects and func ptr: java^C++

Java has no func ptr. Method objects are sometimes used instead, but anonymous classes are more common.

in C/C++, func pointers are widely used to meet “callback” requirements. Java has a few callback solutions but under many names
* lambda
* observer/listener
* event handler
* command pattern

Func ptr is closer to the metal, but less object oriented.

Boost has a Boost.Function module for callback.

sql to list mutual funds

We don’t know when the database may get updated; we only know it’s very rare.

Jingsong J. Feng if the change frequency is very low, create a trigger in the database, when data change, trigger a query, and update result

Jingsong J. Feng schedule Thread to run it every minute or every 5 minutes
Bin X. Tan/VEN… to update the local cache?
Jingsong J. Feng yes

Jingsong J. Feng oh, set a timeout variable — Hibernate

How about an update/insert/delete trigger to send a GET request to our servlet and expire the cache immediately? The next visitor to the website would see the updated list fresh from the database.

If there’s a jms broker, trigger can send a cache-expiry alert to the queue even when the web system is unavailable. When the web site comes up again, it receives the jms message and expires the cache?

Without a JMS broker, the GET request could update some persistent expiry-flag on the cache server. Even with frequent reboots, the cache server could check the expiry-flag periodically.

Q: Can a trigger send a GET?
A: yes

Q: Can a trigger be a jms client?
A: Not so common.
(AQ = Oracle Advanced Queuing )