primary downstream/client systems of SecDB

I’m sure over the years SecDB took on other roles, but these are its usages as recognized by the business.

The business uses different terms to describe SecDB usage; many of them mean more or less the same thing.

1) risk
– stress test
– scenario analysis
– exposure analysis
– concentration risk

2) valuation/pricing for non-risk purposes (such as GL, EOD marking, RFQ?)

objects (in addition to methods) on a call stack

In my previous Java debugging experience, the call stack was often perceived as a stack of method calls. Since most of the methods are non-static, why don’t we care about the receiver (and caller) objects of these calls? I think it’s because we don’t need to.

Now, in multi-threaded code, the objects making/receiving the calls become relevant.
* Frequently, threads lock on those objects.
* The objects’ states could get corrupted.
* wait/notify uses these objects as waiting rooms.

See P287 of Doug Lea’s book.

boost thread — recursive, try-lock, reader/writer, timeout …

These lock features are implemented at different layers in Boost.

In Java, reentrance is the only option — all locks are recursive. The reader/writer lock is a feature built on top of the basic reentrant lock. tryLock and lock timeout are basic features of the Lock interface.

In Boost, the CONCEPTS, lock types, free functions and typedefs are noise to a beginner;) Specifically, people say mutex objects are accessed not directly but through wrappers, but the wrappers add noise. Actually, the core classes[1] are mutex and unique_lock. Both support try-locking. However, to understand our big 4 features, it’s cleaner to focus on the mutex classes —

* Try-lock — supported by all mutex classes
* Timed locking — supported by a subset of the mutex classes, namely timed_mutex and recursive_timed_mutex.
* Reader/writer — supported by exactly one mutex class, namely shared_mutex.
* Reentrance — supported by a subset of the mutex classes, namely recursive_mutex and recursive_timed_mutex. Note Reentrance is implemented by these 2 mutex classes only, and not in the Lockable concepts or those complicated lock types. Just scan the boost documentation.

Once we are clear on these features in the mutex classes, we can understand the structure among the Lockable CONCEPTS —

+ Try-lock — supported by all Lockable concepts.
+ Timed locking — TimedLockable and derivatives
+ Reader/writer — SharedLockable only.
(Reentrance — not in these concepts)

[1] How about the workhorse scoped_lock? Confusingly for novices, scoped_lock is merely a typedef of unique_lock.
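The same 4-feature taxonomy carries over to the std:: mutex classes, which C++11/14/17 adopted from Boost's design; a minimal single-threaded sketch (function name is mine) exercising each feature:

```cpp
#include <cassert>
#include <chrono>
#include <mutex>
#include <shared_mutex>

// Each std:: class mirrors its boost counterpart discussed above.
inline bool demoMutexFeatures() {
    std::mutex m;                         // plain mutex: try-lock only
    if (!m.try_lock()) return false;      // feature 1: try-lock
    m.unlock();

    std::timed_mutex tm;                  // timed_mutex adds timed locking
    if (!tm.try_lock_for(std::chrono::milliseconds(10))) return false; // feature 2
    tm.unlock();

    std::shared_mutex sm;                 // reader/writer, like boost::shared_mutex
    sm.lock_shared();                     // feature 3: many reader threads may hold this concurrently
    sm.unlock_shared();

    std::recursive_mutex rm;              // feature 4: reentrance
    rm.lock();
    rm.lock();                            // same thread may re-lock without deadlock
    rm.unlock();
    rm.unlock();
    return true;
}
```

Note how reentrance lives only in the recursive_* classes, exactly as in boost.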

heap issue in concurrent C++ apps

Q: how to avoid double-delete across threads?
%%A: Java solves the problem with the garbage collector. A C++ smart pointer can be designed to perform the delete exactly once, e.g. inside a critical section.

Q: how to avoid reading a pointer on Thread A after Thread B deletes it?
%%A: with smart ptr, thread B won’t delete it.
A: Basic technique is a reference-counted smart pointer such as shared_ptr and rogue-wave smart pointers.

A technique is to give different mini-heaps to different threads, provided they don’t share heap objects.
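The reference-counting technique can be sketched with std::shared_ptr (the payload type here is hypothetical): each thread holds its own copy, the atomic reference count in the control block makes the decrement thread-safe, and the delete runs exactly once, in whichever thread drops the last reference.

```cpp
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

struct Payload { int v = 42; };   // hypothetical shared object

inline long sharedPtrDemo() {
    auto sp = std::make_shared<Payload>();
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([sp] {     // copy: ref count bumped before the thread runs
            assert(sp->v == 42);     // safe read: our copy keeps the object alive
        });
    for (auto& t : pool) t.join();
    return sp.use_count();           // threads done: only the local copy remains
}
```

No thread can read a dangling pointer, because holding a copy of the shared_ptr keeps the object alive; no double-delete, because only the final decrement triggers the deleter.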

c# enum is more like c++, less like java@@

A c++ enum typically occupies a full int (the underlying type is implementation-chosen), but it can be pinned down to 8-bit in high-performance systems like market data feeds.
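Since C++11 the underlying integer type can be specified explicitly, which is how the 8-bit configuration is done (type names here are my own, for illustration):

```cpp
#include <cstdint>

enum class Side : std::uint8_t { Buy, Sell };  // pinned to exactly 1 byte
enum Color { Red, Green };                     // underlying type left to the compiler

static_assert(sizeof(Side) == 1, "compact enum for market-data structs");
```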

java enum is very much like a class with static (singleton) fields. Nothing to do with integers. (perhaps implemented with integers.)

— c# enum is more like c++ IMO in terms of run time representation.
* No singleton. c# Enum objects are passed by value (pbclone), just like simple-type integers (which are physically struct objects)
* An enum Instance is usually 32-bit — the default underlying type is Int32.
* An enum Type is always based on some integer type, slightly more flexible than c++
* For now, safely disregard the (confusing) fact that enum types extend System.Enum.

— C# enum is more like java enum in terms of compiler support.
A friend pointed out some differences between c# and c++ enums
$ c# value TYPES (not instances) have rich meta data. C++ enum is a simple int. No methods
$ c# enum type can customize ToString()


exactly y is a bond selling at a discount@@

…because this bond’s coupon rate is (perhaps significantly) lower than current “market rate” i.e. coupon rate of a bond trading at par, such as new issues.

“market rate” might also mean the spot interest rate or prevailing discount factor (i.e. yield) for the same credit quality?

Now, the same bond could appreciate in price in a few years, if market sentiment changes (yield curve drops), and this bond’s coupon rate suddenly looks attractive.

foundation — all rates are quoted as semi-annual compound

This is a basic concept worth repeating. The common basis/assumptions allow fair comparison of rates.

A 2 year spot rate of 10% means

“take my $1m today. in 2 years pay me (1+10%/2)(1+10%/2)(1+10%/2)(1+10%/2)*$1m”

A 6-month rate of 10% 2 years forward means

“in 2 years take my $1m. 6 months later pay me (1+10%/2)*$1m “

a 10% yield means

“This bond makes $100 coupon payments every 6 months from now. If you discount the payout 1.5 years away by (1+10%/2)(1+10%/2)(1+10%/2), apply the same formula to every other payout, and add up the present values, then you will get the price I’m proposing to you.”
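All three quotes above share one mechanical rule — each 6-month period grows money by (1 + r/2), and discounts by the reciprocal. A short sketch (function names are mine):

```cpp
#include <cassert>
#include <cmath>

// Semi-annual compounding: quoted annual rate r, time measured in half-years.
inline double grow(double principal, double r, int halfYears) {
    return principal * std::pow(1.0 + r / 2.0, halfYears);
}
inline double discount(double cash, double r, int halfYears) {
    return cash / std::pow(1.0 + r / 2.0, halfYears);
}
```

grow(1e6, 0.10, 4) reproduces the 2-year spot quote; grow(1e6, 0.10, 1) the 6-month forward leg ($1,050,000); discount(100, 0.10, 3) the 1.5-year coupon’s present value (about $86.38).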

fundamental choices — roll every6month^long term coupon bond

To understand forward rates (and to characterize market sentiments on a given day), it’s always useful to compare the 2 basic choices:

Investor Sam (short-term investor) invests his $1m in a 6-month zero and reinvests 6 months later, rolling every 6 months. If I expect rates to rise, I follow Sam.

Investor Leo (long-term investor) invests his $1m in a 30-year bond at 4% (like Singapore’s CPF special account). If I expect interest rates to stay below 4%, I follow Leo.

Now let’s look at the market sentiment over the next x years. From today’s bond prices you can derive discount factors and forward rates. Leo is locking in all of those forward rates for 30 years.

STRIPS: simpler than bonds

To a bond math student, STRIPS (zeros) are always simpler than coupon bonds. A zero-coupon bond, to a bond mathematician, is one single cash inflow at a pre-set time; whereas a coupon bond is a series of periodic payouts, each to be discounted differently to give you the NPV.

To derive the NPV of a coupon bond, you need to add up the NPV of each payout. There’s no clever “silver bullet” to bypass this summation.
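That summation can be written as a short loop; a zero (STRIP) is just the degenerate single-cashflow case. A sketch, with semiannual compounding as in the post above and my own function name:

```cpp
#include <cassert>
#include <cmath>

// Brute-force pricing: discount every payout individually, then sum.
inline double bondNpv(double coupon, double face, double yield, int halfYears) {
    double npv = 0.0;
    for (int t = 1; t <= halfYears; ++t)
        npv += coupon / std::pow(1.0 + yield / 2.0, t);   // each semiannual coupon
    npv += face / std::pow(1.0 + yield / 2.0, halfYears); // redemption at maturity
    return npv;
}
```

Sanity check: when the coupon rate equals the yield, the price comes out at par; with coupon 0 you get the one-term zero-coupon price.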

sentiments in Jan 2010

Each day, I work hard on 2 things — company projects and job hunting. Exhausting. A third task is learning Java. I actually enjoy it more than the other 2, and I find it hard to slow down. It feels like running on a treadmill.

Every day, my mood is affected by the number of emails I receive from recruiters. I contact them by email/phone on a weekly basis. When I don’t hear from someone for a long time, I feel a bit disappointed.

I still feel reluctant to let down my users. They gave me good reviews in Jul 2009 and they are still very nice to me, perhaps because they depend on me. Since I don’t want to let them down, I work hard to meet deadlines. These deadlines are set by my boss and are often unreasonable, but once set, I feel I need to meet them.

trade booking/capture in the big picture

For a novice who wonders just how important trade-capture is…

b/c (i.e. trade booking/capture) is the #1 essential component of trading systems, across assets. B/c is often the _heart_ of an OTC or voice trading desk. This is less true for desks trading against an exchange/interdealer, where pre-trade apps take center stage and the post-trade flow becomes middle-office.

b/c (along with position/pnl and the trade blotter) was among the first tasks to be computerized on Wall Street.

b/c is the basis of the position master, i.e. the sub-ledger (often on mainframes), one of the most essential components in any trading system. The sub-ledger, in turn, is the basis of pnl.

I feel b/c is relatively _low_tech_ compared to market data, low latency and some pre-trade systems. However, I feel in an exchange or a large sell-side firm, execution volume can be high.

I feel b/c demands more precision, more stability, a lower error rate and more robustness than most pre-trade systems. This is because b/c is the point of no return — after an order is executed, it can’t be canceled effortlessly.

In a voice trading desk, b/c is actually post-trade, because the trader executes the trade over phone and simply enters data into the books. Remember MTSTradeEngine? In contrast, the fully electronic b/c is not post-trade but sits at the choke point right between pre-trade and post-trade.

c# event+delegate #phrasebook

Related to closure?

An event is a non-trivial concept to a guy coming from C. Initially, focus on the unicast form….

Inside a class Thermometer, suppose you define

public event DHandler Reading;

  • field — an event is a pseudo-field in this class. “Reading” is an event field….
  • type — of this field is the delegate type specified — DHandler
  • field modifier — the 5-letter keyword “event” feels like a field modifier like “mutable” or “volatile”

So what’s the real deal? (Focus – single-listener events) Every Thermometer Instance has a pointer field pointing to a callback functor (or a bunch). This is imprecise but short and sharp. Recall that a delegate-INSTANCE is an instance of a functor-wrapper. Having this field in the Thermometer instance means every Thermometer instance can react to some runtime events, by calling the listener(s) via the registered callback(s). Remember when a listener registers, that object reference is remembered inside the delegate instance.

call — you invoke the listeners like

if (some event happened) {
    Reading(…args…); // same syntax as invoking a delegate
}

Internally, this calls _hiddenDlgField.Invoke(…)

observable — In comp science, an “event” typically means the subject/msg-carrier in an observable pattern. In c#, event pseudo-field refers to a list of callbacks.

Listeners — are delegate INSTANCES — basically functors holding both a func ptr and the host object reference.
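A C++ analogy may make the "functor holding a func ptr plus the host object reference" idea concrete (class and member names here are hypothetical): the event field is a list of std::function callbacks, and registration captures the listener object.

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct Display {
    double last = 0;
    void onReading(double r) { last = r; }   // the registered handler
};

struct Thermometer {
    // the "event field": a list of callbacks, i.e. the multicast case
    std::vector<std::function<void(double)>> reading;
    void fire(double r) {                    // invoking the listeners
        for (auto& cb : reading) cb(r);
    }
};
```

Each stored std::function plays the role of a delegate instance: it remembers which object to call back, exactly like the captured listener reference in C#.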

new() and syscall – wholesale^retail

update — Exactly which code module performs the wholesale-retail …? [[linux sys programming]] P256 reveals glibc.

One of the shortest definitions of Operating System is — “software-controlling-access-to-hardware-devices”. OS encapsulates hardware resources and presents a facade to competing userland applications.

The most important hardware resource is memory. For memory, the OS control point is the _allocation_ of heap memory aka the free store, shared among competing applications. Once a particular memory cell is allocated to a user (or application, or process), the OS doesn’t intervene further — the application freely reads/writes that memory location.

Q: At run time, when JVM calls new SomeClass(), does the thread go into kernel and executes a system call to grab a chunk of heap memory?
%%A: I would say NO.

Q: But first, let’s look at a malloc() call at run time… does the thread go into kernel and executes a system call to grab a chunk of heap memory?
A: answer is NO.

See the divvy-up “sawing” diagram on P188 [[Michael Daconta]]. It’s too _inefficient_ to make so many “small” syscalls. Instead, when you call malloc(4), you call into glibc functions, namely malloc() and friends. Synchronously, the library functions call into the kernel (brk()/sbrk() perhaps?) to grab a large chunk wholesale, then give 4 bytes to you before your malloc() returns. This is one malloc() scenario — the wholesale scenario — when there’s insufficient memory “in the library”.

The more common scenario is the retail scenario — malloc() finds a free block of 4 bytes in one of the “large chunks” grabbed earlier. So malloc() marks those 4 bytes as ALLOCATED and gives you the starting address.

By the way, malloc() typically stores the size requested (i.e. 4) in a header block before the returned address. The size information is needed by free(). Since the header block is in addition to the requested 4 bytes, malloc() _consumes_ more than 4 bytes.
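The wholesale/retail split can be illustrated with a toy bump allocator (illustration only, no free-list or thread safety): one big chunk grabbed up front stands in for brk()/sbrk(), and small pieces are retailed out, each preceded by a size header — the same bookkeeping idea real malloc uses.

```cpp
#include <cassert>
#include <cstddef>

class MiniHeap {
    static const std::size_t kChunk = 4096;
    static const std::size_t kAlign = alignof(std::max_align_t);
    alignas(std::max_align_t) unsigned char pool_[kChunk]; // the "wholesale" chunk
    std::size_t used_ = 0;
public:
    void* retail(std::size_t n) {
        std::size_t total = sizeof(std::size_t) + n;       // header + payload
        total = (total + kAlign - 1) / kAlign * kAlign;    // keep blocks aligned
        if (used_ + total > kChunk) return nullptr;        // would need another wholesale grab
        unsigned char* p = pool_ + used_;
        *reinterpret_cast<std::size_t*>(p) = n;            // header: requested size
        used_ += total;
        return p + sizeof(std::size_t);                    // caller sees only the payload
    }
    static std::size_t sizeOf(void* payload) {             // what free() would read back
        return *reinterpret_cast<std::size_t*>(
            static_cast<unsigned char*>(payload) - sizeof(std::size_t));
    }
};
```

Note each retail() costs more than n bytes, since the header and alignment padding are "lost" alongside the payload.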

FX option trading – a typical arch

Just as in equity options, the core component is risk engine, because positions are large and long-term. See other reasons in my post on option trading systems.

— My hypotheses —
* I guess for both fx options and IRD, the core engine is a realtime event-driven position updater (the other side of the same coin as the risk engine). Each position has a lot of contract attributes and risk attributes, all subject to frequent updates. A typical FX option desk probably has “too many” positions, each reacting to a lot of events, and each update is complex and time-consuming.
* In contrast, cash desk has fewer positions and simpler positions.
* In bond trading, any non-flat position is also subject to updates in terms of marking and unrealized PnL, but calc is simpler.
FX options form an OTC market — “no electronic trading” (and I guess no ECN either), but there are electronic trade messages in addition to manual trade booking. There’s also a plan to access CME-listed FX options. Note this plan is not about fx options on futures, and not about PHLX.

%Q: so is it voice based?
A: various means.

Clearing could be done at the London Clearing House. I guess London is a bigger center than NY.

A lot of “exotic” fx option products come online every year. There’s pressure to automate and speed up new product launch. I would guess 1) position management and 2) booking are among the most essential features needed by any new FX option instrument. System must be able to persist positions in these exotic options. If automatic STP booking is hard, then ops can manually enter them, assuming volume is low on new products.

Volume of Trades — the FX options desk gets about 1500 trades/day. In contrast, the FX cash desk (including futures + forwards) gets about 100 times the volume, but its profit is perhaps only 2 to 3 times that of the FX options desk — obviously different margins.

Volume of Positions — the FX cash desk keeps most positions flat, so very few positions are non-flat. The FX options desk has “too many” open positions — a big headache for the risk engine.

Entire FX options desk needs about 20 desk-specific developers world-wide. Besides, I guess there are many supporting systems owned by other teams outside the desk. These teams include (not limited to) firmwide teams, probably further away from the profit centers.

FX option trading is more complex than FX cash trading.

Q: Are FX derivatives simpler than equity derivatives?
A: not necessarily. FX involves 2 interest rates. Eq involves dividends.

— system modules owned by dedicated desk developers–
FIX server (perhaps for market data, not e-trading?)
GUI is in Tcl, early versions of C# platform and WPF.
Market data is a major component in FX. Many modules react to market data —
– risk
– pricing
To traders, real-time pricing is presumably more important than risk. I guess they need to send out updated bids/offers. RT pricing uses spot prices (market data) and volatility data for calculation. For any pair of currencies, (every?) market data update could trigger automatic price updates across all strikes and expirations.

Actual option valuation math is in c++/JNI.

The biggest headache in the fx option risk engine is performance. FX option valuation is slow, and position volume is too large for real-time risk updates. Instead, the risk “report” system is on-demand and covers a requested subset of the full portfolio, presumably the positions belonging to one trader. Such a report takes a few minutes; if market data has changed by then, the report is obsolete.

Risk rollup from trader-level to entity-level to firm-level. There’s an external team responsible for analytics library and they call FX options system’s services to get positions. I guess that external system is a firm-wide analytics or risk engine.

The #1 essential component (among the distributed components over 30 servers) in the trading desk is trade capture/booking, written primarily in c++ plus some java. There’s some c++ valuation module for FX options. The plan is to slowly phase out c++. Other than that, the desk is mostly java.

–core architecture–
Since an option (or any derivative) is not settled right away like cash trades, there’s a _lifecycle_ to each derivative trade. Each derivative trade takes on a life of its own and is subject to many “lifecycle events” like
– origination, cancels, amends/modifications
– knock-in, knock-out
– fixings
– market data triggering risk reassessment

Just like bond repricing engine, this is Service Oriented Architecture – MQ facilitates the event-driven architecture, but there are other ways to pass messages like SOAP over TCP (not http).
1) MQ for high volume messages
2) SOAP for slow, complex processing. Possibly a few trades a day! I guess these are exotic products.
A typical event-driven server here is a socket server, holding a thread pool, started with main(). No container or web server.

c# value^ref type , briefly

I’m sure there are good summaries online, but i feel it’s worthwhile to come up with my own summary…

– only ref types can be locked
– only ref types can be singletons; value types are like primitive floats and ints

– unlike in java, value types can also be null — System.Nullable

– all ref types are on heap. Value types are like java primitives — on stack unless allocated as part of a heap object.
– since value/ref types all extend System.Object, they all support the Object methods such as ToString(), Equals() and GetHashCode()

– enum is value type in c#, just like C++
– even a struct (value type) can implement an interface, but seldom used.
– basically 3 reference types – arrays, delegates and regular classes

c# method "ref" argument of a Value-type

I feel this is obscure but not sure interview-wise…?

Jon Skeet pointed out that a ref (or “out”) parameter can apply to a value type or a reference type.

When you hear the words “reference”/”value” used you should be very clear in your own mind whether you mean that a parameter is a reference/value parameter, or whether you mean that the type involved is a reference/value type. If you can keep the two ideas separated, they’re very simple.

As Jon pointed out, passing a Value type by “ref” (PVTBR) is different from passing a Reference type without the “ref” marker (PRTBV). Notice the 2 differences between the acronyms. The difference is actually simple but requires a bit of mental gymnastics. I believe it only matters if your method reassigns the parameter.

I feel PVTBR is occasionally needed[1], whereas the other 3 combinations below are widely used —

* PVTBV – including everyday argument passing of Struct types
* PRTBV – including everyday argument passing of Class types
* PRTBR – including “ref” parameter passing of Class types

[1] e.g. a function swapping 2 incoming int variables — identical in c++ vs c#
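The footnote's swap example, in C++ terms (the c# version would use "ref" in both the signature and the call site): the int parameters are value types passed by reference, so reassignment inside the function is visible to the caller — exactly the PVTBR behavior.

```cpp
#include <cassert>

inline void swapInts(int& a, int& b) {  // pass-by-reference of a value type
    int tmp = a;
    a = b;       // reassignment seen by the caller
    b = tmp;
}
```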

static^dynamic binding

why call it “static”?

First we must understand “binding” — the system binding an overridden name (of a MEMBER) to one of several incarnations within the class hierarchy.

Field names are resolved at compile time — “static” — when the system is not alive and running.

In contrast, method call is resolved at runtime — dynamic binding.

— learnt in 2010 —
Only virtual methods need dynamic binding, ie overridable methods. Overloaded methods are chosen at compile time based on arg list. See posts on static binding.
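A C++ sketch of the same rules (type names are mine): fields and non-virtual methods are bound statically, by the declared type; virtual methods are bound dynamically, via the vtable.

```cpp
#include <cassert>
#include <string>

struct Base {
    std::string tag = "base-field";
    virtual std::string id() const { return "Base"; }   // virtual: dynamic binding
    std::string idStatic() const { return "Base"; }     // non-virtual: static binding
    virtual ~Base() = default;
};

struct Derived : Base {
    std::string tag = "derived-field";                  // hides, does not override
    std::string id() const override { return "Derived"; }
    std::string idStatic() const { return "Derived"; }  // hides, not virtual
};
```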

eg of explicit-interface-method-implement – c#^java

Suppose interface 1, Game, declares void play(void), and interface 2, Radio, declares the same void play(void) but means something different. Suppose these 2 interfaces are created by different vendors.

Now suppose we create a Device class implementing both interfaces, so it has to implement both play() methods, with different logic. Java won’t support this. You may resort to HASA — two fields, each implementing one interface.

Even if the 2 methods’ logic can be merged into one method body, still the meaning of play() is ambiguous. People reading the calling context may not know if we are playing a game or a radio, because these 2 “play” operations are logically unrelated. Remember source code should clearly describe intention.

With C# explicit-interface-method-implementation, Device can implement the 2 play() methods simultaneously.


See also P109 [[c#precisely]]

msvs – integrate boost into msvs

My worst initial fears in c++ dev are A) IDEs like Visual Studio, B) external libraries [1], and the integration of A and B. There are comprehensive tutorials showing
– How to add boost header files into include path
– How to add boost precompiled binary files into the link path

I feel in most projects, the compiler only needs these 2 “locations” in order to use an external library.

Just a side note, compiler/linker needs to reference 2 types of external library content
– source code – by preprocessor copy-paste
– compiled code – by linker

[1] external meaning not built into the IDE, like boost library

operator-overload ^ functions – incomplete summary

(IV? no)

Warning: some of the most important features/operations in c++ are implemented as special, customizable operators like new/delete, qq(<<), the call operator, OOC…, and their rules are sometimes unique, unusual and “bucking the trend”. These help make OOL a rather confusing aspect of c++.

OOL admits variations similar to those of functions —

– free-standing OOL? Yes, often as a friend function.
** Exception — the call operator must be a method, though I’m not sure about static methods.
** Exception — I believe OOC must be an instance method.
– non-static non-virtual OOL? I think so. Operators like ++ need an instance.
– private non-static OOL? I think so.
– (pure) virtual OOL? Actually possible — operators are ordinary member functions, so they can be virtual.
– overloaded OOL? Yes, esp. the assignment operator. I think qq/operator+/ too.

Here’s a major difference between OOL vs functions –

– Regular functions have a return type, possibly void. In contrast, some operators must return a specific type, while conversion operators declare no return type at all. See other blog posts.
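The common forms can be seen side by side in one sketch (a hypothetical money class of my own):

```cpp
#include <cassert>

class Cents {
    int v_;
public:
    explicit Cents(int v) : v_(v) {}
    Cents& operator+=(const Cents& rhs) { v_ += rhs.v_; return *this; } // non-static member
    Cents& operator++() { ++v_; return *this; }    // needs an instance
    int operator()() const { return v_; }          // call operator: must be a member
    friend Cents operator+(Cents lhs, const Cents& rhs) {  // free-standing friend form
        lhs += rhs;
        return lhs;
    }
};
```

Note the friend operator+ is the free-standing form mentioned above, while the call operator illustrates the member-only exception.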

practical use cases of wait/notify@@

* most efficient blocking consumer/reader. See post on simplest wait/notify app.
* After finishing tasks, idle threads in thread pools wait in awaitNanos(), the new version of wait()
* callback, listeners
** callback — from server thread to client thread. Async. Client thread can wait()
** GUI event handler?
** listener. See the post on callback. Swing event queue thread waits in wait().
** event dispatcher thread — RV and swing use such a thread. I think this thread blocks in wait() when it has no event to process.

* producer/consumer — the consumer can wait() when the queue is empty; the producer can wait() when the queue is full. If you use BlockingQueues then you probably don’t need to call wait() yourself, and you don’t need to hold any lock. wait/notify is especially appropriate in situations where threads have a producer-consumer relationship (actively cooperating on a common goal) rather than a mutual exclusion relationship (trying to avoid conflicts while sharing a common resource).

* Any time you want one thread to initiate and another thread to react (thread communication), the reacting thread will need to block when there’s nothing to do. Sleep-poll loop is inefficient. wait/notify is the standard solution

WPF EventArgs≅Swing EventObject: a message DTO

In every GUI toolkit there’s a class hierarchy of event “value objects”. Such an object is passed into event handlers or callbacks as a function argument. In Swing it is known simply as an “event” object. I feel this meaning of “event” is traditional, natural and intuitive — a data transfer object carrying a message about an occurrence.

WPF refers to such a DTO object as “EventArgs”, and uses “Event” for something else. WPF turned the English word “Event” on its head and gave it a twist. The new meaning is “invitation”. When a WPF view model (or model) object fires an event, it sends an invitation to each registered listener (the invocation list), inviting them to “query me for my current state”. The listener is typically a data-bound visual component in the view.

Therefore in the WPF context there are 2 slightly different meanings to the English word “event” —
– “An event pseudo field” — a list of listeners. If 2 visuals bind to a single model object, then list of 2 listeners.
– “firing event” — a single invitation sent to the listeners. You can rapid-fire 500 events in a minute[1] using the same event-pseudo-field. Each invitation will hit all the listeners.

[1] Incidentally, such a rapid firing rate is not uncommon, because firing is often programmatic and driven by a collection.

simplest wait-notify app

// the entire test program —
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class WaitNotifyDemo {
    final private List<String> queue = new ArrayList<String>();
    final private Random rand = new Random();

    String consume() {
        synchronized (queue) {
            while (queue.size() == 0) {
                try {
                    queue.wait(5000); // releases the lock while waiting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println(Thread.currentThread().getName() + "] either timeout or notification happened");
            }
            String a = queue.remove(0);
            System.out.println(Thread.currentThread().getName() + "] notification received. removing " + a);
            return a;
        }
    }

    void produce(String s) {
        synchronized (queue) {
            queue.add(s);
            System.out.println(Thread.currentThread().getName() + "] notifying after producing " + s);
            queue.notifyAll();
        }
    }

    public static void main(String[] args) {
        final WaitNotifyDemo instance = new WaitNotifyDemo();
        // an anon class implementing Runnable, passed into new Thread() as a constructor arg
        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 100; i++) {
                    try {
                        int sleep = Math.abs(instance.rand.nextInt() % 5000);
                        Thread.sleep(sleep);
                        instance.produce("item" + i);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }, "producer").start();

        new Thread("consumer") {
            public void run() {
                for (;;) {
                    String item = instance.consume();
                }
            }
        }.start();
    }
}

syscall^standard library functions (glibc)

(See also the syscalls in glibc in [[GCC]] )

C and assembly programs both compile to object code, which is platform-specific.

* both make syscalls
* both “translate” to the same platform-dependent object code
** Compilers like gcc may emit human-readable assembly as an intermediate step (gcc -S), but normally you never see it.

* I guess part of the object code is the binary version of assembly code and consists of instructions defined by the processor chip, but a lot of heavy lifting is delegated to syscalls.
* object code is linked into an executable like a.out. I think the executable loads into memory largely as-is.

I believe a standard library is in C, whereas system calls are available in C, assembly and any language chosen by the kernel developers.

In C source code you make (synchronous) syscalls like open(), write(), brk(), exit(). To my surprise, in equivalent assembly source code you invoke exactly the same syscalls, though by syscall ID.

Think about it — the kernel is the gate keeper and guardian angel over all hardware resources, so there’s no way for assembly program to bypass the kernel when it needs to create socket or print to console.

– C standard library (glibc?) exists on each platform. “printf()” is a standard library function, and translates to platform specific syscalls like write(). Since your C source code only calls printf(), it is portable.
– In contrast, your assembly program source code makes syscalls directly and is always platform specific. No standard library.

—-How to differentiate standard library functions (manpage section 3) like printf() from syscalls (manpage section 2) like write()? Look at the names —
system calls — are calls to the hotel service desk, i.e. the operating system kernel. In fact, system calls are written by kernel developers and run in kernel space. See my post on kernel space.
standard library calls — are standardized across platforms. They run in userland.

Many standard library functions are thin wrappers over system calls, making them hard to differentiate. Someone said online: “It’s often difficult to determine what is a library routine (e.g. printf()) and what is a system call (e.g. sleep()). They are used in the same way and the only way to tell is to remember which is which.”
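The two layers can be placed side by side in a few lines (POSIX assumed; wrapper names are mine). printf(3) buffers in userland and eventually funnels into write(2), which traps into the kernel on every call:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>    // printf(3): standard library, runs in userland
#include <unistd.h>  // write(2): thin wrapper over the syscall

inline long viaSyscallWrapper(const char* msg, std::size_t n) {
    return ::write(STDOUT_FILENO, msg, n);   // one kernel crossing per call
}
inline int viaLibrary(const char* msg) {
    return std::printf("%s", msg);           // userland buffering, fewer crossings
}
```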

specialize – to stand out, really@@

In an ever expanding /pool/ of software developers, how do you stand out? Like any technical field (including realtor, dancer, violin-maker and violin-player, photographer…), a proven route is specialization.

Compared to US developers, I feel young Singapore developers generally shun technical specialization. For the Singapore job market, you could specialize in

financial math
python, perl, shell scripting,
complex queries and query tuning


low delta always means OTM, intuitively

(Low delta means low absolute magnitude. The sign of delta is a separate feature.)

The more OTM, the less sensitive to underlier moves — low delta.

The more ITM, the more stock-like — high delta. This holds for both calls and puts.

For a call holder, the most stock-like is a delta of 100%
For a put holder, the most stock-like is a delta of -100% i.e. a short stock

For a put Writer, the most stock-like is a delta of +100% i.e. long stock. This put is so deep ITM it will certainly be exercised (unload/put to the Writer), so the put writer effectively owns the underlier.

On FX vol smile curve, people quote prices at low-strike points and high strike points both using low deltas like 25 delta or 10 delta. (The 50 delta point is ATM).

– On the low-strike side, they use an OTM Put, e.g. a put on USD/JPY struck @55. Such a put is clearly OTM, since as of today the option holder will not “unload” her USD (the silver) at a dirt-cheap price of 55 yen.

– On the high-strike side, they use an OTM Call, e.g. a call on USD/JPY struck @140. Such a call is clearly OTM, since as of today the option holder will not buy (“call in”) USD (the silver) at a sky-high price of 140 yen.

signed shift and unsigned shifts #multiply

Q1: why is there signed-right-shift and unsigned-right-shift but just a single “left-shift”?
Q2: if i multiply 2 positive int in java, do i always get a positive int?
Q2b: how about in c++?

In java, usually the “thing” to be shifted is a 32-bit int, but it can also be a 64-bit long. For both, the left-most leading bit controls the sign — 0 means non-negative, 1 means negative.

Now consider an example: …0110 right-shifted once leaves the left-most leading position empty. Either put a 0 there (unsigned shift) or keep the original leading bit (signed shift).

By definition
* Signed shift keeps the sign.
* unsigned shift always returns non-negative

Finally we can give A1, i.e. why there’s just a single left-shift. A left shift always shifts some bit INTO the left-most leading position — it never becomes empty. The sign of the result depends on that bit.

* new sign may be same as before
* new sign may be different

A2: No. 0x55555555 << 1 == 0x55555555 * 2 < 0 — so a positive int times 2 can come out negative.
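For Q2b, C++ has a single >> token and the signedness of the TYPE decides the behavior; a sketch (function names are mine). One caveat: doubling a signed int that overflows is undefined behavior in C++, so the doubling is done in unsigned arithmetic (well-defined wraparound) and the bit pattern is then viewed as signed.

```cpp
#include <cassert>
#include <cstdint>

inline std::uint32_t logicalShift(std::uint32_t x) { return x >> 4; } // zero-fills

inline std::int32_t doubledBits(std::uint32_t x) {
    // modular conversion since C++20; two's complement in practice before that
    return static_cast<std::int32_t>(x * 2u);
}
```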

nested transactions in sybase

If proc1 does “begin tran; exec proc2”, and proc2 does a “begin tran…”, how many transactions are active at that point? The way to test it in sybase is simple: open a session in sqsh, isql or Aqua Studio and run

insert t … — a table
begin tran tr1
insert t…
commit tr1

— open another session and revoke insert permission

insert t… — causing an error and a rollback

— now see what’s rolled back.

concretized template class≠ordinary class

This topic is largely personal curiosity, not needed for IV or projects.

Many people say “use a concrete template class just as an ordinary class” but nonono. A whale is a mammal, but not just another ordinary mammal. Concretized classes (i.e. concretized template classes) differ from ordinary classes because

* when you declare a variable of the new type, your template arguments must obey the template’s “policy”
* a concrete template class’s behavior is specified in multiple places — the policy classes and the class of each template argument. A regular class’s behavior is specified in that class only. Look at the allocator param in a parametrized container.
* debugging a concrete template class is harder than debugging an ordinary class
* casting is complicated
* when you plug in a functor type, the specialized template class’s instance instantiates a functor object. In short, you specify functor TYPE only — instantiation is implicit.
* a type involving templates interplays with ptr/ref in more complex ways than non-template types (a typedef is often needed). When you add a ptr (or ref) symbol to a non-template type, you just stick the qq(*) in the correct place. But how about qq[ map<char**, list*> & ] as a “first part” inside a parenthesis?
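The typedef workaround for that last point looks like this (list<int> stands in for the elided element type; the alias names are mine):

```cpp
#include <list>
#include <map>
#include <type_traits>

// A typedef (or C++11 alias) tames the interplay of template types with * and &.
typedef std::map<char**, std::list<int>*> NestedMap;
typedef NestedMap& NestedMapRef;        // reference to the whole map, no raw & juggling
using NestedMapPtr = NestedMap*;        // same idea, C++11 alias syntax

static_assert(std::is_reference<NestedMapRef>::value, "ref formed via typedef");
```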

Fwd: cache causing JVM memory leak


Someone told me caches can often lead to memory leaks. I believe our system maintains several caches, such as the model-cache and circuit-design-cache(?). If one of these caches lacks a size limit or an effective expiry schedule, the cache could grow indefinitely.

Curious to know if we have anything to control it. Thanks.

tan bin

Update — after talking to Vibrant, I realized caching is one of the most common, effective, yet complex techniques. Memory leaks due to caching grow in importance as competition intensifies and we move deeper into optimization.

Use SoftRef.