capture text selection in a JTextComponent

@Override
public void caretUpdate(CaretEvent e) {
    int dot = e.getDot();
    int mark = e.getMark();
    if (dot == mark) return; // no text selection

    if (!(e.getSource() instanceof JTextComponent)) return;
    String selection = ((JTextComponent) e.getSource()).getSelectedText();
    // ... use the selection ...
}

easiest way to display a DB record in jtable

DefaultTableModel model = new DefaultTableModel(new Object[] {"Col", "Value"}, 0);

model.addRow(new Object[] {selection}); // the "Value" column is left null here

Map<String, Object> map = new JdbcTemplate(dataSource).queryForMap("select * from table1 where …"); // assumes a configured DataSource

for (Map.Entry<String, Object> entry : map.entrySet()) {
  model.addRow(new Object[] { entry.getKey(), entry.getValue() });
}
this.table.setModel(model);
this.table.repaint(); // thread safe

Linq, extension method etc – random thoughts

I find the C# 3.0 centerpiece, the LINQ feature, to be largely beautification and syntactic sugar. I don't feel it adds any real capability that was completely "not doable" in C# 2.0.

However, Linq is indeed adopted in investment banks.

They certainly make C# source code more elegant, more natural, more like English, more "dense", but I feel in doing so they also add additional layers of complexity. I don't know if this is the best direction for a language.

LINQ requires extension methods. The compiler magically converts static methods into what appear to be instance methods of the IEnumerable type.

LINQ encourages lambdas (in place of plain old delegates). The compiler converts each lambda expression into a delegate instance.

LINQ encourages anonymous types. The compiler converts each such type into a real type whose generated name is known only to the compiler.

Q: What problems do you have with Linq?
A(from a veteran): slow for SQL

interpolating on vol surface between tenors #worked eg

Q: how do you query the vol surface at a fitted strike but between 2 fitted maturities (this Jun and next Dec), assuming today is Jan 1?

First take sigma_J*sigma_J and sigma_D*sigma_D. Say we get something like 20%^2 and 30%^2. Remember these are annualized sigmas. Suppose those maturities are 6 months and 24 months out. The raw variance values would be

Variance_J = (20%^2)* 6m/12m = .02
Variance_D = (30%^2)* 24m/12m = .18

Our assumption is that raw (total) variance is linear in time to expiry (TTL), so let's line up our raw variance values

6 months to expiry –> .02
15 months to expiry -> x
24 months to expiry -> .18

==> x = .10 (not annualized)

Annualized variance_x == x /(15/12) = .08
Annualized sigma_x = 28.28%

This estimate is better than a naïve linear interpolation like

6 -> 20%
15 -> 25% (the naive guess)
24 -> 30%
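
Here is a minimal Java sketch of this total-variance interpolation; the method and variable names are my own, not from any library.

public class VolInterp {
    /** Interpolate the annualized vol at targetYears between two fitted maturities,
     *  assuming raw variance (sigma^2 * T) is linear in time to expiry. */
    static double interpVol(double t1, double vol1, double t2, double vol2, double targetYears) {
        double var1 = vol1 * vol1 * t1;           // raw (non-annualized) variance at t1
        double var2 = vol2 * vol2 * t2;           // raw variance at t2
        double w = (targetYears - t1) / (t2 - t1);
        double rawVar = var1 + w * (var2 - var1); // linear interpolation in raw variance
        return Math.sqrt(rawVar / targetYears);   // back to an annualized sigma
    }

    public static void main(String[] args) {
        // 20% vol at 6 months, 30% vol at 24 months, queried at 15 months: about 28.28%
        System.out.println(interpVol(0.5, 0.20, 2.0, 0.30, 1.25));
    }
}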

delegate invocation list, briefly

— based on http://www.ikriv.com/en/prog/info/dotnet/Delegates.html
An invocation list is a reverse singly linked list, with the last appended "single" as the head. Like hash table buckets, this is optimized for insertion.

Invocation time: since this is a singly linked list, we need recursion. The head node's method (the last appended) starts first, invoking the 2nd-last-appended.

When you append [d1, d2, d3, d4] to an existing invocation list, the system effectively adds d1, then d2, and so on (internally it could do this faster with pointers), so I call these d1, d2 .. items "single-method delegates", or "singles".

Note the link pointer between nodes is named "PreviousNode".

Remove() works by logical equality, not by reference equality.

##implicit compiler acts on delegates

— Based on http://www.yoda.arachsys.com/csharp/events.html

# MyDelegate d1 = new MyDelegate(this.InstanceMethod1); // in a non-static context, the "this." is implicitly added by the compiler, but you can write it explicitly

# MyDelegate d2 = new MyDelegate(Class4.StaticMethod2); // the class name, if omitted, is inferred by the compiler

# when you call d1(someArg), the compiler implicitly calls d1.Invoke(someArg). You do not have to call this method explicitly. Invoke() is useful in reflection. See MSDN.

# when you perform d3 = d1 + d2, the compiler implicitly calls the static Delegate.Combine(d1, d2)
# when you perform d3 - d2, the compiler implicitly calls Delegate.Remove(d3, d2)

specify a functor-object^functor-type

Sometimes [1] you plug in a functor object; sometimes [2] you plug in a functor type into a template.

In [2], I confirmed that the concretized class would instantiate a functor object. In short, you specify the functor TYPE only; instantiation is implicit. Note a template usually specifies a type parameter. A functor type meets that requirement; a functor object doesn't.

(I seldom see a template with a NDTTP like “int” — kind of wizardry.)

[1] In the first case you usually have a regular function, not a template

One use case of functor-object-passing (pbclone) is an STL algorithm like

 #include <algorithm>
 #include <vector>

 struct myLess {
     bool operator()(int a, int b) const { return a < b; } // the functor's call operator
 };

 int main() {
     std::vector<int> V;

     std::sort(V.begin(), V.end(), myLess()); // which is the short version of

     std::sort<std::vector<int>::iterator, // 1st concrete type
               myLess                      // 2nd concrete type
              >                            // now we have a concretized function. Below are the 3 args:
         (V.begin(), V.end(), myLess());   // the last argument is a temporary functor object
 }

In this case, the functor instantiation is fairly explicit.

discount curve – cheatsheet

Based on P245 of [[complete guide to capital markets]]

The discount curve is designed to tell us how to discount to NPV $1 received x days from today, where x can be 1 to 365 * 30. If the curve value for Day 365 is 0.80, then the SPOT rate is 25% or 2500bps: 80 cents invested today becomes $1. Note the value on the curve is not bps or a spot rate, but a discount factor between 0 and 1.

Q: how do I get forward rate between any 2 dates?
A: P246. Simple formula.

The discount curve is "built" from, and must be consistent with,
+ ED CD rates (spot rates) of 1, 7, 14, .., 90 days. As loans, these loan terms always start today.
+ ED futures rates (forward rates). The loan term always lasts 3 months, but starts on the 4 IMM dates of this year, next year, the year after ….
(Note ED futures rates are determined on the market; ED CD rates are announced by the BBA.)

(Note I have ignored IR swaps, which are more liquid than ED futures beyond 10Y tenor.)

The discount curve needs to cover every single date. The first couple of months are covered by the latest announced ED CD rates, interpolating when necessary. After we pass 90 days, all subsequent dates (up to 30 years) are covered by ED futures rates observed on CME. Being forward rates, these aren't directly usable like the CD rates, but the math is still simple: if the 3-month forward rate for 3/19/2008 to 6/19/2008 is 200bps (treated here as the rate for that 3-month period), and the discount factor for 3/19 is 0.9, then the 6/19 discount factor is (0.9 / 1.02).
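
Here is a tiny Java sketch of that discount-factor chaining, my own illustration which treats each forward rate as the simple rate over its period.

public class DiscountCurveDemo {
    /** Given the discount factor at the start of a forward period and the simple
     *  forward rate over that period, return the discount factor at the end. */
    static double nextDiscountFactor(double dfStart, double periodForwardRate) {
        return dfStart / (1.0 + periodForwardRate);
    }

    public static void main(String[] args) {
        double df0319 = 0.9;   // discount factor for 3/19
        double fwd = 0.02;     // 200bps forward rate for 3/19 - 6/19
        System.out.println(nextDiscountFactor(df0319, fwd)); // about 0.882
    }
}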

##hard CORE java/c++ skills across domains

— c++
void ptr
double pointer
ptr to ref; ref to ptr
ptr to method
new/delete global overload to intercept new/delete operations
allocators
debugger
scoped typedef, namespace
vptr/vtbl
static initialization sequence

—java
Note: MOM, spring, hibernate, data grid … aren’t core java and out of scope.

reflection
threads
generics

— Below are needed only in rare situations, IMHO
memory management — leak detectors, garbage collection, weak (soft) references, memory profilers,

jdk and cglib proxy
JMX
custom class loaders
byte code engineering
JVMTI — (eg jprobe) powerful tools to analyze production systems.
real time java

memcached – a few tips from Facebook

based on http://www.facebook.com/note.php?note_id=39391378919

* distributed hash map at heart
* sits between apache web server and mysql. Intercepts requests to mysql
* onMsg() updates to clients? I don’t think so.
* TCP between memcached and apache. Each TCP connection occupies memory, so if there are 1000 web hosts, each running 100 Apache processes, you can get 400,000 TCP connections open to memcached, occupying 5GB of memory. The solution is UDP, which is connection-less.

swing timer – simple eg

This creates an anonymous ActionListener and passes it to the Timer constructor.

Q: what does the listener perform?
A: anything

new Timer(millisec2sleep, new ActionListener() {
    public void actionPerformed(ActionEvent notUsed) {
        TimerSwingWorker.this.scanTables(command2getUpdates, " <- detected ");
    }
}).start();

Spring can add unwanted (unnecessary) complexity

[5] T org.springframework.jms.core.JmsTemplate.execute(SessionCallback action, boolean startConnection) throws JmsException
Execute the action specified by the given action object within a JMS Session. Generalized version of execute(SessionCallback), allowing the JMS Connection to be __started__ on the fly, magically.
——–
Recently I had some difficulty understanding how JMS works in my project. ActiveMQ hides some sophisticated stuff behind a simplified "facade". Spring tries to simplify things further by providing a supposedly elegant and even simpler facade (JmsTemplate etc), so developers don't need to deal with the JMS API [4]. As usual, Spring hides some really sophisticated stuff behind that facade.

Now I have come to the view that such a setup lengthens the learning curve rather than shortening it. The quickest learning curve is found in a JMS project using nothing but the standard JMS API. That is seldom a good idea overall, but it surely reduces the learning curve.

[4] I don’t really know how complicated or dirty it is to use standard JMS api directly!

In order to be proficient and become a problem solver, a new guy joining my team probably needs to learn both the Spring stuff and the JMS stuff [1]. When things don't behave as expected [2], perhaps showing unexpected delays and slightly out-of-sync threads, you don't know if it's some logic in Spring's implementation, or our Spring config, or incorrect usage of JMS, or a poor understanding of ActiveMQ. As an analogy, when an alcoholic-myopic-diabetic-cancer patient complains of dizziness, you don't know the cause.

If you are like me, you would investigate _both_ ActiveMQ and Spring. Then it becomes clear that Spring adds complexity rather than reducing it. This is perhaps one reason some architects decide to create their own frameworks, so they have full control and don't need to understand a complex framework created by others.

Here’s another analogy. If a grandpa (like my dad) wants to rely on email everyday, then he must be prepared to “own” a computer with all the complexities. I told my dad a computer is nothing comparable to a cell phone, television, or camera as a fool-proof machine.

[1] for example, how does the broker thread start, at what time, and triggered by what[5]? Which thread runs onMessage(), and at what point during the start-up? When and how are listeners registered? What objects are involved?

[2] even though basic functionality is there and system is usable

major forces shaping future of java (+software industry)

* B — software vendors like Bbbill Gates
* C — open source Cccommunity, a new force significant only in the last 20 years.
* D — Dddownstream industries like Wall Street and web 2.0
* A — Aaaacademic, research including commercial R&D, and hardware innovation. This force represents the pushers of the frontier of technology
* government? not really a major force except in S’pore;). This game is largely market-driven and innovation-driven.

There are non-trivial overlaps and straddles, yet it’s good to keep the big picture simple without being simplistic.

biggest changes after JDK 1.4

Xia Rong,

(Another blog.)

We once discussed the common interview question "what changes do you consider significant in 1.5". Now I feel the concurrency changes are significant, perhaps second only to generics. Generics are a fundamental change affecting many parts of the core language.

On to concurrency. Reading a few threading books, a pattern emerges: since the late 90's, multi-threading developers had been "rolling their own" concurrency tools because the JDK threading primitives were too basic. To my surprise, a lot of these home-made components are very similar:

* thread pool
* dispatcher thread
* worker threads
* task queue
* blocking queue
* semaphore….
* condition variables

After 1.5, every Java trading system I know uses thread pools and concurrent collections. I don't know anyone still using raw wait/notify; it's too low-level.

Personally, I would point out the importance of the Lock interface and atomic variables (esp. the Compare-And-Set i.e. CAS functionality). CAS relies on *native* support, just as wait/notify does. Some people claim that the new concurrent collections derive huge performance gains partly from atomic variables.
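
As a tiny illustration of CAS (my own example, not from any of the books above), java.util.concurrent.atomic exposes compareAndSet directly:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // lock-free increment: read, then retry until the compare-and-set succeeds
        int current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current + 1));

        System.out.println(counter.get()); // 1
    }
}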

Why are threading improvements so important? I feel the biggest reason is multiprocessor machines. Chip makers spent decades increasing clock speed but have reached a plateau. The way to add more power to a machine now is to go multi-core. Multiprocessor machines are hard to fully utilize until applications are multi-threaded.

What T needs in container of T

Update: [[c++coding standard]] points out that T is usually value-like, such as a smart pointer (including an iterator) type. P95 further points out that if we disable the copier and op= then T can't go into a container.

Q1: what does your class need, as an Element in a container? In other words, if I want to put instances of my class C into an STL container, like vector, what must my class have?
A1: According to Bloomberg,
– copy constructor (so C can't be auto_ptr!)
– assignment operator (again, no auto_ptr!)
– default constructor
– destructor

P16 of [[ObjectSpace]] covers this too, as does [[eff STL]].

In addition, you almost always need operator==(). P21 [[stl tutorial]] shows this operator==() can be a free function defined for our type C.

Q: how about java collections? Similar?
A: equals() needs overriding. The default (identity-based) equals() breaks contains() and remove().
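
A minimal Java illustration (my own example): without overriding equals(), contains() falls back to reference identity.

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
    // keep hashCode consistent with equals, needed by HashSet/HashMap
    @Override public int hashCode() { return Objects.hash(x, y); }
}

public class ContainsDemo {
    public static void main(String[] args) {
        List<Point> list = new ArrayList<>();
        list.add(new Point(1, 2));
        // true only because equals() is overridden; with the default
        // identity-based equals() this would print false
        System.out.println(list.contains(new Point(1, 2)));
    }
}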

extern C on myFunc: 1st demystification

This post is about extern "C". See other posts in this blog on other uses of extern.

extern “C” is the only form in the standard. It is designed to help linking with Fortran, C or even other c++ modules. See the chapter in [[moreEffC++]]

  • Basic purpose on functions? To suppress name mangling. See [[moreEffC++]] for details. The most common usage gives an incisive example of the difference with vs. without extern-C:
    • the pre-compiled C library function has no mangling in its name.
    • without extern-C, the C++ calling code would apply mangling and fail to match the actual callee name.
    • with extern-C, the C++ calling code does not apply mangling to the callee name.
  • extern-C usually wraps function prototypes. You could make it wrap function implementations, but that's less common.
  • linker: extern is a linkage feature.
  • Interaction with #include? See P858 [[c++Primer]]

— Re-declarations?
Forget about extern-C first. Any given function prototype can appear multiple times, even in "rapid fire" like void a(); void a(); void a();. The compiler simply tolerates the repeated prototypes. Extern-C doesn't change this rule. See P366 [[c++Primer]].

(unwrapped) pointer assignment — double-pointer scenario 2 again

Before looking at double-pointers, let’s clarify the basic rules. For regular pointers p and q2, (assuming 32-bit),
Rule 1) Foo *p = 0; p = … // pointer p bitwise state change i.e. reseating
Rule 2) *q2 = …// pointer q2 still points to the same pointee, but pointee’s state changed via assignment operator (possibly overloaded)

Now, http://www.parashift.com/c++-faq-lite/const-correctness.html#faq-18.17 shows a double pointer:
Foo **q = &p; // q points to pointer p, i.e. q holds p's address

*q = … // q still points to the same 32-bit object, which is pointer p, but p's content is changed, i.e. p is reseated. This means the same as "p = …"
Now we are ready for

Rule 3) SomeType * ptr = … // ptr seating at initialization. This looks like Rule 2 but is really more like Rule 1.

public static nested class

XR

(another blog)

Just as constants can be defined in classes but better defined in interfaces, public nested static classes are better defined in interfaces. I feel this might be a best practice. It's more reusable and accessible.

More importantly, this is more readable than if defined in a class. When we define a public static nested class in an enclosing class, it appears to be tied to that class, but I feel that's an illusion. By putting the class in an interface, it's clearly presented as part of an open, shared interface and not tied to any object in the design.

However, I don't really know why we need public static nested classes at all. They look completely unnecessary.

In general, I feel anything that can go into classes or interfaces had better go into interfaces.

Coming back to nested classes, experts say static is better than non-static, if we have a choice. I agree, on the basis of readability, semantics, flexibility and loose coupling.
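
A small sketch of the idea (type names are my own). A class declared inside an interface is implicitly public and static, and reads as part of the shared contract rather than as a member of any one implementation.

interface PriceFeed {
    // implicitly public static: a small value type belonging to the contract
    class Quote {
        public final String symbol;
        public final double price;
        public Quote(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }
    }

    Quote latest(String symbol);
}

class DummyFeed implements PriceFeed {
    @Override public Quote latest(String symbol) { return new Quote(symbol, 100.0); }
}

public class NestedInInterfaceDemo {
    public static void main(String[] args) {
        PriceFeed feed = new DummyFeed();
        PriceFeed.Quote q = feed.latest("IBM"); // the nested type travels with the interface
        System.out.println(q.symbol + " " + q.price);
    }
}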

single-thread thread-pool

newSingleThreadExecutor()

Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
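
A quick usage sketch (my own example, using Java 8 lambdas for brevity):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadPoolDemo {
    public static void main(String[] args) {
        ExecutorService exec = Executors.newSingleThreadExecutor();

        // the two tasks run one after another, on the single worker thread
        exec.submit(() -> System.out.println("task 1 on " + Thread.currentThread().getName()));
        exec.submit(() -> System.out.println("task 2 on " + Thread.currentThread().getName()));

        exec.shutdown(); // previously submitted tasks still run to completion
    }
}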

transform() to emulate for_each() in STL

I feel in coding tests you can simply use a plain for() loop instead.

#include <algorithm>
#include <iterator>

// a do-nothing output iterator, so transform() can play the role of for_each()
struct null_output_iterator : std::iterator<std::output_iterator_tag,
        null_output_iterator> { // all 3 members below are needed
    template<typename T> void operator=(T const&) {} // NOT a regular operator=
    null_output_iterator & operator++() { return *this; }
    null_output_iterator & operator*() { return *this; }
} noi;

void aaaa(){ // assumes a container "input" and a callable "checkDupe" defined elsewhere
    std::transform(input.begin(), input.end(), noi, checkDupe);

    // same as

    std::for_each(input.begin(), input.end(), checkDupe);
}

humor in a manager

Now I feel humor is a universal tool and often a first-aid kit for a development team leader, esp. in fast-paced, high-pressure environments…
The higher one climbs, the more important humor becomes. Look at Obama vs. Hilary…
People say Asian managers are less good at it…
I'm not good at it. As a young engineer I was a bit flamboyant, rather unconventional, and sometimes foolhardy, but as a saving grace I was honest, reliable, quick and helpful to colleagues. I broke lots of rules, was somewhat fun to work with, but unknowingly offended countless colleagues, esp. in large corporations. In fact this is a key reason why I always feel uncomfortable in large companies with structures and protocols.
Within such constraints, I find it hard to exercise whatever little humor I have.
By contrast, in smaller companies I interact with CEO or top level managers. They know my personality and accept me. As a result, everyone else has to bear with me. In such a “freer” environment, I would relax and a tiny trickle of humor flows.  
Humorous people are usually smart and smart people are usually humorous. For me, a third thing to make a trio is cool confidence.  Smart and humorous people are usually relaxed and confident. Cool and smart people are usually humorous.…
My sister is humorous, everyone agrees. She is also very confident, and a mid-level manager in a large MNC.

Can interviewers see through our soft skill weaknesses@@

Now I feel even experienced interviewers (I had chitchat with some recently) can only judge a candidate’s non-technical qualities to a very limited extent.
Experienced interviewers try to focus on “thought process” and “problem solving skill” (soft skills) rather than practical experience and textbook knowledge. White boarding, expanding a given problem’s scope — “what if we need …”, “What if we can’t …” … By the way, my Google interviews are all of this type. Heavy on algorithm and almost language-agnostic algorithms. Some of my most in-depth interviews with good companies seem to follow similar patterns. These are not designed to be traditional technical screening as they test analysis and problem solving, under pressure.

In addition, Many interviewers ask about java’s fundamental design principles, though we developers seldom need to know those. Examples include those wildly crazy questions on hash table implementations, wait/notify, equals() and hashCode(), generics implementation in Java 1.5, beauty of dependency injection, reentrant locks .. These questions are less about “productivity/competency” or design skill and more about “depth of understanding“. Perhaps they want to know how deep the candidate thinks. I consider these non-technical qualities.

Obviously interviewers watch out for personality issues like arguing, strong opinion, arrogance, refusal to admit defeat, insufficient give and take, lack of confidence… Luckily none of my serious weaknesses are visible to them — such as my weakness in office politics.

Taking a step back, if we analyze why a person A fails on a job, obviously a lot of responsibilities are on other people. But if we focus on the weaknesses in A herself and try to classify the common weaknesses in workers, very loosely i see 2 types
1) a range of technical weaknesses
2) a range of non-technical weaknesses

I feel the art and science of candidate screening has advanced more on the first front than the 2nd front. Sharp interviewers can now find out technical weaknesses more than they can non-technical weaknesses. I'm talking about things like real (not fake) honesty, ownership and initiative, efficiency, follow-up, prioritizing, sense of urgency, can-do attitude, client-service attitude, dedication, knowledge sharing, helping out, give and take, push-back, persistence and determination, respect for cultural differences, bias, fairness, attention to detail, code testing attitude, ethical standard on code quality, professionalism, self-starter, motivation, hard working, personal sacrifice … ( … some of my strongest and weakest parts!)

I think these are important to a hiring manager and can break or delay a project, but these personal qualities are invisible to even the smartest interviewers I can imagine. However, I was told some interviewers can be unbelievably smart, beyond my imagination. I tend to feel a perceptive person can be very sensitive but she can’t be sure about personalities based on a brief conversation.

My conclusion — once you pass the technical screening, then interviewers can’t easily find out your relative weaknesses in communication, personality, attitude .. provided you don’t make an obvious mistake.

learning curve in AutoReo

(Written in May 2010)

I thought about some friend’s suggestion to “deepen your java”. Now I feel the more I complain about being slower than others and having a lot of difficulties learning the code base, the better!

Learning curve in the Autoreo team is steep, but project risk is not too high.

By contrast, in other projects I don't feel challenged because I was so familiar with the technologies and tools. I won't improve. One day, when I step into a big trading system, I would not be prepared for the complexities. Project risk would be too high, since I won't be given a lot of time to learn.

I don’t fancy learning java in a school or getting a Master’s. I worked for 12 years and studied by myself for a long time. I know what I need to learn – stuff used in real (imperfect) systems. So my current project is probably the best classroom for my personality. I’m grateful.

200,000,000 orders in cache, to be queried in real time (UBS)

Say you have up to 200,000,000 orders/ticks each day. Traders need to query them like "all IBM", or "all IBM orders above $50". Real-time orders. How do you design the server side to be scalable? (Now I think you need a tick DB like kdb.)

First establish the theoretical limits: entire population + response time + up-to-date results; choose any 2. Google forgoes one of them.

Q: do users really need to query this entire population, or just a subset?

This looks like a reporting app. Drill-down is a standard solution. Keep order details (100+ fields) in a separate server. Queries don’t need these details.

Once we confirm the population to be scanned (say, all 200,000,000), the request rate (say a few traders, once in a while) and the acceptable response time (say 10 seconds), we can allocate threads (say 200 in a pool). We can divide each query into 50 parallel queries, i.e. 50 tasks in the task queue. On average we can work on 4 requests concurrently; others queue up in a JMS queue.
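
Below is a rough Java sketch of that fan-out, purely illustrative: the in-memory partitions and the substring match stand in for whatever real partitioning scheme and predicate the system would use.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;

public class ParallelQueryDemo {
    static final ExecutorService POOL = Executors.newFixedThreadPool(200);

    /** Scan all partitions in parallel and merge the matching orders. */
    static List<String> query(List<List<String>> partitions, String symbol)
            throws InterruptedException, ExecutionException {
        List<Future<List<String>>> futures = new ArrayList<>();
        for (List<String> partition : partitions) {        // e.g. 50 partitions -> 50 tasks
            futures.add(POOL.submit(() -> {
                List<String> hits = new ArrayList<>();
                for (String order : partition) {
                    if (order.contains(symbol)) hits.add(order); // stand-in for the real predicate
                }
                return hits;
            }));
        }
        List<String> result = new ArrayList<>();
        for (Future<List<String>> f : futures) result.addAll(f.get()); // merge partial results
        return result;
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> parts = Arrays.asList(
                Arrays.asList("IBM 50", "MSFT 30"),
                Arrays.asList("IBM 49"));
        System.out.println(query(parts, "IBM")); // [IBM 50, IBM 49]
        POOL.shutdown();
    }
}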

DB techniques to borrow:
* partitioning to achieve near-perfect parallel execution
* indexing — to avoid full scans? How about gemfire indexes?
* You can also use a standard DB like sybase or DB2 and give enough RAM so no disk IO needed. But it’s even better if we can fit entire population in a single JVM — no serialization, no disk, no network latency.

L1, L2 caches? What to put in the caches? Most used items such as index root nodes.

See post on The initiator mailbox MOM model.

fd_set in select() syscall, learning notes

The first thing to really understand about select() is the fd_set. It is probably the basis of the Java Selector.

An fd_set is a struct holding a bunch [2] of file descriptors (typically implemented as a bit mask, one bit per descriptor). An fd_set instance is used as an in/out parameter to select():
– upon entry, it carries the list of sockets to check
– upon return, it carries the subset of those sockets found "dirty" [1]

FD_SET(fd, fdSet) adds a file descriptor “fd” to fdSet. Used before select().
FD_ISSET(fd, fdSet) checks if fd is part of fdSet. Used after select().

—-
Now that we understand fd_set, let's look at …
The first parameter to select() is max_descriptor. File descriptors are numbered starting at zero, so the max_descriptor parameter must specify a value that is one greater than the largest descriptor number to be tested. I see a lot of confusion in how programmers populate this parameter.

See http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=%2Frzab6%2Frzab6xnonblock.htm

[1] “ready” is the better word
[2] if you don’t have any file descriptor, then you should pass in NULL, not an empty fd_set

philosophy of java nested classes

“inner classes” = non-static nested classes. For beginners, let’s focus on typical scenarios:
1) C.java encloses N as a *private* inner class.
2) C.java has a field n1 of type N

Q: how would you use an instance of N? What does this object in memory represent?
A: a component of an object c (of type C).

Internalize this: this.n1 is very similar to a regular field, whereas the class N itself is more like a regular class than a member of C. Allow me to repeat: whenever you look at an inner class N, it's very similar to a regular class.

Q: what does this.n1 resemble most? A field in an instance (c) of C?
A: No, not quite. Suppose C has a field j (of type J) with a method j.m1(). m1() can't access C's private members, but N's methods can.
A: I think this.n1 most resembles a sophisticated, "trusted" field: a regular field like j plus the additional trust. The trust means n1's methods can access C's private members.

Q: is such a construct unnecessary, i.e. achievable using regular OO constructs?
A: No. The "trust" is hard to achieve otherwise.


Q: where do you instantiate N? How do you pass the instance around? How do you call N’s instance methods?
A: all inside C. Outside C, no other objects can see N

Q: how about a public (instead of private) inner class N2? What’s the use case or justification?
A: I would make N2's constructor private, so the outer class C is the only access point. An instance (n2) of N2 then becomes a slave object dedicated to the outer object. Note the trust still applies.
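
A minimal sketch of that "trust" (class and member names are my own):

public class C {
    private int secret = 42;          // private member of the outer class

    private class N {                 // private inner (non-static) class
        int peek() { return secret; } // trusted: can read C's private field directly
    }

    private final N n1 = new N();     // each C instance owns its own N

    public int viaInner() { return n1.peek(); }

    public static void main(String[] args) {
        System.out.println(new C().viaInner()); // 42
    }
}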

——-
Now for static nested class S declared in C.java
Q: how would you use an instance of S? What does this object in memory represent?
A: not part of a C instance.
Q: what does this instance resemble most? A static field in an instance (c) of C?
A: No. I think in some usages this instance most resembles a better static-method wrapper. You can group static methods in C, move them into S and make them non-static [1] inside S. An alternative design is the System.out pattern: put C's static methods [2] into a regular class A (converting them to non-static), and create an A instance as a static field of C. However, the static nested class S (unlike A) can access C's private static members.
A: in the case of AbstractMap.java, static nested class resembles….?

[2] they lose access to C’s private static members.

[1] I think most methods in S should be non-static. Static methods in a static nested class are a waste of time.

Q: where do you instantiate S? How do you pass the instance around? How do you call S’s instance methods?
A: instantiate in a static method in C. You can also do so in a static initializer or a non-static method.
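
And a small sketch of a static nested class reaching the outer class's private static members (again, names are my own):

public class Config {
    private static final String DEFAULT_HOST = "localhost"; // private static member of the outer class

    static class Loader {                                   // the static nested class S
        String defaultHost() { return DEFAULT_HOST; }       // allowed: S sees the outer class's private statics
    }

    public static void main(String[] args) {
        System.out.println(new Config.Loader().defaultHost()); // localhost
    }
}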

sizeof : array on stack^heap

Here are 3 int pointers (well, the last one is really an array), almost indistinguishable at runtime, but at compile time…?

#include <iostream>
using namespace std;

int main(){
    cout << sizeof(char) << endl;  // always 1 by definition
    cout << sizeof(void*) << endl; // shows 8 on my 64-bit machine, the size of any address
    cout << sizeof(int) << endl;   // shows 4 on my 64-bit machine

    int * ipNew = new int;
    cout << sizeof(ipNew) << endl; // 8, the size of an address

    int * iaNew = new int[5];
    cout << sizeof(iaNew) << endl; // 8 too, since array-new returns just an address

    int ia[] = {1,3,9};
    cout << sizeof(ia) << endl;    // 12, the size of the Entire array variable on the stack
}

At run time, ia mostly behaves as a simple pointer (into the stack) and doesn't remember the array length. However, sizeof is evaluated at compile time, and the compiler knows the array length!

transaction isolation levels, again

http://db.apache.org/derby/docs/10.3/devguide/cdevconcepts15366.html has more details

This article shows simple but clear examples — http://en.wikipedia.org/wiki/Isolation_(database_systems)#Example_queries. One of the best articles on isolation levels.

Before seeing this article, I studied isolation levels 3 times until I got them right.

If you are the CEO (or ticket master) of Northwest Airlines, a repeatable-read isolation level will assure a traveler that "what you see on screen is available for the next 10 seconds".
However, most Sybase systems default to one level lower, i.e. read-committed.
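
For reference, here is how a JDBC client would request repeatable read; the connection URL and credentials are placeholders, and the database decides how it honors the requested level.

import java.sql.Connection;
import java.sql.DriverManager;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass"); // placeholder URL

        conn.setAutoCommit(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

        // ... run the queries that must see a stable view of the seats ...

        conn.commit();
        conn.close();
    }
}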

java array memory allocation

See if anyone can shed some light. Not an interview question. Just my own curiosity.

Q: When I declare a Serializable[2] array, how does the JVM decide how many contiguous bytes to reserve for 2 array elements of type Serializable? The actual object can have any number of fields in it, so the object can occupy any amount of memory.

Now the background. In C, I remember array element addresses are always uniform and contiguous, so we can use pointer arithmetic for random access. Someone mentioned a Java array is similar to a C array. For example, int[2] would need exactly 32 bits * 2 = 64 bits, probably the same in Java as in C.

There's more circumstantial evidence that Java arrays occupy contiguous memory: when inserting an element in the middle, Java needs to shift or copy data in memory; that's why a linked list is known to be more efficient for this operation.

It's possible to measure runtime memory usage before and after declaring a Serializable[10000] but I'd like to understand how system decides how much memory to allocate.

Thanks.

yield^price^coupon

For a given bond with a pre-determined series of payouts, you discount every payout with the same YTM to get its PV. Sum up the present values and you get the price: from YTM, get price. Conversely, given the pre-determined payouts and the price, you can derive the YTM numerically.

For a given bond, the higher you set its YTM, the deeper the discount, the lower the PV and price.
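
Here is a minimal Java sketch of the YTM-to-price direction, using annual coupons as my own simplification:

public class BondPrice {
    /** Price per 100 face value, discounting every payout at the same yield. */
    static double price(double couponRate, int years, double ytm) {
        double face = 100.0;
        double pv = 0.0;
        for (int t = 1; t <= years; t++) {
            double payout = couponRate * face + (t == years ? face : 0.0); // coupon, plus face at maturity
            pv += payout / Math.pow(1.0 + ytm, t);                         // discount at the single YTM
        }
        return pv;
    }

    public static void main(String[] args) {
        // 5% coupon, 10 years: par when YTM equals the coupon, below par when YTM is higher
        System.out.println(price(0.05, 10, 0.05)); // ~100.0
        System.out.println(price(0.05, 10, 0.06)); // ~92.6
    }
}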

Q: how do people compare bonds with different coupons and maturities?
A: YTM. prices aren’t comparable. Therefore, YTM is a way to *characterize* a bond’s price, coupon rate and maturity.

Q: for a given bond, how are price and YTM determined by the market?
A: a bond trader sets a price (or YTM) on his bond. A buyer probably bids at another price. The offer is lifted when they match.

Among AA bonds for example, the higher the YTM, the more worthwhile(?) is this investment? I don’t think so. If it’s such a bargain, then the offer would be grabbed right away. Trader is forced to set the YTM so high (and price so low) perhaps because maturity is in the *distant* future.

YTM is not closely related to ROI. For a beginner, I would say it has nothing to do with return. YTM is a *discount-rate*. Across all bonds, the higher this rate, the deeper the discount. I feel YTM is mostly influenced by credit rating and also maturity. I don't think it's influenced by coupon rate: everything else being equal [Q1], a low-coupon bond is priced below a high-coupon bond, but I would guess with an identical YTM.

Everything else being equal[Q2], a CCC bond is priced below a AAA bond. Therefore, that CCC’s YTM is much higher than that AAA bond. Sellers have to set the YTM high to attract buyers. More precisely, sellers must discount the CCC’s payouts more than the AAA’s payouts.

If a trader increases his bond’s YTM, he is applying deeper discount to lower his asking price.

In reality, premium bonds are discounted deeply compared to par bonds. Remember discount rate (ie yield) is chosen by sellers and buyers. Compared to a comparable par bond (comparable rating), a premium bond has higher price, slightly higher yield and higher coupon.

I don't think FI developers need this level of familiarity, but if you need to get thoroughly comfortable with basic yield concepts, then master the reasoning behind the scenarios below.

Q: For a single bond (ie same coupon rate), what does it mean when price drops?
A: Trader is discounting the payouts more deeply, so yield rises.

Q1: 2 bonds of the same issuer and maturity but different coupon rates. Yields should match. What about price?
A: price probably follows coupon rate

Q: 2 bonds of the same issuer and the same price. What about yield and coupon rate?
A: yield reflects credit rating so should match. Price probably follows coupon rate, so the coupons should match too. All 3 attributes should show no difference.

Q2: 2 bonds of different issuers but the same maturity, selling at the same price. What can you say about yields and coupon rates?
A: the higher-coupon bond is discounted more deeply (higher yield) to give the same NPV, i.e. the same price.

dotnet PE file format, briefly

http://www.herongyang.com/C-Sharp/Intermediate-Language-CLR-Based-PE-File.html shows how to examine a real dotnet EXE file using IL Disassembler.

http://www.informit.com/articles/article.aspx?p=25350 compares to java bytecode and points out —

When you compile C# source code, you get an assembly. If it is a library, you get a .DLL file. If it is an executable, you get an .EXE file. To run a .NET program, Microsoft has taken the extra step of incorporating the .NET assembly "into" a standard Windows PE file. In my own words, the dotnet assembly is wrapped in an extended PE format. A typical C# EXE is both a .NET assembly and a PE file.

MSIL code (fairly readable, similar to java bytecode) is physically saved in the PE file.