sybase table size

sp__rowcount yourTable  — not always there

sp_spaceused yourTable

young, productive developers #GS

Regarding my young GS colleague … He learns the production codebase very fast, so

* he can understand, in depth, how and where all that complicated logic is implemented in the production system
* he can understand what users mean, even when the request arrives as a very terse email or a quick phone call
* he can quickly come up with workable solutions
* he knows which test scenarios are extremely unlikely, so we need not handle or test them
* he is faster at navigating the code base, from java to perl/shell scripts, to stored procs and big queries, to autosys and javascript, wherever business logic lives.

(I was weaker on all these aspects, for a long time.)

He’s more confident fixing the existing codebase than architecting a rewrite, given his limited experience. In this one aspect, an employer would see limited value in him.

gemfire cache listener runs in updater-thread

By default, CacheListener methods run on the cache updater thread. You can easily verify this:

      final AtomicInteger count = new AtomicInteger();
      final CacheListener listener = new CacheListenerAdapter() {
            @Override
            public void afterCreate(EntryEvent event) {
                  log.info("afterCreate({})", event);
                  // log the callback thread name to verify it is the cache updater thread
                  log.info("callback thread: {}", Thread.currentThread().getName());
                  count.incrementAndGet();
            }
      };

      final AttributesMutator mutator = region.getAttributesMutator();
      mutator.addCacheListener(listener);
      try {
            final TestBean bean = new TestBean();
            region.put("testAddCacheListener", bean);
      } finally {
            mutator.removeCacheListener(listener);
      }

how a spring app stays alive after the main() method returns

In fact, the main() method doesn’t return; it is prevented from returning because main() calls await() on the class below.


protected static class ContextClosedCompleteListener implements ApplicationListener<ContextClosedCompleteEvent> {
    private final CountDownLatch countDownLatch = new CountDownLatch(1);

    public ContextClosedCompleteListener(ConfigurableApplicationContext applicationContext) {
        applicationContext.addApplicationListener(this);
    }

    // required by spring's ApplicationListener
    public void onApplicationEvent(ContextClosedCompleteEvent event) {
        log.info("Received context closed complete event");
        countDownLatch.countDown();
    }

    // called from main()
    public void await() throws InterruptedException {
        countDownLatch.await();
    }
}
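
For illustration, a minimal sketch of the main() side (the config file name is hypothetical):

    public static void main(String[] args) throws InterruptedException {
        ConfigurableApplicationContext ctx =
                new ClassPathXmlApplicationContext("app-context.xml");
        ContextClosedCompleteListener listener = new ContextClosedCompleteListener(ctx);
        // blocks the main thread until the close-complete event fires, so main() can't return
        listener.await();
    }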

c# q[new] keyword – 3 meanings

I’m sure there are excellent online articles, but here’s my own learning note.

Java “new” always hits Heap, period. Simple and clean.

C++ has array-new, placement-new, and class-specific new. C++ “new” almost always hits the heap, except:
– placement-new
– class-specific operator-new, which you can customize to make your class use your own allocator.

C# “new” doesn’t always mean Heap. The code “new MyStruct” usually hits the stack rather than the heap. See P55[[c#precisely]] for a nice diagram.

C# actually gives 3 unrelated (mildly confusing) meanings to the “new” keyword. One is the Java meaning. The 2nd meaning is “hide a base type member” and can apply to
* virtual methods — Most confusing
* non-virtual methods
* fields
* nested classes/interfaces
* events??

“Type” can be a base interface.

3rd meaning – select-new creates an instance of an anonymous type [3]. See http://www.dotnetperls.com/select-new.  http://stackoverflow.com/questions/2263242/select-new-keyword-combination is a summary of select-new

[3] java also lets you create an instance of an anonymous type, but you need to specify the supertype.
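
On footnote [3], a quick Java contrast: the anonymous class must name its supertype explicitly.

    // Java's closest analogue: an anonymous class, which must name a supertype (Runnable here)
    Runnable r = new Runnable() {
        @Override
        public void run() {
            System.out.println("instance of an anonymous type");
        }
    };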

UBS rttp^coherence

put/get/subscribe. rmi api 1000 writes/sec
Q: any 4th operation like sql-like query?
Q: indexing?

subscription with predicates
Q: how does subscription work again? what does it do?
Q: MOM?

Master node -> replication nodes. conflated updates?

optimistic locking -> concurrent modification exception

code quality
unit test
heavy engineering
no biz user interaction
Q: quality vs quantity

— sales pitch
ease of use. coherence is a complicated product. not many people know how to use it
proven. “look, it works”

–users
eq finance ie sec lending (front office, latency sensitive)
cash eq
Exchange traded derivatives
futures
fx
clearing
STP
———–eq deriv data services
biggest client is risk batch running on a compute grid
100G in memory, 400G in DB

tech skills:
#1 Java
#2 design patterns
SQL. All tuning is handled by DBA
threads
java sockets
IO
net
restful web service

max throughput ^ max concurrency

I now feel this is a bit too academic. In practice, I feel concurrency is a technique to achieve throughput.

“max concurrency” is a system feature and probably means “full exploitation of processor and threading capacity”. I know a single SUN processor can have 4 cores and 32 concurrent kernel threads. In such a system, it’s also possible to create user-level threads so total concurrent threads can far exceed 32. I would think thousands are possible.

“max throughput” is a tangible benefit. I think it’s measured by the max number of completed requests per second. The highest throughput is usually achieved in latency-insensitive batch systems rather than synchronous or asynchronous request/response. The fastest way to transfer billions of terabytes of data is to load flash drives on trucks and ship to the receiving site – batch mode.

Max throughput requires finding the bottlenecks. Throughput of a given system is the throughput of the “narrowest” or “slowest” link on the critical path. Adding concurrency at the bottleneck spots (not other spots) improves throughput. For a realistic system, people would buy the maximum amount of memory they can afford to avoid hitting disk, put in a lot of processors, and somehow multiplex I/O. In low-latency systems, network I/O is often the biggest latency contributor. People use java NIO and exchange colocation, among other things.

Other concurrency techniques include partitioned table, parallel query, disk array, grid and cloud computing.

By “narrowest” I mean the highway analogy: the more lanes, the better the throughput. A 32-thread processor has 32 lanes. Each thread has a capacity limited by clock speed and software synchronization; that’s what I mean by “slowest”.

code tracing ^ performance tuning

In terms of practical value-add (keeping the job, managing workload, leaving time for family, keeping up with colleagues and staying out of the bottom quartile), the #1 challenge I see so far is legacy code tracing. How about latency and throughput?

market feed
eq/FX,
google, facebook
clearance

I think most performance optimization work is devoid of domain knowledge. Exceptions include
* high frequency trading
* CEP

## know what “nice” solutions work only on paper

We are talking about real technical value-adding, not interviews. Context — non-trivial trading system design. Team of 5 – 20 developers.

If you can get at least one credible [1] dose of non-trivial first-hand experience in pricing / booking / real time mark2market -> real time pnl -> real time VaR / market data feed / execution … then you can give real inputs to design discussions. Real inputs, not the typical speculative inputs of an “armchair quarterback”. In other words, if you have done it, then you can speak from experience. Speculators can’t be sure their solutions will work. A top brain can be fairly confident, but actual development invariably requires many implementation decisions. Even for a genius, it’s impossible to make an educated guess at every juncture and be right every time.

Most of the tools are made by “strangers”. No one can be sure about the hidden limitations esp. at integration time. Some experience is better than no experience.

Here is a small sample of those tools that come to mind. These aren’t trivial tools, so real experience will teach you what to *avoid*. In short, every solution works on paper, but speculators don’t know what doesn’t work.

  • temp queue
  • lock free
  • rv
  • cache triggers and gemfire updaters
  • gemfire
  • camel
  • large scale rmi
  • large scale web service
  • DB-polling as a notification framework

(I have a fundamental bias for time-honored tools like MOM, SQL, RMI..)

[1] need to be a *mainstream* system, not a personal trading system. Each trading system could have quite different requirements, so ideally you want to sample a good variety of, say, pricing engines.

##extend lead over non-trading developers

(context — IV, not performance) When you compete for trading jobs in SG, you must show convincing value-add; otherwise why should they pick you rather than some regular java guy in SG with 15 years of experience?

— ranked in terms of entry barrier

  1. c++
  2. WPF? i think in a growing number of cases, a java guy can help out on the front end
  3. socket, TCP/IP, FIX
  4. analytics
  5. low latency
  6. distributed cache
  7. swing — few java guys do swing but swing is fading
  8. db tuning
  9. threading
  10. java memory management
  11. MOM — RV, MQ
  12. biz jargons — they buy it
  13. serialization
  14. CEP??

object graph serialization — gemfire warning

http://www.gemstone.com/docs/6.5.0/product/docs/japi/com/gemstone/gemfire/DataSerializable.html

Gemfire data-serialization should not be used with complex object graphs. Attempting to data serialize graphs that contain object cycles will result in infinite recursion and a StackOverflowError. Attempting to deserialize an object graph that contains multiple reference paths to the same object will result in multiple copies of the objects that are referred to through multiple paths. See diagrams.

gemfire continuous query, briefly

The CqListener mechanism allows clients to receive notifications of changes to the cache that satisfy a query the client has registered to run on the server. The query and the CqListener objects are both associated with a single CqQuery object.

The query is run on server while the CqListener is run on the client.
You can define multiple CqListeners for a single query.

Each listener registered with a CqQuery receives all of its continuous query events. Applications can subclass the CqListenerAdapter class and override the methods for the events you need.

When a CQ is running against a server region, each update is evaluated against the CQ query on the cache-updater-thread. If either the old or the new entry value satisfies the query, the cache-updater-thread puts a CqEvent in the client’s queue. Once received by the client, the CqEvent is passed to the onEvent method of all CqListeners defined for the CQ.
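
A minimal registration sketch, assuming a client-side QueryService and a server region named /trades (both hypothetical):

    void registerCq(QueryService qs) throws Exception {
        CqAttributesFactory caf = new CqAttributesFactory();
        caf.addCqListener(new CqListenerAdapter() {
            @Override
            public void onEvent(CqEvent cqEvent) {
                // runs on the client whenever an update passes the server-side query
                System.out.println(cqEvent.getKey() + " -> " + cqEvent.getQueryOperation());
            }
        });
        CqQuery cq = qs.newCq("bigTrades", "SELECT * FROM /trades t WHERE t.qty > 1000", caf.create());
        cq.execute(); // the query runs on the server; events flow to this client-side listener
    }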

gemfire cache-callback vs cache-listener concepts

Here I’m more interested in the concepts than in the 2 implementations. If you want to study the nitty-gritty, then you had better know the event objects. I feel events are simpler than listeners.

The CacheCallback interface is the superinterface of most cache event handlers. CacheCallback has a single method, close().

Top 3 essential operations on a region – get/put/subscribe. I think subscribe means asking the cache to notify listeners via callback methods and event objects.

The order in which the listeners are added is important, because it is the order in which they are called. GemFire maintains an ordered list of the region’s listeners. You can execute getCacheListeners on the AttributesFactory or AttributesMutator to fetch the current ordered list. The AttributesMutator.removeCacheListener method takes a specified listener off the list.

The cache listeners are *sequential* — listener1 must finish its work before listener2 begins. GemFire guarantees the order, as long as the same thread calls each listener. You can put the actual work into Command objects to execute on other threads — execution order is not guaranteed.

Cache listener notifications are invoked synchronously, so they will cause the cache *modification* operation to block if the callback method blocks. This is particularly likely if you perform cache operations from a callback. To avoid this, use the Executor interface discussed in Writing Callback Methods.
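
A sketch of that hand-off idea: keep the callback cheap and do the real work on another thread (processUpdate is a hypothetical slow operation):

    final ExecutorService worker = Executors.newSingleThreadExecutor();
    CacheListener offloadingListener = new CacheListenerAdapter() {
        @Override
        public void afterUpdate(EntryEvent event) {
            final Object key = event.getKey();
            final Object newValue = event.getNewValue();
            // a Command object; ordering across tasks is no longer guaranteed
            worker.submit(new Runnable() {
                public void run() {
                    processUpdate(key, newValue); // hypothetical expensive operation
                }
            });
        }
    };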

gemfire notification event types — 2+1

2 basic event types, RegionEvent and EntryEvent, represent changes to a region or to a data entry. Both extend the CacheEvent interface.

1) The EntryEvent object contains information about an event affecting a data entry, including the key, the value before this event, and the value afterwards. Similar to a swing PropertyChange —  http://bigblog.tanbin.com/2012/03/how-to-pass-info-between-event.html

2) RegionEvent object provides information about operations that affect the whole region

3) CqEvent ie continuousQueryEvent — see other posts.

Warning — All cache events are synchronous. To make them asynchronous, you can hand them off to another thread for the callback. RoleEvent, TransactionEvent, BridgeEvent, GatewayEvent are less common.

gemfire region types — Partitioned^Replicated

http://www.gemstone.com/docs/6.5.0/product/docs/html/Manuals/wwhelp/wwhimpl/js/html/wwhelp.htm#href=SystemAdministratorsGuide/SAG%20Title/TitlePageHTML.html->DevGuide->DataRegion has many important points, but info overload… Let's focus on server cache. Server regions must have region type Partitioned or Replicated — the 2 dominant types.

1) Partitioned regions — Feels like memcached. I feel there's no real overlap between members. You are master of NY; I'm master of CA. NY + CA == universe. Optionally, you could keep a backup copy of my data.

“Partitioned regions are ideal for data sets in the hundreds of gigabytes and beyond.”

Data is divided into buckets across the members that define the region. For high availability, configure redundant copies, so that each data bucket is stored in more than one member, with one member holding the **primary**.

2) Replicated regions — Every node has a full copy. RTTP model.

“Replicated regions provide the highest performance in terms of throughput and latency.”

* Small amounts of data required by all members of the distributed system. For example, currency rate, Reps data, haircut rules, rev/execFee rules, classification rules, split rules …
* Data sets that can be contained entirely in a single VM. Each replicated region holds the complete data set for the region.
* High performance data access. Replication guarantees local access from the heap for application threads, providing the lowest possible latency for data access. Probably zero serialization cost.

3) distributed, non-replicated regions — I feel this is less useful because it can't be a server region.

* Peer regions, but not server regions or client regions. Server regions must be either Replicated or Partitioned.
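
A rough configuration sketch of the 2 dominant types, using the classic AttributesFactory API (region names hypothetical):

    // Partitioned: buckets spread across members, one redundant copy per bucket
    AttributesFactory paf = new AttributesFactory();
    paf.setPartitionAttributes(
            new PartitionAttributesFactory().setRedundantCopies(1).create());
    Region positions = cache.createRegion("positions", paf.create());

    // Replicated: every member holds the full data set
    AttributesFactory raf = new AttributesFactory();
    raf.setDataPolicy(DataPolicy.REPLICATE);
    Region fxRates = cache.createRegion("fxRates", raf.create());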

timespan^datetime ] BS and other pricing formulas

In many BS formulas, “t” looks like a point in time, e.g. “3 months before maturity”, but a point in time can’t be represented by a bare number. In most cases the “t” variable actually represents a timespan. I borrowed a networking term — Time-To-Live or TTL.

c# has these 2 concepts well separated. Datetime is a point in time, whereas Timespan is the distance between 2 datetimes.

When a pricing formula mentions … “a function of time”, it’s really a function of distance in time, i.e. a function of timespan (measured against the maturity datetime).

The “t” is usually a floating point number measured in years — clearly a timespan.
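
For example, the d1 term of the standard Black-Scholes formula uses exactly this year-measured distance to maturity:

    d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)\,\tau}{\sigma\sqrt{\tau}}, \qquad \tau = T - t \ \text{(a timespan in years, the option's TTL)}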

Cell * head[HOW_MANY_BUCKETS];

How do we master this extremely common c++ declaration?

Cell * head [HOW_MANY_BUCKETS]; // HOW_MANY_BUCKETS can be constant like 99, or a variable in the new standard — VLA

1) head is the name of the variable.
2) qq(Cell *) means “pointer to Cell”

If you use a typedef you can rewrite it as

typedef Cell * ptr2Cell;
ptr2Cell head [HOW_MANY_BUCKETS];

This reads “head is an array (size 99) of pointers”. In fact, it turns out this is the array of buckets in a hash table. There are 99 linked lists, so we have 99 list heads.

The above declaration is slightly different from:

Cell **head; // no need to specify array size.

sybase c++ driver used on wall street

http://manuals.sybase.com/onlinebooks/group-cnarc/cng1110e/dblib/@Generic__BookView

  • An application can call a stored procedure in two ways: by executing a command buffer containing a Transact-SQL execute statement or by making a remote procedure call (RPC).
  • Remote procedure calls have a few advantages over execute statements:
    • An RPC passes the stored procedure’s parameters in their native datatypes, in contrast to the execute statement, which passes parameters as ASCII characters. Therefore, the RPC method is faster and usually more compact than the execute statement, because it does not require either the application program or the server to convert between native datatypes and their ASCII equivalents.
    • It is simpler and faster to accommodate stored procedure return parameters with an RPC, instead of an execute statement. With an RPC, the return parameters are automatically available to the application. (Note, however, that a return parameter must be specified as such when it is originally added to the RPC via the dbrpcparam routine.) If, on the other hand, a stored procedure is called with an execute statement, the return parameter values are available only if the command batch containing the execute statement uses local variables, not constants, as the return parameters. This involves additional parsing each time the command batch is executed.
  • To make a remote procedure call, first call dbrpcinit to specify the stored procedure that is to be invoked. Then call dbrpcparam once for each of the stored procedure’s parameters. Finally, call dbrpcsend to signify the end of the parameter list. This causes the server to begin executing the specified procedure. You can then call dbsqlok, dbresults, and dbnextrow to process the stored procedure’s results. (Note that you will need to call dbresults multiple times if the stored procedure contains more than one select statement.) After all of the stored procedure’s results have been processed, you can call the routines that process return parameters and status numbers, such as dbretdata and dbretstatus.
  • If the procedure being executed resides on a server other than the one to which the application is directly connected, commands executed within the procedure cannot be rolled back.
  • For an example of a remote procedure call, see Example 8 in the online sample programs.

risk reversal represents … skew sentiment

RR (risk reversal) is a quantitative indication of skew. As a key soft mkt datum, it focuses on and expresses a specific aspect of market sentiment. A lot of raw market data distill into this one number.

See P 118 [[FX analysis and trading]] (Bloomberg Press) — Positive RR represents bullish sentiment because call i-vol (surge-insurance premium) is higher than the comparable put i-vol (sink-insurance premium). That means more insurers feel surge is more likely than sink. Here we assume just 2 risks exist in this simplified world — surge and sink.
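
In standard quote terms (e.g. the 25-delta risk reversal), the number is simply an i-vol spread:

    \mathrm{RR}_{25\Delta} = \sigma_{25\Delta\,\mathrm{call}} - \sigma_{25\Delta\,\mathrm{put}}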

Another source says positive risk reversal implies a skewed distribution of expected spot returns composed of a relatively large number of small down moves and a relatively small number of large upmoves. But I find this statement ambiguous.

Note for equities, put i-vol always exceeds call i-vol, so skew is always negative. See other blogs.

In a wider context, there exists a wide range of transformations (and extractions) on raw data, including historical, economic and issuer data. Techniques vary between markets. Even between 2 players on the same market the techniques can vary widely. There are entire professions dedicated to data analysis — quant strategists, quant analysts and quant traders. Among data transformations, RR is one of the most essential and is part of the industry standard.

durable subscribers, persistent store, persistent delivery

In our Front Office app, we have 100+ destinations (mostly topics) in a single weblogic JMS broker. The Weblogic web console shows the number of DURABLE subscribers on each topic — I see all zeros. Therefore, after a restart, the broker doesn’t need to de-serialize the “pending” messages from disk. However, I believe we do have a persistent store — probably a basic, common requirement in FO trading apps.

Q: when is the persistent store cleared?

Note if there’s at least one durable subscriber and there’s any pending message for her, then broker must persist it forever until either expired or delivered. (Expiration could be disabled.) Therefore durable subscribers ==> (imply) ==> persistent store on broker, but not conversely.

Persistent delivery (http://download.oracle.com/javaee/1.4/api/javax/jms/DeliveryMode.html) is unrelated to durable subscription (DS).

PD covers producer => broker.
DS covers broker => consumer.

Also,
PD is a producer setting
DS is a subscriber setting

“A message is guaranteed to be delivered once and only once … if the delivery mode of the message is PERSISTENT and if the destination has a sufficient message retention policy” Therefore, PD doesn’t always guarantee exactly-once.
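
A minimal JMS 1.1 sketch of the two settings (destination and names hypothetical):

    // PD: producer-side setting, covering the producer => broker leg
    MessageProducer producer = session.createProducer(topic);
    producer.setDeliveryMode(DeliveryMode.PERSISTENT);

    // DS: subscriber-side setting, covering the broker => consumer leg
    connection.setClientID("pnlViewer"); // durable subscriptions require a client ID
    TopicSubscriber sub = session.createDurableSubscriber(topic, "pnlSubscription");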

subject-based addressing and tibrv

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.5735&rep=rep1&type=pdf says

Subjects are arranged in a subject tree by using a dot notation, and clients can either subscribe to a single subject (e.g., finance.quotes.NASDAQ.FooInc) or use wildcards (e.g., finance.quotes.NASDAQ.*).

JMS topic is basically subject-based, or more precisely “Channel-based”, as a (selector-less) subscriber gets everything in the channel.

jms queue “can be compared to TIBCO Rendezvous Distributes Queues (RVDQ)”. I feel tibrv inbox is also like a queue. See http://www.cs.cmu.edu/~priya/WFoMT2002/Pang-Maheshwari.pdf.

eq high frequency trading IV — non-tech questions

Q: what's a limit order?
Q: what's a busted trade?
Q: have you done JVM tuning?
Q: difference between an exchange and ECN?
A: an exchange should be self-regulating; there's also a separate entity regulating it. An ECN doesn't need one.

An HF shop could connect to an exchange directly, or to a GS or MS algorithm.

Order Pricing engine? Each trader has her own “secret” trading
strategy, unknown to the IT guys. Traders simply send orders in with
the prices they want.

[10]equity HFT IV #UBS

Seems to be mostly simple (sometimes obscure) QQ questions.

Q: how does a DB index tree work?
Q: how do you record every keystroke and mouse event in swing? Codebase is large and is in production.
Q: what’s DB statistics? What’s histogram? What are they used for?
Q: When are db stats updated?
AA: not automatically, unless you use sybase Job Scheduler

Q: how do you programmatically change tx isolation level?
Q: what’s a mem barrier?
Q: how do you create an object in Perl?
Q: what’s biased locking? java6 biasedLocking^lockfree^ST-mode
Q: what does JIT mean? When does the optimized compilation happen?
AA: the code can be compiled when it is about to be executed (hence the name “just-in-time”).

Q: for a concurrent hash map, if you are iterating while another thread updates the collection…?
%%A: if both threads hit the same segment, then ConcurrentModificationException

Q: can you get a snapshot of the CHM?
%%A: I doubt it

Q: local vars are on heap or stack?
Q: what kind of queues do you use in your thread pool?
%A: default queue, probably a blocking queue

Q: what’s string intern?
AA: https://stackoverflow.com/questions/10578984/what-is-string-interning

Q: are they in the perm gen?
AA: yes before java7

Q: what happens when GC fails to free up enough memory for a request and JVM attempts to grab additional memory?
%A: the requesting thread blocks
Q: what about other threads?

Q: do you know any thread-safe array-based collection?
%A: vector, synchronize your own ArrayList, and concurrent queues.

Q: how do you protect a hashmap in concurrent app?
%A: i don’t know how to lock on a bucket (no such pointer exposed), so i lock the whole thing.

Q: can a final variable (constant) be modified?
%A: yes reflection.

Q: when would you favor a tree map over a hash map?
%A: range query

Q: serialize a hash map of array of ints?
%A: no problem.
Q: how about circular reference
%A: gemfire may have problem with it but standard java serialization can handle it

Q: what jdk data structure or algorithm have you found insufficient and you had to re-implement yourself?
%A: the queue or stack was based on arrays and exposed the intermediate nodes.
Q: to block the get() method, would you use encapsulation or inheritance?
%A: inheritance is dubious.

Q: what’s syn_sent in netstat output?
%A: part of the handshake

Q: say your have 1,000,000 orders in a map, keyed by orderId, but now you want to query by stock symbol?
%A: in both tree and hash maps, you must scan every entry. But it’s possible to build an index on symbol.
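
A sketch of that secondary-index idea (Order and ordersById are hypothetical):

    Map<String, List<Order>> bySymbol = new HashMap<String, List<Order>>();
    for (Order order : ordersById.values()) { // one full scan to build the index
        List<Order> bucket = bySymbol.get(order.getSymbol());
        if (bucket == null) {
            bucket = new ArrayList<Order>();
            bySymbol.put(order.getSymbol(), bucket);
        }
        bucket.add(order);
    }
    List<Order> ibmOrders = bySymbol.get("IBM"); // now O(1) per symbol lookup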

Session States in netstat output

State: Description
LISTEN: accepting connections
ESTABLISHED: connection up and passing data
SYN_SENT: TCP; session has been requested by us; waiting for reply from the remote endpoint
SYN_RECV: TCP; session has been requested by a remote endpoint for a socket on which we were listening
LAST_ACK: TCP; our socket is closed; remote endpoint has also shut down; we are waiting for a final acknowledgement
CLOSE_WAIT: TCP; remote endpoint has shut down; the kernel is waiting for the application to close the socket
TIME_WAIT: TCP; socket is waiting after closing for any packets left on the network
CLOSED: socket is not being used (FIXME: what does this mean?)
CLOSING: TCP; our socket is shut down; remote endpoint is shut down; not all data has been sent

gemfire write-behind and gateway queue #conflation, batched update

http://community.gemstone.com/display/gemfire60/Database+write-behind+and+read-through says (simplified by me) —
In the Write-Behind mode, updates are asynchronously written to DB. GemFire uses Gateway Queue. Batched DB writes. A bit like a buffered file writer.

With the asynch gateway, low-latency apps can run unimpeded. See blog on offloading non-essentials asynchronously.

GemFire’s best known use of Gateway Queue technology is for the distribution/propagation of cache update events between clusters separated by a WAN (thus they are referred to as ‘WAN Gateways’).

However, Gateways are designed to solve a more fundamental integration problem shared by both disk and network IO — 1) disk-based databases and 2) remote clusters across a WAN. This problem is the impedance mismatch that arises when update rates exceed the absorption capability of the downstream. For remote WAN clusters the impedance mismatch is network latency: a 1 millisecond synchronously replicated update on the LAN can’t possibly be replicated over a WAN in the same way. Similarly, an in-memory replicated datastore such as GemFire with sustained high-volume update rates provides a far greater transaction throughput than a disk-based database. However, the DB actually has enough absorption capacity if we batch the updates.

Application is insulated from DB failures as the gateway queues are highly available by default and can be configured to allow zero data loss.

Reduce database load by enabling conflation — Multiple updates of the same key can be conflated and only the final entry (containing all updates combined) written to the database.

Each Gateway queue is maintained on at least 2 nodes, internally arranged in a primary + (one or multiple) secondary configuration.
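
A toy illustration of the conflation idea (not GemFire's actual implementation): between flushes, keep only the latest value per key, then write one batch.

    class ConflatingBuffer<K, V> {
        private final Map<K, V> pending = new LinkedHashMap<K, V>();

        synchronized void enqueue(K key, V latestValue) {
            // overwrites any earlier update of the same key: conflation
            pending.put(key, latestValue);
        }

        synchronized Map<K, V> drain() {
            Map<K, V> batch = new LinkedHashMap<K, V>(pending);
            pending.clear();
            return batch; // caller writes this batch to the DB in one go
        }
    }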

gemfire pure-java-mode vs native code

http://www.gemstone.com/docs/6.5.0/product/docs/html/Manuals/wwhelp/wwhimpl/js/html/wwhelp.htm#href=SystemAdministratorsGuide/SAG%20Title/TitlePageHTML.html

GemFire Enterprise can run on platforms not listed in Supported Configurations. This is called running in pure Java mode, meaning GemFire runs without the GemFire native code.

In this mode, the following features may be disabled:
* Operating system statistics. Platform-specific machine and process statistics such as CPU usage and memory size.
* Access to the process ID. Only affects log messages about the application. The process ID is set to “0” (zero) in pure Java mode.

I think most features are available in pure-java-mode (PJM).

I think PJM means the gemfire process is not a standalone native process but more of a bunch of threads + ConcurrentHashMaps inside a JVM. A standalone process is usually c[1] code running in a dedicated address space; JVM, Perl and bash are all such processes. Such a “c” process makes system calls, which are always (as far as I know) c functions. These system calls aren’t available to PJM.

[1]C++ compiles to the same executable code as c. I guess it's assembly code.

tibrv vs EMS — answer from a Tibco insider

Q: is RV still the market leader in the “max throughput” MOM market?

Q: for trading apps, i feel the demand for max-throughput MOM is just
as high as before. In this space, is RV (and multicast in general)
facing any competition from JMS or newer technologies? I feel answer
is no.

A: I think RV is still the market leader.  29west is a strong
competitor.  TIBCO now sells a “RV in hardware appliance” partly to
address that.  I feel JMS is not really targeting max-throughput
space.

Fed^treasury department

* By law, the Fed must follow the Treasury’s instructions even if it disagrees with them
* the Fed is responsible for setting interest rates
* the Fed is the “Federal Reserve Bank” and can buy any stock or any currency — open market operations

Federal Reserve Bank can purchase through its open market operation $10 million worth of Microsoft stock. The bank pays for this
stock with its own check, a Fed check or its electronic equivalent. [The Fed check is created from thin air, there are no funds
backing this check. No funds!] The Fed check is deposited, by the stock seller, into a local bank. It is then returned to the
Federal Reserve Bank where the check is cleared and the local bank is given full credit for this deposit.

async JMS — briefly

Async JMS (onMessage()) is generally more efficient than sync (polling loop).

* when volume is low — polling is wasteful. onMessage() is like wait/notify in a producer/consumer pattern
* when volume is high — async can efficiently transfer large volume of messages in one lot. See P236 [[Weblogic the definitive guide]]

Q3: if I implement the MessageListener interface, which method in which thread calls my onMessage()[Q1]? I believe it’s similar to RMI, or doGet() in a servlet. [Q2]

Q1: is wait/notify used?
A1: i think so, else there’s a network blocking call like socket.accept

Q2: Servlets must exist in a servlet container (a network daemon), so how about a message listener? I believe it must exist in a network daemon, too.
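
A minimal sketch of the async style (process() is a hypothetical handler); the provider's own dispatch thread calls onMessage():

    MessageConsumer consumer = session.createConsumer(queue);
    consumer.setMessageListener(new MessageListener() {
        public void onMessage(Message message) {
            // invoked by the JMS provider's dispatch thread, not by our main thread
            process(message);
        }
    });
    connection.start(); // delivery begins only after start()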

distributed cache vendors

http://www.itmagz.com/index.php/technology-mainmenu/news-mainmenu-41/522-enterprise-applications-and-mid-tier-caching-.html?showall=1
Both reference data (shared read) and activity data (exclusive write) are ideal for caching. However, not all application data falls into these two categories. There is data that is shared, concurrently read and written into, and accessed by a large number of transactions.

MemcacheD is typically used in the LAMP/J stack. MemcacheD is essentially an implementation of a distributed hash table across a cluster, each with large memory. It supports only object get and put operations; there is no support for transactions or query. Also, MemcacheD does not provide any support for availability. Most Web applications primarily use MemcacheD for caching large amounts of reference data. For high availability, these applications build custom solutions.

Oracle (Tangosol) Coherence, Gemstone Gemfire are two of the leading cache providers in the enterprise application space. Coherence is a Java-based distributed cache that is highly scalable and available for enterprise applications. Like MemcacheD, Coherence supports a DHT (distributed hash table) for scalability. However, unlike MemcacheD, Coherence provides high availability by implementing replicated, quorum-based data consistency protocols. Coherence also supports distributed notifications and invalidations.

Microsoft’s offering, code-named “Velocity”, offers distributed caching functionality like the competitor products.

really simple scenario of data races

Q: Suppose we comment out increment(), is it thread-safe?
A: I feel yes. Imagine 2 threads calling the method on the same object in the same clock cycle, on 2 processors, setting it to 5 and 6. What’s the new value? I believe it’s one of the 2. This is a DataRace (http://java.sun.com/docs/books/jls/third_edition/html/memory.html#61871), but object state is still valid.

Q: Suppose we comment out set(), is it thread-safe?
A: thread-unsafe, due to lost-updates if 2 threads call increment() simultaneously.

Q: Suppose we remove the volatile keyword and comment out increment(), and keep only getter/setter ?
A: thread-unsafe. Threads can cache the new this.counter value in per-thread registers for many clock cycles (ie forever), so the new values are invisible to other threads indefinitely.

final class C {
 private volatile int counter;
 public void increment() {
    this.counter ++;
 }
 public void set(int newVal) {
    this.counter = newVal;
 }
 public int get() {
    return this.counter;
 }
}
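
For completeness, the standard lock-free fix for the lost-update problem is an atomic counter; a minimal sketch:

    final class C2 {
        private final AtomicInteger counter = new AtomicInteger();
        // the read-modify-write becomes a single atomic (CAS-based) operation
        public void increment() { counter.incrementAndGet(); }
        public void set(int newVal) { counter.set(newVal); }
        public int get() { return counter.get(); }
    }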

##better stay in front office trading@@

After I told you “better stay in trading”, I came up with a few more “predictions” affecting Wall St developer salary.

* c++ will become LESS used. I hope not much.
* c# will become more widely used. Salary will remain high.
* multi threading will become more important. Salary will remain high.
* DB will continue to be widely used, often indispensable, even though distributed caching may look like an alternative. DB expertise will continue to fetch good salary but is not the most highly valued trading system expertise.
* will FIX and connectivity work become more important? Probably yes. See next prediction.
* trading volume will increase for many asset classes, partly because of emerging markets and technological advance.
* Front office will continue to be as important if not more important than back office. I believe for 10 or 20 years front office has been more important because traders are the profit (and loss;-) center. FO salary will remain high if not higher than middle/back office.
* Will market risk (not FO) become more important? Not sure
* will high frequency trading become more widespread? Not sure
* will quant modeling become more important? Not sure

credit risk in trading vs commercial banking

Hi Rob,

I feel there are 2 major users of credit risk data.

* traders use it as part of market risk data
* lenders use it to decide on interest rate and perhaps collateral

Everyday thousands of individuals and organizations borrow money from banks, from corporate/muni bond markets, from the repo market, from the swap market … The interest rate they pay is calculated based on their credit score or credit rating. Essentially it boils down to default risk. For example, Treasuries have the lowest interest because these issuers have the maximum credit rating, even though the Greek government can default just as any issuer can.

It's impossible to “guess” how much interest to charge a borrower. It has to be calculated. Therefore, Credit risk is an indispensable system for lending institutions. Not for traders. I feel traders generally look at market risk numbers after executing a trade. That's the all-important concept of VaR.

Credit risk (and market risk) data help traders get a better idea of their risk exposure, and may prompt them to set up hedges or trade less/more aggressively going forward. However, a trader could choose to trust her intuition more than the risk numbers.

Please correct my understanding. You can simply put “no” after any incorrect observation. If you can explain it's even better. Thanks.

automated test in non-infrastructure financial app

I worked in a few (non-infrastructure) financial apps under different senior managers. Absolutely zero management buy-in for automated testing (atest).

“automated testing is nice to have” — is the unspoken policy.
“regression test is compulsory” — but almost always done without automation.
“Must verify all known scenarios” — is the slogan but if there are too many (5+) known scenarios and not all of them critical then usually no special budget or test plan.

I feel atest is a cost/risk analysis for the manager, just like a market risk system. The cost of maintaining a large atest and QA system is real. It is justified by
* risk, which ultimately must translate to costs
* speeding up future changes
* building confidence

Reality is very different on wall street. [1]
– Blackbox confidence[2] is not provided by test coverage but by being “battle-tested”. Many minor bugs (catchable by atest but not caught) will not show up in a fully battle-tested system; ironically, a system with 90% atest coverage may still show many types of problems once released. Which one enjoys confidence?

– future changes are only marginally enhanced by atest. Many atests become irrelevant. Even if a test scenario remains relevant, test method may need a complete rewrite.

– in reality, system changes are budgeted in a formal corporate process. Most large banks deal with man-months so won’t appreciate a few days of effort saving (actually not easy) due to automated tests.

– Risk is a vague term. For whitebox, automated tests provide visible and verifiable evidence and therefore provides a level of assurance, but i know as a test writer that a successful test can, paradoxically, cover up bugs. I never look at someone’s automated tests and feel that’s “enough” test coverage. Only the author himself knows how much test coverage there really is. Therefore Risk reduction is questionable even at whitebox level. Blackbox is more important from Risk perspective. For a manager, Risk is real, and automated tests offer partial protection.

– If your module has high concentration of if/else and computation, then it’s a different animal. Automated tests are worthwhile.

[1] Presumably, IT product vendors (and infrastructure teams) are a different animal, with large install bases and stringent defect tolerance level.
[2] users, downstream/upstream teams, and managers always treat your code as blackbox, even if they analyze your code. People maintaining your codebase see it as a whitebox. Blackbox confidence is more important than Whitebox confidence.

algorithmic vs high-frequency vs low-latency trading

There is a lot of innovation, complexity and key principles in these domains. Below is a novice’s observations and hypotheses…In this blog there are other posts on the same topic.

The terms “algo trading”, “high frequency” and “low latency” are related, from a developer’s perspective. Low-latency is infrastructure optimization and engineering, like juice extraction from an orange (the hardware), which requires deep technical but modest financial or analytical knowledge. Low-latency infrastructure, once constructed, is used by many trading desks across a bank, not only by HF traders or algo traders.

HF and AT both originated on the buy side. Strictly speaking sell side (including investment banks) should seldom/never engage in these risky endeavors, but I know many do, under disguise. LL is needed on both buy-side and sell-side.

Algo trading can use many different mathematical algorithms to decide when to buy or sell (or cancel an uncompleted order) how many contracts at what price, using what type of order, in conjunction with what hedging orders. If a so-called “strategy” competes on frequency, then latency is often a key differentiator. The cold war was both /ideological/ and an arms race. Low latency infrastructure is like the nuclear weapon in the trading “arms race”, whereas the math algorithm is (very loosely) the ideology.

Some trading algorithms aren’t latency sensitive, and won’t be high-frequency. They compete only on the math. I believe most machine trading systems in reality are quite fast. If it’s very low frequency, then a human trader can make the decisions herself, aided by a machine with an algorithm.

The math doesn’t need to be sophisticated or use stochastic PDE, but should match market reality and be faithfully aligned with the strategic intentions.

c# struct – bunch of primitives & still behaves as a primitive

“behave as” means

– pass by clone
– no inheritance
– no virtual method; no RTTI; no vtbl; no runtime binding
– readonly struct is immutable. State modifications only touch a temp copy. P38 [[C#precisely]]
– allocated in-situ, i.e. either on stack or as part of a “heapy-thingy”, even if you call …. new MyStruct() — P55[[c#precisely]]

** garbage collector never notices any struct instance in the heap, even if it’s a part of a “heapy-thingy” like a class instance or array.
** resembles a c++ class instance created without new or array-new. P55[[c#precisely]] has a nice diagram, showing every c#structInstance is allocated __in-situ__. By contrast, javaObjects or c#classInstances are always always heapy-thingies

I feel this is more like C than C++.

Java primitive wrappers are implemented in C# as boxing over structs. In fact, c# has no primitives. All simple types are aliases of struct types. Developers may not need to create structs, but structs are an everyday “nuisance” in C#.

mutable fields in concurrency — private volatile !! enough

say your class has an int counter field. “this.counter++” translates to 3 instructions — LOAD, increment, STORE. The executing thread could be preempted before the STORE, and stay preempted for many clock cycles. Many things could happen on a 128-core machine in the meantime.

If we are not careful, another thread could see the old value. A private field won’t help. Volatile won’t help. You must use a lock to prevent concurrent read/write access to this memory location.

By the way, remember volatile adds a special atomicity to long/double LOAD and STORE. Specifically,
* composite operations like increment are never made atomic by the volatile keyword

JMS temporary queue, tibrv and sybase temp table

Update — tibrv’s flexibility makes the JMS tmp queue look unnecessary, rigid, uptight and old-fashioned.

tmp queue (and tmp topic [4]) is used for request/reply. I feel the essential elements of JMS request/reply are
1) tmp queue
2) set and get reply-to — setJMSReplyTo(yourTmpQueue)
3) correlation id — setJMSCorrelationID(someRequestID)

Server-side (not the broker, but another JMS user) uses getJMSCorrelationID() and then setJMSCorrelationID() in the response message, then replies to getJMSReplyTo()
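
A requestor-side sketch of those 3 elements (destination names and IDs hypothetical):

    TemporaryQueue replyQueue = session.createTemporaryQueue(); // element 1: private to this session
    TextMessage request = session.createTextMessage("price IBM");
    request.setJMSReplyTo(replyQueue);        // element 2
    request.setJMSCorrelationID("req-001");   // element 3
    producer.send(request);

    MessageConsumer replyConsumer = session.createConsumer(replyQueue);
    Message reply = replyConsumer.receive(5000); // then match getJMSCorrelationID() against "req-001"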

tmp queue is like a Sybase tmp table — (I feel this is the key to understanding tmp queue.)
* created by requestor
* removed by the “system janitor”, or by requestor
* private to the session. Therefore creator threads must keep the session alive.

[4] temp topic too. p67 [[JMS]]

accrued interest – financing cost == interest gain

I had difficulty understanding accrued interest until I realized —

 

If I hold a bond for half a coupon period, then I’m entitled to half the coupon interest. However, to “finance” the purchase, I borrow funds from a bank at some interest rate.

 

In a trading desk, it’s common to consider the (accrued interest – financing cost) as a net interest, which is a profit to trader.
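
A toy example with made-up numbers: hold USD 1,000,000 face of a 6% semiannual coupon bond for half a coupon period (3 months), funded over those 3 months at 4%:

    \text{accrued interest} = 1{,}000{,}000 \times \tfrac{6\%}{2} \times \tfrac{1}{2} = 15{,}000
    \text{financing cost} \approx 1{,}000{,}000 \times 4\% \times \tfrac{3}{12} = 10{,}000
    \text{net interest} = 15{,}000 - 10{,}000 = 5{,}000 \ \text{(profit to the trader)}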

unix subshell — tips

http://www.ibm.com/developerworks/aix/tutorials/au-unixtips4/section5.html

To run a group of commands in a subshell, enclose them in parentheses. You can use redirection to send input to the subshell's standard input or to send its *collective* output to a file or a pipeline.

Since the subshell environment is a duplicate of its parent, it inherits all of the variables of the parent. But the parent shell never sees any changes that are made in the subshell environment, and the subshell, in turn, never sees any changes that are made in the parent *after* the subshell is spawned.

To save the return code of one command (among many) run inside a subshell:

(
someCommand   # placeholder for the command whose status we want
echo $? > /tmp/hamper1390848908.$$
# other commands….
) > ${LOG_FILE} 2>&1
read GMI_RET_CODE </tmp/hamper1390848908.$$

Here's a simpler alternative

(
someJavaJob
javaStatus=$?
## any other command
[ $javaStatus -ne 0 ] && exit $javaStatus
) > ${LOG_FILE} 2>&1
subshellStatus=$?
echo $subshellStatus
exit $subshellStatus

y mainframe is still important in finance and airline booking

Most profitable business in IBM is mainframe.

 

Q: why not yet replaced by newer technologies like java and unix?

A: actually mainframe can also run java and unix

 

Reason: extremely reliable. Decision-makers dare not touch them. It’s hard to recreate something so reliable. May take many years to demonstrate the reliability.

Reason: mainframe is improving too, as competing technologies improve.

Reason: Terminals (like ATM and airline booking terminals) are locked into IBM mainframe. They can’t work with newer servers. You must replace terminals along with mainframe.

Reason: mainframe processing power is still formidable.

Reason: IBM killed its mainframe competitors, chiefly Hitachi. I don’t know if this prolongs or shortens mainframe shelf-life.

ION-mkt — nested event handler

MkvPlatformListener — Platform events such as connection status.
MkvPublishListener — Data Events like publish and unpublish.
MkvRecordListener — Data Events like Record updates
MkvChainListener — Data Events like Chain updates.


class CustomChainListener implements MkvChainListener {
    public void onSupply(MkvChain chain, String record, int pos,
                         MkvChainAction action) {
        System.out.println("Updated " + chain.getName() + " record: " + record
                + " pos: " + pos + " action: " + action);
    }
}

class CustomPublishListener implements MkvPublishListener {
    public void onPublish(MkvObject object, boolean start, boolean dwl) {
        if (object.getMkvObjectType() == MkvObjectType.CHAIN
                && object.getName().equals("MY_CHAIN") && start) {
            MkvChain mychain = (MkvChain) object;
            System.out.println("Published " + mychain.getName());
            try {
                // the new() creates a listener, just like swing ActionListeners
                mychain.subscribe(new CustomChainListener());
            } catch (MkvConnectionException e) {
            }
        }
    }

    public void onPublishIdle(String component, boolean start) {
    }

    public void onSubscribe(MkvObject obj) {
    }
}
..
Mkv.getInstance().getPublishManager().addPublishListener(new CustomPublishListener());

BNP IV 2

Q: if I set -Xmx500M, will it reserve 500M? Will I get OOM after using 300M?

Q6a: how is rv CM implemented?
A6a: each receiver can register itself as a CM receiver???
Q6b: what if a message fails to deliver initially
A6b: the distributed daemon will keep a copy on disk

Q: what're those fault tolerance features?
Q: what protocol uses multicast?
A: Tibco has its own multicast protocol. Unfamiliar names

Q: you said UDP is not reliable, so why would anyone use it?
Q7a: what is the idea of spring autowiring?
Q7b: any example

Q: Why do you have to receive all the 1 million trades? Why not filtered at the sender?
Q: why are your 11GB tables not partitioned?
Q: synchronized hashmap vs concurrent hash map?

4 types of iterators in [[EffectiveSTL]]

(Note these are not “fake-types” aka dummy types in template declarations.)

[[effSTL]] P116 highlights 4 important types of iterators.

http://www.sgi.com/tech/stl/stl_vector.h shows

vector::iterator is a typedef.
vector::const_iterator is a typedef.
vector::reverse_iterator is built from the reverse_iterator class template.
vector::const_reverse_iterator is a typedef based on both:

   typedef reverse_iterator<const_iterator> const_reverse_iterator;

statement reorder for atomic variables #java,c++

see also [[effModernC++]]

–Java: Atomic is similar to volatile

See also http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile

Atomic package javadoc says:

The memory effects for accesses and updates of atomics generally follow the rules for volatiles, as stated in  The Java Language Specification, Third Edition (17.4 Memory Model):

* get() has the memory effects of reading a volatile variable.
* set() has the memory effects of writing (assigning) a volatile variable.
* lazySet() … is weaker
* compareAndSet() and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.

java IV questions found on robaustin.wikidot.com

http://robaustin.wikidot.com/50-java-interview-questions has a nice java quiz.

The readResolve() method will be called (if it exists) to supply the object. Note: the constructor of the object does not get called when readResolve() is used (so this is one way to create an object without calling its constructor).
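
The classic illustration is a serializable singleton; deserialization returns the canonical instance and no constructor runs:

    class Singleton implements java.io.Serializable {
        static final Singleton INSTANCE = new Singleton();
        private Singleton() {}

        // deserialization calls this instead of returning the freshly read copy
        private Object readResolve() {
            return INSTANCE;
        }
    }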

7) A class that supports serialization must implement serializable, if its base class is not serializable then what must it have?

17) With java collections, what’s the difference between Offer(E e) and Add(E e).

deadlock involving 1 lock (again):suspend/resume methods

if your app uses just one lock, you can still get a deadlock. Eg: ThreadM (master) starts ThreadS (slave). S acquires the lock. M calls the suspend method on S, then tries to acquire the lock, which is held by the suspended S. S is waiting to be resumed by M — deadlock.

Now the 4 conditions of deadlock:
* wait-for cycle? indeed
* mutually exclusive resource? Just one resource
* incremental acquisition? indeed
* non-Preemptive? Indeed. But if there’s a third thread, it can restart ThreadS.

In multithreaded apps, I feel single-lock deadlock is very common. Here’s another example — “S gets a lock, and wait for ConditionA. Only M can create conditionA, but M is waiting for the lock.”

Here, ConditionA can be many things, including a Boolean flag, a JMS message, a file, a count to reach a threshold, a data condition in DB, a UI event where UI depends partly on ThreadM.
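
A sketch of that second example, assuming ConditionA is a simple boolean flag:

    class SingleLockDeadlock {
        private final Object lock = new Object();
        private boolean conditionA = false;

        // ThreadS: holds the lock while polling for ConditionA, never releasing it
        void slave() throws InterruptedException {
            synchronized (lock) {
                while (!conditionA) {
                    Thread.sleep(100); // spins while HOLDING the lock
                }
            }
        }

        // ThreadM: needs the same lock to create ConditionA, so it blocks forever
        void master() {
            synchronized (lock) {
                conditionA = true;
            }
        }
    }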

bond price vs interest-rate swings, again

http://faculty.weatherhead.case.edu/ritchken/documents/Chap_9.pdf says

“As interest rates increase, bond prices[1] decline while the returns [2] from reinvested coupon receipts increase.”

Effects [1] and [2] arise for different reasons.

[1] bond sellers are *forced* to discount their bonds more deeply. Inflation (as indicated by prevailing interest rates — libor) immediately and automatically increases the discount advertisement known as “yield”. When a seller discounts more deeply, advertised price automatically declines. If a seller doesn’t deepen the discount, no one wants to buy his bond.

[2] is due to higher interest rate when you reinvest the money in higher-interest securities including a regular CD.

deadlock involving 1 lock,2 threads

See also ..

Deadlock is seldom found in real projects. Even if an observed problem is actually due to deadlock, the problem is usually resolved without the deadlock being identified. Most managers and teams are happy to see a problem disappear and not motivated to uncover root cause(s).

The classic 2-lock deadlock situation is usually hard to reproduce, which makes it a rare “natural disaster”, but possibly catastrophic. I’ve never seen it but if a deadlock were consistently reproducible, then it would be much easier to resolve.

However, if you are serious about deadlock risks in real projects, you can’t ignore one of the more common deadlock patterns — 2 threads deadlock each other with a single lock.

I’d /go out on a limb/ to claim that this pattern is arguably more common than the 2-lock textbook pattern. It may account for up to 90% of all uncovered deadlock cases.

P90 [[eff c#]] also shows a single-lock deadlock. Here’s my recollection — ThrA acquires the lock and goes into a while-loop, periodically checking a flag to be set by ThrB. ThrB wants to set the flag but need the lock.

y a regular developer need design patterns

I asked a friend familiar with design patterns. Here's his answer + my comments.

* Sometimes you need to provide an API to another developer. I feel it's often beneficial to provide a familiar API based on a familiar pattern.
* When you refactor existing code
* A lot of frameworks out there embody design patterns. If you have to create your own framework (for whatever reason), you might need to decipher and follow the same design patterns.

In all these scenarios, concept is more important than knowledge. Variations on the theme are needed.

c# delegate – a few points

See also http://bigblog.tanbin.com/2011/11/c-delegate-closest-ancestors.html

Usage inside c# foundation
– thread creation
– event fields in classes
– GUI event handler
– onMsg() type of thing
– LINQ
– anon methods, lambda expressions

Physical implementation of a delegate instance? unchanged for 20 years —  a wrapper object over a function pointer. There’s only one address for the referenced function, but possibly many wrapper instances.

AtomicReference = a global lookup table

As briefly mentioned in another post, an AtomicReference object is (seriously!) a global [1] lookup table.

Example 1: Imagine a lookup service to return the last trade on NYSE. If 2 trades execute in the same clock cycle (rare in a GHz server), then return a Collection.

This is a lookup table with just one default key, whose value is an object’s address. A simple example, but we may not need CAS or AtomicReference here; each thread just overwrites the other.

Example 2: say a high frequency trading account must maintain accurate cash balance and can’t afford to lose an update, because each cash balance change is large. CashBalance isn’t a float (there’s no AtomicFloat anyway) but an object.

This is also a single-entry lookup table. Now how do you update it lock-free? CAS and AtomicReference.

[1] In reality, the AtomicReference is not globally accessible. You need to get a reference to it, but it’s better to ignore this minor point and focus on the key concept.
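
A sketch of Example 2 (CashBalance is a hypothetical immutable class): the classic CAS retry loop, lock-free.

    final AtomicReference<CashBalance> balanceRef =
            new AtomicReference<CashBalance>(initialBalance);

    void applyCashflow(BigDecimal delta) {
        for (;;) {
            CashBalance current = balanceRef.get();
            CashBalance updated = current.plus(delta); // a new immutable object
            if (balanceRef.compareAndSet(current, updated)) {
                return; // no update lost, no lock taken
            }
            // another thread won the race; re-read and retry
        }
    }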

AtomicReference needs immutables

Another post briefly explains AtomicReference + immutable = lockfree. Can we make do without the immutable guarantee? A subtle but crucial question that deserves a focused discussion.

Look at the earlier post showing sample code. Now introduce one of the common loopholes (a reference leak, for example) into the Immutable wrapper. If I succeed with compareAndSet, but another thread actually modifies my Account object state in the meantime, then I have unknowingly put another thread’s data into the “global lookup table”.

Since my method is “committing” this object into the lookup table, the method needs control over the object state, and should prevent a “brood parasite” like this.

FX dollar amount per trade

For a large FX ECN, USD 1 – 5 mio. “99% of trades are below 5 mio. A big trade should be sliced up to minimize market impact.”

http://www.investopedia.com/articles/forex/06/interbank.asp says — the minimum transaction size that can be dealt on EBS/Reuters tends to be one million of the base currency. The average one-ticket transaction size tends to be five million of the base currency. What would be an extremely large trading amount elsewhere (remember this is unleveraged) is the bare minimum quote that banks are willing to give – and this is only for clients that trade between $10 million and $100 million and just need to clear up some loose change on their books.

USD 10 mio for institutional (client) trades in a big bank.

FX spot – voice trading still happening

http://www.investopedia.com/articles/forex/06/interbank.asp

“even though online foreign exchange trading is available, many of the large clients who deal anywhere from $10 million to $100 million at a time (cash on cash), believe that they can get better pricing dealing over the phone than over the trading platform. This is because most platforms offered by banks will have a trading size limit because the dealer wants to make sure that it is able to offset the risk.”

##[10]result-oriented lead developer skills

(… that are never tested in interviews)

Many buy-side shops need a lead developer not to manage people but get things done.

In many of my past teams, site-specific local system knowledge is 95% of the required knowledge. Generic knowledge, including portable GTD and zbs, pales in comparison.

[t] diagnosis
[t] Unravel and explain (high AND low-level) data flow and business logic. Requests come from business and other teams.
[t] how-to-achieve (H2A) a requirement — knowledge about existing system is far more relevant than generic techniques. Highly valued by employers. Budget is often allocated to requirements.

I feel the Lab49 consultants and the productive GS developers, and Sundip Jangi are good at all these areas.

[t=tracing large amounts of code is required]

pack() – quite different from revalidate, repaint etc

I believe system repeatedly executes revalidate() and repaint(), …. but pack() only once.

pack() is all about
1) sizing visual components (imprecise wording) but it has a side effect of
2) realizing a jcomponent.

Best summary (found online) for 1) —

pack() compacts the window, making it just big enough. “The pack method sizes the frame so that all its contents are at or above their preferred sizes.”

leverageRatio – option should be a *low-cost* insurance

If you think in terms of “controlling” 100 IBM shares, then an option gives you leverage over outright ownership. My CFA book explained that risk-management tools like options (and futures, swaps[3]) are like insurance products and therefore should be low-cost, i.e. cheaper than trading the underlier.

That means spot/premium (S/P) should be a high multiple, and the inverse (P/S) a low percentage. See other posts on that percentage.

The first derivative of premium against underlier is the #1 greek – delta. Note d_P/d_S != P/S. ATM delta is about 50 (i.e. 0.5), but leverage is much higher than 2!

For a stock trading at $77, it takes $77,000 to own 1000 stocks. If an ITM call option sells for $6.50, then 10 contracts cost $6.50 * 100 * 10 = $6500, and give us a comparable exposure or control. Comparable (not “similar”) because for delta hedge, the 10 call contracts amount to 60% * 1000 = 600 stocks.

The ratio of $77/$6.50 is known as option “leverage ratio” or “leverage” for short. See http://www.tradingblock.com/Learn/public/ShowLearnContent.aspx?PageID=28 What determines leverage ratio? Volatility.

[3] I feel futures is also “low-cost” insurance because all futures contracts are traded by margin, so you use a small amount of cash to “control” a large “insurance” amount.

BNP IV

Q: If I do a System.exit() in try{}?
AA: finally block skipped

Q: when will sybase do an automatic update stats

Q: what if I override equals() but not hashCode()
A: (my guess) won't bother you unless you use the hashed containers.

Q: do u use spring application context
A: (my guess) the xml bean factory is a subclass or superclass of the app context?

Q: if I have a lot of concurrent insert/delete/updates to a collection, what collection shall I use? What about linkedList?
A: iterator will throw a lot of CME.
AA: now I know ConcurrentLinkedQueue, but it has no random access.

Q: error detection in sproc?
A: mssql has try/catch? Yes

Q: uninitialized local variable?
AA: now I know fields have default initial values. Local variables don't, so compiler won't allow uninitialized local vars.

Q: if a queue receiver comes online after a while, will it get everything?

Q: in unix, how do I look into the status of a java process?

Q: once you start a task in a thread pool, how do you control it?
A: I only known A future can cancel a task. Any other control?

profit margin in equities

* there is Direct Market Access (DMA) – like brokers / trading houses that facilitate equity trades – extremely low margins
* there are block trading venues that facilitate low-impact block trades between buy side vendors – high margins
* then there are various equity derivatives, like options trading and equity swaps – that's another ball game

vol skew and thick tail — bit of insight

(a beginner’s understanding.)

As observed on any equity vol surface, implied volatilities almost always increase with decreasing strike – that is, OTM puts (dominating low strikes) trade at higher implied volatilities than OTM calls (dominating high strikes).

In theory, The naive constant-vol model [1] predicts the vol smile curve flattens to a flat line. Lognormal distribution assumes a stock is equally likely to gain 25% or drop 20% (see separate blog), but for a real stock, -20% is more likely. Real world stocks are more “panicky”.

[1] exact def? not sure. It could possibly mean “const local vol”. But i guess it means “at any moment in time, fair valuations (and bid/ask) of a chain of options should reflect the same implied volatility” — BS assumption.  When market players perceive a vol hike, it has to be a parallel shift across strikes, and all options on the chain should rise exactly those amounts to reflect the same but /heightened/ vol.

Skew steepens when markets decline — observation over the past decades (probably since 1987 crash). When markets crash, stocks can drop more than constant-vol suggests. Downside risk is higher than predicted. Option writers are insurers. Insurers know “downside” is bigger so they charge higher premiums. Buyers are actually willing to pay higher because they too know downside is more likely.

Therefore implied vol for low strike Puts are higher than ATM puts. Note bid/ask premium is obviously lower than near-the-money Puts — due to arbitrage.

If you plot log(daily close price relative) in a _histogram_, constant-vol predicts a bell-shaped histogram, but I believe a real histogram shows a thick tail.