How does a Runnable object differ from a func ptr in a C thread creation API?

(What about C# delegates?)
Q: how is a Runnable object different from a function pointer in a C thread creation API?

Let’s start from the simple case of a fieldless Command object. This command object could modify a global object, but the command object itself has no fields of its own. Such a Runnable is essentially a wrapper over a func ptr. In C, you pass the same func ptr into one (or more) thread-creation calls; in java you can hand the same stateless Runnable object to any number of new Thread objects. In both cases, the code being run lives at a single fixed address — the body of run().
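A minimal java sketch of the stateless case — one Runnable instance (my own toy example, not from any API) shared by two threads, the way one func ptr would be reused in C. The Runnable has no fields; it only mutates a "global" counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedRunnable {
    // the "global" object the stateless command mutates
    static final AtomicInteger counter = new AtomicInteger();

    // fieldless command object: no state of its own, only behavior
    static final Runnable bump = new Runnable() {
        @Override public void run() { counter.incrementAndGet(); }
    };

    public static void main(String[] args) throws InterruptedException {
        // the very same object feeds two threads, like reusing one func ptr in C
        Thread t1 = new Thread(bump);
        Thread t2 = new Thread(bump);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter.get()); // prints 2
    }
}
```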

Now let’s add a bit of complexity. A Runnable can access objects in its “creator scope” — since a Runnable object is typically created at run time inside some method, that creator method has a handful of local variables in scope, and the Runnable’s code can refer to them.

Assuming a 32-bit platform:

Q: do these variables (more precisely, the objects AND their 32-bit references) live on the stack of the run() method, or as hidden fields of the Runnable object?
%%A: no observable difference. Since the java compiler requires these captured variables to be final (i.e. “stable”), it can safely copy them into implicit fields of the Runnable object.
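A quick way to see the hidden field — in this sketch (names are my own), an anonymous Runnable captures a local variable of its creator method; javac copies it into a synthetic field of the anonymous class, which reflection can observe:

```java
public class Capture {
    static Runnable makeGreeter(final String name) {
        // 'name' is a local of the creator method; the compiler copies it
        // into a hidden (synthetic) field of the anonymous class instance,
        // so the Runnable stays valid after makeGreeter() returns
        return new Runnable() {
            @Override public void run() {
                System.out.println("hello " + name);
            }
        };
    }

    public static void main(String[] args) {
        Runnable r = makeGreeter("world");
        // the creator's stack frame is gone, but the captured value survives
        r.run(); // prints "hello world"
        // the hidden field shows up via reflection (typically named val$name)
        System.out.println(r.getClass().getDeclaredFields().length); // at least 1
    }
}
```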

Does a func ptr in C also have access to objects in creator-scope? I think so. Here’s an example taken from my blog on void ptr —

  void thread( void (*fvvp)(void *), void *vptr ); // creates and starts a new thread. FVVP = a Function returning Void, accepting a Void Pointer

Above is a thread API with 2 arguments — a function pointer and a void ptr. The first argument points to a function accepting a void ptr, so the 2nd argument (data from the creator scope) feeds into that function.
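The java counterpart of that void* argument, as I understand it, is an explicit field on the Runnable, populated via its constructor — a sketch with hypothetical names (Task, context):

```java
public class VoidPtrAnalogy {
    // the explicit field plays the role of the C void* argument
    static class Task implements Runnable {
        private final StringBuilder context; // "creator-scope" data
        Task(StringBuilder context) { this.context = context; }
        @Override public void run() { context.append("done"); }
    }

    public static void main(String[] args) throws InterruptedException {
        StringBuilder sb = new StringBuilder();
        Thread t = new Thread(new Task(sb)); // analogous to thread(fvvp, &sb) in C
        t.start();
        t.join();
        System.out.println(sb); // prints "done"
    }
}
```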

gemfire GET subverted by (cache miss -> read-through)

get() is usually a “const” operation (c++ jargon), but gemfire can intercept the call and write into the cache. Such a “trigger” sits between the client and the DB: upon a cache miss, it loads the missing entry from the DB.

When Region.get(Object) is called for a region entry that has a null value, the load method of the region’s cache loader is invoked. The load method *creates* the value for the desired key by performing an operation such as a database query.

A region’s cache loader == a kind of DAO to handle cache misses.

In this set-up, gemfire functions like memcached, i.e. as a DB cache. Just like the PWM JMS queue browser story, this is a simple point but not everyone understands it.

When an application requests an entry (for example, key1) that is not already present in the cache, and read-through is enabled, gemfire will load the required entry (key1) from the DB. The read-through functionality is enabled by defining a data loader for a region. The loader is called on cache misses during the get operation; it populates the cache with the new entry value in addition to returning the value to the calling thread.
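The read-through pattern itself can be sketched without gemfire. Below, the loader function stands in for the region’s cache loader / DB query — a generic illustration of the pattern, not gemfire’s actual CacheLoader API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // stands in for the DB query

    ReadThroughCache(Function<String, String> loader) { this.loader = loader; }

    // on a miss, invoke the loader, populate the cache, and return the value;
    // on a hit, serve straight from the cache without touching the loader
    String get(String key) {
        return cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        ReadThroughCache c = new ReadThroughCache(k -> "value-for-" + k);
        System.out.println(c.get("key1")); // miss -> loaded from the "DB"
        System.out.println(c.get("key1")); // hit -> served from cache
    }
}
```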

STL creators’ design priority

Simplicity and ease of use were a low priority for the STL creators; efficiency, in both time cost and space cost, was perhaps the #1 design priority. I guess STL was designed to be usable in telecom and other real-time embedded equipment with limited memory and CPU resources.

Whenever people optimize for efficiency, they *specialize* their components for various scenarios. I think that explains the proliferation of iterator categories. Also, many algorithms exist in overlapping forms, both as member functions and as free functions (e.g. list::sort vs std::sort, since std::sort requires random-access iterators).

Templates themselves were invented for efficiency, at the cost of simplicity, and STL exemplifies template usage: efficiency in terms of code reuse and algorithm abstraction, with no run-time dispatch overhead.

The father of STL doesn’t believe in OO. Is there any OO design in STL? Largely no: STL was designed much like C (without classes), for maximum algorithm efficiency and maximum abstraction; classes and templates were employed merely as a means to that end.

##20 specific thread constructs developers need

Hey XR,

We both asked ourselves the same question a few times —
Q: why do multi-threading java interview questions touch on almost nothing besides synchronized, wait/notify + some Thread creation?

Well, these are about the only concurrency features at the language level [1]. For an analogy, look at integrated-circuit designers — they have nothing but transistors, diodes and resistors to play with, yet they build the most complex microchips with nothing else.

I feel the 2nd level of *basic*, practical concurrency constructs would include

* basic techniques to stop a thread
* event-driven design patterns, listeners, callbacks, event queues
* nested class techniques — rather useful in concurrent systems
* join()
* deadlock prevention
* timers and scheduled thread pools
* thread pools and task queues
* concurrent collections
* consumer/producer task queues
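To illustrate a few of these at once (consumer/producer, task queues, concurrent collections), here is a small sketch using java.util.concurrent’s BlockingQueue — my own example, nothing canonical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    // producer puts 1..n on a bounded queue; consumer takes them and sums them
    static int transfer(int n) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // tiny capacity forces blocking
        int[] sum = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i); // blocks when the queue is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) sum[0] += queue.take(); // blocks when empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join();  consumer.join(); // join() gives us visibility of sum[0]
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transfer(5)); // prints 15
    }
}
```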

More advanced but still comprehensible constructs
* interrupt handling
* reader writer lock
* exclusion techniques by using local objects
* exclusion techniques by MOM
* immutables
* additional features in java.util.concurrent
* futures and Callable
* latch, exchanger, barrier etc
* AtomicReference etc, volatile
* lock free
* counting semaphore
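As a taste of the java.util.concurrent constructs above, a minimal CountDownLatch sketch (my own example): each worker counts down once, and the caller awaits all of them before reading the result:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // each worker does its piece of "work", then counts down;
    // the caller blocks on await() until all n workers have finished
    static int runWorkers(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        AtomicInteger total = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                total.incrementAndGet(); // the "work"
                done.countDown();
            }).start();
        }
        done.await(); // happens-before: workers' writes are visible after this
        return total.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(4)); // prints 4
    }
}
```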

[1] 1.5 added some foundation classes such as Lock, Condition, atomic variable …

multi-threading with onMessage()

Update — another tutorial says: A JMS Session is a single-threaded context for producing and consuming messages. Once a connection has been started, any session with a registered message listener is *dedicated* to the thread [1] of control that delivers messages to it. It is erroneous for client code to use this session or any of its constituent objects from another thread of control. The only exception to this is the use of the session or connection close method.

In other words, other threads should not touch this session or any object obtained from it.

[1] I think such a session is used by one thread only, but who creates and starts that thread? If I write the jms listener app, then it has to be started by my app. [[java enterprise in a nutshell]] P329 has sample code but it looks buggy. Now I think javax.jms.Connection.start() is the answer. The API says it starts delivery of *incoming* messages only, not outgoing! This method is part of a process that
* creates a lasting connection
* creates the session(s) in memory (multiple sessions per connection)
* perhaps starts the thread that will call onMessage()

Another source says:
Sessions and Threading

The Chat application uses a separate session for the publisher and subscriber, pubSession and subSession, respectively. This is due to a threading restriction imposed by JMS: according to the JMS specification, a session may not be operated on by more than one thread at a time. In our example, two threads of control are active: the default main thread of the Chat application and the thread that invokes the onMessage() handler. The thread that invokes the onMessage() handler is owned by the JMS provider (??). Since the invocation of the onMessage() handler is asynchronous, it could be called while the main thread is publishing a message in the writeMessage() method. If both the publisher and subscriber had been created by the same session, the two threads could operate on these methods at the same time; in effect, they could operate on the same TopicSession concurrently — a condition that is prohibited.
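The same single-thread-confinement rule can be emulated outside JMS. In the sketch below, Session is a hypothetical stand-in for a non-thread-safe object (not javax.jms.Session), and a single-threaded executor plays the role of the provider’s dedicated delivery thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SessionConfinement {
    // hypothetical stand-in for a non-thread-safe, JMS-like session
    static class Session {
        private int sent = 0;
        void send(String msg) { sent++; } // deliberately not thread-safe
        int sentCount() { return sent; }
    }

    static int sendAll(String[] msgs) throws InterruptedException {
        Session session = new Session();
        // all access to 'session' is funneled through this one thread,
        // so no synchronization inside Session is needed
        ExecutorService owner = Executors.newSingleThreadExecutor();
        for (String m : msgs) owner.execute(() -> session.send(m));
        owner.shutdown();
        owner.awaitTermination(5, TimeUnit.SECONDS); // makes 'sent' visible to us
        return session.sentCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sendAll(new String[] {"a", "b", "c"})); // prints 3
    }
}
```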

currenex usage in a FX dealer bank

The bank uses Currenex as generic infrastructure to build a private ECN, so it can stream customized live quotes to its private clients. None of the data is public.

I know this private ECN supports quote dissemination (“advertising”). Not sure about other functionalities.

##where implementation complexities lie: DB, java …

You may be familiar with all the generic tech below, but when you take over a production codebase as its new “owner”, sooner or later you have to wade through megabytes of code and learn to trace the flow. To me, that’s the essence of complexity.

For each DB column, how does data flow from end to end? It takes unknowable effort to document even one column in one table. In a DB-centric world (like GS), these are often the first step in understanding (or architecting) a system, whether a tiny sub-system or a constellation of systems:

* DB join logic — often directly reflects real-world relationships
* DB constraints — they affect system behavior
* DB view definitions
* stored procs, triggers
* html — can implement categories, multiple choices, page flow, data clumps…
* config files that control java/script/proc
* everything in JIL
* autosys dependencies
* java/c++/c# — the “main” implementation languages in typical financial apps, where most of the code logic lives

Tracing code flow means we must trace through all of the above. Ironically, interviews can’t easily test how fast someone can “figure out” how data flows through a system. I feel this could be the hardest thing to find out during an interview.

This is yet another reason to stay in java (or c++/c#) and avoid spending too much time on scripting. Over time you might become better at tracing code flow through java.

I used to think I could figure out the flow by myself. Now I know that in a typical financial system this is impractical. You must ask the experienced developers.

scope-exit destructor call

When you exit a scope, the objects created on the current stack frame are destroyed (with RVO as one exception).

* stackVars — variable scope matches object lifetime, so the objects are destroyed.
* pbclone (pass-by-value) params — just like stackVars.
* pbref (pass-by-reference) params — the variable has scope, but the object is not on the current stack frame and has a longer lifetime; it is not destroyed.
* address passed in — the param is like a local pointer variable. The pointer itself is destroyed when it goes out of scope, but the pointee is not, and should not be.
* heapy thingy instantiated locally — the pointer variable is an auto/local/stackVar, so it is destroyed on scope exit, but the pointee is not! This is the classic memory leak: whoever calls new should call delete.