How RTS achieved 400-700 KMPS #p2p

Official benchmark result – 700 KMPS per instance of Rebus or parser. Here are some enabling features:

  • [! iii] mostly single-threaded apps. Some older parsers are multi-threaded, but each thread operates in single-threaded mode. No shared mutable state :)
    • In the order book replication engine, we use lockfree in one place only
  • ! (ST mode loves this) Stateless [1] parser with a small footprint. Only the downstream component (Rebus) handles cancels, busts, modifications etc. So we can run many parser instances per machine, or many threads per instance, fully utilizing the CPU cores. See https://haydenjames.io/linux-performance-almost-always-add-swap-space/
  • [! ii] allocation avoided — on millions of Output message objects. A pre-allocated ring buffer eliminates new(). Very few and small STL containers; mostly arrays. DTOs pre-allocated @SOD #HFT
  • ! allocation avoided — on millions of Input message objects, thanks to reinterpret_cast() on pointers: nearly zero-copy. See reinterpret_cast^memcpy in raw mktData parsing
  • ! allocation avoided — custom containers replace the STL containers, since those all allocate from the heap
  • ! p2p messaging beats MOM 
  • ! Socket buffer tuning — to cope with bursts. 64-256MB receive buffer in kernel; 4MB UserModeBuffer
  • low memory footprint. Most parsers use around 60MB (mine were 100 to 150MB). I think the small footprint brings many latency benefits.
  • epoll — replacing select(), for a two-orders-of-magnitude improvement in throughput
  • buffering of Output list (of messages). I guess this hurts latency but enhances throughput
  • Very fast duplicate seq check, without large history data — a hot function
  • large containers like the ring buffer are pre-sized. No reallocation.
  • mostly array-based data structures — cache-friendly
  • Hot functions use pbref or RVO or move semantics, minimizing stack allocations
  • aggressively simplified parsing. Minimal logic
  • Special Logger to reduce I/O cost
  • Sharding to speed up symbol lookup
  • kernel bypass : RTS
  • No virtual functions — enhancing data cache and inlining .. 3 overheads@vptr #ARM
  • Inline
  • — speed with control
  • Best of both to reduce the real risk of our connection becoming slow or lossy
  • Depth monitor to detect missed messages
  • Capacity tracker to monitor mps rates
  • Missed seq reporting
  • [!=one of the really effective techniques in RTS]
  • [i=presented in multiple (!!, !!!) interviews]

[1] the parser does remember sequence numbers. In fact, sequence number management is crucial. Before retrans support was added to the parser, we never sent any retrans request, but we did keep track of gaps. Mostly we used the gap-tracker to check duplicates across Line A/B. We also relied on our sequence number state to handle sequence resets.
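The gap-tracker idea in [1] can be sketched in a few lines. This is my illustrative reconstruction, not the RTS code, and all names are invented. The trick behind "fast duplicate check without large history": keep only the next expected sequence number plus a (normally tiny) set of gap sequences, instead of a record of every seqnum ever seen.

```java
import java.util.TreeSet;

// Hypothetical sketch: duplicate/gap check across redundant lines (Line A/B).
class SeqTracker {
    private long nextExpected = 1;
    private final TreeSet<Long> gaps = new TreeSet<>(); // missed seqs, usually tiny

    /** Returns true if seq was already processed (a duplicate), false if new. */
    boolean isDuplicate(long seq) {
        if (seq >= nextExpected) {
            // record any newly opened gap, then advance the high-water mark
            for (long s = nextExpected; s < seq; s++) gaps.add(s);
            nextExpected = seq + 1;
            return false;
        }
        // seq below the high-water mark: it is new only if it fills a known gap
        return !gaps.remove(seq);
    }

    boolean hasGaps() { return !gaps.isEmpty(); }
}
```

A duplicate from Line B is detected in O(log g) where g is the number of open gaps, regardless of how many messages have been processed.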

async messaging-driven #FIX

A few distinct architectures:

  • architecture based on UDP multicast. Most of the other architectures are based on TCP.
  • architecture based on FIX messaging, modeled after the exchange-bank messaging, using multiple request/response messages to manage one stateful order
  • architecture based on pub-sub topics, much more reliable than multicast
  • architecture based on one-to-one message queue

strategic value of MOM]tech evolution

What’s the long-term value of MOM technology? “Value” to my career and to the /verticals/ I’m following such as finance and internet. JMS, Tibrv (and derivatives) are the two primary MOM technologies for my study.

  • Nowadays JMS (tibrv to a lesser extent) seldom features in job interviews and job specs, but the same can be said about servlet, xml, Apache, java app servers .. I think MOM is falling out of fashion but is not a short-lived fad; it will remain relevant for decades. Seeing this longevity, I decided to invest my time.
  • Will socket technology follow the trend?
  • [r] Key obstacle to MOM adoption is perceived latency “penalty”. I feel this penalty is really tolerable in most cases.
  • — strengths
  • [r] compares favorably in terms of scalability, efficiency, reliability, platform-neutrality.
  • encourages modular design and sometimes decentralized architecture. Often leads to elegant simplification in my experience.
  • [r] flexible and versatile tool for the architect
  • [rx] There has been extensive lab research and industrial usage to iron out a host of theoretical and practical issues. What we have today in MOM is a well-tuned, time-honored, scalable, highly configurable, versatile, industrial strength solution
  • works in MSA
  • [rx] plays well with other tech
  • [rx] There are commercial and open-source implementations
  • [r] There are enterprise as well as tiny implementations
  • — specific features and capabilities
  • [r] can aid business logic implementation using content filtering (doable in rvd+JMS broker) and routing
  • can implement point-to-point request/response paradigm
  • [r] transaction support
  • can distribute workload as in 95G
  • [r] can operate in-memory or backed by disk
  • can run from firmware
  • can use centralized hub/spoke or peer-to-peer (decentralized)
  • easy to monitor in real time. Tibrv is subject-based, so you can easily run a listener on the same topic
  • [x=comparable to xml]
  • [r=comparable to RDBMS]

HFT mktData redistribution via MOM #real world

Several low-latency practitioners say MOM is unwelcome due to added latency:

  1. The HSBC hiring manager Brian R was the first to point out to me that MOM adds latency. Their goal is to get the raw (market) data from producer to consumer as quickly as possible, with minimum hops in between.
  2. 29West documentation echoes “Instead of implementing special messaging servers and daemons to receive and re-transmit messages, Ultra Messaging routes messages primarily with the network infrastructure at wire speed. Placing little or nothing in between the sender and receiver is an important and unique design principle of Ultra Messaging.” However, the UM system itself is an additional hop, right? Contrast a design where sender directly sends to receiver via multicast.
  3. Then I found that the RTS systems (not ultra-low-latency) have no middleware between the feed parser and the order book engine (named Rebus).

However, HFT doesn’t always avoid MOM.

  • P143 [[all about HFT]] published 2010 says an HFT such as Citadel often subscribes to both individual stock exchanges and CTS/CQS [1], and multicasts the market data for internal components of the HFT platform. This design has additional buffers inherently. The first layer receives raw external data via a socket buffer. The 2nd layer components would receive the multicast data via their socket buffers.
  • SCB eFX system is very competitive in latency, with Solarflare NIC + kernel bypass etc. They still use Solace MOM!
  • Lehman’s market data is re-distributed over tibco RV, in FIX format.
  • A major hedge fund was targeting sub-10 microsec latency and decided Solace was unacceptable; Aeron was chosen instead

[1] one key justification to subscribe redundant feeds — CTS/CQS may deliver a tick message faster than direct feed!

MOM+threading Unwelcome ] low latency@@ #FIX/socket

Piroz told me that trading IT job interviews tend to emphasize multi-threading and MOM. Some use SQL too. I now feel all of these are unwelcome in low latency trading.

A) MOM – see also the earlier section on HFT mktData redistribution via MOM. For order processing, FIX is the standard. FIX can use MOM as its transport, but that is unpopular, and unfamiliar to me.

B) threading – Single-Threaded-Mode (STM) is generally the fastest, in theory and in practice (though my observed sample size is small). I feel the fastest trading engines are STM, with no shared mutable state. The new Nasdaq platform (in java) is STM.

MT is OK if the threads don't compete for resources like CPU, I/O or locks. Compared to STM, most lockfree designs introduce latency, such as retries and additional memory barriers; those barriers also inhibit compiler optimizations that single-threaded code enjoys by default.
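To see why lockfree isn't free, here is the classic CAS retry loop, sketched in java (my illustration, not code from any system mentioned above). Under contention, compareAndSet fails and the loop spins; the retries plus the implied memory barriers are costs a single-threaded counter never pays.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal lock-free counter: the compare-and-set retry loop.
class LockFreeCounter {
    private final AtomicLong value = new AtomicLong();

    long increment() {
        long cur;
        do {
            cur = value.get();
            // another thread may win the race here, forcing a retry
        } while (!value.compareAndSet(cur, cur + 1));
        return cur + 1;
    }

    long get() { return value.get(); }
}
```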

C) SQL – as stated elsewhere, flat files are much faster than relational DB. How about in-memory relational DB?

Rebus, the order book engine, is in-memory.

MOM advantage over RMI

A Singapore ANZ telephone interviewer (Ivan? 2011?) drilled me down — “just why is MOM more reliable than a blocking synchronous call without a middleware?” I feel this is a typical “insight” question, but by no means academic or theoretical. There are theories and (more importantly) there is empirical evidence. Here I will just talk about the theoretical explanations.
Capacity — MOM can hold far more pending requests than a synchronous service. An RMI or web server has a limited queue. A TCP socket can also hold requests in a queue, but again a limited one. In contrast, a MOM queue can live on disk or in the broker host’s memory: hundreds or possibly millions of times higher capacity.
A burst of requests can bring down an RMI system even if it is lightly loaded 99% of the time.

But what if the synch service has enough capacity so no caller needs to wait? I feel this is wishful thinking. For the same hardware capacity, MOM can support 10x or 100x more concurrent requests. For now, let’s assume capacity isn’t the issue.

Long-running — if some of the requests take a long time (like a few sec) to complete then we don’t want too many “on-going” tasks at the same time. They compete for CPU/memory/bandwidth and can reduce stability and reliability. Even logging can benefit from async MOM design.
But again let’s assume the requests take no time to complete.
ACID — Reliable MOM always persists messages before replying with a positive ACK.

Jump IV Aug 2012

Q1: pros and cons of vector vs linked list?
Q1b: Given a 100-element collection, compare performance of … (iteration? Lookup?)

Q: UDP vs TCP diff?
%%A: multicast needs UDP.

Q: How would you add reliability to multicast?

Q: How would you use tibco for trade messages vs pricing messages?
%%A CM for trade and non-CM for price?

Q5: In your systems, how serious was data loss in non-CM multicast?
%%A: Usually not a big problem. During peak volatile periods, messaging rates could surge 500%, and data loss would worsen.

Q5b: how would you address the high data loss?
%%A: test with a target message rate. Beyond the target rate, we don’t feel confident.

Q7: how is order state managed in your OMS engine?
%%A: if an order is half-processed and pending the 3rd reply from the ECN, the single thread would block.
AA: in ETS, every order is persisted till EOD (to support busts/corrections by the exchange, or cancels by clients)

Q7b: even if multiple orders (for the same security) are waiting in the queue?
%%A: yes. To allow multiple orders to enter the “stream” would be dangerous.

Now I think the single thread should pick up and process all new orders and keep all pending orders in cache. Any incoming exchange messages would join the same task queue (or a separate task queue) – the same single thread.
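A toy sketch of this single-thread design, with invented names: one task queue fed by both new client orders and exchange replies, one thread draining it, and a cache of half-processed (pending) orders so the thread never blocks on a slow ECN.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical single-threaded OMS core: all events join one task queue.
class SingleThreadOms {
    enum State { SENT, FILLED }

    static class Task {
        final String orderId;
        final boolean fromExchange; // false = new client order, true = ECN reply
        Task(String orderId, boolean fromExchange) {
            this.orderId = orderId;
            this.fromExchange = fromExchange;
        }
    }

    final BlockingQueue<Task> tasks = new ArrayBlockingQueue<>(1024);
    final Map<String, State> pending = new HashMap<>(); // orders awaiting a reply

    // The single thread's loop body: drain whatever is queued, never block
    // on any individual order.
    void drain() {
        Task t;
        while ((t = tasks.poll()) != null) {
            if (t.fromExchange) {
                pending.put(t.orderId, State.FILLED); // reply completes the order
            } else {
                pending.put(t.orderId, State.SENT);   // send to ECN, park, move on
            }
        }
    }
}
```

Because one thread owns both the queue and the pending-order map, there is no shared mutable state and no locking on the hot path.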

3 main infrastructure teams
* exchange connectivity – order submission
* exchange connectivity – pricing feed. I think this is incoming-only, probably higher volume. Probably similar to Zhen Hai’s role.
* risk infrastructure – no VaR mathematics.

request/reply in one MOM transaction

If, in one transaction, you send a request and then read the reply off the queue/topic, I think you will get stuck. With the commit pending, the send never reaches the broker, so you (the requester) deadlock with yourself forever.

An unrelated design of transactional request/reply is “receive, then send the 2nd request” within one transaction. This is obviously for a different requirement, but is known to be popular. See the O’Reilly book [[JMS]].

tibRV java app receiving msg – the flow

The flow:
* my app creates “my private queue”, calling the no-arg constructor TibrvQueue(void)
* my app (or the system) creates a callback-method object, which must implement onMsg(TibrvListener,TibrvMsg). This is not a bean and definitely not a domain entity.
* my app creates myListener, calling its constructor with
** a callback-method object
** myPrivateQueue
** a subject
** other arguments

* my app calls queue.dispatch(void) or poll(void). Unlike JMS, NO message is returned, not on this thread!
* Messages arrive and onMsg(..) runs in the dispatch thread, asynchronously, with
** first arg = myListener
** 2nd arg = the actual message

————————
Component – Description

Event Object – Represents program interest in a set of events, and the occurrence of a matching event. See Events on page 81.

Event Driver – The event driver recognizes the occurrence of events, and places them in the appropriate event queues for dispatch. Rendezvous software starts the event driver as part of its process initialization sequence (the open call). See Event Driver on page 83. No java API, I believe.

Event Queue – A program ("my app") creates (private) event queues to hold event (message) objects in order until the program ("my app") can process them.

Event Dispatch Call – A Rendezvous function call that removes an event from an event queue or queue group, and runs the appropriate callback function to process the event. I think my app calls this in my app's thread.

Callback Function – A program ("my app") defines callback functions to process events asynchronously. See Callback Functions on page 89.

Dispatcher Thread – Programs ("my app") usually dedicate one or more threads to the task of dispatching events. Callback functions run in these threads. class com.tibco.tibrv.TibrvDispatcher extends java.lang.Thread. The Event Dispatcher Thread is a common design pattern in java, c#…

tibrv supports no rollback – non-transactional transport

Non-transactional “transports” such as TIBCO Rendezvous Certified and socket do not allow for message rollback so delivery is not guaranteed.  Non-transactional transports can be problematic because the operation is committed to the transport immediately after a get or put occurs, rather than after you finish further processing and issue a Commit command.

JMS does support transactional “transport”, so you can “peek” at a message before issuing a Commit command to physically remove it from the queue.

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.ebd.eai.help.src/configuring_a_rendevous_reliable_session_and_transport.htm

Fwd: jms connection sharing

Reading [[java message service]] I realized that if a topic has 1000 concurrent client connections (a few thousand were achievable with year-2000 technology), we could have several thousand concurrent applications receiving a new message. JMS connection sharing dramatically increases throughput and bandwidth.

Many middleware products support jms connection sharing.

RV vs 29West

http://kirkwylie.blogspot.com/2009/03/best-effort-delivery-fails-at-worst.html is a personal blog I find fairly trustworthy.

As a reminder, distributed systems operate a series of broker instances, where each one is located in the same process, VM, or OS as the code participating in the message network. For example, it might be a library that's bound into your process space (like 29West), or it might be a daemon that is hosted on the same machine (like Rendezvous). These distributed components then communicate over a distributed, unreliable communications channel (such as Ethernet Broadcast or IP Multicast).

1 JMS queue, multiple sessions and threads

My friend Kunal Khosla pointed out developers can increase queue reading performance by connecting 2 (or more) listener sessions to the same physical queue.

JMS rule — Each message is delivered to exactly 1 session.

Since the 2 sessions are active-active (no “standby”), who will get it? Round robin is the choice of OpenJMS.

JMS rule — 1 thread per session. So you end up with 2 threads load balancing off the queue.

JMS rule — ClientID is a unique ID. The 2 sessions probably need different ClientIDs.

See http://activemq.apache.org/multiple-consumers-on-a-queue.html and
http://activemq.apache.org/how-does-a-queue-compare-to-a-topic.html
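The load-balancing effect described above can be simulated in plain java, with a BlockingQueue standing in for the physical queue and two threads standing in for the two sessions. This is an illustration of the "each message goes to exactly one consumer" rule, not real JMS code.

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Two consumer threads draining one queue: each message is taken exactly once,
// by one thread or the other.
class TwoConsumers {
    static Set<Integer> consumeAll(int n) {
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < n; i++) queue.add(i);

        final Set<Integer> seen = ConcurrentHashMap.newKeySet();
        Runnable consumer = () -> {
            Integer m;
            while ((m = queue.poll()) != null) seen.add(m); // each msg taken once
        };
        Thread t1 = new Thread(consumer), t2 = new Thread(consumer);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }
}
```

Every message lands in exactly one consumer's hands, so the union of what the two threads saw covers the whole queue with no duplicates.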

tibrv is decentralized, peer-to-peer, !! hub/spoke

JMS unicast topic has a centralized [1] broker. RV is decentralized. All the rvd processes are equal in status. This is a little bit similar to
– email servers among Ivy League universities.
– file-sharing peer networks

RV is peer-to-peer; JMS is hub/spoke. Note in JMS request/response, “server” has ambiguous meanings. Here we mean the broker.

[1] You can load-balance the broker but the “outside world” views them as a single broker. It’s the Hub of hub/spoke. Scalability bottleneck.

MOM server sends data + thread to client runtime

On many (if not all major) aspects, a market data feed is implemented exactly as MOM, but our focus today is market data systems, not MOM.

A market data vendor almost always ships 2 pieces of software — a server and a client-runtime. Server is a standalone (but embeddable) process, a JVM or C application. This server process could be managed by the vendor (like NYSE). Alternatively a subscriber bank could run its own server process.

I borrowed the term from JMS literature — “client runtime” is never standalone, but an embedded jar or dynamically loadable library (*.so files in unix).

We all know that server PROCESS sends data to client runtime, but someone pointed out that server also sends the thread(s) along with the data. Here’s my interpretation —

Initially in the client JVM (or C process) there’s just one thread blocked in some kind of wait(). I feel it has to be a server socket blocked in accept(). When data comes down the wire, an additional socket is created in client JVM to receive it. This requires a new thread, since the old thread must go back to waiting.

Q1: what does the new thread do?
%%A: newSocket.getInputStream().readxxx() and then onMsg(). I believe the system maintains a list of registered listener objects, each with its own STATE. Each listener instance can also unsubscribe and leave the “club”. Upon a single message arrival, the new thread calls listener1.onMsg(…), listener2.onMsg(…) and so on, for each listener-N interested in this message.

Q2: are these called in sequence or in parallel?
%%A: both are possible. Since the new thread was recently created for this message, I believe it's possible to create multiple threads. It's also possible to reuse a thread pool.

Q3: what are the pros and cons of single vs multiple thread?
%%A: if real time, and onMsg() is time consuming (quite common), then no choice — multiple.

JMS queue is (almost) a FIFO

Not sure about topics, but a jms queue is a priority queue as far as I know.

JMS guarantees FIFO order for messages of the same priority but will attempt to expedite messages of higher priority.

Topic messages also have a priority field, so I assume delivery is based on priority too.

temporary topic – is actually a temporary queue

Q: Since the topic is unknown to other sessions, (See also blog on temp topic vs sybase temp table http://bigblog.tanbin.com/2010/10/temporary-queue-jms.html) how do other sessions publish to it?
A: this topic is *revealed* via the ReplyTo header. Senders receive private “invitations” to send to the temp topic. Once received, the recipient can save that topic and hit it any time.

Temp topic accepts no subscriber except the topic’s creator. 

This exclusive subscriber must be non-durable.

Since the topic has at most 1 receiver, it’s better named a TemporaryQueue.

when would JMS broker persist a msg

Condition: When a persistent producer sends a msg, broker saves it before sending ack to producer. This is __regardless__ of durable/nondurable subscribers.

Condition: If there’s any durable subscriber, then broker also saves the msg, probably before hitting subscriber. This is __regardless__ of the persistent/nonpersistent producer.

Note each condition alone necessitates saving msg to persistent store.

P101 of [[java message service]] describes the tricky scenario of durable-subscriber/non-persistent-producer. As stated above, broker does save the msg, but probably after sending ack to producer. Producer thinks it’s received by broker and records the ack as evidence, then goes offline. Broker can fail and lose the msg.

## if an event-driven trading engine is!! responding

  • Culprit: deadlock
  • Culprit: all threads in the pool are blocked in wait(), lock() or …
  • Culprit: bounded queue is full. Sometimes the thread that adds task to the queue is blocked while doing that.
  • Culprit: in some systems, there’s a single task dispatcher thread like swing EDT. That thread can sometimes get stuck
  • Suggestion: dynamically turn on verbose logging in the messaging module within the engine, so it always logs something to indicate activity. It’s like the flashing LED in your router. You can turn on such logging by JMX.
  • Suggestion: snoop
  • Suggestion: for tibrv, you can easily start a MS-windows tibrv listener on the same subject as the listener inside the trading engine. This can reveal activity on the subject

observer can be MT or SingleT

Observer pattern can either go MT or stay ST ie single-threaded.  Both are common.

The textbook showcase implementation is actually single threaded. The event happens on the same thread as the notifications, often looping through the list of observers. That’s actually a real world industrial-strength implementation.

(—> Notably, the single thread can span processes, where one method blocks while another method runs in an RMI server. This is the so-called synchronous, blocking mode.)

Other implementations are MT — event origination and notifications happen on different threads. As explained in the post on async and buffer ( http://bigblog.tanbin.com/2011/05/asynchronous-always-requires-buffer-and.html ), we need a buffer to hold the event objects, as the observer could suffer any amount of latency.
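The single-threaded "textbook" version is tiny. Here is a minimal java sketch (names invented): publish() and every notification run on one and the same thread, in a plain loop over the observers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Single-threaded observer: no buffer needed, because the event and the
// notifications share one thread.
class PriceFeed {
    private final List<Consumer<Double>> observers = new ArrayList<>();

    void subscribe(Consumer<Double> obs) { observers.add(obs); }

    void publish(double price) {
        for (Consumer<Double> obs : observers) obs.accept(price); // same thread
    }
}
```

The MT variant would hand the price to a queue here instead of calling accept() directly; that queue is exactly the buffer the post above argues is mandatory.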

handle all tibrv advisory messages

For each tibrv transport object, you need to link it to a callback object like this. If you leave one transport unattended, advisories will show up on it. Below is a partial solution that is almost watertight.

new TibrvListener(new TibrvQueue(), mySysAdvisoryCallback,
        transport, "_RV.WARN.SYSTEM.>", null);

static private TibrvMsgCallback mySysAdvisoryCallback = new TibrvMsgCallback() {
    public void onMsg(TibrvListener listener, TibrvMsg msg) {
        if ("_RV.WARN.SYSTEM.LICENSE.EXPIRE".equals(msg.getSendSubject())) {
            return;
        }
        log.warn("Received advisory message: " + msg);
    }
};

buffer and queue within rvd process memory

rvd command accepts -max-consumer-buffer size

When present, the daemon enforces this upper bound (in bytes) on each consumer buffer (the queue of messages for a client transport). When data arrives faster than the client consumes it, the buffer overflows this size limit, and the daemon discards the oldest messages to make space for new messages. The client transport receives a CLIENT.SLOWCONSUMER advisory.

When absent or zero, the daemon does not enforce a size limit. (However, a 60-second time limit on messages still limits buffer growth, independently of this parameter.)
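The discard-oldest policy is easy to sketch. Below is my plain-java illustration (not rvd code; names invented) of a bounded consumer buffer that drops the oldest message on overflow and remembers that a slow-consumer advisory is due.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Bounded per-consumer buffer: on overflow, discard the OLDEST message to
// make room for the newest, and flag the slow consumer.
class ConsumerBuffer<T> {
    private final Deque<T> buf = new ArrayDeque<>();
    private final int maxSize;
    private boolean slowConsumerAdvised = false;

    ConsumerBuffer(int maxSize) { this.maxSize = maxSize; }

    void offer(T msg) {
        if (buf.size() == maxSize) {
            buf.pollFirst();              // drop oldest, keep newest
            slowConsumerAdvised = true;   // stand-in for CLIENT.SLOWCONSUMER
        }
        buf.addLast(msg);
    }

    T poll() { return buf.pollFirst(); }
    boolean wasSlowConsumer() { return slowConsumerAdvised; }
}
```

Note the design choice: favoring the newest data suits market data, where a stale tick is worth less than the latest one.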

tibrv subject based address` (SBA) ^ jms topics

Among the defining features of Tibrv, [ A) decentralization and B) SBA ] are 2 sides of the same coin. For me, it’s a struggle to find out the real difference between SBA vs jms topics. Here’s my attempt.

#1) In SBA(subject-based-addressing), there’s no central server holding a subject. In tibrv, any sender/receiver can specify any subject. Nice — No admin to create subjects. I tested it with tibrvlisten/tibrvsend. By contrast, In JMS, “The physical creation of topics is an administrative task” on the central broker [2], not on the producer or consumer.

) If a jms broker goes down, so do topics therein.

) a jms publisher (or subscriber) must physically connect to a physical broker process[2]. The broker has a physical network address. Our topic is tied to that physical address. A tibrv subject has no physical address.

) tibrv SBA (subject-based-addressing) uses a subject tree. No such tree among JMS topics. The tree lets a subscriber receive q(prices.stocks.*)  but also q(*.your.*). See rv_concepts.

[2] such as the weblogic server in autoreo.

message requester to wait5min for response #wait()

Imagine a typical request/reply messaging system. I think in JMS it’s usually based on temp queues, reply-to and correlation-Id — See other blog post. In contrast, RV has no broker. It’s decentralized into multiple peer rv daemons. No difference in this case —

Suppose a message broker holds a lot of queues. One of the queues is for a request message, from requester system to a pricing system. Another queue is for pricing system to return the new price to the requester.

Now, pricing system is slow. Requester should wait for no more than 5 minutes. If the new price comes back through the reply-queue 301 sec later, requester will ignore this stale price since it’s too risky to place an order on a stale price in a fast market. How do you implement this?

My design — the requester main thread can use condVar wait(5*60*1000). Another thread in the requester JVM can block forever in onMsg(), and notify the main thread when something is received.

Timer is the 2nd usage of condVar, as Stroustrup described.

(I actually implemented this in a few trading engines.)
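A minimal plain-java sketch of this design, with wait()/notifyAll() as the condVar; all names are mine, not from any real engine. The onMsg() thread calls onReply(); the requester thread calls awaitReply() and treats null as timeout; a reply that arrives after the deadline is dropped as stale.

```java
// Requester waits on a monitor with a deadline; a late reply is ignored.
class TimedRequester {
    private final Object lock = new Object();
    private Double reply;           // guarded by lock
    private long deadlineNanos;     // guarded by lock

    /** Called on the onMsg() thread when the price arrives. */
    void onReply(double price) {
        synchronized (lock) {
            if (System.nanoTime() <= deadlineNanos) reply = price; // else stale: drop
            lock.notifyAll();
        }
    }

    /** Requester thread: wait up to timeoutMillis; null means timed out. */
    Double awaitReply(long timeoutMillis) {
        synchronized (lock) {
            deadlineNanos = System.nanoTime() + timeoutMillis * 1_000_000L;
            try {
                while (reply == null && System.nanoTime() < deadlineNanos) {
                    long remainingMs =
                        Math.max(1, (deadlineNanos - System.nanoTime()) / 1_000_000);
                    lock.wait(remainingMs); // loop guards against spurious wakeups
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return reply;
        }
    }
}
```

The while-loop is essential: wait() can wake spuriously, so the predicate (reply received, or deadline passed) must be re-checked every time.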

ML muni trading engine IV

(Now I think this is an RV environment, so some of my assumptions below aren’t valid.)

Suppose a MOM holds a lot of queues. One of the queues is for a request message, from requester system to a pricing system. Another queue is for pricing system to return the new price to the requester.

Now, pricing system is slow. Requester should wait for no more than 5 sec. If the new price goes into the reply-queue 6 sec later, requester should ignore it. How do you implement this?

A: requester sends in one thread, then wait(5). Upon return, it checks if anything received on the onMsg() thread. If nothing received, then it assumes timeout.

message retention +! durable subscriber

Q: what conditions are necessary for keep-forever?
A (obvious): message has no expiration date
A (obvious): if a durable sub has not sent an ACK.

From now on, we focus on non-durable, which is more widespread in trading engines.

Q: what if an NDS (non-durable-subscriber) subscribes, then is abruptly lost right after a message is published? Will the broker consider the message subscribed but undelivered?

Q: what if an NDS subscribes, but is too slow?

Q: what if an NDS gets into an infinite loop so can't disconnect?

Q: what if an NDS gets into a deadlock so can't disconnect?

Q: what if an NDS freezes up but doesn't disconnect?

NDS unsubscribe happens at TopicSubscriber.close(). P77 [[JMS]]. But does it happen during a crash?

msg overflow ] JMS^tibrv #60sec

Message overflow is a frequent headache in MOM. Basically consumption rate (possibly 0) is slower than production rate, eating up storage capacity, either in memory or on disk.

  • eg: JMS Topic with NDS (non-durable-subscriber) only — almost free of overflow. Bonnie said the broker will discard. Jerry said the broker keeps track of alive subscribers: every incoming message is guaranteed to be delivered (by retry) to each, so the broker must persist pending messages. When a subscriber disconnects gracefully, the broker no longer needs to keep any pending messages for her.
  • eg: chatrooms don’t persist data but what type of MOM is that? probably NDS topic
  • eg: JMS queue. While any number of producers can send messages to the queue, each message is guaranteed to be delivered, and consumed by one consumer. If no consumers are registered to consume the messages, the queue holds them (on disk) until a consumer registers to consume them.
  • eg: JMS temp queue??
  • eg: regular queue? In SIM upfront, unread messages –persist–
  • eg: mailing list (topic with durable subscribers)? –Persist–
  • eg: JMS topic without durable subscriber? see post on message retention
  • eg: tibrv CM? Storage is the ledger file. –persist–
  • eg: tibrv non-CM? market data feed? Discarded, but how soon? 60 sec by default? Yes, in a slow-consumer/fast-producer scenario message loss is possible — http://arun-tibco.blogspot.com/2010/09/tibco-rv-faqs.html

 

simple JMS publisher (windows) + timer thread

import java.util.Date;
import java.util.Properties;
import java.util.Timer;
import java.util.TimerTask;

import javax.jms.JMSException;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;

import javax.naming.Context;
import javax.naming.InitialContext;

/**
 * can run in Windows!
 *
 * @author bt62935
 */
public class SimpleRunAnywherePublisher extends TimerTask {

    public static final String INITIAL_CONTEXT_FACTORY = "weblogic.jndi.WLInitialContextFactory";
    public static final String PROVIDER_URL = "t3://…..";
    public static final String TOPIC_CONNECTION_FACTORY = "ArbPad.TopicFactory";
    public static final String TOPIC = "ArbPad.Topic.ETF.BondPrice";

    public static final String TEST_MSG = "Test message ";

    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.schedule(new SimpleRunAnywherePublisher(), 1 * 1000, 1 * 1000);
    }

    public void run() {
        Properties prop = new Properties();
        prop.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
        prop.put(Context.PROVIDER_URL, PROVIDER_URL);

        TopicConnection conn = null;
        try {
            Context ctx = new InitialContext(prop);
            TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup(TOPIC_CONNECTION_FACTORY);
            System.out.println(tcf.toString());
            conn = tcf.createTopicConnection();
            System.out.println(conn.toString());
            conn.start();
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            System.out.println(session.toString());
            Topic topic = (Topic) ctx.lookup(TOPIC);
            System.out.println(topic.getTopicName());
            TopicPublisher tPub = session.createPublisher(topic);
            System.out.println(tPub.toString());
            ObjectMessage msg = session.createObjectMessage(TEST_MSG + new Date());
            tPub.publish(msg);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (conn != null) {
                try {
                    conn.stop();
                    conn.close();
                } catch (JMSException e1) {
                    e1.printStackTrace();
                }
            }
        }
    }
}

reply-to — JMS vs email

If you see a JMSReplyTo header in a message, it may mean a forwarding address.

Think of email. Alice sends Bob an email with a reply-to header set to Casper. When Bob replies it goes to Casper. Same in a JMS work-flow.

That's a special use case. In standard scenarios, reply-to is an essential feature of the JMS request-reply model.

4 ways to characterize MOM

A beginner should internalize these characteristics until they become 2nd nature.

* queue ie point-to-point –vs– unicast topic ie pub/sub [1] –vs– multicast. This is the most important difference.

* guaranteed (reliable) vs non-guaranteed. Multicast can be reliable too.

* Within the pub/sub unicast topic model,
** there is durable sub vs non-durable sub
** there is persistent delivery vs non-persistent. Note this only applies to unicast, and only covers producer->broker leg, NOT broker->subscriber

* simultaneous broadcast like radio –vs– individual delivery like mail. P2P is always mail style.

* tcpip vs multicast over udp

— the well-known ones —
* sync vs async. Only relevant to unicast receivers. Multicast is always async. Sender is always sync.

* server push or client pull. I think client pull is synchronous, as the polling client usually blocks.

reliable JMS in various configurations

Simple Rule: In both queues and topics, I believe the expiration setting overrides every delivery guarantee, including durable sub. The idea is, the sender knows it's OK to drop a pending message after its useful life. See P101, 104 [[JMS]]

Simple Rule: In both queues and topics, durable or not, non-persistent messages are always at risk to get lost[2] , so reliability is always below 100% — P102 [[JMS]]

[2] if the broker acknowledges receipt to the sender, then goes down, losing the message. Persistent messages are persisted before the broker sends the ack to the sender and the sender method returns. If the ack never arrives, the sender blocks for some time and then throws an exception.

As we go through different configurations below, we will realize 100% reliability requires multiple conditions. Each missing condition can break 100% reliability.

– durable, persistent message without expiration? guaranteed
— same config but nonpersistent? not reliable
— same config with expiration? will expire

– p2p, persistent message without expiry, but a disconnected receiver? guaranteed — consider the SIM upfront queue.
— same config with expiry? will expire

– disconnected non-durable subscriber? won't receive
– connected non-durable subscriber, persistent message without expiration? best-effort. I believe broker will attempt delivery. If client disconnects before sending ack, broker may give up.
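All the sender-side knobs in these configurations can be set on the producer. A minimal sketch, assuming an already-created JMS 1.1 `session` and destination `dest` (names illustrative):

```java
// Assumes existing javax.jms.Session "session" and Destination "dest".
MessageProducer producer = session.createProducer(dest);
producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restart
producer.setTimeToLive(0);                         // 0 = never expire (the default)
producer.send(session.createTextMessage("order#1"));

// Or override per send: non-persistent, default priority, expire after 60s
// producer.send(msg, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 60000);
```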

persistent messages may get delivered twice

Even though a persistent message is “delivered exactly once”, it can get delivered twice to a given receiver, if the broker doesn't receive acknowledgment in time. See P102 [[JMS]]. Failure could be in the broker or the client runtime.

If a trading system must receive each order message exactly once, there are many solutions.

* check the message redelivery flag, which is guaranteed to be set on 2nd delivery by the broker.
* check message id
* client_ack
* transacted message
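The "check message id" option can be sketched without any JMS dependency: keep a bounded set of recently-seen IDs and skip any message whose ID was already processed. The capacity should cover your redelivery window; everything here is an illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Idempotent-consumer helper: remembers the last maxSize message IDs, so a
// redelivered duplicate is detected in O(1) without unbounded history.
public class DedupeCheck {
    private final Map<String, Boolean> seen;

    public DedupeCheck(final int maxSize) {
        // insertion-order LinkedHashMap that evicts its eldest entry at capacity
        this.seen = new LinkedHashMap<String, Boolean>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                return size() > maxSize;
            }
        };
    }

    /** @return true if this id was never seen before (safe to process) */
    public synchronized boolean firstTime(String msgId) {
        return seen.put(msgId, Boolean.TRUE) == null;
    }
}
```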

durable subscribers, persistent store, persistent delivery

In our Front Office app, we have 100+ destinations (mostly topics) in a single weblogic JMS broker. Weblogic web console shows the number of DURABLE subscribers on each topic — I see all zeros. Therefore, after a restart, broker doesn’t need to de-serialize from disk the “pending” messages. However, I believe we do have a persistent store — probably a basic, common requirement in FO trading apps.

Q: when is the persistent store cleared?

Note if there’s at least one durable subscriber and there’s any pending message for her, then broker must persist it forever until either expired or delivered. (Expiration could be disabled.) Therefore durable subscribers ==> (imply) ==> persistent store on broker, but not conversely.

Persistent delivery (http://download.oracle.com/javaee/1.4/api/javax/jms/DeliveryMode.html) is unrelated to durable subscription (DS).

PD covers producer => broker.
DS covers broker => consumer.

Also,
PD is a producer setting
DS is a subscriber setting

“A message is guaranteed to be delivered once and only once … if the delivery mode of the message is PERSISTENT and if the destination has a sufficient message retention policy” Therefore, PD doesn’t always guarantee exactly-once.
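As a sketch, the two settings even live on different objects. Assuming an existing TopicSession `session` and Topic `topic` (the subscription name is illustrative):

```java
// DS — a subscriber-side setting: broker retains messages while this
// named subscription ("riskFeed-sub1") is offline.
TopicSubscriber sub = session.createDurableSubscriber(topic, "riskFeed-sub1");

// PD — a producer-side setting: broker persists each message before acking the send.
TopicPublisher pub = session.createPublisher(topic);
pub.setDeliveryMode(DeliveryMode.PERSISTENT);
```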

subject-based addressing and tibrv

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.5735&rep=rep1&type=pdf says

Subjects are arranged in a subject tree by using a dot notation, and clients can either subscribe to a single subject (e.g., finance.quotes.NASDAQ.FooInc) or use wildcards (e.g., finance.quotes.NASDAQ.*).

JMS topic is basically subject-based, or more precisely “Channel-based”, as a (selector-less) subscriber gets everything in the channel.

A jms queue “can be compared to TIBCO Rendezvous Distributed Queues (RVDQ)”. I feel the tibrv inbox is also like a queue. See http://www.cs.cmu.edu/~priya/WFoMT2002/Pang-Maheshwari.pdf.

tibrv vs EMS — answer from a Tibco insider

Q: is RV still the market leader in the “max throughput” MOM market?

Q: for trading apps, i feel the demand for max-throughput MOM is just
as high as before. In this space, is RV (and multicast in general)
facing any competition from JMS or newer technologies? I feel answer
is no.

A: I think RV is still the market leader.  29west is a strong
competitor.  TIBCO now sells a “RV in hardware appliance” partly to
address that.  I feel JMS is not really targeting max-throughput
space.

async JMS — briefly

Async JMS (onMessage()) is generally more efficient than sync (polling loop).

* when volume is low — polling is wasteful. onMessage() is like wait/notify in a producer/consumer pattern
* when volume is high — async can efficiently transfer large volume of messages in one lot. See P236 [[Weblogic the definitive guide]]

Q3: if I implement the MessageListener interface, which method in which thread calls my onMessage()[Q1]? I believe it’s similar to RMI, or doGet() in a servlet. [Q2]

Q1: is wait/notify used?
A1: i think so, else there’s a network blocking call like socket.accept

Q2: Servlets must exist in a servlet container (a network daemon), so how about a message listener? I believe it must exist in a network daemon, too.
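For reference, the async consumer wiring is short. A sketch, assuming an already-created `session`, `queue` and `connection`:

```java
MessageConsumer consumer = session.createConsumer(queue);
consumer.setMessageListener(new MessageListener() {
    public void onMessage(Message m) { // invoked by a provider-owned thread, not mine
        try {
            System.out.println(((TextMessage) m).getText());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
});
connection.start(); // delivery of *incoming* messages begins here — no polling loop
```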


JMS temporary queue, tibrv and sybase temp table

Update — tibrv’s flexibility makes the JMS tmp queue look unnecessary, rigid, uptight and old-fashioned.

tmp queue (or tmp topic [4]) is used for request/reply. I feel the essential elements of JMS request/reply are
1) tmp queue
2) set and get reply-to — setJMSReplyTo(yourTmpQueue)
3) correlation id — setJMSCorrelationID(someRequestID)

Server-side (not the broker, but another JMS user) uses getJMSCorrelationID() and then setJMSCorrelationID() in the response message, then replies to getJMSReplyTo()

tmp queue is like a Sybase tmp table — (I feel this is the key to understanding tmp queue.)
* created by requestor
* removed by the “system janitor”, or by requestor
* private to the session. Therefore creator threads must keep the session alive.

[4] temp topic too. p67 [[JMS]]
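The three elements combine like this on the requestor side — a sketch, assuming an existing session and `requestQueue` (the timeout and IDs are illustrative):

```java
TemporaryQueue replyQ = session.createTemporaryQueue();       // 1) tmp queue
TextMessage req = session.createTextMessage("quote IBM");
req.setJMSReplyTo(replyQ);                                    // 2) reply-to
req.setJMSCorrelationID("req-0001");                          // 3) correlation id
session.createProducer(requestQueue).send(req);
Message reply = session.createConsumer(replyQ).receive(5000); // block up to 5s

// Service side mirrors it:
//   resp.setJMSCorrelationID(req.getJMSCorrelationID());
//   session.createProducer(req.getJMSReplyTo()).send(resp);
```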

onMsg() invocation in any MOM or mkt-data client-runtime

Q: what happens between socket receiving the msg and the invocation of onMsg()?

Q: is the listener thread blocked in read() or wait()
%%A: wait() I guess.

Q: if wait(), then which threads notify it?
%%A: perhaps a single socket reader thread is responsible to notify multiple listener threads that are interested in the same data. For max performance, listener threads might run on separate processors. They may be waiting in different monitors. I think the socket thread may either notifyAll or sequentially notify each listener thread.
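My guess above can be sketched with a BlockingQueue: take() parks the listener thread, and the reader thread's put() is the "notify". Everything here (the EOF sentinel, the single listener) is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ListenerPump {
    private static final String EOF = "\u0000EOF"; // illustrative end-of-stream sentinel

    /** Pump `wire` through a queue from a reader thread to a listener thread;
        returns what the listener "delivered", in arrival order. */
    public static List<String> run(List<String> wire) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        List<String> delivered = new ArrayList<>();

        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    String msg = q.take();      // parks here until the reader's put()
                    if (msg.equals(EOF)) return;
                    delivered.add(msg);         // i.e. myListener.onMsg(msg)
                }
            } catch (InterruptedException ignored) { }
        });
        Thread reader = new Thread(() -> {      // stands in for the socket-reader thread
            try {
                for (String m : wire) q.put(m); // put() wakes the parked listener
                q.put(EOF);
            } catch (InterruptedException ignored) { }
        });
        listener.start();
        reader.start();
        try {
            reader.join();
            listener.join();                    // join() gives safe visibility of `delivered`
        } catch (InterruptedException ignored) { }
        return delivered;
    }
}
```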

large/time-consuming engine accessed via web@@

In risk, stress-testing, scheduling, optimization and pricing systems, a request or task might take a long time (4 hours to price exotic options) to complete. Yet, almost always, web + email can provide an adequate interface. I *used* to dismiss http as suitable only for quick round-trip GET/POST requests.

A form can submit a request into the engine and simply complete the round-trip. What happens after the round-trip?

* browser can be programmed to auto-refresh
* user can visit (or be redirected to) another URL that shows incremental output
* progress bar on browser. I have seen such things.

Behind the scene, a DB or MOM can hold the task queue.

G5 message headers – replyTo, correlationId ..

A lot of essential JMS features are implemented as message headers. Producers set those headers (not MessageID though) to inform
– broker
– consumer

— Top headers —
Persistence mode
Expiration
MessageID — subsequently referenced by correlationID. Unlike correlationID, you can’t put your own value in it — it’s system-generated.
correlationID — can be set to your homemade id from DB or a previous MessageID
replyTo —
* often used with temp topics (“temp queues” in disguise)
* similar to our BWService. RV can do this easily.

sync/async available for both consumer types

Both sync and async are available to both queue and topic consumers. The receive() method is specified in the MessageConsumer interface.

Asynchronous onMessage() is the celebrated selling point of JMS and MOM in general.

I used queue receiver.receive() in the SIM upfront project.

Though unfamiliar to some people, a topic subscriber can also call the blocking receive() method or the non-blocking receiveNoWait() method. What if you have logic in onMessage() but also need to use receive() occasionally? P74 [[JMS]] has an example, where a topic subscriber, upon returning from receive(), immediately calls this.onMessage(theMsgReceived).

I think this technique can be useful in Swing, where a thread sends a request to a JMS broker, and blocks in receive(). Useful if the sender thread has a lot of local data unavailable to any other (like a listener) thread.
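A sketch of the P74 technique, inside a class that already implements MessageListener (`subscriber` is an existing TopicSubscriber):

```java
Message m = subscriber.receive(); // blocking call on the requesting (e.g. Swing worker) thread
if (m != null) {
    this.onMessage(m);            // reuse the very same handling logic as the async path
}
```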

ack in JMS flow – which leg@@

The term “acknowledgement” is ambiguous in the JMS context. I always ask “which leg”.

Leg 1 sender-broker. Ack is from broker to sender.
* for persistent messages, the broker always persists the message before sending the ack to the sender
* for nonpersistent messages, the broker may send the ack then crash, losing the message.

Leg 2 broker-receiver. Ack is from receiver to broker.
* for unicast,
** for unicast nondurable subscriber A, messages published during A’s downtime are ignored and won’t be delivered to A
** for unicast durable subscriber, … obvious
* for p2p queue, delivery is guaranteed. Similar to unicast Durable.

Spring can add unwanted (unnecessary) complexity

[5] T org.springframework.jms.core.JmsTemplate.execute(SessionCallback action, boolean startConnection) throws JmsException
Execute the action specified by the given action object within a JMS Session. Generalized version of execute(SessionCallback), allowing the JMS Connection to be __started__ on the fly, magically.
——–
Recently I had some difficulties understanding how jms works in my project. ActiveMQ hides some sophisticated stuff behind a simplified “facade”. Spring tries to simplify things further by providing a supposedly elegant and even simpler facade (JmsTemplate etc), so developers don’t need to deal with the JMS api[4]. As usual, spring hides some really sophisticated stuff behind that facade.

Now I have come to the view that such a setup adds to the learning curve rather than shortening it. The quickest learning curve is found in a JMS project using nothing but the standard JMS api. This is seldom a good idea overall, but it surely reduces the learning curve.

[4] I don’t really know how complicated or dirty it is to use standard JMS api directly!

In order to be proficient and become a problem solver, a new guy joining my team probably needs to learn both the spring stuff and the JMS stuff [1]. When things don’t behave as expected[2], perhaps showing unexpected delays and slightly out-of-sync threads, you don’t know if it’s some logic in spring’s implementation, or our spring config, or incorrect usage of JMS, or a poor understanding of ActiveMQ. As an analogy, when an alcoholic-myopic-diabetic-cancer patient complains of dizziness, you don’t know the cause.

If you are like me, you would investigate _both_ ActiveMQ and Spring. Then it becomes clear that Spring adds complexity rather than reducing it. This is perhaps one reason some architects decide to create their own frameworks, so they have full control and don’t need to understand a complex framework created by others.

Here’s another analogy. If a grandpa (like my dad) wants to rely on email everyday, then he must be prepared to “own” a computer with all the complexities. I told my dad a computer is nothing comparable to a cell phone, television, or camera as a fool-proof machine.

[1] for example, how does the broker thread start, at what time, and triggered by what[5]? Which thread runs onMessage(), and at what point during the start-up? When and how are listeners registered? What objects are involved?

[2] even though basic functionality is there and system is usable

200,000,000 orders in cache, to be queried in real time #UBS

Say you have up to 200,000,000 orders/ticks each day. Traders need to query them like “all IBM”, or “all IBM orders above $50”. Real time orders. How do you design the server side to be scalable? (Now i think you need a tick db like kdb)

First establish the theoretical limits: entire population + response time + up-to-date results — choose any 2. Google forgoes one of them.

Q: do users really need to query this entire population, or just a subset?

This looks like a reporting app. Drill-down is a standard solution. Keep order details (100+ fields) in a separate server. Queries don’t need these details.

Once we confirm the population to be scanned (say, all 200,000,000), the request rate (say a few traders, once in a while) and acceptable response time (say 10 seconds), we can allocate threads (say 200 in a pool). We can divide each query into 50 parallel queries, ie 50 tasks in the task queue. On average we can work on 4 requests concurrently. Others queue up in a JMS queue.

DB techniques to borrow:
* partitioning to achieve near-perfect parallel execution
* indexing — to avoid full scans? How about gemfire indexes?
* You can also use a standard DB like sybase or DB2 and give enough RAM so no disk IO needed. But it’s even better if we can fit entire population in a single JVM — no serialization, no disk, no network latency.

L1, L2 caches? What to put in the caches? Most used items such as index root nodes.
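The 50-way split above can be sketched with a fixed thread pool: partition the population, scan the partitions in parallel, merge the partial results. Everything here (orders as plain longs, the price predicate) is an illustration, not the real schema.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ShardedScan {
    /** Scan `orders` with `parts` parallel tasks, counting entries above `threshold`. */
    public static long countAbove(long[] orders, long threshold, int parts) {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        try {
            List<Future<Long>> partials = new ArrayList<>();
            int chunk = (orders.length + parts - 1) / parts;
            for (int p = 0; p < parts; p++) {
                final int from = p * chunk;
                final int to = Math.min(orders.length, from + chunk);
                partials.add(pool.submit(() -> {   // each task scans its own partition
                    long n = 0;
                    for (int i = from; i < to; i++)
                        if (orders[i] > threshold) n++;
                    return n;
                }));
            }
            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // merge partial counts
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```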

See post on The initiator mailbox MOM model.

non-trivial Spring features in my recent projects

FactoryBean —
lookup-method — method injection using CGLIB
InitializingBean — an alternative to init-method
Auto wiring
component-scan
@RunWith for unit testing
property place holders
converters
jms messageConverters
TestContextManager.java

–SmartLifecycle.java — Simply Magical ! Used to auto start threads when our custom ApplicationContext starts

— @Resource — on a field “cusip” would mean “Mr XMLBeanFactory, please inject into this.cusip an XML bean named cusip”. System implicitly declares this *field* in the xml bean definition of this class, but you need to declare the “cusip” bean.

Q: Just where and how does my aa_reoffer JMS test start the broker daemon thread?
A: Neoreo QueueFactoryBean (FactoryBean derivative) has a getObject(). In it, we create a new JmsTemplate and call execute() on it. This Spring method creates a full-blown jms session, which creates the daemon on demand.

GUI for fast-changing mkt-data (AMM)

For a given symbol, market data MOM could pump in many (possibly thousands of) messages per second. The client jvm receives every update via a regular MOM listener and updates a small (possibly one-element) hashmap — by repeatedly overwriting the same memory location. Given the update frequency, locking should be avoided: use CAS or double-buffering for an object reference or a long/double; for an int or float, a regular volatile field might be sufficient.

Humans don’t like the screen updated so frequently. Solution — a GUI worker thread (like a Swing timer) queries the local cache every 2 seconds — a reasonable refresh rate. It will miss a lot of updates, but that’s fine.

(Based on a real implementation hooked to an OPRA feed)
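A minimal sketch of the one-element cache, with an AtomicReference standing in for the lock-free slot (one instance per symbol is an assumption):

```java
import java.util.concurrent.atomic.AtomicReference;

public class QuoteCache {
    // One-element "map": the latest quote for one symbol. The writer simply
    // overwrites; the reader samples at its own (much slower) pace and
    // deliberately misses intermediate values.
    private final AtomicReference<Double> latest = new AtomicReference<>(0.0);

    public void onTick(double price) { latest.set(price); } // MOM listener thread
    public double sample()           { return latest.get(); } // GUI timer thread
}
```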

DMA low latency interview #European bank Midtown

Q: Given a 32-core machine and a lot of CPU-bound calculations, how would you size your thread pool?
A: First, find out how many hardware threads there are — probably 32 × 2 or 4 with hyper-threading. For purely CPU-bound tasks, a pool size close to the hardware thread count is the usual answer. But if tasks are IO-intensive, like waiting for a pricing or risk assessment over JMS, then many threads will be blocked on IO. If each thread spends 90% of its time waiting for IO, then each cpu can handle roughly 10 threads. This way, each thread, after getting the data it needs, won’t have to wait for a CPU.

The below questions were probably not prepared in advance.

Q1: for a singly linked list of size N, efficiently find the item at position floor(N/2). Positions are 1 to N. You don’t know the size until you finish scanning.
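Q1 has a classic single-scan answer: walk the list once, advancing a trailing pointer on every second node, so it ends at position floor(N/2). A sketch (the Node type is illustrative):

```java
public class MiddleOfList {
    static final class Node {
        final int val;
        Node next;
        Node(int val) { this.val = val; }
    }

    /** Single scan: advance `slow` one node for every two nodes visited.
        Returns the node at position floor(N/2) (positions 1..N); null if N == 1. */
    static Node middle(Node head) {
        Node slow = null;
        int n = 0;
        for (Node fast = head; fast != null; fast = fast.next) {
            n++;
            if (n % 2 == 0) slow = (slow == null) ? head : slow.next;
        }
        return slow;
    }

    static Node build(int n) { // helper: list 1 -> 2 -> ... -> n
        Node head = new Node(1), cur = head;
        for (int i = 2; i <= n; i++) { cur.next = new Node(i); cur = cur.next; }
        return head;
    }
}
```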

Q2: for a strictly sorted binary tree of numbers, I give you a pointer to the root node and 2 arbitrary numbers in the tree; find the lowest common ancestor node. The only entry point is the root node address. The 2 input numbers are passed by value and guaranteed to exist in the tree. Note for this binary tree, any node on my left is less than me.

Q3: efficiently reshuffle a PokerCard[52] array. You can call a rand(int max) function to get a random number between 0 and max.
Tip: avoid “retry” as that wastes CPU
A: i think we need to call rand(51), rand(50), rand(49)….
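That answer is the Fisher–Yates shuffle: the range of each rand() call shrinks by one, so there is never a "retry". A sketch with ints standing in for PokerCard:

```java
import java.util.Random;

public class Shuffle {
    /** In-place Fisher–Yates: exactly one swap per position, no retry loop. */
    static void shuffle(int[] deck, Random rnd) {
        for (int i = deck.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1); // "rand(i)": uniform in [0, i]
            int tmp = deck[i];
            deck[i] = deck[j];
            deck[j] = tmp;
        }
    }
}
```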

which thread runs onMessage()

Anthony,

You brought up a good requirement: a MessageListener instance must receive messages in the order published.

My solution: A simple thread-pool worker thread picks up a task in the task queue and calls myListener.onMessage(..). Task queue (being a Queue) maintains the message sequence. (A Task object wraps a listener object and an immutable message object.)

Now, what if one of the onMessage() calls inserts to DB and runs too slow? I would argue it’s “your problem” as the app developer, not “my problem” as the JMS api developer. My job is to call your onMessage() sequentially. I can’t control if the messages go into your DB in sequence.

If 5 messages are found in the task queue, a worker thread should intelligently pick up all of them and somehow prevent another worker thread from picking up the 6th message (that comes later) and pushing it to myListener ahead of the first 5. Essentially, this worker thread should lock on the listener.

I feel in general onMessage() should NOT be synchronized.
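The "lock on the listener" idea maps naturally onto a single-threaded executor per listener: any dispatcher thread may submit, but one listener's callbacks run one at a time, in submission order. A hedged sketch, with a String standing in for the message and a list standing in for the listener:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedDispatcher {
    // One single-threaded "lane" per listener: preserves per-listener order
    // without any worker grabbing the 6th message ahead of the 5th.
    private final ExecutorService lane = Executors.newSingleThreadExecutor();
    final List<String> delivered = new CopyOnWriteArrayList<>();

    public void dispatch(String msg) {
        lane.submit(() -> delivered.add(msg)); // i.e. myListener.onMessage(msg)
    }

    public void shutdown() {
        lane.shutdown();
        try {
            lane.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) { }
    }
}
```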

generic callback in a concurrent java/c++ sys

(Note OO is a development-time concept. Once compiled, an OO program looks no different from a C program with tons of structs and function pointers.)

Most financial systems today are concurrent OO systems. The lowly callback pattern is extremely common but actually requires a thorough understanding of both thread and object interaction. Let’s examine some. Note just about every player in the callback scenarios is a pointer, including function pointers.

In the simplest 1-thread case, an observer pointee registers interest with a subject object, by giving its own address. After subject state changes in Thr 1, the same thread loops through the list of observers and invokes the callback method on each observer. Note the observer object must not be deallocated.

For onMsg() in tibrv or onMessage() in JMS, there’s a remote “upstream” thread in the message broker’s PID/runtime, and there’s a separate downstream thread in the listener’s PID. I believe this thread starts out blocking for messages. Upon receiving a msg, the runtime somehow wakes up this blocking thread and this thread invokes onMsg() on the listener pointee. Meanwhile what’s the remote upstream thread doing? Simplest design is synchronous — upstream thread waits for the listener thread to finish. This means the upstream thread can only dispatch the message to remote listeners sequentially. I feel it’s much better to get the upstream thread to return right away after sending the one-way-message[3] (see Doug Lea’s book) so this thread can contact the 2nd listener in the list. Rewind to registration time — Any thread could register the listener with “Hollywood”. This thread could terminate immediately but the listener pointee must live.

[3] in classic Reo, we actually send one-way messages to the market-facing gateway. We get the status message on a separate thread via onMessage().

In C/C++, you can also register into “Hollywood” a ptr to a free func. No object, usually no state (unless the free func uses stateful static objects). Free functions are free of destruction. Hollywood could save the (function) ptr in any collection. When there’s an event, Hollywood calls us back, in any thread.

tibrv-CM transport i.e. session #again

A tibrv Transport is like a JMS session. 2 Main actors in a CM conversation are the transports (“sessions”).
* sending CM transport
* listening CM transport
P 155 of [[rv concepts]] says

The sending CM transport is responsible to record each outbound message on
that subject, and to retain the message in its ledger until it receives
confirmation of delivery from the listener (or until the time limit of the
message elapses).

In return, the listening CM transport is responsible for confirming delivery of
each message.

Since the sender is a CM sender, i would assume receiver can’t be non-CM.

To send or receive messages using certified delivery features, a program must first
create a CM transport (also called a delivery-tracking transport). Each CM
transport employs an ordinary transport and adds some additional capabilities, such as a ledger.

Changi(citi)c++IV #onMsg dispatcher, STL, clon`#done

I feel this is more like a BestPractice (BP) design question.

void onMsg(const Price& price){ // need to persist to DB in arrival order
  // Message rate is very high, so I chose an async producer-consumer pattern:
  dispatch(price);
}

// sharding by symbol name is a common practice

// %%Q: should I dispatch to multiple queues?
// A: Requirement says arrival order, so I decided to use a single queue.

// The actual price object could be a StockPrice or an OptionPrice. To be able to persist the option-specific attributes, the queue consumer must figure out the actual type.

// Also, price object may be modified concurrently. I decided to make a deep copy of the object.

// Q: when to make a copy, in onMsg or the consumer? I personally prefer the latter, but let’s say we need to adopt the former. Here’s a poor solution —

//Instead of dispatch(price)
dispatchPBClone(price); // slicing a StockPrice down to a Price. Here’s my new solution —

dispatchPBPointer(price.cloneAndReturnPtr());

// clone() is a virtual method in StockPrice and OptionPrice.
//Q: clone() should allocate memory on the heap, but where is the de-allocation?
//A: Use smart ptr.

multi-threading with onMessage()

Update — http://outatime.wordpress.com/2007/12/06/jms-patterns-with-activemq/ is another tutorial.

http://download.oracle.com/javaee/1.4/api/javax/jms/Session.html says -- A JMS Session is a single threaded context for producing and consuming messages. Once a connection has been started, any session with a registered message listener(s) is *dedicated* to the thread [1] of control that delivers messages to it. It is erroneous for client code to use this session or any of its constituent objects from another thread of control. The only exception to this is the use of the session or connection close method.

In other words, other threads should not touch this session or receive/mention this object in source code

[1] I think such a session is used by one thread only. But who creates and starts that thread? If I write the jms listener app, then it has to be started by my app. [[java enterprise in a nutshell]] P329 has sample code but it looks buggy. Now I think javax.jms.Connection.start() is the answer. The API says it “starts delivery of *incoming* messages only”, not outgoing! This method is part of a process that
* creates a lasting connection
* creates the sessionSSS in memory (multiple-session/one-connection)
* perhaps starts the thread that will call onMessage()

——– http://onjava.com/lpt/a/951 says:
Sessions and Threading

The Chat application uses a separate session for the publisher and subscriber, pubSession and subSession, respectively. This is due to a threading restriction imposed by JMS. According to the JMS specification, a session may not be operated on by more than one thread at a time. In our example, two threads of control are active: the default main thread of the Chat application and the thread that invokes the onMessage( ) handler. The thread that invokes the onMessage( ) handler is owned by the JMS provider(??). Since the invocation of the onMessage( ) handler is asynchronous, it could be called while the main thread is publishing a message in the writeMessage( ) method. If both the publisher and subscriber had been created by the same session, the two threads could operate on these methods at the same time; in effect, they could operate on the same TopicSession concurrently — a condition that is prohibited.

TibRV usage]%%department

* account cache invalidation -> re-fetch from DB for that account.

However, only after (not before) the cache server starts up and rebuilds itself are all messages honored. While it’s down, such notifications are simply ignored. No guaranteed delivery please.

* management console to put an app server into and out of debug mode without restarts. J4? Cheap, fast, light. EMS is heavier.

* centrally log all emails sent out from many app servers

* market data feed intra-day

JMS msg selector

For both p2p and unicast topics, only the consumer can use a msg selector.

syntax is like the WHERE-clause [1] of SQL-92, including

* LIKE
* BETWEEN
* IN
* IS NULL
* AND / OR

Filtering is done on broker not consumer, to save bandwidth.

[1] but without the WHERE.
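A sketch, assuming an existing JMS 1.1 session and topic (the selector expression is illustrative):

```java
// Selector syntax is an SQL-92 WHERE clause minus the WHERE keyword.
// Evaluated on the broker, so non-matching messages never cross the wire.
MessageConsumer consumer = session.createConsumer(
        topic, "symbol IN ('IBM', 'MSFT') AND price BETWEEN 50 AND 100");
```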

MOM guaranteed delivery

Compare RV CM and JMS guaranteed delivery.

Requirement: once and only once delivery

Requirement: large[1] persistent store

requirement: ack upon consumption. I feel auto_ack and explicit “client_ack” are fine.

[1] in practice, all msg stores have limited capacity. Alerts are needed.

y JMS session is Single-Threaded

1 session – 1 transaction.

Compare a database connection: 1 connection – 1 private memory space in the server to hold transaction data, so no multiple concurrent transactions. Even a select is part of a transaction, if you recall the isolation levels (repeatable read…). Usually the same connection is not used by multiple concurrent threads, but there is no system enforcement.

How does any JMS system enforce the single-threaded rule? I doubt there’s any. If you break the rule, then result is perhaps similar to the c++ “undefined behavior”.

Note a Connection can be shared by threads, but a Session can’t. The java tutorial says “You use a connection to create one or more sessions.” Additionally, in the flow diagram anything upstream is thread-safe; anything downstream is single-threaded.

connFactory -> Conn -> Session -> producer/consumers
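Since no JMS provider enforces the rule, one defensive idiom is thread confinement: share the Connection, but give each thread its own Session via a ThreadLocal. `FakeSession` below is an illustrative stand-in, not the JMS API.

```java
public class SessionPerThread {
    /** Stand-in for javax.jms.Session, which is NOT thread-safe. */
    static final class FakeSession {
        final String ownerThread = Thread.currentThread().getName();
    }

    // The (thread-safe) Connection would be shared; each thread lazily
    // creates and keeps its own confined Session.
    private static final ThreadLocal<FakeSession> SESSIONS =
            ThreadLocal.withInitial(FakeSession::new);

    static FakeSession current() { return SESSIONS.get(); }
}
```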

tibrv-CM ^JMS

[[ rv concepts ]] has a concise comparison. A few highlights on CM:

(CM is one of the most important RV use cases in trading.)

* A producer sends a message to consumers directly (peer to peer, decentralized). The producer stores the message until each consumer has acknowledged receipt.

**Note the rv daemon is not a central server. There is usually one daemon on each host. These function a bit like cisco routers in a network. They don’t store or retry messages.

**I think p2p is a defining feature of rv (not only CM) relative to JMS.

* Disk failure on a peer host computer affects only the messages that its programs produce or consume. However, disk mirroring (often done on jms servers) for each individual peer is often impractical.

initiator’s mailbox — a simple MOM model

An overloaded server has a long queue of big queries to service. Say average 5 seconds/query times 2,000 queries in queue. Initiator (eg GUI) should simply send the request to MOM broker and disappear — no tmp queue. Server gets to this query, returns “value” to the broker. Broker should keep the value in the initiator’s private mailbox.

The initiator has 2 ways to check the mailbox.
* onMessage — server push, triggered by incoming connection. However, if onMessage() is a long method, will broker (separate jvm) block waiting for it?
* poll — client pull. Broker could be overwhelmed by too many clients.

The initiator’s mailbox can be a dynamic queue.

For DQ, see http://tim.oreilly.com/pub/a/onjava/2007/04/10/designing-messaging-applications-with-temporary-queues.html

tibRV daemon — briefly

Rendezvous daemon (rvd) processes service all network communications for Rendezvous programs (“my apps”). Every Rendezvous program (in any supported language) uses a Rendezvous daemon, usually on the local host, but possibly on another machine.

* When a program sends a message, rvd transmits it across the network.
* When a program listens to a subject, rvd receives messages from the network, and presents messages with matching subject names to the listening program.

(I think “program” means my app program that I write using RV api.)

Within Rendezvous apps, each network transport object represents a connection (“session”??) to an rvd process. Because rvd is an essential link in Rendezvous communications, if the transport cannot connect to an rvd process that is already running, it automatically starts an rvd daemon.

RVD is like a gateway. RVD sits between the network and “my app”.

http://www.cs.cmu.edu/~priya/WFoMT2002/Pang-Maheshwari.pdf says —
An RV sender app (java or c++) passes the message and destination topic (subject) to RVD. RVD then broadcasts this message using the User Datagram Protocol (UDP) to the entire network. All subscribing computers with RVDs on the network will receive this message. RVD filters the messages, so non-subscribers are not notified.

tibrv daemon ^ agent

Rendezvous Java apps can connect to the network in either of two ways:
* An rvd transport (session?) connects to a Rendezvous daemon process (rvd).
* An rva transport (session?) connects to a Rendezvous agent process (rva), which in turn connects to a Rendezvous daemon process.

For connections to the local network, a direct connection to rvd is more efficient than an indirect connection through rva to rvd. In all situations we recommend rvd transports (read “connections”) in preference to rva transports — except for applets connecting to a remote home network, which must use rva transports.

tibRV event object@@

C# “event” also has a dual-meaning…

In RV, an Event object can represent
1) a listener (or program’s interest in events) and also
2) an “event occurrence” (but NOT a TibrvMsg instance). I think we need to get comfortable with this unusual design.

As described in the “flow” blogpost, probably the main event object we use is the listener object. In fact, Chapter 5 of the java manual seems to suggest that
* TibrvListener is (probably the most important) subclass of TibrvEvent
* TibrvTimer is (probably the 2nd most important) subclass of TibrvEvent
* For beginners, you can think of TibrvEvent.java as an abstract base class of the 2

tibrv-UDP-multicast vs tcp-hub/spoke-JMS

http://narencoolgeek.blogspot.com/2006/01/tibco-rv-vs-tibco-ems.html
http://stackoverflow.com/questions/580858/tibco-rendezvous-and-msmq

Looks like the consensus is
* tibco rv — fastest, high volume, high throughput, less-than-perfect reliability even with certified delivery
* JMS — slower but perfect reliability.

Before adding bells and whistles, core RV was designed for
* efficient delivery — no bandwidth waste
* high volume, high throughput
* multicast, not hub/spoke, not p2p
* imperfect reliability. CM is on top of core RV
* no distributed transaction, whereas JMS standard requires XA. I struggled with a Weblogic jms xa issue in 2006

— simple scenario —

If a new IBM quote has to reach 100 subscribers, JMS (hub/spoke unicast topic) uses tcp/ip to send it 100 times, sequentially [3]. Each subscriber has a unique IP address and accepts this 1 message, and ignores the other 99. In contrast, Multicast sends [4] it once, and only duplicates/clones a message when the links to the destinations split [5]. RV uses a proprietary protocol over UDP.

[5] see my blogpost https://wordpress.com/post/bintanvictor.wordpress.com/32651

[3] http://www.cs.cmu.edu/~priya/WFoMT2002/Pang-Maheshwari.pdf benchmark tests reveal: “The SonicMQ broker requires a separate TCP connection to each of the subscribers. Hence the more subscribers there are, the longer it takes for SonicMQ to deliver the same amount of messages.”
[4] The same benchmark shows the sender is the RVD daemon. It simply broadcasts to the entire network; whoever is interested in the subject will get it.

— P233 [[weblogic definitive guide]] suggests
* a multicast jms broker has a constant workload regardless of how many subscribers. This is because the broker sends it once rather than 100 times as in our example.
* multicast saves network bandwidth. Unicast topic requires 100 envelopes processed by the switches and routers.
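A toy Java model (my own numbers, not from the benchmark) of the bandwidth point: put the subscribers at the leaves of a complete binary tree of switches and count how many links each scheme crosses.

```java
// Toy model: subscribers sit at the leaves of a complete binary tree of
// switches. Unicast crosses depth-many links per subscriber; multicast
// crosses each tree edge exactly once, duplicating only where links split.
class FanoutModel {
    // links crossed when the publisher unicasts to each of 2^depth leaves
    static long unicastLinkCrossings(int depth) {
        long leaves = 1L << depth;
        return leaves * depth;
    }
    // links crossed by one multicast send: every edge of the tree, once
    static long multicastLinkCrossings(int depth) {
        return (1L << (depth + 1)) - 2;   // edge count of a complete binary tree
    }
}
```

With 128 subscribers 7 hops deep, unicast crosses 896 links while multicast crosses only 254.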

a fairly generalized request/reply model – MOM

background — req/rep is indispensable in trading, as it replaces synchronous calls.

I feel the scheme below is overly complex.

Note “server” is a vague term in req/rep. I use “broker” and “service” instead.

* Initiator sends request to message broker.
* broker delivers it to the service provider. I feel async allows an overloaded service to control message rate. Sync could overwhelm the service.
* depending on the acknowledgment mode, the service might send a REQ-ACK right away [1], since the reply might take a long time. A REQ-ACK gives the initiator some useful assurance, so it doesn’t need to guess “why no reply?”.

* x seconds/hours later, service sends the “value” to a private queue (initiator’s mailbox).
* broker delivers it to the initiator, async or sync. See post on initiator’s mailbox.
* depending on the acknowledgment mode, initiator might send a VALUE-ACK.

Generalized, request and value can be considered 2 independent JMS flows, perhaps with 2 independent brokers and 2 independent queues. Zed used this model, as the ringtone/joke/horoscope/…. might get generated after a long delay. If the request is a write operation, then more control is needed.

[1] Further decoupled, an explicit, custom ACK message can go into yet another queue, so the request/reply would use 4 queues. However, I think this is rare. ACK could be a built-in of the broker infrastructure, but perhaps the actual receivers (initiator and service) can issue the ACK.
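The two-flow scheme above can be sketched in plain Java, with in-memory queues standing in for JMS destinations (all names here are mine):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the 2-queue request/value model: the request queue and
// the reply queue (initiator's mailbox) are independent flows, tied together
// only by a correlation id. A real system would use JMS destinations.
class ReqRep {
    static class Msg {
        final String correlationId, body;
        Msg(String correlationId, String body) {
            this.correlationId = correlationId;
            this.body = body;
        }
    }

    static final BlockingQueue<Msg> requestQueue = new LinkedBlockingQueue<>(); // flow 1
    static final BlockingQueue<Msg> replyQueue   = new LinkedBlockingQueue<>(); // flow 2

    // the service drains requests at its own pace; async shields it from overload
    static boolean serviceOneRequest() {
        Msg req = requestQueue.poll();
        if (req == null) return false;
        return replyQueue.offer(new Msg(req.correlationId, "VALUE for " + req.body));
    }

    // the initiator later collects the value and matches it by correlation id
    static Msg pollReply(String correlationId) {
        Msg m = replyQueue.poll();
        return (m != null && m.correlationId.equals(correlationId)) ? m : null;
    }
}
```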

tibrv-CM example usage scenarios #nice

The tibrv documentation [[RV concepts]] (https://docs.tibco.com/pub/rendezvous/8.3.1_january_2011/pdf/tib_rv_concepts.pdf) has good examples of when to use Certified Messaging. (minor edits by me)

Certified delivery is appropriate when a sending program requires individual confirmation of delivery for each message it sends. For example, a traveling sales representative enters sales orders on a laptop computer, and sends them to a central office. The representative must know for certain that the order processing system has received the order he sent.

Certified delivery is also appropriate when a receiving program cannot afford to miss any messages. For example, in an application that processes orders to buy and sell inventory items, each order is important. If any orders are omitted, then inventory records are incorrect. It’s like missing an incremental backup.

Certified delivery is appropriate when each message on a subject builds upon the previous message (with that subject) — in sequence. For example, a sending program updates a receiving database, contributing part of the data fields in a record, but leaving other fields of the record unchanged. The database is correct only if all updates arrive in the order they are sent.

Certified delivery is appropriate in situations of intermittent physical connectivity—such as discontinuous network connections. For example, consider an application in which several mobile laptop computers must communicate with one another. Connectivity between mobile units is sporadic, requiring persistent storage of messages until the appropriate connections are re-established.

keywords of TibRV

* addressing — rv protocol is different from tcp/ip socket. No IP addresses. Rather, rv uses
** Subject-based addressing — For programs to communicate, they must agree upon a subject name at which to rendezvous (hence the name of this product). Subject-based addressing technology enables anonymous rendezvous, an important breakthrough

* rvd — is the name of the daemon. In most environments, one daemon process runs on each host computer.

* transport — one of the most important java classes to developers like us. There’s no jms counterpart, but the best you can get is the Session object + physical delivery options.

* unicast vs multicast. No other choice.
** multicast is prevalent
** unicast — sends a msg to exactly one Unix process id. point to point. inbox name

2 questions on tibRV to a Tibco friend #LS

Q1: Why is certified messaging so widely used in trading systems when CM is not as reliable as JMS? I guess if a system already uses Rendezvous, then there’s motivation to avoid mixing it with JMS. If the given system needs higher reliability than the standard non-guaranteed RV delivery, then the easiest solution is CM. Is that how your users think?

A: I don’t think CM is less reliable than JMS. However, you can say CM’s acknowledgment mechanisms are not as flexible as JMS/EMS. The latter supports several acknowledgment modes. An important advantage of RV is that it supports multicast, which makes broadcasting quotes more efficient. JMS 5 supports something similar for topic subscribers, but not as naturally as RV.

Q2: in JMS, a receiver can 1) poll the queue or 2) wait passively for the queue to call back, using onMessage() — known as a JMS listener. These are the synchronous receiver and asynchronous receiver, respectively. I feel Rendezvous supports asynchronous only, in the form of onMsg(). The poll() or dispatch() methods of the TibrvQueue object don’t return the message, unlike the JMS polling operation.
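The difference can be sketched in Java: a JMS-style receive() hands the message back to the caller, while an RV-style dispatch() hands it to the callback only. The names merely mimic the RV/JMS vocabulary; this is not the tibrvj API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Two receive styles over one toy event queue.
class ReceiveStyles {
    interface Callback { void onMsg(String msg); }

    static class EventQueue {
        private final Queue<String> q = new ArrayDeque<>();
        void post(String msg) { q.add(msg); }

        // JMS-style synchronous receive: the caller gets the message back
        String receive() { return q.poll(); }

        // RV-style dispatch: the message goes to the callback, not the caller
        boolean dispatch(Callback cb) {
            String m = q.poll();
            if (m == null) return false;   // nothing to dispatch
            cb.onMsg(m);
            return true;                   // dispatched, but msg not returned
        }
    }
}
```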

GUI-server communication — HFT (Piroz)

Mostly based on a veteran’s input…

Most GUI clients I have seen use some sort of messaging and you want that to be as asynchronous as possible.  There is no reason to keep the user waiting.  User clicks on something, the client sends a message to the server and the client is now waiting for a response.  When a response comes in (on a separate thread), the GUI takes it and displays whatever it needs.  Some people stick a unique request id in the message so when the response comes they can figure out to which request it belongs.

You also have subscription cases where you subscribe to prices for a security and the server will tell you listen to this channel for all updates.  That should be encapsulated in some sort of handshake.
RV, 29West and JMS are good choices.
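The “unique request id” trick can be sketched like this (class and method names are mine): the GUI thread registers a pending future and returns immediately; the messaging thread completes it when the matching response arrives.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Correlate asynchronous responses back to their requests by request id.
class PendingRequests {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // called on the GUI side: fire the request, don't block the user
    CompletableFuture<String> register(String requestId) {
        CompletableFuture<String> f = new CompletableFuture<>();
        pending.put(requestId, f);
        return f;
    }

    // called on the messaging thread when any response comes in
    void onResponse(String requestId, String payload) {
        CompletableFuture<String> f = pending.remove(requestId);
        if (f != null) f.complete(payload);   // unknown ids are ignored
    }
}
```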

y multicast efficiency imt pub/sub

If a market data update has to reach 100 subscribers, JMS uses tcp/ip to send it 100 times. Each subscriber has a unique IP address and “opens” only the 1 “envelope” addressed to it, ignoring the 99 others that it *receives* on the network (typically ethernet). This is unicast, not broadcast…

Multicast sends the data once, with UDP/IP. This is significant for exchanges or any live feed provider

Certified delivery in RV stores the undelivered message in some file system, which is less reliable than JMS.

JMS acknowledgment – cheatsheet

If you can only remember one thing about JMS Ack, it’s

#1) 2 legs – there’s ACK on both legs of producer => broker => consumer. The 2 ACK can be present or absent independent of each other. See diagrams in [[JMS]]

#2) reliability – one of the most important features of JMS is reliability. Ack is a cornerstone.

tibrv is slightly less reliable, but the CM transport also uses confirmation on each individual message… see other blogpost.

asynchronous: meaning…@@

When people talk about async, they could mean a few things.

Meaning — non-blocking thread. HTTP interaction is not async because the browser thread blocks.
* eg: email vs browser
* eg: http://search.cpan.org/~mewp/sybperl-2.18/pod/sybperl.pod async query

Meaning — initiating thread *returns* immediately, before the task is completed by another thread.

Meaning — no immediate response; less stringent demand on response time. An HTTP server needs to service the client right away since the client is blocking. As a web developer I constantly worry about round-trip duration.

Meaning — delayed response. In any design where a response is provided by a different thread, the requester generally doesn’t need an immediate response. The requester thread goes on with its business.
* eg: email, MQ, producer/consumer
* eg: jms onMessage() vs poll, but RV is purely async
* eg: acknowledgement messages in zed, all-http

You can (but i won’t) think of these “meanings” as “features” of async.

tibRV transport^connection^session

In JMS, session represents (roughly) a connection, or a socket. More precisely, one or more sessions are created from a single live connection. In JMS, sessions are single-threaded.

In RV, transport is the terminology. Means the same as a session, a socket, a connection to the “network”, to the daemon… Transport is a real object in memory.

In hibernate, session represents a connection to the DB. More precisely, a hibernate session HAS-A real connection object in memory

multicast tibrv ^ JMS #1st take

multicast is not supported in the JMS standard. Note JMS unicast (but not multicast?) supports durable subscription.

RV is a good example of multicast. Weblogic JMS also supports multicast.

Multicast (and therefore RV) is more efficient in terms of resources and redundant data transfer.

Q: Unlike TCP, multicast messages are usually not guaranteed to be once-and-only-once?
A: TIBCO offers a transactional augmentation of RV (RVTX) that guarantees a 1-and-only-once delivery; however, this solution essentially converts RV into a hub-and-spoke system. Consequently, very few RV implementations are transactional.

Q: the JMS synchronous receiver (poller) won’t work. You must use async like onMessage()?
A: i think every receiver must stand ready. A receiver can’t decide when to pull from server.

Q: out-of-sequence is possible in multicast and JMS?
A: both possible

guaranteed delivery (GD) vs NGD

I think GD stipulates exactly one delivery for each message, in the expected sequence. That’s the basic expectation in most async transactional environments. For trade booking, it’s the only acceptable delivery. NGD can deliver 0, 1 or many times for the same message. GD requirements include, but are not limited to

* persistent store
* (in the case of pub/sub) durable subscription, so a subscriber can safely go offline
* acknowledgments
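The receiver side of GD implies per-sender sequence bookkeeping, which can be sketched as follows (illustrative names; NGD would simply deliver whatever arrives):

```java
// Per-sender sequence check: drop duplicates, detect gaps, deliver in order.
class SeqChecker {
    private long lastSeq = 0;          // highest in-order seq delivered so far

    enum Verdict { DELIVER, DUPLICATE, GAP }

    Verdict check(long seq) {
        if (seq <= lastSeq) return Verdict.DUPLICATE;   // already seen: drop
        if (seq == lastSeq + 1) { lastSeq = seq; return Verdict.DELIVER; }
        return Verdict.GAP;            // missed messages: ask for redelivery
    }
}
```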

Q: if client is offline and comes online, does GD and NGD behave any different?
A: i guess NGD may not deliver all the missed messages.

Q: if volume is high, does GD vs NGD offer any advantage?
A: multicast is most efficient and is usually NGD.

Q: I feel GD is too expensive for broadcast when volume and subscriber counts are high, right?
A: i think so.

Q: what about messages containing other people’s trades executed on the open market?
A: If you are following a particular security and want to track a particular investor’s position, then you need GD. Like a poker game.

G5 key tibRV objects for Java developers

* msg objects — well-defined entity beans
* transport objects

The 3 objects below build on top of msg and transports … These are described in one single chapter [[Events and Queues]] in the java documentation.

* listener objects
* callback objects, implementing onMsg()
* queue objects and dispatchable interface

Less important:
* dispatcher object (subclass of Thread)


Once you understand the above, you can see that certified messaging builds on top of them:
* CM transport objects
* CM listener objects
* CM review callback — for reviewing “missed calls”
* CM msg objects

java calling win32 tibrvsend.exe

            /**
             * daemon and the arg must be distinct strings;
             * quotes aren't necessary
             */
            String[] strArrayWithoutQuote = new String[] {
                        "C:\\tibrv\\8.3\\bin\\tibrvsend", "-daemon",
                        "localhost:7500", "-service", "9013", "-network",
                        ";239.193.224.50,239.193.224.51;239.193.224.50",
                        "GENERIC.g.TO_MTS_TRADE_REQUEST", xml };
            System.out.println(Arrays.asList(strArrayWithoutQuote));
            execAndWait(strArrayWithoutQuote);
      }

      /**
       * http://www.rgagnon.com/javadetails/java-0014.html says: if you need to
       * pass arguments, it's safer to use a String array, especially if they
       * contain spaces.
       */
      static private void execAndWait(String[] command) { // needs java.io.* and java.util.Arrays imports
            try {
                  Runtime runtime = Runtime.getRuntime();
                  Process p = runtime.exec(command);
                  BufferedReader stdInput = new BufferedReader(
                              new InputStreamReader(p.getInputStream()));
                  String s = null;
                  while ((s = stdInput.readLine()) != null) {
                        System.out.println(s);
                  }
                  BufferedReader stdError = new BufferedReader(
                              new InputStreamReader(p.getErrorStream()));
                  while ((s = stdError.readLine()) != null) {
                        System.err.println(s);
                  }
                  p.waitFor(); // advised to do this after draining both streams
            } catch (IOException e) {
                  throw new RuntimeException(e);
            } catch (InterruptedException e) {
                  throw new RuntimeException(e);
            }
      }

jms q&&a

q: how do u create a topic in a jms provider?
a: one way is xml. create it, hot-deploy it (make sure the server picks it up).

Q: besides topic names, what other names exist in a jndi registry?
A: ejb home names, connection factory names, queue names….

solace^tibcoApplicance #OPRA volume

http://solacesystems.com/news/fastest-jms-broker/ says the solace JMS broker (Solace Message Router) supports 100,000 messages per second in persistent mode and 10 million messages non-persistent. In a more detailed article, http://solacesystems.com/solutions/messaging-middleware/jms/ shows 11 million 100-byte non-persistent messages.

A major sell-side’s messaging platform chief said his most important consideration was the deviation of peak-to-average latency and outliers. A small amount of deviation and (good) predictability were key. They chose Solace.

http://www.hpcwire.com/hpcwire/2009-09-14/solace_systems_sets_the_pace_in_the_race_to_zero_latency.html has good details.

In all cases (Solace, Tibco, Tervela), hardware-based appliances *promise* at least a 10-fold boost in performance compared to software solutions. Latency within the appliance is predictably low, but the end-to-end latency is not. Because of the separate /devices/ and the network hops between them, the best-case latency is in the tens of microseconds. The next logical step is to integrate the components into a single system to avoid all the network latency and intermediate memory copies (including serializations). Solace has demonstrated sub-microsecond latencies by adding support for inter-process communications (IPC) via shared memory. Developers will be able to fold the ticker feed function, the messaging platform, and the algorithmic engine into the same “application” [1], and use shared memory IPC as the data transport (though I feel a single-application design needs no IPC).

For best results you want to keep each “application” [1] on the same multi-core processor, and nail individual application components (like the feed handler and algo engine) to specific cores. That way, application data can be shared between the cores in the Level 2 cache.

[1] Each “application” is potentially a multi-process application with multiple address spaces, and may need IPC.

Benchmark — Solace ran tests with a million 100-byte messages per second, achieving an average latency of less than 700 nanoseconds using a single Intel processor. As of 2009, OPRA topped out at about a million messages per second. OPRA hit 869,109 mps (msg/sec) in Apr 2009.

Solace vs RV appliance — Although Solace already offers its own appliance, it runs other messaging software. The Tibco version runs Rendezvous (implemented in ASIC+FPGA), providing a clear differentiator between the Tibco and Solace appliances.

Solace 3260 Message Router is the product chosen by most Wall St. customers.

http://kirkwylie.blogspot.com/2008/11/meeting-with-solace-systems-hardware.html provides good tech insights.

simple tibrvlisten/tibrvsend example using a winxp local rvd

Start 3 “black” DOS windows A B C.
A)
tibrvlisten -daemon localhost:7500 -service 9012 -network "" your.subject
B)
tibrvsend -daemon localhost:7500 -service 9012 -network "" your.subject "your msg"

## listener and sender both complain about the missing daemon.

C) Now type nothing but “rvd” to start the daemon process. It starts in 0.1 second, reporting the ip endpoint bound + admin website.

Now restart the listener and it starts listening.

Now restart sender – no more errors. Listener window shows the message.

Also, http://localhost:7580/ shows one client only — our listener

Merrill S’pore: fastest stock broadcast

Updates — RV or multicast topic; msg selector

I think this is a typical wall-street interview question for a senior role. System requirement as remembered by my friend the interviewee: ML needs a new relay system to receive real-time stock updates from a stock exchange such as SGX. Each ML client, one of many thousand [1], will install a new client-software [3] to receive updates on the stocks [2] she is interested in. Some clients use algorithmic trading systems and need the fastest feed.

[1] Not clear about the order of magnitude. Let’s target 10,000
[2] Not clear how many stocks per client on average. Let’s target 100.
[3] Maintenance and customer support for a custom client-software is a nightmare and perhaps impractical. Practically, the client-software has to be extremely mature, such as browsers or email clients.

Q: database locking?
A: I don’t think so. only concurrent reading. No write-contention.

Key#1 to this capacity planning is identifying the bottlenecks. Bandwidth might be more severe than the other bottlenecks described below.

Key#2 — 2 separate architectures for algorithmic clients and traditional clients. Each architecture would meet a different minimum latency standard, perhaps a few seconds for traditional and sub-second for algorithmic.

Solution 0: Whatever broadcasting system SGX uses. In an ideal world with no budget constraint, the highest capacity is desired.

Solution 2: no MQ? No asynchronous transmission? As soon as an update is received from SGX, the relay calls each client directly. Server-push.

Solution 1: MQ — the standard solution in my humble opinion.

Solution 1A: topics. One topic per stock. If 2000 clients want IBM updates, they all subscribe to this topic.

Q: client-pull? I think this is the bottleneck.

Q: Would Client-pull introduce additional delays?

Solution 1B: queues. one queue for each client each stock.

If 2000 clients want IBM updates, the Relay needs to make that many copies of each update and send them to that many queues — duplication of effort. I think this is the bottleneck, though I am not absolutely sure it affects relay system performance. Massively parallel processing would be required, with thousands of native CPU threads (not java green threads).
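A toy Java accounting of 1A vs 1B, with made-up numbers: with topics, the relay publishes once per update and the broker/multicast layer fans out; with per-client queues, the relay itself enqueues once per interested client.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Count how many sends the relay performs per market-data update under
// each design. Names and numbers are illustrative.
class RelayWorkload {
    static final Map<String, Set<String>> subscribers = new HashMap<>();

    static void subscribe(String client, String stock) {
        subscribers.computeIfAbsent(stock, k -> new HashSet<>()).add(client);
    }
    // solution 1A: one publish to the stock's topic, broker fans out
    static int relaySendsTopic(String stock) { return 1; }
    // solution 1B: one enqueue per interested client
    static int relaySendsQueues(String stock) {
        return subscribers.getOrDefault(stock, Set.of()).size();
    }
}
```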

JMS guaranteed delivery — guaranteed by who@@

guaranteed delivery (GD) is a “service” provided by the JMS infrastructure — the part of the system outside your custom application code. It's also known as the vendor API, which is software provided by IBM, Tibco, BEA etc. A formal name is “JMS provider”. It helps to think of this infrastructure as encrypted source code (ESC).

To provide GD (or any JMS service), this infrastructure must include client-side ESC — formally known as client-runtime.

ESC is simpler in the JDBC world. Sybase provides both a server and a client-side ESC. Your code calls the client-side ESC as a lower-layer API. eSpeed also ships a client-side ESC: you link it into your custom trading engine, and the client-side ESC talks to the eSpeed server.

In tibrv, the daemon runs in each client host and is comparable to a client-side ESC but
$ it runs in a separate process
$ tibrv daemons are peer-to-peer, no client no server.

onMessage() is one of the most important parts of the client-side ESC. But back to GD — the ESC supports

* Store and Forward
* Ack
* auto_ack is sent by the client ESC
* retry at each leg (sender->broker->receiver)
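Those features can be sketched together in a few lines of Java. The “store” is an in-memory map here; a real provider persists it to disk (names are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Store-and-forward with ack-driven cleanup: a message stays stored and
// eligible for retry until the receiver acknowledges it.
class StoreAndForward {
    private final Map<Long, String> store = new LinkedHashMap<>();  // unacked messages
    private long nextId = 1;

    long send(String msg) { long id = nextId++; store.put(id, msg); return id; }

    // one delivery attempt; receiver returns true to ack
    boolean deliver(long id, java.util.function.Predicate<String> receiver) {
        String msg = store.get(id);
        if (msg == null) return true;            // already acked earlier
        if (receiver.test(msg)) { store.remove(id); return true; }  // ack => forget
        return false;                            // no ack => keep for retry
    }
    int unacked() { return store.size(); }
}
```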

ambiguous terms to avoid in MOM discussions

synchronous – really means “blocking”

asynchronous – really means onMessage()

client – both a sender and receiver are clients of the broker, if you ask me.

server – “broker” is more specific. In a real world request/response system, A can ask B and B can ask A.

(However, in tibrv, there’s no broker. All rvd instances are peers. Decentralized.)

acknowledgment – which leg on the 2 legged flow — sender->broker->receiver?

php-servlet, php-JMS

C) There are at least 2 other php/java glues — popular frontier and playground

A) from http://www.php.net/java

      “There are two possible ways to bridge PHP and Java: you can either integrate PHP into a Java Servlet environment, which is the more ***stable*** and efficient solution, or integrate Java support into PHP”, using the extension on this page.

Notice that the home page of the 2nd solution says the 1st solution is more stable and efficient.

Proof of this java-extension-in-php? A 2004 OnJava article on *JMS* in PHP: http://www.onjava.com/pub/a/onjava/2004/10/27/php-jms.html
 

B)
      ” The Zend Platform PHP/Java Integration Bridge allows companies who have investments in J2EE application servers to take advantage of PHP. In addition, the Integration Bridge allows companies using PHP to take advantage of J2EE services. “

sql to list mutual funds

We don’t know when the database may get updated; we only know updates are very rare.

Jingsong J. Feng if the change frequency is very low, create a trigger in the database, when data change, trigger a query, and update result

Jingsong J. Feng schedule a Thread to run it every minute or every 5 minutes
Bin X. Tan/VEN… to update the local cache?
Jingsong J. Feng yes

Jingsong J. Feng oh, set a timeout variable — Hibernate

How about an update/insert/delete trigger sending a GET request to our servlet to expire the cache immediately? The next visitor to the website would then see the updated list, fresh from the database.

If there’s a jms broker, trigger can send a cache-expiry alert to the queue even when the web system is unavailable. When the web site comes up again, it receives the jms message and expires the cache?

Without a JMS broker, the GET request could update some persistent expiry-flag on the cache server. Even with frequent reboots, the cache server could check the expiry-flag periodically.
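A minimal Java sketch of the expiry-flag idea (names are mine, and this checks the flag on each read rather than periodically): the trigger’s GET merely flips a flag, and the cache reloads lazily on the next read.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Cache that reloads from its loader only after expire() has been called.
class ExpirableCache {
    private final AtomicBoolean expired = new AtomicBoolean(true); // start empty
    private final Supplier<String> loader;   // e.g. the mutual-fund SQL query
    private String cached;
    private int dbHits = 0;

    ExpirableCache(Supplier<String> loader) { this.loader = loader; }

    void expire() { expired.set(true); }     // what the trigger's GET would invoke

    String get() {
        if (expired.compareAndSet(true, false)) { cached = loader.get(); dbHits++; }
        return cached;
    }
    int dbHits() { return dbHits; }
}
```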

Q: Can a trigger send a GET?
A: yes

Q: Can a trigger be a jms client?
A: Not so common. http://www.oracle.com/technology/products/integration/bam/10.1.3/TechNotes/TechNote_BAM_AQ_Configuration.pdf
(AQ = Oracle Advanced Queuing )

jms AR notes, starring IBM-MQ

jdbc is a gateway to databases; jms is a gateway to MOM servers such as IBM MQSeries server, SunONE MQ server.

Q: is session needed for transaction or something else?
A: i think there’s a conversation state with the broker/server, just like DB connection session.

Confused about the jargons — jms-broker, jms-server, MOM-server, exactly what’s on the opposite end to a jms sender/receiver?

Point #1: A JMS connection sometimes requires 2 jvm’s — the client jvm and the broker/server jvm. Consider pushmail (ActiveMQ as a separate jvm). As a source of confusion, the jboss workbook and the Medrec tutorials probably use a single jvm for client and the broker.

Point#0: IBM MQ Series is definitely a separate unix process. See http://www.ibm.com/developerworks/websphere/library/techtip/pwd/wsmq_integration.html, servlet->MQSeries code sample

MQ servers made by Tibco and IBM

Hi LS,
 
A technical question for your consideration when you have time. I just don't have a lot of time for such topics of curiosity.
 
I learnt that a typical message-driven-bean container can only support a JMS broker made by the same vendor. For example, someone suggested perhaps weblogic MDB only works with weblogic-mq.
 
However, I guess ibm mq is a popular MOM infrastructure used in non-websphere environments. Similarly, Tibco's MQ broker is probably used in ejb containers made by other software makers such as BEA, Sun, Oracle.
 
How could (or couldn't?) a SunONE ejb container work with a JMS broker from Tibco?
 
Is it because Tibco MQ isn't a JMS broker but a more general purpose MOM server?
 
Thanks and have a lot of exercise!
 
tan bin

Fwd: commit/rollback ] jms pub-sub means …

when the jms tx mgr executes a commit
– each msg produced in the tx is committed to the queue@@
– consumed msg is removed from queue@@
And the consumer must send ack to broker

when a rollback takes place
– msg produced in the tx is destroyed@@
– msg consumed in the tx is put back into the queue@@
And broker must /redeliver/ it (push instead of the point-to-point pull)
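The commit/rollback semantics above can be modelled in a few lines of Java. This is an in-memory model, not the javax.jms.Session API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Produced messages are buffered until commit; consumed messages reappear
// on the queue (for redelivery) after rollback.
class TxSession {
    final Deque<String> queue = new ArrayDeque<>();
    private final List<String> producedInTx = new ArrayList<>();
    private final List<String> consumedInTx = new ArrayList<>();

    void send(String m) { producedInTx.add(m); }   // not visible until commit
    String receive() {
        String m = queue.poll();
        if (m != null) consumedInTx.add(m);
        return m;
    }

    void commit() { queue.addAll(producedInTx); clearTx(); } // sends land; consumes final
    void rollback() {
        for (int i = consumedInTx.size() - 1; i >= 0; i--)   // restore original order
            queue.addFirst(consumedInTx.get(i));             // broker must redeliver
        clearTx();                                           // produced msgs destroyed
    }
    private void clearTx() { producedInTx.clear(); consumedInTx.clear(); }
}
```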

perf tips4pushmail

based on http://www.java-interview.com/Java_Performance_Interview_Questions.html

  • worker threads: Maximize thread lifetimes and minimize thread creation/destruction cycles.
  • Use a separate timer thread to timeout socket operations
  • my EJB: Use the transient keyword to define fields to avoid having those fields serialized. Examine serialized objects to determine which fields do not need to be serialized for the application to work.
  • Start producer connection (dispatcher) after you start consumer (worker).
  • Control transactions by using a separate transactional session for transactional messages and a non-transactional session for non-transactional messages.
  • Choose non-durable JMS wherever appropriate to avoid persistence overhead.
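The “separate timer thread to timeout socket operations” tip can be sketched with a Future timeout. Names are mine, and a Thread.sleep stands in for the blocked socket read:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Run a (potentially blocking) operation with a timeout enforced by a
// second thread; on timeout we would close the socket to unblock the op.
class TimedOp {
    static boolean runWithTimeout(Runnable blockingOp, long timeoutMs) {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            Future<?> f = worker.submit(blockingOp);
            f.get(timeoutMs, TimeUnit.MILLISECONDS);   // the "timer" leg
            return true;                               // finished in time
        } catch (TimeoutException e) {
            return false;                              // would close the socket here
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            worker.shutdownNow();                      // interrupt the blocked op
        }
    }
}
```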

JMS Q&A, again

J4: Looks like all my coding experience (++ Weblogic MQ, JbossMQ, ActiveMQ) doesn’t give me the nlg to appear experienced in front of the interviewer. Many answers are found in EJB book.

Q: major jms-specific objects involved for sending a msg to a Topic?
Hints: TopicConnectionFactory, TopicSession…
A: topic conn factory <- jndi contains an entry for the factory
factory -> topic conn obj
topic conn -> session obj
topic <- jndi
session, topic -> topic publisher

Q: there’s a simple but important method for MDB and non-MDB alike?
A: onMessage()

Q: you know a jms program in a container needs to query JNDI (for Session, Queue names ….) but how about a stand-alone java app?
A: needed too. see P334

Q: (vague question) how essential is onMessage()?
A: Not only MDB, but also a stand-alone java app that subscribes to a jms queue/topic needs it. P335 [[ ejb ]]
-> Q: what other methods in the stand-alone guy?
-> A: it needs a way to bind to the queue/topic, often by JNDI

Q: which interface/abstract-class requires onMessage()?
A: MessageListener iface, which has only one method