I feel that in most environments the MOM design is the most robust, since it relies on reliable middleware. However, latency-sensitive trading systems won't tolerate the additional hop and see it as unnecessary latency.
Gregory (ICE) told me about his home-grown ring buffer in shared memory. He used a circular byte array; message boundaries are embedded in the payload. When the producer finishes writing to the buffer, it places a marker to indicate the end of data. Greg said the consumer is slower, so he made it a (periodic) polling reader; when the consumer encounters the marker, it stops reading. I told Gregory we need some synchronization. Greg said it's trivial. Here are my tentative ideas —
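To make the layout concrete, here is a minimal sketch of what such a region might look like, assuming POSIX shared memory and a single producer plus a single consumer. The names (`RingHeader`, `kCapacity`, `/greg_ring`, `map_ring`) are mine for illustration, not Greg's:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

constexpr std::size_t kCapacity = 1 << 20;  // 1 MiB circular byte array

struct RingHeader {
    // "The marker": index one past the last committed byte.
    std::atomic<uint64_t> end{0};
    // The circular byte array follows this header in the same mapping.
};

// Map the shared region; the producer side creates and sizes it.
RingHeader* map_ring(bool create) {
    int fd = shm_open("/greg_ring", create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
    if (fd < 0) return nullptr;
    const std::size_t total = sizeof(RingHeader) + kCapacity;
    if (create && ftruncate(fd, static_cast<off_t>(total)) != 0) {
        close(fd);
        return nullptr;
    }
    void* p = mmap(nullptr, total, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? nullptr : static_cast<RingHeader*>(p);
}
```

One way to "embed the message boundary in the payload" is a 4-byte length prefix in front of each message body; the later sketches assume exactly that framing.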
Design 1 — coarse-grained locking: every time the producer or the consumer starts an operation, it acquires a lock.
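A sketch of what that could look like on my assumed layout: a pthread mutex living inside the shared mapping itself, initialized as process-shared, with both sides bracketing their whole operation. `LockedRing` and `init_lock` are again my names:

```cpp
#include <pthread.h>
#include <cstdint>

struct LockedRing {
    pthread_mutex_t mtx;  // lives in the shared mapping, visible to both processes
    uint64_t end;         // the marker, now protected by mtx
    // payload bytes follow
};

// Run once, by whichever process creates the region.
void init_lock(LockedRing* r) {
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&r->mtx, &a);
    pthread_mutexattr_destroy(&a);
}

// Both sides then wrap every read or write:
//   pthread_mutex_lock(&r->mtx);   ... produce or consume ...   pthread_mutex_unlock(&r->mtx);
```

The cost is that the producer and consumer serialize even when they touch disjoint parts of the array, which motivates Design 2.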
But when the consumer is chipping away at the head of the queue, the producer can simultaneously write to the tail, so here's
Design 2 — the latest message being written is "invisible" to the consumer. The producer keeps the marker unchanged while adding data to the tail of the queue; when it has nothing more to write, it publishes everything at once by updating the marker.
The marker can be a lock-protected integer representing the index of the last byte written.
One thing Design 2 doesn't address: buffer capacity and a very slow consumer. Once the producer wraps around the circular array, it must either block or overwrite bytes the reader hasn't consumed yet.
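Here is a sketch of Design 2's publish protocol on the `RingHeader` layout above, with one deviation: instead of a lock-protected integer I use a `std::atomic<uint64_t>` marker with release/acquire ordering, which gives the same "invisible until published" guarantee for a one-word marker without a lock. Still single producer, single consumer, and deliberately no wrap-around handling:

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

char* ring_bytes(RingHeader* h) { return reinterpret_cast<char*>(h + 1); }

// Producer: copy the [len][payload] frame after the current marker, then
// publish. The in-flight frame stays invisible until the single release store.
bool produce(RingHeader* h, const void* msg, uint32_t len) {
    uint64_t end = h->end.load(std::memory_order_relaxed);  // only we write it
    if (end + sizeof(len) + len > kCapacity) return false;  // no wrap handling here
    std::memcpy(ring_bytes(h) + end, &len, sizeof(len));
    std::memcpy(ring_bytes(h) + end + sizeof(len), msg, len);
    h->end.store(end + sizeof(len) + len, std::memory_order_release);
    return true;
}

// Consumer: poll the marker, then consume every complete frame behind it.
// The acquire load guarantees the frame bytes are visible before we read them.
uint64_t consume(RingHeader* h, uint64_t read_pos) {
    const uint64_t end = h->end.load(std::memory_order_acquire);
    while (read_pos < end) {
        uint32_t len;
        std::memcpy(&len, ring_bytes(h) + read_pos, sizeof(len));
        // handle_message(ring_bytes(h) + read_pos + sizeof(len), len);
        read_pos += sizeof(len) + len;
    }
    return read_pos;  // caller keeps this privately and passes it back next poll
}
```

The consumer owns `read_pos` privately, so the only shared mutable word is the marker, and the release store is the single point where a whole batch of frames becomes visible.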
| | MOM | UDP multicast / TCP / UDS | shared_mem |
| --- | --- | --- | --- |
| how many processes | 3-tier | 2-tier | 2-tier |
| 1-to-many distribution | easy | easiest | doable |
| intermediate storage | yes | tiny (a socket buffer can be 256 MB) | yes |
| producer data burst | supported | message loss is common in such a situation | supported |
| async? | yes | yes; the receiver must poll or be notified | yes, I think; the receiver must poll or be notified |
| additional latency | yes | yes | minimal |