[10] why memory footprint is key to latency #JGC

See also the post on large in-memory search.

I suspect there’s a dilemma —

  • a large heap allocation request -> the free-list search becomes harder and slower, so ad-hoc, unplanned requests for large chunks should be avoided.
  • many small heap allocation requests -> the free-list manager becomes a hotspot.
  • In reality, pre-allocating large arrays up front is probably a performance win.
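The pre-allocation idea can be sketched as a simple buffer pool: pay the large allocations once at startup, then hand out and recycle buffers on the hot path so the allocator (and its free list) is never consulted per request. This is a minimal illustration; the names `BufferPool` and `BUF_SIZE` are made up for this sketch, not from any library.

```java
import java.util.ArrayDeque;

public class BufferPool {
    private static final int BUF_SIZE = 1 << 20;   // 1 MiB per buffer (illustrative)
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public BufferPool(int count) {
        // All large allocations happen here, once, at startup.
        for (int i = 0; i < count; i++) free.push(new byte[BUF_SIZE]);
    }

    // Hot path: O(1) pop, no heap allocation, no free-list search.
    public byte[] acquire() {
        byte[] b = free.poll();
        if (b == null)
            throw new IllegalStateException("pool exhausted"); // fail fast rather than allocate ad hoc
        return b;
    }

    // Return a buffer for reuse instead of letting it become garbage.
    public void release(byte[] b) { free.push(b); }
}
```

A real pool would add bounds checks and thread safety, but the point stands: the sizing decision is made once, not per request.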

I used to think that in low latency, time costs outweigh space costs, but no: garbage collection is a major performance issue in low-latency systems, and so is object creation. That’s why memory efficiency affects latency. GC takes less CPU time when there’s less stuff to scan.
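The object-creation point can be made concrete. Below, one loop allocates a fresh object per event (short-lived garbage the GC must later trace and collect), while the other mutates a single pre-allocated instance, so the loop itself produces no garbage at all. `Tick` is an illustrative name, not from any real feed API.

```java
public class TickLoop {
    static final class Tick { double price; long qty; }

    // Per-event allocation: N short-lived objects for the GC to deal with.
    static double sumAllocating(double[] prices, long[] qtys) {
        double notional = 0;
        for (int i = 0; i < prices.length; i++) {
            Tick t = new Tick();                 // new garbage every iteration
            t.price = prices[i]; t.qty = qtys[i];
            notional += t.price * t.qty;
        }
        return notional;
    }

    // Flyweight reuse: zero allocation inside the loop, nothing new to trace.
    static double sumReusing(double[] prices, long[] qtys) {
        Tick t = new Tick();                     // one instance, allocated up front
        double notional = 0;
        for (int i = 0; i < prices.length; i++) {
            t.price = prices[i]; t.qty = qtys[i];
            notional += t.price * t.qty;
        }
        return notional;
    }
}
```

Both produce identical results; only the allocation profile differs, which is exactly what shows up in GC pause behavior. (Running with `-verbose:gc` makes the difference visible.)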

Distributed cache (memory virtualization) isn’t much used in low latency systems, possibly because of

  • serialization cost
  • network latency
  • TCP? I feel the fastest HFT systems have very limited IO, perhaps in the form of shared memory. FIX is based on TCP, so I would assume very high overhead, but in reality it may be a small overhead.
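The shared-memory style of IO hinted at above can be sketched with the standard JDK API: two processes on the same host map the same file into memory, so passing data is a plain memory store/load with no socket stack or serialization framework on the path. The file name here is illustrative; real systems layer a ring-buffer protocol on top.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMem {
    // Map `size` bytes of `file` read-write; writer and reader each call this.
    public static MappedByteBuffer map(Path file, int size) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // The mapping stays valid after the channel is closed; puts/gets
            // on it are ordinary memory accesses, not syscalls per message.
            return ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }
}
```

A writer process would `map(...).putLong(0, value)` and a reader on the same machine would see it through its own mapping of the same file, which is the basic mechanism behind shared-memory IPC transports.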