why memory footprint hurts latency #my take

See also the post on large in-memory search.

Reason: the larger the heap to scan, the slower the GC (though paradoxically, a large heap helps you postpone the GC). The less memory you use, the lower your GC penalty.
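If you want to watch this effect on your own JVM, the JMX garbage-collector beans report cumulative collection counts and pause time as the live set grows. A minimal sketch; the class name, allocation size, and iteration count are arbitrary, and you may need a larger -Xmx:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcCostDemo {
    public static void main(String[] args) {
        // Run with a larger heap if needed, e.g. java -Xmx1g GcCostDemo
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 5_000; i++) {
            retained.add(new byte[64 * 1024]);   // grow the live set by ~64 KB per step
            if (i % 1_000 == 0) {
                long gcCount = 0, gcMillis = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcCount  += gc.getCollectionCount();
                    gcMillis += gc.getCollectionTime();
                }
                long usedMb = (Runtime.getRuntime().totalMemory()
                        - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
                System.out.printf("live ~%d MB, GC runs=%d, total GC time=%d ms%n",
                        usedMb, gcCount, gcMillis);
            }
        }
    }
}
```

As the retained set grows, total GC time climbs faster than the allocation rate alone would suggest.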

Reason: favor an in-memory data store over disk. In-memory can mean remote or local (local is best). The smaller your footprint, the easier this is.
Reason: favor a single-VM design (serialization-free, network-free). The smaller the footprint, the easier.
Reason: serialization (for IPC, cache replication, network, or disk) is slower for larger footprints.
Reason: allocation is costly, even for a 32-bit int. Look at shared_ptr and circular arrays (see the ring-buffer sketch after this list).
Reason: minimize object creation (i.e. allocation), the standard optimization advice.
Reason: reduce GC frequency.
Reason: reduce GC workload, i.e. the amount of memory to scan. If you must incur a full GC, make it short.
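To act on the allocation points above (circular arrays, minimizing object creation), one common pattern is a pre-allocated ring buffer whose slots are reused instead of newing an object per event. A minimal single-threaded sketch; the Event payload type is made up for illustration:

```java
// Pre-allocated ring buffer: all Event objects are created once up front,
// then reused, so the steady-state path allocates nothing and gives the
// GC nothing new to scan.
public class EventRing {
    static final class Event {          // hypothetical payload type
        long timestamp;
        int  value;
    }

    private final Event[] slots;
    private final int mask;             // capacity must be a power of two
    private long writeSeq = 0;

    EventRing(int capacityPowerOfTwo) {
        slots = new Event[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
        for (int i = 0; i < slots.length; i++) {
            slots[i] = new Event();     // one-shot allocation at startup
        }
    }

    /** Overwrites the oldest slot; no allocation on the hot path. */
    void publish(long timestamp, int value) {
        Event e = slots[(int) (writeSeq++ & mask)];
        e.timestamp = timestamp;
        e.value = value;
    }
}
```

The same idea (fixed slots, overwrite the oldest) is what makes circular arrays attractive for hot paths: allocation happens once, not per message.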

Reason: compared to graphs, arrays enjoy 1) random access and 2) one-shot memory allocation, but if each element is bulky[2], then a long array requires too large a contiguous block of memory, which is hard to allocate. The smaller your object, the easier.

[2] Note that a Java object array stores references (typically 32-bit compressed pointers), not the objects themselves.
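To make the footnote concrete, here is a rough sketch (class and field names invented) contrasting an array of objects, where the contiguous block holds only references, with parallel primitive arrays that keep the actual data contiguous and much smaller:

```java
// Two layouts for one million quotes; field names are invented.
public class ArrayLayoutDemo {
    static final class Quote {                 // array-of-objects element
        long time;
        double price;
    }

    public static void main(String[] args) {
        // Array-of-objects: quotes[] is one contiguous block of references,
        // but each Quote lives as a separate heap object (header + fields).
        Quote[] quotes = new Quote[1_000_000];
        for (int i = 0; i < quotes.length; i++) quotes[i] = new Quote();

        // Struct-of-arrays: the same data as two contiguous primitive arrays,
        // no per-element object header and no pointer chasing.
        long[]   times  = new long[1_000_000];
        double[] prices = new double[1_000_000];

        System.out.println(quotes.length + " quotes vs "
                + times.length + " + " + prices.length + " primitive slots");
    }
}
```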

Major technique: favor iterative over recursive algorithms to reduce stack memory footprint.
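A minimal sketch of this technique, assuming a simple singly-linked Node type: the recursive version burns one stack frame per element, while the loop runs in constant stack.

```java
// Recursive vs iterative list length: the recursive form uses one stack frame
// per node (and overflows on long lists); the loop uses constant stack.
public class IterativeVsRecursive {
    static final class Node {               // assumed singly-linked node type
        final Node next;
        Node(Node next) { this.next = next; }
    }

    // O(n) stack frames: deep lists throw StackOverflowError.
    static int lengthRecursive(Node head) {
        return head == null ? 0 : 1 + lengthRecursive(head.next);
    }

    // O(1) stack: the loop keeps only two local variables.
    static int lengthIterative(Node head) {
        int n = 0;
        for (Node cur = head; cur != null; cur = cur.next) n++;
        return n;
    }

    public static void main(String[] args) {
        Node head = null;
        for (int i = 0; i < 1_000_000; i++) head = new Node(head);
        System.out.println(lengthIterative(head));   // fine
        // lengthRecursive(head);                    // would blow the stack
    }
}
```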
