server-push update ^ TTL ^ conditional-GET # write-through is not cache expiration
Few online articles list these solutions explicitly. Some are simple concepts, but they are fundamental to DB tuning and app tuning. https://docs.oracle.com/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#COHDG198 compares write-through ^ write-behind ^ refresh-ahead. I think refresh-ahead is similar to TTL.
B) cache-invalidation — some “events” would trigger an invalidation. Without invalidation, a cache item would live forever with an infinite TTL, like the list of China provinces.
After a cache proxy receives the invalidation message (a small, bandwidth-friendly payload), it discards the outdated item and can decide when to request an update. The request may be skipped entirely if the item is no longer needed.
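A minimal sketch of this flow (all names are hypothetical): the proxy drops the item on invalidation and only refetches lazily, on the next read.

```python
# Hypothetical sketch: a cache proxy that discards an item when it receives
# a small invalidation message, then lazily refetches only on the next read.

class CacheProxy:
    def __init__(self, loader):
        self.loader = loader          # callable that fetches fresh data from the DB
        self.store = {}
        self.fetch_count = 0

    def get(self, key):
        if key not in self.store:     # miss or previously invalidated: lazy fetch
            self.store[key] = self.loader(key)
            self.fetch_count += 1
        return self.store[key]

    def on_invalidate(self, key):
        # The message carries only the key -- bandwidth-friendly.
        self.store.pop(key, None)     # discard; do NOT refetch eagerly

proxy = CacheProxy(loader=lambda k: f"row-for-{k}")
proxy.get("provinces")                # first read: fetched from DB
proxy.get("provinces")                # served from cache, no DB hit
proxy.on_invalidate("provinces")
# If "provinces" is never requested again, no refetch ever happens.
```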
B2) cache-update by server push — IFF bandwidth is available, the server can send not only a tiny invalidation message, but also the new cache content.
IFF combined with TTL, or with reliability added, multicast can be used to deliver cache updates, as explained in my other blog posts.
T) TTL — more common. Each “cache item” embeds a time-to-live data field, a.k.a. an expiry timestamp. The HTTP cookie is the prime example.
In Coherence, it’s possible for the cache proxy to pre-emptively request an update on an expired item. This reduces latency but requires a multi-threaded cache proxy.
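The TTL idea can be sketched as follows (hypothetical names; the `now` parameter is only there to make the example deterministic): each stored item carries its own expiry timestamp, and a read past expiry counts as a miss.

```python
# Hypothetical sketch: each cache item embeds an expiry timestamp; a read
# past expiry is treated as a miss and triggers a refetch from the loader.

import time

class TTLCache:
    def __init__(self, loader, ttl_seconds):
        self.loader = loader
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, expiry_timestamp)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is None or now >= entry[1]:      # missing or expired
            value = self.loader(key)
            self.store[key] = (value, now + self.ttl)
            return value
        return entry[0]

loads = []                            # track how often the DB is hit
def loader(key):
    loads.append(key)
    return key.upper()

cache = TTLCache(loader, ttl_seconds=60)
cache.get("x", now=0)                 # miss: loads, expiry stamped at 60
cache.get("x", now=59)                # still fresh, served from cache
cache.get("x", now=61)                # expired, reloaded with a new expiry
```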
G) conditional-GET in HTTP is a proven, industrial-strength solution described in my 2005 book [[computer networking]]. The cache proxy always sends a GET to the database, but with an If-Modified-Since header. This reduces unnecessary database load and network load.
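The mechanism can be sketched like this (hypothetical classes, integer timestamps standing in for HTTP dates): the proxy always asks, but an unchanged item comes back as a body-less 304, so only tiny messages cross the network.

```python
# Hypothetical sketch of conditional-GET: the origin replies 304 (headers
# only, no body) when the item is unchanged since the proxy's copy.

class Origin:
    def __init__(self):
        self.value, self.mtime = "v1", 100

    def conditional_get(self, if_modified_since):
        if if_modified_since is not None and self.mtime <= if_modified_since:
            return 304, None          # not modified: tiny response, no body
        return 200, (self.value, self.mtime)

class Proxy:
    def __init__(self, origin):
        self.origin = origin
        self.value, self.mtime = None, None

    def get(self):
        status, payload = self.origin.conditional_get(self.mtime)
        if status == 200:             # full body: refresh the local copy
            self.value, self.mtime = payload
        return self.value             # on 304, serve the cached copy

p = Proxy(Origin())
p.get()                               # 200: first full fetch
p.get()                               # 304: cached copy reused
p.origin.value, p.origin.mtime = "v2", 200
p.get()                               # 200: updated copy fetched
```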
W) write-behind (asynchronous) or write-through — in some contexts, the cache proxy handles not only Reads but also Writes. Read requests read from or add to the cache, and Write requests update both the cache proxy and the master data store. Drawback — in a distributed topology, updates from other sources are not visible to “me” the cache proxy, so I still rely on one of the other three means.
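The two write policies can be contrasted in a short sketch (hypothetical names; a plain dict stands in for the master data store): write-through hits the store synchronously on every write, while write-behind queues the update and flushes later.

```python
# Hypothetical sketch: write-through vs write-behind cache proxies.

from collections import deque

class WriteThroughProxy:
    def __init__(self, db):
        self.db, self.cache = db, {}

    def write(self, key, value):
        self.cache[key] = value
        self.db[key] = value          # synchronous write to the master store

class WriteBehindProxy:
    def __init__(self, db):
        self.db, self.cache = db, {}
        self.pending = deque()

    def write(self, key, value):
        self.cache[key] = value
        self.pending.append((key, value))   # defer the DB write

    def flush(self):                        # e.g. run periodically in the background
        while self.pending:
            key, value = self.pending.popleft()
            self.db[key] = value

db = {}
wb = WriteBehindProxy(db)
wb.write("k", 1)
# db is still empty here -- the write sits only in the cache + queue
wb.flush()
# now db["k"] == 1
```

Note that with write-behind, a reader going straight to the DB between the write and the flush sees stale data, which is the asynchrony trade-off.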
|scenario|TTL|server-push update|conditional-GET|
|---|---|---|---|
|frequent query, infrequent updates|efficient|efficient|frequent but tiny requests between DB and cache proxy|
|latency important|OK|lowest latency|slower lazy fetch, though efficient|
|infrequent query|good|wastes DB/proxy/NW resources as “push” is unnecessary|efficient on DB/proxy/NW|
|frequent update|unsuitable|high load on DB/proxy/NW|efficient conflation|
|frequent update + query|unsuitable|can be wasteful|perhaps most efficient|