Client-Side Caching

Client-side caching in Valkey GLIDE stores responses from cacheable read commands in a local in-memory cache on the client, reducing network round-trips and server load. When a cached command is issued again, the response is served directly from local memory without contacting the server.

  1. When a cacheable read command is executed, GLIDE first checks the local cache.
  2. On a cache miss, the command is sent to the server, and the response is stored locally.
  3. On a cache hit, the cached value is returned immediately without a network call.
  4. Entries expire based on their configured TTL. Expiration is lazy — entries are removed when accessed after their TTL has elapsed, not proactively in the background.
  5. When the cache reaches its memory limit, entries are evicted according to the configured eviction policy.
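
The steps above can be sketched as a small read-through cache. This is an illustrative, self-contained sketch only — the class and function names are hypothetical and none of them come from the GLIDE API:

```python
import time

class LocalCacheSketch:
    """Toy model of the lookup flow: lazy TTL expiration plus eviction."""

    def __init__(self, max_entries, ttl_seconds):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, stored_at)

    def get(self, key, fetch_from_server):
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            # Step 4: lazy expiration — the entry is removed only when it
            # is accessed after its TTL has elapsed, not in the background.
            if time.monotonic() - stored_at < self.ttl:
                return value  # Step 3: cache hit, no network call
            del self.store[key]
        # Step 2: cache miss — fetch from the server and store locally.
        value = fetch_from_server(key)
        if len(self.store) >= self.max_entries:
            # Step 5: make room; a real cache would apply LRU or LFU here.
            self.store.pop(next(iter(self.store)))
        self.store[key] = (value, time.monotonic())
        return value

calls = []
def fake_server(key):
    calls.append(key)
    return f"value-of-{key}"

cache = LocalCacheSketch(max_entries=2, ttl_seconds=60)
cache.get("a", fake_server)  # miss: contacts the "server"
cache.get("a", fake_server)  # hit: served from local memory
print(len(calls))            # -> 1
```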

Only the following read commands are cached:

Command   Description
GET       Retrieve a string value
HGETALL   Retrieve all fields from a hash
SMEMBERS  Retrieve all members from a set

All other commands bypass the cache entirely. Write commands (SET, HSET, SADD, etc.) are never cached.

  • NIL responses — If a key does not exist, the nil response is not stored in the cache.
  • Entries larger than the cache — If a single entry exceeds maxCacheKb, it is silently skipped.
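
These two admission rules can be illustrated with a short sketch. The `should_cache` helper and the byte-size check are assumptions for illustration, not GLIDE internals:

```python
import sys

MAX_CACHE_KB = 1024  # hypothetical limit matching maxCacheKb

def should_cache(value):
    if value is None:
        return False  # NIL responses are never stored
    if sys.getsizeof(value) > MAX_CACHE_KB * 1024:
        return False  # entries larger than the whole cache are skipped
    return True

print(should_cache(None))                     # -> False
print(should_cache("small"))                  # -> True
print(should_cache("x" * (2 * 1024 * 1024)))  # -> False (2 MB > 1 MB cache)
```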

To enable client-side caching, pass a ClientSideCache configuration when creating a client.

from glide import (
    GlideClient,
    GlideClientConfiguration,
    GlideClusterClient,
    GlideClusterClientConfiguration,
    ClientSideCache,
    EvictionPolicy,
    NodeAddress,
)

# Create a cache configuration
cache = ClientSideCache.create(
    max_cache_kb=1024,                   # 1 MB maximum cache size
    entry_ttl_ms=60_000,                 # 60 second TTL per entry (0 = no expiration)
    eviction_policy=EvictionPolicy.LRU,  # LRU or LFU
    enable_metrics=True,                 # Enable hit/miss tracking
)

# Standalone client
config = GlideClientConfiguration(
    addresses=[NodeAddress("localhost", 6379)],
    client_side_cache=cache,
)
client = await GlideClient.create(config)

# Cluster client
cluster_config = GlideClusterClientConfiguration(
    addresses=[NodeAddress("localhost", 6379)],
    client_side_cache=cache,
)
cluster_client = await GlideClusterClient.create(cluster_config)
Option          Type        Default  Description
maxCacheKb      integer     -        Maximum cache size in kilobytes. Required.
entryTtlMs      integer     -        Time-to-live per entry in milliseconds. Use 0 to disable TTL (entries persist until evicted).
evictionPolicy  LRU or LFU  LRU      Policy for removing entries when the cache is full.
enableMetrics   boolean     false    When true, enables collection of hit/miss/eviction/expiration counters.

When the cache reaches its configured memory limit, it must remove entries to make room for new ones.

Policy  Name                   Behavior
LRU     Least Recently Used    Evicts the entry that has not been accessed for the longest time. Best for workloads with temporal locality.
LFU     Least Frequently Used  Evicts the entry with the lowest access count. Ties are broken by oldest access time. Best for workloads where popular items should stay cached.
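
The difference between the two policies can be seen in a toy comparison. This is illustrative only, not how GLIDE implements eviction internally. Here "a" is accessed most often but not recently, while "b" and "c" are newer but each used only once:

```python
from collections import OrderedDict

accesses = ["a", "a", "b", "c"]

# LRU state: an OrderedDict where move_to_end marks recency.
lru = OrderedDict()
for step, key in enumerate(accesses):
    lru[key] = step
    lru.move_to_end(key)

# LFU state: access counts, ties broken by the oldest last-access time.
counts, last_seen = {}, {}
for step, key in enumerate(accesses):
    counts[key] = counts.get(key, 0) + 1
    last_seen[key] = step

lru_victim = next(iter(lru))  # least recently used entry
lfu_victim = min(counts, key=lambda k: (counts[k], last_seen[k]))

print(lru_victim)  # -> a  (popular, but not touched recently)
print(lfu_victim)  # -> b  (used once, older than "c")
```

Note that LRU evicts the frequently used "a" simply because it was not touched recently, while LFU keeps it and evicts the least-used entry instead.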

When enableMetrics is set to true, you can query cache performance statistics at runtime.

# Get metrics
hit_rate = await client.get_cache_hit_rate() # float 0.0–1.0
miss_rate = await client.get_cache_miss_rate() # float 0.0–1.0
entry_count = await client.get_cache_entry_count() # int
evictions = await client.get_cache_evictions() # int
expirations = await client.get_cache_expirations() # int
total = await client.get_cache_total_lookups() # int
print(f"Hit rate: {hit_rate:.2%}")
print(f"Entries: {entry_count}, Evictions: {evictions}, Expirations: {expirations}")

Multiple clients can share the same cache instance by passing the same ClientSideCache object to each client. This is useful when you want several connections to benefit from a single pool of cached data.

# Both clients share the same cache
cache = ClientSideCache.create(max_cache_kb=1024, entry_ttl_ms=60_000)

client1 = await GlideClient.create(
    GlideClientConfiguration(
        addresses=[NodeAddress("localhost", 6379)],
        client_side_cache=cache,
    )
)
client2 = await GlideClient.create(
    GlideClientConfiguration(
        addresses=[NodeAddress("localhost", 6379)],
        client_side_cache=cache,
    )
)

# client1 populates the cache
await client1.set("key", "value")
await client1.get("key")  # Cache miss — fetches from server

# client2 gets a cache hit without contacting the server
result = await client2.get("key")  # Cache hit
Limitation                 Details
TTL-only expiration        No server-side invalidation. Cached values may become stale if the key is modified on the server before the TTL expires.
Lazy expiration            Expired entries are cleaned up on access, not proactively in the background.
Limited command coverage   Only GET, HGETALL, and SMEMBERS are cached. Other read commands are not cached.
NIL not cached             If a key does not exist, the nil response is not stored.
No invalidation on writes  Writing to a key (e.g., SET) does not automatically invalidate the local cache entry for that key.
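
The staleness window that these limitations create can be simulated without a server. In this self-contained sketch a dict stands in for the server; the `cached_get` helper and the shortened TTL are assumptions for illustration:

```python
import time

server = {"key": "v1"}  # stand-in for the Valkey server
local_cache = {}        # key -> (value, stored_at)
TTL = 0.05              # 50 ms, shortened so the example runs quickly

def cached_get(key):
    entry = local_cache.get(key)
    if entry and time.monotonic() - entry[1] < TTL:
        return entry[0]  # served locally — possibly stale
    value = server[key]
    local_cache[key] = (value, time.monotonic())
    return value

first = cached_get("key")  # miss: fetches "v1" and caches it
server["key"] = "v2"       # another client writes to the server
stale = cached_get("key")  # still "v1": no invalidation on writes
time.sleep(TTL)
fresh = cached_get("key")  # TTL elapsed: refetches "v2"
print(first, stale, fresh)  # -> v1 v1 v2
```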
  • Set an appropriate TTL — Choose a TTL that balances freshness with cache effectiveness. Shorter TTLs reduce staleness risk; longer TTLs improve hit rates.
  • Size the cache appropriately — Monitor eviction counts. High eviction rates indicate the cache is too small for the working set.
  • Use metrics to tune — Enable metrics during development and load testing to understand cache behavior and optimize configuration.
  • Consider data volatility — Client-side caching works best for data that changes infrequently relative to how often it is read. Rapidly changing data will produce stale reads.
  • Avoid sharing caches across databases — Keys in different databases may have the same name but different values.