
Distributed Caching with Redis and Spring Cache

How to implement distributed caching in Spring Boot with Redis — @Cacheable, cache configuration, TTL strategies, cache-aside pattern, eviction, and the failure modes that catch teams out in production.

Caching is one of those areas where the initial implementation is deceptively simple and the production behaviour is where the complexity lives. Adding @Cacheable to a method takes five minutes. Understanding what happens when the cache is cold, when Redis goes down, when data changes out of band, and when two instances race to populate the same key — that’s what separates a caching layer that helps from one that introduces subtle correctness bugs.

I’ve used Redis caching on several production systems, including the Mosaic Smart Data ingestion platform where reference data lookups from financial institutions needed sub-millisecond response times under high concurrency. This is what a properly designed Spring Boot Redis cache layer looks like.

Dependencies and Configuration

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
spring:
  redis:
    host: localhost
    port: 6379
    timeout: 2000ms
    lettuce:
      pool:
        max-active: 20
        max-idle: 10
        min-idle: 5
        max-wait: 1000ms
  cache:
    type: redis

Spring Boot auto-configures a RedisCacheManager when spring-boot-starter-data-redis is on the classpath and spring.cache.type=redis is set. Enable caching with @EnableCaching on a configuration class. (On Spring Boot 3.x the connection properties live under spring.data.redis rather than spring.redis.)

Custom Cache Configuration

The default TTL is infinite, which is almost never what you want. Configure TTLs per cache name:

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {

        RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(10))
            .serializeKeysWith(
                RedisSerializationContext.SerializationPair.fromSerializer(
                    new StringRedisSerializer()))
            .serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(
                    new GenericJackson2JsonRedisSerializer()))
            .disableCachingNullValues();

        Map<String, RedisCacheConfiguration> cacheConfigs = new HashMap<>();

        // Reference data changes rarely — long TTL
        cacheConfigs.put("claimant-profiles",
            defaults.entryTtl(Duration.ofHours(1)));

        // Eligibility decisions change on policy updates — medium TTL
        cacheConfigs.put("eligibility-rules",
            defaults.entryTtl(Duration.ofMinutes(15)));

        // Market data is short-lived — short TTL
        cacheConfigs.put("market-snapshots",
            defaults.entryTtl(Duration.ofSeconds(30)));

        return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(defaults)
            .withInitialCacheConfigurations(cacheConfigs)
            .build();
    }
}

Using GenericJackson2JsonRedisSerializer stores values as JSON, which makes cache contents human-readable in Redis and survives application restarts without deserialization failures. The downside is slightly higher serialization overhead compared to Java serialization — worth it for the debuggability.

@Cacheable, @CachePut, @CacheEvict

@Slf4j
@Service
@RequiredArgsConstructor
public class ClaimantProfileService {

    private final ClaimantProfileRepository repository;

    @Cacheable(value = "claimant-profiles", key = "#claimantId")
    public ClaimantProfile getProfile(String claimantId) {
        return repository.findById(claimantId)
            .orElseThrow(() -> new ClaimantNotFoundException(claimantId));
    }

    @CachePut(value = "claimant-profiles", key = "#profile.claimantId")
    public ClaimantProfile updateProfile(ClaimantProfile profile) {
        return repository.save(profile);
    }

    @CacheEvict(value = "claimant-profiles", key = "#claimantId")
    public void deleteProfile(String claimantId) {
        repository.deleteById(claimantId);
    }

    @CacheEvict(value = "claimant-profiles", allEntries = true)
    @Scheduled(cron = "0 0 2 * * *") // nightly full eviction
    public void evictAllProfiles() {
        log.info("Nightly cache eviction: claimant-profiles");
    }
}

@Cacheable — check the cache first; if present, return cached value; if absent, execute the method and cache the result.

@CachePut — always execute the method and update the cache with the result. Use this on write operations to keep the cache current without evicting.

@CacheEvict — remove a specific key (or all entries with allEntries = true). Use on deletes.

The key distinction between @Cacheable and @CachePut: @Cacheable short-circuits the method call on a cache hit; @CachePut always calls the method. Never use @CachePut where you expect cache hits to save work.
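That control-flow difference can be modelled in plain Java — a sketch with a ConcurrentHashMap standing in for Redis, not Spring's actual implementation (class and method names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Models the annotation semantics with a map standing in for Redis.
public class CacheSemantics {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger loads = new AtomicInteger();

    // @Cacheable: return the cached value on a hit, otherwise load and cache.
    String cacheable(String key, Function<String, String> loader) {
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return loader.apply(k);
        });
    }

    // @CachePut: always invoke the method, then overwrite the cache entry.
    String cachePut(String key, Function<String, String> loader) {
        loads.incrementAndGet();
        String value = loader.apply(key);
        cache.put(key, value);
        return value;
    }

    public static void main(String[] args) {
        CacheSemantics c = new CacheSemantics();
        c.cacheable("42", k -> "profile-" + k);
        c.cacheable("42", k -> "profile-" + k);   // hit: loader not called again
        c.cachePut("42", k -> "profile-" + k);    // always executes the loader
        System.out.println("loads: " + c.loads.get());
    }
}
```

Two cacheable calls cost one load; the cachePut call costs a load every time, which is exactly why it never belongs on read paths.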

The Cache-Aside Pattern for Complex Lookups

Spring’s annotations work well for simple key-value lookups. For more complex cases — conditional caching, custom key generation, cache-aside with fallback logic — interact with the CacheManager directly:

@Service
@RequiredArgsConstructor
public class EligibilityService {

    private final CacheManager cacheManager;
    private final EligibilityRulesRepository rulesRepository;

    public EligibilityRules getRulesForPolicy(String policyId, LocalDate effectiveDate) {
        String cacheKey = policyId + ":" + effectiveDate;
        Cache cache = cacheManager.getCache("eligibility-rules");

        if (cache != null) {
            Cache.ValueWrapper cached = cache.get(cacheKey);
            if (cached != null) {
                return (EligibilityRules) cached.get();
            }
        }

        EligibilityRules rules = rulesRepository.findByPolicyAndDate(policyId, effectiveDate)
            .orElseThrow(() -> new RulesNotFoundException(policyId, effectiveDate));

        if (cache != null) {
            cache.put(cacheKey, rules);
        }

        return rules;
    }
}

This pattern also makes it easier to handle cache misses explicitly — you can log cache miss rates, trigger pre-warming, or apply different logic depending on whether data came from cache or database.
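The miss accounting can be sketched in plain Java — AtomicLong counters standing in for Micrometer, and the class name is illustrative rather than anything from Spring:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Cache-aside lookup that tracks hits and misses explicitly,
// so the hit rate can be logged or exported as a metric.
public class MeteredCacheAside<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    final AtomicLong hits = new AtomicLong();
    final AtomicLong misses = new AtomicLong();

    public V get(K key, Function<K, V> dbLookup) {
        V cached = cache.get(key);
        if (cached != null) {
            hits.incrementAndGet();
            return cached;              // served from cache
        }
        misses.incrementAndGet();
        V loaded = dbLookup.apply(key); // fall through to the database
        cache.put(key, loaded);
        return loaded;
    }

    public double hitRate() {
        long total = hits.get() + misses.get();
        return total == 0 ? 0.0 : (double) hits.get() / total;
    }
}
```

In a real service the counters would be Micrometer Counter instances tagged with the cache name, but the shape of the accounting is the same.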

Handling Redis Failures Gracefully

The most important production concern with Redis caching is: what happens when Redis is unavailable? By default, Spring’s Redis cache throws an exception when it can’t reach Redis, which means a Redis outage takes down your application.

The fix is a custom CacheErrorHandler that logs the error and falls through to the underlying method — when the handler swallows the exception, the cache interceptor treats the failed GET as a miss and invokes the method anyway. (On Spring Framework 6 / Boot 3, CachingConfigurerSupport is deprecated; implement CachingConfigurer directly instead.)

@Slf4j
@Configuration
@EnableCaching
public class CacheConfig extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        return new CacheErrorHandler() {
            @Override
            public void handleCacheGetError(RuntimeException e, Cache cache, Object key) {
                log.warn("Cache GET error on cache '{}' for key '{}': {}",
                    cache.getName(), key, e.getMessage());
            }

            @Override
            public void handleCachePutError(RuntimeException e, Cache cache,
                                             Object key, Object value) {
                log.warn("Cache PUT error on cache '{}' for key '{}': {}",
                    cache.getName(), key, e.getMessage());
            }

            @Override
            public void handleCacheEvictError(RuntimeException e, Cache cache, Object key) {
                log.warn("Cache EVICT error on cache '{}' for key '{}': {}",
                    cache.getName(), key, e.getMessage());
            }

            @Override
            public void handleCacheClearError(RuntimeException e, Cache cache) {
                log.warn("Cache CLEAR error on cache '{}': {}", cache.getName(), e.getMessage());
            }
        };
    }
}

With this in place, a Redis outage degrades gracefully — every request hits the database instead of the cache, but the application continues to function. Alert on the elevated error rate; don’t let the application silently run without caching indefinitely.
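The degradation semantics — a cache error becomes a miss, never a request failure — can be modelled in plain Java (illustrative names, not Spring API):

```java
import java.util.function.Supplier;

// Models what the logging-only CacheErrorHandler buys you: a cache
// failure is downgraded to a miss and the real lookup still runs.
public class FaultTolerantLookup {

    public static <T> T getOrLoad(Supplier<T> cacheGet, Supplier<T> dbLookup) {
        try {
            T cached = cacheGet.get();
            if (cached != null) {
                return cached;
            }
        } catch (RuntimeException e) {
            // Equivalent of handleCacheGetError: log and fall through.
            System.err.println("Cache GET failed, falling back to DB: " + e.getMessage());
        }
        return dbLookup.get();
    }
}
```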

Cache Stampede Prevention

A cache stampede (also called a thundering herd) happens when a popular cache entry expires and many concurrent requests all find a cache miss simultaneously, each triggering a database query. For expensive lookups under high concurrency, this can overwhelm the database.

The cleanest prevention in Spring Boot is to use @Cacheable with a sync = true attribute, which serialises cache population — only one thread executes the method while others wait:

@Cacheable(value = "eligibility-rules", key = "#policyId", sync = true)
public EligibilityRules getRules(String policyId) {
    return rulesRepository.findByPolicy(policyId)
        .orElseThrow(() -> new RulesNotFoundException(policyId));
}

sync = true is only supported by some cache implementations — Spring’s Redis cache supports it, but the lock is local to each JVM: with N application instances you can still see up to N concurrent loads for one cold key, one per instance, rather than one per request. Use it selectively on caches where stampedes are a realistic concern (high traffic, expensive backend, predictable expiry times).
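Within a single JVM, sync = true amounts to single-flight population, which can be modelled with ConcurrentHashMap.computeIfAbsent — a sketch of the behaviour, not Spring’s actual lock implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Single-flight cache population: many concurrent requests for the
// same cold key trigger exactly one backend load.
public class SingleFlight {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger dbCalls = new AtomicInteger();

    String get(String key) {
        // computeIfAbsent runs the loader at most once per key,
        // blocking concurrent callers until the value is present.
        return cache.computeIfAbsent(key, k -> {
            dbCalls.incrementAndGet();
            return "rules-for-" + k;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        SingleFlight sf = new SingleFlight();
        int threads = 16;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch start = new CountDownLatch(1);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                try {
                    start.await();
                    sf.get("policy-123");   // all race on the same cold key
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        start.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("db calls: " + sf.dbCalls.get()); // prints "db calls: 1"
    }
}
```

For cross-instance stampede protection you need a shared lock (for example a Redis SET NX lease), which is a different trade-off entirely.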

Cache Key Design

Poor key design is a common source of subtle bugs. Spring’s default SimpleKeyGenerator bundles all method parameters into a composite key, and the Redis cache then has to render that key as a String — for non-String parameters this ends up leaning on toString(), which works until you have a parameter whose toString() isn’t unique or stable.

Always use explicit SpEL key expressions:

// Good — explicit, stable
@Cacheable(value = "profiles", key = "#claimantId")

// Good — composite key
@Cacheable(value = "rules", key = "#policyId + ':' + #effectiveDate")

// Risky — relies on toString() of a complex object
@Cacheable(value = "profiles")  // key = all parameters combined

For composite keys with complex objects, implement a dedicated key class:

@Cacheable(value = "market-data", key = "T(com.trinitylogic.cache.CacheKeys).marketKey(#marketId, #runnerIds)")
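The original helper isn’t shown, so this is an assumed shape for CacheKeys — the point being that sorting the collection makes the key independent of iteration order:

```java
import java.util.Collection;
import java.util.stream.Collectors;

// Hypothetical key helper referenced from the SpEL expression above.
// Sorting runnerIds makes the key deterministic regardless of the
// order in which the IDs arrive.
public final class CacheKeys {

    private CacheKeys() {
    }

    public static String marketKey(String marketId, Collection<String> runnerIds) {
        String runners = runnerIds.stream()
            .sorted()
            .collect(Collectors.joining(","));
        return marketId + ":" + runners;
    }
}
```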

Monitoring Cache Performance

Without metrics, you’re flying blind on whether your cache is actually helping. Micrometer (included with Spring Boot Actuator) can bind Redis cache metrics, though RedisCache only reports real numbers once statistics are enabled on the cache manager:

management:
  endpoints:
    web:
      exposure:
        include: metrics, health

Key metrics to track: cache.gets, tagged result=hit and result=miss (the hit rate falls out of these), plus cache.puts and cache.evictions.

A hit rate below 70% on a frequently-accessed cache suggests either the TTL is too short, the key space is too broad, or the data changes faster than you assumed. Measure before tuning.
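To have RedisCache report real numbers to Micrometer, switch statistics on when building the cache manager — a fragment of the earlier cacheManager bean (enableStatistics is available from Spring Boot 2.4 / Spring Data Redis 2.4):

```java
return RedisCacheManager.builder(connectionFactory)
    .cacheDefaults(defaults)
    .withInitialCacheConfigurations(cacheConfigs)
    .enableStatistics()   // exposes hit/miss/put counts to the cache meter binder
    .build();
```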

If you’re designing Spring Boot services where latency and throughput matter and want an engineer who’s built caching layers that hold up under production load, get in touch.