
Idempotency Patterns in Event-Driven Java Systems

How to implement idempotency in event-driven Java systems — idempotency keys, at-least-once delivery, the outbox pattern, Redis deduplication, and Kafka consumer patterns.

In a benefit payments system you cannot afford to pay a claimant twice. At DWP Digital, every payment instruction event processed by the payments service must be idempotent — if the same event arrives five times due to Kafka redelivery, exactly one payment gets issued. Getting this wrong causes real harm to real people. Getting it right is engineering discipline, not luck.

Idempotency in event-driven systems is not a single technique. It is a layered set of patterns applied at the message producer, the broker, and the consumer. This post covers the full stack: how to generate idempotency keys that survive retries, how to deduplicate at the consumer, the outbox pattern as a producer-side guarantee, and how to test that your idempotency holds under the failure modes that actually occur in production.

Idempotency Keys

An idempotency key is a stable, unique identifier for a logical operation. It must be generated before the operation is attempted and remain the same across all retries. The most common mistake is generating a new UUID on each attempt — that defeats the purpose entirely.

For a payment instruction derived from a claim decision:

public record PaymentInstruction(
    String idempotencyKey,  // stable across retries
    String claimId,
    BigDecimal amount,
    String benefitType,
    Instant instructedAt
) {
    public static PaymentInstruction from(ClaimDecision decision) {
        // Deterministic key: derived from the business identity, not a random UUID
        String key = "payment:" + decision.claimId() + ":" + decision.decisionId();
        return new PaymentInstruction(
            key,
            decision.claimId(),
            decision.paymentAmount(),
            decision.benefitType(),
            Instant.now()
        );
    }
}

The key is derived from the business entities it represents, not generated fresh. The same claim decision will always produce the same key, regardless of how many times the code runs.

At-Least-Once vs Exactly-Once Delivery

Kafka guarantees at-least-once delivery by default with standard consumer semantics. Your consumer calls poll(), processes records, and commits offsets, but if the process crashes after processing and before committing, those records are redelivered. This is not a bug; it is by design. The contract is: you will receive every message, possibly more than once. Your job is to make processing each message safe to repeat.
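
To make that ordering concrete, here is a minimal sketch of the loop using the plain consumer API (the connection properties and the handleRecord method are placeholders, not part of the payments service). A crash anywhere between handleRecord and commitSync means those records come back on the next poll after a restart:

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "payments-service");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");  // commit manually, after processing
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("payment.instructions"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            handleRecord(record);  // your processing; must be safe to repeat
        }
        consumer.commitSync();  // crash before this line and the records above are redelivered
    }
}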

Exactly-once semantics (EOS) with Kafka’s transactional API does exist, but it covers exactly one case: reading from a Kafka topic, transforming, and writing back to a Kafka topic, within the Kafka transaction boundary. As soon as you touch a database or call an external API, EOS does not protect you. At DWP Digital, the correct approach is at-least-once delivery with idempotent consumers — it is simpler, more portable, and easier to reason about under failure.

Deduplication with a Processed Events Table

The most straightforward consumer-side deduplication pattern is a processed_events table (or collection) keyed on the idempotency key:

@Entity
@Table(name = "processed_events",
    uniqueConstraints = @UniqueConstraint(columnNames = "idempotency_key"))
public class ProcessedEvent {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private String id;

    @Column(name = "idempotency_key", nullable = false, unique = true)
    private String idempotencyKey;

    @Column(nullable = false)
    private Instant processedAt;

    protected ProcessedEvent() {
        // JPA requires a no-args constructor
    }

    public ProcessedEvent(String idempotencyKey) {
        this.idempotencyKey = idempotencyKey;
        this.processedAt = Instant.now();
    }
}

@Service
@RequiredArgsConstructor
@Slf4j
public class PaymentInstructionConsumer {

    private final PaymentService paymentService;
    private final ProcessedEventRepository processedEventRepository;

    @KafkaListener(topics = "payment.instructions", groupId = "payments-service")
    @Transactional
    public void consume(PaymentInstruction instruction) {
        String key = instruction.idempotencyKey();

        if (processedEventRepository.existsByIdempotencyKey(key)) {
            log.debug("Duplicate event — skipping idempotency key {}", key);
            return;
        }

        paymentService.issuePayment(instruction);
        processedEventRepository.save(new ProcessedEvent(key));
    }
}

The @Transactional annotation wraps both the issuePayment call and the processedEventRepository.save — if either fails, neither persists. The unique constraint on the database column is your safety net: even if two threads process the same event simultaneously, the database rejects the second insert with a constraint violation, which you catch and treat as a duplicate.
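
Whether you catch that violation inline or let it fail the delivery, the losing transaction rolls back wholesale, payment included, so correctness holds; the remaining job is making sure the listener does not treat the record as a poison message. A minimal sketch, assuming Spring Kafka's DefaultErrorHandler: retry a couple of times, and on redelivery the exists check (the winner has committed by then) skips the record cleanly.

@Bean
public DefaultErrorHandler kafkaErrorHandler() {
    // Redeliver a failed record up to 3 times, one second apart. A record that lost
    // the unique-constraint race succeeds on redelivery because the winning transaction
    // has committed and the exists check now short-circuits.
    return new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
}

Register it on the listener container factory with factory.setCommonErrorHandler(kafkaErrorHandler()).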

Deduplication with Redis

For high-throughput scenarios where a relational database table becomes a bottleneck, Redis SET NX EX (set if not exists, with expiry) is an efficient alternative:

@Service
@RequiredArgsConstructor
public class RedisIdempotencyGuard {

    private final StringRedisTemplate redisTemplate;
    private static final Duration TTL = Duration.ofDays(7);

    /**
     * Returns true if this key is being processed for the first time.
     * Returns false if it has been seen before (duplicate).
     */
    public boolean tryAcquire(String idempotencyKey) {
        Boolean acquired = redisTemplate.opsForValue()
            .setIfAbsent("idem:" + idempotencyKey, "1", TTL);
        return Boolean.TRUE.equals(acquired);
    }
}

@KafkaListener(topics = "payment.instructions", groupId = "payments-service")
public void consume(PaymentInstruction instruction) {
    if (!idempotencyGuard.tryAcquire(instruction.idempotencyKey())) {
        log.debug("Duplicate detected via Redis — key {}", instruction.idempotencyKey());
        return;
    }
    paymentService.issuePayment(instruction);
}
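
One gap worth closing in this sketch: if issuePayment throws after tryAcquire has succeeded, the key stays in Redis and the redelivered event is silently skipped, so the payment never happens. A hedged variant that releases the key on failure (the release method is an assumed addition to the guard above, not something shown so far):

@KafkaListener(topics = "payment.instructions", groupId = "payments-service")
public void consume(PaymentInstruction instruction) {
    String key = instruction.idempotencyKey();
    if (!idempotencyGuard.tryAcquire(key)) {
        return;
    }
    try {
        paymentService.issuePayment(instruction);
    } catch (RuntimeException e) {
        // Give the key back so the retry is not mistaken for a duplicate
        idempotencyGuard.release(key);
        throw e;
    }
}

// Added to RedisIdempotencyGuard
public void release(String idempotencyKey) {
    redisTemplate.delete("idem:" + idempotencyKey);
}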

The TTL prevents unbounded key accumulation. Set it to something longer than your maximum realistic retry window; seven days is conservative and safe for most systems. The trade-off versus the database approach is durability: a Redis restart without persistence clears your deduplication state, so you need Redis persistence (AOF or RDB) configured if you rely on this in production.

The Outbox Pattern as Producer-Side Idempotency

Idempotency at the consumer is only half the picture. The producer also needs to guarantee that it publishes each event exactly once — not zero times (lost event) and not duplicated at the source. The Transactional Outbox Pattern solves this.

Instead of writing to the database and publishing to Kafka in the same code path (the dual-write problem), you write your event into an outbox table in the same database transaction as your state change. A polling publisher picks it up and publishes to Kafka asynchronously:

@Transactional
public void processClaimDecision(ClaimDecision decision) {
    BenefitClaim claim = claimRepository.findById(decision.claimId())
        .orElseThrow();
    claim.applyDecision(decision);
    claimRepository.save(claim);

    String payload;
    try {
        payload = objectMapper.writeValueAsString(PaymentInstruction.from(decision));
    } catch (JsonProcessingException e) {
        // Rethrow unchecked so the transaction rolls back rather than committing
        // the claim update without its outbox event
        throw new IllegalStateException("Could not serialise payment instruction", e);
    }

    // Same transaction: either both commit or neither does
    OutboxEvent event = OutboxEvent.of(
        decision.claimId(),
        "BenefitClaim",
        "payment.instructions",
        "PaymentInstructionCreated",
        payload
    );
    outboxRepository.save(event);
}
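
The polling publisher itself can be a small scheduled job. A sketch, assuming the OutboxEvent row exposes its topic, a partition key, and the serialised payload, and that the repository has a findUnpublished query (neither is shown above):

@Component
@RequiredArgsConstructor
public class OutboxPublisher {

    private final OutboxRepository outboxRepository;
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Scheduled(fixedDelay = 1000)
    @Transactional
    public void publishPending() {
        for (OutboxEvent event : outboxRepository.findUnpublished()) {
            // Block until the broker acknowledges (Spring Kafka 3 returns a CompletableFuture).
            // If the send fails, the event stays unpublished and the next run retries it.
            kafkaTemplate.send(event.getTopic(), event.getAggregateId(), event.getPayload()).join();
            event.markPublished();
        }
    }
}

The publisher can still crash between sending and committing the markPublished update, in which case the same event goes out twice. That is acceptable: it is exactly the duplicate the consumer-side idempotency key absorbs.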

The outbox record carries the idempotency key. When the polling publisher delivers it to Kafka, and when the downstream consumer processes it, the key flows end-to-end and deduplication holds at every hop.

Kafka Consumer Idempotency: Offset Commit Timing

One subtlety that catches people out: by default, Spring Kafka commits offsets after successful message processing (AckMode.BATCH). If processing succeeds but the offset commit never lands (a broker blip, a rebalance, a crash), the message is redelivered the next time that partition is read. Your idempotency implementation must handle this case, and it does, as long as you are not committing offsets before processing and relying on "we've already committed this offset" as your deduplication mechanism. That approach is fragile; idempotency keys in a durable store are robust.

Configure your consumers explicitly:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        ConsumerFactory<String, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // RECORD: commit after each record — tighter guarantees, more commits
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
    return factory;
}

Use AckMode.RECORD when processing is low-throughput and correctness is paramount (benefit payments). Use AckMode.BATCH when throughput matters more and downstream idempotency is solid.

Testing Idempotent Workflows

Testing idempotency means deliberately delivering duplicates and asserting the side effect happens exactly once:

@Test
void duplicatePaymentInstructionIsProcessedOnlyOnce() {
    PaymentInstruction instruction = PaymentInstruction.from(sampleDecision());

    consumer.consume(instruction);
    consumer.consume(instruction);  // deliberate duplicate
    consumer.consume(instruction);  // and again

    verify(paymentService, times(1)).issuePayment(any());
    assertThat(processedEventRepository.count()).isEqualTo(1);
}

Also test the race condition: two threads processing the same event simultaneously. The unique constraint on the database is your protection here, but if you’re using Redis, test that setIfAbsent handles concurrent callers correctly under your transaction configuration.
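
A sketch of that concurrency test, assuming the consumer and repository are wired into the test context and a hypothetical paymentRepository exists for counting persisted payments:

@Test
void concurrentDuplicatesResultInExactlyOnePayment() throws Exception {
    PaymentInstruction instruction = PaymentInstruction.from(sampleDecision());

    ExecutorService pool = Executors.newFixedThreadPool(2);
    CountDownLatch start = new CountDownLatch(1);
    Runnable attempt = () -> {
        try {
            start.await();
            consumer.consume(instruction);
        } catch (Exception e) {
            // The losing thread may surface a constraint violation; its transaction rolls back
        }
    };
    Future<?> first = pool.submit(attempt);
    Future<?> second = pool.submit(attempt);
    start.countDown();
    first.get();
    second.get();
    pool.shutdown();

    // Assert on persisted state rather than a mock: the loser's rollback undoes its
    // database writes, but it would not undo a recorded Mockito invocation
    assertThat(processedEventRepository.count()).isEqualTo(1);
    assertThat(paymentRepository.countByClaimId(instruction.claimId())).isEqualTo(1);
}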

Pro Tip: Monitor Your Deduplication Hit Rate

Deduplication events should be rare in steady state. If you’re seeing a high deduplication hit rate, it means either your retry configuration is too aggressive or an upstream system is producing duplicate messages more frequently than expected. Expose a counter:

private final MeterRegistry meterRegistry;

// In the consumer
if (processedEventRepository.existsByIdempotencyKey(key)) {
    meterRegistry.counter("events.deduplicated",
        "topic", "payment.instructions").increment();
    return;
}

Wire this into your Grafana dashboard. A baseline deduplication rate of 0.1% is normal. A rate of 5% means something upstream is misbehaving and you should investigate before it escalates into a correctness issue.

If you’re building event-driven systems where correctness and auditability are non-negotiable, get in touch.