
Domain Events — Decoupling Aggregates Without a Message Broker

How to model and publish domain events within a Spring Boot application using the application event bus — decoupling aggregates without Kafka, with a clear path to async messaging later.

Domain events are facts about something that happened in your domain — OrderPlaced, MarketSuspended, RunnerRemoved. They express what the domain did, not what the infrastructure should do in response. Publishing them decouples the aggregate that raised the event from the components that react to it — the aggregate does not know about email sending, audit logging, or analytics.

The first implementation does not need Kafka. Spring’s ApplicationEventPublisher provides an in-process event bus that is synchronous, transactional, and zero-configuration. It is the right starting point, and the path to Kafka later is well-defined.

The domain event

A domain event is an immutable record of something that occurred:

public record OrderPlacedEvent(
    String orderId,
    String marketId,
    String selectionId,
    double price,
    double stake,
    String side,
    Instant occurredAt
) {
    public OrderPlacedEvent(Order order) {
        this(order.id(), order.marketId(), order.selectionId(),
             order.price(), order.stake(), order.side(), Instant.now());
    }
}

Events name things in the past tense. They carry only the data listeners need — not the full aggregate state.
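Stripped of Spring, the mechanics are plain Java. A trimmed-down standalone version of the record (fewer fields than the article's, for brevity) shows the two properties that make records a good fit for events, immutability and value equality:

```java
import java.time.Instant;

public class EventRecordDemo {

    // Trimmed-down standalone version of the event record above
    record OrderPlacedEvent(String orderId, double price, Instant occurredAt) {}

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T00:00:00Z");
        OrderPlacedEvent a = new OrderPlacedEvent("o-1", 2.5, now);
        OrderPlacedEvent b = new OrderPlacedEvent("o-1", 2.5, now);

        // Records have no setters and compare by component value,
        // which makes events safe to share and easy to assert on in tests
        System.out.println(a.equals(b));   // prints "true"
        System.out.println(a.orderId());   // prints "o-1"
    }
}
```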

Publishing from the application service

The aggregate’s job is to enforce invariants and produce the event. The application service publishes it:

@Service
@RequiredArgsConstructor
public class OrderApplicationService {

    private final OrderRepository orderRepository;
    private final ApplicationEventPublisher eventPublisher;

    @Transactional
    public String placeOrder(PlaceOrderCommand command) {
        Order order = Order.place(command);
        orderRepository.save(order);

        eventPublisher.publishEvent(new OrderPlacedEvent(order));

        return order.id();
    }
}

The @Transactional boundary is significant: the event is published within the transaction. If save throws, the method exits before publishEvent is ever reached, so no listener runs. There is a subtlety, though: plain @EventListener handlers execute synchronously at the moment of publication, before the transaction commits. If the commit itself fails later, those listeners have already run against state that was never persisted. When a listener must only see committed state, use @TransactionalEventListener, covered below.
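As an aside, Spring Data offers an alternative wiring: the aggregate collects its own events, and the repository publishes them automatically on save(). A sketch, assuming the Order aggregate records events as it changes state:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.springframework.data.domain.AfterDomainEventPublication;
import org.springframework.data.domain.DomainEvents;

public class Order {

    private final List<Object> pendingEvents = new ArrayList<>();

    // Called by Spring Data when the repository saves this aggregate;
    // every returned event is handed to the application event bus
    @DomainEvents
    Collection<Object> domainEvents() {
        return List.copyOf(pendingEvents);
    }

    // Called after publication, so events are not re-published on the next save
    @AfterDomainEventPublication
    void clearDomainEvents() {
        pendingEvents.clear();
    }
}
```

With this wiring the application service no longer needs an ApplicationEventPublisher at all; orderRepository.save(order) publishes whatever the aggregate recorded.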

Listening to events

@Component
@RequiredArgsConstructor
public class OrderAuditListener {

    private final AuditRepository auditRepository;

    @EventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void on(OrderPlacedEvent event) {
        auditRepository.record(AuditEntry.from(event));
    }
}

@Component
@Slf4j
public class OrderNotificationListener {

    @EventListener
    public void on(OrderPlacedEvent event) {
        log.info("Order placed: {} at {} for {}",
            event.orderId(), event.price(), event.marketId());
    }
}

@EventListener wires the method as a handler for that event type; Spring dispatches by type, and multiple listeners can handle the same event. By default they run synchronously, in the publishing thread and inside the publishing transaction, unless the listener opens its own with REQUIRES_NEW as the audit listener above does.
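Dispatch can also be filtered declaratively: @EventListener accepts a SpEL condition evaluated against the published event. A sketch, with an illustrative stake threshold not taken from the article:

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class LargeOrderListener {

    // The condition is a SpEL expression; #event refers to the published event.
    // The 1000.0 threshold is illustrative.
    @EventListener(condition = "#event.stake > 1000.0")
    public void on(OrderPlacedEvent event) {
        log.warn("Large order: {} staked {}", event.orderId(), event.stake());
    }
}
```

The listener is simply never invoked for events that fail the condition, which keeps the filtering out of the handler body.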

Transactional event listeners

For listeners that should only run if the publishing transaction commits (not roll back), use @TransactionalEventListener:

@Component
@RequiredArgsConstructor
public class OrderEmailListener {

    private final EmailService emailService;

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void on(OrderPlacedEvent event) {
        emailService.sendOrderConfirmation(event.orderId());
    }
}

AFTER_COMMIT means: only run this if the transaction that published the event committed successfully. No spurious emails on rollback. This is the right choice for any listener with side effects that cannot be rolled back (emails, push notifications, external API calls).
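AFTER_COMMIT is the default phase, but BEFORE_COMMIT, AFTER_ROLLBACK, and AFTER_COMPLETION are also available. A sketch of a rollback-only listener, useful for recording that an attempt failed:

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
@Slf4j
public class OrderRollbackListener {

    // Runs only if the event was published but the surrounding
    // transaction then rolled back, i.e. the order was never persisted
    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void on(OrderPlacedEvent event) {
        log.warn("Order {} rolled back; no side effects triggered", event.orderId());
    }
}
```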

Async listeners

For listeners that should not block the publishing transaction:

@Component
public class OrderAnalyticsListener {

    @Async("eventListenerExecutor")
    @EventListener
    public void on(OrderPlacedEvent event) {
        analyticsService.track(event);   // non-blocking
    }
}

Enable @Async processing in the configuration:

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean
    public Executor eventListenerExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("event-");
        executor.initialize();
        return executor;
    }
}

With @Async, the listener runs on the executor thread after the publishing method returns — the main flow is not blocked.
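The annotations compose: a listener can be both asynchronous and commit-gated, which is usually what you want for external side effects. A sketch, with a hypothetical pushService standing in for any external notifier:

```java
import lombok.RequiredArgsConstructor;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
@RequiredArgsConstructor
public class OrderPushListener {

    private final PushService pushService; // hypothetical external notifier

    // Handed to the async executor only after the publishing transaction
    // has committed; the request thread never waits on the push call
    @Async
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void on(OrderPlacedEvent event) {
        pushService.notifyOrderPlaced(event.orderId());
    }
}
```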

The path to Kafka

The in-process event bus is correct until you need cross-service communication. The migration path is clear:

  1. Keep the domain event class unchanged
  2. Replace (or supplement) the in-process ApplicationEventPublisher with a Kafka publisher in OrderApplicationService
  3. Or: add a @TransactionalEventListener(AFTER_COMMIT) that publishes to Kafka, keeping the internal and external event buses separate
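Option 3 amounts to one small class: a relay that listens in-process and republishes externally, so domain code never sees Kafka. A sketch using Spring Kafka's KafkaTemplate; the topic name is illustrative:

```java
import lombok.RequiredArgsConstructor;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
@RequiredArgsConstructor
public class OrderEventRelay {

    private final KafkaTemplate<String, OrderPlacedEvent> kafkaTemplate;

    // Fires only after the local transaction commits; keyed by orderId
    // so all events for one order land on the same partition
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void on(OrderPlacedEvent event) {
        kafkaTemplate.send("orders.placed", event.orderId(), event);
    }
}
```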

The outbox pattern (persisting events to a database table in the same transaction and publishing them from a separate process) is the correct approach for reliable cross-service publishing. It closes the gap AFTER_COMMIT leaves open: if the process crashes between the commit and the external publish, the event is lost; an outbox row survives the crash and is published on the next poll.
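A minimal outbox sketch, assuming a hypothetical OutboxRepository and OutboxEntry backed by a table in the same database; a separate poller (not shown) reads the table and publishes to the broker:

```java
import lombok.RequiredArgsConstructor;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
@RequiredArgsConstructor
public class OrderOutboxWriter {

    private final OutboxRepository outboxRepository; // hypothetical, table-backed

    // Plain @EventListener: runs inside the publishing transaction, so the
    // outbox row and the order row commit or roll back together; no dual write
    @EventListener
    public void on(OrderPlacedEvent event) {
        outboxRepository.save(OutboxEntry.of("OrderPlaced", event));
    }
}
```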

Event naming conventions

Name events in the past tense after the fact they record (OrderPlacedEvent, MarketSuspendedEvent), and keep them in the domain layer, in a domain.event package, so listeners depend on the domain rather than the other way round.
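An illustrative layout (the root package name is a placeholder; the classes are the ones from this article):

```
com.example.betting
 ├─ domain
 │   ├─ Order.java
 │   └─ event
 │       └─ OrderPlacedEvent.java
 └─ application
     ├─ OrderApplicationService.java
     └─ OrderAuditListener.java
```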

With domain events in place, the aggregate remains isolated from side effects, the application service is a thin orchestrator, and listeners are independently testable. The architecture is ready for Kafka when the time comes.

If you’re designing event-driven systems in Java and want a review of your event model, get in touch.

Samuel Jackson

Senior Java Back End Developer & Contractor

Senior Java Back End Developer — Betfair Exchange API specialist, Spring Boot, AWS, and event-driven architecture. 20+ years delivering high-performance systems across betting, finance, energy, retail, and government. Available for Java contracting.