
Structured Logging to CloudWatch from Spring Boot

How to configure Spring Boot to emit structured JSON logs, ship them to CloudWatch Logs, and query them efficiently with CloudWatch Logs Insights — with ECS Fargate and local development setups.

Most teams get logging wrong in the same way: they write log lines optimised for a human reading a terminal, ship them to CloudWatch, and then discover that querying free-text in CloudWatch Logs Insights is painful. The fix is structured logging — emitting JSON from the start, so every field is queryable without writing regex patterns.

This post covers the full setup: Logback JSON encoding, correlation IDs, CloudWatch Insights queries, and how to make local development work without AWS.

Why structured logging matters in CloudWatch

CloudWatch Logs Insights treats JSON log events as first-class objects. If your log line is:

2026-05-11 09:23:14.451 INFO  [market-service] Matched volume threshold reached marketId=1.234567 matchedAmount=15000.00 threshold=10000.00

You can query it, but it’s fragile — field extraction depends on your line format never changing.
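To illustrate the fragility: extracting those fields in Logs Insights requires a glob parse pattern that is coupled to the exact line layout (the field names below match the example line above), and it silently stops matching the moment anyone reorders or renames a field:

```
fields @timestamp
| parse @message "marketId=* matchedAmount=* threshold=*" as marketId, matchedAmount, threshold
| filter matchedAmount > 10000
```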

If your log line is:

{"timestamp":"2026-05-11T09:23:14.451Z","level":"INFO","service":"market-service","message":"Matched volume threshold reached","marketId":"1.234567","matchedAmount":15000.00,"threshold":10000.00,"traceId":"abc123","spanId":"def456"}

The Insights query is:

fields @timestamp, marketId, matchedAmount
| filter matchedAmount > 10000
| sort @timestamp desc

No parsing, no fragile regex, no missed fields when developers change the log format. Structured logging is the foundation that makes CloudWatch Insights actually useful.

Logback JSON encoding with logstash-logback-encoder

The logstash-logback-encoder library is the standard way to produce JSON from Logback. Add to pom.xml:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

Then configure logback-spring.xml in src/main/resources:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProperty scope="context" name="serviceName" source="spring.application.name"/>

    <appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"service":"${serviceName}"}</customFields>
            <fieldNames>
                <timestamp>timestamp</timestamp>
                <message>message</message>
                <logger>logger</logger>
                <thread>thread</thread>
            </fieldNames>
            <includeCallerData>false</includeCallerData>
            <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                <maxDepthPerCause>10</maxDepthPerCause>
                <shortenedClassNameLength>20</shortenedClassNameLength>
                <rootCauseFirst>true</rootCauseFirst>
            </throwableConverter>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="JSON_CONSOLE"/>
    </root>
</configuration>

Every log line is now a single JSON object, exceptions included: the stack trace is emitted as an escaped string field rather than raw multiline output, which would otherwise split one error across many CloudWatch log events. The ShortenedThrowableConverter keeps that field compact and lists the root cause first.

Adding correlation IDs with MDC

Correlation IDs let you trace a request across all log lines it generates. The standard approach is an HTTP filter that sets the trace ID in the MDC (Mapped Diagnostic Context) at the start of each request:

// Imports assume Spring Boot 3 (jakarta.servlet namespace)
import java.io.IOException;
import java.util.Optional;
import java.util.UUID;
import java.util.function.Predicate;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class CorrelationIdFilter extends OncePerRequestFilter {

    private static final String TRACE_ID_HEADER = "X-Trace-Id";
    private static final String MDC_TRACE_KEY   = "traceId";

    @Override
    protected void doFilterInternal(
            HttpServletRequest request,
            HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {

        String traceId = Optional.ofNullable(request.getHeader(TRACE_ID_HEADER))
                .filter(Predicate.not(String::isBlank))
                .orElse(UUID.randomUUID().toString());

        MDC.put(MDC_TRACE_KEY, traceId);
        response.setHeader(TRACE_ID_HEADER, traceId);

        try {
            chain.doFilter(request, response);
        } finally {
            MDC.clear();
        }
    }
}

The LogstashEncoder automatically includes all MDC fields in every log event. With this filter in place, every log line in a request automatically carries traceId:

{"timestamp":"...","level":"INFO","message":"Processing order","traceId":"3f9a7b2c-...","orderId":"ORD-456"}

To propagate the trace ID to downstream services, extract it from the MDC and add it to outgoing HTTP headers. If you’re using Spring Cloud Sleuth or Micrometer Tracing, this propagation is handled automatically.
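A minimal, framework-free sketch of the manual approach (the class and helper names here are illustrative; in a Spring app this logic would typically live in a ClientHttpRequestInterceptor that reads MDC.get("traceId") and writes the header on every outgoing call):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TracePropagation {

    static final String TRACE_ID_HEADER = "X-Trace-Id";

    // Copy the current trace ID (normally read from the SLF4J MDC) onto an
    // outgoing request's headers, minting a fresh one if none is active.
    static Map<String, String> withTraceHeader(Map<String, String> headers,
                                               String currentTraceId) {
        Map<String, String> out = new HashMap<>(headers);
        String traceId = (currentTraceId == null || currentTraceId.isBlank())
                ? UUID.randomUUID().toString()
                : currentTraceId;
        out.put(TRACE_ID_HEADER, traceId);
        return out;
    }
}
```

The fallback mirrors the inbound filter: downstream services then either reuse the ID they received or start a new trace, so the chain never breaks.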

Structured log fields with StructuredArguments

Beyond MDC, add per-log-event fields using StructuredArguments:

import static net.logstash.logback.argument.StructuredArguments.*;

log.info("Market volume threshold reached",
        keyValue("marketId", market.getId()),
        keyValue("matchedAmount", matchedAmount),
        keyValue("threshold", threshold),
        keyValue("durationMs", stopwatch.elapsed().toMillis()));

The fields appear as top-level JSON keys, not as string interpolations inside the message:

{
  "message": "Market volume threshold reached",
  "marketId": "1.234567",
  "matchedAmount": 15000.00,
  "threshold": 10000.00,
  "durationMs": 42
}

This is critical for CloudWatch Insights queries — filter matchedAmount > 10000 only works when matchedAmount is a numeric field, not when it’s buried in the message string.
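Because matchedAmount arrives as a number, you can also aggregate on it directly, with no parsing step (field names as in the example above):

```
fields @timestamp, marketId, matchedAmount
| filter matchedAmount > 10000
| stats count(*) as breaches, max(matchedAmount) as maxMatched by marketId
| sort breaches desc
```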

CloudWatch log group and retention

On ECS Fargate, anything the container writes to stdout is shipped to CloudWatch Logs by the awslogs log driver. Configure it in your ECS task definition:

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/market-service",
        "awslogs-region": "eu-west-2",
        "awslogs-stream-prefix": "ecs"
    }
}

With CloudFormation or CDK:

LogGroup logGroup = LogGroup.Builder.create(this, "ServiceLogGroup")
        .logGroupName("/ecs/market-service")
        .retention(RetentionDays.ONE_MONTH)
        .removalPolicy(RemovalPolicy.DESTROY)
        .build();

LogDriver logging = LogDriver.awsLogs(AwsLogDriverProps.builder()
        .logGroup(logGroup)
        .streamPrefix("ecs")
        .build());

Set a retention policy: CloudWatch Logs retains data indefinitely by default, which accumulates cost silently. Thirty days is a reasonable starting point for most services; keep 90 days or more if incident investigations routinely need to look further back.

Querying with CloudWatch Logs Insights

With structured JSON, the Insights query language becomes powerful. Find all errors in the last hour:

fields @timestamp, message, traceId, @logStream
| filter level = "ERROR"
| sort @timestamp desc
| limit 50

Find slow requests:

fields @timestamp, traceId, durationMs, path
| filter durationMs > 1000
| stats avg(durationMs) as avgMs, max(durationMs) as maxMs by path
| sort maxMs desc

Find all log lines for a specific trace:

fields @timestamp, level, message, logger
| filter traceId = "3f9a7b2c-1234-5678-abcd-ef0123456789"
| sort @timestamp asc

The last query is what makes the investment worthwhile: given a trace ID from a user-reported error, you can reconstruct the complete request path across all services in seconds.

Local development without CloudWatch

During local development, JSON logs in a terminal are readable but not pleasant. Use a Spring profile to switch to human-readable output locally:

<springProfile name="local">
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %highlight(%-5level) [%cyan(%logger{36})] [%yellow(%X{traceId})] %msg%n</pattern>
        </encoder>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </root>
</springProfile>

<springProfile name="!local">
    <root level="INFO">
        <appender-ref ref="JSON_CONSOLE"/>
    </root>
</springProfile>

The springProfile element works in logback-spring.xml (note: not logback.xml) because Spring Boot evaluates it after the environment is prepared, so active profiles resolve correctly. Locally, start the app with SPRING_PROFILES_ACTIVE=local (or --spring.profiles.active=local) to get the readable output.

Pro tips

Index your service name: Always include service as a field. When multiple services log to the same CloudWatch log group, it’s the first thing you filter on in Insights.

Log the request duration at the filter level: A filter that logs durationMs for every request gives you a complete latency picture without instrumenting each endpoint individually.
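A framework-free sketch of the timing part (in a real service this would wrap chain.doFilter inside a OncePerRequestFilter and emit the result via keyValue("durationMs", ...); the class name is illustrative):

```java
public class RequestTiming {

    // Run the wrapped work and return elapsed wall-clock milliseconds,
    // the value a request filter would emit as the durationMs field.
    static long elapsedMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```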

Avoid logging sensitive data: Structured logging makes it easy to accidentally log request bodies or response payloads. Establish a convention — log request metadata (path, method, status, duration, trace ID), never request content.
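Where you do need to log an identifier that is borderline sensitive, a small masking helper keeps the field correlatable in queries without exposing the full value (this helper is illustrative, not part of any library mentioned above):

```java
public class LogRedaction {

    // Keep only the last four characters so the field remains useful for
    // correlation in queries without exposing the full value.
    static String mask(String value) {
        if (value == null || value.length() <= 4) {
            return "****";
        }
        return "****" + value.substring(value.length() - 4);
    }
}
```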

If you’re building observability into a Spring Boot service on AWS and want to review the logging and tracing setup, get in touch.

Samuel Jackson

Senior Java Back End Developer & Contractor

Senior Java Back End Developer — Betfair Exchange API specialist, Spring Boot, AWS, and event-driven architecture. 20+ years delivering high-performance systems across betting, finance, energy, retail, and government. Available for Java contracting.