How to configure HikariCP in Spring Boot for production workloads — pool sizing, timeout settings, health checks, leak detection, and the common misconfigurations that cause connection exhaustion.
HikariCP has been the default connection pool in Spring Boot since version 2.0. It is fast, lean, and opinionated about correctness over configurability. Most applications run fine with the defaults — until they don’t. When connection exhaustion occurs in production, it is almost always because the pool was misconfigured, not because HikariCP is wrong. Understanding the settings that matter prevents the 2am page. A production-oriented baseline:
```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5
      connection-timeout: 30000   # 30s — throw if no connection available
      idle-timeout: 600000        # 10min — remove idle connections
      max-lifetime: 1800000       # 30min — retire connections before DB times them out
      keepalive-time: 60000       # 1min — ping idle connections
      pool-name: TradingPool
```
The most important setting is maximum-pool-size. The common mistake is setting it too high. More connections do not mean more throughput — beyond the saturation point, each additional connection adds context-switching and lock contention at the database, slowing everything down.
For PostgreSQL on a modern server, the rule of thumb is (2 × CPU cores) + effective_spindle_count. For a 4-core RDS instance with SSD storage: (2 × 4) + 1 = 9. Round to 10. For an application with light DB usage, 5 is often sufficient.
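If you build the pool by hand rather than through properties, the same arithmetic can be applied programmatically. A minimal sketch, assuming SSD storage (so an effective spindle count of 1):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSizing {

    // Rule of thumb from above: (2 x cores) + effective spindle count.
    public static HikariDataSource buildPool(String jdbcUrl) {
        int cores = Runtime.getRuntime().availableProcessors();
        int effectiveSpindles = 1; // assumption: single SSD volume
        int poolSize = (2 * cores) + effectiveSpindles;

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);
        config.setMaximumPoolSize(poolSize);
        config.setPoolName("TradingPool");
        return new HikariDataSource(config);
    }
}
```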
minimum-idle sets the number of connections kept open when the application is quiet. Setting this equal to maximum-pool-size creates a fixed-size pool — every connection is always open, reducing the latency of connection establishment during traffic spikes.
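In YAML, a fixed-size pool of 10 looks like this:

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 10   # equal to maximum-pool-size, so the pool never shrinks
```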
connection-timeout is the maximum time a thread waits for a connection from the pool. If no connection becomes available within this period, HikariCP throws a SQLTransientConnectionException. The default 30 seconds is too long for most web applications — a request waiting 30 seconds for a connection has already timed out from the caller’s perspective.
For a REST API with a 5-second SLA, set connection-timeout to 3000ms (3 seconds). This surfaces connection exhaustion quickly rather than quietly queuing requests.
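For example:

```yaml
spring:
  datasource:
    hikari:
      connection-timeout: 3000   # fail fast, well inside a 5s SLA
```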
max-lifetime controls how long a connection stays in the pool before being retired and replaced. It must be shorter than the database’s own idle-connection timeout (wait_timeout on MySQL, or a setting such as idle_in_transaction_session_timeout on PostgreSQL). If the database closes a connection before HikariCP retires it, the next request on that connection fails.
For RDS MySQL, the default wait_timeout is 8 hours (28800 seconds). Setting max-lifetime to 1800000ms (30 minutes) is safe and ensures connections are regularly refreshed — useful when credentials rotate or network policies change.
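Before picking a value, it is worth confirming what the server actually enforces. A minimal sketch for a MySQL-compatible database, reading wait_timeout over JDBC:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class TimeoutCheck {

    // Prints the server-side wait_timeout (in seconds) so max-lifetime
    // can be set comfortably below it.
    public static void printWaitTimeout(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'wait_timeout'")) {
            if (rs.next()) {
                System.out.println(rs.getString("Variable_name") + " = "
                        + rs.getString("Value") + "s");
            }
        }
    }
}
```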
A connection that is checked out but never returned causes pool exhaustion over time. HikariCP will log a warning after leak-detection-threshold milliseconds:
```yaml
spring:
  datasource:
    hikari:
      leak-detection-threshold: 5000   # 5s — warn about connections held this long
```
The log output includes the full stack trace of where the connection was acquired, making the leak easy to find. Set this to a value longer than your slowest legitimate query. In development, 2000ms exposes leaks without noise. In production, 10000ms avoids false positives from slow analytics queries.
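To make the failure mode concrete, here is the classic shape of a leak and its fix; the DAO and query are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class OrderDao {   // hypothetical DAO for illustration

    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // LEAK: if anything after getConnection() throws, the connection
    // is never returned to the pool.
    public int countOrdersLeaky() throws Exception {
        Connection conn = dataSource.getConnection();
        ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM orders");
        rs.next();
        return rs.getInt(1); // conn is never closed
    }

    // FIX: try-with-resources returns the connection on every code path.
    public int countOrders() throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```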
Register HikariCP with Spring Boot Actuator’s health endpoint:
```yaml
management:
  health:
    db:
      enabled: true
```
Spring Boot auto-configures this. The health check validates a connection from the pool (via JDBC’s Connection.isValid(), or a validation query if one is configured). If the pool is exhausted or the database is unreachable, the endpoint returns DOWN.
For liveness/readiness separation:
```yaml
management:
  endpoint:
    health:
      group:
        readiness:
          include: db, diskSpace
        liveness:
          include: ping
```
A DOWN database takes the pod out of the load balancer rotation (readiness) but does not restart it (liveness). The database being slow is not a reason to restart the application.
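A minimal Kubernetes probe sketch wired to these groups; the port and timings are placeholders, and the paths assume the default /actuator base path:

```yaml
# Excerpt from a Deployment's container spec (placeholder values)
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  periodSeconds: 5
```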
HikariCP publishes Micrometer metrics when Micrometer is on the classpath:
| Metric | Meaning |
|---|---|
| hikaricp.connections.active | Connections currently in use |
| hikaricp.connections.idle | Connections available in pool |
| hikaricp.connections.pending | Threads waiting for a connection |
| hikaricp.connections.timeout | Connection timeout count |
| hikaricp.connections.acquire | Time to acquire a connection (histogram) |
Alert when connections.pending stays above 0 for more than 30 seconds: this indicates pool pressure before it becomes exhaustion. Alert on any increase in connections.timeout, because every timeout is a failed request.
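As a sketch, the matching Prometheus rules might look like the following, assuming the Micrometer Prometheus registry (which renders the dotted names with underscores and a _total suffix on counters); the thresholds are illustrative:

```yaml
groups:
  - name: hikaricp
    rules:
      - alert: HikariPoolPressure
        expr: hikaricp_connections_pending > 0
        for: 30s
        labels:
          severity: warning
      - alert: HikariConnectionTimeouts
        expr: increase(hikaricp_connections_timeout_total[5m]) > 0
        labels:
          severity: critical
```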
To capture latency percentiles for connection acquisition, enable a histogram on the acquire timer:

```yaml
management:
  metrics:
    distribution:
      percentiles-histogram:
        hikaricp.connections.acquire: true
```
For applications with read-heavy workloads and a read replica, configure two data sources with separate pools:
```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.write")
    public DataSource writeDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.read")
    public DataSource readDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }
}
```
```yaml
spring:
  datasource:
    write:
      jdbc-url: jdbc:postgresql://primary:5432/app   # placeholder URL
      maximum-pool-size: 10
      pool-name: WritePool
    read:
      jdbc-url: jdbc:postgresql://replica:5432/app   # placeholder URL
      maximum-pool-size: 20   # read replicas handle more concurrency
      pool-name: ReadPool
```

Note that because @ConfigurationProperties binds directly onto the HikariDataSource, the pool settings sit at the same level as jdbc-url; a nested hikari: key would be silently ignored for these manually built pools.
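To consume the two pools, inject by bean name. A sketch with a hypothetical repository that routes a read-only query to the replica:

```java
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class ReportRepository {   // hypothetical repository for illustration

    private final JdbcTemplate readJdbc;

    // "readDataSource" matches the @Bean method name in DataSourceConfig above.
    public ReportRepository(@Qualifier("readDataSource") DataSource readDataSource) {
        this.readJdbc = new JdbcTemplate(readDataSource);
    }

    public long orderCount() {
        // Read-only query served by the replica's ReadPool.
        Long count = readJdbc.queryForObject("SELECT COUNT(*) FROM orders", Long.class);
        return count != null ? count : 0L;
    }
}
```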
The common misconfigurations that cause connection exhaustion:

- **Oversized pool:** Setting maximum-pool-size to 100 does not make the application faster. It increases the number of concurrent queries, which increases database lock contention, which slows everything down.
- **max-lifetime longer than the DB timeout:** Leads to “connection is closed” errors under load when the database has already dropped the connection.
- **No leak detection in development:** Connections acquired but never returned (often hidden behind unclosed Statement or ResultSet handling) accumulate silently, only manifesting in production under load.
- **connection-timeout left at 30 seconds:** Requests silently queue for up to 30 seconds during connection exhaustion rather than failing fast and surfacing the problem in metrics.
With pool sizing calibrated to your database, leak detection catching mistakes during development, and Micrometer metrics surfacing pressure before it becomes an outage, HikariCP becomes a known quantity rather than a mystery.
If you’re tuning database performance for a Spring Boot service in production, get in touch.