How to upload, download, and manage files in AWS S3 from a Spring Boot service using the AWS SDK v2 — covering multipart uploads, presigned URLs, and IAM access patterns.
S3 is the right place for files in an AWS-hosted service: race replays, trade reports, uploaded documents, bulk data exports. The AWS SDK v2 for Java is the modern client, and Spring Cloud AWS wraps it cleanly. Understanding the direct SDK path first, before adding the Spring abstraction, means you know what’s happening when something goes wrong. Start with the S3 module in your Maven build:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>s3</artifactId>
<version>2.25.16</version>
</dependency>
Or import the BOM, after which individual SDK modules no longer need explicit versions:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>bom</artifactId>
<version>2.25.16</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
@Configuration
public class S3Config {
@Bean
public S3Client s3Client(@Value("${aws.region}") String region) {
return S3Client.builder()
.region(Region.of(region))
.credentialsProvider(DefaultCredentialsProvider.create())
.build();
}
}
DefaultCredentialsProvider walks the standard credential chain: Java system properties, environment variables, the shared ~/.aws/credentials file, and, in production, the ECS task role or EC2 instance profile. No credentials in code.
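For local development you can pin a named profile instead of relying on the chain; a minimal sketch, assuming a [dev] profile exists in ~/.aws/credentials:
// Local-only override: read the [dev] profile rather than walking the chain
S3Client localClient = S3Client.builder()
        .region(Region.of("eu-west-2")) // example region
        .credentialsProvider(ProfileCredentialsProvider.create("dev"))
        .build();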
@Service
@RequiredArgsConstructor
public class S3StorageService {
private final S3Client s3Client;
@Value("${aws.s3.bucket}")
private String bucket;
public String upload(String key, byte[] content, String contentType) {
PutObjectRequest request = PutObjectRequest.builder()
.bucket(bucket)
.key(key)
.contentType(contentType)
.serverSideEncryption(ServerSideEncryption.AES256)
.build();
s3Client.putObject(request, RequestBody.fromBytes(content));
return key;
}
public String uploadStream(String key, InputStream stream, long contentLength,
String contentType) {
PutObjectRequest request = PutObjectRequest.builder()
.bucket(bucket)
.key(key)
.contentType(contentType)
.contentLength(contentLength)
.serverSideEncryption(ServerSideEncryption.AES256)
.build();
s3Client.putObject(request, RequestBody.fromInputStream(stream, contentLength));
return key;
}
}
Always set serverSideEncryption. S3 has encrypted new objects at rest by default (SSE-S3) since January 2023, but being explicit documents intent and keeps uploads compliant with bucket policies that require the encryption header.
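Wiring the service into an upload endpoint might look like this; the endpoint path and key layout are illustrative:
@PostMapping("/documents")
public ResponseEntity<String> uploadDocument(@RequestParam("file") MultipartFile file)
        throws IOException {
    // Stream straight through rather than calling file.getBytes(),
    // which would buffer the whole upload on the heap
    String key = "documents/" + UUID.randomUUID() + "-" + file.getOriginalFilename();
    storageService.uploadStream(key, file.getInputStream(), file.getSize(),
            file.getContentType());
    return ResponseEntity.ok(key);
}
Downloads mirror the uploads, with a byte-array form for small objects and a streaming form: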
public byte[] download(String key) {
GetObjectRequest request = GetObjectRequest.builder()
.bucket(bucket)
.key(key)
.build();
ResponseBytes<GetObjectResponse> response =
s3Client.getObjectAsBytes(request);
return response.asByteArray();
}
public InputStream downloadStream(String key) {
GetObjectRequest request = GetObjectRequest.builder()
.bucket(bucket)
.key(key)
.build();
return s3Client.getObject(request);
}
For large files, use the stream form and write directly to the response output stream rather than buffering the entire file in memory.
@GetMapping("/reports/{reportId}")
public ResponseEntity<StreamingResponseBody> downloadReport(@PathVariable String reportId) {
String key = "reports/" + reportId + ".csv";
GetObjectResponse metadata = storageService.getMetadata(key);
StreamingResponseBody body = outputStream -> {
try (InputStream s3Stream = storageService.downloadStream(key)) {
s3Stream.transferTo(outputStream);
}
};
return ResponseEntity.ok()
.header(HttpHeaders.CONTENT_DISPOSITION,
"attachment; filename=\"" + reportId + ".csv\"")
.header(HttpHeaders.CONTENT_TYPE, "text/csv")
.body(body);
}
StreamingResponseBody writes directly to the HTTP response without loading the full file into the JVM heap.
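The getMetadata helper used above is not in the service yet; it is a thin wrapper over headObject, which returns an object’s metadata without fetching the body. A minimal sketch:
public HeadObjectResponse getMetadata(String key) {
    // Returns size, content type, ETag, etc.; throws NoSuchKeyException if absent
    return s3Client.headObject(HeadObjectRequest.builder()
            .bucket(bucket)
            .key(key)
            .build());
}
Calling it before the response is committed means a missing object surfaces as an exception your error handler can turn into a 404, rather than a broken half-written download.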
S3’s multipart upload API is required for objects over 5 GB (the single-PUT limit) and recommended above 100 MB. The Transfer Manager in the SDK v2 handles this automatically:
@Bean
public S3TransferManager transferManager(S3AsyncClient asyncClient) {
return S3TransferManager.builder()
.s3Client(asyncClient)
.build();
}
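The Transfer Manager ships in the separate s3-transfer-manager artifact, and the S3AsyncClient it wraps is a bean you provide. A sketch using the CRT-based client, which performs multipart transfers in parallel by default (it also needs the aws-crt dependency on the classpath):
@Bean
public S3AsyncClient s3AsyncClient(@Value("${aws.region}") String region) {
    // CRT-based client: automatic multipart with parallel part transfers
    return S3AsyncClient.crtBuilder()
            .region(Region.of(region))
            .credentialsProvider(DefaultCredentialsProvider.create())
            .build();
}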
public CompletableFuture<String> uploadLargeFile(String key, Path filePath) {
UploadFileRequest request = UploadFileRequest.builder()
.putObjectRequest(p -> p.bucket(bucket).key(key))
.source(filePath)
.build();
Upload upload = transferManager.uploadFile(request);
return upload.completionFuture()
.thenApply(result -> key);
}
The Transfer Manager splits the file, uploads parts in parallel, and assembles them. You get progress events if needed.
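Progress reporting hooks in through a TransferListener on the request; the SDK ships a logging implementation:
UploadFileRequest request = UploadFileRequest.builder()
        .putObjectRequest(p -> p.bucket(bucket).key(key))
        .source(filePath)
        // Logs the transfer ratio as parts complete
        .addTransferListener(LoggingTransferListener.create())
        .build();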
For browser-direct uploads or time-limited download links, generate a presigned URL rather than routing the file through your service:
@Bean
public S3Presigner presigner(@Value("${aws.region}") String region) {
return S3Presigner.builder()
.region(Region.of(region))
.credentialsProvider(DefaultCredentialsProvider.create())
.build();
}
public String generateDownloadUrl(String key, Duration expiry) {
GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
.signatureDuration(expiry)
.getObjectRequest(r -> r.bucket(bucket).key(key))
.build();
return presigner.presignGetObject(presignRequest)
.url()
.toString();
}
public String generateUploadUrl(String key, Duration expiry) {
PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
.signatureDuration(expiry)
.putObjectRequest(r -> r.bucket(bucket).key(key))
.build();
return presigner.presignPutObject(presignRequest)
.url()
.toString();
}
Presigned upload URLs allow browsers to upload directly to S3 without your service acting as a proxy — better throughput, lower costs, no memory pressure.
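Handing the URL to the browser is then a plain JSON endpoint; the path, key layout, and 15-minute expiry here are illustrative:
@PostMapping("/uploads")
public Map<String, String> createUploadUrl(@RequestParam String filename) {
    String key = "uploads/" + UUID.randomUUID() + "/" + filename;
    String url = storageService.generateUploadUrl(key, Duration.ofMinutes(15));
    // The browser PUTs the file bytes directly to this URL; persist the key for later retrieval
    return Map.of("key", key, "url", url);
}
The signature covers the HTTP method, so a URL presigned for PUT cannot be reused for GET, and it stops working once the duration elapses.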
Grant only the operations your service needs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/reports/*"
    }
  ]
}
Scope to the specific prefix your service owns, not s3:* on the whole bucket. Two details worth knowing: add s3:AbortMultipartUpload if you use the Transfer Manager, so failed multipart uploads can be cleaned up, and remember that presigned URLs are signed with the service’s own credentials, so they can never grant more access than this policy allows.
Attach an S3 lifecycle rule to move old objects to cheaper storage tiers automatically:
LifecycleRule rule = LifecycleRule.builder()
.id("move-old-reports")
.filter(f -> f.prefix("reports/"))
.transitions(Transition.builder()
.days(30)
.storageClass(TransitionStorageClass.STANDARD_IA)
.build())
.expiration(LifecycleExpiration.builder().days(365).build())
.status(ExpirationStatus.ENABLED)
.build();
Objects under reports/ move to Infrequent Access after 30 days and expire after a year — zero manual cleanup, significant cost reduction for high-volume reporting.
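The rule takes effect once applied to the bucket, typically from provisioning code or a startup task:
s3Client.putBucketLifecycleConfiguration(b -> b
        .bucket(bucket)
        .lifecycleConfiguration(c -> c.rules(rule)));
Note that this call replaces the bucket’s entire lifecycle configuration, so include every rule you want to keep in the one request.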
If you’re building AWS storage infrastructure for a Spring Boot service and want a review, get in touch.