Java heap space is insufficient when uploading files to AWS S3

I'm trying to upload a file to AWS S3 using the AWS SDK for Java. The problem is that my application is unable to upload large files because the heap reaches its limit, and the upload fails with:

    java.lang.OutOfMemoryError: Java heap space

I personally don't think increasing the heap memory is a permanent solution, because I have to upload files of up to 100 GB. What should I do?

Here is the code snippet:

        BasicAWSCredentials awsCreds = new BasicAWSCredentials(AID, Akey);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.fromName("us-east-2"))
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .build();

        // Create the bucket if it does not exist yet
        if (!s3Client.doesBucketExist(ABuck)) {
            s3Client.createBucket(ABuck);
        }

        // Fails with OutOfMemoryError for large files
        s3Client.putObject(new PutObjectRequest(ABuck, AFkey, file.getInputStream(), new ObjectMetadata())
                .withCannedAcl(CannedAccessControlList.PublicRead));

1 Answer

  • I strongly recommend calling setContentLength() on the ObjectMetadata, since:

    ..If not provided, the library will have to buffer the contents of the input stream in order to calculate it.

    (..which predictably leads to an OutOfMemoryError on sufficiently large files.)

    source: PutObjectRequest javadoc

    Applied to your code:

     // ...
     ObjectMetadata omd = new ObjectMetadata();
     // a tiny line of code, but with a "huge" information gain and memory saving! ;)
     omd.setContentLength(file.length());
     // (if `file` is a Spring MultipartFile rather than a java.io.File, use file.getSize() instead)

     s3Client.putObject(new PutObjectRequest(ABuck, AFkey, file.getInputStream(), omd)
             .withCannedAcl(CannedAccessControlList.PublicRead));
     // ...
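
    Note also that a single PutObject call is limited to 5 GB, so for files approaching 100 GB you will need S3's multipart upload anyway. Below is a minimal sketch using the SDK's TransferManager (assuming the same v1 AWS SDK for Java as in your snippet; the file path is a placeholder). It uploads from a File in parts, so the whole file never has to fit in the heap:

     import java.io.File;
     import com.amazonaws.services.s3.transfer.TransferManager;
     import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
     import com.amazonaws.services.s3.transfer.Upload;

     // ...
     TransferManager tm = TransferManagerBuilder.standard()
             .withS3Client(s3Client)  // reuse the client built in your snippet
             .build();

     // upload() runs a multipart upload under the hood for large files;
     // parts are read from the File on demand, so memory use stays flat
     Upload upload = tm.upload(ABuck, AFkey, new File("/path/to/huge-file"));
     upload.waitForCompletion();  // blocks until all parts have been uploaded
     tm.shutdownNow(false);       // false = keep the wrapped s3Client alive
     // ...

    (If your file arrives as a Spring MultipartFile, write it to a temporary File first, e.g. with transferTo(); a file-backed upload lets TransferManager upload parts in parallel and retry individual parts.)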
