Why is Java & Cloud a Good Choice for your Business

First, let's answer the question: why do we really need cloud services nowadays? Why not keep everything on the company's private (on-premises) servers? There are numerous benefits to using the cloud, and we will go through them one by one in this post.

Focus on Your Company's Core Business and the Software That Supports It

There is no need to focus on infrastructure, hardware, or running and managing big data centres. All of this should just keep things running for you without taking up too much of your attention.

There are different service models available:

  • Infrastructure as a Service (IaaS) - much like on-premises infrastructure; the user is responsible for the operating system, data, applications, middleware, and runtimes
  • Platform as a Service (PaaS) - delivers almost everything; the user is responsible only for deploying applications and their data
  • Software as a Service (SaaS) - delivers an entire application, accessible for example via your browser

Lower Infrastructure Costs.  

There is no need to buy and maintain hardware anymore to run your applications.

Infrastructure is more 'liquid': it can be treated as a short-term asset, one that may even last only for the development phase.

Use Your Application Globally Without Single Points of Failure.

Infrastructure around the globe is at your disposal now. You can deploy to multiple so-called Regions, each composed of several isolated Availability Zones (AZs). This means your application can be reached by customers quickly and with great resilience. For example, S3 is designed to deliver 99.999999999% durability and 99.99% availability.

Pay Only for What You Really Use and Be Ready for Upcoming Spikes.

There is no need to make capacity calculations upfront. Your application can scale up and down automatically according to current traffic. This also means you can save a lot of money, because you are not obliged to buy infrastructure that is needed only, let's say, a couple of times per year.

Increased Development Speed and Agility.

Infrastructure resources needed by your development team are just a few clicks away, not weeks away. Additionally, the company has access to a vast range of ready-to-use managed services, so some in-house development can be reduced.

Java Lambda Applications Can Start Blazingly Fast With SnapStart.

Lambda cold starts on Java used to be notoriously slow.

With the introduction of SnapStart, Java cold starts are reduced significantly, by around 10x, without any development effort and at no extra cost. The magic is based on taking a snapshot of the initialized memory and disk state and then caching it for future reuse. More information can be found here: https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
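
SnapStart also supports runtime hooks, based on the open-source CRaC API, for code that must run right before the snapshot is taken or right after it is restored (for example, to refresh connections or secrets). Below is a minimal sketch, assuming the org.crac library is on the classpath; the class name and log messages are illustrative only:

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Illustrative example: react to the SnapStart snapshot lifecycle
public class SnapStartHooks implements Resource {

    public SnapStartHooks() {
        // Register this object so the runtime invokes our hooks
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) {
        // Runs before the snapshot is taken: close anything that must not be frozen
        System.out.println("Closing connections before checkpoint");
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) {
        // Runs after the snapshot is restored: re-open connections, refresh state
        System.out.println("Re-initializing state after restore");
    }
}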

Recently, AWS released version 2 of its SDK for Java, which allows you to use AWS services like S3, EC2, and DynamoDB inside your application. We will run sample code from AWS's GitHub examples repository and play around with the storage service S3, but many more services can be used later via the AWS SDK.

First, there are a couple of steps required to use AWS services with Java.

How to Get Started With AWS Using Java

Step 1. Create a Free AWS Account.  

Please follow the official instructions:

https://repost.aws/knowledge-center/create-and-activate-aws-account

Note: the 'free' account means you are allowed to use AWS services within certain constraints for one year after creation. More information can be found here: https://aws.amazon.com/free

For example, you are given: 750 hours for running an EC2 or RDS server; 5 GB of S3 storage; 1 million calls for Lambda or SNS; and more.

Now that you have a working AWS account, you have your private cloud playground ready.

Step 2. Create a Dedicated IAM User on AWS.

This is required to make API calls through the SDK, from the CLI or from your application.

More information on how to proceed can be found here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html

Remember to enable 'programmatic access' and attach a policy that allows access only to the S3 service.

After this step, copy the two values, the 'access key' and the 'secret access key', into your local file: C:\Users\<your_login>\.aws\credentials

[default]
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_secret_access_key>

Important tip: keep this file and its values accessible only to your local user, because anyone holding these credentials can call your AWS services. Keeping this file safe is strongly advised.

As a side note, there are many other ways credentials can be configured: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
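
For instance, in a Java application using SDK v2 you can let the SDK resolve credentials through its default chain (environment variables, the credentials file above, IAM roles, and so on), or pin a specific profile explicitly. A minimal sketch:

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class CredentialsExample {
    public static void main(String[] args) {
        // Default chain: env variables, system properties, ~/.aws/credentials, IAM role, ...
        try (S3Client s3Default = S3Client.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build()) {
            s3Default.listBuckets().buckets()
                    .forEach(b -> System.out.println("Bucket: " + b.name()));
        }

        // Or explicitly use the [default] profile from the credentials file
        try (S3Client s3Profile = S3Client.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(ProfileCredentialsProvider.create("default"))
                .build()) {
            System.out.println("Client with profile credentials created");
        }
    }
}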

Step 3 (optional). Check the Connection Using the AWS Command Line Interface (CLI).

Let's check that everything is OK with your access to AWS S3 from your local PC.

First, install the AWS CLI. This is a straightforward task: just run the installer and you have it: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

Then check the installation by running: aws --version

Lastly, check the connection to AWS S3 by listing all your buckets: aws s3 ls

All available CLI commands are described here: https://docs.aws.amazon.com/cli/latest/index.html

Step 4. Download the Sample Code from AWS’s GitHub.

AWS maintains a great repository with short examples: https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2. Please download it and take some time to check the contents of 'example_code' and 'usecases'.
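
You can fetch the whole repository with: git clone https://github.com/awsdocs/aws-doc-sdk-examples.git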

Take a look at the dependencies we need in pom.xml, then compile the project:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.19.14</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3</artifactId>
    </dependency>
</dependencies>

Step 5. Run AWS S3 and be happy.

Finally, let's focus on 'example_code/s3' in the AWS GitHub repository.

In all our tests, the interface software.amazon.awssdk.services.s3.S3Client will be used. Take a look directly in IntelliJ at the many methods it provides.

To make changes on S3, we first need to set a couple of variables in the config.properties file:

bucketName = andrewbuckettest1            # bucket name has to be globally unique; choose your own
objectPath = c:/file1.txt                 # local file that will be uploaded to S3
objectKey = Documents/file1.txt           # key under which the uploaded file will be stored on S3
path = d:/file1_downloaded_from_s3.txt    # location where the downloaded file will be saved locally
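
As a reference, this is roughly how such a properties file can be read in plain Java. The file name and keys match the listing above; the loading code itself is a generic sketch, not the exact code from the AWS repository:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigLoader {
    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        // Expects config.properties on the classpath (e.g. in src/test/resources)
        try (InputStream in = ConfigLoader.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            config.load(in); // throws NullPointerException if the file is missing
        }
        String bucketName = config.getProperty("bucketName");
        String objectPath = config.getProperty("objectPath");
        System.out.println("Using bucket: " + bucketName + ", file: " + objectPath);
    }
}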

Lastly, run the Java test methods below to trigger file operations on S3:

  • Create a new bucket -> run createBucket()
  • List your buckets -> run ListObjects()
  • Upload a file -> run putObject()
  • List files in a bucket -> run ListObjects()
  • Download a file -> run GetObjectData()
  • Delete a file from a bucket -> run deleteObjects()
  • Delete multiple files -> run DeleteMultipleObjects()

That's it. After running each test, check the changes in your AWS S3 console directly!

Example code from the official AWS repository to list objects in a bucket:

String bucketName = "yourbucketname";
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;

S3Client s3 = S3Client.builder()
        .region(region)
        .credentialsProvider(credentialsProvider)
        .build();

try {
    ListObjectsRequest listObjects = ListObjectsRequest.builder()
            .bucket(bucketName)
            .build();

    ListObjectsResponse res = s3.listObjects(listObjects);
    List<S3Object> objects = res.contents();
    for (S3Object myValue : objects) {
        System.out.print("\n The name of the key is " + myValue.key());
        // calKb() is a small helper defined in the AWS example that converts bytes to kilobytes
        System.out.print("\n The object is " + calKb(myValue.size()) + " KBs");
        System.out.print("\n The owner is " + myValue.owner());
    }
} catch (S3Exception e) {
    System.err.println(e.awsErrorDetails().errorMessage());
    System.exit(1);
}

s3.close();

LocalStack – run AWS cloud simulation locally!

When we run the code from the AWS GitHub repository, we use the real AWS S3 service. But what if you don't want to rely on any external resource, no matter how reliable? You can simulate an AWS environment on your own laptop. Cloud application mocking and testing has never been easier or faster. You only need to run another Docker container and connect to it through your application or the CLI.

Run LocalStack via Docker:

docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack 

Check from the CLI whether LocalStack is running:

aws --endpoint-url http://localhost:4566 s3 ls 

In Java, you need to point your client at the locally simulated AWS endpoint 'http://localhost:4566' within your test profile, as sketched below.
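
As an illustration, here is a minimal sketch of an SDK v2 client wired to LocalStack; the class name is ours, and the dummy credentials are fine because LocalStack does not validate them:

import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public class LocalStackS3Check {
    public static void main(String[] args) {
        S3Client s3 = S3Client.builder()
                // Point the client at LocalStack instead of the real AWS endpoint
                .endpointOverride(URI.create("http://localhost:4566"))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")))
                // Path-style addressing avoids resolving bucket names as subdomains of localhost
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .build())
                .build();

        // List buckets in the simulated S3 to confirm the connection works
        s3.listBuckets().buckets()
                .forEach(b -> System.out.println("Bucket: " + b.name()));

        s3.close();
    }
}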

Go Serverless with AWS Lambda.

AWS Lambda, in essence, is an event-driven processing service. With it, you can run serverless processes triggered by various types of events such as Amazon S3, API Gateway, Kafka, Alexa, IoT, SNS, SQS, and many more. Moreover, there's no need to manage any servers or operating systems to execute your Java code. The application will scale automatically, and you only pay for the compute time consumed. This service is ideal for building serverless backends for web, mobile, and IoT applications. Creating a new Lambda function on AWS can be quick and efficient for development teams, provided they know how to do it.

Pros of using Lambda:

  • You are charged only for what you use
  • There is no need to manage any servers, because Lambdas are serverless and scale automatically
  • The event-driven architecture style is loosely coupled
  • Many AWS event sources are already integrated
  • Enhanced application resilience
  • Cloud infrastructure is simplified, and large DevOps teams are no longer required

Cons of using Lambda:

  • Long cold starts - but nowadays they can be eliminated with a few clicks using SnapStart
  • Limited number of supported Java versions - but Java 17 should be added soon; currently you can use Java 11
  • Database connection pooling is awkward, since many short-lived function instances may each open their own connections
  • Functions time out after 15 minutes at most - meaning you cannot use Lambda for long-running processing
  • Slightly longer responses than from EC2 - the difference is around 100 milliseconds

Below we will show the steps to create and run a Java AWS Lambda triggered by saving a PDF file to S3. The function will convert the file to text with the Apache PDFBox library and save the result to a new output file on S3.

Step 1. Create a Java handler class.

There are a couple of ways you can write an AWS Lambda handler in Java:

1. A custom POJO class with a dedicated method.

public class LambdaHandler {  
    public String handleRequest(String input, Context context) {  
        context.getLogger().log("Input is: " + input);  
        return "Hello from simplest Java lambda handler: " + input;  
    }  
} 

2. Classes implementing the AWS interfaces RequestHandler and RequestStreamHandler.

public class LambdaHandler  
        implements RequestHandler<String, String> {  
    public String handleRequest (String input, Context context) {  
        context.getLogger().log("Input: " + input);  
        return "Hello: " + input;  
    }  
}  
 
  
 
public class LambdaStreamHandler
        implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream,
                              OutputStream outputStream, Context context) throws IOException {
        String input = IOUtils.toString(inputStream, "UTF-8");
        outputStream.write(("Hello: " + input).getBytes());
    }
}

3. Using the Spring Cloud Function library.

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public Function<String, String> handleRequest() {
        return value -> value.toUpperCase();
    }
}

Each solution has its own advantages:

Option 1 is great when you want to run a Lambda with Java code fast. You only need to provide package.ClassName::methodName when creating the Lambda on AWS.

Option 2 gives you interfaces from AWS that help you write code integrated with AWS events. Two dependencies will be needed then: aws-lambda-java-core and aws-lambda-java-events.

Option 3 can be used when you would like to run your Lambda code on different cloud providers (AWS, GCP, Azure) or even as a local REST endpoint. You only need to replace the development starter dependency spring-cloud-starter-function-web with a cloud-specific deployment dependency such as spring-cloud-function-adapter-aws.
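
For illustration, the swap could look like this in pom.xml (versions omitted here on purpose; in practice they are usually managed by the Spring Cloud BOM):

<!-- During local development: exposes the function as a REST endpoint -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-function-web</artifactId>
</dependency>

<!-- For AWS deployment: replace the starter above with the AWS adapter -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-aws</artifactId>
</dependency>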

Step 2. Add needed dependencies in pom.xml.

Let's assume we want to create Java code for option 2 (with the AWS interfaces); then we need to add dependencies like the following:

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>3.11.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.11.578</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.pdfbox</groupId>
        <artifactId>pdfbox</artifactId>
        <version>2.0.28</version>
    </dependency>
</dependencies>

There is also a dependency on the Apache PDFBox library, used to convert the input PDF into text, and a dependency on S3, so that we can later read and save files inside our new Java Lambda handler method. You will also need to add the 'shade' plugin to get a single jar with all the classes needed when the Lambda runs on AWS.

<plugin>  
    <groupId>org.apache.maven.plugins</groupId>  
    <artifactId>maven-shade-plugin</artifactId>  
    <version>3.0.0</version>  
    <executions>  
        <execution>  
            <phase>package</phase>  
            <goals>  
                <goal>shade</goal>  
            </goals>  
        </execution>  
    </executions>  
</plugin> 

 

Step 3. Create a Java handler method.  

It will read a PDF file uploaded to S3, convert it to text with PDFBox, and save the result in an output file on S3.

public class HandlerS3 implements RequestHandler<S3Event, String> {  
    @Override  
    public String handleRequest(S3Event event, Context context) {  
        LambdaLogger logger = context.getLogger();  
        logger.log("HandlerS3::handleRequest START");  
        S3EventNotification.S3EventNotificationRecord record = event.getRecords().get(0);  
        String srcBucket = record.getS3().getBucket().getName();  
        String srcKey = record.getS3().getObject().getUrlDecodedKey();  
  
        logger.log("Get PDF from S3");  
        S3Client s3Client = S3Client.builder().build();  
        InputStream inputStream = getObject(s3Client, srcBucket, srcKey);  
  
        String text;
        try {
            text = new PDFTextStripper().getText(PDDocument.load(inputStream));
            logger.log("PDF text: " + text);
        } catch (IOException e) {
            logger.log("Error: " + e.getMessage());
            throw new RuntimeException(e);
        }
  
        logger.log("Save new file with extracted text");  
        String dstBucket = srcBucket;  
        String dstKey = "output/converted-" + srcKey + ".txt";  
        PutObjectResponse putObjectResponse = putS3Object(s3Client, dstBucket, dstKey, text);  
        logger.log("putObjectResponse:" + putObjectResponse);  
  
        logger.log("HandlerS3::handleRequest END");  
        return text;  
    }  
  
 
    private InputStream getObject(S3Client s3Client, String bucket, String key) {
        GetObjectRequest getObjectRequest = GetObjectRequest.builder()  
                .bucket(bucket)  
                .key(key)  
                .build();  
        return s3Client.getObject(getObjectRequest);  
    }  
  
    public static PutObjectResponse putS3Object(S3Client s3, String bucketName, String objectKey, String text) {  
        try {  
            PutObjectRequest putOb = PutObjectRequest.builder()  
                    .bucket(bucketName)  
                    .key(objectKey)  
                    .metadata(new HashMap<>())  
                    .build();  
            PutObjectResponse response = s3.putObject(putOb, RequestBody.fromString(text));  
            return response;  
        } catch (S3Exception e) {  
            System.err.println(e.getMessage());  
            System.exit(1);  
        }  
        return null;  
    }  
} 

Step 4. (optional, but good to have) Install the “AWS Toolkit” plugin for IntelliJ and SAM CLI.

These tools allow you to browse AWS resources inside IntelliJ and run operations like creating or deploying a Lambda function, or uploading and deleting files on S3, with a few clicks. Great to have. See https://aws.amazon.com/intellij/ and https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html.

Step 5. Create and configure Lambda on AWS.

Create an AWS Lambda:

  • Go to Lambda service and click ‘Create function’  
  • Choose ‘Author from scratch’
  • Fill ‘Function name’
  • Set ‘Runtime’ to ‘Java 11’
  • Click ‘Create function’

Set permissions to allow your new Lambda to use the S3 service:

  • Go to the IAM service and then ‘Roles’
  • Find the role that is attached to your Lambda
  • Choose ‘Add permissions’ and then ‘Attach policies’
  • Add the ‘AmazonS3FullAccess’ policy

Add a trigger for your Lambda; it will fire for each PDF file saved in the given bucket:

  • In the Lambda service, click ‘+Add trigger’ and select ‘S3’
  • Choose the bucket that will be the source of events
  • Set ‘Event types’ to ‘All object create events’
  • Set the prefix to ‘input/’ and the suffix to ‘.pdf’
  • Click ‘Add’

Upload the Java code with the handler:

  • In the Lambda service, under ‘Code source’, click ‘Upload from’ and select ‘.zip or .jar file’
  • Select your jar to be uploaded

Set the Java handler method:

  • In the Lambda service, under ‘Runtime settings’, click ‘Edit’, set the handler to ‘org.example.HandlerS3::handleRequest’, and click ‘Save’

Set SnapStart to reduce the cold start time by around 10x, for free:

  • In the Lambda service, go to ‘Configuration -> General configuration’ and click ‘Edit’
  • Set ‘SnapStart’ to ‘PublishedVersions’ and click ‘Save’. That’s it. The same setting can also be applied from the CLI, as shown below.
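
If you prefer the command line, the same setting can be applied with the AWS CLI; a sketch, where the function name is a placeholder (remember that SnapStart applies to published versions, hence the second command):

aws lambda update-function-configuration \
    --function-name my-pdf-converter \
    --snap-start ApplyOn=PublishedVersions

aws lambda publish-version --function-name my-pdf-converter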

Now you should be in good shape to make some tests!  

Step 6. Run and test your new Java AWS Lambda.

You can trigger your Java AWS Lambda functions in many ways, to name a few:

  • From the AWS console, via the ‘Test’ tab
  • Via a dedicated REST endpoint, which you can create under ‘Configuration -> Function URL’
  • Via an API Gateway REST endpoint, which you can create via ‘+Add trigger’ -> ‘API Gateway’

In our case, let's run a real-world scenario test. Please upload a PDF file (or several) to the input/ directory of your bucket. Then check the output/ directory for the converted TXT file; it should be there as the result of your Java Lambda processing. Additionally, you can check the logs via the CloudWatch service, or even better, via the AWS Toolkit plugin.
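
If you test from the console's 'Test' tab instead, you can start from the built-in 's3-put' event template; a heavily trimmed version of such an event, with placeholder bucket and key, looks roughly like this:

{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "your-bucket-name" },
        "object": { "key": "input/sample.pdf" }
      }
    }
  ]
}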

Conclusion

Lastly, using Java and cloud services for your business can provide numerous benefits, such as an increased focus on your core business, lower infrastructure costs, global reach, scalability, and faster development.

With the simple steps outlined in this post, you can quickly integrate AWS services into your Java applications and take advantage of the cloud's powerful features.  

Furthermore, the LocalStack option allows you to locally simulate an AWS environment for testing purposes. Don't pass up the opportunity to improve your company's efficiency and agility by utilizing Java and Cloud services. Contact us today to learn how we can assist your company in smoothly transitioning into the world of cloud computing with Java.
