Building a Serverless App with AWS Lambda, S3, DynamoDB & API Gateway

What is AWS Lambda?

AWS Lambda is an on-demand compute service that invokes your code in response to events. Events can originate internally from other AWS services, for example a file upload to an S3 bucket, or externally from your own applications via HTTP. Lambda functions can be written in any of the supported runtimes; at the time of writing, Lambda supports Python, Node.js, C# and Java. Unlike a traditional server-side application, a Lambda function is not a continuously running process that waits for incoming requests. When Lambda receives an event trigger, it spins up a new process, runs the function and then tears the process down again. In other words, the Lambda function is only deployed for the duration of the invocation and is then terminated. When a new event occurs, a new process is created to execute the function for that specific event.

Why use Lambda?

  • The Lambda runtime is fully managed by AWS. Once a function is uploaded and configured, Lambda is responsible for managing the resources required to run the code. 
  • Developers are free from the traditional overhead of configuring and maintaining server instances. 
  • Lambda will immediately scale to meet spikes in demand. 
  • Lambda is cost effective because you only pay for the computational resources you use. This is of course true of other AWS compute services, but the Lambda cost model is more granular than EC2, for example, with resources charged per 100 milliseconds of execution.
  • Lambda's event-driven model means you can integrate nicely with a range of AWS services while still ensuring loose coupling.
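To illustrate the 100 millisecond billing granularity, here is a sketch of how a billed duration is derived from an invocation's actual duration:

```java
public class BilledDuration {

    /* Lambda rounds each invocation's duration up to the nearest
       100 ms increment when calculating the compute charge. */
    public static long billedMillis(long durationMs) {
        return ((durationMs + 99) / 100) * 100;
    }

    public static void main(String[] args) {
        System.out.println(billedMillis(130)); // 200 - rounded up to the next 100 ms
        System.out.println(billedMillis(100)); // 100 - already on a boundary
    }
}
```

A 130 ms invocation is therefore billed as 200 ms, which is still far finer grained than paying for an EC2 instance by the hour.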

Sample Application

In this post we'll build a serverless app that demonstrates how Lambda can be used to process both internal AWS events and external events from a web app. The simple web app will allow users to upload images to S3 storage, view details of the uploaded images and then delete images. We'll use Lambda functions to handle events triggered by S3, as well as handling HTTP requests directly from the web app. As well as Lambda and S3, the sample app will use DynamoDB, API Gateway and Angular for the front end.

Application Components

The diagram below describes the various components/services and how they'll interact.

Serverless app architecture
  • Angular Client - Allows the user to add and delete images from S3. It also calls an API Gateway endpoint to retrieve details for all images uploaded.
  • Save/Delete Lambda Function - will handle image upload and delete events from S3. It will be invoked by S3 when
    • a new image is uploaded - the function will use the supplied event data to call back to S3 and retrieve the image. The image details will be saved to DynamoDB.
    • an image is deleted - the function will use the supplied event data to identify the deleted image and remove the associated details from DynamoDB.   
  • Retrieve Image Details Lambda Function - will retrieve image details from DynamoDB and return a JSON result.
  • DynamoDB - A single DynamoDB table will be used to persist image details. All interactions with DynamoDB will happen via the Lambda functions.
  • API Gateway - an API Gateway endpoint will act as a bridge from the web app to the Retrieve Image Details Lambda function.

Source Code

The full source code for the sample app is available on GitHub. It includes the SaveAndDeleteImageDetails Lambda function, the RetrieveImageDetails Lambda function and the Angular client. Having the full source code locally should make this post easier to follow.

Creating an S3 Bucket

The first thing we need to do is create an S3 bucket to store uploaded images. Log in to the AWS console and search for S3 in the services menu. On the S3 landing page, click Create Bucket and you should be presented with a modal similar to the one below.

Creating an S3 Bucket

Give the bucket a name and choose a region. Note that bucket names are globally unique, so your first couple of choices may already be taken by other users. I've chosen the region closest to me, but you can choose any region you like. Click Create and you should see your newly created bucket listed alongside any other buckets you may have.

Configuring the Access Control List

On the S3 landing page select the newly created bucket and then click on the Permissions tab. Select the Access Control List sub-tab, then select the Public Access option. When the popup appears, select the List Objects option only. This will make the bucket publicly accessible, so it's important that we limit access to read-only as shown in the screenshot below.
Configure bucket with public read access

Configuring the Security Policy

Click the Bucket Policy sub-tab and click the Policy Generator link. When the policy generator opens, select S3 Bucket Policy as the policy type and Allow from the Effect drop-down.
In the Principal field, enter the ARN (Amazon Resource Name) of the user account you want to grant access to. An ARN is a globally unique identifier that represents a specific AWS resource. You can get the user ARN from the Users tab in IAM (Identity and Access Management).
Next, select S3 as the AWS service and select GetObject and PutObject from the Actions list. Finally, enter the ARN of the S3 bucket you wish to apply the policy to (the ARN of the bucket we created earlier). Your policy generator screen should look something like the screenshot below.

Define bucket policy using policy generator

Click Generate Policy to see a JSON representation of the policy you just created. Copy and paste the policy into the Bucket Policy editor as shown in the screenshot below.

JSON representation of bucket policy

This policy grants an AWS user (the Principal, identified by ARN) permission to read objects from and upload objects to the specified S3 bucket (the Resource, also identified by ARN). Note that the Resource ARN ends with /*, so the policy applies to everything inside the bucket. If your client will also delete objects directly, you'll likely need to include the s3:DeleteObject action as well.
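The generated policy will look something like the following, where the Principal and Resource ARNs are placeholders for your own user and bucket ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/your-user"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```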

CORS Configuration

In order to access the bucket from the Angular app we'll need to enable CORS. Click the CORS Configuration sub-tab and enter the configuration shown below. This CORS policy allows access from any origin and with any request headers. That's fine for a simple demo app, but if you're building an app for production you'll need to be more restrictive.

Bucket CORS configuration
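A permissive configuration along these lines would work for the demo (shown in the XML format used by the S3 console at the time of writing):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```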

Creating a DynamoDB Table

Now that the S3 bucket has been created and configured, we're going to create a DynamoDB table to store the image details. Note that we'll use DynamoDB to store data about the uploaded images, not the actual images. As discussed already, the actual images are stored in S3.

On the DynamoDB landing page, click Create Table and you should be presented with a view similar to the one below.

Create DynamoDB table for Image Data

Enter the table name ImageDetails and s3Url as the primary key. If you want to run the sample code without modifying it, it's important that you use these exact values. You can of course change them, but you'll need to update the Lambda code accordingly. Click Create and you should see the newly created ImageDetails table listed as follows.

DynamoDB image table

Configuring Eclipse for Lambda Development

Amazon has put a lot of effort into building a toolkit that lets developers integrate with common AWS services directly from their IDE. This post covers Lambda development from an Eclipse perspective, but a similar toolkit is available for IntelliJ if that's your preference.
To download and install the Eclipse AWS toolkit, follow this guide. Once the toolkit is installed, open Preferences > AWS Toolkit and enter the Access Key ID and Secret Access Key credentials for the AWS account you want to use.

Configure AWS credentials in Eclipse

Creating a Lambda Project

This section describes how to create a new Lambda function from scratch in Eclipse. If you prefer, you can skip this step by simply importing the Lambda functions from the source code provided.
 
Creating a new Lambda project is straightforward thanks to the AWS toolkit. Click File -> New -> AWS Lambda Java Project and you'll be presented with the following.

Create AWS lambda java project using AWS toolkit

Enter a project name, group Id and artifact Id. Next specify the package and class name of the main handler function. Finally, select S3Event as the input type so that AWS knows what type of argument the Lambda function should receive. In this instance we want the Lambda function to be invoked as the result of an S3 event. Click Finish to create the new project which should include the LambdaFunctionHandler class we specified above.

A default implementation of the Lambda function handler is provided, which provides a nice template you can build upon. The class implements the RequestHandler interface, which defines the request and response type of the function. The S3Event request type contains metadata describing the S3 event that occurred (file upload or delete for example), while the response type indicates that a simple String value should be returned.

Default lambda function generated by AWS toolkit in Eclipse

The handleRequest method contains the main function logic and is the function entry point at runtime. The sample implementation reads the key and bucket name from the S3Event object, then uses the S3 client to call back into S3 to retrieve the object itself. The content type is taken from the retrieved object and returned to the client.

Creating the S3 Save & Delete Lambda Function

We are going to use the Lambda template above to create a Lambda function that responds to file creation and file delete events on S3. When a new file is added to the S3 bucket the function will
  • Use the S3 client (from the AWS SDK) to retrieve the object that has just been added to S3. The key and bucket values from the S3Event object will be used to identify the object we want to retrieve. 
  • When we retrieve the object from S3, we'll get some metadata from the object and persist it in DynamoDB. We'll persist the bucket name, image name, content type, file size, last modified timestamp and S3 URL.
 When a file is deleted from the S3 bucket the function will
  • Use the key and bucket values from the S3Event object to delete the associated metadata from DynamoDB.  

Creating an S3 and DynamoDB Client

The AWS SDK provides a client API for interacting with each of the core AWS services. The S3 and DynamoDB clients are created using their respective builder objects as shown below. There are a range of configuration options available but for simplicity we'll just specify the region we want to use. I've set the region as a constant, which is fine for this demo. If you're building an app for production you should set the region via an environment variable and avoid tightly coupling your Lambda function to one region.

public class LambdaFunctionHandler implements RequestHandler<S3Event, Void> {

    private static final String DYNAMODB_TABLE_NAME = "ImageDetails";
    private static final Regions REGION = Regions.EU_WEST_1;

    private final AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                                                           .withRegion(REGION)
                                                           .build();

    private final AmazonDynamoDB dynamoDbClient = AmazonDynamoDBClientBuilder.standard()
                                                                             .withRegion(REGION)
                                                                             .build();

Handling the S3 Event 

The handleRequest method is the Lambda entry point and is called by AWS when specific S3 events occur. The Lambda function will be triggered when a new file is added to the S3 bucket, or an existing file is deleted. When either event occurs, AWS will invoke this Lambda, passing in event metadata via the S3Event argument. The Void return type indicates that we are not expected to return a value from this function. Note the use of Void instead of the lowercase keyword void; this is because the RequestHandler interface being implemented requires both a request and a response type.

Line 7 identifies the type of event being processed by looking at the configurationId of the first event record and mapping it to an EnumEventName value. The value passed in configurationId will be either ItemAddedEvent or ItemDeletedEvent and maps directly to the values in EnumEventName. These event names will be configured on S3 after the Lambda function is deployed.

Lines 10 and 11 get the S3 bucket and key associated with the event. These will be used to either add or remove the file metadata from DynamoDB. Lines 15 to 23 check the event type received and delegate to the appropriate method to process the event. We'll look at the handleCreateItemEvent and handleDeleteItemEvent methods in detail below.
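The EnumEventName type referenced above isn't listed in the post. Based on how it's used, a minimal definition might look like this (the actual enum in the sample source may differ):

```java
/* Sketch of the EnumEventName type; its constants must match the S3
   notification names exactly, because the handler calls
   EnumEventName.valueOf() on the event's configurationId. */
public enum EnumEventName {
    ItemAddedEvent,
    ItemDeletedEvent
}
```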

@Override
public Void handleRequest(S3Event event, Context context) {
        
    context.getLogger().log("Received S3Event: " + event.toJson());
     
    /* use name of event to construct EnumEventName */
    EnumEventName eventName = EnumEventName.valueOf(event.getRecords().get(0).getS3().getConfigurationId());     
     
    /* Get S3 bucket and key from the supplied event */
    String bucket = event.getRecords().get(0).getS3().getBucket().getName();
    String key = event.getRecords().get(0).getS3().getObject().getKey();
        
    try {
         
      if(eventName.equals(EnumEventName.ItemAddedEvent)){
          
         context.getLogger().log(String.format("Processing ItemAdded Event for bucket[%s] and key[%s]", bucket, key));
         handleCreateItemEvent(context, bucket, key);
      }         
      else if(eventName.equals(EnumEventName.ItemDeletedEvent)){
          
         context.getLogger().log(String.format("Processing ItemDeleted Event for bucket[%s] and key[%s]", bucket, key));
         handleDeleteItemEvent(context, bucket, key);
       }      
       else{
         throw new RuntimeException("Unable to process unexpected event type");
       }
    } catch (Exception ex) {
            
       context.getLogger().log("Error occurred processing request");
       throw ex;
    }
        
    return null;
}

Handling the Create Item Event

The handleCreateItemEvent method is responsible for saving metadata to DynamoDB, when a new image is added to the S3 bucket. On line 4 the S3 client is used to retrieve the newly created object from S3 using the supplied bucket name and key. Lines 7 to 11 extract details about the new file and use them to create a new ImageData object. Lines 19 to 26 create a PutItemRequest that defines the data we want to save to DynamoDB and the target table. The request object is populated using the image data taken from the S3 object. The PutItemRequest represents the key/values that will be saved in the ImageDetails table. On line 29 the DynamoDB client is used to save the data.

private void handleCreateItemEvent(Context context, String bucket, String key){
     
    /* call S3 to retrieve object that triggered event */
    S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucket, key));
        
    /* get required file data from S3Object */        
    String name = s3Object.getKey();
    String contentType = s3Object.getObjectMetadata().getContentType();            
    String s3Uri = s3Object.getObjectContent().getHttpRequest().getURI().toString();
    Long sizeBytes = (Long)s3Object.getObjectMetadata().getRawMetadataValue("Content-Length");
    String lastModified = formatDate((Date)s3Object.getObjectMetadata().getRawMetadataValue("Last-Modified"));
        
    /* build up ImageData object to encapsulate data we want to save to dynamo */
    ImageData imageData = new ImageData(bucket, name, contentType, s3Uri, sizeBytes, lastModified);
        
    context.getLogger().log(imageData.toString());
        
    /* create request object for save to Dynamo */
    PutItemRequest putItemRequest = new PutItemRequest();
    putItemRequest.setTableName(DYNAMODB_TABLE_NAME);            
    putItemRequest.addItemEntry("bucket",new AttributeValue(bucket));
    putItemRequest.addItemEntry("name", new AttributeValue(imageData.getName()));
    putItemRequest.addItemEntry("s3Url", new AttributeValue(imageData.getS3Uri()));
    putItemRequest.addItemEntry("contentType", new AttributeValue(imageData.getContentType()));
    putItemRequest.addItemEntry("size", new AttributeValue(String.valueOf(imageData.getSizeBytes())));
    putItemRequest.addItemEntry("lastModified", new AttributeValue(String.valueOf(imageData.getLastModified())));
        
    /* save data to DynamoDB */
    PutItemResult putItemResult = dynamoDbClient.putItem(putItemRequest);
        
    context.getLogger().log(putItemResult.toString());
}
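The formatDate helper called above isn't shown in the post. A plausible implementation, assuming an ISO-8601 style UTC output (the actual pattern in the sample source may differ):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateFormatter {

    /* Hypothetical version of the formatDate helper used by
       handleCreateItemEvent; formats the Last-Modified date as an
       ISO-8601 style UTC timestamp. */
    public static String formatDate(Date date) {
        SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
        return formatter.format(date);
    }

    public static void main(String[] args) {
        System.out.println(formatDate(new Date(0L))); // 1970-01-01T00:00:00Z
    }
}
```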

Handling the Delete Item Event

The handleDeleteItemEvent method is responsible for deleting data from the ImageDetails table when an image is deleted from the S3 bucket. The buildS3Url method is first called to construct the URL associated with the item that was removed from S3. A DeleteItemRequest is then created, specifying the target table and the S3 URL of the item we want to delete. Finally, the DynamoDB client is used to delete the specified item.

private void handleDeleteItemEvent(Context context, String bucket, String key){
    
    String s3Url = buildS3Url(bucket, key);
     
    DeleteItemRequest deleteItemRequest = new DeleteItemRequest();
    deleteItemRequest.setTableName(DYNAMODB_TABLE_NAME);
    deleteItemRequest.addKeyEntry("s3Url", new AttributeValue(s3Url));
  
    /* delete from DynamoDB */
    DeleteItemResult deleteItemResult = dynamoDbClient.deleteItem(deleteItemRequest);
     
    context.getLogger().log(deleteItemResult.toString());
}
    
    
private String buildS3Url(String bucket, String key){

    return new StringBuilder()
             .append("https://")
             .append(bucket)
             .append(".s3-")
             .append(REGION.getName())
             .append(".amazonaws.com/")
             .append(key)
             .toString();
}

Creating the Retrieve Image Details Lambda Function

Now that the S3 Save & Delete Lambda function is written, we can move on to the Retrieve Image Details function. This function will retrieve data from the ImageDetails table in DynamoDB and return it in JSON format. It will be invoked by the web app (via API Gateway) to retrieve all image details so that they can be displayed to the user.

The function implements the RequestHandler interface, which specifies the function parameter type Object and response type List<ImageData>. To keep things simple, when calling this function from the web app, we won't pass any query parameters. As a result we'll retrieve all image data. As shown below, the handler class defines the target table and region, as well as the DynamoDB client.

public class LambdaFunctionHandler implements RequestHandler<Object, List<ImageData>> {

    private static final String DYNAMODB_TABLE_NAME = "ImageDetails";
    private static final Regions REGION = Regions.EU_WEST_1;
    private final AmazonDynamoDB dynamoDbClient = AmazonDynamoDBClientBuilder.standard()
                                                                             .withRegion(REGION)
                                                                             .build();

The handleRequest method pulls image details from DynamoDB and returns a list of ImageData objects. On lines 6 to 8 we create a ScanRequest, specifying the target table and the maximum number of records to return. The DynamoDB client is used to perform the scan and retrieve the results.
The ScanResult getItems method returns a List<Map<String, AttributeValue>>. We iterate over the list of maps and create a new ImageData object for each Map<String, AttributeValue>. The individual values are extracted by calling the getValueByKey method with the appropriate attribute key. The results are collected into a list and returned to the client.

@Override
public List<ImageData> handleRequest(Object input, Context context) {
    context.getLogger().log("Input: " + input);
        
    /* create scan request object for retrieving results from Dynamo */        
    ScanRequest scanRequest = new ScanRequest()
                                  .withTableName(DYNAMODB_TABLE_NAME)
                                  .withLimit(100);
        
    ScanResult scanResult = dynamoDbClient.scan(scanRequest);
       
    context.getLogger().log("ScanResult: " + scanResult.toString());
        
    /* convert scan results to a List of images */
    List<ImageData> images = scanResult.getItems()
                                       .stream()
                                       .map(map -> new ImageData()
                                           .setBucket(getValueByKey(map, "bucket"))
                                           .setName(getValueByKey(map, "name"))
                                           .setS3Uri(getValueByKey(map, "s3Url"))
                                           .setContentType(getValueByKey(map, "contentType"))
                                           .setSizeBytes(Long.valueOf(getValueByKey(map, "size")))
                                           .setLastModified(getValueByKey(map, "lastModified")))
                                       .collect(Collectors.toList());
        
    context.getLogger().log("Returning Images: " + images);
    return images;
}

    
private String getValueByKey(Map<String, AttributeValue> map, String key){
     
    AttributeValue attrValue = map.get(key);
    return attrValue.getS();
}

The List<ImageData> response is automatically serialized to JSON before being returned to the client. In order for serialization to happen automatically, the response object needs to have appropriate setters and getters defined. The ImageData model is defined as follows.

import lombok.ToString;

@ToString
public class ImageData {

    private String bucket;
    private String name;
    private String contentType;
    private String s3Uri;
    private Long sizeBytes;
    private String lastModified;

    public String getBucket() {
        return bucket;
    }

    public ImageData setBucket(String bucket) {
        this.bucket = bucket;
        return this;
    }

    public String getName() {
        return name;
    }

    public ImageData setName(String name) {
        this.name = name;
        return this;
    }

    public String getContentType() {
        return contentType;
    }

    public ImageData setContentType(String contentType) {
        this.contentType = contentType;
        return this;
    }

    public String getS3Uri() {
        return s3Uri;
    }

    public ImageData setS3Uri(String s3Uri) {
        this.s3Uri = s3Uri;
        return this;
    }

    public Long getSizeBytes() {
        return sizeBytes;
    }

    public ImageData setSizeBytes(Long sizeBytes) {
        this.sizeBytes = sizeBytes;
        return this;
    }

    public String getLastModified() {
        return lastModified;
    }

    public ImageData setLastModified(String lastModified) {
        this.lastModified = lastModified;
        return this;
    }
}
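With these getters in place, a serialized response for a single uploaded image might look something like this (the values shown are illustrative only):

```json
[
  {
    "bucket": "my-image-bucket",
    "name": "cat.png",
    "contentType": "image/png",
    "s3Uri": "https://my-image-bucket.s3-eu-west-1.amazonaws.com/cat.png",
    "sizeBytes": 24567,
    "lastModified": "2018-01-15T10:22:41Z"
  }
]
```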

Defining Policies & Roles for S3 Save & Delete Function 

Before deploying the Lambda functions we need to define two IAM roles so that each function has the necessary permissions. The first will allow the Save & Delete Lambda function to save data to and delete data from the DynamoDB table. The second will allow the Retrieve Image Details function to read from the ImageDetails table.

Creating an Add and Delete Policy

In order for the Save & Delete function to be able to interact with S3 and DynamoDB, we will need to create a policy with read permissions on the S3 bucket and read/write permissions on the ImageDetails table. To create a new Policy, go to IAM and select Policies > New Policy. In the Services dropdown, select DynamoDB, then under actions select Scan (read), PutItem and DeleteItem. Under Resources select Specific and enter the ARN of the ImageDetails table you created earlier. This will limit permissions specifically to this table. See the screenshot below.

Create policy with read/write permissions on ImageDetails table

Click Review Policy and you should see a JSON representation of the policy you just created.

JSON representation of policy

Sometimes it's handier to define or update a policy via the JSON editor. We'll update the JSON policy definition above to allow read access to the S3 bucket. Simply add a new Statement definition as shown in the screenshot below.

Update policy JSON to allow read access to S3

This Statement tells AWS to allow holders of this policy to retrieve objects (GetObject) from any S3 bucket. If we wanted to lock this down a little more, we could remove the wildcard from the ARN in the Resource attribute and specify the exact ARN of the bucket we want to access.
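The additional Statement will look something like the following (the Sid value is arbitrary):

```json
{
  "Sid": "AllowS3Read",
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::*"
}
```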

Creating a Role for the Add & Delete Image Details Function

To create a new Role, go to IAM and select Roles. Select AWS Service as the type of trusted entity, then select Lambda Function and click Next.

Create a new role for Lambda function

Next we need to specify the policies associated with this role. Search for and select the AWSLambdaBasicExecutionRole so that the Lambda function has basic execution permissions and can write logs to CloudWatch. Note that this is an existing AWS-managed policy, not one you have to create yourself.

Select AWSLambdaBasicExecutionRole to enable access to CloudWatch.

Select the Dynamo_Image_Details_Add_Delete policy you created earlier. This will allow the Lambda function to access the ImageDetails table in DynamoDB.

Select the Dynamo_Image_Details_Add_Delete policy

Finally specify the role name SaveDeleteImageDetailsRole and add a brief description. You should see both selected policies in the Policies section.

Review and create role

Defining Policies & Roles for Retrieve Image Details Function 

The Retrieve Image Details function requires a single role to allow it to retrieve data from the ImageDetails table. Before creating the role we need to create the policy that it will use.

Creating a Scan Image Details Policy

The scan image details policy will permit the scan operation on the ImageDetails table. In the IAM menu create a new policy and select the JSON tab. Enter the following JSON to permit the scan operation to be called on the ImageDetails table. Note that you'll need to specify the ARN of the ImageDetails table you created, not the ARN below.

Scan ImageDetails policy
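The policy JSON will look something like the following; the account ID and region in the Resource ARN below are placeholders for the ARN of your own ImageDetails table:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowImageDetailsScan",
      "Effect": "Allow",
      "Action": "dynamodb:Scan",
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/ImageDetails"
    }
  ]
}
```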

Click Review Policy and enter the name Scan_Image_Details_Policy and a short description. In the summary section you should see the read permission specified for DynamoDB.  

Review Scan Image Details Policy

Creating a Role for the Retrieve Image Details Function

From the IAM menu create a new role and add the AWSLambdaBasicExecutionRole (AWS managed) and the Scan_Image_Details_Policy created earlier. The approach is the same as detailed for the SaveDeleteImageDetailsRole you created earlier.

Create retrieve image details role

Uploading a Lambda Function to AWS

The AWS toolkit makes it straightforward to upload a Lambda function from Eclipse. Right click on the project and select Amazon Web Services > Upload Function to AWS Lambda. When the dialog is displayed, enter the function name SaveAndDeleteImageDetailsFunction and a short description. Next select the SaveDeleteImageDetailsRole from the IAM role drop-down. This is the role we created earlier and will allow the function to access S3 and DynamoDB.

Upload Lambda Function - Specify Role

When a Lambda function is uploaded, a zip file containing the compiled classes and any JAR dependencies is uploaded to an S3 bucket. The S3 Bucket for Function Code section in the modal above allows you to either specify an existing bucket or create a new one. To create a new bucket click Create, enter a bucket name and click OK.

Upload Lambda Function - Create S3 bucket for uploaded artefact

Click Finish to start uploading the Lambda function to S3. You should see a dialog similar to the one below.
Uploading Lambda Function to S3

Open the AWS console and go to the Lambda landing page. You should see the function listed as shown below.

Lambda Function Uploaded

Now that the S3 Save & Delete function has been uploaded, follow the same steps to upload the Retrieve Image Details function. You can create a new S3 bucket to store the function code or use the one you created for the S3 Save & Delete function. Remember to specify the Retrieve Image Details role before uploading.

Configuring S3 Trigger for Save & Delete Function

The next step is to define the triggers that will invoke the Lambda function when an item is added to or deleted from the S3 bucket. On the S3 landing page select the S3 bucket you created earlier and click the Properties tab. In the Advanced Settings section click Events.

S3 advanced settings

Click Add Notification and enter the name ItemAddedEvent. It's important that you use this exact name, as it will be passed to the Lambda function as part of the S3 event metadata and used to construct the EnumEventName at runtime. In the Events section select ObjectCreate (All) so that this event fires when an object is added to the bucket. We won't specify a Prefix or Suffix option, but these can be useful for applying fine-grained criteria to the types of files that trigger events. In the Send To drop-down select Lambda Function, then select the SaveAndDeleteImageDetailsFunction from the Lambda drop-down. The ItemAddedEvent trigger should look like the following.

S3 event trigger - ItemAddedEvent

Creating an event trigger for object removal is very similar, except this time you should use the name ItemDeletedEvent. Again, it's important to use this exact name as it maps to one of the two values in the function's EnumEventName. In the Events section select ObjectDelete (All) so that this trigger fires when files are removed from the bucket. The remaining configuration is the same as the ItemAddedEvent trigger and should look like the following.

S3 event trigger - ItemDeletedEvent

Testing the Save/Delete Lambda Function 

If you've followed the steps closely, everything should now be in place to test the Lambda function.
Go to the S3 landing page and select the bucket you created earlier. To trigger the ItemAddedEvent you created above, simply upload a file.

S3 File Upload

When the file is uploaded, AWS will fire the ItemAddedEvent trigger and invoke the SaveAndDeleteImageDetails function. To check that the function executed successfully, open the Lambda function and go to the Monitoring tab. This view provides metrics for the Lambda function, including the total number of invocations over a specified period, the duration of those invocations in milliseconds, successful invocations, failed invocations, request throttling and so on. For this demo we're only interested in invocation count and invocation errors. If the function ran as a result of the trigger, you should see it register in the invocation count graph as shown below. If the function doesn't complete successfully, it will register on the invocation errors graph. As the graph below shows no invocation errors, it looks like the function executed successfully.

Lambda Function Monitoring

Clicking Jump to Logs will take you to CloudWatch, where you can view log output. The highlighted section below shows the log output for a single function call.

CloudWatch Logs - Lambda Function Invocation

The second log entry in the highlighted section contains the S3 event metadata used to invoke the function. Clicking this entry will open the full JSON message, which can be very useful when troubleshooting failed invocations. A screenshot of the event metadata for this call is shown below.

CloudWatch Logs - S3 event metadata
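The essential shape of that event metadata can be sketched as follows. The values here are illustrative stand-ins, but the structure matches the standard S3 notification format: each record carries the bucket, the object key and the configurationId, which is the notification name (ItemAddedEvent or ItemDeletedEvent) that the function maps to its enum.

```python
# A trimmed, illustrative version of the S3 event metadata - real events
# contain many more fields (region, timestamps, requester details, etc.).
sample_event = {
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "configurationId": "ItemAddedEvent",
                "bucket": {"name": "my-image-bucket"},
                "object": {"key": "photo.jpg", "size": 1024},
            },
        }
    ]
}

def extract_details(event):
    """Pull out the fields the save/delete function cares about."""
    s3 = event["Records"][0]["s3"]
    return s3["configurationId"], s3["bucket"]["name"], s3["object"]["key"]

print(extract_details(sample_event))
# ('ItemAddedEvent', 'my-image-bucket', 'photo.jpg')
```

The post's Java function performs the equivalent extraction when deciding whether to save or delete the image details.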

Next we'll check DynamoDB to see if the image details were saved. Open DynamoDB and select the ImageDetails table. You should see an entry for the file you uploaded to S3, similar to the one below.

DynamoDB - Image Details Saved

Testing the Retrieve Image Details Lambda Function 

At this point we know that the image details are being saved to DynamoDB when a file is uploaded to the S3 bucket. The next step is to check that we can retrieve those details using the Retrieve Image Details function. Go to the Lambda menu and select the Retrieve Image Details function. Beside the Test button click Configure Test Event and create a new test event with an empty JSON string. Click Test to run the function and you should see a JSON response with image details from DynamoDB.

Retrieve Image Details Test
The screenshot above shows a JSON response for a single row retrieved from the ImageDetails table.
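The behaviour being tested can be sketched like this. The post's function is written in Java; this is a Python stand-in, with a stub in place of the real DynamoDB ImageDetails table, just to show why an empty test event is enough: the handler ignores the event body and simply scans the table.

```python
import json

def retrieve_image_details(event, table):
    # The test event is an empty JSON object - the handler ignores its
    # contents and returns everything in the ImageDetails table.
    items = table.scan()["Items"]
    return {"statusCode": 200, "body": json.dumps(items)}

# Stub standing in for the DynamoDB table, so the response shape can be
# seen without an AWS connection. Field names here are illustrative.
class StubTable:
    def scan(self):
        return {"Items": [{"fileName": "photo.jpg", "fileSize": "1024"}]}

response = retrieve_image_details({}, StubTable())
print(response["body"])
# [{"fileName": "photo.jpg", "fileSize": "1024"}]
```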

Creating an API with API Gateway

API Gateway is an AWS service that allows you to build APIs that act as a 'front door' to other services. We're going to use API Gateway to create an API that acts as a bridge to the Retrieve Image Details Lambda function. The API will expose a single endpoint that can be called by issuing an HTTP GET. The endpoint will make a downstream call to the Retrieve Image Details Lambda function and return the image details JSON.
As well as request routing, API Gateway can do many other things, including payload validation, payload transformation, authentication and request throttling. In this instance we'll keep things simple and use it as a straightforward bridge to the back-end Lambda function.

In the AWS console navigate to the API Gateway home page. Select New API, enter a sensible name and description, and keep the default Endpoint Type. Click Create API.

API Gateway - create new API

Configure Lambda Function Call

Next click Actions -> Create Method and select GET from the drop down. You should be presented with a configuration screen similar to the one below.
For Integration Type select Lambda Function and leave the Proxy Integration option deselected. If you select the region in which you've deployed the Retrieve Image Details function, the function should become available as an option in the Lambda Function field. Finally, leave the Default Timeout option selected and click Save.

API Gateway - configure GET resource

You will be prompted with a message telling you that you are about to grant API Gateway permission to invoke the Lambda function. Click OK to accept.

API Gateway - grant permission to invoke Lambda function

Enabling CORS on the API

CORS (Cross-Origin Resource Sharing) needs to be enabled so that an app (in this instance the Angular app) can call the API from a domain that is different from the API's own domain. Before calling the API, the browser, on behalf of the Angular app, will issue an HTTP OPTIONS request, also known as a pre-flight request. API Gateway will respond to the OPTIONS request with a list of HTTP methods that are allowed to access the resource. We'll be calling the API with an HTTP GET, so we need to make sure that GET requests are supported.
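The browser's pre-flight check can be mimicked with a few lines of Python. The header values here are illustrative examples of what API Gateway returns once CORS is enabled; the exact values depend on your configuration.

```python
# Illustrative pre-flight response headers, similar to what API Gateway
# sends back for an OPTIONS request after CORS has been enabled.
preflight_headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
}

def preflight_allows(headers, method):
    """Mimic the browser's check: is the method listed in Allow-Methods?"""
    allowed = headers.get("Access-Control-Allow-Methods", "")
    return method.upper() in [m.strip() for m in allowed.split(",")]

print(preflight_allows(preflight_headers, "GET"))   # True
print(preflight_allows(preflight_headers, "POST"))  # False
```

If GET were missing from Access-Control-Allow-Methods, the browser would block the Angular app's call before it ever reached the API.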

Enabling CORS is very straightforward. Click Actions -> Enable CORS and you should see the following options.

API Gateway - enable CORS

The GET method is selected by default, so we can simply click Enable CORS and replace existing CORS headers to accept the defaults. The following confirmation modal will be displayed, describing the access control headers to be added.

API Gateway - CORS confirmation

Deploying the API

To deploy the API go to Actions -> Deploy API. In the Deployment Stage drop-down select New Stage and then enter a name. The stage helps you differentiate between different versions of your API, for example dev, test and prod. Enter dev and a short description for the stage and deployment as shown below.

API Gateway - Deploy API
Click Deploy and when the API is deployed and ready to use you should see the API URL displayed as follows.

API Gateway - API URL

Testing the API

You can test the API with a simple GET request. The screenshot below shows an API call using cURL. As you can see, the image details JSON is returned as expected.

API Gateway - test
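If you'd rather test from code than cURL, the invoke URL follows API Gateway's standard format, so it's easy to construct and call. The API id, region and stage below are hypothetical placeholders; substitute the URL shown after your own deployment.

```python
from urllib.request import urlopen  # standard library, no extra dependencies
import json

def invoke_url(api_id, region, stage):
    # API Gateway invoke URLs follow this standard format.
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}"

# Hypothetical values for illustration only.
url = invoke_url("abc123defg", "eu-west-1", "dev")
print(url)
# https://abc123defg.execute-api.eu-west-1.amazonaws.com/dev

# Against a live deployment you could then fetch the image details:
# with urlopen(url) as resp:
#     image_details = json.loads(resp.read())
```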

Angular App

The final step is to tie the various components together with a simple web app that allows users to upload images and delete images from an S3 bucket. When an image is uploaded it will trigger a Lambda call, which will persist image details to DynamoDB. The app will also periodically call the Image Details API to retrieve the latest image details from DynamoDB.

Running the Angular App

Don't worry, you don't have to know any Angular to get the app going. The full source is available on GitHub, so it's just a matter of downloading the source and running the app. To run the Angular app you'll need Node and the Angular CLI installed locally. I won't cover this here, as the steps are detailed in lots of online resources. If you're looking for a decent guide, it's hard to beat the Angular starter guide here. Once you have the Angular CLI installed, you can run the app with the npm start command. This uses the Angular CLI to build the app and deploy it to a local server at localhost:4200/.

Angular App - running on command line
If the app starts successfully you should see the landing screen at http://localhost:4200/home.

Landing Screen

The landing screen captures some important configuration values, such as AWS credentials, the API Gateway URL, the S3 bucket name and the AWS region. These values are captured in the UI so that you don't have to make any code changes to get the app running.

Angular App - landing screen

You should already have the access key and secret key on your local machine, as you would have needed them to upload the Lambda functions from Eclipse. The API Gateway URL is the URL endpoint that was provided after we created the Image Details API. For bucket name, enter the bucket you created at the beginning of this post. Finally, the AWS Region should be the region associated with the bucket (you can see this on the S3 home screen).

Image Upload & Details Screen

When you click Get Started you'll see the screen below. Here you can upload a new image, view details of images that have already been uploaded and delete images. The image details table is refreshed every few seconds, with a call to the API Gateway. When a new file is uploaded you should see a new entry appear in the Image Details table within a few seconds. Clicking the delete icon will send a delete request to S3, causing the image details in DynamoDB to be deleted. The item will be removed from the image details table within a few seconds.

Angular App - image upload & details screen

Troubleshooting the Angular App

If the correct config data is supplied, the app should work out of the box. However, it's easy to miss something along the way or supply the wrong configuration. If the app doesn't work as expected, open the browser debug tools (great on Chrome) and have a look at the Network tab. You'll be able to inspect any failed calls to S3 or API Gateway.

Wrapping Up

If you followed along closely you should now have a serverless app running that uses Angular, Lambda, S3, DynamoDB and API Gateway. There is quite a bit of setup required in AWS, so if you haven't been able to get the app running for whatever reason, leave a question in the comments below and I'll try to help. Don't forget the full source code is available on GitHub.
