In my previous article, we used a server proxy between Amazon Web Services (AWS) S3 and a web front-end to upload files. This time, we will do it serverlessly, without needing to proxy the upload requests through a server. For this, we will use AWS Lambda and AWS S3 presigned URLs.


Presigned URLs grant direct, time-limited access to private S3 objects to anyone holding the URL, secured by the Identity and Access Management (IAM) permissions of the identity that generates the URL. We will use an AWS Lambda function to generate S3 presigned URLs, fronted by an AWS API Gateway.

 

The diagram below illustrates the following flows for uploading a file via an S3 presigned URL:

  1. Get the S3 presigned URL.
  2. Upload the file to S3 via a presigned URL.

 

[Screenshot: S3 presigned URL 1]

 

 

Generating S3 presigned URLs

To implement the Lambda function that generates presigned URLs, we chose the latest version of the Node.js runtime (v20) and the AWS JS SDK v3. Here is the function’s code:


import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Region of the bucket we upload to; if not set, the SDK resolves the
// region from the execution environment or shared configuration files.
const s3Configuration = {
  region: process.env.AWS_REGION_NAME,
};
const client = new S3Client(s3Configuration);

export const handler = async (event, context) => {
  // The S3 object key is passed by the caller as a query string parameter.
  const key = event.queryStringParameters.key;
  const command = new PutObjectCommand({ Bucket: process.env.BUCKET_NAME, Key: key });
  // expiresIn must be a number of seconds; environment variables are strings.
  const uploadURL = await getSignedUrl(client, command, {
    expiresIn: parseInt(process.env.URL_EXPIRATION_SECONDS, 10),
  });

  return {
    statusCode: 200,
    isBase64Encoded: false,
    headers: {
      "Access-Control-Allow-Origin": "*",
    },
    body: JSON.stringify({
      uploadURL: uploadURL,
      key: key,
    }),
  };
};

The code instantiates an S3Client object, specifying the region of the S3 bucket to which we want to upload objects. If a region is not specified, the SDK tries to pick it up from the Lambda function’s execution environment (the AWS_REGION variable) and then from shared configuration files.


Since our Lambda function will be invoked by the AWS API Gateway, the event object passed to the handler contains the queryStringParameters property, holding the request’s query parameters. In our case, we expect a “key” query parameter that corresponds to the S3 key under which the object will be stored in the bucket via the generated presigned URL.
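For illustration, a trimmed-down API Gateway (HTTP API) event containing only the fields relevant here might look like this; the route and key values are placeholders:

{
  "version": "2.0",
  "routeKey": "GET /generate-upload-url",
  "rawQueryString": "key=images/photo.jpg",
  "queryStringParameters": {
    "key": "images/photo.jpg"
  }
}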


Next, the handler function configures a PutObjectCommand for the bucket and object key in question, which is passed as an input parameter to the SDK’s getSignedUrl function, along with the S3Client object and a configuration object containing the URL expiration time in seconds. For security reasons, the expiration time should be as short as possible, to reduce the risk of unauthorised access to our resources.


Lastly, the handler function returns a JSON object that includes the following:

  • statusCode: the HTTP success status code.
  • isBase64Encoded: a flag that indicates whether the response body is base64-encoded. In this case, we don’t want it to be base64-encoded, since we are not transmitting binary data.
  • headers: additional headers to allow cross-origin resource sharing (CORS) for web apps that call our API Gateway endpoint, which, in turn, invokes this Lambda function.
  • body: a JSON string containing the generated presigned uploadURL and the object key.

The handler function must always return a Promise or use the callback function, otherwise we get an HTTP 502 error from the API Gateway, with the message “Internal server error”. In the API Gateway logs, we will also see the error: “Execution failed due to configuration error: Malformed Lambda proxy response”.

 

Note: the Lambda function handler file should have a “.mjs” extension so that it is treated as an ES module. Alternatively, a package.json file must be configured with “type”: “module”. This way, we can use the import statement to import other ES modules, e.g., the AWS S3 SDK modules. Otherwise, if the file has a “.js” extension and no “type”: “module” is configured in a package.json, we get the error “SyntaxError: Cannot use import statement outside a module” when importing modules with the ES import statement.
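For the second option, a minimal package.json would look like this:

{
  "type": "module"
}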

 

Environment variables configuration

For cleaner code, the Lambda function uses environment variables for the region name, the bucket name, and the URL expiration time in seconds. To add or edit environment variables for a Lambda function, open the function’s Configuration tab and select Environment variables in the side menu, as seen in the screenshot below:

 

[Screenshot: S3 presigned URL 2]

 

Added or edited environment variables become immediately available to the function’s Node.js process and can be read using Node’s process.env object, for example, process.env.BUCKET_NAME.


Note: it is advisable to manage common application properties centrally through services such as AWS Systems Manager (SSM) Parameter Store, especially for sensitive data such as credentials.
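As a minimal sketch of that approach, reading a parameter with the SDK v3 SSM client could look like this (the parameter name /my-app/bucket-name is a placeholder):

import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Fetch the parameter by name; WithDecryption applies to SecureString values.
const { Parameter } = await ssm.send(
  new GetParameterCommand({ Name: "/my-app/bucket-name", WithDecryption: true })
);
const bucketName = Parameter.Value;

Note that the function’s IAM role would also need permission for the ssm:GetParameter action.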

 

Permissions configuration

The generated presigned URLs inherit the permissions of the Lambda function’s IAM role. If that role does not have the necessary S3 write permissions, the issued presigned URLs will not be able to upload files to the S3 bucket in question on behalf of clients, and we will get an error similar to this one:

 

<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>ASDGssffscSDQ4POL</RequestId>
  <HostId>OLsT4/9tYWsdssyo0dxtFa7sdsdsKhdPqsdsd4L+9CmiKP2tFyGsdsL8Pr0E0rkDgzHsddsVjdsdsdwc=</HostId>
</Error>

 

Let’s follow cybersecurity’s Principle of Least Privilege (PoLP) and add an inline policy that only permits the PutObject S3 operation. Go to the Lambda page’s Configuration tab, select Permissions in the side menu, and then, under Execution role, click the role name’s link to open the IAM role’s page:

 

[Screenshot: S3 presigned URL 3]

 

On the Lambda function’s IAM role page, in the Permissions tab, press the Add permissions button and select Create inline policy:

 

[Screenshot: S3 presigned URL 4]

 

On the Create policy page, select JSON and add the following policy with the ARN of the S3 bucket:

 

[Screenshot: S3 presigned URL 5]
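For reference, a minimal policy along those lines could look like this, with a placeholder bucket name (note the /* suffix, since s3:PutObject applies to the objects inside the bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}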

 

Next, add the policy name and press the Create policy button:

 

[Screenshot: S3 presigned URL 6]

 

API Gateway configuration

To expose our Lambda function to front-end apps, we are going to use AWS API Gateway. One way to do this is through the Add trigger button located in our function’s Lambda diagram:

 

[Screenshot: S3 presigned URL 7]

 

In the Add trigger page, seen in the screenshot below, select API Gateway as the source, followed by “Create a new API” with HTTP API as the API type. HTTP APIs are more lightweight and lower-latency than REST APIs and are more than enough for our needs. For the sake of brevity and demonstration purposes only, let’s select “Open” as security, i.e., anyone can access the API endpoint. Note that, especially in a production environment, it is highly recommended to secure the API with, e.g., AWS Cognito User Pools or Lambda Authorizers.

 

[Screenshot: S3 presigned URL 8]
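As an illustrative sketch of the Lambda Authorizer option mentioned above (not used in this demo), an HTTP API authorizer using the simple response format only needs to return an isAuthorized flag; the header name and secret below are assumptions for the example:

// Minimal HTTP API Lambda authorizer (simple response format).
// Assumes a shared secret in the "authorization" header; purely illustrative,
// a real setup should validate a proper token (e.g., a JWT).
export const handler = async (event) => {
  const token = event.headers?.authorization;
  return { isAuthorized: token === process.env.API_SECRET };
};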

 

Once the API Gateway trigger is added, it is visible in the Lambda diagram and the respective endpoint is visible in the Configuration tab, under Triggers, as follows:

 

[Screenshot: S3 presigned URL 9]

 

In the next section, we will use this API endpoint URL to obtain S3 presigned URLs.

 

 

Uploading a file with an S3 presigned URL

To get the S3 presigned URL for uploading a file, we will use Postman to call the AWS API Gateway endpoint, which triggers the Lambda function created earlier.

 

In the next screenshot, we can see a GET request to the previously mentioned endpoint and its successful response. The JSON response body contains the uploadURL property, with the presigned URL as its value.

 

[Screenshot: S3 presigned URL 10]

 

Finally, we’ll create a PUT request in Postman and use the presigned URL that can be copied from the previous request’s response. In the Body tab of the request, we should select “binary” and choose a file from our local disk that we want to upload to S3 via the presigned URL, as illustrated in the screenshot below. After we press the Send button, we should see a successful response, with an empty body and HTTP status 200:

 

[Screenshot: S3 presigned URL 11]
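For completeness, the same two-step flow in a web front-end could look like the following minimal sketch, where the API endpoint URL is a placeholder and file comes from, e.g., an <input type="file"> element:

// 1. Ask our API for a presigned URL for the chosen object key.
const apiEndpoint = "https://abc123.execute-api.eu-west-1.amazonaws.com/default/my-function";
const response = await fetch(`${apiEndpoint}?key=${encodeURIComponent(file.name)}`);
const { uploadURL, key } = await response.json();

// 2. Upload the file's raw bytes directly to S3 via the presigned URL.
await fetch(uploadURL, { method: "PUT", body: file });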

 

Note: for files larger than 100 MB, it is recommended to use multipart uploads.

 

 

Conclusion

AWS S3 presigned URLs offer a more scalable, efficient, and straightforward way to upload files directly, compared to proxying uploads through a server. They can help lower operating costs by reducing the bandwidth and processing load on our servers and thus increase scalability, since the servers no longer need to deal with the file upload traffic.

 

At the same time, they lower file upload latency and allow large files to be uploaded by bypassing intermediary servers’ request payload limits, such as AWS API Gateway’s 10 MB request payload limit.

 

But, like everything else, AWS S3 presigned URLs have some drawbacks that should be considered when choosing them:

  • If a malicious user gets hold of a presigned URL, they can upload files to the bucket until the URL expires. Setting a short URL expiration time helps mitigate that risk, and it can be reduced further, e.g., by implementing mechanisms that limit the number of times a URL can be used.
  • Without a server in the upload path, we cannot easily add custom middleware or processing steps to the file upload flow, such as format validation or virus scanning.
  • There can also be reduced observability and control over the upload process. Monitoring and logging are limited to what S3 and CloudWatch provide, which may not be as comprehensive as what you could achieve with server-side processing. Keeping a detailed audit trail of file uploads can be more difficult since the upload traffic does not go through our servers.

 

  Read the article about Multipart Upload with Amazon S3 here.
