In this blog, we are going to use Terraform as an IaC tool for resource provisioning. Through Terraform, we are going to create the resources below and establish the connectivity between them. We are going to see how to use Terraform to create an AWS Lambda function and configure an AWS S3 trigger. The use case is to generate a thumbnail whenever an image is uploaded to the original S3 bucket. We will first write a Lambda function which will open the original image from a source bucket, create a thumbnail, and store it in a thumbnail bucket. Then we will configure an S3 trigger to invoke this Lambda function whenever an image is uploaded to the S3 bucket.

The prerequisites for this project are to have Terraform and the AWS CLI configured. I have already configured the CLI on my system using the "aws configure" command. I will create a new directory for the project, which we'll call "thumbnail_generation_lambda-main", and open it in Visual Studio Code. We will create the Lambda function in Python and put it in a directory called src.

We will start writing the code. First we need to define a Lambda handler, which will be the entry point for the Lambda function and takes two parameters, event and context. I will import the logging library and configure the logger. As we begin, we will log the event and the context. Next, we need to extract the bucket name and the key from the event. Whenever this Lambda function is invoked from an S3 trigger, the trigger passes an event object containing the bucket and the key, which is the name of the file being uploaded. The first item in the event's records holds the S3 bucket name and the object key.

Then we will define the destination bucket name and key. We will call the bucket "alok-thumbnail-image-bucket-0007", and for the destination file name we will just append "_thumbnail" to the original file name. For this I will import the os library and use os.path.splitext, passing the key, which returns the name and the extension separately. From these I will build the thumbnail key as the name, then "_thumbnail", then the extension; this will be the destination key.

Next, we will get access to S3, for which I need to import the boto3 library. We open the file in S3 by passing the bucket and key, then extract the body of the response and read it as a byte string. We will load this image into memory with BytesIO; for that we import it with "from io import BytesIO" and wrap the byte string. We then pass this BytesIO object to Image.open, which loads the image into memory as a Pillow image object. Just for information, we will log the size of this image before we actually compress it.

To generate the thumbnail, we call the img.thumbnail method with the preferred size; here I will use (500, 500). With the ANTIALIAS resampling option passed, the image is now compressed, and we will log the size again to see the final result after compression. Now we need to dump this image into the destination bucket. For that we write the image into a buffer and pass the buffer directly to the destination bucket: with buffer = BytesIO() we create a new BytesIO object and save the image to this buffer in its original format. Now I will upload it to the destination bucket, and we'll use the response data to validate whether the upload was successful or not.
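The event-parsing and key-naming steps described above can be sketched as follows. This is a minimal sketch assuming the standard S3 notification event shape; the helper names (`extract_bucket_and_key`, `build_thumbnail_key`) are illustrative, not from the original code:

```python
import os

def extract_bucket_and_key(event):
    # The S3 trigger delivers a list of Records; the first record
    # carries the source bucket name and the uploaded object's key.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return bucket, key

def build_thumbnail_key(key):
    # os.path.splitext("photo.jpg") -> ("photo", ".jpg");
    # append "_thumbnail" before the extension.
    name, ext = os.path.splitext(key)
    return f"{name}_thumbnail{ext}"
```

For example, an upload of `photo.jpg` would yield the destination key `photo_thumbnail.jpg`.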
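The in-memory thumbnail pipeline can be sketched like this. Note that `Image.ANTIALIAS` was removed in Pillow 10; `Image.thumbnail` already uses a high-quality resampling filter by default, so no explicit flag is needed. The function name `make_thumbnail` is illustrative; in the Lambda, `image_bytes` would come from `s3.get_object(...)["Body"].read()` and the returned buffer would be passed as the `Body` of `s3.put_object`:

```python
from io import BytesIO
from PIL import Image

def make_thumbnail(image_bytes, size=(500, 500)):
    # Wrap the raw byte string in BytesIO and open it as a Pillow image.
    img = Image.open(BytesIO(image_bytes))
    fmt = img.format or "PNG"  # remember the original format for saving
    print("size before:", img.size)
    # thumbnail() resizes in place, preserving the aspect ratio so that
    # neither dimension exceeds the requested size.
    img.thumbnail(size)
    print("size after:", img.size)
    # Save to an in-memory buffer instead of disk, so the buffer can be
    # handed straight to the destination bucket.
    buffer = BytesIO()
    img.save(buffer, format=fmt)
    buffer.seek(0)
    return buffer
```

Saving into a `BytesIO` buffer avoids writing to the Lambda's limited `/tmp` storage, and rewinding with `seek(0)` ensures the upload reads the buffer from the start.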