I created the following Lambda function and it's giving me the following error. I am new to Python and I don't think there's anything missing in the function. Can someone please help me make this function work? Thanks Execution result Answer Very simple. Rename handler to lambda_handler, or change your Lambda configuration to use the handler called handler rather than
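For context, a minimal sketch of what a correctly named entry point looks like: the Handler setting `lambda_function.lambda_handler` means file `lambda_function.py`, function `lambda_handler`, and the function name Lambda invokes must match that setting exactly. The `name` event field is purely illustrative.

```python
# Handler setting "lambda_function.lambda_handler" means: file
# lambda_function.py, function lambda_handler. The function name must
# match whatever the Handler setting says.
def lambda_handler(event, context):
    # event is the JSON payload from the trigger; "name" is just an
    # illustrative field for this sketch.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local smoke test (context is unused here):
print(lambda_handler({"name": "Lambda"}, None)["body"])
```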
Tag: aws-lambda
How to Connect to Informix DB from AWS Lambda Python
I am currently working on a project to connect to IBM Informix from the AWS Lambda Python environment. But so far I have not been able to find any packages or Lambda layers that allow me to connect to IBM Informix DB. Has anyone been able to do so? Or is there any Lambda layer which is something
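There is no pure-Python Informix driver, so whichever route you take (the IfxPy driver compiled against the Informix Client SDK, or an ODBC driver), the native libraries have to ship inside a Lambda layer. A hedged sketch, assuming IfxPy is present in the layer — the connection-string keywords and the `IfxPy.connect` call should be verified against the driver's documentation, and all the host/server/database values below are made up:

```python
def informix_conn_str(host, port, server, database, user, password):
    # Keyword names follow the Informix ODBC/CSDK convention; check them
    # against your driver's documentation.
    return (
        f"SERVER={server};DATABASE={database};HOST={host};SERVICE={port};"
        f"UID={user};PWD={password};PROTOCOL=onsoctcp;"
    )

def connect(conn_str):
    # IfxPy has to be compiled against the Informix Client SDK and shipped
    # in a Lambda layer together with its shared libraries; the import is
    # deferred so the string helper above stays usable without the driver.
    import IfxPy  # assumption: provided by the layer
    return IfxPy.connect(conn_str, "", "")
```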
Am I able to use AWS EventBridge (or an alternative) to re-run a Lambda after an hour if the previous run fails?
I have written a Lambda in Python that depends on external APIs which can occasionally go down. It is triggered once a day by EventBridge to gather yesterday's data and update a file in S3 at the same time every day. I was wondering how I would be able to re-run the Lambda, which includes a check as
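One pattern that fits this: on failure, have the Lambda create a one-off EventBridge Scheduler entry that re-invokes it an hour later. A sketch under assumptions — the schedule name, the ARNs, and the `ActionAfterCompletion="DELETE"` cleanup flag are illustrative and should be checked against the Scheduler API for your account setup:

```python
from datetime import datetime, timedelta, timezone

def retry_at_expression(delay_hours=1):
    # One-time EventBridge Scheduler expressions use at(yyyy-mm-ddThh:mm:ss).
    when = datetime.now(timezone.utc) + timedelta(hours=delay_hours)
    return "at({})".format(when.strftime("%Y-%m-%dT%H:%M:%S"))

def schedule_retry(lambda_arn, role_arn, payload):
    # Hypothetical wiring: creates a one-off schedule that re-invokes this
    # function in an hour. Requires scheduler:CreateSchedule permission on
    # the execution role; role_arn is the role Scheduler assumes to invoke.
    import boto3
    scheduler = boto3.client("scheduler")
    scheduler.create_schedule(
        Name="retry-fetch",                      # illustrative name
        ScheduleExpression=retry_at_expression(1),
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={"Arn": lambda_arn, "RoleArn": role_arn, "Input": payload},
        ActionAfterCompletion="DELETE",          # clean up after it fires
    )
```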
Fire-and-forget upload to S3 from a Lambda function
I have a lambda function where, after computation is finished, some calls are made to store metadata in S3 and DynamoDB. The S3 upload step is the biggest bottleneck in the function, so I’m wondering if there is a way to “fire-and-forget” these calls so I don’t have to wait for them before the function returns. Currently I’m running all
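True fire-and-forget is risky in Lambda: the execution environment is frozen as soon as the handler returns, so an in-flight upload can be suspended mid-transfer and lost. What you can safely do is overlap the S3 and DynamoDB writes so you only wait for the slowest one instead of both in sequence. A sketch with stand-in functions where the real boto3 calls would go:

```python
import concurrent.futures
import time

def put_s3(payload):          # stand-in for s3.put_object(...)
    time.sleep(0.2)
    return "s3-done"

def put_dynamo(payload):      # stand-in for table.put_item(...)
    time.sleep(0.1)
    return "ddb-done"

def handler(event, context=None):
    # Launch both writes concurrently, but still wait for both before
    # returning, because Lambda freezes the sandbox once the handler exits.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(put_s3, event), pool.submit(put_dynamo, event)]
        results = [f.result() for f in futures]
    return results

print(handler({}))  # total wait is roughly the slower call, not the sum
```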
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL
I am using boto3 to read many text files in S3 through a Lambda Python function. My code for the connection to S3 is below. About 30 text files are read successfully, but after that it fails with the error message below. Is there any way I can resolve this? Answer A Lambda in a VPC does not have a public IP and therefore can’t access the internet from
GoneException when calling post_to_connection on AWS lambda and API gateway
I want to send a message to a websocket client when it connects to the server on AWS lambda and API gateway. Currently, I use wscat as a client. Since the response ‘connected’ is not shown on the wscat console when I connect to the server, I added post_to_connection to send a message ‘hello world’ to the client. However, it
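A common cause of GoneException here: the connection is not usable until API Gateway finishes the `$connect` handshake, which only happens after your `$connect` handler returns 200 — so `post_to_connection` inside that handler targets a connection that does not fully exist yet. A sketch that returns from `$connect` first and posts from a separate route; the event shape follows the standard WebSocket `requestContext`, and the handler names are illustrative:

```python
import json

def management_endpoint(domain, stage):
    # The callback endpoint for post_to_connection is https://{domain}/{stage}
    return f"https://{domain}/{stage}"

def connect_handler(event, context=None):
    # $connect should simply return 200; posting to the connection from
    # here is what raises GoneException, because the handshake has not
    # completed yet.
    return {"statusCode": 200}

def default_handler(event, context=None):
    # Post back to the caller from any route *other* than $connect.
    import boto3  # deferred so the helpers above run without boto3
    ctx = event["requestContext"]
    client = boto3.client(
        "apigatewaymanagementapi",
        endpoint_url=management_endpoint(ctx["domainName"], ctx["stage"]),
    )
    client.post_to_connection(
        ConnectionId=ctx["connectionId"],
        Data=json.dumps({"message": "hello world"}).encode(),
    )
    return {"statusCode": 200}
```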
How to upload pandas, sqlalchemy package in lambda to avoid error “Unable to import module ‘lambda_function’: No module named ‘importlib_metadata’”?
I’m trying to upload a deployment package to my AWS lambda function following the article https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e. My final zip file is as follows: https://drive.google.com/file/d/1NLjvf_-Ks50E8z53DJezHtx7-ZRmwwBM/view but when I run my lambda function I get the error Unable to import module ‘lambda_function’: No module named ‘importlib_metadata’ My handler is named lambda_function.lambda_handler which is the file name and the function to run. I
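That error usually means a transitive dependency (here `importlib_metadata`) did not make it into the zip. One way to rebuild the package so pip resolves everything against Lambda-compatible manylinux wheels — the Python version and file names below are assumptions to adapt to your runtime:

```shell
# Rebuild the package so pip also pulls transitive dependencies
# (importlib_metadata among them) as Lambda-compatible manylinux wheels.
# Python version and file names are assumptions -- match your runtime.
mkdir -p package
pip install --target ./package \
    --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.9 \
    pandas sqlalchemy importlib_metadata
cp lambda_function.py package/
(cd package && zip -r ../deployment.zip .)
```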
Validating data in DynamoDB only works if the data is present
This is a rather easy and silly question, but I can’t seem to understand the problem at hand. I am trying to create a register page where a user can enter their email; if their email is not already present, the function will put the item into the database, and if it is, it will return “email is already present”. EDIT:- My problem is
S3 Object upload to a private bucket using a pre-signed URL result in Access denied
I’m learning AWS, and with my limited knowledge, am I right in saying that if I create pre-signed URLs to upload to and download from a bucket which is set to block all public access, it should work? I handle all my authentication and checks through API Gateway, so if a user is able to hit the endpoint
Why does my Lambda function write an empty csv file to S3?
I’m calling the YouTube API to download and store channel statistics in S3. I can write a csv file to my S3 bucket without any errors, but it’s empty. I have checked this thread Why the csv file in S3 is empty after loading from Lambda, but I’m not using a with block in to_csv_channel(). I’m currently running the script
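The usual cause of an empty object is handing S3 a buffer or file that hasn't been flushed or closed yet. Building the CSV fully in memory and passing the completed string to `put_object` avoids that; the sketch below uses the stdlib `csv` module, but the same pattern applies with `df.to_csv(buffer)` in pandas. The column names are made up:

```python
import csv
import io

def build_csv(rows, header):
    # Build the CSV fully in memory, then hand the *complete* string to
    # s3.put_object; uploading a file handle before it is flushed/closed
    # is the usual cause of an empty object in S3.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

body = build_csv([["chan1", 1000]], ["channel", "subscribers"])
# s3.put_object(Bucket="my-bucket", Key="stats.csv", Body=body.encode())
print(body)
```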