I am trying to write a Lambda function that tags EC2 instances as they go from the pending to the running state. However, I have a problem reading the CSV file that holds my EC2 instance tags. Currently, I have got to the point where the Lambda returns the following result, but I need a list of dictionaries, because the rest of…
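A minimal sketch of one way to get from the CSV to the list-of-dicts shape that ec2.create_tags() expects; the bucket name, key, and the "Key,Value" header row are assumptions, not details from the question:

```python
import csv
import io
import boto3

def load_tags(bucket="my-tags-bucket", key="tags.csv"):
    # Read the CSV straight out of S3 into memory.
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
    reader = csv.DictReader(io.StringIO(body.read().decode("utf-8")))
    # DictReader yields one dict per row keyed by the header, which is
    # exactly the [{'Key': ..., 'Value': ...}] shape create_tags wants.
    return list(reader)

# Usage, with instance_id coming from the pending-to-running event:
# boto3.client("ec2").create_tags(Resources=[instance_id], Tags=load_tags())
```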
AWS Glue job upsert from one DB table to another DB table
I am trying to create a pretty basic Glue job. I have two different AWS RDS MariaDB databases, with two similar tables (the field names differ). I would like to transform the data from table A so it fits table B's schema (this seems pretty trivial and is working), and then I would like to update all existing entries (on…
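Glue's JDBC sink has no native upsert, so one common workaround is to push the transformed rows through a MySQL driver with INSERT ... ON DUPLICATE KEY UPDATE. A rough sketch of that step, where the host, credentials, table, and column names are all placeholders:

```python
import pymysql

# Placeholder rows; in the real job these would come from the transformed
# Glue DynamicFrame, e.g. via dyf.toDF().collect() for small tables.
rows = [(1, "alice"), (2, "bob")]

# MariaDB upsert: insert new ids, update the name of existing ones.
sql = (
    "INSERT INTO target_table (id, name) VALUES (%s, %s) "
    "ON DUPLICATE KEY UPDATE name = VALUES(name)"
)

conn = pymysql.connect(host="db-b.example.com", user="glue_user",
                       password="secret", database="db_b")
with conn:
    with conn.cursor() as cur:
        cur.executemany(sql, rows)
    conn.commit()
```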
Unexpected indentation on return in Python
When I try to return, I get an error on the 2nd return, for both return signup_result and return login_result (https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportUndefinedVariable). Here is utils.py. I also tried tabbing 2 times to avoid the indentation error on return signup_result and return login_result, but I still got the same error: "Unexpected indentation" (Pylance). Answer: The cognito_login() function contains only one line of code, return login_result, because that…
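Without seeing utils.py, the usual cause of that Pylance message is a return that has drifted out of (or past) its function body. A minimal sketch of the expected shape, with placeholder bodies standing in for the real Cognito calls:

```python
def cognito_signup():
    signup_result = "ok"    # placeholder for the real signup call
    return signup_result    # indented one level under the def

def cognito_login():
    login_result = "ok"     # placeholder for the real login call
    return login_result     # a return at module level, or indented
                            # deeper than its block, triggers the error
```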
AWS – NoCredentialsError: Unable to locate credentials
I'm new to AWS and also a beginner in Python. I'm facing this kind of issue: "NoCredentialsError is an error encountered when using the Boto3 library to interface with Amazon Web Services (AWS). Specifically, this error is encountered when your AWS credentials are missing, invalid, or cannot be located by your Python script." And I did…
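One quick way to check what credentials boto3 actually resolves, assuming you are running locally with a ~/.aws/credentials file; the profile and region names here are examples:

```python
import boto3

# Build a session from an explicit profile instead of relying on the
# default lookup chain (env vars, ~/.aws/credentials, instance role).
session = boto3.Session(profile_name="default", region_name="us-east-1")

# get_caller_identity() is a cheap call that fails fast with
# NoCredentialsError if nothing was resolved.
print(session.client("sts").get_caller_identity()["Account"])
```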
Parsing JSON in AWS Lambda Python
For a personal project I'm trying to write an AWS Lambda in Python 3.9 that will delete a newly created user if the creator is not myself. For this, the logs in CloudWatch Logs will trigger my Lambda (via CloudTrail and EventBridge). Therefore, I will receive the JSON request as my event, but I have trouble parsing it…
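A sketch of the parsing step, assuming the EventBridge rule forwards the CloudTrail record under event["detail"]; the key names follow the CloudTrail CreateUser event shape, and the admin ARN check is a placeholder:

```python
import boto3

def lambda_handler(event, context):
    # EventBridge wraps the CloudTrail record in the "detail" field.
    detail = event["detail"]
    creator_arn = detail["userIdentity"].get("arn", "")
    new_user = detail["requestParameters"]["userName"]

    # Placeholder identity check: delete the user unless I created it.
    if "my-admin-user" not in creator_arn:
        boto3.client("iam").delete_user(UserName=new_user)
```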
Trying to use boto to copy to S3 unless the file exists
In my code below, fn2 is the local file and my_bucket_object.key is a list of files in my S3 bucket. I am looking at my local files, taking the latest one by creation date, and then looking at the bucket; I only want to copy the latest one there (this is working), but not if it already exists. What…
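A sketch of the missing existence check, with placeholder bucket and key names; head_object raises a 404 ClientError when the key is absent, which is the signal to go ahead with the copy:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def upload_if_missing(fn2, bucket="my-bucket", key="latest.csv"):
    try:
        s3.head_object(Bucket=bucket, Key=key)   # exists: do nothing
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            s3.upload_file(fn2, bucket, key)     # absent: copy it up
        else:
            raise
```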
How does a Python AWS Lambda interact specifically with the uploaded file?
I'm trying to do the following: when I upload a file to my S3 storage, the Lambda picks up this JSON file and converts it into a CSV file. How can I specify in the Lambda code which file it must pick? Here is an example of my code running locally: in this example, I provide the name of the file… but how can I manage that…
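The short answer is that the Lambda does not choose the file; the S3 event that triggered it carries the bucket and key. A sketch of the handler, assuming the standard s3:ObjectCreated notification shape:

```python
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # Keys arrive URL-encoded (spaces become '+'), hence unquote_plus.
    key = urllib.parse.unquote_plus(record["object"]["key"])

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    data = json.loads(body)   # the uploaded JSON, ready for CSV conversion
```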
JSON string from a ‘kind’ of JSON string that needs to be evaled
I have a string (str_json) that I'm getting from elsewhere. str_json is supposed to be JSON, but it actually contains some Python code, a number value, and single quotes, so json_data = json.loads(str_json) will fail. What I need is the JSON string of str_json. What I've attempted so far: … Answer: You can…
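If the "Python code" in the string is really just Python literal syntax (single quotes, bare numbers, True/None and so on), ast.literal_eval can parse it where json.loads cannot, and json.dumps then produces proper JSON. A sketch with a made-up stand-in for str_json:

```python
import ast
import json

str_json = "{'name': 'demo', 'count': 3}"   # stand-in for the real string

data = ast.literal_eval(str_json)   # literals only, no code execution
json_data = json.dumps(data)        # '{"name": "demo", "count": 3}'
```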
S3 notifications generating multiple events and how to handle them
There is this S3 notification behavior described here: "Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer." (and discussed here). I thought I could mitigate the duplicates a bit by deleting files I have already processed. The problem is, when a second…
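Since deliveries are at-least-once, deleting processed files cannot fully close the race; a more robust pattern is an idempotency record. A sketch using a DynamoDB conditional write, where the table name "processed-events" and its "pk" key are assumptions:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def seen_before(object_key):
    try:
        # The conditional put succeeds exactly once per object key.
        dynamodb.put_item(
            TableName="processed-events",
            Item={"pk": {"S": object_key}},
            ConditionExpression="attribute_not_exists(pk)",
        )
        return False          # first delivery, process it
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return True       # duplicate delivery, skip it
        raise
```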
Query S3 from Python
I am using Python to send a query to Athena and get table DDL. I am using the start_query_execution and get_query_execution functions in the awswrangler package. The code above creates a dict object that stores the query results in an S3 link, which can be accessed via res['ResultConfiguration']['OutputLocation']. It's a text link: s3://…..txt. Can someone help me figure out how to access…
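A sketch of fetching that text file, with res stubbed so the snippet runs standalone; the bucket and key are split out of the s3:// link and read back with get_object:

```python
import boto3

# Stub for the get_query_execution response described in the question.
res = {"ResultConfiguration": {"OutputLocation": "s3://my-bucket/results/abc.txt"}}

output = res["ResultConfiguration"]["OutputLocation"]
bucket, key = output.removeprefix("s3://").split("/", 1)

body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"]
print(body.read().decode("utf-8"))   # the DDL text Athena wrote
```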