I am having trouble reading images from an S3 bucket. I can read images locally like this just fine, but I have no idea why reading from S3 raises an error. Answer You need to first establish a connection to S3, then download the image data, and finally decode the data with OpenCV. For the first part (connecting to S3), Boto3 is a good option.
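A minimal sketch of those three steps with boto3 and OpenCV; the bucket and key names are assumptions for illustration:

```python
import boto3
import cv2
import numpy as np

# Assumed bucket and object key, purely for illustration
BUCKET = "my-bucket"
KEY = "images/photo.jpg"

s3 = boto3.client("s3")

# 1. connect and 2. download the object body as raw bytes
response = s3.get_object(Bucket=BUCKET, Key=KEY)
image_bytes = response["Body"].read()

# 3. decode the bytes into an OpenCV image (a BGR ndarray, as cv2.imread would return)
image = cv2.imdecode(np.frombuffer(image_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
```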
Tag: amazon-web-services
How to download latest n items from AWS S3 bucket using boto3?
I have an S3 bucket where my application saves some final result DataFrames as .csv files. I would like to download the latest 1000 files in this bucket, but I don't know how to do it. I cannot do it manually, as the bucket listing doesn't let me sort the files by date because it has more than 1000 elements.
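No answer is quoted above; one common approach is to list every key with a paginator and sort client-side by LastModified, since S3 itself does not return objects sorted by date. A sketch, with the bucket name assumed:

```python
import boto3

BUCKET = "my-results-bucket"   # assumed bucket name
N = 1000

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Collect every object, then sort client-side by LastModified;
# list_objects_v2 cannot sort and returns at most 1000 keys per page.
objects = []
for page in paginator.paginate(Bucket=BUCKET):
    objects.extend(page.get("Contents", []))

latest = sorted(objects, key=lambda o: o["LastModified"], reverse=True)[:N]

for obj in latest:
    s3.download_file(BUCKET, obj["Key"], obj["Key"].split("/")[-1])
```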
Lambda path parameters are embedded inside path dictionary
I have some Python AWS Lambdas which are deployed using the Serverless Framework, and I was able to retrieve the path variables using: event.get("variable") I am not sure what has changed, but now I need to retrieve these path parameters using: event.get("path").get("variable") I am using Lambda integration and my serverless configuration has not changed and looks like: I want to retrieve
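A small handler sketch that tolerates both event shapes; the parameter name variable is taken from the question, everything else is assumed:

```python
def handler(event, context):
    # With a (non-proxy) Lambda integration, the request mapping template may
    # nest path parameters under event["path"]; fall back to the top level
    # for configurations that still flatten them.
    path_params = event.get("path") or {}
    variable = path_params.get("variable") or event.get("variable")
    return {"statusCode": 200, "body": f"variable={variable}"}
```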
Validating data in DynamoDB only works if the data is present
This is a rather easy and silly question, but I can't seem to understand the problem at hand. I am trying to create a register page where a user can enter their email: if their email is not already present, the function will put the item into the database; if it is, it will return "email is already present". EDIT:- My problem is
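The accepted answer is not shown above; one common pattern for this kind of existence check is a single conditional write, sketched below against an assumed users table keyed on email:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")   # assumed table name, with "email" as the partition key

def register(email, payload):
    try:
        # The ConditionExpression makes the write fail atomically when the
        # email already exists, so no separate "read then write" check is needed.
        table.put_item(
            Item={"email": email, **payload},
            ConditionExpression="attribute_not_exists(email)",
        )
        return "registered"
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return "email is already present"
        raise
```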
S3 Object upload to a private bucket using a pre-signed URL results in Access Denied
I'm learning AWS, and with my limited knowledge of it, am I right in saying that if I make pre-signed URLs to upload to and download from a bucket that is set to block all public access, it should work? I do all my authentication and checks through API Gateway, so if a user is able to hit the endpoint
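Yes, blocking public access does not break pre-signed URLs, because the URL carries the signer's permissions rather than the caller's. A sketch of the upload flow, with bucket and key names assumed:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Assumed bucket/key names. Blocking all public access is fine: a pre-signed
# URL only delegates the *signer's* permissions, so the IAM identity that
# creates the URL must itself be allowed to s3:PutObject on this key,
# otherwise the upload fails with Access Denied even though the URL is valid.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-private-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=3600,
)

# The client uploads with a plain HTTP PUT of the raw bytes.
with open("report.pdf", "rb") as f:
    requests.put(url, data=f)
```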
Get tables from AWS Glue using boto3
I need to harvest table and column names from the AWS Glue crawler metadata catalogue. I used boto3 but I keep getting only 100 tables even though there are more. Setting up NextToken doesn't help. Please help if possible. The desired result is a list as follows: lst = [table_one.col_one, table_one.col_two, table_two.col_one….table_n.col_n] UPDATED code, still need to have tablename+columnname: Answer Adding a sub-loop did
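The quoted answer is cut off; the sketch below uses the get_tables paginator so NextToken is followed automatically, and builds the table.column list in the requested shape. The database name is an assumption:

```python
import boto3

glue = boto3.client("glue")
DATABASE = "my_catalog_db"   # assumed Glue database name

lst = []
paginator = glue.get_paginator("get_tables")

# The paginator follows NextToken for us, so more than 100 tables come back
# across pages; the inner loop adds one "table.column" entry per column.
for page in paginator.paginate(DatabaseName=DATABASE):
    for table in page["TableList"]:
        for column in table["StorageDescriptor"]["Columns"]:
            lst.append(f"{table['Name']}.{column['Name']}")
```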
Why does my Lambda function write an empty csv file to S3?
I’m calling the YouTube API to download and store channel statistics in S3. I can write a csv file to my S3 bucket without any errors, but it’s empty. I have checked this thread Why the csv file in S3 is empty after loading from Lambda, but I’m not using a with block in to_csv_channel(). I’m currently running the script
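The script itself is not quoted, so the sketch below only shows one way to rule out an empty upload: serialize the DataFrame into an in-memory buffer and hand the buffer's contents to put_object, so nothing depends on a temporary file being flushed first. The name to_csv_channel comes from the question; its signature is assumed:

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

def to_csv_channel(df: pd.DataFrame, bucket: str, key: str) -> None:
    # Serialize the DataFrame into an in-memory buffer and upload its
    # contents; uploading a local file before it has been flushed/closed is
    # a common cause of zero-byte objects in S3.
    buffer = io.StringIO()
    df.to_csv(buffer, index=False)
    s3.put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())
```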
Why do I have no logs? empty web.stdout.logs?
So I have an AWS EB environment with an application deployed. I can't view the application's log output (web.stdout.logs is empty). Answer The problem was not that I couldn't see the output; it was always in the /var/log/web.stdout.log file. However, when I was zipping the file to upload it to the EB environment I was zipping it using the file
On AWS Lambda, Openpyxl doesn’t keep track of the image
When I have a model.xlsx with an image, this code works perfectly on Windows (keeping the image in output.xlsx). Now when I do this on my AWS Lambda, everything works perfectly BUT I don't have the image in the output.xlsx. No error message is raised. Should I raise a ticket with AWS? With openpyxl? Why is there no
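The original code and answer are not quoted. One plausible explanation, offered here purely as an assumption, is that openpyxl relies on Pillow for embedded images and drops them silently when Pillow is not included in the Lambda deployment package; a quick in-runtime check along those lines:

```python
import openpyxl

# Assumption: openpyxl needs Pillow to round-trip embedded images; if this
# import fails inside the Lambda runtime, images can be dropped without any error.
try:
    import PIL  # noqa: F401
    print("Pillow available:", PIL.__version__)
except ImportError:
    print("Pillow is missing from the deployment package")

# File names taken from the question; Lambda can only write under /tmp.
wb = openpyxl.load_workbook("model.xlsx")
wb.save("/tmp/output.xlsx")
```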
get data from s3 Bucket [closed]
This is my code. I got this error while getting the data from the bucket. Answer The error message writes
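Both the code and the error are truncated above, so the sketch below only illustrates the usual boto3 pattern for reading an object; the bucket and key names are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Assumed bucket/key; the key must exist and the caller needs s3:GetObject,
# otherwise get_object raises a ClientError (e.g. NoSuchKey or AccessDenied).
response = s3.get_object(Bucket="my-bucket", Key="data/file.csv")
data = response["Body"].read()
print(f"read {len(data)} bytes")
```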