How to describe snapshots owned by me and filter by tag simultaneously?
It describes snapshots owned by me with the code below. But when I add “Filters”, it starts ignoring “OwnerIds” and filters only by tag. I’m following the official boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_snapshots Answer I think the Filters and OwnerIds options are working separately. I expect that the OwnerIds option is
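The boto3 documentation lists an owner-id filter for describe_snapshots, so one way to guarantee both constraints are applied in a single call is to express the owner as a filter alongside the tag filter. A minimal sketch; the account ID and the tag key/value below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Express both constraints as filters so they are applied together.
# "123456789012" and "tag:Environment"/"prod" are placeholder values.
response = ec2.describe_snapshots(
    Filters=[
        {"Name": "owner-id", "Values": ["123456789012"]},
        {"Name": "tag:Environment", "Values": ["prod"]},
    ]
)
for snapshot in response["Snapshots"]:
    print(snapshot["SnapshotId"])
```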
Lambda Function to write to csv and upload to S3
I have a Python script that gets the details of the unused security groups. I want it to write those details to a CSV file and upload the file to an S3 bucket. When I test it on my local machine it writes the CSV locally, but when I execute it as a Lambda function, it needs a place to save the CSV.
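In Lambda the only writable path is /tmp, so a common pattern is to write the CSV there and then upload it with boto3. A minimal sketch; the bucket name, key, and row data are placeholders:

```python
import csv
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # /tmp is the only writable directory in the Lambda environment.
    local_path = "/tmp/unused_security_groups.csv"
    rows = [["GroupId", "GroupName"], ["sg-0123456789", "example"]]  # placeholder rows

    with open(local_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

    # Bucket and key are assumptions; substitute your own.
    s3.upload_file(local_path, "my-report-bucket", "reports/unused_security_groups.csv")
    return {"status": "uploaded"}
```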
aws sam build not able to build packages which require paramiko due to “Error: PythonPipBuilder:ResolveDependencies”
I’ve been learning the ropes with AWS SAM and have successfully deployed a number of lambdas together with dependencies and other AWS services. However, I seem to have run into a problem when trying to deploy a lambda that relies on some specific dependencies. Here is my requirements.txt file, found at “packageRoot/myCodeUri/requirements.txt”: When I run sam build
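paramiko pulls in packages with native extensions (notably cryptography), which pip often cannot resolve on the host machine. A commonly suggested workaround, assuming Docker is available locally, is to let SAM build the dependencies inside a Lambda-like container:

```
sam build --use-container
```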
Pandas read_pickle from s3 bucket
I am working on a Jupyter notebook from AWS EMR. I am able to do this: pd.read_csv("s3://mypath/xyz.csv"). However, if I try to open a pickle file like this, pd.read_pickle("s3://mypath/xyz.pkl"), I am getting this error: However, I can see both xyz.csv and xyz.pkl in the same path! Can anyone help? Answer Pandas read_pickle supports only local paths, unlike read_csv. So you
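Since read_pickle could not take an S3 URL directly (at least in older pandas versions), one workaround is to fetch the object with boto3 and unpickle the bytes yourself. A minimal sketch; the bucket and key stand in for the real path:

```python
import pickle
import boto3

s3 = boto3.client("s3")

# Bucket and key below are placeholders for "s3://mypath/xyz.pkl".
obj = s3.get_object(Bucket="mypath", Key="xyz.pkl")
df = pickle.loads(obj["Body"].read())
```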
How to use a pretrained model from s3 to predict some data?
I have trained a semantic segmentation model using SageMaker and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker. I know how to predict if I leave the notebook instance running after the training, as it’s just an easy deploy, but that doesn’t really help
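One way to do this without the original estimator object, assuming the SageMaker Python SDK v2, is to create a Model directly from the saved artifacts and deploy it. The artifact path, image URI, and instance type below are all placeholders:

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

# All values below are assumptions: point model_data at your training
# output and image_uri at the semantic segmentation container you used.
model = Model(
    image_uri="<semantic-segmentation-image-uri>",
    model_data="s3://my-bucket/output/model.tar.gz",
    role=sagemaker.get_execution_role(),
    sagemaker_session=session,
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```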
How to use AWS Xray to connect traces across multiple lambda and SQS
We are attempting to use AWS Xray to trace an event through multiple services. We have enabled Xray within Lambda via the checkbox and added the Python (v2) SDK. This is giving us good information for each Lambda, but the traces are not connected. Here is our model: an event hits SNS, a Lambda is triggered for preprocessing and writes to SQS, an event in
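One prerequisite for connecting the segments is to instrument the AWS SDK calls themselves, so the SQS send shows up as a subsegment of the producing Lambda’s trace and the service map can link the Lambda to the queue. A minimal sketch with the aws_xray_sdk package; on its own this may still not stitch the consumer Lambda into the same trace, and the queue URL is a placeholder:

```python
from aws_xray_sdk.core import patch_all

# Instrument boto3/botocore (and other supported libraries) so calls
# such as sqs.send_message are recorded as subsegments of the trace.
patch_all()

import boto3

sqs = boto3.client("sqs")

def lambda_handler(event, context):
    # Queue URL and message body are placeholders.
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        MessageBody="preprocessed event",
    )
```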
AWS Glue python install – Could not find a version
I am trying to use the AWSGlue module in Python, but cannot install the module from the terminal. Is there a way around this, or is there a way I can download it from a third party? Does anyone have this AWSGlue module working? Any help would be appreciated. Answer I believe the awsglue package is only available in the images
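The awsglue library is not published to PyPI; for local development, one commonly cited route is AWS’s aws-glue-libs repository (there is also a matching Glue Docker image). A hedged pointer rather than a full setup:

```
git clone https://github.com/awslabs/aws-glue-libs.git
```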
ERROR: Bucket name must match the regex “^[a-zA-Z0-9.-_]{1,255}$”
When I try to upload images to a bucket, it throws the error “Invalid bucket name “thum.images “: Bucket name must match the regex “^[a-zA-Z0-9.-_]{1,255}$””. I think there is nothing wrong with the bucket name. This is my code to upload the image: Answer The “Invalid bucket name “thum.images “: Bucket name must match the regex “^[a-zA-Z0-9.-_]{1,255}$”” error means just what
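Note that the name in the error message, “thum.images “, ends with a space, and a space does not match the quoted regex. If that is the cause, stripping the name before the call would fix it; a minimal sketch with placeholder file and key names:

```python
import boto3

s3 = boto3.client("s3")

# "thum.images " carries a trailing space, which the bucket-name regex
# rejects; strip whitespace before using it.
bucket_name = "thum.images ".strip()

# Local file path and object key are placeholders.
s3.upload_file("local/photo.jpg", bucket_name, "images/photo.jpg")
```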
BOTO3 – generate_presigned_url for `put_object` returns `The request signature we calculated does not match the signature you provided`
I’m trying to create a presigned URL that will help some customers to upload files. Here is my test script, which is currently working. But if I add: to Params (or add some metadata following the information in the put_object documentation), I receive back from the server: I’ve also opened an issue on BOTO3: https://github.com/boto/boto3/issues/1722 Answer This is covered in
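A known gotcha here: every parameter signed into the presigned URL (ContentType, Metadata, and so on) must also be sent as a matching header on the actual PUT, or S3 returns exactly this signature mismatch. A minimal sketch; the bucket, key, and the use of the requests library are assumptions:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Bucket and key are placeholders.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-bucket", "Key": "upload.txt", "ContentType": "text/plain"},
    ExpiresIn=3600,
)

# Because ContentType was signed into the URL, the upload must send the
# same Content-Type header, or the signatures will not match.
response = requests.put(url, data=b"hello", headers={"Content-Type": "text/plain"})
print(response.status_code)
```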
AWS region in AWS Glue
How can I get the region in which the current Glue job is executing? When the Glue job starts executing, I see the output Detected region eu-central-1. In AWS Lambda, I can use the following lines to fetch the current region: However, it seems like the AWS_REGION environment variable is not present in Glue and therefore a KeyError is raised:
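One workaround that does not rely on environment variables is to ask boto3’s session for its resolved region, which it derives from the job’s runtime configuration. A minimal sketch:

```python
import boto3

# The session resolves the region from the Glue job's runtime
# configuration instead of an AWS_REGION environment variable.
region = boto3.session.Session().region_name
print(region)  # e.g. "eu-central-1"
```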