
How to use a pretrained model from s3 to predict some data?

I have trained a semantic segmentation model using SageMaker, and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker.

I know how to predict if I leave the notebook instance running after training, as it's just an easy deploy, but that doesn't really help if I want to use an older model.
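For context, the "easy deploy" path looks roughly like this sketch; `ss_estimator` stands for whatever estimator object was just trained in the notebook session, and the instance type is a placeholder:

```python
# Easy path: deploy straight from the estimator object that just finished
# training. `ss_estimator` and the instance type are placeholders; this only
# works while the estimator is still in memory in the notebook session.
predictor = ss_estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.xlarge",
)
```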

I have looked at these sources and been able to come up with something myself, but it doesn't work, hence me being here:

https://course.fast.ai/deployment_amzn_sagemaker.html#deploy-to-sagemaker

https://aws.amazon.com/getting-started/tutorials/build-train-deploy-machine-learning-model-sagemaker/

https://sagemaker.readthedocs.io/en/stable/pipeline.html

https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_xgboost_abalone/inference_pipeline_sparkml_xgboost_abalone.ipynb

My code is this:



Answer

You can instantiate a Python SDK Model object from existing artifacts and deploy it to an endpoint. This lets you deploy a model from trained artifacts without having to retrain in the notebook. For example, for the semantic segmentation model:

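A minimal sketch of that, assuming the SageMaker Python SDK v2; the S3 path, region, instance type, and endpoint name below are placeholders, not values from the question:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

# IAM role with SageMaker permissions. Inside a notebook instance this
# resolves automatically; elsewhere, pass a full role ARN instead.
role = sagemaker.get_execution_role()

# Container image of the built-in semantic segmentation algorithm.
container = image_uris.retrieve("semantic-segmentation", region="us-east-1")

# Re-create a Model object from the artifacts a past training job wrote to S3.
model = Model(
    image_uri=container,
    model_data="s3://your-bucket/output/model.tar.gz",  # placeholder path
    role=role,
)

# Deploy the artifacts to a real-time endpoint -- no retraining needed.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.xlarge",
    endpoint_name="semantic-segmentation-ep",  # placeholder name
)
```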

Similarly, you can instantiate a predictor object on an already-deployed endpoint from any authenticated client that supports the SDK:

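For instance, a sketch using the v2 SDK's `Predictor` class; the endpoint name and image file are placeholders:

```python
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import IdentitySerializer

# Attach to an endpoint that is already running; only its name is needed.
predictor = Predictor(
    endpoint_name="semantic-segmentation-ep",  # placeholder name
    sagemaker_session=sagemaker.Session(),
    serializer=IdentitySerializer(content_type="image/jpeg"),
)

# Send raw image bytes to the endpoint and get the segmentation result back.
with open("test.jpg", "rb") as f:  # placeholder file
    result = predictor.predict(f.read())
```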

More on those abstractions can be found in the SageMaker Python SDK documentation, under sagemaker.model.Model and sagemaker.predictor.Predictor.
