Containerise PyTorch models in a repeatable way: deploy OpenAI's GPT-2 model, expose it over a Flask API, and finally ship it to AWS Fargate container hosting using CloudFormation.
First, before anything else, download the pretrained model weights:

```sh
mkdir models
curl --output models/gpt2-pytorch_model.bin https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin
```
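The file should be a plain PyTorch state dict, so you can sanity-check the download before going any further. A minimal sketch, assuming torch is importable (it will be once the environment below is set up):

```python
import torch

# Load the checkpoint on CPU; the result is a mapping of parameter names to tensors.
state_dict = torch.load("models/gpt2-pytorch_model.bin", map_location="cpu")
print(f"{len(state_dict)} tensors, first key: {next(iter(state_dict))}")
```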
Run the following to set up your local Python environment:

```sh
python3 -m venv ./venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
Then run the Flask server:

```sh
cd deployment
python run_server.py
```
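Under the hood, run_server.py wires the downloaded weights into a small Flask app. The sketch below shows roughly what such a server looks like; the /generate route, the request/response shape, and the transformers-based loading are assumptions for illustration, so the repository's actual code may differ:

```python
import torch
from flask import Flask, jsonify, request
from transformers import GPT2LMHeadModel, GPT2Tokenizer

app = Flask(__name__)

# Assumption: the downloaded checkpoint fits the standard GPT-2 architecture,
# so it can be injected into transformers via the state_dict argument.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained(
    "gpt2",
    state_dict=torch.load("models/gpt2-pytorch_model.bin", map_location="cpu"),  # adjust the path to wherever the weights live
)
model.eval()

@app.route("/generate", methods=["POST"])  # route name is an assumption
def generate():
    prompt = request.get_json()["text"]
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(input_ids, max_length=100, do_sample=True, top_k=40)
    return jsonify({"text": tokenizer.decode(output_ids[0], skip_special_tokens=True)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```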
Alternatively, build and run the same server with Docker Compose:

```sh
docker-compose up --build flask
```
Either way, go to http://localhost:5000 to try it out.
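You can also exercise the API programmatically. A small client sketch, assuming the JSON /generate endpoint from the hypothetical server sketch above:

```python
import requests

# Endpoint path and payload shape follow the hypothetical server sketch above.
resp = requests.post(
    "http://localhost:5000/generate",
    json={"text": "The meaning of life is"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["text"])
```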
When you are done, tear everything down (the -v flag also removes the associated volumes):

```sh
docker-compose down -v
```
To deploy to AWS, first build the container image and push it to ECR:

```sh
./container_push.sh
```
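If you want to confirm the image actually landed in ECR from Python, here is a boto3 sketch; the repository name gpt-2-flask is an assumption, so match it to whatever container_push.sh uses:

```python
import boto3

ecr = boto3.client("ecr")

# Repository name is an assumption; align it with container_push.sh.
images = ecr.describe_images(repositoryName="gpt-2-flask")["imageDetails"]
for image in images:
    print(image.get("imageTags"), image["imagePushedAt"])
```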
Next, set up the CloudFormation stack:

```sh
./cloudformation_deploy.sh
```
Then deploy the stack:

```sh
aws cloudformation create-stack \
  --stack-name "gpt-2-flask" \
  --template-body file://cloudformation/deployment.yaml \
  --parameters file://cloudformation/deployment-params.json \
  --capabilities CAPABILITY_IAM
```
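Stack creation takes a few minutes. To block until CloudFormation finishes and then print whatever outputs the template declares (for example a load balancer URL, if one is exported), a boto3 sketch:

```python
import boto3

cf = boto3.client("cloudformation")

# Waits until the stack reaches CREATE_COMPLETE; raises if creation fails or rolls back.
cf.get_waiter("stack_create_complete").wait(StackName="gpt-2-flask")

# Output keys depend entirely on cloudformation/deployment.yaml.
stack = cf.describe_stacks(StackName="gpt-2-flask")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])
```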