GPT-2 Flask API

Containerise PyTorch models in a repeatable way: deploy OpenAI's GPT-2 model, expose it over a Flask API, and finally deploy it to AWS Fargate container hosting using CloudFormation.

(Architecture diagram)

First, before anything else, download the pretrained model weights:

mkdir models
curl --output models/gpt2-pytorch_model.bin https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin
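
Optionally, a quick sanity check that the download completed. This is a minimal Python sketch; the path matches the curl command above, and the size printed is only indicative (the checkpoint is several hundred MB, so a tiny file usually means a failed download):

from pathlib import Path

model_path = Path("models/gpt2-pytorch_model.bin")
if not model_path.exists():
    raise SystemExit("Model file missing - re-run the curl command above")
# A very small file usually means the download was interrupted.
print(f"Model file size: {model_path.stat().st_size / 1_000_000:.0f} MB")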

Local


Run the following to set up a local Python environment:

python3 -m venv ./venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Then run the Flask server:

cd deployment
python run_server.py
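
Once the server is running, you can check that it responds. A minimal sketch using the requests package (install it if it is not already in requirements.txt), assuming the local server listens on port 5000 as the docker-compose section below suggests:

import requests

# Hit the root page of the local Flask server (port 5000 is an assumption).
response = requests.get("http://localhost:5000", timeout=30)
print(response.status_code)  # expect 200 if the server is up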

docker-compose

Setup

docker-compose up --build flask

Go to http://localhost:5000
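
To drive the API programmatically rather than through the browser, something like the sketch below can work. Note that the /generate route and the "text" field are hypothetical placeholders for illustration only; the actual route and payload are defined in deployment/run_server.py:

import requests

# NOTE: "/generate" and the "text" field are hypothetical placeholders;
# check deployment/run_server.py for the real route and payload.
resp = requests.post(
    "http://localhost:5000/generate",
    json={"text": "The quick brown fox"},
    timeout=120,  # generation on CPU can be slow
)
print(resp.json())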

Shutdown

docker-compose down -v

AWS


First, build the container image and push it to ECR:

./container_push.sh

Set up the CloudFormation stack:

./cloudformation_deploy.sh

Then deploy the stack:

aws cloudformation create-stack \
    --stack-name "gpt-2-flask" \
    --template-body file://cloudformation/deployment.yaml \
    --parameters file://cloudformation/deployment-params.json \
    --capabilities CAPABILITY_IAM
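
The stack can take a few minutes to create. A small sketch using boto3 to wait for completion and print the stack outputs (the stack name matches the command above; boto3 and configured AWS credentials are assumed, and whether the template exports any outputs depends on cloudformation/deployment.yaml):

import boto3

cfn = boto3.client("cloudformation")

# Block until CloudFormation reports CREATE_COMPLETE (raises on failure/rollback).
cfn.get_waiter("stack_create_complete").wait(StackName="gpt-2-flask")

# Print stack outputs, e.g. a service URL if the template defines one.
stack = cfn.describe_stacks(StackName="gpt-2-flask")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])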

Attribution

