Simplifying transformer model integration into FastAPI applications for seamless NLP development.
Start the server with:

```shell
uvicorn app.main:app
```

If everything is fine, you should see output like the following:

```
INFO: Started server process [38901]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
Example request:

```shell
curl --location 'http://localhost:8000/analyze-sentiments/' \
--header 'Content-Type: application/json' \
--data '["bad programmer"]'
```
Example response:

```json
[
  {
    "label": "LABEL_1",
    "score": 0.5389513969421387
  }
]
```
- Use the `MODEL_NAME` environment variable to run a different model; there is no need to rebuild the image.

```shell
export MODEL_NAME="distilbert-base-uncased"
```
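On the application side, this pattern usually amounts to reading the variable with a fallback. A small sketch, assuming the app resolves the model name via `os.getenv` (the helper name and default value are illustrative, not from the project):

```python
import os


def resolve_model_name(default: str = "distilbert-base-uncased") -> str:
    # Illustrative helper: prefer MODEL_NAME from the environment,
    # fall back to a built-in default when it is unset.
    return os.getenv("MODEL_NAME", default)
```

Because the name is resolved at runtime, `export MODEL_NAME=...` before starting uvicorn is enough to switch models for that process.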