Create a REST API for any AI model, in seconds.

Currently in beta

Like this project? Leave a ⭐️ and spread the ❤️!

Guide

What is this?

Olympus is a command-line tool that lets you deploy any pre-trained ML/deep learning model as a REST API, in seconds.

We built this tool after getting tired of manually writing REST APIs for the deep learning models we were tinkering with, especially when using them in the products we're building.

So if you'd like to quickly deploy that cool deep learning model that you've been working on lately as a REST API, then this tool is for you.

Installation

pip install olympus

Usage

Deploying your model
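The walkthrough that originally accompanied this section isn't included here, so the command below is only a rough sketch of what deploying a model might look like. The `olympus deploy` subcommand, its flags, and the port are assumptions for illustration, not the tool's confirmed interface.

# Hypothetical invocation: deploy a saved model file as a REST endpoint
olympus deploy --name mnist-classifier --model ./models/mnist.h5 --port 8000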

Using your model's REST API
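Likewise, treat the request below as a sketch rather than the documented API: the endpoint path and payload shape are assumptions, and the point is only the general idea of POSTing a JSON input and getting a JSON prediction back.

# Hypothetical request to a deployed model's prediction endpoint
curl -X POST http://localhost:8000/models/mnist-classifier/predict \
  -H "Content-Type: application/json" \
  -d '{"input": [[0.1, 0.2, 0.3]]}'

# The response would be a JSON body carrying the model's output, e.g. something like {"prediction": 7}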

Features

  • Deploys any model with pre-trained weights as a REST API, instantly.
  • Supports saving and deleting any model you deploy, for quick model management.
  • Lets you activate/deactivate any deployed model to enable/disable its API endpoint (see the sketch below).
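To make the save/delete/activate workflow above concrete, here's a hypothetical management session; the subcommand names are illustrative guesses, not the documented CLI.

# List the models you've deployed (hypothetical subcommands)
olympus list

# Disable a model's endpoint without deleting it, then bring it back
olympus deactivate mnist-classifier
olympus activate mnist-classifier

# Remove a model you no longer need
olympus delete mnist-classifier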

How's this different from TensorFlow Serving?

As you probably already know, TensorFlow Serving is an open-source, production-grade tool for deploying TensorFlow models to the cloud.

One of the key differences between Olympus and TF Serving is that, while TF Serving is optimized for the production environment, Olympus is currently more geared towards the development phase.

For example, if you're building an ML model that needs to be exposed as a REST API so it can be accessed from a mobile app you're developing, Olympus lets you deploy and manage the model without having to set up servers manually.

However, when going to production, you'd want to properly export your model and use a tool like TF Serving, which is built from the ground up for serving models at scale.

Supported ML Frameworks

Ultimately, we're building Olympus to deploy any ML model as a REST API.

For now, we support models built with the frameworks listed below.

Don't see your framework? Don't worry! We're constantly adding integrations for more ML frameworks, and you can even extend Olympus with custom adapters for deploying models built with an unsupported framework (more docs on this soon!).

Contributions

Love this project and got an idea for making it better? We'd love your help!

Just send over a PR and we'll take it from there.

TODO

  • Write docs
  • Add unit tests
  • Add more built-in model adapters to support more ML frameworks
