This module scaffolds a standard inference worker to run on the Moises/Maestro infrastructure.
To install the main branch:
pip install git+https://github.com/moises-ai/maestro-worker-python.git
To install a specific tagged version (recommended):
pip install git+https://github.com/moises-ai/[email protected]
Run the init script to scaffold a Maestro worker in the current directory. To scaffold into a different directory, use the --folder flag.
maestro-init
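For example, to scaffold into a separate directory (the --folder flag is mentioned above; the directory name and the `=` argument style are only an illustration):
maestro-init --folder=./my-worker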
This will create a starter Maestro worker project, including:
- A models folder to include your models
- A docker-compose.yaml file
- A Dockerfile
- A requirements.txt file including this package
- A worker.py file with a worker example
Run the CLI passing your worker file as the first parameter, followed by any parameters exposed by your class. In this example, input_1 is sent to the worker with the value Hello:
maestro-cli ./worker.py --input_1=Hello
Run the Maestro server with the path to your worker. To see all options, use maestro-server --help.
maestro-server --worker=./worker.py
Send a request to the server inference endpoint:
curl --request POST --url http://localhost:8000/worker-example/inference --header 'Content-Type: application/json' \
--data '{"input_1": "Hello"}'
To avoid using signed URLs for uploading/downloading files, you can use the maestro-upload-server command. This will start a server on the default port 9090 that uploads/downloads files in the local ./uploads folder.
Examples:
maestro-upload-server --port=9090
After the server is running, you can upload files to it:
curl http://localhost:9090/upload-file/your_file_name
Then retrieve it:
curl http://localhost:9090/get-file/your_file_name
You can clean up the stored files using:
curl http://localhost:9090/clean
You can also list files using:
curl http://localhost:9090/list-files
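If you prefer to exercise these endpoints from Python, the sketch below mirrors the curl calls above (it assumes the retrieval, listing, and clean endpoints accept plain GET requests, as the curl defaults imply):
import requests

base_url = "http://localhost:9090"
# List the files currently stored by the upload server
print(requests.get(f"{base_url}/list-files", timeout=30).text)
# Retrieve a previously uploaded file and save it locally
content = requests.get(f"{base_url}/get-file/your_file_name", timeout=30).content
with open("your_file_name", "wb") as f:
    f.write(content)
# Remove the stored files
requests.get(f"{base_url}/clean", timeout=30)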
Download a file from a URL with the download_file utility:
from maestro_worker_python.download_file import download_file
file_name = download_file("https://url_to_download_file")
Upload files to signed URLs with the upload_files utility:
from maestro_worker_python.upload_files import upload_files, UploadFile
files_to_upload = []
files_to_upload.append(UploadFile(file_path="test_upload1.txt", file_type="text/plain", signed_url="https://httpbin.org/put"))
files_to_upload.append(UploadFile(file_path="test_upload2.txt", file_type="text/plain", signed_url="https://httpbin.org/put"))
upload_files(files_to_upload)
Convert media files to other formats with the convert_files utility:
from maestro_worker_python.convert_files import convert_files, FileToConvert
files_to_convert = []
files_to_convert.append(FileToConvert(input_file_path="input.mp3", output_file_path="output.wav", file_format="wav", max_duration=1200))
files_to_convert.append(FileToConvert(input_file_path="input.mp3", output_file_path="output.m4a", file_format="m4a", max_duration=1200))
convert_files(files_to_convert)
Get the duration of a media file with the get_duration utility:
from maestro_worker_python.get_duration import get_duration
get_duration('./myfile.mp3')
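As an illustrative sketch (not part of the package examples), the utilities above can be combined, for instance to check a file's duration before converting it. This assumes get_duration returns the duration in seconds:
from maestro_worker_python.get_duration import get_duration
from maestro_worker_python.convert_files import convert_files, FileToConvert

# Assumption: get_duration returns the duration in seconds
duration = get_duration("./input.mp3")
if duration <= 1200:
    convert_files([
        FileToConvert(
            input_file_path="input.mp3",
            output_file_path="output.wav",
            file_format="wav",
            max_duration=1200,
        )
    ])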
To build and run the worker with Docker Compose:
docker-compose build
docker-compose run --service-ports worker
Install Poetry. You can then run the project in development mode:
poetry install
poetry run maestro-init
If you get a keyring error (Ubuntu), you may need to run the following:
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
To bump the package version:
poetry version (major|minor|patch)
Running tests:
poetry run python -m pytest