The repository contains code from:
- I3D feature extraction code: https://github.com/v-iashin/video_features
The folders are organized as follows (entries marked with * come from the I3D repository and were not developed by us):
- configs: I3D configs *
- images: output directory for intermediate processing
- modelcheckpoints: model flat files loaded by the server (e.g. the temporal segmentation model, the automated scoring model, etc.)
- models: I3D models and Repnet models *
- postman: Postman POST stub for testing the server
- uploads: output directory for videos submitted in the POST request
- utils: I3D utils *
There are two environments to set up: one for the Repnet Flask Server and one for the Backend Flask Server.
Repnet Flask Server environment:
conda env create -n repnet --file repnet_env/environment_repnet_fromhist.yml python=3.6

Backend Flask Server environment:
conda create -n backend python=3.9.13
conda activate backend
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements_backend.txt
ipython kernel install --user --name=backend
To start the Repnet Flask Server, run the following within the main folder:
conda activate repnet
python RepnetFlask.py
The Repnet endpoint will be located at http://localhost:5001/sstwist
You can HTTP POST to this endpoint using the Postman stub.
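Outside Postman, a quick way to confirm the Repnet server is listening is a small reachability probe. This is only a sketch: the helper name `repnet_is_up` is ours, not part of the repository, and it treats any HTTP response at all (including an error status such as 405 for a GET on a POST-only route) as proof the server is up.

```python
import urllib.error
import urllib.request

def repnet_is_up(url="http://localhost:5001/sstwist", timeout=2.0):
    """Return True if anything answers HTTP at `url` (hypothetical helper)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered, just not with 2xx (e.g. 405 for a GET on a
        # POST-only Flask route) -- still proof that it is running.
        return True
    except (urllib.error.URLError, OSError):
        return False
```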
Open "Backend.ipnyb" in Jupyter Notebook, select the 'backend' python environment, run all cells. This will activate the Flask server at the last cell.
The server endpoint for video analysis will be located at http://localhost:5000/videoupload
You can HTTP POST to this endpoint using the Postman stub.
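The same POST can also be scripted without Postman. The sketch below uses only the standard library and assumes the endpoints expect a multipart form field named "video", which this README does not confirm; check the Flask handlers in RepnetFlask.py and Backend.ipynb for the actual field name.

```python
import json
import urllib.request
import uuid

def post_video(path, url, field="video"):
    """POST a video as multipart/form-data and return the parsed JSON reply.

    The field name "video" is an assumption; check the Flask handlers for
    the name they actually read.
    """
    boundary = uuid.uuid4().hex
    with open(path, "rb") as f:
        payload = f.read()
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{path}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    body = head + payload + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# Example calls (the servers must be running):
# post_video("sample.mp4", "http://localhost:5000/videoupload")
# post_video("sample.mp4", "http://localhost:5001/sstwist")
```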