- `cd models; mkdir -p onnx`
- Download the component ONNX files
  - Listed here; save them to the `onnx` directory within this folder.
- Run `bash create_default_engines.sh`
  - Models generated with FP32 precision: image encoder
  - Models generated with FP16 precision: image decoder, depth estimation model, face detection model, gaze estimation model, object detection model
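If you'd prefer to build an engine by hand rather than through the script, a minimal `trtexec` sketch for one of the FP16 models looks like the following. The file names are illustrative and the script may pass additional builder flags, so treat `create_default_engines.sh` as the source of truth.

```bash
# Minimal sketch: compile an FP16 TensorRT engine from one of the
# downloaded ONNX files (paths are illustrative).
trtexec --onnx=onnx/evit_decoder_l0.onnx \
        --saveEngine=tensorrt/evit_decoder_l0.engine \
        --fp16
```

For the FP32 image encoder, the same command without `--fp16` applies.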
- `cd models; mkdir -p tensorrt/int8/caches`
- Download INT8 calibration caches
  - Listed here; save them to `tensorrt/int8/caches`.
  - Depending on your download method, the filenames may contain the `gazesam_int8_calib_caches_` prefix. To remove this prefix, run `rename 's/^gazesam_int8_calib_caches_//' gazesam_int8_calib_caches_*.cache` (while `cd`'ed into `tensorrt/int8/caches`); a plain-shell alternative is sketched after this list.
- Run `bash create_optimized_engines.sh`
  - Models generated with FP32 precision: image encoder
  - Models generated with FP16 precision: image decoder, depth estimation model, face detection model
  - Models generated with INT8 precision: gaze estimation model, object detection model
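The Perl-based `rename` utility referenced above is not installed on every system. Here is an equivalent plain-shell loop that strips the same prefix; it assumes you are `cd`'ed into `tensorrt/int8/caches`.

```bash
# Strip the download prefix from each calibration cache file.
# Equivalent to the `rename` one-liner, but uses only POSIX shell + mv.
for f in gazesam_int8_calib_caches_*.cache; do
  mv "$f" "${f#gazesam_int8_calib_caches_}"
done
```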
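For reference, an INT8 engine build driven by one of the downloaded calibration caches can be sketched with `trtexec` as below. The ONNX and cache file names are illustrative assumptions; `create_optimized_engines.sh` contains the exact invocations.

```bash
# Hypothetical INT8 build: trtexec reads the pre-generated calibration
# cache instead of running calibration itself (file names are assumed).
trtexec --onnx=onnx/yolo_nas_m.onnx \
        --saveEngine=tensorrt/int8/yolo_nas_m.engine \
        --int8 \
        --calib=tensorrt/int8/caches/yolo_nas_m.cache
```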
Note that default ONNX models are available, so this section is only relevant if you'd like to generate your own ONNX files. If you plan to generate an ONNX model and later use it to compile an engine, remember to replace our default with your new file!

The instructions below describe how to recreate each of our ONNX models; a quick sanity-check sketch for exported files follows the list.
- Face detection model
  - Downloaded directly from ProxylessNAS.
- Gaze estimation model
  - L2CS-Net model, downloaded by running this script and choosing the `l2cs_net_1x3x448x448.onnx` variation.
- Depth estimation model
  - Depth-Anything-M model, downloaded by following these instructions. We use `vitb_14` by default.
- Object detection model
  - `python create_onnx/create_yolo.py --model-size [s | m | l] --runtime [trt | onnx]`
  - Set the runtime flag to `trt` (it controls the NMS format) if you plan to compile a TensorRT engine from it. We use the `yolo-nas-m` model.
- EfficientViT-SAM encoder and decoder
  - Download `efficientvit-sam-l0.pt`
  - `python applications/efficientvit_sam/deployment/onnx/export_encoder.py --model efficientvit-sam-l0 --output demo/gazesam/models/onnx/evit_encoder_l0.onnx`
  - `python applications/efficientvit_gazesam/models/create_onnx/create_evit_decoder.py --output demo/gazesam/models/onnx/evit_decoder_l0.onnx --model-type efficientvit-sam-l0 --opset 17 --return-single-mask`
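Whichever model you export, a quick structural sanity check catches most broken files before engine compilation. This sketch assumes the `onnx` Python package is installed and uses an illustrative path:

```bash
# Validate an exported ONNX file and print its I/O names (path is illustrative).
python -c "
import onnx
m = onnx.load('demo/gazesam/models/onnx/evit_decoder_l0.onnx')
onnx.checker.check_model(m)
print([i.name for i in m.graph.input], '->', [o.name for o in m.graph.output])
"
```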