diff --git a/samples/age_gender_recognition/README.md b/samples/age_gender_recognition/README.md
index ca22b559..5b29ecb0 100644
--- a/samples/age_gender_recognition/README.md
+++ b/samples/age_gender_recognition/README.md
@@ -10,8 +10,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/area_object_counting/README.md b/samples/area_object_counting/README.md
index 4dcf26bc..600bf8cf 100644
--- a/samples/area_object_counting/README.md
+++ b/samples/area_object_counting/README.md
@@ -8,8 +8,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/auxiliary_streams/README.md b/samples/auxiliary_streams/README.md
index 375ee9a4..180de02c 100644
--- a/samples/auxiliary_streams/README.md
+++ b/samples/auxiliary_streams/README.md
@@ -2,6 +2,11 @@
 
 A pipeline demonstrating the use of Auxiliary Streams in Savant. The pipeline contains element, [Multiple Resolutions](multiple_resolutions.py). It scales the frame to multiple resolution and sends the frames to the auxiliary streams.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/buffer_adapter/README.md b/samples/buffer_adapter/README.md
index 71b9b796..2e75e5d4 100644
--- a/samples/buffer_adapter/README.md
+++ b/samples/buffer_adapter/README.md
@@ -4,6 +4,11 @@ A pipeline demonstrates how Buffer Adapter works in Savant. In the demo video fr
 
 The buffer adapter metrics are stored in Prometheus and displayed on a Grafana dashboard.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/conditional_video_processing/README.md b/samples/conditional_video_processing/README.md
index 6bc02dac..945ebaa8 100644
--- a/samples/conditional_video_processing/README.md
+++ b/samples/conditional_video_processing/README.md
@@ -6,6 +6,11 @@ Preview:
 
 ![](assets/conditional-video-processing.webp)
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/face_reid/README.md b/samples/face_reid/README.md
index 1852daf9..70f36fb5 100644
--- a/samples/face_reid/README.md
+++ b/samples/face_reid/README.md
@@ -10,6 +10,11 @@ Preview:
 
 The sample is split into two parts: Index Builder and Demo modules.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Index Builder
 
 Index builder module loads images from [gallery](./assets/gallery), detects faces and facial landmarks, performs face preprocessing and facial recognition model inference. The resulting feature vectors are added into [hnswlib](https://github.com/nmslib/hnswlib) index, and the index (along with cropped face images from gallery) is saved on disk in the `index_files` directory.
diff --git a/samples/fisheye_line_crossing/README.md b/samples/fisheye_line_crossing/README.md
index f6ffcc17..2a79e45e 100644
--- a/samples/fisheye_line_crossing/README.md
+++ b/samples/fisheye_line_crossing/README.md
@@ -14,8 +14,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/intersection_traffic_meter/README.md b/samples/intersection_traffic_meter/README.md
index bd70f375..b841e0af 100644
--- a/samples/intersection_traffic_meter/README.md
+++ b/samples/intersection_traffic_meter/README.md
@@ -10,8 +10,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/kafka_redis_adapter/README.md b/samples/kafka_redis_adapter/README.md
index 5666755e..b8137831 100644
--- a/samples/kafka_redis_adapter/README.md
+++ b/samples/kafka_redis_adapter/README.md
@@ -4,6 +4,11 @@ A pipeline demonstrates how Kafka-Redis adapters works in Savant. In the demo vi
 
 ![kafka-redis-adapter-demo.png](assets/kafka-redis-adapter-demo.png)
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/keypoint_detection/README.md b/samples/keypoint_detection/README.md
index 2b6c16de..0ebfcebf 100644
--- a/samples/keypoint_detection/README.md
+++ b/samples/keypoint_detection/README.md
@@ -10,8 +10,8 @@ Preview:
 
 Tested on platforms:
 
-- Nvidia Jetson (Xavier NX, Xavier AGX, Orin family);
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated adapters:
 
diff --git a/samples/kvs/README.md b/samples/kvs/README.md
index 430b8f2e..da8668fc 100644
--- a/samples/kvs/README.md
+++ b/samples/kvs/README.md
@@ -2,6 +2,11 @@
 
 A pipeline demonstrates how to send frames to and receive from Kinesis Video Stream. Pipeline consists of two parts: exporter and importer. Exporter processes frames from a video file, sends metadata to MongoDB and sends frames to Kinesis Video Stream. Importer receives frames from Kinesis Video Stream, retrieves metadata from MongoDB and draw bboxes on frames.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/license_plate_recognition/README.md b/samples/license_plate_recognition/README.md
index bd6e98df..fbd21341 100644
--- a/samples/license_plate_recognition/README.md
+++ b/samples/license_plate_recognition/README.md
@@ -8,8 +8,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated adapters:
 
diff --git a/samples/mjpeg_usb_cam/README.md b/samples/mjpeg_usb_cam/README.md
index 8295ea9a..746957c0 100644
--- a/samples/mjpeg_usb_cam/README.md
+++ b/samples/mjpeg_usb_cam/README.md
@@ -4,6 +4,11 @@ A pipeline demonstrating how to capture MJPEG from a USB camera. MJPEG is a comm
 
 The resulting stream can be accessed via LL-HLS on `http://locahost:888/stream/video`
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Hardware Acceleration Notes
 
 On X86, JPEG decoding and encoding is done in software (or hardware-assisted if dGPU and drivers support it). On Jetson, JPEG decoding and decoding is done in hardware with NVJPEG.
diff --git a/samples/multiple_gige/README.md b/samples/multiple_gige/README.md
index b78f0bae..d7edf146 100644
--- a/samples/multiple_gige/README.md
+++ b/samples/multiple_gige/README.md
@@ -4,6 +4,11 @@ A simple pipeline demonstrates how GigE Vision Source Adapter works in Savant. I
 
 The resulting streams can be accessed via LL-HLS on `http://locahost:888/stream/gige-raw` (raw-rgba frames) and `http://locahost:888/stream/gige-encoded` (HEVC-encoded frames).
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 Run the demo:
 
 ```bash
diff --git a/samples/multiple_rtsp/README.md b/samples/multiple_rtsp/README.md
index f2fa92d4..57a35de7 100644
--- a/samples/multiple_rtsp/README.md
+++ b/samples/multiple_rtsp/README.md
@@ -4,6 +4,11 @@ A simple pipeline demonstrates how multiplexed processing works in Savant. In th
 
 The resulting streams can be accessed via LL-HLS on `http://locahost:888/stream/city-traffic` and `http://locahost:888/stream/town-centre`.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/nvidia_car_classification/README.md b/samples/nvidia_car_classification/README.md
index 561451a5..1149442c 100644
--- a/samples/nvidia_car_classification/README.md
+++ b/samples/nvidia_car_classification/README.md
@@ -8,8 +8,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated adapters:
 
diff --git a/samples/opencv_cuda_bg_remover_mog2/README.md b/samples/opencv_cuda_bg_remover_mog2/README.md
index 48114167..a025280a 100644
--- a/samples/opencv_cuda_bg_remover_mog2/README.md
+++ b/samples/opencv_cuda_bg_remover_mog2/README.md
@@ -23,8 +23,8 @@ step-by-step [tutorial](https://blog.savant-ai.io/building-a-500-fps-accelerated
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/original_resolution_processing/README.md b/samples/original_resolution_processing/README.md
index feab8b8a..a18053bc 100644
--- a/samples/original_resolution_processing/README.md
+++ b/samples/original_resolution_processing/README.md
@@ -2,6 +2,11 @@
 
 A pipeline demonstrates processing of streams at the original resolution, i.e. without scaling to a single resolution. `parameters.frame` in [module.yml](module.yml) is not specified. The sample sends two streams with resolutions 1280x720 and 1920x1080 to the module.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/panoptic_driving_perception/README.md b/samples/panoptic_driving_perception/README.md
index 53ee5aab..9c2fe096 100644
--- a/samples/panoptic_driving_perception/README.md
+++ b/samples/panoptic_driving_perception/README.md
@@ -8,8 +8,8 @@ Preview:
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated adapters:
 
diff --git a/samples/pass_through_processing/README.md b/samples/pass_through_processing/README.md
index 213ce097..18a546ab 100644
--- a/samples/pass_through_processing/README.md
+++ b/samples/pass_through_processing/README.md
@@ -12,6 +12,11 @@ The modules performance is also stored in Prometheus and displayed on a Grafana
 
 Detector and tracker are running in pass-through mode (`codec: copy`). Draw-func encodes frames to H264.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/peoplenet_detector/README.md b/samples/peoplenet_detector/README.md
index 3932fb79..ed451e1f 100644
--- a/samples/peoplenet_detector/README.md
+++ b/samples/peoplenet_detector/README.md
@@ -28,8 +28,8 @@ step-by-step [tutorial](https://blog.savant-ai.io/meet-savant-a-new-high-perform
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/rtdetr/README.md b/samples/rtdetr/README.md
index 1be73cc7..e070c5b5 100644
--- a/samples/rtdetr/README.md
+++ b/samples/rtdetr/README.md
@@ -8,8 +8,8 @@ Weights used: `v0.1/rtdetr_r50vd_6x_coco_from_paddle.pth` from the [RT-DETR rel
 
 Tested on platforms:
 
-- Nvidia Turing;
-- Nvidia Jetson Orin Nano.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/rtsp_cam_compatibility_test/README.md b/samples/rtsp_cam_compatibility_test/README.md
index a10592bc..6ad3d726 100644
--- a/samples/rtsp_cam_compatibility_test/README.md
+++ b/samples/rtsp_cam_compatibility_test/README.md
@@ -6,6 +6,11 @@ It uses NVDEC and NVENC internally and Savant protocol. Thus, if the pipeline wo
 
 The resulting video is broadcast in 640x360 resolution. You can access it at `http://:888/stream/test`.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Specifying the RTSP URL
 
 Edit `.env` file and set the `URI` variable to the RTSP URL of the camera.
diff --git a/samples/source_adapter_with_json_metadata/README.md b/samples/source_adapter_with_json_metadata/README.md
index 5f6696e5..795a246d 100644
--- a/samples/source_adapter_with_json_metadata/README.md
+++ b/samples/source_adapter_with_json_metadata/README.md
@@ -8,6 +8,11 @@ In the demo it is assumed that there is only one person in the picture and the
 IOU of the true box and from the Yolo detection model is calculated.
 The IOU value is added as a tag to the frame metadata.
 
+Tested on platforms:
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
+
 ## Prerequisites
 
 ```bash
diff --git a/samples/traffic_meter/README.md b/samples/traffic_meter/README.md
index b587d18a..779db3f1 100644
--- a/samples/traffic_meter/README.md
+++ b/samples/traffic_meter/README.md
@@ -1,6 +1,5 @@
 # Traffic meter demo
 
-
 **NB**: The demo optionally uses **YOLOV8** model which takes up to **10-15 minutes** to compile to TensorRT engine. The first launch may take a decent time.
 
 The pipeline detects when people cross a user-configured line and the direction of the crossing. The crossing events are attached to individual tracks, counted for each source separately and the counters are displayed on the frame. The crossing events are also stored with Graphite and displayed on a Grafana dashboard.
@@ -17,8 +16,8 @@ Article on Medium: [Link](https://blog.savant-ai.io/efficient-city-traffic-meter
 
 Tested on platforms:
 
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 Demonstrated operational modes:
 
diff --git a/samples/yolov8_seg/README.md b/samples/yolov8_seg/README.md
index 18aecd67..e7742bf6 100644
--- a/samples/yolov8_seg/README.md
+++ b/samples/yolov8_seg/README.md
@@ -14,10 +14,10 @@ Preview:
 
 ![](assets/shuffle_dance.webp)
 
-
 Tested on platforms:
-- Xavier NX, Xavier AGX;
-- Nvidia Turing, Ampere.
+
+- Nvidia Turing
+- Nvidia Jetson Orin family
 
 ## Prerequisites
 
diff --git a/savant/deepstream/nvinfer/build_engine.py b/savant/deepstream/nvinfer/build_engine.py
index 83bf73b4..1941e341 100644
--- a/savant/deepstream/nvinfer/build_engine.py
+++ b/savant/deepstream/nvinfer/build_engine.py
@@ -35,16 +35,15 @@ def build_engine(element: ModelElement, rebuild: bool = True):
     pipeline: Gst.Pipeline = Gst.Pipeline()
     elements = [
         PipelineElement(
-            'videotestsrc',
-            properties={'num-buffers': model.batch_size},
+            'nvvideotestsrc',
+            properties={
+                'num-buffers': model.batch_size,
+            },
         ),
-        PipelineElement('nvvideoconvert'),
         PipelineElement(
             element='nvstreammux',
             name='muxer',
             properties={
-                'width': model.input.width if model.input.width else 1280,
-                'height': model.input.height if model.input.height else 720,
                 'batch-size': 1,
             },
         ),
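For reference, a minimal sketch of how the affected `elements` list in `build_engine()` reads once the `build_engine.py` hunk above is applied. Every identifier here (`PipelineElement`, `model`) comes from that hunk; the rest of the list is not shown in the diff and is omitted, and the comment about NVMM memory is an assumption, not something the patch itself states.

```python
# Post-patch excerpt of build_engine() in savant/deepstream/nvinfer/build_engine.py.
elements = [
    # nvvideotestsrc replaces videotestsrc, presumably so test frames are
    # produced in GPU (NVMM) memory and the extra nvvideoconvert can be dropped.
    PipelineElement(
        'nvvideotestsrc',
        properties={
            'num-buffers': model.batch_size,
        },
    ),
    # The muxer no longer pins width/height; only the batch size is set.
    PipelineElement(
        element='nvstreammux',
        name='muxer',
        properties={
            'batch-size': 1,
        },
    ),
    # ... remaining elements are untouched by this patch
]
```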