455 lpr demo (#467)
* license plate recognition demo
dorgun authored Oct 6, 2023
1 parent cedebfc commit 760fed6
Showing 15 changed files with 626 additions and 0 deletions.
6 changes: 6 additions & 0 deletions docs/performance.md
@@ -110,3 +110,9 @@ Note: `yolov8_seg` always has a buffer length of 10. `BUFFER_QUEUES` env doesn't
| [#347](https://github.com/insight-platform/Savant/issues/347) | 44.34 | 13.07 |
| [#407](https://github.com/insight-platform/Savant/issues/407) | 67.73 | 21.57 |
| [#456](https://github.com/insight-platform/Savant/issues/456) | 68.48 | 21.71 |

### license_plate_recognition

| Savant ver. | A4000 | Jetson NX |
|---------------------------------------------------------------|-------|-----------|
| [#455](https://github.com/insight-platform/Savant/issues/455) | 92.4 | 25.29 |
3 changes: 3 additions & 0 deletions samples/assets/stub_imgs/smpte100_1920x1080.jpeg
Binary file not shown.
3 changes: 3 additions & 0 deletions samples/assets/stub_imgs/smpte100_3840x2160.jpeg
Binary file not shown.
58 changes: 58 additions & 0 deletions samples/license_plate_recognition/README.md
@@ -0,0 +1,58 @@
# License plate recognition

The app partially reproduces [deepstream_lpr_app](https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app) in the Savant framework. The pipeline detects cars with a YOLOv8 model, detects license plates with the NVIDIA LPD model, tracks both cars and plates with the NVIDIA tracker, and recognizes the plates with the NVIDIA LPR model. The results are displayed on the frames.
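
As a rough orientation, these stages map onto a chain of elements in the module's `module.yml`. The sketch below is illustrative only: the element kinds follow Savant conventions (`nvinfer@detector`, `nvtracker`, `nvinfer@attribute_model`), but the names are hypothetical and the actual configuration ships as `module.yml` in this sample.

```yaml
# illustrative sketch, not the shipped config; see module.yml in this sample
pipeline:
  elements:
    - element: nvinfer@detector         # YOLOv8 car detector
      name: yolov8_det
    - element: nvtracker                # NVIDIA tracker for cars and plates
    - element: nvinfer@detector         # NVIDIA LPD: finds plates on cars
      name: lpd
    - element: nvinfer@attribute_model  # NVIDIA LPR: reads the plate text
      name: lpr
```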

Preview:

![](assets/license-plate-recognition-1080.webp)

Tested on platforms:

- Xavier NX, Xavier AGX;
- Nvidia Turing, Ampere.

Demonstrated adapters:

- Video loop source adapter;
- Always-ON RTSP sink adapter.

**Note**: The Ubuntu 22.04 runtime configuration [guide](../../docs/runtime-configuration.md) explains how to set up the runtime to run Savant pipelines.

Run the demo:

```bash
git clone https://github.com/insight-platform/Savant.git
cd Savant/samples/license_plate_recognition

# if x86
../../utils/check-environment-compatible && docker compose -f docker-compose.x86.yml up

# if Jetson
../../utils/check-environment-compatible && docker compose -f docker-compose.l4t.yml up

# open 'rtsp://127.0.0.1:554/stream' in your player
# or visit 'http://127.0.0.1:888/stream/' (LL-HLS)

# Ctrl+C to stop running the compose bundle

# to get back to project root
cd ../..
```
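
If you prefer checking the stream from the command line instead of a GUI player, something like the following works (assuming `ffplay` from FFmpeg is installed on the host; any RTSP-capable player will do):

```bash
# play the demo RTSP stream; TCP transport avoids UDP packet-loss artifacts
ffplay -rtsp_transport tcp rtsp://127.0.0.1:554/stream
```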

## Performance Measurement

Download the video file to your local folder. For example, create a `data` folder and download the video into it (all commands must be executed from the root directory of the Savant project):

```bash
# you are expected to be in Savant/ directory

mkdir -p data && curl -o data/lpr_test_1080p.mp4 \
https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
```

Now you are ready to run the performance benchmark with the following command:

```bash
./samples/license_plate_recognition/run_perf.sh
```

Git LFS file not shown
Git LFS file not shown
62 changes: 62 additions & 0 deletions samples/license_plate_recognition/docker-compose.l4t.yml
@@ -0,0 +1,62 @@
version: "3.3"
services:

  video-loop-source:
    image: ghcr.io/insight-platform/savant-adapters-gstreamer-l4t:latest
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - /tmp/video-loop-source-downloads:/tmp/video-loop-source-downloads
    environment:
      - LOCATION=https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
      - DOWNLOAD_PATH=/tmp/video-loop-source-downloads
      - ZMQ_ENDPOINT=dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - SYNC_OUTPUT=True
    entrypoint: /opt/savant/adapters/gst/sources/video_loop.sh
    depends_on:
      module:
        condition: service_healthy

  module:
    build:
      context: .
      dockerfile: docker/Dockerfile.l4t
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../../models/license_plate_recognition:/models
      - ../../downloads/license_plate_recognition:/downloads
      - .:/opt/savant/samples/license_plate_recognition
    command: samples/license_plate_recognition/module.yml
    environment:
      - ZMQ_SRC_ENDPOINT=router+bind:ipc:///tmp/zmq-sockets/input-video.ipc
      - ZMQ_SINK_ENDPOINT=pub+bind:ipc:///tmp/zmq-sockets/output-video.ipc
      - FPS_PERIOD=1000
    runtime: nvidia

  always-on-sink:
    image: ghcr.io/insight-platform/savant-adapters-deepstream-l4t:latest
    restart: unless-stopped
    ports:
      - "554:554"    # RTSP
      - "1935:1935"  # RTMP
      - "888:888"    # HLS
      - "8889:8889"  # WebRTC
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../assets/stub_imgs:/stub_imgs
    environment:
      - ZMQ_ENDPOINT=sub+connect:ipc:///tmp/zmq-sockets/output-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - STUB_FILE_LOCATION=/stub_imgs/smpte100_1920x1080.jpeg
      - DEV_MODE=True
      - RTSP_LATENCY_MS=500
      - ENCODER_PROFILE=High
      - ENCODER_BITRATE=8000000
      - FRAMERATE=30/1
    command: python -m adapters.ds.sinks.always_on_rtsp
    runtime: nvidia

volumes:
  zmq_sockets:
74 changes: 74 additions & 0 deletions samples/license_plate_recognition/docker-compose.x86.yml
@@ -0,0 +1,74 @@
version: "3.3"
services:

  video-loop-source:
    image: ghcr.io/insight-platform/savant-adapters-gstreamer:latest
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - /tmp/video-loop-source-downloads:/tmp/video-loop-source-downloads
    environment:
      - LOCATION=https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
      - DOWNLOAD_PATH=/tmp/video-loop-source-downloads
      - ZMQ_ENDPOINT=dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - SYNC_OUTPUT=True
    entrypoint: /opt/savant/adapters/gst/sources/video_loop.sh
    depends_on:
      module:
        condition: service_healthy

  module:
    build:
      context: .
      dockerfile: docker/Dockerfile.x86
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../../models/license_plate_recognition:/models
      - ../../downloads/license_plate_recognition:/downloads
      - .:/opt/savant/samples/license_plate_recognition
    command: samples/license_plate_recognition/module.yml
    environment:
      - ZMQ_SRC_ENDPOINT=router+bind:ipc:///tmp/zmq-sockets/input-video.ipc
      - ZMQ_SINK_ENDPOINT=pub+bind:ipc:///tmp/zmq-sockets/output-video.ipc
      - FPS_PERIOD=1000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  always-on-sink:
    image: ghcr.io/insight-platform/savant-adapters-deepstream:latest
    restart: unless-stopped
    ports:
      - "554:554"    # RTSP
      - "1935:1935"  # RTMP
      - "888:888"    # HLS
      - "8889:8889"  # WebRTC
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../assets/stub_imgs:/stub_imgs
    environment:
      - ZMQ_ENDPOINT=sub+connect:ipc:///tmp/zmq-sockets/output-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - STUB_FILE_LOCATION=/stub_imgs/smpte100_1920x1080.jpeg
      - DEV_MODE=True
      - RTSP_LATENCY_MS=500
      - ENCODER_PROFILE=High
      - ENCODER_BITRATE=8000000
      - FRAMERATE=30/1
    command: python -m adapters.ds.sinks.always_on_rtsp
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  zmq_sockets:
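
In both compose bundles the source container waits for the module's health check (`depends_on` with `condition: service_healthy`), so nothing flows until the module reports healthy. If the RTSP output stays blank, standard Compose commands help to inspect the module (shown for the x86 bundle; substitute `docker-compose.l4t.yml` on Jetson):

```bash
# follow the module logs while engines are built and the pipeline starts
docker compose -f docker-compose.x86.yml logs -f module

# list the services with their reported health state
docker compose -f docker-compose.x86.yml ps
```
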
43 changes: 43 additions & 0 deletions samples/license_plate_recognition/docker/Dockerfile.l4t
@@ -0,0 +1,43 @@
# build nvinfer custom library for yolo models (create engine and parse bbox functions)
# https://github.com/marcoslucianops/DeepStream-Yolo
# build custom parser for license plate recognition model
ARG DS_YOLO_PATH=/opt/yolo
ARG DS_LPR_APP_PATH=/opt/lpr
ARG NVDSINFER_PATH=/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer

FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch as builder

ENV CUDA_VER=11.4
ARG DS_YOLO_VER=000bcd676d48eb236076aed111ab23ff0105de3d
ARG DS_LPR_APP_VER=9c761e5ec9fea5ac4c6e3f4357326693d2d3cf48
ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

RUN git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git $DS_LPR_APP_PATH \
&& cd $DS_LPR_APP_PATH \
&& git checkout $DS_LPR_APP_VER \
&& cd $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser \
&& make

RUN git clone https://github.com/marcoslucianops/DeepStream-Yolo.git $DS_YOLO_PATH \
&& cd $DS_YOLO_PATH \
&& git checkout $DS_YOLO_VER \
&& make -C nvdsinfer_custom_impl_Yolo

# patch nvdsinfer_model_builder.cpp: use engine path to place created engine
COPY nvdsinfer_model_builder.patch $NVDSINFER_PATH/
RUN cd $NVDSINFER_PATH && \
patch nvdsinfer_model_builder.cpp < nvdsinfer_model_builder.patch && \
make

FROM ghcr.io/insight-platform/savant-deepstream-l4t:latest

ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

COPY --from=builder $DS_YOLO_PATH/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so /opt/savant/lib/
COPY --from=builder $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so /opt/savant/lib/
COPY --from=builder $NVDSINFER_PATH/libnvds_infer.so /opt/nvidia/deepstream/deepstream/lib/
COPY --from=builder $DS_LPR_APP_PATH/deepstream-lpr-app/dict_us.txt /opt/savant/dict.txt
43 changes: 43 additions & 0 deletions samples/license_plate_recognition/docker/Dockerfile.x86
@@ -0,0 +1,43 @@
# build nvinfer custom library for yolo models (create engine and parse bbox functions)
# https://github.com/marcoslucianops/DeepStream-Yolo
# build custom parser for license plate recognition model
ARG DS_YOLO_PATH=/opt/yolo
ARG DS_LPR_APP_PATH=/opt/lpr
ARG NVDSINFER_PATH=/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer

FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch as builder

ENV CUDA_VER=12.1
ARG DS_YOLO_VER=000bcd676d48eb236076aed111ab23ff0105de3d
ARG DS_LPR_APP_VER=9c761e5ec9fea5ac4c6e3f4357326693d2d3cf48
ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

RUN git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git $DS_LPR_APP_PATH \
&& cd $DS_LPR_APP_PATH \
&& git checkout $DS_LPR_APP_VER \
&& cd $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser \
&& make

RUN git clone https://github.com/marcoslucianops/DeepStream-Yolo.git $DS_YOLO_PATH \
&& cd $DS_YOLO_PATH \
&& git checkout $DS_YOLO_VER \
&& make -C nvdsinfer_custom_impl_Yolo

# patch nvdsinfer_model_builder.cpp: use engine path to place created engine
COPY nvdsinfer_model_builder.patch $NVDSINFER_PATH/
RUN cd $NVDSINFER_PATH && \
patch nvdsinfer_model_builder.cpp < nvdsinfer_model_builder.patch && \
make

FROM ghcr.io/insight-platform/savant-deepstream:latest

ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

COPY --from=builder $DS_YOLO_PATH/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so /opt/savant/lib/
COPY --from=builder $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so /opt/savant/lib/
COPY --from=builder $NVDSINFER_PATH/libnvds_infer.so /opt/nvidia/deepstream/deepstream/lib/
COPY --from=builder $DS_LPR_APP_PATH/deepstream-lpr-app/dict_us.txt /opt/savant/dict.txt
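
Both Dockerfiles copy the custom libraries into fixed paths in the final image (`/opt/savant/lib/` and the DeepStream lib directory). A minimal sketch to confirm the build produced them, assuming you build from the sample directory as the compose files do (the `lpr-module` tag is hypothetical):

```bash
# build the sample module image (run from samples/license_plate_recognition)
docker build -f docker/Dockerfile.x86 -t lpr-module .

# list the copied custom libraries inside the built image
docker run --rm --entrypoint ls lpr-module /opt/savant/lib/
```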