Commit: license plate recognition demo

Showing 15 changed files with 626 additions and 0 deletions.

samples/license_plate_recognition/README.md
@@ -0,0 +1,58 @@
# License plate recognition

The app partially reproduces [deepstream_lpr_app](https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app) in the Savant framework. The pipeline detects cars with a YOLOv8 model and license plates with the NVIDIA LPD model, tracks cars and plates with the NVIDIA tracker, and recognizes the plates with the NVIDIA LPR model. The results are displayed on the frames.
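
For orientation, the stage order can be pictured roughly as follows in a Savant-style module configuration. This is only a sketch, not the module.yml shipped with the sample; the element types and the yolov8/lpd/lpr names are assumptions.

```yaml
# Illustrative sketch of the stage order only; element types and names are
# assumptions, not the configuration shipped with this sample.
pipeline:
  elements:
    - element: nvinfer@detector          # car detection (YOLOv8)
      name: yolov8
    - element: nvtracker                 # track cars and plates across frames
    - element: nvinfer@detector          # license plate detection (NVIDIA LPD)
      name: lpd
    - element: nvinfer@attribute_model   # license plate recognition (NVIDIA LPR)
      name: lpr
```
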
Preview:

![](assets/license-plate-recognition-1080.webp)

Tested on platforms:

- Xavier NX, Xavier AGX;
- Nvidia Turing, Ampere.

Demonstrated adapters:

- RTSP source adapter;
- Always-ON RTSP sink adapter.

**Note**: the Ubuntu 22.04 runtime configuration [guide](../../docs/runtime-configuration.md) explains how to configure the runtime to run Savant pipelines.

Run the demo:

```bash
git clone https://github.com/insight-platform/Savant.git
cd Savant/samples/license_plate_recognition

# if x86
../../utils/check-environment-compatible && docker compose -f docker-compose.x86.yml up

# if Jetson
../../utils/check-environment-compatible && docker compose -f docker-compose.l4t.yml up

# open 'rtsp://127.0.0.1:554/stream' in your player
# or visit 'http://127.0.0.1:888/stream/' (LL-HLS)

# Ctrl+C to stop running the compose bundle

# to get back to project root
cd ../..
```
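
If you prefer a command-line player, the RTSP stream can be opened with something like the following (ffplay is an assumption here; any RTSP-capable player works):

```bash
# play the stream published by the Always-ON RTSP sink
ffplay -rtsp_transport tcp rtsp://127.0.0.1:554/stream
```
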
## Performance Measurement

Download the video file to your local folder. For example, create a `data` folder and download the video into it (all commands must be executed from the root directory of the Savant project):

```bash
# you are expected to be in Savant/ directory

mkdir -p data && curl -o data/lpr_test_1080p.mp4 \
  https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
```
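
Optionally, you can sanity-check the downloaded file before running the benchmark (ffprobe is not required by the demo; this assumes FFmpeg tools are installed locally):

```bash
# print resolution and frame rate of the downloaded sample
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,avg_frame_rate -of csv=p=0 \
  data/lpr_test_1080p.mp4
```
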
Now you are ready to run the performance benchmark with the following command:

```bash
./samples/license_plate_recognition/run_perf.sh
```

samples/license_plate_recognition/assets/license-plate-recognition-1080.webp (Git LFS file, not shown)
samples/license_plate_recognition/assets/license-plate-recognition-400.webp (Git LFS file, not shown)

samples/license_plate_recognition/docker-compose.l4t.yml
@@ -0,0 +1,62 @@
version: "3.3"
services:

  video-loop-source:
    image: ghcr.io/insight-platform/savant-adapters-gstreamer-l4t:latest
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - /tmp/video-loop-source-downloads:/tmp/video-loop-source-downloads
    environment:
      - LOCATION=https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
      - DOWNLOAD_PATH=/tmp/video-loop-source-downloads
      - ZMQ_ENDPOINT=dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - SYNC_OUTPUT=True
    entrypoint: /opt/savant/adapters/gst/sources/video_loop.sh
    depends_on:
      module:
        condition: service_healthy

  module:
    build:
      context: .
      dockerfile: docker/Dockerfile.l4t
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../../models/license_plate_recognition:/models
      - ../../downloads/license_plate_recognition:/downloads
      - .:/opt/savant/samples/license_plate_recognition
    command: samples/license_plate_recognition/module.yml
    environment:
      - ZMQ_SRC_ENDPOINT=router+bind:ipc:///tmp/zmq-sockets/input-video.ipc
      - ZMQ_SINK_ENDPOINT=pub+bind:ipc:///tmp/zmq-sockets/output-video.ipc
      - FPS_PERIOD=1000
    runtime: nvidia

  always-on-sink:
    image: ghcr.io/insight-platform/savant-adapters-deepstream-l4t:latest
    restart: unless-stopped
    ports:
      - "554:554"    # RTSP
      - "1935:1935"  # RTMP
      - "888:888"    # HLS
      - "8889:8889"  # WebRTC
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../assets/stub_imgs:/stub_imgs
    environment:
      - ZMQ_ENDPOINT=sub+connect:ipc:///tmp/zmq-sockets/output-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - STUB_FILE_LOCATION=/stub_imgs/smpte100_1920x1080.jpeg
      - DEV_MODE=True
      - RTSP_LATENCY_MS=500
      - ENCODER_PROFILE=High
      - ENCODER_BITRATE=8000000
      - FRAMERATE=30/1
    command: python -m adapters.ds.sinks.always_on_rtsp
    runtime: nvidia

volumes:
  zmq_sockets:
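
The three services are wired over ZeroMQ IPC sockets: the video-loop source pushes frames to the module (`dealer+connect` to `router+bind` on input-video.ipc), and the module publishes processed frames to the Always-ON RTSP sink (`pub+bind` to `sub+connect` on output-video.ipc). To loop a local file instead of the hosted sample, a compose override along these lines should work; the override file name and the `../../data` mount are assumptions (the file is expected where the Performance Measurement section downloads it), and the video-loop adapter is assumed to accept a local path in LOCATION:

```yaml
# docker-compose.override.yml (hypothetical): point the video-loop source at a local file.
# Run with: docker compose -f docker-compose.l4t.yml -f docker-compose.override.yml up
services:
  video-loop-source:
    volumes:
      - ../../data:/data:ro
    environment:
      - LOCATION=/data/lpr_test_1080p.mp4
```
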
samples/license_plate_recognition/docker-compose.x86.yml
@@ -0,0 +1,74 @@
version: "3.3"
services:

  video-loop-source:
    image: ghcr.io/insight-platform/savant-adapters-gstreamer:latest
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - /tmp/video-loop-source-downloads:/tmp/video-loop-source-downloads
    environment:
      - LOCATION=https://eu-central-1.linodeobjects.com/savant-data/demo/lpr_test_1080p.mp4
      - DOWNLOAD_PATH=/tmp/video-loop-source-downloads
      - ZMQ_ENDPOINT=dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - SYNC_OUTPUT=True
    entrypoint: /opt/savant/adapters/gst/sources/video_loop.sh
    depends_on:
      module:
        condition: service_healthy

  module:
    build:
      context: .
      dockerfile: docker/Dockerfile.x86
    restart: unless-stopped
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../../models/license_plate_recognition:/models
      - ../../downloads/license_plate_recognition:/downloads
      - .:/opt/savant/samples/license_plate_recognition
    command: samples/license_plate_recognition/module.yml
    environment:
      - ZMQ_SRC_ENDPOINT=router+bind:ipc:///tmp/zmq-sockets/input-video.ipc
      - ZMQ_SINK_ENDPOINT=pub+bind:ipc:///tmp/zmq-sockets/output-video.ipc
      - FPS_PERIOD=1000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  always-on-sink:
    image: ghcr.io/insight-platform/savant-adapters-deepstream:latest
    restart: unless-stopped
    ports:
      - "554:554"    # RTSP
      - "1935:1935"  # RTMP
      - "888:888"    # HLS
      - "8889:8889"  # WebRTC
    volumes:
      - zmq_sockets:/tmp/zmq-sockets
      - ../assets/stub_imgs:/stub_imgs
    environment:
      - ZMQ_ENDPOINT=sub+connect:ipc:///tmp/zmq-sockets/output-video.ipc
      - SOURCE_ID=nvidia-sample-processed
      - STUB_FILE_LOCATION=/stub_imgs/smpte100_1920x1080.jpeg
      - DEV_MODE=True
      - RTSP_LATENCY_MS=500
      - ENCODER_PROFILE=High
      - ENCODER_BITRATE=8000000
      - FRAMERATE=30/1
    command: python -m adapters.ds.sinks.always_on_rtsp
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  zmq_sockets:
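
The x86 variant differs from the Jetson one only in the image tags, the Dockerfile, and the GPU reservation syntax. On the first start the module builds TensorRT engines for the models, which can take a while; following the module logs is a convenient way to watch the progress:

```bash
# follow the module logs while the pipeline starts up
docker compose -f docker-compose.x86.yml logs -f module
```
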
samples/license_plate_recognition/docker/Dockerfile.l4t
@@ -0,0 +1,43 @@
# build nvinfer custom library for yolo models (create engine and parse bbox functions)
# https://github.com/marcoslucianops/DeepStream-Yolo
# build custom parser for license plate recognition model
ARG DS_YOLO_PATH=/opt/yolo
ARG DS_LPR_APP_PATH=/opt/lpr
ARG NVDSINFER_PATH=/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer

FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch as builder

ENV CUDA_VER=11.4
ARG DS_YOLO_VER=000bcd676d48eb236076aed111ab23ff0105de3d
ARG DS_LPR_APP_VER=9c761e5ec9fea5ac4c6e3f4357326693d2d3cf48
ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

RUN git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git $DS_LPR_APP_PATH \
    && cd $DS_LPR_APP_PATH \
    && git checkout $DS_LPR_APP_VER \
    && cd $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser \
    && make

RUN git clone https://github.com/marcoslucianops/DeepStream-Yolo.git $DS_YOLO_PATH \
    && cd $DS_YOLO_PATH \
    && git checkout $DS_YOLO_VER \
    && make -C nvdsinfer_custom_impl_Yolo

# patch nvdsinfer_model_builder.cpp: use engine path to place created engine
COPY nvdsinfer_model_builder.patch $NVDSINFER_PATH/
RUN cd $NVDSINFER_PATH && \
    patch nvdsinfer_model_builder.cpp < nvdsinfer_model_builder.patch && \
    make

FROM ghcr.io/insight-platform/savant-deepstream-l4t:latest

ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

COPY --from=builder $DS_YOLO_PATH/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so /opt/savant/lib/
COPY --from=builder $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so /opt/savant/lib/
COPY --from=builder $NVDSINFER_PATH/libnvds_infer.so /opt/nvidia/deepstream/deepstream/lib/
COPY --from=builder $DS_LPR_APP_PATH/deepstream-lpr-app/dict_us.txt /opt/savant/dict.txt

samples/license_plate_recognition/docker/Dockerfile.x86
@@ -0,0 +1,43 @@
# build nvinfer custom library for yolo models (create engine and parse bbox functions)
# https://github.com/marcoslucianops/DeepStream-Yolo
# build custom parser for license plate recognition model
ARG DS_YOLO_PATH=/opt/yolo
ARG DS_LPR_APP_PATH=/opt/lpr
ARG NVDSINFER_PATH=/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer

FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch as builder

ENV CUDA_VER=12.1
ARG DS_YOLO_VER=000bcd676d48eb236076aed111ab23ff0105de3d
ARG DS_LPR_APP_VER=9c761e5ec9fea5ac4c6e3f4357326693d2d3cf48
ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

RUN git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git $DS_LPR_APP_PATH \
    && cd $DS_LPR_APP_PATH \
    && git checkout $DS_LPR_APP_VER \
    && cd $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser \
    && make

RUN git clone https://github.com/marcoslucianops/DeepStream-Yolo.git $DS_YOLO_PATH \
    && cd $DS_YOLO_PATH \
    && git checkout $DS_YOLO_VER \
    && make -C nvdsinfer_custom_impl_Yolo

# patch nvdsinfer_model_builder.cpp: use engine path to place created engine
COPY nvdsinfer_model_builder.patch $NVDSINFER_PATH/
RUN cd $NVDSINFER_PATH && \
    patch nvdsinfer_model_builder.cpp < nvdsinfer_model_builder.patch && \
    make

FROM ghcr.io/insight-platform/savant-deepstream:latest

ARG DS_YOLO_PATH
ARG DS_LPR_APP_PATH
ARG NVDSINFER_PATH

COPY --from=builder $DS_YOLO_PATH/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so /opt/savant/lib/
COPY --from=builder $DS_LPR_APP_PATH/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so /opt/savant/lib/
COPY --from=builder $NVDSINFER_PATH/libnvds_infer.so /opt/nvidia/deepstream/deepstream/lib/
COPY --from=builder $DS_LPR_APP_PATH/deepstream-lpr-app/dict_us.txt /opt/savant/dict.txt
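
Docker Compose builds these images automatically on the first `up` (build context is the sample directory, as set in the compose files). To build the x86 module image by hand, something like the following should work; the image tag is arbitrary:

```bash
# manual build of the x86 module image; compose does this for you on `up`
cd samples/license_plate_recognition
docker build -f docker/Dockerfile.x86 -t lpr-module:x86 .
```
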