An OpenVINO application that tracks the people in a queue and checks whether the queue length exceeds a set limit.
- What it Does
- How it Works
- Requirements
- Intel DevCloud
- Setup
- Run the application
The Smart Queuing System demonstrates how to create a video AI solution on the edge using Intel® hardware and software tools. The app detects people in a specified area and counts the number of people in each queue. It then notifies a person when they should move to another queue to reduce congestion. I strongly recommend reading the WRITEUP.
The people counter script uses the Inference Engine included in the Intel® Distribution of OpenVINO™ Toolkit. To test the script and determine which hardware is best for a particular use case, we use a job submission script on Intel DevCloud to run it on different hardware. To propose a hardware choice, we record the model loading time, inference time and FPS for each device.
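The three metrics can be gathered with a small timing helper. The sketch below is a minimal illustration of the idea: the `benchmark` function and the stub callables are assumptions standing in for the actual Inference Engine load and infer calls, not code from this project.

```python
import time

def benchmark(load_model, infer, frames):
    """Measure the three metrics used to compare devices:
    model load time, total inference time, and FPS."""
    start = time.perf_counter()
    model = load_model()
    load_time = time.perf_counter() - start

    start = time.perf_counter()
    for frame in frames:
        infer(model, frame)
    infer_time = time.perf_counter() - start

    fps = len(frames) / infer_time if infer_time > 0 else 0.0
    return load_time, infer_time, fps

# Stand-in callables for illustration; in the real app these would wrap
# the OpenVINO network-loading and inference-request calls.
load_time, infer_time, fps = benchmark(lambda: "model",
                                       lambda model, frame: None,
                                       frames=[0] * 100)
```

The same helper can be pointed at each device's run, so the numbers being compared are collected identically everywhere.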
- This project uses Intel DevCloud to test on CPU, GPU, FPGA and VPU, so no specific hardware is required.
- Intel® Distribution of OpenVINO™ toolkit 2019 R3 release.
- Python 3.5 or 3.6
The Intel® DevCloud for the Edge is a cloud service designed to help developers prototype and experiment with computer vision applications using the Intel® Distribution of OpenVINO™ Toolkit. Once registered, developers can access a series of Python and C++ based Jupyter Notebook tutorials and sample solutions and execute them directly from a web browser. Then, developers can create their own Jupyter Notebooks and quickly try them out on a variety of hosted Intel® hardware solutions specifically designed for deep learning inferencing.
- Reduced time to access comprehensive Intel® development solutions, hardware and software, for deep learning and computer vision application development with just an internet connection.
- Access to fully configured physical edge machines pre-installed with the Intel® Distribution of OpenVINO™ Toolkit (CPU, iGPU, VPU and FPGA) hosted in the cloud powered by Intel® Xeon® Scalable processors.
- Ability to evaluate and choose the right Intel® hardware acceleration option for your application.
- A vast library of pre-trained models from the Intel® Distribution of OpenVINO™ Toolkit and ability to upload your own custom pre-trained models to evaluate the best framework, topology, and hardware acceleration solution for your unique application.
Utilize the classroom workspace, or refer to the relevant instructions for your operating system for this step.
Much of the Intel® DevCloud for the Edge documentation can be accessed without registering. You will need to register for an Intel® DevCloud for the Edge account to explore, run the examples, upload your own code and test the hardware.
- On the Home page, click Sign in on the top right corner.
- Click Register and follow the prompts to enter the information requested.
- Within 48 hours you will receive an invitation email to your Intel® DevCloud for the Edge account.
- For increased security, the Intel® DevCloud for the Edge is protected by 2-factor authentication. Please check your email for the 6-digit security code. Copy/paste the full URL from that email containing the uuid argument into a browser window. All current web browsers are supported.
- Follow the prompts to complete your Intel DevCloud account registration.
- Once you have completed account registration, you can return any time to the Home page and click Sign in at the top right corner to access your account.
- Each time you sign in, the top right corner displays the total number of days you have access to the Intel® DevCloud for the Edge resource. You can request an extension from within the portal.
The figure below illustrates the user workflow for code development, job submission and viewing results.
The person_detect.py file does the person counting for you. Experiment with the confidence threshold value and see how the predictions change.
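As a rough illustration of what the thresholding step involves, the sketch below filters SSD-style detections (rows of [image_id, label, conf, xmin, ymin, xmax, ymax]) by confidence and scales the boxes to pixel coordinates. The function name and sample values are assumptions for illustration, not the actual person_detect.py code.

```python
def filter_detections(detections, threshold=0.6, width=640, height=480):
    """Keep detections whose confidence exceeds the threshold and
    convert their normalized box coordinates to pixels."""
    boxes = []
    for det in detections:
        conf = det[2]
        if conf > threshold:
            xmin = int(det[3] * width)
            ymin = int(det[4] * height)
            xmax = int(det[5] * width)
            ymax = int(det[6] * height)
            boxes.append((xmin, ymin, xmax, ymax))
    return boxes

sample = [
    [0, 1, 0.92, 0.10, 0.20, 0.30, 0.80],  # confident detection, kept
    [0, 1, 0.35, 0.50, 0.10, 0.60, 0.40],  # below threshold, dropped
]
print(filter_detections(sample))  # → [(64, 96, 192, 384)]
```

Raising the threshold drops uncertain detections (fewer false positives, possibly missed people); lowering it does the opposite, which is why it is worth experimenting with.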
The queue_job.sh utility handles submitting the job to multiple devices on the DevCloud.
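A minimal sketch of what such a job script might look like: the argument names, defaults, and person_detect.py flags are assumptions that mirror the positional qsub -F strings used in this project, not the actual queue_job.sh.

```shell
#!/bin/bash
# Sketch of a DevCloud job script (names and flags are assumptions).
# Positional arguments mirror the qsub -F string:
#   model_path device video_path queue_param output_path max_people
MODEL_PATH=${1:-model.xml}
DEVICE=${2:-CPU}
VIDEO=${3:-video.mp4}
QUEUE_PARAM=${4:-/data/queue_param/manufacturing.npy}
OUTPUT=${5:-results}
MAX_PEOPLE=${6:-2}

mkdir -p "$OUTPUT"
CMD="python3 person_detect.py --model $MODEL_PATH --device $DEVICE --video $VIDEO --queue_param $QUEUE_PARAM --output_path $OUTPUT --max_people $MAX_PEOPLE"
echo "$CMD" > "$OUTPUT/cmd.txt"   # record the assembled command for inspection
echo "Would run: $CMD"
```

Because the device name is just an argument, one script serves CPU, GPU, VPU and FPGA submissions alike.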
The project includes three notebooks for different use cases:
Each of the above notebooks follows the same process; be sure to change the original video location according to your system in each case.
We write a script to submit a job to an IEI Tank* 870-Q170 edge node with an Intel Core™ i5-6500TE processor. The inference workload should run on the CPU.
```
cpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te -F "[model_path] CPU [original_video_path] /data/queue_param/manufacturing.npy [output_path] 2" -N store_core
```
We write a script to submit a job to an IEI Tank* 870-Q170 edge node with an Intel® Core i5-6500TE. The inference workload should run on the Intel® HD Graphics 530 integrated GPU.
```
gpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:intel-hd-530 -F "[model_path] GPU [original_video_path] /data/queue_param/manufacturing.npy [output_path] 2" -N store_core
```
We write a script to submit a job to an IEI Tank 870-Q170 edge node with an Intel Core i5-6500TE CPU. The inference workload should run on an Intel Neural Compute Stick 2 installed in this node.
```
vpu_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:intel-ncs2 -F "[model_path] MYRIAD [original_video_path] /data/queue_param/manufacturing.npy [output_path] 2" -N store_core
```
We write a script to submit a job to an IEI Tank 870-Q170 edge node with an Intel Core™ i5-6500TE CPU. The inference workload will run on the IEI Mustang-F100-A10 FPGA card installed in this node.
```
fpga_job_id = !qsub queue_job.sh -d . -l nodes=1:tank-870:i5-6500te:iei-mustang-f100-a10 -F "[model_path] HETERO:FPGA,CPU [original_video_path] /data/queue_param/manufacturing.npy [output_path] 2" -N store_core
```
We then compare performance across these devices on three metrics:
- FPS
- Model Load Time
- Inference Time
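Once all four jobs finish, the comparison can be as simple as ranking a small results table per metric. The numbers below are made-up placeholders for illustration only; the real values come from the job outputs.

```python
# Hypothetical benchmark numbers, for illustration only.
results = {
    "CPU":  {"fps": 22.5, "load_time": 1.3,  "infer_time": 12.1},
    "GPU":  {"fps": 24.1, "load_time": 32.7, "infer_time": 11.3},
    "VPU":  {"fps": 7.8,  "load_time": 2.6,  "infer_time": 35.0},
    "FPGA": {"fps": 26.4, "load_time": 29.5, "infer_time": 10.4},
}

best_fps = max(results, key=lambda d: results[d]["fps"])
fastest_load = min(results, key=lambda d: results[d]["load_time"])
fastest_infer = min(results, key=lambda d: results[d]["infer_time"])
print(best_fps, fastest_load, fastest_infer)  # → FPGA CPU FPGA
```

No single device wins every metric (accelerators often trade a long model load for higher throughput), which is why the proposal weighs all three against the use case.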