
Commit

update readme
EC-Zed committed Jan 26, 2022
1 parent 4b43f7a commit eb0f257
Showing 7 changed files with 4 additions and 4 deletions.
Binary file modified docs/mainres/aicamwithncs.png
Binary file modified docs/mainres/frame.pptx
Binary file modified docs/mainres/mulitincs.png
Binary file modified docs/mainres/ncsmode.png
Binary file modified docs/mainres/uvcwithncs.png
@@ -1,6 +1,6 @@
## Introduction

- This example is used to run serial synchronous face detection and face feature point marking inference with camera video(YU12) streaming on NCC .
+ This example runs serial synchronous face detection and landmarks regression inference with camera video (YU12) streaming on the NCC.

```
video(YU12) -> sync inference -> face image(BGR) -> sync inference -> composite display
```
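
Below is a minimal host-side sketch of that serial flow, assuming OpenCV for capture and drawing; `detect_faces_sync()` and `regress_landmarks_sync()` are hypothetical stand-ins for the OpenNCC native SDK's synchronous inference calls, which this readme does not name:

```
// Hedged sketch of the two-stage serial synchronous flow above.
// detect_faces_sync() and regress_landmarks_sync() are placeholder names,
// not real OpenNCC SDK entry points.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> detect_faces_sync(const cv::Mat& bgr) {
    // Stub: a real app would block on the SDK until face boxes return.
    return {};
}

std::vector<cv::Point> regress_landmarks_sync(const cv::Mat& face_bgr) {
    // Stub: a real app would block on the SDK until landmarks return.
    return {};
}

int main() {
    cv::VideoCapture cap(0);             // camera source (YU12 upstream)
    cv::Mat frame;
    while (cap.read(frame)) {            // OpenCV delivers frames as BGR
        for (const cv::Rect& box : detect_faces_sync(frame)) {
            cv::Mat face = frame(box).clone();              // face image (BGR)
            for (const cv::Point& p : regress_landmarks_sync(face))
                cv::circle(frame, box.tl() + p, 2, {0, 255, 0}, -1);
            cv::rectangle(frame, box, {255, 0, 0}, 2);
        }
        cv::imshow("composite display", frame);
        if (cv::waitKey(1) == 27) break;  // ESC to quit
    }
    return 0;
}
```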
6 changes: 3 additions & 3 deletions native_vpu_api/readme.md
@@ -8,7 +8,7 @@ The openncc native SDK provides the following working modes:

![ncs mode][1]

- * In the accelerator card mode, the host app obtains the video stream from local file, IPC, webcam or v4l2 Mipi CAM.Could configure the preprocessing module according to the resolution and the format of the input image of the AI model, sends the images to the openncc SOM through the openncc native SDK, and device returns the AI-meta results. The inference supports asynchronous and synchronous modes. The inference pipeline on the openncc SOM could be configured through the openncc model [JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html).
+ * In the accelerator card mode, the host app obtains the video stream from a local file, IPC, a webcam, or a v4l2 MIPI camera. It can configure the preprocessing module according to the resolution and format of the AI model's input image, send the images to the openncc SOM through the openncc native SDK, and the device returns the AI-meta results. Inference supports asynchronous and synchronous modes. The inference pipeline on the openncc SOM can be configured through the [openncc model JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html).

* The Openncc SOM supports up to 6 locally configured inference pipelines, and 2 pipelines can run concurrently in real time. Users can realize multi-level chaining or multi-model concurrency of inference through intermediate processing in the host app, as sketched below.
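
As a loose illustration of driving two concurrent pipelines from the host app, here is a sketch in which `run_pipeline_sync()` is a hypothetical placeholder for the SDK's synchronous inference call, and the pipeline ids and frame counts are illustrative only:

```
// Hedged sketch: the host app drives two concurrently running pipelines.
// run_pipeline_sync() is a placeholder, not a real OpenNCC SDK function.
#include <iostream>
#include <string>
#include <thread>

std::string run_pipeline_sync(int pipeline_id, int frame_id) {
    // Stub: a real app would send a frame over the OpenNCC native SDK and
    // block until the AI-meta result for this pipeline returns.
    return "meta(pipeline " + std::to_string(pipeline_id) +
           ", frame " + std::to_string(frame_id) + ")";
}

int main() {
    // Up to 6 pipelines can be configured on the SOM; 2 may run at once,
    // so the host drives two worker loops in parallel.
    auto worker = [](int pipeline_id) {
        for (int frame = 0; frame < 100; ++frame)
            std::cout << run_pipeline_sync(pipeline_id, frame) << "\n";
    };
    std::thread a(worker, 0), b(worker, 1);
    a.join();
    b.join();
    return 0;
}
```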

@@ -21,14 +21,14 @@ The openncc native SDK provides the following working modes:
![UVC with NCS][3]

* In this mode, openncc combines a USB camera and an AI accelerator card. The HD and 4K sensors supported by openncc are connected; after the ISP completes on the VPU, the device outputs the video stream to the host app as a standard UVC camera.
- * After the Host APP obtains the video stream, it carries out preprocessing, configures the preprocessing module according to the resolution and format of the input image of the inference model, download the image to the openncc SOM for inference through the openncc native SDK, and returns the AI results. The inference supports asynchronous and synchronous modes. The inference pipeline on the openncc SOM could be configured through the openncc model [JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html).
+ * After the host app obtains the video stream, it performs preprocessing, configures the preprocessing module according to the resolution and format of the inference model's input image, downloads the image to the openncc SOM for inference through the openncc native SDK, and the device returns the AI results. Inference supports asynchronous and synchronous modes. The inference pipeline on the openncc SOM can be configured through the [openncc model JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html). A host-side sketch of this flow follows this list.
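
A hedged sketch of this flow from the host side: since the device enumerates as a standard UVC camera, OpenCV's `VideoCapture` can read it; `infer_on_som()` below is a hypothetical placeholder for the native SDK's download-and-infer call, and the 300x300 input size is an example value only:

```
// Hedged sketch of the UVC-with-NCS flow. infer_on_som() is a placeholder,
// not a real OpenNCC SDK function; input size is illustrative.
#include <opencv2/opencv.hpp>
#include <string>

std::string infer_on_som(const cv::Mat& model_input) {
    // Stub: a real app would hand the preprocessed image to the SOM over the
    // OpenNCC native SDK (sync or async) and receive the AI result back.
    return "ai-meta for " + std::to_string(model_input.cols) + "x" +
           std::to_string(model_input.rows) + " input";
}

int main() {
    cv::VideoCapture cap(0);                     // OpenNCC enumerated as UVC
    cv::Mat frame, input;
    while (cap.read(frame)) {
        // Preprocess to the model's expected resolution/format; 300x300 is
        // an example value, the real one comes from the model JSON config.
        cv::resize(frame, input, cv::Size(300, 300));
        std::string meta = infer_on_som(input);  // download image, get result
        (void)meta;                              // e.g. overlay results here
    }
    return 0;
}
```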

### UVC AI Camera with AI Acceleration Card (as Intel NCS) mode

![ai cam with ncs mode][4]

* In this mode, openncc combines a USB camera and an AI accelerator card. The HD and 4K sensors supported by openncc are connected; after the ISP completes on the VPU, the device outputs the video stream to the host app as a standard UVC camera.
- * At the same time, it supports configuration, directly connect the video stream after ISP on the camera to the local inference pipeline of the camera, and output the inference results to the host app. This mode avoids downloading pictures to the inference engine, reduces the processing delay and saves bandwidth. The inference pipeline is configured through openncc model [JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html).
+ * At the same time, it can be configured to connect the post-ISP video stream on the camera directly to the camera's local inference pipeline and output the inference results to the host app. This mode avoids downloading pictures to the inference engine, reducing processing latency and saving bandwidth. The inference pipeline is configured through the [openncc model JSON](https://eyecloudai.github.io/openncc_frame/tutorials/how-to-write-json-config.html); a host-side sketch follows below.
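
A hedged host-side sketch of this mode: the host consumes only inference results, with `poll_ai_meta()` standing in as a hypothetical placeholder for however the SDK delivers AI-meta from the camera's local pipeline:

```
// Hedged sketch of the AI-camera mode from the host side. poll_ai_meta()
// is a placeholder, not a real OpenNCC SDK function.
#include <iostream>
#include <optional>
#include <string>

std::optional<std::string> poll_ai_meta() {
    // Stub: a real app would read the next result pushed by the camera's
    // local inference pipeline (configured via the openncc model JSON).
    return "face: x=120 y=80 w=64 h=64";   // illustrative payload only
}

int main() {
    // No image download: video arrives as a plain UVC stream elsewhere,
    // while this loop consumes only the AI-meta produced on the camera.
    for (int i = 0; i < 10; ++i) {
        if (auto meta = poll_ai_meta())
            std::cout << *meta << "\n";
    }
    return 0;
}
```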

## Native API VPU Reference
