From 085d20fbfd500b7e8a8acaae231f5e58cde4672d Mon Sep 17 00:00:00 2001
From: Yongjoo Ahn
Date: Mon, 20 May 2024 14:30:05 +0900
Subject: [PATCH] [trivial] Fix broken link to nns source code

- Fix broken link to NNS boundingbox source.

Signed-off-by: Yongjoo Ahn
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2f1aacef..b41cbbc1 100644
--- a/README.md
+++ b/README.md
@@ -64,7 +64,7 @@ Refer to this link for more examples : [[NNStreamer example](https://github.com/
 | Application | Implementations | Used gstreamer/nnstreamer feature |
 | -- | -- | -- |
-| [Object Detection](./bash_script/example_object_detection_tensorflow_lite/gst-launch-object-detection-tflite.sh) | - [tflite](./bash_script/example_object_detection_tensorflow_lite/gst-launch-object-detection-tflite.sh) <br/> - [tf](./bash_script/example_object_detection_tensorflow/gst-launch-object-detection-tf.sh) | - v4l2src for input image stream <br/> - [`tensor_decoder mode=bounding_boxes`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_decoder/tensordec-boundingbox.c) for postprocessing <br/> - compositor for drawing decoded boxes |
+| [Object Detection](./bash_script/example_object_detection_tensorflow_lite/gst-launch-object-detection-tflite.sh) | - [tflite](./bash_script/example_object_detection_tensorflow_lite/gst-launch-object-detection-tflite.sh) <br/> - [tf](./bash_script/example_object_detection_tensorflow/gst-launch-object-detection-tf.sh) | - v4l2src for input image stream <br/> - [`tensor_decoder mode=bounding_boxes`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_decoder/tensordec-boundingbox.cc) for postprocessing <br/> - compositor for drawing decoded boxes |
 | [Image Segmentation](./bash_script/example_image_segmentation_tensorflow_lite) | - [tflite](./bash_script/example_image_segmentation_tensorflow_lite/gst-launch-image-segmentation-tflite.sh) <br/> - [Edge TPU](./bash_script/example_image_segmentation_tensorflow_lite) <br/>     * [Edge-AI server](./bash_script/example_image_segmentation_tensorflow_lite/gst-launch-image-seg-flatbuf-edgetpu-server.sh) <br/>     * [Edge-AI client](./bash_script/example_image_segmentation_tensorflow_lite/gst-launch-image-seg-flatbuf-edgetpu-client.sh) <br/> | - v4l2src for input image stream <br/> - [`tensor_decoder mode=image_segment`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_decoder/tensordec-imagesegment.c) for postprocessing <br/> - tcpclientsrc / tcpserversink / gdppay / gdpdepay for networking between devices <br/> - `tensor_converter` / `tensor_decoder mode=flatbuf` for using Flatbuffers |
 | [Pipeline Flow Control in Face Detection](./bash_script/example_tensorif) | - [OpenVINO + tflite + passthrough](./bash_script/example_tensorif/gst-launch-tensorif-passthrough.sh) <br/> - [OpenVINO + tflite + tensorpick](./bash_script/example_tensorif/gst-launch-tensorif-tensorpick.sh) | - v4l2src for input image stream <br/> - `tensor_if` for flow control |
@@ -73,7 +73,7 @@ Refer to this link for more examples : [[NNStreamer example](https://github.com/
 | Application | Implementations | Used gstreamer/nnstreamer feature |
 | -- | -- | -- |
 | [](./Tizen.native/ImageClassification) <br/> [Image Classification](./Tizen.native/ImageClassification) | - [Native App with Pipeline API](./Tizen.native/ImageClassification) <br/> - [Native App with Single API](./Tizen.native/ImageClassification_SingleShot) | - appsrc for input image data from hardware camera <br/> - [Machine Learning Inference Native API](https://docs.tizen.org/application/native/guides/machine-learning/machine-learning-inference/) |
-| [![object-detection](./Tizen.native/ObjectDetection/tizen-objectdetection-demo.webp)](./Tizen.native/ObjectDetection) <br/> [Object Detection](./Tizen.native/ObjectDetection) | - [Native App with Pipeline API](Tizen.native/ObjectDetection) | - appsrc for input image data from hardware camera <br/> - [`tensor_decoder mode=bounding_boxes`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_decoder/tensordec-boundingbox.c) for postprocessing <br/> - [Pipeline API](https://docs.tizen.org/application/native/guides/machine-learning/machine-learning-inference/#pipeline-api) |
+| [![object-detection](./Tizen.native/ObjectDetection/tizen-objectdetection-demo.webp)](./Tizen.native/ObjectDetection) <br/> [Object Detection](./Tizen.native/ObjectDetection) | - [Native App with Pipeline API](Tizen.native/ObjectDetection) | - appsrc for input image data from hardware camera <br/> - [`tensor_decoder mode=bounding_boxes`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_decoder/tensordec-boundingbox.cc) for postprocessing <br/> - [Pipeline API](https://docs.tizen.org/application/native/guides/machine-learning/machine-learning-inference/#pipeline-api) |
 | [![text-classification](./Tizen.NET/TextClassification/text-classification-demo.webp)](./Tizen.NET/TextClassification) <br/> [Text Classification](./Tizen.NET/TextClassification) | - [Native App with Pipeline API](./Tizen.native/TextClassification) <br/> - [.NET App with C# API](./Tizen.NET/TextClassification) | - appsrc for input text data <br/> - [Pipeline API](https://docs.tizen.org/application/native/guides/machine-learning/machine-learning-inference/#pipeline-api) <br/> - [C# API](https://samsung.github.io/TizenFX/API8/api/Tizen.MachineLearning.Inference.html) |
 | [![orientation-detection](./Tizen.native/OrientationDetection/orientation_detection.webp)](./Tizen.native/OrientationDetection) <br/> [Orientation Detection](./Tizen.native/OrientationDetection) | - [Native App with Pipeline API](./Tizen.native/OrientationDetection) <br/> - [.NET App with C# API](./Tizen.NET/OrientationDetection) | - [`tensor_src_tizensensor type=accelerometer`](https://github.com/nnstreamer/nnstreamer/blob/main/ext/nnstreamer/tensor_source/tensor_src_tizensensor.c) for hardware sensor data <br/> - [Pipeline API](https://docs.tizen.org/application/native/guides/machine-learning/machine-learning-inference/#pipeline-api) <br/> - [C# API](https://samsung.github.io/TizenFX/API8/api/Tizen.MachineLearning.Inference.html) |
 | [![face_landmark](./Tizen.platform/Tizen_IoT_face_landmark/face_landmark.webp)](./Tizen.platform/Tizen_IoT_face_landmark) <br/> [Face Landmark](./Tizen.platform/Tizen_IoT_face_landmark) | - [Tizen IoT App](./Tizen.platform/Tizen_IoT_face_landmark) | - v4l2src for input data from camera <br/> - tizenwlsink for rendering video to display <br/> - cairooverlay for drawing dots |
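For context, the `tensor_decoder mode=bounding_boxes` element that the corrected link points to is used in the object-detection rows above roughly as sketched below. This is a minimal, illustrative pipeline, not the exact script from this repository: it assumes a MobileNet-SSD TensorFlow Lite model with 300x300 input, and the model, label, and box-prior file names are placeholders. See the linked gst-launch-object-detection-tflite.sh for the authoritative version.

```sh
#!/usr/bin/env bash
# Illustrative sketch only: model/label/box-prior file names are placeholders.
# Camera frames are teed into two branches: one branch runs inference and the
# bounding-box decoder emits an RGBA overlay of detected boxes, the other keeps
# the raw video; compositor blends the overlay on top of the camera image.
gst-launch-1.0 \
  v4l2src ! videoconvert ! videoscale ! \
    video/x-raw,width=640,height=480,format=RGB ! tee name=t \
  t. ! queue ! videoscale ! video/x-raw,width=300,height=300 ! tensor_converter ! \
    tensor_transform mode=arithmetic option=typecast:float32,add:-127.5,div:127.5 ! \
    tensor_filter framework=tensorflow-lite model=ssd_mobilenet_v2_coco.tflite ! \
    tensor_decoder mode=bounding_boxes option1=mobilenet-ssd option2=coco_labels_list.txt \
      option3=box_priors.txt option4=640:480 option5=300:300 ! \
    compositor name=mix sink_0::zorder=2 sink_1::zorder=1 ! videoconvert ! autovideosink \
  t. ! queue ! mix.
```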