From 1b17b8f5bb10631b8c7c374ae673bf4dd0fba991 Mon Sep 17 00:00:00 2001
From: iChizer0 <62390647+iChizer0@users.noreply.github.com>
Date: Wed, 22 May 2024 15:41:40 +0800
Subject: [PATCH] chore: update readme
* chore: update readme (draft wip)
* chore: cleanup
* chore: update docs
* docs: update benchmark
* docs: update comments
---
README.md | 100 +++++++++++++++++++++++++++++++++------------
README_zh-CN.md | 106 ++++++++++++++++++++++++++++++++----------------
2 files changed, 145 insertions(+), 61 deletions(-)
diff --git a/README.md b/README.md
index 240822fe..27aa3bcd 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,55 @@
-# SenseCraft Model Assistant by Seeed Studio
-
-
-
-English | [简体中文](README_zh-CN.md)
+
+# SenseCraft Model Assistant by Seeed Studio
+
+
+[![docs-build](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/docs-build.yml/badge.svg)](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/docs-build.yml)
+[![functional-test](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/functional-test.yml/badge.svg?branch=main)](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/functional-test.yml)
+![GitHub Release](https://img.shields.io/github/v/release/Seeed-Studio/ModelAssistant)
+[![license](https://img.shields.io/github/license/Seeed-Studio/ModelAssistant.svg)](https://github.com/Seeed-Studio/ModelAssistant/blob/main/LICENSE)
+[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/Seeed-Studio/ModelAssistant.svg)](http://isitmaintained.com/project/Seeed-Studio/ModelAssistant "Average time to resolve an issue")
+[![Percentage of issues still open](http://isitmaintained.com/badge/open/Seeed-Studio/ModelAssistant.svg)](http://isitmaintained.com/project/Seeed-Studio/ModelAssistant "Percentage of issues still open")
+
+
+
+
## Introduction
-Seeed SenseCraft Model Assistant (or simply SSCMA) is an open-source project focused on embedded AI. We have optimized excellent algorithms from [OpenMMLab](https://github.com/open-mmlab) for real-world scenarios and made implementation more user-friendly, achieving faster and more accurate inference on embedded devices.
+**S**eeed **S**ense**C**raft **M**odel **A**ssistant is an open-source project focused on providing state-of-the-art AI algorithms for embedded devices. It is designed to help developers and makers easily deploy various AI models on low-cost hardware, such as microcontrollers and single-board computers (SBCs).
+
+
+
+
+
+
+
+**Real-world deployment examples on MCUs with less than 0.3 W of power consumption.*
-## What's included?
+### 🤝 User-friendly
-Currently we support the following directions of algorithms:
+SSCMA provides a user-friendly platform that allows users to easily perform training on collected data, and to better understand the performance of algorithms through visualizations generated during the training process.
+
+### 🔋 Low-compute, high-performance models
+
+SSCMA focuses on edge-side AI algorithm research. The resulting models can be deployed on microcontrollers such as the [ESP32](https://www.espressif.com.cn/en/products/socs/esp32), on some [Arduino](https://arduino.cc) development boards, and even on embedded SBCs such as the [Raspberry Pi](https://www.raspberrypi.org).
+
+### 🗂️ Supports multiple formats for model export
+
+[TensorFlow Lite](https://www.tensorflow.org/lite) is mainly used on microcontrollers, while [ONNX](https://onnx.ai) is mainly used on devices running embedded Linux. Some special formats, such as [TensorRT](https://developer.nvidia.com/tensorrt) and [OpenVINO](https://docs.openvino.ai), are already well supported by OpenMMLab. SSCMA adds TFLite model export for microcontrollers; the exported models can be converted directly to [TensorRT](https://developer.nvidia.com/tensorrt) or [UF2](https://github.com/microsoft/uf2) format and dragged and dropped onto the device for deployment.
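+
+As a rough illustration of the TFLite path, here is a minimal sketch of a full-integer (int8) quantized export using the standard TensorFlow Lite converter; the SavedModel path, input resolution, and random calibration data are placeholders, not SSCMA's own export tooling.
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Placeholder path and input size; substitute your own trained model.
+converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
+
+def representative_data_gen():
+    # Calibration samples for quantization; use real training images in practice.
+    for _ in range(100):
+        yield [np.random.rand(1, 192, 192, 3).astype(np.float32)]
+
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+converter.representative_dataset = representative_data_gen
+converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+converter.inference_input_type = tf.int8
+converter.inference_output_type = tf.int8
+
+with open("model_int8.tflite", "wb") as f:
+    f.write(converter.convert())
+```
+
+A `.tflite` file produced this way is the starting point for the UF2 packaging and drag-and-drop deployment flow mentioned above.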
+
+## Features
+
+We have optimized excellent algorithms from [OpenMMLab](https://github.com/open-mmlab) for real-world scenarios and made implementation more user-friendly, achieving faster and more accurate inference. Currently, we support the following algorithm directions:
### 🔍 Anomaly Detection
@@ -21,45 +57,59 @@ In the real world, anomalous data is often difficult to identify, and even if it
### 👁️ Computer Vision
-Here we provide a number of computer vision algorithms such as object detection, image classification, image segmentation and pose estimation. However, these algorithms cannot run on low-cost hardware. SSCMA optimizes these computer vision algorithms to achieve good running speed and accuracy in low-end devices.
+Here we provide a number of computer vision algorithms such as **object detection, image classification, image segmentation and pose estimation**. However, these algorithms typically cannot run on low-cost hardware as-is. SSCMA optimizes these computer vision algorithms to achieve good running speed and accuracy on low-end devices.
### ⏱️ Scenario Specific
SSCMA provides customized scenarios for specific production environments, such as identification of analog instruments, traditional digital meters, and audio classification. We will continue to add more algorithms for specified scenarios in the future.
-## Features
+## What's New
-### 🤝 User-friendly
+SSCMA is committed to providing cutting-edge AI algorithms with the best possible performance and accuracy. Guided by community feedback, we keep updating and optimizing the algorithms to meet users' actual needs. Here are some of the latest updates:
-SSCMA provides a user-friendly platform that allows users to easily perform training on collected data, and to better understand the performance of algorithms through visualizations generated during the training process.
+### 🔥 YOLO-World, MobileNetV4, and a Lighter SSCMA (Coming Soon)
-### 🔋 Models with low computing power and high performance
+We are working on bringing the latest [YOLO-World](https://github.com/AILab-CVC/YOLO-World) and [MobileNetV4](https://arxiv.org/abs/2404.10518) algorithms to embedded devices. We are also refactoring SSCMA with fewer dependencies to make it more lightweight and easier to use. Please stay tuned for updates.
-SSCMA focuses on end-side AI algorithm research, and the algorithm models can be deployed on microprocessors, similar to [ESP32](https://www.espressif.com.cn/en/products/socs/esp32), some [Arduino](https://arduino.cc) development boards, and even in embedded SBCs such as [Raspberry Pi](https://www.raspberrypi.org).
+### YOLOv8, YOLOv8 Pose, NVIDIA TAO Models and ByteTrack
-### 🗂️ Supports multiple formats for model export
+With [SSCMA-Micro](https://github.com/Seeed-Studio/SSCMA-Micro), you can now deploy the latest [YOLOv8](https://github.com/ultralytics/ultralytics), YOLOv8 Pose, and [NVIDIA TAO models](https://docs.nvidia.com/tao/tao-toolkit/text/model_zoo/cv_models/index.html) on microcontrollers. We have also added the [ByteTrack](https://github.com/ifzhang/ByteTrack) algorithm to enable real-time object tracking on low-cost hardware.
-[TensorFlow Lite](https://www.tensorflow.org/lite) is mainly used in microcontrollers, while [ONNX](https://onnx.ai) is mainly used in devices with Embedded Linux. There are some special formats such as [TensorRT](https://developer.nvidia.com/tensorrt), [OpenVINO](https://docs.openvino.ai) which are already well supported by OpenMMLab. SSCMA has added TFLite model export for microcontrollers, which can be directly converted to [TensorRT](https://developer.nvidia.com/tensorrt), [UF2](https://github.com/microsoft/uf2) format and drag-and-drop into the device for deployment.
+
+
+### Swift YOLO
+
+We implemented a lightweight object detection algorithm called Swift YOLO, designed to run on low-cost hardware with limited computing power. The visualization tool and the model training and export command-line interfaces have now been refactored.
+
+
+
+### Meter Recognition
+
+Meters are common instruments in daily life and industrial production, for example analog meters and digital meters. SSCMA provides meter recognition algorithms that can be used to identify the readings of various meters.
-## Application Examples
+
-### Object Detection
+## Benchmarks
-
+SSCMA aims to provide the best performance and accuracy for embedded devices. Here are some benchmarks for the latest algorithms:
-### Pointer Meter Recognition
+
-
+**Note: The benchmark mainly covers 2 architectures; each architecture has 3 models of different sizes (inputs `[192, 224, 320]`, parameter counts may vary), represented by the size of the points in the graph. Quantized models are also included in the benchmark, and all latencies are measured on an NVIDIA A100.*
-### Digital Meter Recognition
+## The SSCMA Toolchain
-
+SSCMA provides a complete toolchain for users to easily deploy AI models on low-cost hardware, including:
-More application examples can be found in [Model Zoo](https://github.com/Seeed-Studio/sscma-model-zoo)
+- [SSCMA-Model-Zoo](https://github.com/Seeed-Studio/sscma-model-zoo) A series of pre-trained models for different application scenarios, ready for you to use.
+- [SSCMA-Micro](https://github.com/Seeed-Studio/SSCMA-Micro) A cross-platform framework that deploys and runs SSCMA models on microcontroller devices.
+- [Seeed-Arduino-SSCMA](https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA) An Arduino library for devices supporting the SSCMA-Micro firmware.
+- [SSCMA-Web-Toolkit](https://seeed-studio.github.io/SenseCraft-Web-Toolkit) A web-based tool that updates the device's firmware, SSCMA model, and parameters.
+- [Python-SSCMA](https://github.com/Seeed-Studio/python-sscma) A Python library for interacting with microcontrollers running SSCMA-Micro and for building higher-level deep learning applications.
## Acknowledgement
-SSCMA referenced the following projects:
+SSCMA is a joint effort of many developers and contributors. We would like to thank the following projects and organizations, whose work SSCMA referenced in its implementation:
- [OpenMMLab](https://openmmlab.com/)
- [ONNX](https://github.com/onnx/onnx)
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 8721b02e..7fee62bc 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -1,19 +1,55 @@
-# SenseCraft Model Assistant by Seeed Studio
-
-
-
-英文 | [简体中文](README_zh-CN.md)
+
+# SenseCraft Model Assistant by Seeed Studio
+
+
+[![docs-build](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/docs-build.yml/badge.svg)](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/docs-build.yml)
+[![functional-test](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/functional-test.yml/badge.svg?branch=main)](https://github.com/Seeed-Studio/ModelAssistant/actions/workflows/functional-test.yml)
+![GitHub Release](https://img.shields.io/github/v/release/Seeed-Studio/ModelAssistant)
+[![license](https://img.shields.io/github/license/Seeed-Studio/ModelAssistant.svg)](https://github.com/Seeed-Studio/ModelAssistant/blob/main/LICENSE)
+[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/Seeed-Studio/ModelAssistant.svg)](http://isitmaintained.com/project/Seeed-Studio/ModelAssistant "Average time to resolve an issue")
+[![Percentage of issues still open](http://isitmaintained.com/badge/open/Seeed-Studio/ModelAssistant.svg)](http://isitmaintained.com/project/Seeed-Studio/ModelAssistant "Percentage of issues still open")
+
+
+
+
## 简介
-SSCMA 是一个专注于嵌入式人工智能的开源项目。我们从 [OpenMMLab](https://github.com/open-mmlab) 优化了优秀的算法,并使实现更加用户友好,在嵌入式设备上实现更快速、更准确的推理。
+**S**eeed **S**ense**C**raft **M**odel **A**ssistant 是一个专注于为嵌入式设备提供最先进的人工智能算法的开源项目。它旨在帮助开发者和创客轻松地将各种人工智能模型部署到微控制器、单板计算机 (SBC) 等低成本硬件上。
-## 包含内容
+
+
+
+
+
+
+**在功耗低于 0.3 瓦的微控制器上的真实部署示例。*
+
+### 🤝 用户友好
+
+SenseCraft 模型助手提供了一个用户友好的平台,方便用户使用收集的数据进行训练,并通过训练过程中生成的可视化结果更好地了解算法的性能。
+
+### 🔋 低计算功耗、高性能的模型
-目前,我们支持以下算法方向:
+SenseCraft 模型助手专注于边缘端人工智能算法研究,算法模型可以部署在 [ESP32](https://www.espressif.com.cn/en/products/socs/esp32) 等微控制器、一些 [Arduino](https://arduino.cc) 开发板,甚至 [Raspberry Pi](https://www.raspberrypi.org) 等嵌入式 SBC 上。
+
+### 🗂️ 支持多种模型导出格式
+
+[TensorFlow Lite](https://www.tensorflow.org/lite) 主要用于微控制器,而 [ONNX](https://onnx.ai) 主要用于嵌入式 Linux 设备。还有一些特殊格式,如 [TensorRT](https://developer.nvidia.com/tensorrt)、[OpenVINO](https://docs.openvino.ai),这些格式已经得到 OpenMMLab 的良好支持。SenseCraft 模型助手添加了 TFLite 模型导出功能,可直接转换为 [TensorRT](https://developer.nvidia.com/tensorrt) 或 [UF2](https://github.com/microsoft/uf2) 格式,并可拖放到设备上进行部署。
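+
+作为 TFLite 路径的一个简单示意,下面给出一个使用标准 TensorFlow Lite 转换器进行全整型 (int8) 量化导出的最小示例;其中的 SavedModel 路径、输入分辨率和随机校准数据均为占位符,并非 SSCMA 自带的导出工具。
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# 占位的路径与输入尺寸,请替换为您自己训练好的模型。
+converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
+
+def representative_data_gen():
+    # 量化所需的校准样本;实际使用时请改用真实的训练图像。
+    for _ in range(100):
+        yield [np.random.rand(1, 192, 192, 3).astype(np.float32)]
+
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+converter.representative_dataset = representative_data_gen
+converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+converter.inference_input_type = tf.int8
+converter.inference_output_type = tf.int8
+
+with open("model_int8.tflite", "wb") as f:
+    f.write(converter.convert())
+```
+
+这样生成的 `.tflite` 文件即可作为上文提到的 UF2 打包与拖放部署流程的起点。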
+
+## 功能
+
+我们针对实际场景优化了来自 [OpenMMLab](https://github.com/open-mmlab) 的优秀算法,并使实现更加用户友好,从而实现了更快、更准确的推理。目前我们支持以下算法方向:
### 🔍 异常检测
@@ -27,55 +63,53 @@ SSCMA 是一个专注于嵌入式人工智能的开源项目。我们从 [OpenMM
SenseCraft 模型助手为特定的生产环境提供了定制化场景,例如模拟仪器、传统数字仪表和音频分类的识别。我们将继续在未来添加更多的指定场景算法。
-## 特点
-
-### 🤝 用户友好
-
-SenseCraft 模型助手提供了一个用户友好的平台,方便用户使用收集的数据进行训练,并通过训练过程中生成的可视化结果更好地了解算法的性能。
+## 新特性
-### 🔋 低计算功耗、高性能的模型
+SSCMA 一直致力于为用户提供最先进的人工智能算法,以获得最佳性能和准确性。我们根据社区反馈不断更新和优化算法,以满足用户的实际需求。以下是一些最新的更新内容:
-SenseCraft 模型助手专注于边缘端人工智能算法研究,算法模型可以部署在微处理器上,类似于 \[ESP32\] (https://www.espressif.com.cn/en/products/socs/esp32)、一些 [Arduino](https://arduino.cc) 开发板,甚至在嵌入式 SBCs(如 [Raspberry Pi](https://www.raspberrypi.org) )上。
+### 🔥 YOLO-World、MobileNetV4 和更轻量的 SSCMA(即将推出)
-### 🗂️ 支持多种模型导出格式
-
-[TensorFlow Lite](https://www.tensorflow.org/lite) 主要用于微控制器,而 [ONNX](https://onnx.ai) 主要用于嵌入式Linux设备。还有一些特殊格式,如 [TensorRT](https://developer.nvidia.com/tensorrt)、[OpenVINO](https://docs.openvino.ai),这些格式已经得到 OpenMMLab 的良好支持。SenseCraft 模型助手添加了 TFLite 模型导出功能,可直接转换为 [TensorRT](https://developer.nvidia.com/tensorrt) 和 [UF2](https://github.com/microsoft/uf2) 格式,并可拖放到设备上进行部署。
+我们正在为嵌入式设备开发最新的 [YOLO-World](https://github.com/AILab-CVC/YOLO-World) 和 [MobileNetV4](https://arxiv.org/abs/2404.10518) 算法。同时,我们也正在重新设计 SSCMA,减少其依赖项,使其更加轻量级和易于使用。请密切关注最新的更新。
-## 应用示例
+### YOLOv8、YOLOv8 Pose、Nvidia Tao Models 和 ByteTrack
-### 目标检测
+通过 [SSCMA-Micro](https://github.com/Seeed-Studio/SSCMA-Micro),现在您可以在微控制器上部署最新的 [YOLOv8](https://github.com/ultralytics/ultralytics)、YOLOv8 Pose 和 [Nvidia TAO Models](https://docs.nvidia.com/tao/tao-toolkit/text/model_zoo/cv_models/index.html)。我们还添加了 [ByteTrack](https://github.com/ifzhang/ByteTrack) 算法,以在低成本硬件上实现实时物体跟踪。
-
+
-### 模拟仪器识别
+### Swift YOLO
-
+我们实现了一个轻量级的目标检测算法,称为 Swift YOLO,它专为在计算能力有限的低成本硬件上运行而设计。可视化工具、模型训练和导出命令行界面现已重构。
-### 传统数字仪表识别
+
-
+### 仪表识别
-更多应用示例请参考 [模型仓库](https://github.com/Seeed-Studio/sscma-model-zoo)。
+仪表是我们日常生活和工业生产中常见的仪器,例如模拟仪表、数字仪表等。SSCMA 提供了可以用来识别各种仪表读数的仪表识别算法。
-## 应用示例
+
-### 目标检测
+## 基准测试
-
+SSCMA 旨在为嵌入式设备提供最佳性能和准确性,以下是最新算法的一些基准测试结果:
-### 模拟仪器识别
+
-
+**注意: 基准测试主要包括 2 种架构,每种架构有 3 种不同大小 (输入尺寸 `[192, 224, 320]`,参数量可能不同) 的模型,以图中点的大小表示。基准测试还包括量化模型,所有延迟均在 NVIDIA A100 上测量。*
-### 传统数字仪表识别
+## SSCMA 工具链
-
+SSCMA 提供了完整的工具链,让用户可以轻松地在低成本硬件上部署 AI 模型,包括:
-更多应用示例请参考 [模型仓库](https://github.com/Seeed-Studio/sscma-model-zoo)。
+- [SSCMA-Model-Zoo](https://github.com/Seeed-Studio/sscma-model-zoo) SSCMA 模型库为您提供了一系列针对不同应用场景的预训练模型。
+- [SSCMA-Micro](https://github.com/Seeed-Studio/SSCMA-Micro) 一个跨平台的框架,用于在微控制器设备上部署和应用 SSCMA 模型。
+- [Seeed-Arduino-SSCMA](https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA) 支持 SSCMA-Micro 固件的 Arduino 库。
+- [SSCMA-Web-Toolkit](https://seeed-studio.github.io/SenseCraft-Web-Toolkit) 一个基于 Web 的工具,用于更新设备固件、SSCMA 模型和参数。
+- [Python-SSCMA](https://github.com/Seeed-Studio/python-sscma) 用于通过 SSCMA-Micro 与微控制器交互的 Python 库,也可用于更高层次的深度学习应用。
## 致谢
-SenseCraft模型助手参考了以下项目:
+SSCMA 是许多开发者和贡献者共同努力的成果。感谢以下项目和组织为 SSCMA 的实现提供了参考与贡献:
- [OpenMMLab](https://openmmlab.com/)
- [ONNX](https://github.com/onnx/onnx)
@@ -84,4 +118,4 @@ SenseCraft模型助手参考了以下项目:
## 许可证
-本项目在[Apache 2.0 开源许可证](LICENSE)下发布。
+本项目在 [Apache 2.0 开源许可证](LICENSE) 下发布。