ncnn is a high-performance neural network inference framework optimized for the mobile platform
FeatherCNN is a high-performance inference engine for convolutional neural networks.
Up to 200x Faster Dot Products & Similarity Metrics — for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 real & complex, i8, and bit vectors using SIMD for both AVX2, AVX-512, NEON, SVE, & SVE2 📐
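A minimal sketch of the kind of NEON kernel such SIMD similarity libraries dispatch to on AArch64 (an illustration written for this page, not code taken from the project above): a float32 dot product that, for brevity, assumes the vector length is a multiple of four.

```c
/* Illustrative only: a float32 dot product with ARM NEON intrinsics (AArch64).
 * Assumes n is a multiple of 4; real libraries also handle the tail. */
#include <arm_neon.h>
#include <stddef.h>

float dot_f32_neon(const float *a, const float *b, size_t n) {
    float32x4_t acc = vdupq_n_f32(0.0f);      /* four partial sums */
    for (size_t i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);    /* load 4 floats from a */
        float32x4_t vb = vld1q_f32(b + i);    /* load 4 floats from b */
        acc = vfmaq_f32(acc, va, vb);         /* acc += va * vb (fused multiply-add) */
    }
    return vaddvq_f32(acc);                   /* horizontal sum of the 4 lanes */
}
```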
A modern C++17 glTF 2.0 library focused on speed, correctness, and usability
Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original Caffe architecture, so users can deploy their applications seamlessly.
Benchmark for embedded-AI deep learning inference engines, such as NCNN, TNN, MNN, and TensorFlow Lite.
RV: A Unified Region Vectorizer for LLVM
Heterogeneous Run Time version of MXNet. Adds heterogeneous capabilities to MXNet, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original MXNet architecture, so users can deploy their applications seamlessly.
Single-header, quite fast QOI (Quite OK Image Format) implementation written in C++20.
Heterogeneous Run Time version of TensorFlow. Adds heterogeneous capabilities to TensorFlow, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original TensorFlow architecture, so users can deploy their applications seamlessly.
Simple neural network microkernels in C accelerated with ARMv8.2-A NEON vector intrinsics.
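As a hedged illustration of what such a NEON microkernel can look like (written for this page, not taken from the repository above): an in-place ReLU over a float32 activation buffer, with a scalar tail for lengths that are not a multiple of four.

```c
/* Illustrative only: in-place ReLU activation using ARM NEON intrinsics. */
#include <arm_neon.h>
#include <stddef.h>

void relu_f32_neon(float *x, size_t n) {
    const float32x4_t zero = vdupq_n_f32(0.0f);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t v = vld1q_f32(x + i);   /* load 4 activations */
        v = vmaxq_f32(v, zero);             /* lane-wise max(v, 0) */
        vst1q_f32(x + i, v);                /* store back in place */
    }
    for (; i < n; ++i)                      /* scalar tail */
        x[i] = x[i] > 0.0f ? x[i] : 0.0f;
}
```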
Hardkernel Odroid HC4 Ubuntu 20.04 LTS install tutorial & tool build
Colorful Mandelbrot set renderer in C# + OpenGL + ARM NEON
Pipelined low-level implementation of COLM for ARM-based systems