- Linux or Windows running on x86_64 or arm64 CPUs
- GCC >= 4.9 or LLVM/Clang >= 6.0, or Visual Studio >= 2015
- CMake >= 3.14
- Git >= 2.7.0
- CUDA Toolkit >= 9.0 (for CUDA)
- Python >= 3.5 (for CUDA and Python API support)
- Lua >= 5.2.0 (optional, for Lua API support)
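For a quick sanity check before building, you can print the installed versions of the required tools (these are standard commands; nvcc is only relevant for CUDA builds):
gcc --version
cmake --version
git --version
python3 --version
nvcc --version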
git clone https://github.com/openppl-public/ppl.nn.git
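All of the build commands below are intended to be run from the root of the cloned repository:
cd ppl.nn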
./build.sh -DPPLNN_USE_X86_64=ON
Headers and libraries are installed in pplnn-build/install.
If you want to enable OpenMP, specify PPLNN_USE_OPENMP as follows:
./build.sh -DPPLNN_USE_X86_64=ON -DPPLNN_USE_OPENMP=ON
Using Visual Studio 2015 as an example:
build.bat -G "Visual Studio 14 2015 Win64" -DPPLNN_USE_X86_64=ON
Headers and libraries are installed in pplnn-build/install.
./build.sh -DPPLNN_USE_CUDA=ON
Note that if you want to build the x86 engine along with the CUDA engine, you should specify -DPPLNN_USE_X86_64=ON explicitly like this:
./build.sh -DPPLNN_USE_X86_64=ON -DPPLNN_USE_CUDA=ON
Headers and libraries are installed in pplnn-build/install.
If you want to use a specific CUDA Toolkit version, specify CUDA_TOOLKIT_ROOT_DIR as follows:
./build.sh -DPPLNN_USE_CUDA=ON -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda-toolkit-root-dir
To cross-compile the CUDA engine for AArch64, use the following command:
CUDA_TOOLKIT_ROOT=/path/to/cuda/toolkit/root/dir ./build.sh -DPPLNN_USE_CUDA=ON -DPPLNN_TOOLCHAIN_DIR=/path/to/arm/toolchain/dir -DCMAKE_TOOLCHAIN_FILE=cmake/toolchains/aarch64-linux-gnu.cmake
Note that the CUDA_TOOLKIT_ROOT environment variable is required.
You can also specify CUDA_TOOLKIT_ROOT_DIR without setting CUDA_TOOLKIT_ROOT, in which case ppl.nn sets CUDA_TOOLKIT_ROOT to the value of CUDA_TOOLKIT_ROOT_DIR:
./build.sh -DPPLNN_USE_CUDA=ON -DPPLNN_TOOLCHAIN_DIR=/path/to/arm/toolchain/dir -DCMAKE_TOOLCHAIN_FILE=cmake/toolchains/aarch64-linux-gnu.cmake -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda/toolkit/root/dir
Using Visual Studio 2015 as an example:
build.bat -G "Visual Studio 14 2015 Win64" -DPPLNN_USE_CUDA=ON
Headers and libraries are installed in pplnn-build/install.
The runtime-compiling (JIT) version is used by default. If you want to use the static version (all kernels built in advance), disable PPLNN_ENABLE_CUDA_JIT as follows:
./build.sh -DPPLNN_USE_CUDA=ON -DPPLNN_ENABLE_CUDA_JIT=OFF
If you want to build a debug version, specify CMAKE_BUILD_TYPE as follows:
./build.sh -DPPLNN_USE_CUDA=ON -DCMAKE_BUILD_TYPE=Debug
If you want to profile the running time of each kernel, specify PPLNN_ENABLE_KERNEL_PROFILING as follows and pass the --enable-profiling argument when executing pplnn, as shown after the build command:
./build.sh -DPPLNN_USE_CUDA=ON -DPPLNN_ENABLE_KERNEL_PROFILING=ON
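For example, after building with kernel profiling enabled, a profiling run of the test tool (described at the end of this document) could look like this; the flag combination here is just an illustration:
./pplnn-build/tools/pplnn --use-cuda --onnx-model tests/testdata/conv.onnx --enable-profiling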
You need to download the C906 toolchain package from https://occ.t-head.cn/community/download?id=3913221581316624384.
tar -xf riscv64-linux-x86_64-20210512.tar.gz
export RISCV_ROOT_PATH=/path/to/riscv64-linux-x86_64-20210512
Build pplnn:
./build.sh -DPPLNN_TOOLCHAIN_DIR=$RISCV_ROOT_PATH -DCMAKE_TOOLCHAIN_FILE=cmake/toolchains/riscv64-linux-gnu.cmake -DPPLNN_USE_RISCV64=ON -DPPLNN_ENABLE_KERNEL_PROFILING=ON -DPPLNN_ENABLE_PYTHON_API=OFF -DPPLNN_ENABLE_LUA_API=OFF -DCMAKE_INSTALL_PREFIX=pplnn-build/install
Headers and libraries are installed in pplnn-build/install.
./build.sh -DPPLNN_USE_AARCH64=ON
Headers and libraries are installed in pplnn-build/install.
If you want to enable OpenMP, specify PPLNN_USE_OPENMP as follows:
./build.sh -DPPLNN_USE_AARCH64=ON -DPPLNN_USE_OPENMP=ON
If you want to enable FP16 inference, specify PPLNN_USE_ARMV8_2 (your compiler must support the armv8.2-a ISA):
./build.sh -DPPLNN_USE_AARCH64=ON -DPPLNN_USE_ARMV8_2=ON
If your system has multiple NUMA nodes, it is recommended to build with PPLNN_USE_NUMA (make sure libnuma is installed on your system):
./build.sh -DPPLNN_USE_AARCH64=ON -DPPLNN_USE_NUMA=ON
If you want to run on mobile platforms, please use the Android NDK package:
./build.sh -DPPLNN_USE_AARCH64=ON -DANDROID_PLATFORM=android-22 -DANDROID_ABI=arm64-v8a -DANDROID_ARM_NEON=ON -DCMAKE_TOOLCHAIN_FILE=<path_to_android_ndk_package>/android-ndk-r22b/build/cmake/android.toolchain.cmake
Add -DPPLNN_ENABLE_PYTHON_API=ON to the build command if you want to use PPLNN in Python:
./build.sh -DPPLNN_ENABLE_PYTHON_API=ON
If you want to use a specific version of Python, you can pass PYTHON3_INCLUDE_DIRS to build.sh:
./build.sh -DPPLNN_ENABLE_PYTHON_API=ON -DPYTHON3_INCLUDE_DIRS=/path/to/your/python/include/dir [other options]
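If you are unsure where your Python headers live, one way to locate the include directory (an illustrative helper, not part of build.sh) is:
python3 -c "import sysconfig; print(sysconfig.get_paths()['include'])"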
Run the Python demo with the following command:
PYTHONPATH=./pplnn-build/install/lib python3 ./tools/pplnn.py [--use-x86 | --use-cuda] --onnx-model tests/testdata/conv.onnx
or use both engines:
cd ppl.nn
PYTHONPATH=./pplnn-build/install/lib python3 ./tools/pplnn.py --use-x86 --use-cuda --onnx-model tests/testdata/conv.onnx
There is a Python packaging configuration in python/package. You can build a .whl package:
./build.sh
and then install this package with pip:
cd /tmp/pyppl-package/dist
pip3 install pyppl*.whl
After installation, you can use from pyppl import nn directly without setting the PYTHONPATH environment variable.
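A quick way to verify the installation is to run the import from the command line:
python3 -c "from pyppl import nn"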
Add -DPPLNN_ENABLE_LUA_API=ON to the build command if you want to use PPLNN in Lua:
./build.sh -DPPLNN_ENABLE_LUA_API=ON
If you want to use a specific version of Lua, you can pass LUA_SRC_DIR to build.sh:
./build.sh -DPPLNN_ENABLE_LUA_API=ON -DLUA_SRC_DIR=/path/to/lua/src [other options]
or, if you already have a pre-compiled version, you can pass LUA_INCLUDE_DIR and LUA_LIBRARIES to build.sh:
./build.sh -DPPLNN_ENABLE_LUA_API=ON -DLUA_INCLUDE_DIR=/path/to/your/lua/include/dir -DLUA_LIBRARIES=/path/to/your/lua/lib [other options]
Run the Lua demo with the following commands:
cd ppl.nn
LUAPATH=./pplnn-build/install/lib /path/to/your/lua-interpreter ./tools/pplnn.lua
Note that your Lua interpreter should be compiled with the options MYCFLAGS="-DLUA_USE_DLOPEN -fPIC" MYLIBS=-ldl to enable loading .so plugins.
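For reference, if you build the interpreter from the official Lua sources on Linux, a build with these options might look like the following (assuming Lua's stock Makefile; adjust the target for your platform):
make linux MYCFLAGS="-DLUA_USE_DLOPEN -fPIC" MYLIBS=-ldl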
There is a test tool named pplnn generated from tools/pplnn.cc. You can run pplnn using the following command:
./pplnn-build/tools/pplnn [--use-x86 | --use-cuda] --onnx-model tests/testdata/conv.onnx