Check the following table, copy the link address of the package you need, and run `pip3 install` with it. For example, to install `paddle_serving_server-0.0.0-py3-none-any.whl`, right-click the hyperlink and copy the link address; the final command is `pip3 install https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server-0.0.0-py3-none-any.whl`.
Most users do not need to read this section. However, if you deploy Paddle Serving on a machine without network access, the binary executable tar file cannot be downloaded there. Therefore, we provide all the download links for the various environments here.
- Download the serving server whl package and the bin package, and make sure they are built for the same environment.
- Download the serving client whl and the serving app whl, paying attention to the Python version.
- `pip3 install` the wheels and `tar xf` the binary package, then `export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving` (taking CUDA 11 as the example).
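The steps above can be sketched as a script. The tarball name follows the CUDA 11 example from the text; the wheel filenames in the comments are placeholders for whichever wheels you downloaded, and the tarball here is created locally only so the commands run end to end:

```shell
# Offline install flow (CUDA 11 example). On a real machine you would copy the
# downloaded wheels and the serving-gpu-cuda11-0.0.0.tar.gz binary package over,
# then install the wheels first, e.g.:
#   pip3 install paddle_serving_server-*.whl paddle_serving_client-*.whl paddle_serving_app-*.whl

# Placeholder tarball standing in for the real downloaded binary package:
mkdir -p serving-gpu-cuda11-0.0.0 && touch serving-gpu-cuda11-0.0.0/serving
tar czf serving-gpu-cuda11-0.0.0.tar.gz serving-gpu-cuda11-0.0.0

# Unpack the binary package and point SERVING_BIN at the executable:
tar xf serving-gpu-cuda11-0.0.0.tar.gz
export SERVING_BIN=$PWD/serving-gpu-cuda11-0.0.0/serving
echo "SERVING_BIN=$SERVING_BIN"
```

Setting `SERVING_BIN` tells the serving launcher to use your locally unpacked binary instead of trying to download one.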
| | develop whl | stable whl |
|---|---|---|
| Python3 | paddle_serving_app-0.0.0-py3-none-any.whl | paddle_serving_app-0.9.0-py3-none-any.whl |
Kunlun users on arm-xpu or x86-xpu can download the wheel packages as follows. They should use the xpu-beta docker image (see DOCKER IMAGES). We only support Python 3.6 for Kunlun users.
For arm Kunlun users:

```shell
# paddle-serving-app
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_app-0.9.0-py3-none-any.whl
# paddle-serving-client
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_client-0.9.0-cp36-none-any.whl
# paddle-serving-server
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/arm/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl
# SERVING BIN
wget https://paddle-serving.bj.bcebos.com/test-dev/bin/serving-xpu-aarch64-0.9.0.tar.gz
```
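Since the Kunlun wheels are built for Python 3.6 only (note the `cp36` tag on the client wheel), a quick interpreter check before installing can save a failed `pip3 install`. This is an illustrative helper, not part of Paddle Serving:

```python
import sys

def kunlun_python_ok():
    """Return True if this interpreter matches the cp36 tag of the Kunlun wheels."""
    return (sys.version_info.major, sys.version_info.minor) == (3, 6)

if __name__ == "__main__":
    if not kunlun_python_ok():
        # pip would reject the cp36 client wheel on any other Python version.
        print("Warning: Kunlun wheels require Python 3.6, found %d.%d"
              % (sys.version_info.major, sys.version_info.minor))
```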
For x86 xpu users, the wheel package is here:

https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_xpu-0.9.0.post2-py3-none-any.whl