Commit
Signed-off-by: Sylveon <[email protected]>
Showing 19 changed files with 226 additions and 135 deletions.
This file was deleted.
This file was deleted.
Binary file not shown.
@@ -0,0 +1,144 @@
# Resnet18 Example For WASI-NN with PyTorch Backend

This package is an example of ResNet18 inference using the high-level Rust bindings for [wasi-nn], with the PyTorch backend.

[wasi-nn]: https://github.com/WebAssembly/wasi-nn

## Dependencies

This crate depends on the `wasi-nn` crate in its `Cargo.toml`:

```toml
[dependencies]
wasi-nn = "0.6.0"
```

## Build

Compile the application to WebAssembly:

```bash
cargo build --target=wasm32-wasi --release
```

Because this example demonstrates two ways of using wasi-nn, there are two output WASM files: [`target/wasm32-wasi/release/wasmedge-wasinn-example-resnet18-image.wasm`](wasmedge-wasinn-example-resnet18-image.wasm) and [`target/wasm32-wasi/release/wasmedge-wasinn-example-resnet18-image-named-model.wasm`](wasmedge-wasinn-example-resnet18-image-named-model.wasm).
To speed up the image processing, we can enable the AOT mode in WasmEdge with:

```bash
wasmedgec rust/target/wasm32-wasi/release/wasmedge-wasinn-example-resnet18-image.wasm wasmedge-wasinn-example-resnet18-image-aot.wasm

wasmedgec rust/target/wasm32-wasi/release/wasmedge-wasinn-example-resnet18-image-named-model.wasm wasmedge-wasinn-example-resnet18-image-named-model-aot.wasm
```

The resulting `*-aot.wasm` files can be run with the same `wasmedge` commands shown below.

## Run

### Generate Model

First, generate the fixture of the pre-trained ResNet18 model with the script:

```bash
pip3 install torch==2.4.1 numpy pillow --extra-index-url https://download.pytorch.org/whl/lts/1.8/cpu
# generate the model fixture
python3 gen_resnet18_model.py
```

(Or you can use the pre-generated one at [`resnet18.pt`](resnet18.pt).)

### Test Image

The testing image `input.jpg` is downloaded from <https://github.com/bytecodealliance/wasi-nn/raw/main/rust/examples/images/1.jpg> under the Apache-2.0 license.

### Generate Tensor

If you want to generate the [raw tensor](image-1x3x224x224.rgb), you can run:

```bash
python3 gen_tensor.py input.jpg image-1x3x224x224.rgb
```
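
The raw tensor is the preprocessed image serialized as `f32` values in NCHW order, so a 1x3x224x224 tensor occupies 1 × 3 × 224 × 224 × 4 = 602,112 bytes (the input size reported in the logs below). As a rough illustration of the preprocessing this implies, here is a sketch in Rust using the `image` crate and standard ImageNet normalization; the crate choice, function name, and normalization constants are assumptions for illustration only, and the exact preprocessing in `gen_tensor.py` and in the example program may differ:

```rust
// Illustrative sketch only: decode a JPEG, resize to 224x224, normalize with
// ImageNet mean/std, and lay the values out as a 1x3x224x224 f32 tensor (NCHW).
// The `image` crate and the constants below are assumptions, not necessarily
// what gen_tensor.py or the example WASM actually uses.
use image::imageops::FilterType;

fn image_to_tensor(path: &str) -> Vec<f32> {
    let img = image::open(path)
        .expect("failed to open image")
        .resize_exact(224, 224, FilterType::Triangle)
        .to_rgb8();
    let mean = [0.485f32, 0.456, 0.406];
    let stddev = [0.229f32, 0.224, 0.225];
    // NCHW layout: the whole R plane first, then G, then B.
    let mut tensor = vec![0f32; 3 * 224 * 224];
    for (x, y, pixel) in img.enumerate_pixels() {
        for c in 0..3 {
            let v = pixel[c] as f32 / 255.0;
            tensor[c * 224 * 224 + y as usize * 224 + x as usize] = (v - mean[c]) / stddev[c];
        }
    }
    tensor // 150,528 f32 values = 602,112 bytes when written to disk
}
```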

### Execute

Users should [install the WasmEdge with WASI-NN PyTorch backend plug-in](https://wasmedge.org/docs/start/install#wasi-nn-plug-in-with-pytorch-backend).

Execute the WASM with `wasmedge` with PyTorch support (a Rust sketch of the wasi-nn calls behind the logged steps is shown after the two cases below):
- Case 1:

  ```bash
  wasmedge --dir .:. wasmedge-wasinn-example-resnet18-image.wasm resnet18.pt input.jpg
  ```

  You will get the output:

  ```console
  Loaded graph into wasi-nn with ID: 0
  Created wasi-nn execution context with ID: 0
  Read input tensor, size in bytes: 602112
  Executed graph inference
  1.) [954](18.0458)banana
  2.) [940](15.6954)spaghetti squash
  3.) [951](14.1337)lemon
  4.) [942](13.2925)butternut squash
  5.) [941](10.6792)acorn squash
  ```

- Case 2: Apply the named model feature

  > Requires wasi-nn >= 0.5.0 and WasmEdge-plugin-wasi_nn-(*) >= 0.13.4.
  > The `--nn-preload` argument format is `<name>:<encoding>:<target>:<model_path>`.

  ```bash
  wasmedge --dir .:. --nn-preload demo:PyTorch:CPU:resnet18.pt wasmedge-wasinn-example-resnet18-image-named-model.wasm demo input.jpg
  ```

  You will get the same output:

  ```console
  Loaded graph into wasi-nn with ID: 0
  Created wasi-nn execution context with ID: 0
  Read input tensor, size in bytes: 602112
  Executed graph inference
  1.) [954](18.0458)banana
  2.) [940](15.6954)spaghetti squash
  3.) [951](14.1337)lemon
  4.) [942](13.2925)butternut squash
  5.) [941](10.6792)acorn squash
  ```
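
For orientation, the log lines above correspond to the usual wasi-nn sequence: load a graph, create an execution context, set the 1x3x224x224 `f32` input, run inference, and read back the 1000 class scores. The following is a minimal sketch of that sequence, assuming the `wasi-nn` crate's high-level `GraphBuilder` API; treat it as an illustration rather than the example's actual source, and verify the exact method names (in particular how a `--nn-preload`ed model is opened by name) against the crate documentation:

```rust
// Minimal sketch of the wasi-nn calls behind the logged steps above.
// Assumes the wasi-nn crate's high-level API; error handling is trimmed.
use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn infer(model_path: &str, tensor: &[f32]) -> Vec<f32> {
    // Case 1: load the TorchScript model from the path given on the command line.
    let graph = GraphBuilder::new(GraphEncoding::Pytorch, ExecutionTarget::CPU)
        .build_from_files([model_path])
        .expect("failed to load graph");
    // Case 2 would instead open the graph preloaded with --nn-preload by its name
    // ("demo"); the exact builder method for that depends on the crate version.

    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create execution context");
    // 1 x 3 x 224 x 224 f32 input: 602112 bytes, as reported in the log.
    ctx.set_input(0, TensorType::F32, &[1, 3, 224, 224], tensor)
        .expect("failed to set input");
    ctx.compute().expect("inference failed");

    // ResNet18 produces one score per ImageNet class.
    let mut output = vec![0f32; 1000];
    ctx.get_output(0, &mut output)
        .expect("failed to get output");
    output
}
```

Printing the top-5 entries is then just a matter of sorting the scores and mapping the winning indices to ImageNet labels.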

## Run from AOTInductor

### Generate Model

The PyTorch backend also supports loading a model from an AOTInductor-compiled shared library. To compile the PyTorch model, please follow the official PyTorch tutorial:

* https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html

Or you can use the pre-generated one at [`resnet18_pt2.so`](resnet18_pt2.so). However, it may not be suitable for your machine; it is suggested to use [`gen_resnet18_aoti`](gen_resnet18_aoti) to recompile the model.

> Notice: The AOTInductor output from the pip-installed PyTorch uses the old C++ ABI, which may be incompatible with the WasmEdge release. You may need to install libtorch **without the C++11 ABI** and rebuild WasmEdge with `-DWASMEDGE_USE_CXX11_ABI=OFF`:

```bash
## Example: build WasmEdge with CMake
cmake -Bbuild -GNinja -DWASMEDGE_USE_CXX11_ABI=OFF -DWASMEDGE_PLUGIN_WASI_NN_BACKEND=PyTorch .
```

### Execute

To run the AOTInductor model, use `--nn-preload` with the `PyTorchAOTI` encoding and specify an absolute path to the shared library:

```bash
export LD_LIBRARY_PATH=/path_to_libtorch/lib
./wasmedge --dir .:. --nn-preload demo:PyTorchAOTI:CPU:/absolute_path_model/resnet18_pt2.so wasmedge-wasinn-example-resnet18-image-named-model.wasm demo input.jpg
```

You will get the output:

```console
Loaded graph into wasi-nn with ID: 0
Created wasi-nn execution context with ID: 0
Read input tensor, size in bytes: 602112
Executed graph inference
1.) [954](18.0458)banana
2.) [940](15.6954)spaghetti squash
3.) [951](14.1337)lemon
4.) [942](13.2925)butternut squash
5.) [941](10.6792)acorn squash
```
@@ -0,0 +1,39 @@
# For more details, please follow the link below
# https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html

import os
import torch
from torchvision.models import ResNet18_Weights, resnet18

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.eval()

with torch.inference_mode():

    # Specify the generated shared library path
    aot_compile_options = {
        "aot_inductor.output_path": os.path.join(os.getcwd(), "resnet18_pt2.so"),
    }
    if torch.cuda.is_available():
        device = "cuda"
        aot_compile_options.update({"max_autotune": True})
    else:
        device = "cpu"

    model = model.to(device=device)
    example_inputs = (torch.randn(2, 3, 224, 224, device=device),)

    # min=2 is not a bug and is explained in the 0/1 Specialization Problem
    batch_dim = torch.export.Dim("batch", min=2, max=32)
    exported_program = torch.export.export(
        model,
        example_inputs,
        # Specify the first dimension of the input x as dynamic
        dynamic_shapes={"x": {0: batch_dim}},
    )
    so_path = torch._inductor.aot_compile(
        exported_program.module(),
        example_inputs,
        # Specify the generated shared library path
        options=aot_compile_options
    )
@@ -0,0 +1,17 @@
import os
import torch
from torch import jit

with torch.no_grad():
    fake_input = torch.rand(1, 3, 224, 224)
    # Load the pre-trained ResNet18 from torchvision via torch.hub
    model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
    model.eval()
    out1 = model(fake_input).squeeze()

    # Script the model and save it as the TorchScript fixture used by the example
    sm = torch.jit.script(model)
    if not os.path.exists("resnet18.pt"):
        sm.save("resnet18.pt")
    # Reload the saved model and check that it produces the same outputs
    load_sm = jit.load("resnet18.pt")
    out2 = load_sm(fake_input).squeeze()

    print(out1[:5], out2[:5])
File renamed without changes.
File renamed without changes.
File renamed without changes.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,12 @@
import os
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model_so_path = os.path.join(os.getcwd(), "resnet18_pt2.so")

# Load the AOTInductor-compiled shared library and run a smoke-test inference
model = torch._export.aot_load(model_so_path, device)
example_inputs = (torch.randn(1, 3, 224, 224, device=device),)

with torch.inference_mode():
    output = model(example_inputs)
    print(output)
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.