Chapter 4: Integrate Existing Libraries in the Environment - Error: Target triple should not be empty #200

Open
DarkenStar opened this issue Aug 11, 2024 · Discussed in #199 · 0 comments


Originally posted by DarkenStar August 9, 2024
I met an error when running the program from the "Integrate Existing Libraries in the Environment" section of Chapter 4.
Here is my code:

# Imports assumed from earlier cells of the chapter (not included in my original snippet)
import numpy as np
import torch
import tvm
from tvm import relax
from tvm.script import relax as R

'''
Integrate Existing Libraries in the Environment
'''
print(tvm.target.Target.list_kinds())

# Register Runtime Function
@tvm.register_func("env.linear", override=True)
def torch_linear(x: tvm.nd.NDArray,
                 w: tvm.nd.NDArray,
                 b: tvm.nd.NDArray,
                 out: tvm.nd.NDArray):
    # from_dlpack is a zero-copy conversion
    x_torch = torch.from_dlpack(x)
    w_torch = torch.from_dlpack(w)
    b_torch = torch.from_dlpack(b)
    out_torch = torch.from_dlpack(out)
    torch.mm(x_torch, w_torch.T, out=out_torch)
    torch.add(out_torch, b_torch, out=out_torch)
    
@tvm.register_func("env.relu", override=True)
def lnumpy_relu(x: tvm.nd.NDArray,
                out: tvm.nd.NDArray):
    x_torch = torch.from_dlpack(x)
    out_torch = torch.from_dlpack(out)
    torch.maximum(x_torch, torch.Tensor([0.0]), out=out_torch)
    
    
@tvm.script.ir_module 
class MyModuleWithExternCall:
    @R.function
    def main(x: R.Tensor((1, 784), "float32"), # type: ignore
             w0: R.Tensor((128, 784), "float32"), # type: ignore
             b0: R.Tensor((128, ), "float32"), # type: ignore
             w1: R.Tensor((10, 128), "float32"), # type: ignore
             b1: R.Tensor((10, ), "float32")): # type: ignore
        # block 0
        with R.dataflow():
            lv0 = R.call_dps_packed("env.linear", (x, w0, b0), out_sinfo=R.Tensor((1, 128), dtype="float32"))
            lv1 = R.call_dps_packed("env.relu", (lv0, ), out_sinfo=R.Tensor((1, 128), dtype="float32"))
            out = R.call_dps_packed("env.linear", (lv1, w1, b1), out_sinfo=R.Tensor((1, 10), dtype="float32"))
            R.output(out)
        return out
    
cuda_device = tvm.device("cuda", 0)
if cuda_device.exist:
    print("CUDA device is available")
else:
    print("CUDA device is not available")

ex = relax.build(MyModuleWithExternCall, target="llvm")
try:
    vm = relax.VirtualMachine(ex, tvm.cpu())
except Exception as e:
    print("Error:", e)

print(f"vm is None? {vm is None}")
nd_res = vm["main"](data_nd, 
                    nd_params["w0"],
                    nd_params["b0"],
                    nd_params["w1"],
                    nd_params["b1"])

pred_kind = np.argmax(nd_res.numpy(), axis=1)
print("MyModuleWithExternCall Prediction:", class_names[pred_kind[0]])

and here is the output of the Python debug console:

['llvm', 'c', 'cuda', 'nvptx', 'rocm', 'metal', 'opencl', 'vulkan', 'webgpu', 'sdaccel', 'aocl', 'aocl_sw_emu', 'hexagon', 'stackvm', 'ext_dev', 'hybrid', 'composite', 'test', 'ccompiler', 'example_target_hook']
CUDA device is available
Error: Target triple should not be empty
vm is None? False
MyModuleWithExternCall Prediction: Coat

It's weird that the try-except block catches an error yet the program still runs to the end; if I remove the try-except block, the program does not execute to completion.
Additionally, the other parts of the chapter that use relax.VirtualMachine don't raise any error.
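
In case it is relevant, one workaround I am considering but have not verified is to pass an explicit LLVM target triple instead of the bare "llvm" string; the triple below is only my guess for a 64-bit Windows build:

# Unverified sketch: give LLVM an explicit -mtriple so the target triple is not empty.
# "x86_64-pc-windows-msvc" is a guess for 64-bit Windows; adjust for your platform.
target = tvm.target.Target("llvm -mtriple=x86_64-pc-windows-msvc")
ex = relax.build(MyModuleWithExternCall, target=target)
vm = relax.VirtualMachine(ex, tvm.cpu())

I have not confirmed whether this avoids the "Target triple should not be empty" error on my machine.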
My TVM, LLVM, and PyTorch versions are:

(tvm-build) D:\Work\tvm\tvm0.18\tvm\python>python -c "import tvm; print(tvm.__version__)"
0.18.dev0

(tvm-build) D:\Work\tvm\tvm0.18\tvm\python>llvm-config --version
14.0.6

(tvm-build) D:\Work\tvm\tvm0.18\tvm\python>conda list | findstr torch
pytorch                   2.3.1           py3.9_cuda11.8_cudnn8_0    pytorch
pytorch-cuda              11.8                 h24eeafa_5    pytorch
pytorch-mutex             1.0                        cuda    pytorch
torchaudio                2.3.1                    pypi_0    pypi
torchvision               0.18.1                   pypi_0    pypi

Asking for help here. Thank you very much!!!
