
RFC-0031: ZenDNN Integration #52

Status: Open (wants to merge 1 commit into master)
Conversation

naveenthangudu

This RFC proposes an approach for integrating the ZenDNN library into PyTorch.
The integration will enable inference optimizations for deep learning workloads on AMD CPUs.

Signed-off-by: Naveen Kumar <[email protected]>
@facebook-github-bot
Contributor

Hi @naveenthangudu!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@SherlockNoMad

SherlockNoMad commented Mar 23, 2023

Hi @naveenthangudu
TorchScript is a legacy path that is currently maintained by the PyTorch community.
We would recommend integrating new backends via the PT2 path.

See
https://dev-discuss.pytorch.org/t/registering-new-compiler-backend-in-pytorch2-0/1092/4
https://colab.research.google.com/drive/1Zh-Uo3TcTH8yYJF-LLo5rjlHVMtqvMdf#scrollTo=KA_FS0D831T5
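The backend-registration flow described in the links above can be sketched as follows. This is a minimal, generic illustration of the PT2 custom-backend API, not anything ZenDNN-specific; `my_backend` and `fn` are hypothetical names:

```python
import torch

def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # A torch.compile backend receives the captured FX graph plus example
    # inputs. A real backend would rewrite/lower the graph here; this
    # trivial one just returns the graph's forward unchanged.
    return gm.forward

@torch.compile(backend=my_backend)
def fn(x):
    return torch.sin(x) + 1.0

out = fn(torch.zeros(3))  # triggers capture, hands the graph to my_backend
```

A production backend would replace the `return gm.forward` with graph transformation and code generation, which is where ZenDNN-specific fusions would live.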

@naveenthangudu
Author

@SherlockNoMad, thanks for the links.
We are evaluating integration of our graph optimizations into torch.compile. We plan to add our optimizations in the TorchScript path prior to a torch.compile backend. Is TorchScript going to be deprecated soon?

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@gottbrath

Thanks for filing this RFC. Just to connect the dots for reviewers, this is a follow-up to

pytorch/pytorch#76244 and pytorch/pytorch#76242

+1 to Sherlock's comment that integration with the new compile() and export() stack is probably a more fruitful approach than TorchScript.

@albanD
Contributor

albanD commented Mar 27, 2023

Is TorchScript going to be deprecated soon?

@naveenthangudu this is being looked into. Also given that there are no maintainers from the core team working on it, I have no idea who could review a PR integrating a new backend there. So it is very possible for it to stay in limbo for a while :/

@naveenthangudu naveenthangudu marked this pull request as draft April 19, 2023 05:55
@naveenthangudu naveenthangudu marked this pull request as ready for review April 19, 2023 05:58
@kumardeepakamd

Hello all, I wanted to follow up on this request. We would like to upstream AMD EPYC processor-specific optimizations, termed ZenDNN, to PyTorch. PyTorch 2.0 is fine. Should we file a new request for PT 2.0? Kindly advise.

@albanD
Contributor

albanD commented Jun 5, 2023

cc @SherlockNoMad @ezyang for questions about torch.compile integrations

@houseroad
Member

We are resuming the integration, since CPU optimization has become quite important for Meta-internal use cases.

For the AMD CPU use case, there are at least two feasible paths: 1) integrate the related optimizations into the Triton CPU backend, or 2) integrate ZenDNN as a backend for Inductor.

cc: @kumardeepakamd , @naveenthangudu

@jgong5

jgong5 commented Aug 8, 2024

We are resuming the integration, since CPU optimization has become quite important for Meta-internal use cases.

For the AMD CPU use case, there are at least two feasible paths: 1) integrate the related optimizations into the Triton CPU backend, or 2) integrate ZenDNN as a backend for Inductor.

cc: @kumardeepakamd , @naveenthangudu

I guess we are talking about Conv/GEMM optimizations for Inductor, right? I'm not sure how mature the Triton CPU backend is for Convs and GEMMs. For x86 and ARM CPUs, we are leveraging oneDNN Conv and GEMM ATen fusion ops in Inductor. For AMD CPUs, integrating AMD-specific optimizations into oneDNN is perhaps a better way to go, since it can leverage the existing PyTorch integration. Or, if that is harder, integrating ZenDNN the same way we integrate oneDNN might work.

@houseroad
Member

We do have triton-cpu: https://github.com/triton-lang/triton-cpu. The goal is to use Inductor (AOT mode) to 1) leverage/generate high-performance kernels and 2) remove the framework overhead. The models are similar to arches like DLRM (https://github.com/facebookresearch/dlrm), not CV models like ResNet.

@jgong5

jgong5 commented Aug 8, 2024

We do have triton-cpu: https://github.com/triton-lang/triton-cpu. The goal is to use Inductor (AOT mode) to 1) leverage/generate high-performance kernels and 2) remove the framework overhead. The models are similar to arches like DLRM (https://github.com/facebookresearch/dlrm), not CV models like ResNet.

If it is all about GEMMs, we also have CPP GEMM template support in Inductor, which we are actively developing: pytorch/pytorch#125683

@naveenthangudu
Author

We are resuming the integration, since CPU optimization has become quite important for Meta-internal use cases.

For the AMD CPU use case, there are at least two feasible paths: 1) integrate the related optimizations into the Triton CPU backend, or 2) integrate ZenDNN as a backend for Inductor.

cc: @kumardeepakamd , @naveenthangudu

Hi @houseroad

Following @SherlockNoMad's suggestion, we have introduced a torch.compile-based CPU inference extension for the AMD EPYC™ series, known as zentorch. This extension incorporates ZenDNN and optimizes deep learning inference workloads. Our team has recently launched the first version of the extension. For more information, please visit this link: https://www.amd.com/en/developer/resources/technical-articles/supercharge-your-ai-inference-with-zendnn-on-amd-epyc-cpus.html.
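For context, the AMD article describes zentorch as a torch.compile backend. A hedged usage sketch follows: the `"zentorch"` backend name and the import-to-register behavior are taken from that article as assumptions, and the code falls back to the default Inductor backend when the extension is not installed:

```python
import torch

model = torch.nn.Linear(8, 8).eval()
x = torch.randn(4, 8)

try:
    # Importing zentorch is assumed to register the "zentorch" backend.
    import zentorch  # noqa: F401
    compiled = torch.compile(model, backend="zentorch")
except ImportError:
    # zentorch targets AMD EPYC CPUs; elsewhere, fall back to the default backend.
    compiled = torch.compile(model)

with torch.no_grad():
    out = compiled(x)
```

The try/except keeps the script portable: the same code path runs on any CPU, only the backend selection changes.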

@houseroad
Member

Thanks for the update, @naveenthangudu. Does this work with AOTInductor? We probably need to give it a try.

@naveenthangudu
Author

naveenthangudu commented Aug 13, 2024

Thanks for the update, @naveenthangudu. Does this work with AOTInductor? We probably need to give it a try.

Hi @houseroad

We have started a POC of using the extension with export and AOTInductor. Would you be interested in collaborating?

@houseroad
Member

Hi @naveenthangudu , sure, we would like to see if we can adopt it for Meta's use cases.

9 participants