
[New Project] Inference of Stable-Diffusion on All Platforms with ONNXRuntime #446

Open
Windsander opened this issue Jun 20, 2024 · 0 comments


CppFast Diffusers Inference (CFDI)

CppFast Diffusers Inference (CFDI) is a C++ project that leverages the acceleration capabilities of ONNXRuntime and the broad compatibility of the .onnx model format to provide a convenient solution for the engineering deployment of Stable Diffusion.

You can find the project here: https://github.com/Windsander/CFDI-StableDiffusionONNXFast

The project aims to implement a high-performance Stable Diffusion inference library in C/C++ on top of ONNXRuntime, comparable to HuggingFace Diffusers, with high model interchangeability.
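To illustrate what "high model interchangeability" can mean in practice, here is a minimal sketch of a stage-based pipeline design. The names (`OnnxModule`, `Pipeline`, etc.) are purely illustrative assumptions, not the actual CFDI API; a real stage would wrap an `Ort::Session` around one .onnx file, which is stubbed out here:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical interface (not the CFDI API): each Stable Diffusion stage
// is an interchangeable module backed by its own .onnx file, so a text
// encoder, UNet, or VAE can be swapped independently of the others.
struct OnnxModule {
    virtual ~OnnxModule() = default;
    virtual std::vector<float> run(const std::vector<float>& input) = 0;
};

// Stub stage standing in for a real ONNXRuntime session.
struct IdentityModule : OnnxModule {
    std::vector<float> run(const std::vector<float>& input) override {
        return input;  // a real module would invoke Ort::Session::Run here
    }
};

// A pipeline is an ordered chain of modules
// (text encoder -> UNet loop -> VAE decoder in the SD case).
struct Pipeline {
    std::vector<std::unique_ptr<OnnxModule>> stages;
    std::vector<float> run(std::vector<float> x) {
        for (auto& stage : stages) x = stage->run(x);
        return x;
    }
};
```

Because every stage shares one interface, replacing a model is a one-line change: construct a different module and drop it into `stages`.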

Why choose ONNXRuntime as our Inference Engine?

  • Open Source: ONNXRuntime is an open-source project, allowing users to freely use and modify it to suit different application scenarios.

  • Scalability: It supports custom operators and optimizations, allowing for extensions and optimizations based on specific needs.

  • High Performance: ONNXRuntime is highly optimized to provide fast inference speeds, suitable for real-time applications.

  • Strong Compatibility: It supports model conversion from multiple deep learning frameworks (such as PyTorch, TensorFlow), making integration and deployment convenient.

  • Cross-Platform Support: ONNXRuntime supports multiple hardware backends via execution providers, including CPU, GPU, and NPU, enabling efficient execution on a wide range of devices.

  • Community and Enterprise Support: Developed and maintained by Microsoft, it has an active community and enterprise support, providing continuous updates and maintenance.

  • Below is what actually happens in [Example: 1-step img2img inference] in latent space (with all models skipped):

See Details on the Project Main Page
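The 1-step img2img example above, with all models skipped, reduces to plain latent-space math: the scheduler first noises the input latent, then a single denoising step maps it back. A minimal sketch, assuming a DDPM-style noising formula and an oracle noise prediction in place of the skipped UNet (all names here are illustrative, not CFDI code):

```cpp
#include <cmath>
#include <vector>

// Forward (noising) step of a DDPM-style scheduler:
//   x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps
std::vector<float> add_noise(const std::vector<float>& x0,
                             const std::vector<float>& eps,
                             float alpha_bar) {
    const float a = std::sqrt(alpha_bar);
    const float b = std::sqrt(1.0f - alpha_bar);
    std::vector<float> xt(x0.size());
    for (size_t i = 0; i < x0.size(); ++i) xt[i] = a * x0[i] + b * eps[i];
    return xt;
}

// One denoising step: recover the x0 estimate from x_t and a noise
// prediction. Since the UNet is skipped, we feed the true eps back in,
// so the round trip reproduces the input latent exactly.
std::vector<float> denoise_step(const std::vector<float>& xt,
                                const std::vector<float>& eps_pred,
                                float alpha_bar) {
    const float a = std::sqrt(alpha_bar);
    const float b = std::sqrt(1.0f - alpha_bar);
    std::vector<float> x0(xt.size());
    for (size_t i = 0; i < xt.size(); ++i) x0[i] = (xt[i] - b * eps_pred[i]) / a;
    return x0;
}
```

With a real UNet, `eps_pred` would come from the model and the recovered latent would differ from the input; skipping all models makes the round trip an identity, which is a useful sanity check for the scheduler code itself.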
