CppFast Diffusers Inference (CFDI)
CppFast Diffusers Inference (CFDI) is a C++ project. Its purpose is to leverage the acceleration capabilities of ONNXRuntime and the high compatibility of the .onnx model format to provide a convenient solution for the engineering deployment of Stable Diffusion.
You can find the project here: https://github.com/Windsander/CFDI-StableDiffusionONNXFast
The project aims to implement a high-performance SD inference library based on C/C++ using ONNXRuntime, comparable to HuggingFace Diffusers, with high model interchangeability.
Why choose ONNXRuntime as our Inference Engine?
Open Source: ONNXRuntime is an open-source project, allowing users to freely use and modify it to suit different application scenarios.
Scalability: It supports custom operators and optimizations, allowing for extensions and optimizations based on specific needs.
High Performance: ONNXRuntime is highly optimized to provide fast inference speeds, suitable for real-time applications.
Strong Compatibility: It supports model conversion from multiple deep learning frameworks (such as PyTorch, TensorFlow), making integration and deployment convenient.
Cross-Platform Support: ONNXRuntime runs on multiple hardware backends through its execution providers, including CPU, GPU (e.g., CUDA, DirectML), and mobile/NPU accelerators, enabling efficient execution on various devices.
Community and Enterprise Support: Developed and maintained by Microsoft, it has an active community and enterprise support, providing continuous updates and maintenance.
Below is what actually happens in [Example: 1-step img2img inference] in latent space (skipping all models):
See details on the project main page.