
Overlap matrix multiplication (needs tensor cores) with other operations like activation (needs CUDA cores and memory bandwidth) to speed things up #341

Open
fzyzcjy opened this issue Nov 4, 2024 · 0 comments


fzyzcjy commented Nov 4, 2024

🚀 The feature, motivation and pitch

Hi, thanks for the kernel! I have a naive thought: a deep learning forward/backward pass cannot be parallelized across layers, because you have to finish one operation/layer before computing the next one. But what if we compute two batches almost in parallel? Then, for example, while the first batch is running a big matrix multiplication (tensor cores busy, CUDA cores idle, memory bandwidth mostly idle), we could issue CUDA-core instructions to compute the activation functions for the second batch (tensor cores idle, CUDA cores busy, memory bandwidth fairly busy).

For Liger Kernel, maybe there could be a kernel that takes one batch for the matmul and another batch for the activation; a rough sketch of the idea is below.
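
As a minimal illustration of the intent (this is not Liger Kernel's API, and the tensor names and shapes here are hypothetical), one could launch the matmul for batch A and the activation for batch B on separate CUDA streams and hope the hardware scheduler overlaps them:

```python
# Sketch only: express "matmul for batch A" and "activation for batch B" as
# independent work on two CUDA streams. Whether they actually overlap depends
# on occupancy and the GPU scheduler, not on this code alone.
import torch

device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device, dtype=torch.float16)  # batch A, lhs
b = torch.randn(4096, 4096, device=device, dtype=torch.float16)  # batch A, rhs
x = torch.randn(4096, 4096, device=device, dtype=torch.float16)  # batch B, pre-activation

matmul_stream = torch.cuda.Stream()
act_stream = torch.cuda.Stream()

with torch.cuda.stream(matmul_stream):
    y = a @ b  # tensor-core heavy

with torch.cuda.stream(act_stream):
    z = torch.nn.functional.gelu(x)  # CUDA-core / bandwidth heavy

torch.cuda.synchronize()
```

A fused Liger-style kernel could go further than streams by interleaving the two workloads inside a single kernel, but that is exactly the design question this issue is asking about.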

Also see the discussion here: https://forums.developer.nvidia.com/t/concurrent-execution-of-cuda-and-tensor-cores/222985/33?u=ch271828n

Cross-posted: unslothai/unsloth#1233

Alternatives

Additional context
