🚀 The feature, motivation and pitch
Hi, and thanks for the kernel! I have a naive thought: a deep learning forward/backward pass cannot be parallelized across layers, because each operation/layer must finish before the next one can start. But what if we compute two batches almost in parallel? Then, for example, while the first batch is running a big matrix multiplication (tensor cores busy, CUDA cores idle, memory bandwidth mostly idle), we could issue CUDA-core instructions to compute the activation functions for the second batch (tensor cores idle, CUDA cores busy, memory bandwidth partially busy).
For Liger Kernel, maybe there could be a kernel that takes one batch for the matmul and another batch for the activation.
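For illustration, here is a minimal PyTorch sketch of the idea using two CUDA streams (this is not a Liger Kernel API; the sizes and the choice of GeLU are arbitrary assumptions). Kernels launched on different streams are merely *allowed* to run concurrently; whether the tensor-core matmul and the CUDA-core activation actually overlap depends on the hardware scheduler and occupancy.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes, chosen only for illustration.
N = 4096
device = torch.device("cuda")
x1 = torch.randn(N, N, device=device)  # "batch 1" input
y2 = torch.randn(N, N, device=device)  # "batch 2" pre-activation
w = torch.randn(N, N, device=device)

matmul_stream = torch.cuda.Stream()
act_stream = torch.cuda.Stream()

torch.cuda.synchronize()

with torch.cuda.stream(matmul_stream):
    # Tensor-core-heavy work for batch 1.
    y1 = x1 @ w

with torch.cuda.stream(act_stream):
    # CUDA-core-heavy elementwise work for batch 2; the GPU
    # scheduler *may* overlap this with the matmul above.
    a2 = F.gelu(y2)

torch.cuda.synchronize()
```

Any real gain would need to be verified with a profiler (e.g. Nsight Systems), since the two kernels may still serialize if either one saturates the SMs on its own.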
There is also a related discussion here: https://forums.developer.nvidia.com/t/concurrent-execution-of-cuda-and-tensor-cores/222985/33?u=ch271828n
Cross-posted: unslothai/unsloth#1233
Alternatives
Additional context