All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- fMHA/cutlass (backward): Massive performance improvements when `batch_size * num_heads` is low (10x+)
- fMHA/cutlass: Further performance improvements for both the forward & backward kernels
- fMHA (backward): Now dispatching to cutlass when `embed_dim>64`
- fMHA: Updated Flash-Attention to `v1.0.5`
- fMHA now runs on H100 (support is experimental)
- Display `nvcc` version used to compile `xformers` in `python -m xformers.info`
- Fixed performance regression with `nvcc>11.6` (facebookresearch#712)
- fMHA/cutlass: Fixed `nan` in the output when using a `torch.Tensor` with `-inf` prefixes as `attn_bias` (facebookresearch#722)
- fMHA/cutlass: Fixed `nan` in the output when the sequence length is larger than `2 ** 15` (facebookresearch#719)
- fMHA/cutlass: Significant performance improvements (up to 2x) for both the forward pass and backward pass
- fMHA/cutlass: The kernels are now deterministic
- fMHA/cutlass: Fixed backward pass correctness when using dropout (facebookresearch#724)
- Added `xformers.ops.index_select_cat` and `xformers.ops.scaled_index_add` - those are experimental functions that only work with a few shapes, and can be used to write efficient stochastic depth in transformer architectures for instance
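  A plain-PyTorch sketch of the stochastic-depth pattern these experimental ops are meant to fuse; the helper below and its shapes are illustrative and do not reproduce the exact xformers signatures:

  ```python
  import torch

  def stochastic_depth_residual(x, residual_fn, keep_ratio=0.5):
      # Illustrative only: apply `residual_fn` to a random subset of the batch.
      # The `index_select` mirrors what `index_select_cat` accelerates, and the
      # `index_add` mirrors what `scaled_index_add` accelerates.
      batch = x.shape[0]
      keep = torch.randperm(batch, device=x.device)[: max(1, int(batch * keep_ratio))]
      update = residual_fn(x.index_select(0, keep))
      return x.index_add(0, keep, update, alpha=1.0)
  ```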
- fMHA: `memory_efficient_attention` now accepts `torch.Tensor` as attention bias for any seqlen, although there are still requirements on the alignment of the bias tensor (see facebookresearch#683)
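  A hedged sketch of passing a dense bias tensor (shapes are illustrative, and the alignment requirements from facebookresearch#683 still apply):

  ```python
  import torch
  import xformers.ops as xops

  B, M, H, K = 2, 128, 8, 64
  q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
  k = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
  v = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)

  # One additive [M, M] bias matrix per batch element and head; -inf entries
  # mask out the second half of the keys for every query.
  bias = torch.zeros(B, H, M, M, device="cuda", dtype=torch.float16)
  bias[:, :, :, M // 2:] = float("-inf")

  out = xops.memory_efficient_attention(q, k, v, attn_bias=bias)
  ```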
- fMHA: Fixed BW pass on Sm86/Sm89 GPUs when `K > 64` (RTX 3090, RTX 4090, A6000, ...) [facebookresearch#631]
- fMHA/CUTLASS: Added tensor attn bias support [facebookresearch#587] - contribution from @jfc4050
- fMHA/CUTLASS: Added tensor attn bias grad support [facebookresearch#587] - contribution from @jfc4050
- fMHA/CUTLASS: Added dropout support [facebookresearch#587] - contribution from @jfc4050
- fMHA: Added support for varying sequence lengths [facebookresearch#500]
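  For illustration, sequences of different lengths can be packed into a single call by passing a block-diagonal mask as the attention bias; the mask class below reflects the current `xformers.ops.fmha` API and may differ from the interface facebookresearch#500 originally introduced:

  ```python
  import torch
  import xformers.ops as xops
  from xformers.ops import fmha

  seqlens = [15, 32, 7]  # three sequences of different lengths, packed along dim 1
  q = torch.randn(1, sum(seqlens), 8, 64, device="cuda", dtype=torch.float16)
  k = torch.randn_like(q)
  v = torch.randn_like(q)

  # Each sequence only attends to itself.
  attn_bias = fmha.BlockDiagonalMask.from_seqlens(seqlens)
  out = xops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
  ```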
- Updated triton dependency [facebookresearch#418]
- Strip lineinfo from binaries, reducing the binary size [facebookresearch#549]
- Added support for pip wheels [facebookresearch#588, facebookresearch#573, facebookresearch#534, facebookresearch#523, ...] - big thanks to @AbdBarho!
- Fixed compatibility with Python 3.7 [facebookresearch#541] - thanks to @susumuota
- fMHA: Fixed strides for QKV gradients for cutlass attention [facebookresearch#535]
- fMHA: Stricter inputs validation to avoid CUDA errors for unsupported inputs [facebookresearch#592]
- fMHA/Flash-Attention: Updated to https://github.com/HazyResearch/flash-attention/commit/a1f49a2b92b6fa022379bbebafed9d7f5e96a675 with multiple changes from @TriDao that make the operator up to 20% faster
- fMHA/Flash-Attention: Fixed backward pass wrapper, where non-contiguous gradients could give the wrong result [facebookresearch#548]
- fMHA: Separate each operator into forward and backward operators. It's now possible to use any combination of forward+backward (for instance Triton forward and Flash-Attention backward) [facebookresearch#560]
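  A hedged sketch of selecting the forward and backward operators independently through the `op=` argument; the operator classes exposed under `xformers.ops.fmha` depend on the build, so the names below are assumptions:

  ```python
  import torch
  import xformers.ops as xops
  from xformers.ops import fmha

  q = torch.randn(1, 1024, 8, 128, device="cuda", dtype=torch.float16, requires_grad=True)
  k = torch.randn_like(q)
  v = torch.randn_like(q)

  # CUTLASS forward combined with the Flash-Attention backward (assumed names).
  out = xops.memory_efficient_attention(q, k, v, op=(fmha.cutlass.FwOp, fmha.flash.BwOp))
  out.sum().backward()
  ```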
- fMHA: Added a Triton operator for the forward pass from Flash-Attention, authored by @TriDao; it will be automatically used on A100 when compatible
- fMHA: Added `xformers.ops.memory_efficient_attention_forward`, `xformers.ops.memory_efficient_attention_forward_requires_grad`, `xformers.ops.memory_efficient_attention_backward` for power-users who write custom autograd functions [facebookresearch#560]
- fMHA: Support for custom scaling for the CUTLASS-based kernel [facebookresearch#530] - contribution from @comaniac
- fMHA/CUTLASS: The current CUDA stream is now used by the kernel [facebookresearch#491]
- fMHA/CUTLASS: Improve overall performance
- SwiGLU: Added `xformers.ops.SwiGLU` and its functional counterpart (`xformers.ops.swiglu`) [facebookresearch#490]
- fMHA: Possible to combine CUTLASS's forward with flash-attention's backward pass [facebookresearch#469] - improves performance on A100 for `K = 128`
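For reference, `xformers.ops.swiglu` (added above) fuses the SwiGLU feedforward sketched below; this is the unfused plain-PyTorch definition, not the xformers kernel, and the weight layout is an assumption:

```python
import torch
import torch.nn.functional as F

def swiglu_reference(x, w1, b1, w2, b2, w3, b3):
    # SwiGLU: a SiLU-gated hidden state followed by an output projection.
    hidden = F.silu(F.linear(x, w1, b1)) * F.linear(x, w2, b2)
    return F.linear(hidden, w3, b3)
```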
- fMHA: Add custom `xformers.ops.unbind` operator to avoid a `cat` in the attention block [facebookresearch#458]
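  A sketch of the pattern this targets: one fused QKV projection whose output is split into views, rather than three separate tensors that would need to be re-concatenated in the backward pass (shapes illustrative; plain `torch.unbind` is shown, with `xformers.ops.unbind` as the variant that avoids the `cat`):

  ```python
  import torch

  B, M, H, K = 2, 128, 8, 64
  x = torch.randn(B, M, H * K)
  qkv_proj = torch.nn.Linear(H * K, 3 * H * K)

  # One fused projection, then split into q, k, v views of the same storage.
  qkv = qkv_proj(x).reshape(B, M, 3, H, K)
  q, k, v = qkv.unbind(dim=2)
  ```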
- fMHA: Added CUTLASS-based kernel for `xformers.ops.memory_efficient_attention`. This kernel is automatically selected depending on the inputs, and works on any GPU after P100 [facebookresearch#362]
- Removed duplicated biases in the FusedMLP layers [facebookresearch#317]
- Fix rotary embeddings not respecting input types [facebookresearch#326]
- Fix Poolformer style instantiating useless projection layers [facebookresearch#349]
- Fix layer position not being properly tracked, causing extra layernorms for programmatic xformers [facebookresearch#348]
- Pass `use_triton` flag to LayerNorm module [facebookresearch#336]
- Four blocksparsity layouts from DeepSpeed [facebookresearch#320]
- Support several initialization options [facebookresearch#312]
- Conv2DFeedforward feedforward part [facebookresearch#321]
- VisualAttention [facebookresearch#329]
- Automatic blocksparse for causal attention [facebookresearch#334]
- Better hierarchical transformer generation [facebookresearch#345]
- Fused operations with AOTAutograd/NVFuser, integration into MLP [facebookresearch#357]
- Refactor LRA code to use Pytorch Lightning [facebookresearch#343]
- Fix some torchscriptability [facebookresearch#246]
- Fix FourierMix compatibility with AMP [facebookresearch#258]
- Better asserts on QKV dimensions [facebookresearch#264]
- Better performance for FusedMLP and FusedLinearLayer [facebookresearch#283]
- Fix Deepnorm init missing self-attention [facebookresearch#284]
- Simplicial Embeddings [facebookresearch#259]
- Mem efficient attention, FW pass [facebookresearch#267]
- MHA benchmark
- MLP benchmark
- Move all triton kernels to triton v2 [facebookresearch#272]
- Mem efficient attention, BW pass [facebookresearch#281]
- Metaformer support [facebookresearch#294]
- Expose bias flag for feedforwards, same default as Timm [facebookresearch#220]
- Update eps value for layernorm, same default as torch [facebookresearch#221]
- PreNorm bugfix, only one input was normalized [facebookresearch#233]
- Fix bug where embedding dimensions that did not match model dim would lead to a crash [facebookresearch#244]
- Add DeepNet (DeepNorm) residual path and init [facebookresearch#227]
- Compositional Attention [facebookresearch#41]
- Experimental Ragged attention [facebookresearch#189]
- Mixture of Experts [facebookresearch#181]
- BlockSparseTensor [facebookresearch#202]
- Nd-tensor support for triton softmax [facebookresearch#210]
- Bugfix Favor, single feature map [facebookresearch#183]
- Sanity check blocksparse settings [facebookresearch#207]
- Fixed some picklability [facebookresearch#204]
- Much faster fused dropout [facebookresearch#164]
- Fused dropout repeatability [facebookresearch#173]
- Embedding weight tying option [facebookresearch#172]
- Fix dropout setting not being properly passed in many attentions [facebookresearch#123]
- Fix self attention optimization not being triggered, broken residual path [facebookresearch#119]
- Improve speed by not using contiguous Tensors when not needed [facebookresearch#119]
- Attention mask wrapper [facebookresearch#113]
- ViT comparison benchmark [facebookresearch#117]
- Homogenizing the masks, additive or bool [facebookresearch#79][facebookresearch#85][facebookresearch#86]
- Fix causality flag not being respected [facebookresearch#103]
- Enabling FusedLayerNorm by default in the factory if Triton is available
- Fixing Favor with fp16
- Fixing Favor trainability
- Fused dropout/bias/activation layer [facebookresearch#58]
- Fused layernorm used by default in the factory [facebookresearch#92]
- Nystrom causal attention [facebookresearch#75]
- More robust blocksparse [facebookresearch#24]
- Rotary embeddings [facebookresearch#32]
- More flexible layernorm [facebookresearch#50]