error with flash attention #85
Hi,
I'm trying to run amg_example.py, but I hit a UserWarning: "1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:253.)" at the call return torch.nn.functional.scaled_dot_product_attention(q_, k_, v_, attn_mask=attn_bias) in flash_4.py, line 369. How can I solve it?
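The warning means this particular PyTorch build ships without the FlashAttention kernels, so scaled_dot_product_attention silently falls back to another backend. A quick way to see which SDPA backends the installed build currently allows (a minimal sketch, assuming a PyTorch 2.x build with CUDA; the tensor names and print labels are illustrative only):

```python
import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())

# Backend-selection flags for torch.nn.functional.scaled_dot_product_attention.
# If the build was compiled without FlashAttention, enabling the flash backend
# alone does not help -- the kernel simply is not present in the binary.
print("flash enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem_efficient enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math enabled:         ", torch.backends.cuda.math_sdp_enabled())
```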
Comments

I have already set …
Hello @staple-pi, thanks for opening this issue. The error you're reporting doesn't seem related to flash_4, but to your installation of torch. Can you share a bit more detail on which version of torch you're using and on what platform?
torch 2.0.1+cu117 will reproduce the error! Is sam-fast specified to require torch 2.2?
Thank you for your reply! Actually, I have already installed the preview version of torch (2.2.0.dev20231123+cu121), and I'm running it in an Anaconda environment with Python 3.10.13 on Windows.
@staple-pi Hm, interesting. This might be a general issue with PyTorch on Windows. Can you use …
After testing, …
@staple-pi - I landed some optimizations for the AMG. Can you try again? |
Unfortunately, the error still appears.
Are you using sdp_kernel to enable only "flash_attention"? Flash Attention 2 is not supported on Windows, and we don't have an ETA on when/if this support will be added.
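In other words, restricting SDPA to the FlashAttention backend cannot work on a Windows build. A minimal sketch of the opposite workaround, using the torch.backends.cuda.sdp_kernel context manager available in the torch 2.x versions discussed here (the tensor shapes below are illustrative, not taken from the repo):

```python
import torch
import torch.nn.functional as F

# Toy tensors in the (batch, heads, seq_len, head_dim) layout SDPA expects.
q = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.float16)

# Exclude the FlashAttention backend; SDPA then picks the memory-efficient
# or math implementation, both of which work on builds without FlashAttention.
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=True, enable_mem_efficient=True
):
    out = F.scaled_dot_product_attention(q, k, v)
```

Conversely, entering the context with only enable_flash=True on a build that lacks FlashAttention turns the silent fallback into an explicit runtime error, which makes the missing kernel easier to diagnose.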