Describe the bug
float32 inputs get transformed internally into float64 inputs, which leads to RuntimeError: Input type (double) and bias type (float) should be the same.
To Reproduce
1. In get_started_pytorch.py, import HopSkipJump and replace FastGradientMethod(estimator=classifier, eps=0.2) with HopSkipJump(classifier=classifier), as sketched below.
2. Run the script.
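For concreteness, a rough sketch of the change (the variable names classifier and x_test are assumed to match those used in the get_started_pytorch.py example):

```python
from art.attacks.evasion import HopSkipJump

# Original attack in the example:
# attack = FastGradientMethod(estimator=classifier, eps=0.2)

# Replacement that triggers the error:
attack = HopSkipJump(classifier=classifier)
x_test_adv = attack.generate(x=x_test)
# -> RuntimeError: Input type (double) and bias type (float) should be the same
```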
Expected behavior
The get_started_pytorch.py example should run successfully with the HopSkipJump attack in place of FastGradientMethod.
System information (please complete the following information):
Linux 6.1.0-23-amd64 Debian 6.1.99-1 x86_64 GNU/Linux
Python 3.11.2
adversarial-robustness-toolbox==1.18.1
numpy==2.0.1
torch==2.4.0
Possible fix
Through trial and error I tried to find the locations where float32 gets transformed into float64. My working solution so far is this, but there might be places I missed or a better fix altogether:
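As an illustration only (this is an assumption, not necessarily the patch referred to above), one user-side workaround, assuming the float64 arrays are produced inside the attack and reach the model through classifier.predict, is to cast inputs back to float32 in a thin wrapper around predict:

```python
import numpy as np
from art.attacks.evasion import HopSkipJump

# Illustrative workaround: cast whatever the attack feeds to the model back to
# float32 before the PyTorch forward pass. `classifier` and `x_test` are
# assumed to be the objects from get_started_pytorch.py.
_original_predict = classifier.predict

def _predict_float32(x, **kwargs):
    # HopSkipJump builds some candidate inputs as float64 arrays, which PyTorch
    # rejects against float32 weights; cast them back here.
    return _original_predict(np.asarray(x, dtype=np.float32), **kwargs)

classifier.predict = _predict_float32

attack = HopSkipJump(classifier=classifier)
x_test_adv = attack.generate(x=x_test.astype(np.float32))
```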