
float type mismatch #2475

Open
j-moeller opened this issue Aug 6, 2024 · 1 comment

@j-moeller

Describe the bug
float32 inputs are internally transformed into float64, which leads to RuntimeError: Input type (double) and bias type (float) should be the same.
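
For context, the underlying PyTorch error can be reproduced independently of ART. A minimal sketch (the layer and shapes here are chosen only for illustration, not taken from ART):

import numpy as np
import torch

model = torch.nn.Conv2d(1, 4, kernel_size=3)  # weight and bias are float32 by default

x64 = np.zeros((1, 1, 28, 28), dtype=np.float64)  # float64, e.g. after a NumPy op promoted it
model(torch.from_numpy(x64))  # raises a dtype-mismatch RuntimeError like the one above

model(torch.from_numpy(x64.astype(np.float32)))  # casting back to float32 runs fine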

To Reproduce

  1. Create a copy of get_started_pytorch.py
  2. Import HopSkipJump and replace FastGradientMethod(estimator=classifier, eps=0.2) with HopSkipJump(classifier=classifier) (see the sketch after this list)
  3. Run the script
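
A sketch of the modification, assuming the variable names (classifier, x_test) used in get_started_pytorch.py:

from art.attacks.evasion import HopSkipJump

# previously: attack = FastGradientMethod(estimator=classifier, eps=0.2)
attack = HopSkipJump(classifier=classifier)
x_test_adv = attack.generate(x=x_test)  # the RuntimeError surfaces here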

Expected behavior
The get_started_pytorch.py script should run with the HopSkipJump attack instead of the FastGradientMethod.

System information (please complete the following information):

  • Linux 6.1.0-23-amd64 Debian 6.1.99-1 x86_64 GNU/Linux
  • Python 3.11.2
  • adversarial-robustness-toolbox==1.18.1
  • numpy==2.0.1
  • torch==2.4.0

Possible fix

Through trial and error I tried to find the locations where float32 arrays get promoted to float64. My working fix so far is the patch below, but there might be places I missed or a better fix altogether:
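
One plausible source of the promotion (an assumption on my part, not a verified trace of every code path) is NumPy 2.0's revised promotion rules (NEP 50), under which a float32 array mixed with a float64 NumPy scalar now yields float64:

import numpy as np

a = np.ones(3, dtype=np.float32)
print((a * 2.0).dtype)              # float32: Python scalars remain "weak"
print((a * np.float64(2.0)).dtype)  # float64 under NumPy 2.x; float32 under the old value-based casting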

@@ -438,9 +438,10 @@ class HopSkipJump(EvasionAttack):
             epsilon = 2.0 * dist / np.sqrt(self.curr_iter + 1)
             success = False

+
             while not success:
                 epsilon /= 2.0
-                potential_sample = current_sample + epsilon * update
+                potential_sample = (current_sample + epsilon * update).astype(current_sample.dtype)
                 success = self._adversarial_satisfactory(  # type: ignore
                     samples=potential_sample[None],
                     target=target,
@@ -499,6 +500,9 @@ class HopSkipJump(EvasionAttack):
             if threshold is None:
                 threshold = np.minimum(upper_bound * self.theta, self.theta)

+        upper_bound = upper_bound.astype(current_sample.dtype)
+        lower_bound = lower_bound.astype(current_sample.dtype)
+
         # Then start the binary search
         while (upper_bound - lower_bound) > threshold:
             # Interpolation point
@@ -602,7 +606,7 @@ class HopSkipJump(EvasionAttack):
                 keepdims=True,
             )
         )
-        eval_samples = np.clip(current_sample + delta * rnd_noise, clip_min, clip_max)
+        eval_samples = np.clip(current_sample + delta * rnd_noise, clip_min, clip_max, dtype=current_sample.dtype)
         rnd_noise = (eval_samples - current_sample) / delta
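
As a quick sanity check with the patch applied (a sketch reusing the attack object from the reproduction steps above):

x_test_adv = attack.generate(x=x_test[:5])
print(x_test_adv.dtype)  # expected float32 with the patch; without it, generate() raises the RuntimeError above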
@beat-buesser (Collaborator)

Hi @j-moeller, thank you very much for raising this issue. I think your analysis is correct. We'll try to fix it as soon as possible.
