Carlini-Wagner Attacks
This section has a series of coding problems using PyTorch. To run the code locally, you can follow the installation instructions at the bottom of this page. As always, we highly recommend you read all the content on this page before starting the coding exercises.
Relevant Background
The Carlini-Wagner attack -- also known as CW -- was developed by Nicholas Carlini and David Wagner to improve upon established attack methods such as those you implemented in the previous section.
Because the CW attack method is much more sophisticated than anything you looked at in the previous section, we provide some background context before diving into the specifics of the attack.
Targeted vs Untargeted Attacks
In the previous section, you implemented FGSM (Goodfellow et al., 2015), Basic Iterative Method (Kurakin et al., 2017), and PGD (Madry et al., 2019) attacks. The code you wrote for each of these attacks would be considered an untargeted attack because you weren't trying to target any particular class for misclassification; you were just trying to get the model to predict the wrong answer. In a targeted attack, however, the attacker aims to get the model to predict a specific incorrect class.
Note that it is possible (and not too difficult) to write targeted versions of FGSM, BIM, and PGD. We don't cover these variations in this course, but understanding CW will give you some solid intuition for what those attacks would look like. As an exercise, you may choose to implement these targeted variants on your own.
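To make the targeted/untargeted distinction concrete, here is a minimal sketch of a targeted one-step FGSM in PyTorch. The function name, epsilon value, and toy model are our own illustrative choices, not from any paper:

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target, epsilon=0.03):
    """One-step targeted FGSM: step against the loss gradient for the
    target class, pushing the prediction toward `target`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # Untargeted FGSM *adds* epsilon * sign(grad) to increase the loss on
    # the true label; the targeted version *subtracts* it to decrease the
    # loss on the chosen target label.
    x_adv = x - epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage with a hypothetical linear "model"
model = torch.nn.Linear(4, 3)
x = torch.rand(2, 4)
adv = targeted_fgsm(model, x, torch.tensor([1, 2]))
```

The only change from the untargeted version is the sign of the update and computing the loss against the target class rather than the true class.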
Potential issues with PGD
It isn't actually clear when, if ever, CW attacks are a better choice in a research context than a smart implementation of PGD. While we won't take a dogmatic position on this topic, we will recommend that when doing research, PGD or one of its variants is a good place to start. Either way, we believe that having a deep understanding of CW attacks will give you insight into a number of important considerations that go into attack design.
With all that being said, here are some of the issues with PGD and similar methods that motivate the Carlini-Wagner attack.
- The epsilon clipping operation in PGD isn't differentiable, which can disrupt optimization. Modern optimizers such as Adam maintain state based on previous gradients (e.g., momentum), and by inserting a nondifferentiable step after every update, that state no longer reflects the iterates the optimizer actually produced.
- Likewise, there isn't an effective way to ensure that the image stays valid (all pixel components between zero and one), because clipping the tensor to the range zero to one is also not differentiable.
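You can see the clipping problem directly in PyTorch: `torch.clamp` passes gradients through for in-range components but sends exactly zero gradient to any component that was clipped, so the optimizer receives no signal about those pixels. The tensor values below are our own illustration:

```python
import torch

delta = torch.tensor([-0.5, 0.2, 1.5], requires_grad=True)
clipped = delta.clamp(0.0, 1.0)   # becomes [0.0, 0.2, 1.0]
clipped.sum().backward()
# Gradient is 1 for the in-range component, 0 for both clipped ones.
print(delta.grad)                 # tensor([0., 1., 0.])
```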
How Does the Carlini-Wagner Attack Work?
The Carlini-Wagner attack describes a family of targeted attacks for generating adversarial examples for image models. The authors propose $L_0$, $L_2$, and $L_\infty$ attacks. For this page and the coding exercises, we focus on the $L_2$ attack. The design of the $L_\infty$ attack is clever and quite similar, but we will leave it to the reader to explore this more by reading the original paper if interested. The $L_0$ attack is more complicated and less influential or important to understand, so only if you are especially interested should you explore it further.
The authors begin with the basic formalization of adversarial examples from (Szegedy et al., 2014):

$$\begin{aligned}\text{minimize } \quad & \mathcal{D}(x, x+\delta) \\ \text{such that } \quad & C(x+\delta) = t \\ & x+\delta \in [0,1]^n\end{aligned}$$

This represents a targeted attack, where the function $C$ returns the model's classification and $t$ is the target class. The function $\mathcal{D}$ represents a distance metric, while the constraint $x+\delta \in [0,1]^n$ keeps the adversarial image between zero and one.
This formalization is slightly different from your implementations in the previous section, but the main idea should be familiar.
Change #1: Making the Classification Constraint Differentiable
The first change that Carlini and Wagner make to this objective is to make the requirement $C(x+\delta) = t$ differentiable. By default, $C$ returns an integer that represents a class. This function is not even continuous and certainly not differentiable. The authors reason that if you have a function $f$ which is differentiable and positive only if $C(x+\delta) \neq t$, then you could make $f$ a term in the loss and then minimize it with SGD or Adam (more on this in the section below).
If $f(x+\delta) \le 0$ implies that the model predicts $x+\delta$ to belong to class $t$, then we can now change our original equation to the one below.

$$\begin{aligned}\text{minimize } \quad & \mathcal{D}(x, x+\delta) \\ \text{such that } \quad & f(x+\delta) \le 0 \\ & x+\delta \in [0,1]^n\end{aligned}$$
Change #2: Adding Misclassification to the Loss
Above, we mentioned that we can add $f$ as a component of the loss we want to minimize. Let's make that more concrete with a specific choice of $f$.
In the paper, Carlini and Wagner propose seven possible choices for $f$. Below is the fourth option they offer, where $F(x+\delta)_t$ is the softmax probability for the target class $t$ when the adversarial example is given to the model, and $(e)^+$ denotes $\max(e, 0)$.

$$f_4(x+\delta) = \left(\tfrac{1}{2} - F(x+\delta)_t\right)^+$$
If $f_4$ is zero, that means that the softmax probability for the target class is at least 50%, which means that the model must predict it (no other class can have a higher probability). If $f_4$ is positive, the model has not yet confidently predicted the adversarial image as the target class. Therefore, we can treat $f_4$ as a loss term we want to minimize. As a sidenote, it turns out that $f_4$ is actually quite ineffective compared to other choices for $f$. In the coding exercises, you will explore this further.
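Here is one way $f_4$ might be written in PyTorch. The helper name and the toy linear model are our own illustrative choices:

```python
import torch
import torch.nn.functional as F

def f4(model, x_adv, target):
    """f_4 = max(0, 1/2 - F(x')_t): positive until the softmax probability
    of the target class reaches 50%, zero afterwards."""
    probs = F.softmax(model(x_adv), dim=1)
    # Pick out each example's probability for its target class.
    target_prob = probs.gather(1, target.unsqueeze(1)).squeeze(1)
    return torch.clamp(0.5 - target_prob, min=0.0)

# Toy usage: per-example loss terms for a batch of two inputs
model = torch.nn.Linear(4, 3)
x_adv = torch.rand(2, 4)
loss_terms = f4(model, x_adv, torch.tensor([0, 2]))
```

Because the whole expression is built from softmax, `gather`, and `clamp`, it is differentiable almost everywhere and can be dropped straight into a loss.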
Using $f$ as a loss term, we can change our previous equation to the one below, where the constant $c$ weights how much $f$ contributes to the loss.

$$\begin{aligned}\text{minimize } \quad & \mathcal{D}(x, x+\delta) + c \cdot f(x+\delta) \\ \text{such that } \quad & x+\delta \in [0,1]^n\end{aligned}$$
Note that $c$ will be positive. A lower $c$ encourages $\mathcal{D}(x, x+\delta)$ to be lower, making the adversarial image more similar to the original. A higher $c$ increases the probability that the attack is successful. In the example below, you can see some results we got from optimizing the above equation for different $c$ values, using one of the $f$ equations from the original paper and the $L_2$ distance metric. As $c$ gets larger, the probability of attack success rises, but the image becomes increasingly suspicious.
Change #3: Change of Variables
The next issue to deal with is the box constraint: how do we keep the images between 0 and 1? The authors deal with this by introducing a change of variables. This step is a bit confusing, so let's start with some mathematical intuition. Let a given pixel component in the adversarial image be $x_i + \delta_i$, where $x_i$ is the original value and $\delta_i$ is the adversarial perturbation we will add to that pixel component. Now, let $x_i + \delta_i$ be the following function of a new variable $w_i$, which can be any positive or negative number ($w_i \in \mathbb{R}$):

$$x_i + \delta_i = \tfrac{1}{2}\left(\tanh(w_i) + 1\right)$$
Why might we want to think about $x_i + \delta_i$ this way? Well, if we graph $\tfrac{1}{2}(\tanh(w_i) + 1)$, we can see that the expression is always between 0 and 1:
Now, instead of optimizing $\delta$ in our equations above, we can optimize $w$ and guarantee that we will be left with a valid image.
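In code, the change of variables is just a `tanh`. A convenient way to initialize $w$ (common in public implementations, though not required) is to invert the mapping so the attack starts at the original image:

```python
import torch

x = torch.rand(3)  # original pixel components in [0, 1]

# Invert x' = (tanh(w) + 1) / 2 to get w = atanh(2x - 1); the small clamp
# avoids atanh(+/-1) = +/-inf at the endpoints.
w = torch.atanh(2 * x.clamp(1e-6, 1 - 1e-6) - 1).requires_grad_(True)

# No matter what value the optimizer pushes w to, this stays in (0, 1).
x_adv = (torch.tanh(w) + 1) / 2
```

The optimizer is now free to move `w` anywhere in $\mathbb{R}$ without ever producing an invalid pixel, so no clipping step is needed.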
Putting it all together:
Instead of writing:

$$\begin{aligned}\text{minimize } \quad & \mathcal{D}(x, x+\delta) + c \cdot f(x+\delta) \\ \text{such that } \quad & x+\delta \in [0,1]^n\end{aligned}$$

We can say:

$$\text{minimize } \quad \mathcal{D}\left(x, \tfrac{1}{2}(\tanh(w)+1)\right) + c \cdot f\left(\tfrac{1}{2}(\tanh(w)+1)\right)$$

The authors use an $L_2$ norm, so instead of saying $\mathcal{D}(x, x+\delta)$, we can say $\|\delta\|_2$, where $\delta = \tfrac{1}{2}(\tanh(w)+1) - x$. So for our final equation, we have:

$$\text{minimize } \quad \left\|\tfrac{1}{2}(\tanh(w)+1) - x\right\|_2 + c \cdot f\left(\tfrac{1}{2}(\tanh(w)+1)\right)$$
This way we are able to:
- Maximize the probability that $x + \delta$ is misclassified as the target class.
- Minimize the $L_2$ norm of $\delta$, making our adversarial example less suspicious.
- Guarantee that $x + \delta$ is between 0 and 1 without any clipping.
Disclaimer: The specific attack for the $L_2$ situation that Carlini and Wagner use in the paper has the distance metric squared ($\|\delta\|_2^2$) instead of the vanilla $L_2$ norm shown above. In the $L_\infty$ case, the distance term looks a bit different also but is conceptually similar. One meta-level takeaway here is that good researchers think critically about specific tweaks they can make to their attack to make it more effective for whichever case they are optimizing for.
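Putting all three changes together, here is a minimal sketch of what the $L_2$ attack loop could look like in PyTorch. The step count, learning rate, and use of $f_4$ are illustrative choices rather than the paper's tuned values, we use the squared distance as the paper does, and we omit the binary search over $c$ that the paper performs:

```python
import torch
import torch.nn.functional as F

def cw_l2_attack(model, x, target, c=1.0, steps=100, lr=0.01):
    """Sketch of a Carlini-Wagner L2 attack using the f_4 objective.
    x: batch of images in [0, 1]; target: desired (incorrect) labels."""
    # Change of variables: optimize w, with x_adv = (tanh(w) + 1) / 2.
    w = torch.atanh(2 * x.clamp(1e-6, 1 - 1e-6) - 1).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        # Squared L2 distance to the original image, per example.
        dist = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        # f_4: positive until the target class reaches 50% softmax probability.
        probs = F.softmax(model(x_adv), dim=1)
        target_prob = probs.gather(1, target.unsqueeze(1)).squeeze(1)
        f = torch.clamp(0.5 - target_prob, min=0.0)
        # Every term is differentiable in w, so Adam's state stays consistent.
        loss = (dist + c * f).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return ((torch.tanh(w) + 1) / 2).detach()

# Toy usage with a hypothetical linear model
model = torch.nn.Linear(4, 3)
x = torch.rand(2, 4)
adv = cw_l2_attack(model, x, torch.tensor([1, 0]), steps=10)
```

Notice that, unlike PGD, there is no projection or clipping anywhere in the loop: validity of the image comes entirely from the `tanh` reparameterization.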
Final comments
If any of the math above is confusing to you, there is nothing to worry about. When you complete the coding exercises, everything should become more concrete. After you are finished with the coding exercises, we recommend you read back through this document to test your knowledge and make sure that you understand everything.