Robustness to adversarial perturbations is crucial for machine learning. In practice, an attacker might select only specific instances to attack, e.g.,
in facial recognition systems, an adversary might aim to target specific faces. In this project, we plan to tackle this problem, considering both different perturbation methods and different strategies for selecting which instances to attack. Initial results in image classification show significant promise: our approach predicts the labels of perturbed examples more accurately than existing methods.