
Panda or Paintbrush?

An Exploration of Adversarial Attacks on Image Classifiers

Deep neural network-based image classifiers have a significant impact on many sectors of society, with applications such as autonomous driving and tumor detection. However, researchers have found that these models are vulnerable to carefully crafted perturbations that cause misclassification. Such adversarial examples allow attackers to compromise real-world neural network image classifiers and therefore pose a security risk. This paper examines four adversarial attacks (Fast Gradient Sign Method, Projected Gradient Descent, LocSearchAdv, and Surrogate Attack), categorized by the attacker’s level of knowledge of the target model, and compares their effectiveness on a common test set by running experiments on ImageNet data with MobileViT as the target model. No attack was found to outperform the others in all respects; rather, certain algorithms perform better in certain situations. The paper also discusses possible defense strategies and ethical concerns related to adversarial attacks.
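To make the idea of a gradient-based adversarial perturbation concrete, the following is a minimal PyTorch sketch of the Fast Gradient Sign Method mentioned above. The names `model`, `x`, `y`, and `epsilon` are illustrative assumptions, not taken from the paper's implementation; the only detail fixed by the method itself is the single signed-gradient step within an L-infinity budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step Fast Gradient Sign Method (illustrative sketch).

    model   -- a differentiable classifier returning logits (assumed)
    x       -- batch of input images in [0, 1] (assumed)
    y       -- ground-truth labels for x (assumed)
    epsilon -- L-infinity perturbation budget (assumed)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterating this step with a projection back onto the epsilon-ball around the original image yields the Projected Gradient Descent attack also studied in the paper.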