Rakesh Podder

Researcher in Cybersecurity, Firmware Security, AI, and Machine Learning. [Google Scholar]

Adversarial Attacks on CNN Models

We conduct a systematic evaluation of white-box adversarial attacks, in which the attacker has complete visibility into the model's architecture, parameters, and training data, against generic neural network models for image data. We use curated image datasets (MNIST, CIFAR-10, CIFAR-100, and Fashion-MNIST) processed by a Convolutional Neural Network (CNN); these datasets provide the variety and complexity required to challenge the CNNs under test. We identify the intrinsic vulnerabilities of CNNs when exposed to white-box attacks such as the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Jacobian-based Saliency Map Attack (JSMA), Carlini & Wagner (C&W), Projected Gradient Descent (PGD), and DeepFool.
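
To make the attack setting concrete, below is a minimal sketch of the two gradient-based attacks above, FGSM and BIM, in PyTorch. The `model` classifier, the `eps`/`alpha` values, and the assumption that inputs are normalized to [0, 1] are illustrative, not the exact configuration used in the study.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: move each pixel by eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def bim_attack(model, x, y, eps, alpha, steps):
    """BIM: repeated FGSM steps of size alpha, projected back into the
    L-infinity ball of radius eps around the clean input after each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

PGD differs from BIM mainly in starting from a random point inside the eps-ball rather than from the clean image; the same loop structure accommodates it.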

Autonomous vehicle navigation and healthcare diagnostics are among the many fields where the reliability and security of machine learning models for image data are critical. We conduct a comprehensive investigation into the susceptibility of Convolutional Neural Networks (CNNs), which are widely used for image data, to white-box adversarial attacks. We examine the effects of several sophisticated attacks (Fast Gradient Sign Method, Basic Iterative Method, Jacobian-based Saliency Map Attack, Carlini & Wagner, Projected Gradient Descent, and DeepFool) on CNN performance metrics such as loss and accuracy; the differential efficacy of these techniques in increasing error rates; the relationship between perceived image quality metrics (e.g., ERGAS, PSNR, SSIM, and SAM) and classification performance; and the comparative effectiveness of iterative versus single-step attacks. Using the MNIST, CIFAR-10, CIFAR-100, and Fashion-MNIST datasets, we explore how each attack affects the CNNs' performance metrics while varying the CNNs' hyperparameters. Our study provides insight into the robustness of CNNs against adversarial threats, pinpoints vulnerabilities, and underscores the urgent need for robust defense mechanisms that protect CNNs and ensure their trustworthy deployment in real-world scenarios.
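
As an illustration of how perceptual degradation can be scored alongside accuracy, the sketch below computes two of the image quality metrics named above, PSNR and SAM, in plain NumPy. The channel-last array layout and the [0, 1] data range are assumptions; SSIM and ERGAS are typically taken from libraries such as scikit-image or sewar rather than hand-rolled.

```python
import numpy as np

def psnr(clean, adv, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a clean and an adversarial image."""
    mse = np.mean((clean - adv) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def sam(clean, adv, eps=1e-8):
    """Spectral Angle Mapper: mean angle (radians) between the per-pixel
    channel vectors of the clean and adversarial images.
    Assumes channel-last arrays, e.g. shape (H, W, C)."""
    c = clean.reshape(-1, clean.shape[-1]).astype(np.float64)
    a = adv.reshape(-1, adv.shape[-1]).astype(np.float64)
    cos = np.sum(c * a, axis=1) / (
        np.linalg.norm(c, axis=1) * np.linalg.norm(a, axis=1) + eps
    )
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Tracking these scores against classification accuracy as the attack budget grows makes the trade-off between perceptual stealth and misclassification rate directly visible.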