
A Robust Neural Network against Adversarial Attacks

Authors

  • Mohammad Barr, Department of Electrical Engineering, College of Engineering, Northern Border University, Saudi Arabia
Volume: 15 | Issue: 2 | Pages: 20609-20615 | April 2025 | https://doi.org/10.48084/etasr.9920

Abstract

The security and dependability of neural network designs are increasingly jeopardized by adversarial attacks, which can cause false positives, degrade performance, and disrupt applications, particularly on resource-constrained Internet of Things (IoT) devices. This study adopts a two-step approach: it first designs a robust Convolutional Neural Network (CNN) that achieves high performance on the MNIST dataset, and then evaluates and strengthens its resilience against advanced adversarial techniques such as DeepFool and L-BFGS. Initial evaluations revealed that while the proposed CNN performs well on standard classification tasks, it is vulnerable to adversarial attacks. To mitigate this vulnerability, APE-GAN, an adversarial training technique, was employed to re-train the proposed CNN, significantly improving its robustness against adversarial attacks while keeping the model suitable for embedded systems with limited computational resources. Systematic experimentation demonstrates the effectiveness of APE-GAN in enhancing both the accuracy and resilience of the proposed CNN, outperforming conventional methods. By integrating APE-GAN into the training process, this research supports the secure and efficient operation of the proposed CNN in real-world IoT applications, marking a significant step forward in addressing the challenges posed by adversarial attacks.

Keywords:

neural networks, adversarial attack, robustness, L-BFGS
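For orientation, the sketch below illustrates, in PyTorch, the kind of DeepFool-style perturbation the study evaluates: the classifier is linearized around the current input and the image is nudged across the nearest approximate decision boundary until the predicted label flips. The SmallCNN architecture and the hyperparameters (max_iter, overshoot) are assumptions chosen purely for illustration; they do not reproduce the CNN, the L-BFGS attack, or the APE-GAN retraining described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Toy MNIST classifier used only to illustrate the attack; not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))


def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Minimal untargeted DeepFool for one image x of shape (1, 1, 28, 28).

    At each step the classifier is linearized around the current point and the
    input is pushed across the nearest (approximate) decision boundary.
    """
    model.eval()
    x_adv = x.clone().detach()
    orig_label = model(x_adv).argmax(dim=1).item()

    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_label:
            break  # label has flipped: attack succeeded
        grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]

        best_step, best_dist = None, float("inf")
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            w = grad_k - grad_orig                       # normal of the linearized boundary
            f_k = (logits[k] - logits[orig_label]).item()
            dist = abs(f_k) / (w.norm() + 1e-8)          # distance to boundary with class k
            if dist < best_dist:
                best_dist = dist
                best_step = (abs(f_k) / (w.norm() ** 2 + 1e-8)) * w
        x_adv = (x_adv + (1 + overshoot) * best_step).detach()

    return x_adv


# Usage sketch: x is one normalized MNIST image, model a trained SmallCNN.
# x_adv = deepfool(model, x)
```

In the pipeline summarized by the abstract, adversarial examples of this kind are used to expose the baseline CNN's vulnerability, after which APE-GAN-based retraining is applied to restore robust accuracy.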




How to Cite

Barr, M. 2025. A Robust Neural Network against Adversarial Attacks. Engineering, Technology & Applied Science Research. 15, 2 (Apr. 2025), 20609–20615. DOI: https://doi.org/10.48084/etasr.9920.
