Enhancing Neural Network Resilience against Adversarial Attacks based on FGSM Technique

Authors

  • Mohamed Ben Ammar, Department of Information Systems, Faculty of Computing and Information Technology, Northern Border University, Saudi Arabia
  • Refka Ghodhbani, Department of Computer Science, Faculty of Computing and Information Technology, Northern Border University, Saudi Arabia
  • Taoufik Saidani, Department of Computer Science, Faculty of Computing and Information Technology, Northern Border University, Saudi Arabia
Volume: 14 | Issue: 3 | Pages: 14634-14639 | June 2024 | https://doi.org/10.48084/etasr.7479

Abstract

Adversarial attacks put the robustness and reliability of neural network architectures to the test, yielding inaccurate results and degrading the efficiency of applications running on Internet of Things (IoT) devices. This study investigates the severe repercussions that can emerge from attacks on neural network topologies and their implications for embedded systems. In particular, it examines the degree to which a neural network trained on the MNIST dataset is susceptible to adversarial attack strategies such as the Fast Gradient Sign Method (FGSM). Experiments were conducted to evaluate how effectively various attack strategies compromise the accuracy and dependability of the network. The study also examines ways to improve the resilience of a neural network through adversarial training, with particular emphasis on the APE-GAN approach. Identifying the vulnerabilities of neural networks and developing efficient protection mechanisms can improve the security of embedded applications, especially those running on resource-constrained IoT chips.
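
At its core, FGSM is a single gradient step: given an input x with label y and model loss J(θ, x, y), the adversarial example is x_adv = x + ε · sign(∇_x J(θ, x, y)). The sketch below illustrates this in PyTorch; it is not the authors' code, and the model, optimizer, data loader, and ε = 0.25 are illustrative assumptions (pixels are assumed to lie in [0, 1], as in unnormalized MNIST). The training loop shows generic FGSM-based adversarial training for contrast; APE-GAN, the defense emphasized here, instead trains a GAN whose generator removes the perturbation from inputs before they reach the classifier.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        # FGSM: x_adv = x + epsilon * sign(grad_x J(theta, x, y))
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        model.zero_grad()
        loss.backward()
        # Step each pixel in the direction that increases the loss.
        x_adv = images + epsilon * images.grad.sign()
        # Keep pixels in the valid [0, 1] range for MNIST images.
        return x_adv.clamp(0.0, 1.0).detach()

    # Hypothetical adversarial-training step: fit a 50/50 mix of clean
    # and FGSM-perturbed batches (plain adversarial training, not the
    # APE-GAN pipeline; model, loader, and optimizer are assumed given).
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon=0.25)
        loss = (0.5 * F.cross_entropy(model(x), y)
                + 0.5 * F.cross_entropy(model(x_adv), y))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()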

Keywords:

neural networks, adversarial attack, robustness

How to Cite

[1]
M. Ben Ammar, R. Ghodhbani, and T. Saidani, “Enhancing Neural Network Resilience against Adversarial Attacks based on FGSM Technique”, Eng. Technol. Appl. Sci. Res., vol. 14, no. 3, pp. 14634–14639, Jun. 2024.
