
Detecting and Mitigating Data Poisoning Attacks in Machine Learning: A Weighted Average Approach

Authors

  • Yogi Reddy Maramreddy Department of CSE, GITAM Deemed to be University, Hyderabad, India
  • Kireet Muppavaram Department of CSE, GITAM Deemed to be University, Hyderabad, India
Volume: 14 | Issue: 4 | Pages: 15505-15509 | August 2024 | https://doi.org/10.48084/etasr.7591

Abstract

Adversarial attacks, in particular data poisoning, can alter the behavior of machine learning models by inserting deliberately crafted data into the training set. This study proposes the Weighted Average Analysis (VWA) algorithm, an approach for identifying data poisoning attacks on machine learning models. The algorithm evaluates the weighted averages of the input features to detect irregularities that could indicate poisoning attempts. It detects possible manipulation by summing the weighted averages and comparing the result with the predicted value, flagging deviations that suggest the training data have been tampered with. Furthermore, it differentiates between binary and multiclass classification instances and adjusts its analysis accordingly. The experimental results showed that the VWA algorithm can accurately detect and mitigate data poisoning attacks, improving the robustness and security of machine learning systems against adversarial threats.
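
The deviation check described above can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration of a weighted-average deviation test, not the paper's exact implementation: the per-feature weights, the scalar prediction per sample, and the fixed threshold are assumptions made for illustration, and the paper's binary/multiclass handling may differ.

    # Minimal sketch (assumptions: per-feature weights, one scalar prediction
    # per sample, and a fixed deviation threshold; the paper's exact
    # formulation, including its binary/multiclass handling, may differ).
    import numpy as np

    def flag_suspect_samples(X, y_pred, weights, threshold=0.5):
        """Flag samples whose combined weighted feature average deviates
        from the model's predicted value by more than `threshold`."""
        X = np.asarray(X, dtype=float)
        weights = np.asarray(weights, dtype=float)
        # Combined weighted average of the input features for each sample.
        weighted_avg = X @ weights / weights.sum()
        # Deviation between the weighted average and the predicted value.
        deviation = np.abs(weighted_avg - np.asarray(y_pred, dtype=float))
        return deviation > threshold

    # Example: three clean samples and one manipulated sample.
    X = [[0.20, 0.30], [0.25, 0.35], [0.22, 0.28], [5.00, 6.00]]
    y_pred = [0.26, 0.31, 0.25, 0.30]   # model outputs (binary scores here)
    weights = [0.6, 0.4]                # assumed per-feature weights
    print(flag_suspect_samples(X, y_pred, weights))  # [False False False  True]

In this toy run, only the last sample's weighted feature average departs from its predicted value by more than the threshold, so it is the one flagged as a potential poisoning attempt.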

Keywords:

advanced machine learning attacks, poisoning attacks, attacks in intelligent networks, attack defense methods, security threats


References

X. Zhang, Z. Wang, J. Zhao, and L. Wang, "Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation." arXiv, Mar. 2022.

Y. Zhao, X. Gong, F. Lin, and X. Chen, "Data Poisoning Attacks and Defenses in Dynamic Crowdsourcing With Online Data Quality Learning," IEEE Transactions on Mobile Computing, vol. 22, no. 5, pp. 2569–2581, May 2023.

J. Chen, X. Zhang, R. Zhang, C. Wang, and L. Liu, "De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 3412–3425, 2021.

M. Dibaei et al., "Attacks and defences on intelligent connected vehicles: a survey," Digital Communications and Networks, vol. 6, no. 4, pp. 399–421, Nov. 2020.

A. Qayyum, M. Usama, J. Qadir, and A. Al-Fuqaha, "Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and the Way Forward," IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 998–1026, 2020.

M. B. Ammar, R. Ghodhbani, and T. Saidani, "Enhancing Neural Network Resilience against Adversarial Attacks based on FGSM Technique," Engineering, Technology & Applied Science Research, vol. 14, no. 3, pp. 14634–14639, Jun. 2024.

A. Al-Marghilani, "Comprehensive Analysis of IoT Malware Evasion Techniques," Engineering, Technology & Applied Science Research, vol. 11, no. 4, pp. 7495–7500, Aug. 2021.

N. A. Alsharif, S. Mishra, and M. Alshehri, "IDS in IoT using Machine Learning and Blockchain," Engineering, Technology & Applied Science Research, vol. 13, no. 4, pp. 11197–11203, Aug. 2023.

K. Muppavaram, M. S. Rao, K. Rekanar, and R. S. Babu, "How Safe Is Your Mobile App? Mobile App Attacks and Defense," in Proceedings of the Second International Conference on Computational Intelligence and Informatics (ICCII 2017), Hyderabad, India, Sep. 2017, pp. 199–207.

S. Aparna, K. Muppavaram, C. C. V. Ramayanam, and K. S. S. Ramani, "Mask RCNN with RESNET50 for Dental Filling Detection," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 12, no. 10, 2021.

K. Muppavaram, S. Govathoti, D. Kamidi, and T. Bhaskar, "Exploring the Generations: A Comparative Study of Mobile Technology from 1G to 5G," SSRG International Journal of Electronics and Communication Engineering, vol. 10, no. 7, pp. 54–62, Jul. 2023.

M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning," in 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, May 2018, pp. 19–35.

R. Sundar et al., "Future directions of artificial intelligence integration: Managing strategies and opportunities," Journal of Intelligent & Fuzzy Systems, vol. 46, no. 3, pp. 7109–7122, Jan. 2024.

K. Muppavaram, A. Shivampeta, S. Govathoti, D. Kamidi, K. K. Mamidi, and M. Thaile, "Investigation of Omnidirectional Vision and Privacy Protection in Omnidirectional Cameras," International Journal of Electronics and Communication Engineering, vol. 10, no. 5, pp. 105–116, May 2023.

C. Liu, B. Li, Y. Vorobeychik, and A. Oprea, "Robust Linear Regression Against Training Data Poisoning," in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, Nov. 2017, pp. 91–102.

J. Steinhardt, P. W. Koh, and P. Liang, "Certified Defenses for Data Poisoning Attacks," in 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, Dec. 2017.

S. Hong, V. Chandrasekaran, Y. Kaya, T. Dumitraş, and N. Papernot, "On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping." arXiv, Feb. 27, 2020.

M. Subedar, N. Ahuja, R. Krishnan, I. J. Ndiour, and O. Tickoo, "Deep Probabilistic Models to Detect Data Poisoning Attacks." arXiv, Dec. 03, 2019.

M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, and N. K. Jha, "Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893–1905, Nov. 2015.

B. I. P. Rubinstein et al., "ANTIDOTE: understanding and defending against poisoning of anomaly detectors," in Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, Chicago, IL, USA, Nov. 2009, pp. 1–14.

N. Carlini and D. Wagner, "Towards Evaluating the Robustness of Neural Networks," in 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, May 2017, pp. 39–57.

R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models," in 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, May 2017, pp. 3–18.

F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing Machine Learning Models via Prediction APIs," in 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, USA, Aug. 2016, pp. 601–618.

E. Frank, M. A. Hall, and I. H. Witten, The Weka Workbench. Online Appendix for "Data Mining: Practical Machine Learning Tools and Techniques", 4th ed. Burlington, MA, USA: Morgan Kaufmann, 2016.

How to Cite

[1]
Y. R. Maramreddy and K. Muppavaram, “Detecting and Mitigating Data Poisoning Attacks in Machine Learning: A Weighted Average Approach”, Eng. Technol. Appl. Sci. Res., vol. 14, no. 4, pp. 15505–15509, Aug. 2024.
