A Secure and Reliable Framework for Explainable Artificial Intelligence (XAI) in Smart City Applications
Received: 29 April 2024 | Revised: 15 May 2024 | Accepted: 17 May 2024 | Online: 2 August 2024
Corresponding author: Mohammad Algarni
Abstract
Living in a smart city has many advantages, such as improved waste and water management, access to quality healthcare facilities, effective and safe transportation systems, and personal protection. Explainable AI (XAI) refers to a system that can provide explanations for its judgments or predictions; the term covers a description of the model, its expected impact, and any potential biases that may be present. XAI tools and frameworks help users understand and trust the outputs produced by machine learning algorithms. This study applied XAI methods to classify cities based on smart city metrics. Logistic regression combined with LIME achieved perfect accuracy, precision, recall, and F1-score, correctly predicting all cases.
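As a rough illustration of the pipeline described in the abstract, the following Python sketch trains a logistic regression classifier and explains one of its predictions with LIME. This is not the authors' code: the data are synthetic stand-ins for the smart city metrics, and the feature names (transport, healthcare, etc.) and class labels are hypothetical placeholders.

# Minimal sketch, assuming scikit-learn and the lime package are installed.
# Synthetic data replaces the smart city metrics used in the study.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from lime.lime_tabular import LimeTabularExplainer

# Placeholder dataset: 300 "cities", 6 numeric indicators (hypothetical names).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=42)
feature_names = ["transport", "healthcare", "waste_mgmt",
                 "water_mgmt", "safety", "digital_infra"]
class_names = ["non_smart", "smart"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Train the classifier and report accuracy, precision, recall, and F1-score.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Explain a single prediction with LIME (local, model-agnostic explanation).
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=class_names, mode="classification")
explanation = explainer.explain_instance(X_test[0], model.predict_proba,
                                         num_features=4)
print(explanation.as_list())  # per-feature contributions for this one city

The printed list shows which indicators pushed the model toward the "smart" or "non_smart" label for that single city, which is the kind of local explanation LIME provides alongside the aggregate classification metrics.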
Keywords:
machine learning, explainable artificial intelligence (XAI), smart city, artificial intelligence
License
Copyright (c) 2024 Mohammad Algarni, Shailendra Mishra
This work is licensed under a Creative Commons Attribution 4.0 International License.