Residual Attention Augmentation Graph Neural Network for Improved Node Classification
Received: 31 December 2023 | Revised: 23 January 2024 | Accepted: 24 January 2024 | Online: 6 February 2024
Corresponding author: Mubashir Ali
Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning node representations in graph-structured data. However, designing a robust GNN architecture for node classification remains challenging. This study introduces an efficient and straightforward Residual Attention Augmentation GNN (RAA-GNN) model, which combines an attention mechanism with skip connections to selectively weight node features and mitigate the over-smoothing problem of deep GNNs. In addition, a novel MixUp data augmentation method was developed to improve model training. The proposed approach was evaluated on several node classification benchmarks spanning both social and citation networks, where it outperformed state-of-the-art techniques with accuracy improvements of up to 1%. When applied to the recent Twitch social network dataset, the model also yielded promising results. These findings offer practical guidance for researchers and practitioners working with graph-structured data.
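The abstract only names the two key ingredients, so the following is a rough, hypothetical sketch (in PyTorch) of how a GAT-style attention layer can be combined with a skip connection, the general idea behind RAA-GNN; the class and variable names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttentionLayer(nn.Module):
    # Illustrative sketch only: attention-weighted neighbour aggregation
    # plus a residual (skip) path, not the authors' exact RAA-GNN layer.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn_src = nn.Linear(out_dim, 1, bias=False)
        self.attn_dst = nn.Linear(out_dim, 1, bias=False)
        self.skip = nn.Linear(in_dim, out_dim, bias=False)  # residual path

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency
        h = self.proj(x)
        # Additive attention score for every (i, j) pair, masked to edges
        scores = F.leaky_relu(self.attn_src(h) + self.attn_dst(h).T, 0.2)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)  # per-node attention weights
        alpha = torch.nan_to_num(alpha)       # guard against isolated nodes
        out = alpha @ h                       # selective feature aggregation
        return F.elu(out + self.skip(x))      # skip path counters over-smoothing

The residual term preserves each node's own features across layers, which is the standard remedy for over-smoothing, while the attention weights implement the selective weighting of neighbour features described above. For example, ResidualAttentionLayer(8, 16)(torch.randn(6, 8), (torch.rand(6, 6) > 0.5).float()) returns a (6, 16) tensor of updated node embeddings. Likewise, the MixUp augmentation is only named in the abstract; a minimal node-level sketch, assuming the standard Beta-sampled convex combination of features and one-hot labels (the paper's variant may differ):

def node_mixup(x, y_onehot, alpha=0.2):
    # Standard MixUp recipe applied per node, with a randomly permuted
    # partner j for each node i:
    #   x~ = lam * x_i + (1 - lam) * x_j,  y~ = lam * y_i + (1 - lam) * y_j
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return (lam * x + (1 - lam) * x[perm],
            lam * y_onehot + (1 - lam) * y_onehot[perm])

The mixed pairs can then replace the clean training nodes under a soft-label cross-entropy loss.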
Keywords:
graph neural networks, node classification, over-smoothing, citation and social networks, mixup data augmentation
License
Copyright (c) 2024 Muhammad Affan Abbas, Waqar Ali, Florentin Smarandache, Sultan S. Alshamrani, Muhammad Ahsan Raza, Abdullah Alshehri, Mubashir Ali
This work is licensed under a Creative Commons Attribution 4.0 International License.