A Road Accident Detection Method Utilizing Deep Learning and Fast Fourier Transform
Received: 23 January 2025 | Revised: 17 February 2025 | Accepted: 24 February 2025 | Online: 8 December 2025
Corresponding author: Mikhail Gorodnichev
Abstract
This paper presents a real-time crash detection method aimed at speeding the dispatch of emergency assistance to accident locations. Existing classical and neural network-based crash detection approaches were reviewed, focusing on architectures such as EfficientNet-B1, EfficientNet-B7, MobileNetV2, and ConvNeXtV2. A dedicated dataset of 12,426 crash-related image frames was compiled specifically for this study, combining previously published datasets with self-collected images. The performance of accident detection models was evaluated on this dataset, leading to the development of a new crash detection method. ConvNeXtV2-Femto was selected as the core architecture of the proposed system and was modified with Fast Fourier Convolution (FFC) to improve its performance. Comparative analysis demonstrated that the proposed model achieved 94% accuracy, outperforming existing approaches on all metrics, including precision, recall, and F1-score.
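The Fast Fourier Convolution used to modify ConvNeXtV2-Femto builds on the convolution theorem: a circular convolution in the spatial domain equals point-wise multiplication in the frequency domain, which gives a spectral branch a global receptive field. The following minimal, self-contained sketch verifies that identity with a naive O(n²) DFT for clarity; it is an illustration of the underlying mathematics only, not the FFC module from the paper.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT (scaled by 1/n)."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_conv_direct(x, h):
    """Circular convolution computed directly in the spatial domain."""
    n = len(x)
    return [sum(x[m] * h[(k - m) % n] for m in range(n)) for k in range(n)]

def circular_conv_fft(x, h):
    """Same convolution via the Fourier domain: transform, multiply, invert."""
    X, H = dft(x), dft(h)
    return [c.real for c in idft([a * b for a, b in zip(X, H)])]

signal = [1.0, 2.0, 3.0, 4.0]
kernel = [0.5, 0.25, 0.0, 0.0]
direct = circular_conv_direct(signal, kernel)
spectral = circular_conv_fft(signal, kernel)
print([round(d, 6) for d in direct])    # spatial-domain result
print([round(s, 6) for s in spectral])  # identical up to rounding
```

In practice an FFT replaces the naive DFT, reducing the cost from O(n²) to O(n log n), which is what makes frequency-domain branches attractive inside convolutional networks.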
Keywords:
accident detection, convolutional neural networks, image processing, fast Fourier transform, transport monitoring, classification
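The abstract reports accuracy alongside precision, recall, and F1-score. As a reminder of how these metrics follow from binary confusion-matrix counts, here is a short sketch; the counts below are illustrative only, not the paper's results.

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts chosen for illustration -- not the paper's confusion matrix.
p, r, f1, acc = classification_metrics(tp=470, fp=30, fn=30, tn=470)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f} accuracy={acc:.2f}")
```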
References
The Ministry of Internal Affairs of the Russian Federation, "Information on road safety indicators," stat.gibdd.ru. [Online]. Available: http://stat.gibdd.ru.
K. Fathallah, S. Khamlich, E. Mohammed, and B. Mohamed, "Intelligent System for the Automatic Detection and Control of Accidents on the Road in Real Time," Journal of Theoretical and Applied Information Technology, vol. 99, no. 11, pp. 2578–2594, Jun. 2021.
Z. Rahman, A. M. Ami, and M. A. Ullah, "A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid Tracking," in 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 2020, pp. 916–920. DOI: https://doi.org/10.1109/TENSYMP50017.2020.9230463
H. Ghahremannezhad, H. Shi, and C. Liu, "Real-Time Accident Detection in Traffic Surveillance Using Deep Learning," in 2022 IEEE International Conference on Imaging Systems and Techniques (IST), Kaohsiung, Taiwan, Jun. 2022, pp. 1–6. DOI: https://doi.org/10.1109/IST55454.2022.9827736
U. Jartarghar, D. Sanghvi, M. Nidgundi, K. Kumar, and S. Varur, "Vision Transformer-Based Multi-Phase Accident Detection and False Positive Mitigation." In Review, Jan. 2024. DOI: https://doi.org/10.21203/rs.3.rs-3903862/v1
M. M. Karim, Y. Li, R. Qin, and Z. Yin, "A system of vision sensor based deep neural networks for complex driving scene analysis in support of crash risk assessment and prevention." arXiv, Jun. 2021.
T. Tamagusko, M. G. Correia, M. A. Huynh, and A. Ferreira, "Deep Learning applied to Road Accident Detection with Transfer Learning and Synthetic Images," Transportation Research Procedia, vol. 64, pp. 90–97, 2022. DOI: https://doi.org/10.1016/j.trpro.2022.09.012
B. Kumeda, Z. Fengli, A. Oluwasanmi, F. Owusu, M. Assefa, and T. Amenu, "Vehicle Accident and Traffic Classification Using Deep Convolutional Neural Networks," in 2019 16th International Computer Conference on Wavelet Active Media Technology and Information Processing, Chengdu, China, Dec. 2019, pp. 323–328. DOI: https://doi.org/10.1109/ICCWAMTIP47768.2019.9067530
M. Tahir, Y. Qiao, N. Kanwal, B. Lee, and M. N. Asghar, "Real-Time Event-Driven Road Traffic Monitoring System Using CCTV Video Analytics," IEEE Access, vol. 11, pp. 139097–139111, 2023. DOI: https://doi.org/10.1109/ACCESS.2023.3340144
M. Machoke, J. Mbelwa, J. Agbinya, and A. E. Sam, "Performance Comparison of Ensemble Learning and Supervised Algorithms in Classifying Multi-label Network Traffic Flow," Engineering, Technology & Applied Science Research, vol. 12, no. 3, pp. 8667–8674, Jun. 2022. DOI: https://doi.org/10.48084/etasr.4852
M. U. Farooq, A. Ahmed, S. M. Khan, and M. B. Nawaz, "Estimation of Traffic Occupancy using Image Segmentation," Engineering, Technology & Applied Science Research, vol. 11, no. 4, pp. 7291–7295, Aug. 2021. DOI: https://doi.org/10.48084/etasr.4218
S. Bakheet and A. Al-Hamadi, "A deep neural framework for real-time vehicular accident detection based on motion temporal templates," Heliyon, vol. 8, no. 11, Nov. 2022, Art. no. e11397. DOI: https://doi.org/10.1016/j.heliyon.2022.e11397
Y. Yao, M. Xu, Y. Wang, D. J. Crandall, and E. M. Atkins, "Unsupervised Traffic Accident Detection in First-Person Videos." arXiv, Jul. 2019. DOI: https://doi.org/10.1109/IROS40897.2019.8967556
S. Robles-Serrano, G. Sanchez-Torres, and J. Branch-Bedoya, "Automatic Detection of Traffic Accidents from Video Using Deep Learning Techniques," Computers, vol. 10, no. 11, Nov. 2021, Art. no. 148. DOI: https://doi.org/10.3390/computers10110148
M. Olafenwa, "Traffic-Net Dataset V1," GitHub, 2024. [Online]. Available: https://github.com/OlafenwaMoses/Traffic-Net/releases/tag/1.0.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition." arXiv, Dec. 2015. DOI: https://doi.org/10.1109/CVPR.2016.90
M. M. Karim, Y. Li, R. Qin, and Z. Yin, "Image Dataset for driving scene classification," GitHub, 2021. [Online]. Available: https://github.com/monjurulkarim/Crash_road_function_dataset.
M. Gorodnichev, K. Kharrasov, and M. Moseva, "road_accident_data," GitHub, 2025. [Online]. Available: https://github.com/KKharrasov/road_accident_data.
M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks." arXiv, Mar. 2019. DOI: https://doi.org/10.1109/CVPR.2018.00474
M. Tan and Q. V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks." arXiv, Sep. 2020.
S. Woo et al., "ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders." arXiv, Jan. 2023. DOI: https://doi.org/10.1109/CVPR52729.2023.01548
L. Chi, B. Jiang, and Y. Mu, "Fast Fourier Convolution," in 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada, Dec. 2020, pp. 4479–4488.
License
Copyright (c) 2025 Mikhail Gorodnichev, Kamil Kharrasov, Marina Moseva

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.
