Research and Development of a Traffic Sign Recognition Module in Vietnam
Received: 22 November 2023 | Revised: 4 December 2023 | Accepted: 6 December 2023 | Online: 8 February 2024
Corresponding author: Nguyen Luong Thien
Abstract
Automatic traffic sign recognition is essential to the research and development of driver assistance systems and autonomous vehicles. This paper presents the research and development of an automated traffic sign recognition module for Vietnam. The recognition model is based on the YOLOv5 deep learning model and incorporates architectural modifications that reduce computational complexity, increase inference speed, and meet the real-time requirements of embedded system applications. The model is trained on a custom dataset collected by the research team from real-world street environments in Vietnam, covering diverse locations, times of day, and weather conditions. The trained model is deployed on a Jetson embedded system, delivering high recognition quality while satisfying real-time constraints.
Keywords:
traffic sign recognition, YOLOv5, embedded system, image processing, deep learning
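For illustration only (not part of the published paper): a minimal Python sketch of how a YOLOv5-based sign recognizer of this kind might be run on live camera frames on a Jetson-class device, using the Ultralytics torch.hub interface. The weight file name `vn_signs.pt`, the confidence threshold, and the camera index are hypothetical assumptions, not values from the paper.

```python
# Hypothetical sketch: run a custom-trained YOLOv5 model on camera frames.
# "vn_signs.pt" is a placeholder for trained weights; tune conf/size as needed.
import cv2
import torch

# Load YOLOv5 with custom weights via the Ultralytics torch.hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom", path="vn_signs.pt")
model.conf = 0.4  # assumed confidence threshold

cap = cv2.VideoCapture(0)  # on-board or USB camera (index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV delivers BGR.
    results = model(frame[:, :, ::-1], size=640)
    # results.xyxy[0]: rows of [x1, y1, x2, y2, confidence, class]
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, box)
        label = f"{model.names[int(cls)]} {conf:.2f}"
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("traffic signs", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

On an embedded target such as a Jetson, the same model would typically be exported (e.g., to TensorRT) for faster inference, but the loop structure above is unchanged.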
License
Copyright (c) 2023 Pham Xuan Tung, Nguyen Luong Thien, Pham Van Bach Ngoc, Minh Hung Vu
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.