RIOD: Reinforced Image-based Object Detection for Unruly Weather Conditions
Received: 30 November 2023 | Revised: 26 December 2023 | Accepted: 29 December 2023 | Online: 8 February 2024
Corresponding author: P. P. Pavitha
Abstract
Deep Neural Network (DNN) object detectors have proven efficient at detecting and classifying objects in clear weather. However, these models degrade significantly under adverse weather conditions (fog, rain, haze, night, etc.). This study presents a new scheme that mitigates this issue by attenuating the noise in the input image before it is fed to any neural-network-based object detector. The proposed image optimization function transforms images degraded by bad weather into images of the best attainable quality by estimating the proper illumination and transmission functions. The optimized images yielded improved object detection rates with the YOLOv4 and YOLOv5 models, an improvement also observed for video input. The scheme was tested on images and videos captured under various weather conditions, and the results showed an encouraging improvement in detection rates.
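The abstract does not give the exact optimization function, but pre-processing of this kind is commonly built on the atmospheric scattering model I = J·t + A·(1−t), where the transmission t and atmospheric light A are estimated from the hazy image and then inverted to recover the scene radiance J. The sketch below is only an illustration of that idea using a dark-channel-style transmission estimate; the function names (`dark_channel`, `dehaze`) and parameters (`omega`, `t0`, `patch`) are assumptions, not the authors' method.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Invert the scattering model I = J*t + A*(1 - t) for an RGB image in [0, 1]."""
    dc = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the A-normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]  # floor t to avoid amplifying noise
    # Recover scene radiance J = (I - A) / t + A.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

Restoring contrast this way before inference requires no change to the detector itself, which matches the abstract's claim that the scheme works in front of any neural-network-based detector.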
Keywords:
self-driving vehicle, YOLOv4, YOLOv5, image pre-processing, deep learning, object detection
License
Copyright (c) 2024 P. P. Pavitha, K. Bhanu Rekha, S. Safinaz
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.