A Lightweight Denoising Convolutional Neural Network for On-Device Artifact Suppression
Received: 9 October 2025 | Revised: 31 October 2025 and 19 November 2025 | Accepted: 29 November 2025 | Online: 10 December 2025
Corresponding author: Naeem Ahmed
Abstract
Image compression for mobile and streaming applications often introduces blocking, blurring, and ringing that degrade visual quality and harm downstream vision tasks. This work presents a lightweight on-device restoration model based on a Denoising Convolutional Neural Network (DnCNN) that is optimized for efficiency using structured pruning, 8-bit integer (INT8) quantization, and architectural slimming, followed by perceptual fine-tuning in MATLAB. The model was trained on the Berkeley Segmentation Dataset 400 (BSD400) and evaluated on Set5, Set14, and Berkeley Segmentation Dataset 68 (BSD68). We report standard full-reference metrics, namely Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM); a perceptual metric, Learned Perceptual Image Patch Similarity (LPIPS); and no-reference metrics, Natural Image Quality Evaluator (NIQE) and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). On average, the compressed model attains about 29.0 dB PSNR and 0.83 SSIM, while reducing model size by about 52% to 1.0 MB and cutting CPU inference time by about 70% compared with the uncompressed DnCNN baseline. These results show that the compressed and perceptually fine-tuned DnCNN suppresses compression artifacts effectively while meeting the memory and latency constraints of mobile and embedded platforms, providing a practical receiver-side solution that remains compatible with legacy codecs.
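The structured pruning mentioned in the abstract can be illustrated with a minimal sketch: whole convolution filters with the smallest L1 norms are assumed to contribute least and are removed, shrinking both parameter count and compute. The function name, the list-based filter representation, and the keep-ratio below are illustrative assumptions, not details taken from the paper.

```python
def prune_filters(filters, keep_ratio=0.5):
    """Structured pruning sketch: drop whole conv filters with the
    smallest L1 norms (a common magnitude-based criterion)."""
    # filters: list of 2-D kernels, each a list of rows of floats
    norms = [sum(abs(v) for row in f for v in row) for f in filters]
    k = max(1, int(len(filters) * keep_ratio))
    # Indices of the k filters with the largest L1 norm, in original order
    keep = sorted(sorted(range(len(filters)), key=lambda i: -norms[i])[:k])
    return [filters[i] for i in keep]
```

Because entire filters are removed (rather than individual weights being zeroed), the slimmed layer stays dense and needs no sparse-kernel support at inference time, which is what makes this form of pruning attractive on mobile CPUs.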
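The INT8 quantization step can likewise be sketched under the usual assumption of per-tensor affine quantization with a scale and zero-point; the paper's exact MATLAB quantization settings are not reproduced here, so the helper names and ranges below are hypothetical.

```python
def quantize_int8(weights):
    """Affine (asymmetric) INT8 quantization sketch: map the float
    range [w_min, w_max] onto the integer range [-128, 127]."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = -128 - round(w_min / scale)  # so w_min maps to -128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float weights from the INT8 codes."""
    return [(qi - zero_point) * scale for qi in q]
```

Each weight then occupies one byte instead of four, which is consistent with the roughly halved model size reported in the abstract; the reconstruction error per weight is bounded by the scale.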
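PSNR, one of the full-reference metrics reported, follows directly from the mean squared error; the 29.0 dB average corresponds to an MSE of roughly 82 on 8-bit pixels. A minimal sketch, using flat pixel lists for brevity:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

For intuition, a uniform error of 10 gray levels (MSE = 100) yields about 28.1 dB, in the same range as the averages quoted above.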
Keywords:
image compression, artifact suppression, PSNR, pruning, quantization, DnCNN, MATLAB
License
Copyright (c) 2025 Naeem Ahmed, R. Navya, Arun Ananthanarayanan

This work is licensed under a Creative Commons Attribution 4.0 International License.
