An Enhanced Skin Cancer Detection Method Utilizing TriBlendNet and Deep Dilated-Focus U-Net
Received: 22 August 2025 | Revised: 5 October 2025 and 21 October 2025 | Accepted: 3 November 2025 | Online: 28 January 2026
Corresponding author: D. Manju
Abstract
Skin cancer remains a major global health concern, demanding advanced methods for accurate and early identification. This work presents an integrated framework that employs a Generative Adversarial Network (GAN) for effective data augmentation and a novel Deep Dilated-Focus U-Net enhanced with attention mechanisms for precise lesion segmentation. For classification, a hybrid model named TriBlendNet is proposed, combining the advantages of the SqueezeNet and DenseNet121 architectures. Using the SIIM-ISIC 2019 dataset, the proposed system outperforms existing models such as SqueezeNet, DenseNet121, ResNet50, and VGG16. The TriBlendNet model achieved an outstanding accuracy of 98.59%, along with high precision, recall, and specificity, showcasing its strong capability for reliable and efficient automated skin cancer detection.
Keywords:
skin cancer, Deep Dilated-Focus U-Net, Neural Architecture Search (NAS), TriBlendNet
References
N. Zhang, Y. X. Cai, Y. Y. Wang, Y. T. Tian, X. L. Wang, and B. Badami, "Skin cancer diagnosis based on optimized convolutional neural network," Artificial Intelligence in Medicine, vol. 102, Jan. 2020, Art. no. 101756. DOI: https://doi.org/10.1016/j.artmed.2019.101756
S. K. Singh, V. Abolghasemi, and M. H. Anisi, "Fuzzy Logic with Deep Learning for Detection of Skin Cancer," Applied Sciences, vol. 13, no. 15, Aug. 2023, Art. no. 8927. DOI: https://doi.org/10.3390/app13158927
S. Sikandar, R. Mahum, A. E. Ragab, S. Y. Yayilgan, and S. Shaikh, "SCDet: A Robust Approach for the Detection of Skin Lesions," Diagnostics, vol. 13, no. 11, May 2023, Art. no. 1824. DOI: https://doi.org/10.3390/diagnostics13111824
M. Tahir, A. Naeem, H. Malik, J. Tanveer, R. A. Naqvi, and S. W. Lee, "DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images," Cancers, vol. 15, no. 7, Apr. 2023, Art. no. 2179. DOI: https://doi.org/10.3390/cancers15072179
J. Qadir, "Enhancing Skin Disease Diagnosis: A Hybrid Approach Combining Vision Transformer and Feature Selection Techniques," Zanin Journal of Science and Engineering, vol. 1, no. 1, pp. 54–71, Mar. 2025. DOI: https://doi.org/10.64362/zjse.37
S. B. Mukadam and H. Y. Patil, "Skin Cancer Classification Framework Using Enhanced Super Resolution Generative Adversarial Network and Custom Convolutional Neural Network," Applied Sciences, vol. 13, no. 2, Jan. 2023, Art. no. 1210. DOI: https://doi.org/10.3390/app13021210
A. Hekler et al., "Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study," Journal of the American Academy of Dermatology, vol. 90, no. 5, pp. 1028–1031, May 2024. DOI: https://doi.org/10.1016/j.jaad.2023.11.065
A. E. Mrabet, M. Benaly, I. Alihamidi, B. Kouach, L. Hlou, and R. E. Gouri, "Enhancing Early Detection of Skin Cancer in Clinical Practice with Hybrid Deep Learning Models," Engineering, Technology & Applied Science Research, vol. 15, no. 2, pp. 20927–20933, Apr. 2025. DOI: https://doi.org/10.48084/etasr.9753
International Skin Imaging Collaboration, "SIIM-ISIC 2020 Challenge Dataset." International Skin Imaging Collaboration, 2020.
"Skin Cancer MNIST: HAM10000." Kaggle, [Online]. Available: https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000.
A. T. Priyeshkumar, G. Shyamala, T. Vasanth, and P. V. Selvan, "Transforming Skin Cancer Diagnosis: A Deep Learning Approach with the Ham10000 Dataset," Cancer Investigation, vol. 42, no. 10, pp. 801–814, Nov. 2024. DOI: https://doi.org/10.1080/07357907.2024.2422602
J. Amin, M. Azhar, H. Arshad, A. Zafar, and S. H. Kim, "Skin-lesion segmentation using boundary-aware segmentation network and classification based on a mixture of convolutional and transformer neural networks," Frontiers in Medicine, vol. 12, Mar. 2025, Art. no. 1524146. DOI: https://doi.org/10.3389/fmed.2025.1524146
M. Tan and Q. V. Le, "EfficientNetV2: Smaller Models and Faster Training." arXiv, 2021.
Z. Liu, H. Mao, C. Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A ConvNet for the 2020s." arXiv, 2022. DOI: https://doi.org/10.1109/CVPR52688.2022.01167
A. Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale." arXiv, 2020.
I. J. Goodfellow et al., "Generative Adversarial Networks." arXiv, 2014.
M. M. Musthafa, T. R. Manesh, V. V. Kumar, and S. Guluwadi, "Enhanced skin cancer diagnosis using optimized CNN architecture and checkpoints for automated dermatological lesion classification," BMC Medical Imaging, vol. 24, no. 1, Aug. 2024, Art. no. 201. DOI: https://doi.org/10.1186/s12880-024-01356-8
G. M. S. Himel, Md. M. Islam, Kh. A. Al-Aff, S. I. Karim, and Md. K. U. Sikder, "Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System," International Journal of Biomedical Imaging, vol. 2024, pp. 1–18, Feb. 2024. DOI: https://doi.org/10.1155/2024/3022192
M. F. Aslan, "Comparison of vision transformers and convolutional neural networks for skin disease classification," in Proceedings of the International Conference on New Trends in Applied Sciences, 2023, vol. 1, pp. 31–39. DOI: https://doi.org/10.58190/icontas.2023.51
O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation." arXiv, 2015. DOI: https://doi.org/10.1007/978-3-319-24574-4_28
S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, "CBAM: Convolutional Block Attention Module." arXiv, 2018. DOI: https://doi.org/10.1007/978-3-030-01234-2_1
License
Copyright (c) 2025 D. Manju, K. Kishore Kumar, Movva Pavani, N. V. S. Pavan Kumar, V. S. N. Murthy, Rajesh Kumar Verma, Padmini Debbarma, M. Koteswara Rao, Anand Kumar Saraswathi Rathod, Krishna Mohan Bh.

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.
