An Image Fusion Technique Based on Hadamard Transform and HVS
Abstract
The main endeavor of image fusion is to obtain an image that contains more, and better-quality, visual information than any one of the source images. In general, the source images considered for fusion may be multi-focus, multi-modality, multi-resolution, multi-temporal, panchromatic, or satellite images. This paper discusses image fusion using the Hadamard Transform (HT). In this work, the Human Visual System (HVS) is investigated for image fusion in the HT domain. The proposed fusion process consists of three main steps: (1) divide the source images into sub-images (blocks) and transform them into the HT domain, (2) multiply the transformed coefficients by the HVS-based weighting matrix of the HT and select the block with the highest resulting value, and (3) place the blocks corresponding to the selected coefficients from the source images into an initially empty fused image. The HVS weighting makes the selected coefficients more perceptually significant. The performance of the proposed method is analyzed and compared with a Discrete Wavelet Transform (DWT) based image fusion technique. Implementation in the HT domain is simpler and less time consuming than in the DWT domain.
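The block-wise HT/HVS selection rule described above can be illustrated with a minimal sketch. This is only an illustration, not the paper's implementation: it assumes 8×8 blocks, registered grayscale inputs of equal size, a placeholder HVS weighting matrix W that favors low-sequency coefficients, and an activity measure defined as the sum of HVS-weighted absolute HT coefficients per block; the paper's actual weighting matrix, block size, and selection details may differ.

```python
# Sketch of block-wise image fusion in the Hadamard transform (HT) domain
# with an HVS-based weighting of the coefficients (illustrative assumptions only).
import numpy as np
from scipy.linalg import hadamard

BLOCK = 8
H = hadamard(BLOCK) / np.sqrt(BLOCK)   # orthonormal Hadamard matrix

# Placeholder HVS weighting matrix: emphasizes low-sequency coefficients.
# The matrix used in the paper is derived from an HVS model and will differ.
u, v = np.meshgrid(np.arange(BLOCK), np.arange(BLOCK), indexing="ij")
W = 1.0 / (1.0 + u + v)

def ht2(block):
    """2-D Hadamard transform of one block."""
    return H @ block @ H.T

def fuse_ht_hvs(img_a, img_b):
    """Fuse two registered grayscale images block by block."""
    rows, cols = img_a.shape
    fused = np.zeros_like(img_a, dtype=float)
    for r in range(0, rows - BLOCK + 1, BLOCK):
        for c in range(0, cols - BLOCK + 1, BLOCK):
            ca = ht2(img_a[r:r + BLOCK, c:c + BLOCK].astype(float))
            cb = ht2(img_b[r:r + BLOCK, c:c + BLOCK].astype(float))
            # HVS-weighted activity of each block's HT coefficients
            act_a = np.sum(W * np.abs(ca))
            act_b = np.sum(W * np.abs(cb))
            # Copy the block whose weighted coefficients are more significant
            src = img_a if act_a >= act_b else img_b
            fused[r:r + BLOCK, c:c + BLOCK] = src[r:r + BLOCK, c:c + BLOCK]
    return fused
```

For multi-focus inputs, the block with the larger HVS-weighted activity is typically the better-focused one, so its pixels are copied directly into the fused result without an inverse transform.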
Keywords:
Hadamard transform, HT, human visual system, HVS, discrete wavelet transform, DWT, image fusion
License
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.