Journal of Vibration Testing and System Dynamics

C. Steve Suh (editor), Pawel Olejnik (editor), Xianguo Tuo (editor)

Pawel Olejnik (editor)

Lodz University of Technology, Poland

Email: pawel.olejnik@p.lodz.pl

C. Steve Suh (editor)

Texas A&M University, USA

Email: ssuh@tamu.edu

Xianguo Tuo (editor)

Sichuan University of Science and Engineering, China

Email: tuoxianguo@suse.edu.cn


Image Fusion Performance-gains at Different Fusion Levels

Journal of Vibration Testing and System Dynamics 5(2) (2021) 121--130 | DOI:10.5890/JVTSD.2021.06.002

Xin Zeng, Zhongqiang Luo, Xingzhong Xiong

Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Yibin, 644000, China


Abstract

Image fusion is a branch of multi-source information fusion that plays an increasingly significant role in the military field. Because real environments contain many interfering factors, such as light and dust, a target object often cannot be clearly identified from a single sensor. Fusing visible and infrared images is therefore an attractive and promising approach for object detection applications. This paper analyzes and compares pixel-level, feature-level, and decision-level image fusion, and summarizes the performance gains of image fusion at each level with examples. It concludes that pixel-level fusion preserves finer detail than feature-level fusion, and that feature-level fusion in turn preserves finer detail than decision-level fusion. Furthermore, we suggest that pixel-level and feature-level methods could be combined in future work.
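To make the distinction between fusion levels concrete, the following Python sketch illustrates pixel-level fusion in its simplest form: each output pixel is computed directly from the corresponding pixels of the co-registered source images. This example is not taken from the paper; the weighted-average rule, the function name, and the alpha parameter are illustrative assumptions, and practical pixel-level methods typically use multi-scale decompositions or saliency weighting rather than a plain average.

import numpy as np

def pixel_level_fusion(visible, infrared, alpha=0.5):
    # Illustrative sketch: both inputs are assumed to be co-registered
    # grayscale images of the same shape with values in [0, 255];
    # alpha weights the visible image against the infrared image.
    if visible.shape != infrared.shape:
        raise ValueError("images must be co-registered to the same shape")
    fused = (alpha * visible.astype(np.float64)
             + (1.0 - alpha) * infrared.astype(np.float64))
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example: a bright visible patch and a dim infrared patch fuse to their mean.
vis = np.full((4, 4), 200, dtype=np.uint8)
ir = np.full((4, 4), 50, dtype=np.uint8)
print(pixel_level_fusion(vis, ir))  # every pixel equals 125

Feature-level and decision-level methods, by contrast, operate on extracted features (such as edges or regions) or on per-source detection results, so they discard pixel detail earlier in the pipeline, which is consistent with the ordering of delicacy described above.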
